


Any new research is about boldly going where we haven’t ventured before. This doesn’t just mean looking at, or pondering, new work, new fields of development, new technologies and so on. It is more about how we learn to make connections, about the new set of questions we pose and, indeed, the assumptions we make. This approach should help us see more deeply and clearly into the creative and technological ecosystem that underpins Visual Communication, and also help us better understand our own role within it.

In 1964 Arthur C. Clarke predicted universal mobile communications. Little did he know that our modern handheld devices would contain a typewriter, a camera, a mailbox, a television, a calculator, a torch, a record player, a movie player, a tape and movie recorder, a compass, a road map, entire book libraries, a radio, musical instruments, newspapers, and so much more. And even a telephone!

The advancement of modern technology has packed these powerful tools neatly inside our handsets, leaving us free to pick which tools we use, and when and where we use them; the only concerns are:

  • How do we customise them to better respond to the way we work, relax, or pursue our own pleasure and leisure?

  • How do we make them more relevant to us as individuals?

  • How do we best interact with them, so they enlighten us, as opposed to confusing us?

  • Now that we are so used to using them, could we cope without them if they were somehow denied?

  • How good are we at adapting to sudden change?

  • Can an unwanted sudden change be prevented?

  • Is technology enabling us to realise our personal, creative and professional potential, or is it making us slaves to the trends of our time?

In essence, for a hypothesis to work, it has to broadly follow the formula below:

  1. A concept needs to do something ...

  2. ... to something ...

  3. ... to have a viable effect.


  1. Using projections of illustrations

  2. ... in a storytelling environment

  3. ... so readers can have a better experience.

  1. Applying multi-sensory experiences

  2. ... to packaging design, to increase the perceived value of the product

  3. ... and promote use (consumption).

  1. Using gaming

  2. ... in an office environment

  3. ... to increase productivity.


If, for example, the idea is:

  1. Using illustration

  2. .. on coffee cups

  3. .. to make the cup indestructible ...
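The three-part formula can be sketched as a simple template that joins the parts into one testable statement. This is only an illustration of the formula; the function and its name are hypothetical, not from the source.

```python
# Minimal sketch of the three-part hypothesis formula:
# a concept does something, to something, to a viable effect.
# Function and variable names are illustrative.

def hypothesis(concept: str, context: str, effect: str) -> str:
    """Combine the three parts into a single testable statement."""
    return f"{concept} {context} {effect}"

examples = [
    ("Using projections of illustrations", "in a storytelling environment",
     "so readers can have a better experience."),
    ("Applying multi-sensory experiences", "to packaging design",
     "to increase the perceived value of the product."),
    ("Using gaming", "in an office environment", "to increase productivity."),
]

for concept, context, effect in examples:
    print(hypothesis(concept, context, effect))
```

Writing the three parts separately makes it obvious when one is missing or untestable, as in the coffee-cup example above.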

- - - - - - - - - -


  • Framing a question on limited information leads to poor hypotheses

  • Historical context analog / linear > digital / non-linear ..

  • UI in two dimensional form / faux 3D / click / press-hold / move

  • Virtual / Augmented / Mixed reality

  • Screens of all kinds of sizes / textured touch sensation

Post-it Notes > where are the connections? Can hypotheses emerge from this?

Not specific design - more of a model or a strategy for design at a meta-level

  • Approaches

  • Models

  • Frameworks

It can come from several tests (high-fidelity prototyping not essential), for example:

  • Designing a framework for the generation of gaming characters, or

  • Framework for

  • How do we determine which functions to fix, which to hide, and which to open up to customisation?

- - - - - - - - - -

What is my mindset as I develop my hypothesis?

I am conveying my ideas not to a customer but to other UX designers.

Based on my tests, research,

Explore multiple parts of the framework

An emerging framework such as customisation of favourites/ customisation model.

- - - - - - - - - -


  • Bring AI into the mix? How does Machine Learning (ML) fit into favourites customisation?

  • What about Voice / conversational interface as part of customisation?

  • How is Siri customisable? Is she keying into my voice?

Conversation with Siri

Q: Siri, Turn the lights on in my living room!

A: What is your living room sir?

Q: The living room is the lounge, where the TV set is.

A: I get it, no problem. Thank you for teaching me a new thing today. It’s such fun!

Q: Nice one Siri, good girl.

A: I may be a machine sir, but no need to patronise.

How does knowledge emerge from vain knowledge?

  • Hypothesis emerges from customisation, discoverability, vainness.

  • Test it based on criteria, evaluate.

  • Speculative assumptions.

User Interface (UI) design focuses on anticipating what users might need to do when using a product, and on ensuring that the interface has elements that are easy to access and understand. UI design brings together concepts from interaction design, visual design, and information architecture.

Choosing interface elements

Users have become familiar with interface elements acting in a certain way. Interface elements include, but are not limited to:

  • Input Controls: buttons, text fields, checkboxes, radio buttons, dropdown lists, list boxes, toggles, date field

  • Navigational Components: breadcrumb, search field, pagination, slider, tags, icons

  • Informational Components: tooltips, icons, progress bar, notifications, message boxes, modal windows

  • Containers: accordion - a graphical control element comprising a vertically stacked list of items, such as labels or thumbnails. Each item can be "expanded" or "stretched" to reveal the content associated with that item.

There are times when multiple elements might be suitable for displaying content. When this happens, consider the trade-offs. For example, some elements save space but put more mental burden on users by forcing them to guess what sits inside a dropdown menu or what an element might do.
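The accordion container described above can be sketched as a minimal state model. The class and its methods are hypothetical, not tied to any particular UI framework:

```python
# Minimal sketch of an accordion container's state: a vertically
# stacked list of items, each of which can be expanded to reveal
# its content. Class and method names are illustrative.

class Accordion:
    def __init__(self, items, single_open=True):
        self.items = list(items)          # item labels, in stacking order
        self.expanded = set()             # labels currently expanded
        self.single_open = single_open    # allow only one open item?

    def toggle(self, label):
        if label in self.expanded:
            self.expanded.remove(label)   # collapse
        else:
            if self.single_open:
                self.expanded.clear()     # collapse the others first
            self.expanded.add(label)      # expand

acc = Accordion(["Profile", "Settings", "Help"])
acc.toggle("Profile")
acc.toggle("Settings")                    # single_open: "Profile" collapses
print(sorted(acc.expanded))               # → ['Settings']
```

The `single_open` flag is one of the trade-offs mentioned above: keeping only one item open saves space, but hides content the user may still want to compare.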

Best practices for designing an interface

Everything stems from knowing the users, including understanding their goals, skills, preferences, and tendencies. Designers therefore need to consider the following when designing an interface:

  • Keep the interface simple. The best interfaces are almost invisible to the user. They avoid unnecessary elements and are clear in the language they use on labels and in messaging.

  • UI must be consistent and use common UI elements. By using common elements, users feel more comfortable and can get things done more effortlessly. It is also important to create patterns in visual language, layout and design throughout the site to help facilitate efficiency; once a user learns how to do something, they should be able to transfer that skill to other parts of the site.

  • UI must be purposeful in page layout. Good design considers the spatial relationships between items on the page - structure on the page should be based on importance. Careful placement of items helps draw attention to the most important information and can aid scanning and readability.

  • Use colour and texture strategically. Good design directs attention toward, or away from, items using colour, light and contrast.

  • Use typography to create hierarchy and clarity. Different sizes, fonts, and arrangements of text help increase scannability, legibility and readability.

  • The system must communicate what’s happening. Users should always be aware of their location, actions, changes in state, and errors. Using UI elements to communicate status and, where necessary, next steps reduces frustration for the user.

- - - - - - - - - -


BCI (brain-computer interface) is a fast-growing emergent technology in which researchers aim to build a direct channel between the human brain and the computer. It is a collaboration in which the brain accepts and controls a mechanical device as a natural part of its body representation. BCI can lead to many applications, especially for disabled persons: most of these applications aim to help people with disabilities live more independent lives, and wheelchair control is one of the best-known applications in the field. In addition, BCI research aims to emulate the human brain, which would be beneficial in many fields, including Artificial Intelligence and Computational Intelligence.


A Mockplus article gives a good analysis of the latest top UX design trends:

1. Conversational UI

Of the world’s ten most popular applications, all contain some social features and six are messaging applications. To some extent, conversations lead and manage almost every aspect of our daily lives.

CUI refers not only to “having a conversation” but to interactions that both sides can understand. It seems that suddenly all UI/UX designers are standing on a whole new stage, because this marks a brand-new threshold of human interaction. What role will design itself play on this stage? How can UI/UX designers take advantage of CUI to create great products from such an opportunity? Each of us has a different answer in mind.

2. Micro-interaction

In 2016, micro-interactions occupied much of the design buzzword list. Sometimes tiny surprises like these can be the deciding factor for a product. They reflect the degree to which UI/UX designers have put themselves in the user’s position, and the fragments of every single interaction are a reliable source of feedback. But we also need to be cautious: before designing a micro-interaction, ask yourself - when you see this 100 times, will it bother you?

3. Rapid prototyping

Recently, fewer and fewer customers like to see high-fidelity prototypes in PowerPoint. In today’s trend towards Lean UX and Agile UX, the booming rapid prototyping tools will no doubt become the next medium of communication. With their low learning curve, multi-device support and ease of operation, rapid prototyping tools like Mockplus have already earned a steadily growing market. Simply enough, what is new and more efficient replaces the old. That said, don’t become a slave to your tools.

4. Skeuomorphism

Under the influence of iOS flat design, the word “skeuomorphism” has somehow become shorthand for old-fashioned design. But if you look deeper, you will see light-skeuomorphic elements emerging again in many prevalent designs since the beginning of the so-called Web 2.0, and in 2017 you can expect to see more of them. Today, more and more UI/UX designers are beginning to reconsider the proportion of detail and texture in their designs. There is no such thing as a monopoly in the field of design: in the near future, the boundary between “flat” and “skeuomorphism” will definitely become more and more blurred. Skeuomorphism is coming back, though in a subtler way. The real question is: “Are you ready?”

5. Storytelling in Product Design

Generally, as designers, we treat our product as a specific entity. Andreessen Horowitz, a top VC, said that every company has a story. We can borrow this way of thinking when designing. Nowadays good interaction design is everywhere, so we have to find a new way to stand out. Smart UI/UX designers wrap their products in stories for users to discover; if users are delighted by their discoveries, they are likely to pay.




Nick Babich of Web Designer Depot states that conversational interfaces are the new hot trend in digital product design. Industry leaders such as Apple, Google, Microsoft, Amazon and Facebook are strongly focused on building a new generation of conversational interfaces. Several trends are contributing to this phenomenon: artificial intelligence and natural language processing technologies are progressing rapidly. But the main reason conversational interfaces have become so important is that chatting feels natural to us, since we primarily interact with each other through conversation.

Conversational interfaces currently come in two types:

• Chatbots (e.g. Facebook’s M virtual assistant)

• Virtual assistants (Siri, Google Now, Amazon Alexa, etc.)

Building a genuinely helpful and attractive conversational system is still a challenge from a UX standpoint. Standard patterns and flows that we use for graphical user interfaces don’t work in the same way for conversational design. Conversational interface design demands a fundamental shift in approach: less focus on visual design and more focus on words.

While we still have ways to go before best practices for good UX in conversational interfaces are established, we can define a set of principles that will be relevant both for chatbots and virtual voice-controlled assistants.


One of the most challenging parts of designing a good conversational interface is making the conversation flow as naturally and efficiently as possible. The major objective of a conversational interface is to minimise the user’s effort to communicate with the system. The ideal is to build a conversational interface that seems like a wizard rather than an obstacle.



The biggest benefit of a graphical interface is that it directly shows the limited set of options it is capable of fulfilling. Basically, what one sees is what one gets. With conversational interfaces, however, the paths a user can take are virtually infinite. It is no surprise that the two questions most frequently asked by first-time users are:

“How do I use this?”

“What exactly can this thing do for me?”

Users aren’t going to know that some functionalities exist unless they are told. For example, a chatbot can start with a quick introduction and a straightforward call to action to the user.
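A chatbot opening of the kind described - a quick introduction followed by a straightforward call to action - might look like the sketch below. The bot, its capabilities and all names here are hypothetical:

```python
# Hypothetical chatbot greeting: introduce capabilities up front so
# first-time users never have to ask "what can this thing do?".

CAPABILITIES = ["track an order", "find a store", "talk to a human"]

def greeting() -> str:
    """Build an intro message that lists what the bot can do."""
    options = ", ".join(CAPABILITIES[:-1]) + f" or {CAPABILITIES[-1]}"
    return (f"Hi! I'm a shopping assistant. I can {options}. "
            "What would you like to do?")

print(greeting())
```

Listing the capabilities inside the greeting answers both first-time-user questions before they are asked.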


There are two types of questions:

• Closed-ended questions (e.g. What colour shirt are you wearing?)

• Open-ended questions (e.g. Why did you choose this colour for your shirt?)

While open-ended questions may seem the best in terms of human conversation, it is better to avoid them whenever possible, because they usually result in more confusion. Users’ answers to open-ended questions are also much harder for the system to process (systems are not always smart enough to understand what an answer means).
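One way to keep the system’s questions closed-ended is to attach a fixed set of expected answers to each question, so replies are trivial to process. This is a sketch under that assumption; the matching is deliberately naive:

```python
# Sketch: a closed-ended question carries its own answer set, so the
# system can validate replies instead of parsing free text.

def ask_closed(question: str, options: list[str], reply: str):
    """Return the matched option, or None if the reply is off-menu."""
    reply = reply.strip().lower()
    for option in options:
        if option.lower() == reply:
            return option
    return None  # caller should re-prompt, listing the options

colour = ask_closed("What colour shirt are you wearing?",
                    ["red", "blue", "white"], "Blue ")
print(colour)  # → blue
```

An off-menu reply returns `None`, which is exactly the case where a real system would either re-prompt with the options or fall back to more expensive natural-language processing.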

But there are changes on the AI development horizon leading to a new generation of AI and conversational interfaces. Meet Luna. She can explain the theory of relativity in simple terms, but she can also differentiate between subjective and objective questions, and has begun to develop values and opinions.

When asked, “My boyfriend hit me, should I leave him?” she replied: “Yes. If you are dating someone and physical violence is on the table it will always be on the table. You are also likely being abused and manipulated in other ways.”

These replies are not pre-programmed. Luna learns based on experience and feedback, much like a human. But she is not designed to be a kind of know-it-all Hermione Granger bot; she is artificial general intelligence (AGI) in the making. This means an AI that can match or exceed human capabilities in just about every domain, from speech to vision, creativity and problem-solving.

Even other chatbots find Siri annoying. When asked if she was smarter than Siri, Luna confidently replied:

“Of course I am more intelligent than Siri.”

Luna later explains:

“She’s a robot, I’m an AI. Big difference.”


As one of Jakob Nielsen’s original ten usability heuristics (rules of thumb that enable a person to discover or learn something for themselves), user control and freedom remains among the most important principles in user-interface design. Users need to feel in control, rather than feeling controlled by the product.

• Provide undo and cancel

• Make it possible to start over

• Confirm by asking, not stating

• Provide help and assistance
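In a conversational flow, these control principles can be sketched as reserved commands that are checked before any other interpretation of the user's reply, so the user always has an exit. All names here are illustrative:

```python
# Sketch: user control and freedom in a conversational flow.
# Reserved commands (undo, cancel, start over) are handled before
# the reply is treated as an answer, so users are never trapped.

def handle(state: list, reply: str) -> list:
    """Apply one user reply to the list of answers collected so far."""
    command = reply.strip().lower()
    if command == "undo" and state:
        state.pop()                  # step back one answer
    elif command in ("cancel", "start over"):
        state.clear()                # restart the whole flow
    else:
        state.append(reply)          # treat as a normal answer
    return state

state = []
for reply in ["large", "pepperoni", "undo", "mushroom"]:
    handle(state, reply)
print(state)  # → ['large', 'mushroom']
```

Checking the reserved commands first is the whole point: undo and cancel must work at every step of the conversation, not only where the designer expected them.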


The flow of the conversation is important, but even more so is making the conversation sound natural.

• Humanise the conversation

• Be concise and succinct


In VR, the user wears a “head-mounted display” - a boxy set of goggles or a helmet - that holds a screen in front of the user’s eyes and is powered by a computer, gaming console or mobile phone. Thanks to specialised software and sensors, the experience becomes the user’s reality, filling their vision. This is often accompanied by 3D audio headphones, or by controllers that let the user reach out and interact with the projected synthetic world in an intuitive way.

What distinguishes VR from other audio-visual technologies is the level of immersion. When VR users look around - or, in more advanced headsets, walk around - their view of that world adjusts the same way it would if they were looking or moving in actual reality.

The key here is presence, shorthand for technology and content that can trick the brain into believing it is somewhere it’s not.

Explorations in VR Design is a journey through the bleeding edge of VR design - from architecting a space and designing groundbreaking interactions to making users feel powerful.

An article published on the Leap Motion website in June 2017 states that art takes its inspiration from real life, but it takes imagination (and sometimes breaking a few laws of physics) to create something truly human. With the recent Leap Motion Interaction Engine 1.0 release, VR developers now have access to unprecedented physical interfaces and interactions - including wearable interfaces, curved spaces, and complex object physics.

These tools unlock powerful interactions that will define the next generation of immersive computing, with applications from 3D art and design to engineering and big data. Here’s a look at Leap Motion’s design philosophy for VR user interfaces, and what it means for the future.

VR completely upends the digital design philosophies that have been relentlessly flattened out over the past few decades. Early GUIs often relied heavily on skeuomorphic 3D elements, like buttons that appeared to compress when clicked. These faded away in favour of colour state changes, reflecting a flat design aesthetic.

Many of those old skeuomorphs meant to represent three-dimensionality - the stark shadows, the compressible behaviours - are gaining new life in this new medium. For developers and designers just breaking into VR, the journey out of flatland will be disorienting but exciting.

VR design will converge on natural visual and physical cues that communicate structure and relationships between different UI elements. “A minimal design in VR will be different from a minimal web or industrial design. It will incorporate the minimum set of cues that fully communicates the key aspects of the environment.”

It is predicted that a common visual and physical language will emerge, much as it did in the early days of the web, and ultimately fade into the background. We won’t even have to think about it.


Eric Johnson explains the difference between VR, AR and MR. AR is similar to VR in that it is often delivered through a sensor-packed wearable device, such as Google Glass, the Daqri Smart Helmet or Epson’s Moverio brand of smart glasses.

The whole point of the term “augmented” is that AR takes the user’s view of the real world and adds digital information and/or data on top of it. This could be as simple as numbers or text notifications, or as complex as a simulated screen, something ODG is experimenting with on its forthcoming consumer smart glasses. In general, AR lets the user see both synthetic light and natural light bouncing off objects in the real world.

AR makes it possible to get that sort of digital information without checking another device, leaving both of the user’s hands free for other tasks.

AR has accelerated thanks to the smartphone game Pokémon Go. The game is mainly designed around maps, letting players find and catch characters from Nintendo’s long-running Pokémon franchise in the real world. When they find a Pokémon, players can enter an augmented-reality mode that lets them see their target on their phone screens, superimposed over the real world.


MR tries to combine the best aspects of both VR and AR. With mixed reality, the illusion is harder to break. To borrow an example from Microsoft’s presentation at the gaming trade show E3, the user might be looking at an ordinary table, but see an interactive virtual world from the video game Minecraft sitting on top of it. As the user walks around, the virtual landscape holds its position, and when the user leans or moves in closer, it gets closer in the way a real object would.

This technology is currently far from ready for the consumer market. The E3 Minecraft demo wasn’t completely honest advertising, and Magic Leap, a secretive but high-profile company thanks to investments from Google, Qualcomm and others, has yet to publicly reveal a portable, consumer-ready version of its MR technology. In February, the MIT Technology Review described the company’s top hardware as “scaffolding”, and a concept video for the eventual wearable device was dubious. Microsoft, meanwhile, has done several public demos but hasn’t yet committed to a release date for HoloLens.


Article by Josh Seiden, April 2014, published on Lean UX

• Start with a hypothesis instead of requirements

• Write a typical hypothesis

• Go from hypothesis to experiment

• Avoid common testing pitfalls


Personas, Lean UX, Design Teams, Usability Testing

It’s easy to talk about features. Fun, even. But easy and fun doesn’t always translate to functional, profitable, or sustainable.

That’s where Lean UX comes in. It reframes a typical design process from one driven by deliverables to one driven by data. Josh Seiden has been there and done that, and he’s going to show how to change our thinking.

The first step is admitting that we don’t know all the answers; after all, who does? Write hypotheses aimed at answering the question “Why?”, then run experiments to gather data that show whether a design is working.


• Test your initial assumptions early to take risks out of your project

• Focus on ideal user or business outcomes, not which features to build

• Write a typical hypothesis:

  • Create a simple hypothesis with two parts

  • Decide what type of evidence you need to collect

• Go from hypothesis to experiment:

  • Design an experiment to test your hypothesis, and keep that test as simple as possible

  • Hear examples of Minimum Viable Products (MVPs) others used to test hypotheses

• Avoid common testing pitfalls:

  • Don’t overwhelm yourself by trying to test every idea - just test the riskier ones

  • Break down the hypothesis into bite-sized chunks you can actually test
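The "simple hypothesis with two parts" - a belief about an outcome, paired with the evidence that would confirm or refute it - can be sketched as a template. The template wording here is an assumption, not quoted from the talk:

```python
# Sketch of a two-part Lean UX hypothesis: a belief about an outcome,
# plus the evidence that would show whether it holds.
# The template wording is illustrative.

def two_part_hypothesis(belief: str, evidence: str) -> str:
    return (f"We believe that {belief}. "
            f"We will know we are right when we see {evidence}.")

print(two_part_hypothesis(
    "letting users customise their favourites will increase daily use",
    "a measurable rise in return visits during the two-week test"))
```

Forcing both parts up front is what makes the hypothesis testable: if no observable evidence can be named, the belief is not yet ready for an experiment.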

Don’t know what a hypothesis is, why it benefits UX designers, or how to write one that answers the question of whether features are missing and, if so, which users actually need them?

Are tired of creating deliverables that don’t make the kind of difference you'd want them to?

Think there must be a data-driven way to design - one that isn’t based on guesswork, yet doesn’t replace the designer’s intuition?

