A puzzling observation: the progress epitomized by Moore’s law of integrated circuits never resulted in an equivalent evolution of user interfaces. Over the years, interaction with computers has evolved disappointingly little. The mouse was invented in the 1960s, the same decade as hypertext. Push buttons and the QWERTY layout existed in the 19th century, and the display-plus-keyboard setup was used in the Apollo program. The menu, one of the most prevalent ways to present options in a user interface, was used by merchants in Imperial China to present food options to busy customers. We still use the same menu technique in apps, operating systems, and consumer electronics.
Another puzzling observation: over the three decades of the modern graphical user interface, we have not become very proficient at designing usable interfaces. A recent example: on January 13, 2018, a false alert of an inbound ballistic missile was issued in the state of Hawaii, asking people to seek shelter. Panic ensued. Scrutiny of the event exposed a poorly designed menu: two similar options with one critical difference, drill versus no drill, were shown close to each other.
Is the user interface becoming a bottleneck to the development of information technology?
While algorithms can be studied using theorems and circuits can be analyzed using circuit theory, such as Kirchhoff’s current and voltage laws, there is no equivalent formal foundation for user interface design. Consequently, the study and design of user interfaces veered in a different direction, becoming more of a craft or practice, driven by heuristics, personal experience, empathy, mimicry, and, most of all, relentless trial and error. Design thinking has little room for engineering science. It offers no proven way to describe or explain essential aspects of interaction, derive good solutions, or predict key properties. The contrast to the way bridges and engines are designed is stark.
Computational interaction is the idea of using algorithms to analyze, generate, evaluate, and make user interfaces “alive.” It involves a formal representation of the way in which a human uses or experiences technology, and some way of reasoning about how this interaction should be organized. It is the act and process of obtaining a satisfactory (or optimal) solution against some computationally implemented objective or evaluative function.
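To make this concrete, below is a minimal Python sketch of the loop at the heart of the idea: enumerate candidate designs, score each with a computational model of the user, and keep the best. The task (placing a button on a one-dimensional strip), the Fitts’-law constants, and the cursor start points are all hypothetical, chosen only for illustration.

```python
import math

# Illustrative Fitts'-law constants and button width; not empirically calibrated.
A, B = 0.1, 0.15      # intercept and slope (seconds)
WIDTH = 20.0          # button width (mm)

def movement_time(distance):
    """Fitts' law: MT = a + b * log2(distance / width + 1)."""
    return A + B * math.log2(distance / WIDTH + 1)

def objective(position, start_points):
    """Predicted mean pointing time from typical cursor start locations."""
    return sum(movement_time(abs(position - s)) for s in start_points) / len(start_points)

start_points = [0.0, 50.0, 200.0]   # hypothetical frequent cursor locations (mm)
candidates = range(0, 201, 10)      # candidate button positions (mm)

best = min(candidates, key=lambda p: objective(p, start_points))
print(f"best position: {best} mm ({objective(best, start_points):.3f} s predicted)")
```

Everything beyond the toy model is interchangeable: swap in a richer model of the user and a larger design space, and the same pattern of search against an objective still applies.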
The idea is not new. August Dvorak, the pioneer behind the Dvorak Simplified Keyboard, derived its layout from first principles, in his case experimental findings pertaining to the efficiency and ergonomics of typewriting. The history of research in human-computer interaction reveals repeated attempts at this idea, at times using cognitive models that would predict user performance, or optimization to derive optimal design combinations. If these approaches were unsuccessful, why bother?
One rationale is that we need more efficient solutions, as the complexity of user interface technology is rapidly increasing. A regular webpage may involve several technologies just to implement, for example, a grid of options shown to a user. Another reason is opportunity: we can now benefit not only from algorithmic advances in, for instance, machine learning, but also from associated advances in software (e.g., programming libraries), datasets, networking (e.g., cloud computing), and hardware (e.g., GPUs). In addition, advances in the cognitive, economic, and behavioral sciences allow us to express more elegantly the rules that govern human performance and behavior, essentially equipping the computer with an ability to predict human responses to designs.
Progress has been rapid. For example, just eight years ago, solutions in user interface optimization were limited to keyboard layouts and simple widget layouts. We can now optimally present visualizations such as scatterplots to the human perceptual system, allow users to enter text using computer vision sensors and hand gestures, and calculate how web layouts should be structured and presented. We can optimize menus for different goals, such as easy access or learnability, as sketched below. User interfaces can be tailored, for example, to groups with special needs, such as dyslexics or people with motor tremor. The structure and content of user interfaces can be learned from observations, potentially on a massive scale.
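As a toy illustration of goal-driven menu optimization, the sketch below exhaustively searches the orderings of a four-item menu. Each ordering is scored against a made-up serial-scan model of access time plus a crude learnability proxy (displacement from alphabetical order); the click frequencies, scan cost, and trade-off weight are assumptions for illustration, not empirical values.

```python
from itertools import permutations

SCAN = 0.25   # seconds per menu slot scanned (illustrative)
freq = {"Save": 0.5, "Open": 0.3, "Print": 0.15, "Quit": 0.05}   # hypothetical click frequencies

def access_time(order):
    """Expected selection time under a serial-scan model: slot i costs (i + 1) * SCAN."""
    return sum(freq[item] * (i + 1) * SCAN for i, item in enumerate(order))

def learnability_penalty(order):
    """Crude learnability proxy: total displacement from alphabetical order."""
    alpha_rank = {item: i for i, item in enumerate(sorted(freq))}
    return sum(abs(i - alpha_rank[item]) for i, item in enumerate(order))

def cost(order, w=0.1):
    # w trades easy access against learnability; varying it yields different menus
    return access_time(order) + w * learnability_penalty(order)

best = min(permutations(freq), key=cost)
print("optimized menu:", list(best))
```

Varying the weight w reproduces the different goals mentioned above: w = 0 orders the menu purely by frequency of use, while a large w returns the alphabetical, easier-to-learn ordering.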
Formal, probabilistic, economic, and optimization methods can compute optimal trade-offs among choices, provide guarantees, identify confidence intervals, and even derive proofs for user interface properties. In the Hawaii incident, had the user interface undergone testing with formal verification methods, the menu would not have made it into production. Probabilistic methods can be used to decode what a user intends when inputs are noisy or uncertain.
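The probabilistic-decoding point can be sketched in a few lines. Assuming a hypothetical one-row keyboard, Bayes’ rule combines a Gaussian likelihood of the observed touch point with a prior over keys (which, in a real system, would come from a language model); the key positions, noise level, and priors below are illustrative.

```python
import math

key_centers = {"a": 0.0, "s": 10.0, "d": 20.0}   # key centers along the row (mm)
prior = {"a": 0.6, "s": 0.3, "d": 0.1}           # hypothetical prior, e.g. from a language model
SIGMA = 6.0                                      # assumed touch-noise standard deviation (mm)

def likelihood(touch_x, key):
    """Gaussian likelihood of the touch location given the intended key."""
    d = touch_x - key_centers[key]
    return math.exp(-d * d / (2 * SIGMA * SIGMA))

def posterior(touch_x):
    """P(key | touch) is proportional to P(touch | key) * P(key), normalized over keys."""
    scores = {k: likelihood(touch_x, k) * prior[k] for k in key_centers}
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

# A touch landing closer to "d" is still decoded as "s":
# the prior outweighs the small geometric difference.
print(posterior(16.0))
```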
Besides practical uses, computational modeling may drive new discoveries about the very nature of interaction. Research on computational rationality is an example. A user’s behavior is described as a decision problem under bounds. The system estimates what a rational user would do with a user interface given certain goals and abilities, such as memory, attention, or motor skills. Models can then predict the consequences of subtle changes to a task or interface design. For example, how does a user’s visual search strategy change if the ordering of a menu is changed, or if different colors are used in a graphical user interface?
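A heavily simplified sketch of this kind of analysis, assuming just two candidate strategies for finding an item in an eight-item menu (serial scanning versus jumping straight to a remembered slot) and illustrative time costs: the rational strategy is whichever minimizes expected selection time, and it flips once the menu’s ordering is stable enough to be remembered.

```python
SCAN = 0.3    # seconds to read one menu item (illustrative)
JUMP = 0.4    # seconds for a direct eye movement plus verification (illustrative)
N_ITEMS = 8

def serial_scan_time():
    # Target equally likely in any slot: expected slots scanned = (n + 1) / 2.
    return SCAN * (N_ITEMS + 1) / 2

def direct_jump_time(p_recall):
    # The jump succeeds with probability p_recall; otherwise fall back to scanning.
    return p_recall * JUMP + (1 - p_recall) * (JUMP + serial_scan_time())

def rational_strategy(p_recall):
    times = {"scan": serial_scan_time(), "jump": direct_jump_time(p_recall)}
    return min(times, key=times.get), times

# A frequently reshuffled menu keeps recall low; a stable ordering lets it grow,
# flipping the strategy a bounded-rational user should adopt.
for p_recall in (0.2, 0.9):
    print(p_recall, rational_strategy(p_recall))
```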
Perhaps the most daring proposition is that, beyond improvements to existing interfaces, innovation in this space, which the literature treats as a thoroughly human and creative activity, can be abstracted, decomposed, and solved in code and in silico.
Code could become a substrate and a nexus for scientists and designers to work together to improve the way we use computers.