by Ashlyn Stewart
Josh Clark is a UX designer and design leader who focuses on the interaction between the physical and digital worlds. His clients have included Samsung, Time Inc., ExxonMobil, eBay, and Entertainment Weekly. He spoke to the MFAD first-year class about the designer’s role in the implementation of artificial intelligence and how artificial intelligence interacts with the world at large.
He began with the story of Juicero, a Wi-Fi connected juicer that was widely mocked for being able to squeeze juice out of a packet—something users could do with their own hands without the $400 price tag. Juicero is one story in a long line of digital products that promise to improve users’ lives but don’t; instead, they merely change the process they set out to improve. Josh followed this story with two questions: “How do we use technology in a way that is useful? And how do we use technology to amplify human potential? As designers and front-end developers, we aren’t always sure what our role is in this.”
Rather than being fearful that AI and machine learning will replace us, Josh proposes we begin thinking about machine learning as a design material. In the same way that design software, CSS, or user data can be the tool that leads a design, he proposes designers should add AI to their toolkit. This involves getting to know how AI should be used, and what the technology is good at.
He proposes five ways that machine learning can be utilized:
We are all aware of the ways technology has become pervasive in our lives, but AI is entering our day-to-day world in a similar—but sometimes less apparent—way. Things like predictive text in an email, or Google Maps identifying when you are going home, show us some of the ways computers are learning to respond to and interact with individuals. Josh acknowledges that these small things aren’t what we generally associate with AI technology.
“These seem like awfully mundane examples… [but] one thing I really want to emphasize is that it is okay for us to get comfortable with casual uses of machine learning,” Josh said. “We need to get as comfortable designing for the algorithm as we have gotten designing for the small screen. The role of design and particularly of UX is to figure out, what is the problem? And where can we have the maximum impact?… It is about identifying where the human need is, and how can machines help to solve that human need. These are the problems design teams are always asking… The new part is learning to apply this unprecedented scale.”
Computers are able to speed up the work of gathering data, quickly finding patterns, and assembling those patterns into cohesive narratives. Humans are able to utilize those patterns to create products and services that improve people’s lives. In this way, machine learning becomes a necessary asset to creation. But in order to implement these processes effectively, we must learn the unique qualities of this technology.
Josh reminded the audience of an important distinction: computers can help us do more of what we do best. “Machines can find and set up the interesting [work] for us. They can take over the time-consuming, repetitive, detail-oriented, error-prone, joyless parts of jobs… and let people do what they do best. Almost never are the things that humans do best and computers do best the same thing.”
As we begin surrendering tasks to computers, there are inevitable failures. Computers misunderstand things that seem very basic to us, but rather than rejecting these machine conclusions as useless, Josh sees them as opportunities.
“Machines study the world, sift through enormous amounts of data, and come to some very surprising conclusions… People are difficult to understand. How can we expect the machines to perfectly interpret us?” Josh points out that “machines are weird, but it is because in large part, we are weird.” Sometimes a computer’s incorrect conclusions can be really eye-opening, revealing something about a process, a human bias, or simply a creative way of seeing the world.
The acronym B.A.S.A.A.P. (Be As Smart As A Puppy), coined by Matt Jones, creates a helpful framework for how we should enter dialogue with computer “minds.” Josh points out that this technology is still very new, and in many ways has to be treated like a child that is still learning, rather than an all-knowing brain. “We need to be thinking about computers as brute-force pattern matching, not intelligence.” We are always refining their pattern-matching abilities, but it is a process.
Another way Josh sees designers improving AI interactions is through better articulation of how computers see our world. Instead of machines giving a confident (but wrong) answer, machines should be given the opportunity to say “I think I know what this is.” As strange as it is to design human-like uncertainty into a computer, more sophisticated language could better inform users about how the computer is making its assumptions. He states that “machine learning sees the world in shades of grey. These are probabilistic systems. Allowing computers to say ‘I think I know’ is better than a wrong answer.” So far, this hasn’t been communicated well to users; Siri promises to answer any question, but cannot actually do so. The creation of better language for computers can help bridge the gap between humans and machines, particularly as we introduce AI to higher-stakes products.
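The idea of letting a probabilistic system hedge its answers can be sketched in code. This is a minimal illustration, not anything from the talk: the function name, the confidence thresholds, and the exact phrasings are all assumptions chosen for the example, and a real product would tune them with user research.

```python
def hedged_answer(label, confidence):
    """Translate a classifier's confidence score (0.0 to 1.0) into
    language that reflects uncertainty, rather than always asserting
    the top answer outright. Thresholds here are illustrative."""
    if confidence >= 0.9:
        return f"This is {label}."
    elif confidence >= 0.6:
        return f"I think this is {label}."
    elif confidence >= 0.3:
        return f"This might be {label}, but I'm not sure."
    else:
        return "I don't know what this is."

# A model that is only 70% sure says so, instead of bluffing:
print(hedged_answer("a golden retriever", 0.7))
```

The design choice worth noticing is that the interface exposes the model’s shades of grey to the user instead of collapsing them into a single confident claim—exactly the gap Josh identifies in assistants like Siri.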