The Evolving Role of Humans with AI
There are still clear limitations and biases in AI systems that necessitate human minds serving as checks and balances. AI-based machines may be fast, but they lack human emotional and cultural context. Jonathan Lupo, VP of Experience Design at EPAM Systems, Inc., takes a closer look at the human-AI relationship and predictions for the future.
Stories of machines gaining sentience and lashing out against their human masters dominate the realm of science fiction. While outlandish, at a micro level these narratives ask what the appropriate relationship between humans and artificial intelligence (AI) should be. Today, AI is becoming an intrinsic part of our day-to-day activities, from social media and email communications to music recommendations and web searches. Likewise, AI and intelligent systems continue to replace human employees in manufacturing, service delivery, recruitment and the financial industry.
Although many jobs are being displaced by AI, these systems still have clear limitations and biases – particularly in automated testing – that necessitate human minds serving as checks and balances. AI-based machines may be fast and accurate, but they lack human emotional and cultural context.
More AI = More Human Checks
As AI is introduced into more systems, the need for human checks on the AI grows – chiefly because AI is not perfect. First, AI frequently behaves in ways that don't work for the target user. Second, AI systems are biased toward their training datasets, which can inadvertently create dangerous algorithmic bias that people must check. From an inclusion perspective, humanity has a long way to go before it can rely on AI-driven models, given that people don't fully understand the impact machine learning can have on diverse minority populations.
Beyond social issues, algorithmic bias can also have unintended economic consequences. For instance, when an AI business was testing a voice recognition algorithm for a German car repair company, the testing conclusively demonstrated that speakers with a Bavarian accent could not book an appointment: the AI did not recognize the South German dialect at all. Had the testing not uncovered this limitation, the car repair company would have unintentionally shut out customers from an entire region of Germany.
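One way human testers surface this kind of bias is to break a model's accuracy down by user subgroup rather than looking only at an aggregate score. The sketch below is purely illustrative – the function, group labels, and toy results are hypothetical, not from the case described above:

```python
# Hypothetical per-group evaluation: an aggregate accuracy number can hide
# the fact that a model fails completely for one subgroup (e.g., a dialect).
from collections import defaultdict

def accuracy_by_group(samples):
    """samples: list of (group, predicted, expected) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, expected in samples:
        totals[group] += 1
        if predicted == expected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy results for an imaginary speech recognizer.
results = [
    ("standard_german", "book_appointment", "book_appointment"),
    ("standard_german", "book_appointment", "book_appointment"),
    ("bavarian", "unknown", "book_appointment"),
    ("bavarian", "unknown", "book_appointment"),
]
print(accuracy_by_group(results))
# {'standard_german': 1.0, 'bavarian': 0.0} – 50% overall, 0% for one group
```

Here the overall accuracy of 50% would look like a tuning problem; only the per-group breakdown reveals that one population is excluded entirely.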
Machines Know the “What” But Not the “Why”
To date, machines may be able to tell humans the "what" – whether a product failed or succeeded – but they don't understand "why" it failed or succeeded, especially from an emotional perspective, which limits their usefulness to a business. Truly effective AI design and testing require a deep understanding of human needs and of users' functional, cognitive and emotional problems. Yet there are limits to what an analytically driven system can measure about human needs. Businesses must rely on humans and their empathy to understand, in a more qualitative way, how a product is being used and, more importantly, why it isn't.
Many types of testing can reveal why a consumer is or isn't using a product or service. Two key attributes companies can test to accurately measure the customer experience are usability and resonance. Usability relates to how well the user's experience with the product or service conforms to their mental model: essentially, a usability test seeks to discover whether the intended audience can easily use the product or service in question. Resonance attempts to determine how emotionally engaged the target user base is with the product or service. Usability and resonance tests are pivotal when a business builds a new, untried product or service.
Unfortunately, the pace of development keeps accelerating, and developers have found it difficult to run these tests. Today, many organizations see a fast time-to-market strategy as highly advantageous, causing them to release software rapidly. While a fast time-to-market approach may be beneficial, accelerated development speeds make it almost impossible for developers to complete more intensive and involved tests like usability and resonance studies. Deprived of the ability to perform that due diligence, teams ship products and services that are far less robust and refined with respect to the target user's needs.
The Benefits and Limitations of Automated Testing
Developers have turned to automated testing and analytics to keep up with this accelerated pace. With AI, businesses don't have to carve out moments in the product life cycle for testing and validation; instead, these checks are continuous and occur in real time. Despite the time-saving benefits of automated systems, they can only measure how closely the software or product conforms to the tests. Automated tests cannot uncover how well the product or service accomplishes the task the user wants, or gauge how engaging the experience was – those things remain within the domain of humans.
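The distinction above can be made concrete with a minimal sketch. The booking function and its test below are hypothetical, written with plain assertions; the point is that every assertion checks conformance to a specification, and none can say anything about how the experience feels to a user:

```python
# A minimal automated check: it verifies that a function conforms to its
# spec, but it says nothing about whether the flow is usable or engaging.

def schedule_appointment(slots, requested):
    """Book the requested slot if it's free; return the updated set."""
    if requested in slots:
        raise ValueError("slot already taken")
    return slots | {requested}

def test_schedule_appointment():
    booked = schedule_appointment({"09:00"}, "10:00")
    assert "10:00" in booked           # conformance: the slot was booked
    try:
        schedule_appointment({"09:00"}, "09:00")
        assert False, "expected a conflict error"
    except ValueError:
        pass                           # conformance: conflicts are rejected
    # No assertion here can measure whether users find booking pleasant.

test_schedule_appointment()
print("all conformance checks passed")
```

A continuous pipeline can run checks like this on every commit, but discovering that users abandon the flow out of frustration still requires observing real people.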
Moreover, companies developing a new product or incrementally enhancing an existing one need to determine the earliest place to get user-oriented feedback into the product life cycle. The earlier developers can do that, the better they can continuously improve these usable and resonant interfaces.
Partners for Life
Currently, AI is far more limited than people might expect. It is often challenging to teach an AI to do something that humans don't know how to do themselves. Humans, therefore, will always have a role to play in training and assisting AI and automated systems. These AI-driven models depend on context and a depth of input that only human understanding can provide – not only for product development, but also for societal and cultural situations.
How can human-AI collaboration be improved in the years to come? Tell us what you think on LinkedIn, Twitter, or Facebook. We’d love to get your take on this!