No matter how sophisticated, artificial intelligence systems still need human oversight
Artificial intelligence and machine learning models can work spectacularly — until they don’t. Then they tend to fail spectacularly. That’s the lesson drawn from the COVID-19 crisis, as reported in MIT Technology Review. Sudden, dramatic shifts in consumer and B2B buying behavior are, as author Will Douglas Heaven put it, “causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should.”
Machine-learning models “are designed to respond to changes,” he continues. “But most are also fragile; they perform badly when input data differs too much from the data they were trained on. It is a mistake to assume you can set up an AI system and walk away.”
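The fragility Heaven describes, a model quietly degrading once live inputs no longer resemble its training data, is exactly why human oversight matters: someone has to notice that “normal has changed.” One common safeguard is a simple drift check that compares incoming data against the training distribution and routes anomalies to a person. A minimal sketch in Python (the metric, sample data, and threshold are illustrative assumptions, not from the article):

```python
import statistics

def drift_score(train_sample, live_sample):
    """Crude drift signal: shift in the live mean, scaled by training spread.

    A score near 0 means live data resembles the training data; larger
    scores suggest the model's assumptions may no longer hold.
    """
    train_mean = statistics.mean(train_sample)
    train_std = statistics.stdev(train_sample)
    live_mean = statistics.mean(live_sample)
    return abs(live_mean - train_mean) / train_std if train_std else float("inf")

# "Normal" behavior the model was trained on, e.g. daily order counts.
train = [100, 104, 98, 102, 101, 99, 103]

# A pandemic-style shock: the same metric after buying behavior changed.
live = [40, 35, 220, 210, 38, 45, 230]

DRIFT_THRESHOLD = 3.0  # illustrative; tuned per metric in practice

score = drift_score(train, live)
if score > DRIFT_THRESHOLD:
    print(f"Drift detected (score={score:.1f}): route to human review")
else:
    print(f"Inputs look normal (score={score:.1f})")
```

Production systems use richer statistics (population stability index, Kolmogorov-Smirnov tests), but the principle is the same: the check does not fix the model, it tells a human it is time to look.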
It’s evident, then, that we may be a long way off from completely self-managing systems, if they ever arrive. If the current situation tells us anything, it’s that human insight will always be an essential part of the AI and machine learning equation.
In recent months, I have been exploring the potential range of AI and machine learning with industry leaders, along with the role humans need to play. Much of what I heard foreshadowed the COVID upheaval. “There is always the risk that the AI system makes bad assumptions, reducing performance or availability of the data,” says Jason Phippen, head of global product and solutions marketing at SUSE. “It is also possible that data derived from bad correlations and learning are used to make incorrect business or treatment decisions. An even worse case would clearly be where the system is allowed to run free and it moves data to cold or cool storage that causes loss of life or limb.”
AI and machine learning simply can’t be dropped into an existing infrastructure or set of processes. Chris Bergh, CEO of DataKitchen, cautions that existing systems need to be adapted and adjusted. “In traditional architecture, an AI and machine learning system consumes data environments to fulfill the data needs,” he says. “We need a slight change to that architecture by letting AI manage the data environment. This transition must be done smoothly in order to prevent catastrophic failures in the existing systems as well as to implement robust systems.”
AI and machine learning systems “being developed to manage data environments must be considered as mission-critical systems, and the development must be carried out very carefully,” Bergh continues. “Since data is the driving force of present-day business decisions, data environments will be the heart of the business. Therefore, even a slight failure in data management will incur a significant cost to the business by loss of operational time, other resources and user trust.”
Bergh also points to the “knowledge gaps of data professionals and AI and machine learning experts in the areas of AI and machine learning and data management, respectively.”
The bottom line is that skilled humans will always be key to managing the flow and assuring the quality and timeliness of data being fed into AI and machine learning systems. The mechanics of data management will be autonomous, but the context of the data needs human involvement. “We can look at examples like self-driving cars and data center energy optimization using DeepMind at Google and be fairly confident that there will eventually be some parallel opportunities in database management,” says Erik Brown, a senior director in the technology practice of West Monroe Partners, a business/technology advisory firm. “However, fully autonomous databases are likely a stretch in the near future; human involvement should become more strategic and focused in areas where humans are best equipped to spend their time.”
Fully autonomous data environments “will likely take many years to achieve,” agrees Jeremy Wortz, a senior architect in West Monroe’s technology practice. “Machine learning is far from solving complex wide problems. However, an approach that develops narrow and deep use cases will make a difference over time and will start the journey of a self-managing system. Most organizations can take this approach but will need to ensure they have a way to enumerate the narrow use cases, with the right tech and talent to realize these use cases.”
The more organizations depend on AI, the more humans will need to step up and oversee the data that is moving into these systems, as well as the insights that are being produced. Eighty percent or more of the effort in AI and machine learning “is often data sourcing, translation, validation and preparation for complex models,” says Brown. “As these models are informing more critical business use cases — fraud detection, patient lifecycle management — there will continue to be more demands on the stewards of that data.”
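The data stewardship Brown describes, with sourcing, translation, and validation dominating the effort, often takes the concrete form of validation gates that quarantine bad records for human review before they reach a model. A minimal sketch in Python (the schema, field names, and fraud-detection framing are illustrative assumptions):

```python
def validate_record(record, schema):
    """Return a list of problems found in one input record.

    schema maps field name -> (expected type, required?). Records that
    fail validation should be quarantined for a data steward to review
    rather than silently fed into a model.
    """
    problems = []
    for field, (expected_type, required) in schema.items():
        value = record.get(field)
        if value is None:
            if required:
                problems.append(f"missing required field: {field}")
        elif not isinstance(value, expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(value).__name__}")
    return problems

# Hypothetical schema for a fraud-detection feed.
SCHEMA = {
    "transaction_id": (str, True),
    "amount": (float, True),
    "merchant": (str, False),
}

good = {"transaction_id": "t-1", "amount": 19.99, "merchant": "acme"}
bad = {"transaction_id": "t-2", "amount": "19.99"}  # amount arrived as a string

print(validate_record(good, SCHEMA))  # no problems
print(validate_record(bad, SCHEMA))   # flags the mistyped amount
```

The code is trivial; the organizational point is not. Deciding what “valid” means for each field, and what to do with the quarantine queue, is precisely the human judgment the stewards of the data supply.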
Few data environments outside of the Googles and Amazons of the world are truly ready, Brown says. “This is a huge opportunity for growth in most industries. The data is there, but collaborative, cross-functional organizational structures and flexible data pipelines aren’t ready to harness it effectively.”
One does not have to be a degreed data scientist to manage AI systems; what is needed is an interest in learning and leveraging new techniques. “AI-powered technology is fueling the citizen data scientist trend, which is a game-changer,” says Alan Porter, director of product marketing at Nuxeo. “In the past, these roles have required deep technical knowledge and coding skills. But with advances in technology, many of the tools and systems do the heavy technical lifting for you. It’s not as critical for people filling these positions to have technical knowledge; instead, organizations are looking for people who are more analytical, with specific business expertise.”
While people with technical and coding skills will still play a critical role within organizations, Porter continues, “a big piece of the puzzle is now having analysts with specific business knowledge so they can interpret the information being gathered and understand how it fits into the big picture. Analysts also have to be good at communicating their findings to stakeholders outside the analytics team in order to effect change.”
In his MIT piece, Heaven concludes that “with everything connected, the impact of a pandemic has been felt far and wide, touching mechanisms that in more typical times remain hidden. If we are looking for a silver lining, then now is a time to take stock of those newly exposed systems and ask how they might be designed better, made more resilient. If machines are to be trusted, we need to watch over them.” Indeed.