The artificial intelligence (AI) sector has in recent years been dominated by concerns over ethics and fairness. Concurrently, the world has awakened to the pervasive, structural problems of racial injustice. The two are deeply linked.
AI is one of the most significant technological transformations of our time. It continues a thread that begins with the rise of the personal computer and runs through the explosion of the internet. The technology is now gaining mass adoption through the mobile revolution, with many devices incorporating AI into their designs.
The technology has the power to do great good, but it can be equally dangerous. One way the nascent industry can mitigate AI's potential harms is to build diversity, equity, and inclusion (DEI) into every step of developing and deploying the technology.
Most AI developers in the enterprise, at tech startups, and at SMEs of all kinds understand why DEI is crucial for both moral and practical reasons. Operationalizing DEI, however, is a major challenge, one that demands thorough planning and careful design.
The old mantra of moving fast and breaking things has expired. As Huma Abidi, senior director of AI software products at Intel, put it:
“I think there should be a new mantra: Move fast and do it right. The very notion of ‘breaking things’ is dangerous because the stakes in AI are so high. AI for all is only possible when technologists and business leaders consciously work together to create a DEI workforce.”
Rashida Hodge, VP of North America go-to-market, global markets, at IBM, commented:
“As a Black woman in tech, I understand the harsh realities of what happens when we neglect to do the real work, and the real work is ensuring that the conversation is not just about the algorithm. Technology serves as a mirror for our society. It reveals our bias, it reveals our discrimination, [and] it reveals our racism.”
Technologies are shaped by the people who develop them, and in many cases those people are not the ones affected by the systemic effects of operating within an environment that is not inclusive or diverse.
Hodge added that the focus should shift from trying to fix things by tweaking underlying algorithms to recruiting and retaining diverse talent.
Tiffany Deng, program management lead for ML fairness and responsible AI at Google, said that people bring their full selves to artificial intelligence, and that this should guide how we think about who develops it.
Creating AI is not a siloed process. Deng explained:
“Going into those communities, understanding how they’re using technology, understanding how they can be harmed, understanding what they need for it to be better, for it to be more impactful for their lives is key to creating AI. And it’s a perspective you’re missing if you don’t have a diverse workforce.”
Developers should therefore abandon the old mindset and approach to building technology. Business owners and developers alike need to step outside the tech silo and engage the communities their AI might affect to determine the real needs and possible challenges that exist.
Build The Right Staff
Business owners need to ensure that their workforce represents the people they are trying to serve. With that in place, AI developers can address the real challenges facing the communities they serve, and solving real problems is how businesses and developers reach their goals.
Anyone building for the education sector, for example, should bring educators and other domain experts into their AI projects and rely on their expertise. In general, it is prudent for developers to consult experts in whatever field an AI project will serve.
Addressing bias in AI is necessary because the technology can mimic and amplify real-world biases. Domain experts matter when creating AI systems, but it takes a team drawn from many walks of life to produce technology that is functional and effective. Hodge commented:
“You also need consumer advocates, public health professionals, industrial designers, policymakers — all of them tying into the diverse workforce, which is … representative of the population that solution will be serving.”
Setting Up The Right Workflows In Artificial Intelligence
With the right workforce in place, the next requirement is the right workflows. Conceptually, the first question is the ‘why’: what problem is the AI project meant to solve? Clarity at this initial stage is vital.
Although artificial intelligence can help transform any industry, the right workflow is critical. Before proceeding, developers must determine whether their project makes sense, whether it is essential to the problem at hand, and how the technology might cause harm.
Answering these questions at the outset may lead developers to shut down entire workflows that would have poor outcomes. From a practical perspective, there is no single starting point for an AI project; where to begin depends on a company’s structure, business problems, needs, and available in-house expertise.
Abidi says it is important to define and build clear standards and processes that are quantifiable, with definite measures of robustness and quality.
One example is Datasheets for Datasets, a paper spearheaded by Timnit Gebru that calls for better documentation in artificial intelligence. From the paper’s abstract:
“every dataset [should] be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on.”
Another documentation project co-authored by Gebru, Model Cards for Model Reporting, notes:
“Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.”
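To make this concrete, here is a minimal sketch of what machine-readable model documentation might look like. The field names loosely echo the sections described above (intended use, evaluation procedure, and so on), but the class itself and its exact fields are hypothetical illustrations, not an official schema from either paper.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Minimal, hypothetical model-card record.

    Field names loosely follow the sections proposed in
    'Model Cards for Model Reporting'; this is not an official schema.
    """
    model_name: str
    intended_use: str              # the context the model is meant for
    out_of_scope_uses: list        # uses the developers advise against
    evaluation_procedure: str      # how performance was measured
    evaluation_groups: list        # subgroups results are reported for
    caveats: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize so the card can ship alongside the model artifact."""
        return json.dumps(asdict(self), indent=2)


# Invented example values, purely for illustration.
card = ModelCard(
    model_name="demo-classifier-v1",
    intended_use="Routing support tickets by topic",
    out_of_scope_uses=["Employment or credit decisions"],
    evaluation_procedure="Held-out test set, stratified by region",
    evaluation_groups=["region=NA", "region=EU", "region=APAC"],
    caveats=["Trained only on English-language tickets"],
)
print(card.to_json())
```

Keeping such a record next to the trained model gives reviewers, deployers, and affected communities a single place to check what the system is for and how it was evaluated.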
Like any other software project, AI projects need robust processes and standards that are equitable, diverse, and inclusive. Hodge advocates a careful, iterative approach to developing artificial intelligence technology. She said:
“With AI, change doesn’t have to happen in one swoop. Some of the best AI projects that I’ve been involved in … MVP their way to scale. They use incremental sprints, which is important because there’s nuance in this work, and that requires feedback, and more feedback, and more data, and so on.”
Just as people develop nuance by visiting many places and reading widely to gain different perspectives, the same approach should be applied to AI applications. Shortcuts in the development of any artificial intelligence project are shortcuts to failure. Developers should treat AI development as a continuous business process with a defined lifecycle, one they revisit regularly.
In general, artificial intelligence needs appropriate design, training, expertise, and diverse data. When measuring results, developers should avoid getting caught up in ‘accuracy’ alone. Instead, they should understand what they are trying to solve, examine what is useful and relevant about their project, and weigh success on a case-by-case basis.
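The point about not fixating on a single accuracy number can be made concrete: a strong aggregate score can hide poor performance on a minority subgroup, so results should be broken out per group. Here is a minimal sketch; the data, labels, and group names are invented for illustration.

```python
from collections import defaultdict


def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup.

    A single aggregate accuracy can mask poor performance on a
    minority group; reporting per-group scores makes that visible.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}


# Invented example: aggregate accuracy is 7/9 (about 0.78),
# but group "b" is served far worse than group "a".
y_true = [1, 1, 0, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b"]
print(accuracy_by_group(y_true, y_pred, groups))  # a: 1.0, b: 0.5
```

In the example, the overall number looks acceptable, yet every error falls on group "b"; that is exactly the kind of case-by-case examination the section argues for.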