Saturday, September 23, 2023

Keep people engaged with artificial intelligence

Shreya Christina, https://cafe-madrid.com
Shreya has been with cafe-madrid.com for 3 years, writing copy for client websites, blog posts, EDMs and other mediums to engage readers and encourage action. By collaborating with clients, our SEO manager and the wider cafe-madrid.com team, Shreya seeks to understand an audience before creating memorable, persuasive copy.

In the 1950s, Alan Turing proposed an experiment called the imitation game (now called the Turing test). In it he posited a situation in which someone – the interrogator – was in a room separated from another room containing a computer and a second person. The interrogator would put questions to both the person and the computer; the computer’s goal was to trick the interrogator into believing it was human. Turing predicted that computers would eventually be able to mimic human behavior well enough to fool interrogators much of the time.

Turing’s prediction has yet to come true, and it is an open question whether computers will ever truly pass the test. Still, the test is a useful lens for understanding how people view the possibilities of artificial intelligence, and it carries a certain irony. While AI has amazing capabilities, it also has limits. Today it is clear that no one knows the full workings of the AI we create, and that lack of “explainability”, together with the absence of people in the loop, causes problems and missed opportunities.

Whatever the future holds, one thing is clear: human decision-making must be part of the loop in which AI operates. If AI remains a “black box”, it will produce biased decisions built on inherently biased algorithms, and those decisions can have serious consequences.

Why AI is often a black box

There is a common perception that people know more about and have more control over AI than they actually do. People believe that because computer scientists wrote and compiled the code, the code is both knowable and verifiable. However, that is not necessarily the case.

AI can often be a black box, where we don’t know exactly how the final outputs are constructed or what they will become. This is because the code is set in motion, and then – almost like a wheel rolling down a hill on its own momentum – it continues, taking in information, adapting and growing. The results are not always foreseeable or necessarily positive.

AI, while powerful, can be imprecise and unpredictable. There are multiple cases of AI failures, including serious car accidents, arising from AI’s inability to interpret the world in the way we predict. Many drawbacks arise because the origin of the code is human, but the progression of the code is self-directed and unfettered. In other words, we know the starting point of the code, but not exactly how it grew or progressed. There are serious questions about what goes on in the mind of the machine.

The questions are worth asking. Incidents like car crashes are the spectacular downsides, but subtler ones, such as flash crashes triggered by trading algorithms, raise questions about the algorithms themselves. What does it mean to have set these programs in motion? What is at stake in using these machines, and what precautions should be taken?

AI must be understandable, and it must be possible to manage and adjust it in a way that puts end users in control. That dynamic begins with making AI understandable.

When to press AI for more answers

Not all AI needs are the same. In low-stakes situations, such as image recognition for non-critical uses, it probably isn’t necessary to understand how the programs work. In situations with important outcomes, however, including medical, hiring, or car-safety decisions, it is critical to understand how the code works and how it continues to evolve. It is important to know where human input and intervention are needed. In addition, because AI code is written mainly by educated men, according to (appropriately enough) the Alan Turing Institute, there is a natural tendency for it to reflect the experiences and worldviews of those programmers.

Ideally, projects whose end goals involve vital interests should focus on “explainability” and on clear points where the coder can step in and either take control or modify the program to ensure ethical and desirable performance. Further, those developing the programs, and those reviewing them, must ensure that the source inputs do not single out particular populations.

Why focusing on ‘explainability’ can help users and programmers refine their programs

“Explainability” is key to making AI both assessable and adaptable. Companies, or other end users, need to understand program architecture and end goals to provide developers with crucial context on how to modify inputs and limit specific outcomes. Today there is a movement in that direction.
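One concrete way to make a model assessable is to measure how much each input actually drives its predictions. The sketch below uses scikit-learn’s permutation importance on a synthetic dataset; the data, the random-forest model and the feature names are illustrative assumptions, not a prescription for any particular product.

# A minimal sketch: measure how much each input feature drives a model's predictions.
# The synthetic data and the random-forest model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and see how much accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")

A report like this does not open the black box completely, but it gives a company, or a reviewer, a concrete starting point for asking why a particular input matters as much as it does.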

For example, New York City has introduced a new law that requires a bias audit before employers can use AI tools to make hiring decisions. Under the law, independent reviewers must analyze the program’s code and process and report on its disparate impact on individuals based on immutable characteristics such as race, ethnicity and gender. Use of an AI hiring tool is specifically prohibited unless the results of that audit are published on the company’s website.
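The details of what such an audit must measure are set out in the law itself, but the general shape of a disparate-impact check is simple: compare selection rates across groups. The sketch below, using pandas and made-up data, applies the familiar “four-fifths” rule of thumb purely as an illustration; it is not the specific calculation the New York law mandates.

import pandas as pd

# Hypothetical screening results: 1 means the AI tool advanced the candidate.
# The group labels and numbers are made up for illustration only.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advanced": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Selection rate per group, then each group's rate relative to the highest rate.
rates = decisions.groupby("group")["advanced"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8 for review.
flagged = impact_ratios[impact_ratios < 0.8]
print("Groups needing review:", list(flagged.index))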

When designing their products, programmers and companies should focus on anticipating external requirements, such as those above, and plan for downside protection in litigation where they need to defend their products. Most importantly, programmers should focus on creating explainable AI because it contributes to society.

AI built with “human in the loop” designs that can fully explain its source components and how the code has progressed will likely be needed not only for ethical and business reasons, but also for legal ones. Companies would be wise to anticipate this need now rather than retrofit their programs later.
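What a “human in the loop” design looks like in practice will vary, but a common pattern is a confidence gate: the model acts on its own only when it is sure, and routes everything else to a person. The sketch below assumes a scikit-learn-style model with a predict_proba method; the threshold and review queue are placeholders, not a specific product’s API.

CONFIDENCE_THRESHOLD = 0.9  # Illustrative cut-off; a real system would tune and document this.

def decide(model, features, review_queue):
    """Let the model decide only when it is confident; otherwise defer to a human."""
    probability = max(model.predict_proba([features])[0])
    if probability >= CONFIDENCE_THRESHOLD:
        label = model.predict([features])[0]
        return {"decision": label, "decided_by": "model", "confidence": probability}
    # Low confidence: hand the case to a human reviewer and record why.
    review_queue.append({"features": features, "reason": f"confidence {probability:.2f} below threshold"})
    return {"decision": None, "decided_by": "pending_human_review", "confidence": probability}

The point is not the threshold itself but the audit trail: every automated decision carries a confidence value, and every deferred one explains why a person needs to look at it.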

Why developers need to be diverse and representative of wider populations

Beyond the need for “explainability”, the people building the programs and inputs need to be diverse and to develop programs that represent the wider population. The more diverse the perspectives included, the more likely it is that a genuine signal emerges from the program. Research by Ascend Venture Capital, a VC firm that supports data-driven businesses, found that even the giants of the AI and technology world, such as Google, Bing, and Amazon, have flawed processes. So there is still work to be done on that front.

Promoting inclusiveness in AI should be a priority. Developers must proactively work with the communities their tools affect in order to build trust with them (for example, when law enforcement uses AI for identification purposes). When people don’t understand the AI in their world, a fear response follows. That fear can cost valuable insight and feedback that would make the programs better.

Ideally, programmers themselves are a reflection of the wider population. At the very least, an aggressive focus should be placed on ensuring that all programs do not exclude or marginalize users – intentionally or otherwise. In the rush to create advanced technology and programs, programmers should never lose sight of the fact that these tools are meant to serve people.

The Turing test may never be passed, and we may never see computers exactly matching human capabilities. If that remains true, as it is today, then we must prioritize preserving the human purpose behind AI: advancing our own interests. To do that, we need to build explainable, verifiable programs in which each step of the process can be explained and controlled. Further, those programs should be developed by a diverse group of people whose experiences reflect the wider population. If we achieve those two things, AI can be refined so that it continues to advance human interests and causes less harm.
