Artificial intelligence seems to be everywhere: today we can have AI write our blogs, trade our stocks, and kill our enemies. Elon Musk recently said that he is far more worried about AI than about nuclear weapons. This is a problem the world will need to get a handle on sooner rather than later.
Today we talk to Peter Scott, a futurist and technology expert. Peter is the author of the book “Artificial Intelligence and You”. He speaks widely about AI, and this is what he had to say…
E-Crypto News
-
Is Artificial Intelligence a threat?
Every technology is dangerous if abused or misused. What makes artificial intelligence extraordinary is the staggering range of applications it can be put to. The threat lies not in AI, but in us: AI is like a chainsaw, but are we a lumberjack or a toddler? Without understanding what we’re doing, we might cut our legs off. Yet AI will certainly bring us benefits equally numerous and impactful, all the way up to cures for cancer and answers to climate change. So it’s not AI that needs to be restrained; it’s us.
-
Do you find most AI biased towards certain groups, and will that create even more inequality?
Bias in today’s AI applications is a highly active topic. AI is, at its heart, machinery: algorithms and mathematics, within which there is no place for bias. But those algorithms work by using data that we feed them: data from the real world that we use to teach AI what to do. If that data contains bias, the AI will produce biased answers. Debiasing data is very difficult; sometimes even complete and accurate data will still let us down. For instance, an AI designed to predict the next President of the United States from all the past data could only pick a man; the training data contains no women. Amazon shut down its resume-evaluating AI because it downgraded candidates whose resumes included words like “women’s” (as in “captain of women’s soccer team”), because most Amazon engineers have been men. So we have to teach AI not just the best that we have done in the past, but the best that we want to be in the future.
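To make that mechanism concrete, here is a minimal sketch in Python (assuming scikit-learn is available; the data is synthetic and invented purely for illustration, loosely echoing the Amazon resume example) of how a model trained on biased historical outcomes faithfully reproduces that bias:

```python
# Minimal sketch: bias in training data becomes bias in the model.
# Synthetic, illustrative data only: past hires skew male, so a feature
# flagging the word "women's" ends up penalized by the learned model.
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, resume_mentions_womens]
# Label: 1 = was hired in the past, 0 = was not.
X = [
    [5, 0], [6, 0], [4, 0], [7, 0], [5, 0], [6, 0],  # hired (mostly male resumes)
    [5, 1], [6, 1], [7, 1],                          # equally qualified, not hired
    [1, 0], [2, 0],                                  # underqualified, not hired
]
y = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# The coefficient for the "women's" feature comes out strongly negative:
# the model has learned the historical bias, not merit.
print(dict(zip(["experience", "mentions_womens"], model.coef_[0])))
print(model.predict([[6, 0], [6, 1]]))  # same experience, different flag
```

The algorithm itself does nothing wrong here; it optimizes exactly as designed. The bias arrives entirely through the labels we supplied.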
-
AI and advanced weapons – will this be a good marriage, or will this be a problem for our future?
This is a very interesting question. Proponents of using AI to make weapons autonomous (able to make their own decisions on targeting), such as former deputy defense secretary Robert Work, say that this reduces collateral damage and bystander casualties by making smart weapons smarter: killing only exactly the people they should. Arguing against them are many scientists, including professors Stuart Russell of the Campaign to Stop Killer Robots and Peter Asaro of the International Committee for Robot Arms Control. They are concerned that the technology will be subverted or copied by rogue states or terrorists, unleashing unstoppable waves of urban warfare conducted by devices like autonomous mini-drones, with the rest of us caught in the middle. This disagreement is not going to be resolved soon.
-
High-frequency stock trading – is AI a help for traders, or will it create a herd mentality?
I think this is better addressed to a trader. High-frequency trading has been around for several decades and has already caused more than one flash crash. AI has as much potential to destabilize that environment as it does many others, but we can be certain that the most advanced AI available is already being deployed to find any and every possible gain in trading markets.
-
What kind of regulation do you see for AI around the world?
Very little. The most advanced government body in that respect might be the European Parliament, which keeps writing very forward-looking white papers about advanced AI and policy. But part of the problem is that governments have not figured out where to draw the line between data processing and artificial intelligence. The conclusions that AI can reach from its data are complex enough to be qualitatively different from what data processing does, but there is a continuum from one to the other within which it is very hard to draw legislative boundaries. Companies such as IBM are developing introspective technologies that enable more ethical use of AI, and there are some new international bodies like the Global Partnership on AI, but little of their work involves regulation.
-
Is there a way that companies can track an AI model’s performance after their AI rollouts?
A company shouldn’t buy an AI implementation that doesn’t come with a way to track its performance! But what will take more work is determining just which metrics constitute the proper way to measure performance. A vendor may supply all kinds of impressive-looking metrics that are easy to compute but do the client no good. A company asking this question should have a Chief Data Scientist or Chief AI Officer, or it is unlikely to know what it’s getting into. If it doesn’t have its data house in order before deploying AI, it will just make the same mistakes faster.
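As one illustration of what “a way to track performance” might look like in practice, here is a minimal sketch in Python. The class, method names, and 0.80 threshold are all hypothetical assumptions for illustration, not any vendor’s actual API: it logs each prediction against the ground truth once that becomes known, and flags when a rolling accuracy metric drifts below an agreed floor.

```python
# Minimal sketch of post-rollout performance tracking: record each
# (prediction, truth) pair as outcomes arrive, then watch a rolling
# accuracy metric for drift. All names and thresholds are illustrative.
from collections import deque

class ModelMonitor:
    def __init__(self, window=500, alert_threshold=0.80):
        self.window = deque(maxlen=window)      # recent correctness flags
        self.alert_threshold = alert_threshold  # accuracy floor agreed with the business

    def record(self, prediction, truth):
        self.window.append(prediction == truth)

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def needs_review(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.alert_threshold

monitor = ModelMonitor()
monitor.record(prediction=1, truth=1)
monitor.record(prediction=0, truth=1)
if monitor.needs_review():
    print("Rolling accuracy below threshold; investigate data drift.")
```

The hard part, as the answer notes, is not the plumbing but choosing a metric that actually reflects business value rather than one that is merely easy to compute.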
-
Do AI programs have ‘placebo-like tests’ to see if they are doing even better than ‘regular workers’?
Pass. I don’t know what a placebo-like test is in the context of comparing AI to humans.
-
Is Google’s LaMDA AI sentient?
Ha. Look, almost no one except Blake Lemoine thinks that LaMDA is sentient, but what neither he nor they have is a test for sentience or even a definition of sentience that would help. So this question will remain as catnip for talk shows wanting to ignite debate, because they can be sure that science isn’t about to halt it with a definitive answer.
-
What would it mean if it were sentient?
This is an interesting question. Sentience is the capacity to feel, and we believe that many animal species are sentient, although, lacking a definitive test, this is still subject to debate (otherwise countries would not be permitted to continue harvesting octopuses). This is not humans’ defining quality: we are Homo sapiens, where sapience is thinking. So sentience actually doesn’t have to mean more than the ability to feel an external stimulus. It doesn’t necessarily even include emotions. Given that, any AI being sentient isn’t clearing a very high bar. Most people think of sentience as including self-awareness and/or consciousness and/or intelligence, all terms that are even harder to define or measure. So the main thing that an AI being declared sentient would mean would be mass confusion 😊 However, we should get ready for that, because as far away as LaMDA is from that bar right now, within a few years there will be AIs that leave the question of their sentience far more open to debate.
Peter Scott is a futurist and technology expert on a mission to help us get along with artificial intelligence. His latest book is Artificial Intelligence and You: What AI Means for Your Life, Your Work, and Your World.