We should treat AI like our own children — so it won’t kill us

Are you ready for Skynet? How about the Holodeck-meets-Skynet universe of Westworld (returning March 15 to HBO)? What about synths attacking Mars, as seen in Picard? With so much fiction painting apocalyptic images of artificial intelligence (AI) gone wrong, let's look at some scenarios of what could actually happen as artificial intelligence rises.

While many researchers and computer experts aren't worried, any new technology needs a risk assessment. So what's the risk of AI breaking bad and turning into an episode of Westworld? Opinion is mixed. But some high-profile figures, such as Elon Musk and the late Stephen Hawking, sounded the alarm years ago, and there is some reason for concern.

Westworld — a gripping story of artificial intelligence gone bad — returns for its third season on HBO March 15. Image credit: HBO/Westworld

Deaths have already occurred, and will continue to occur, from both robots and artificial intelligence, but these are accidental. Whether it's self-driving cars, assembly-line robotic arms, or even older technologies like airplane and automobile malfunctions, deaths related to technological breakdowns have been with us for over a century.

I, for one, welcome our new robotic overlords

Many would agree that the benefit from most existing technologies outweighs the risk. Reduced human mortality due to improvements in medicine, safety, and other areas more than offsets any loss of life.

Society does a lot to reduce machine-related deaths, such as seat-belt laws, but the benefit is so great that most people are willing to accept some loss of life as part of the cost. Still, any loss of life is a tragedy, so there will always be some concern as each field matures. Fear plays an even larger role.

But what happens when the deaths are no longer accidental? If we're talking about intentional sabotage and harmful programming, that threat has always existed and will never go away. But what is the likelihood that artificial life could develop sentience? And what is the likelihood that self-aware AI would step outside its original programming and intentionally harm people?


The short answer is that most scientists believe sentience is possible, but humans would need to design it that way. Will AI intelligence exceed our own and develop the capability to think for itself? Even assuming it does, it still needs to take the next step to harm humans.

Most fear centers on Terminator-style extinction events. I think these, like concerns that advanced alien life on other planets might wipe us out, are overblown. Some may disagree, but more intelligent creatures tend to grasp higher concepts like cooperation, trust, and synergy, making them less likely to kill us.

But even if large-scale extinction is off the table, individual systems, whether networked or isolated, could still intentionally cause harm. This is conjecture, but I suspect much of it would stem from self-preservation, much like backing a human into a corner. And that is true of any living creature, intelligent or otherwise.

Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect. 

― Arthur C. Clarke, 2010: Odyssey Two

Thinking of robots like your own children

What’s the solution? How can society limit the risk associated with rogue AI on smaller scales? The answer lies in shifting perspective. Why do people still have children? They are capable of causing great harm, but we do it anyway. If we begin to think of AI as human, once they achieve sentience, then it’s easier to get a sense of the solution.

Stories of robots striking out against humans have been around since at least 1920, with the play R.U.R., written by Karel Čapek. Public domain image.

There will come a point when society must assess AI for sentience. If an AI meets that threshold, courts will award it rights. We must expect this, and expect to observe, train, and teach these systems as we do our children. This will happen through programming, laws, and human interaction.

Once society understands this, most companies and developers will put safeguards in place to prevent AI from becoming sentient, so they can continue using it without those restrictions. But I suspect tests will be developed to check for sentience, and governments will likely regulate developers to help keep them honest.

But as with everything else, failures, both intentional and accidental, are bound to occur. Before long, artificial intelligence will likely be advanced enough to develop sentience. The question is whether humans will be intelligent enough to avoid domination by our robotic creations.

This article was written by Roy Huff, a best-selling author, scientist, and teacher; an optimist, life-long learner, Hawaii resident, book lover, and fan of all things science fiction and fantasy. Find out more at royhuff.net. It was originally published on The Cosmic Companion by James Maynard, an astronomy journalist and fan of coffee, sci-fi, movies, and creativity, who has been writing about space since he was 10 but is "still not Carl Sagan." See The Cosmic Companion's mailing list/podcast. You can read the original piece here.

Published March 15, 2020 — 13:00 UTC
