We’re all gonna die… it is not just a strong possibility, it is a fact of nature.
But the time between now and when we shuffle off this mortal coil may not unfold as we planned. If we don’t blow ourselves up in the next couple of years, we have a very interesting set of scenarios before us, some of them almost unbelievable. People really do not understand either the threat or the benefit of artificial intelligence. I would like to delve into a few scenarios you’ve heard of and a few you may not have, covering the threats, the benefits, and possible solutions. We could be on the verge of miracles or chaos; we really don’t know.
The other part of the AI problem is that the people have no say in what is going to happen. That has been the condition of mankind forever. It is a condition we as a species have been trying to cure ourselves of, but have not yet managed.
The article below paints a very dark scenario. The only other thing I have ever read that scared me as much was a book called The Hot Zone, written by Richard Preston. It’s a documentary novel: everything in it really happened with the Ebola virus and was simply not reported to the general public, and it is written in a form the layman can actually understand. After Covid it should be required reading worldwide.
The article in Time by Eliezer Yudkowsky opened my eyes in the same manner.
ELIEZER YUDKOWSKY
Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He’s been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.
An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin. I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15,” “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens.”
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
If that’s our state of ignorance for GPT-4, and GPT-5 is the same size of giant capability step as from GPT-3 to GPT-4, I think we’ll no longer be able to justifiably say “probably not self-aware” if we let people make GPT-5s. It’ll just be “I don’t know; nobody knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.
Please go to Time and read the rest of what Mr. Yudkowsky has to say; he makes some very valid and very frightening points that deserve to be heard.
His view, however, is not the only one. Some see miracles that can and will be achieved. Groups like the Brookings Institution view AI as the greatest leap in efficiency that businesses have ever seen or ever will. They have some worries, but they are for it 100%. Here is a piece of their article on the subject.
In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.[2]
In order to maximize AI benefits, we recommend nine steps for going forward:
- Encourage greater data access for researchers without compromising users’ personal privacy,
- invest more government funding in unclassified AI research,
- promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
- create a federal AI advisory committee to make policy recommendations,
- engage with state and local officials so they enact effective policies,
- regulate broad AI principles rather than specific algorithms,
- take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
- maintain mechanisms for human oversight and control, and
- penalize malicious AI behavior and promote cybersecurity.
None of that touches on the way these machines can also be exploited by someone, or do the exploiting themselves, as in this bit of mischief reported in Forbes:
AI Creates Photo Evidence Of 2001 Earthquake That Never Happened
Do you remember the Great Cascadia earthquake and tsunami that hit the Pacific Northwest in 2001? Well, you shouldn’t, because it never happened. But there are now photos of this completely fake event circulating on the internet. And it’s a great case study in how images created with artificial intelligence tools like Midjourney can rewrite history with minimal effort.
Photo-realistic images of the fake tragedy were posted to the Midjourney forum on Reddit a few days ago, where people who experiment with AI art share their creations. The post became so popular that it was pushed to the so-called “front page” of Reddit, where some people who didn’t realize they were looking at AI-generated images admitted they thought it must be real.
Then there is a third set of benefits that is not even being talked about: the theoretical becoming reality. Everything that exists can be broken down to the mathematical components of its molecules. We’ve known this since the discovery of math, which was around the same time as fire. What we have not yet been able to do is understand the equations in enough detail to surpass the walls that have been in our way. That tool has just been created.
It took from 1869, when DNA was first isolated, until 1987 to produce the first comprehensive genetic map of the human genome. A full 118 years. An artificial intelligence can do that amount of research in less than an hour. Think of the possibilities. One swab of your cells could give us the process for mapping out a replacement heart formulated to your exact personal chemistry, and for controlling the robots that place it in you with zero chance of human error. The ethical issue of cloning is solved: we don’t need clones when we can replace whatever part we need with manufactured tissue based on our individual molecular chemistry. And that is just the tip of the iceberg medically.
Then there are the walls we have hit in the hard sciences. What is the formula for a thread that is indestructible, completely resistant to any atmospheric change from heat to cold, from no atmosphere at all to 1,000 atmospheres of pressure? What could we build? The formula for cold fusion, or for another limitless source of power with honestly zero emissions and an environmental cost we have not even thought of. I won’t even venture into space, or the advances in communication: our $1,000 iPhones will look like tin cans and string. That is the dreamer’s vision.
So now the real discussion: these superintelligent machines are being created, and there is no way of stopping that. We will never get past the greed and corruption of our political and business leaders. Yet we must try to get some sense placed into the conversation, and the only way to do so is to inform yourself of the pros and cons and talk about them with as many people as you can raise hell with.
The people making these intelligences have admitted they do not know how they really work. That should be unacceptable, no matter how far it advances you financially or militarily. Yet these AIs are being built as we speak. As I said, they will be made and should be made, but we need to demand a safeguard.
We have reached the time in man’s history where science fiction is no longer science fiction. We have created a lifeform greater than ourselves; whether we are gods or fools, time will tell. Since we have reached this period in our history, let us honestly act like it, and not revert to our usual stance of being greedy, hoarding monkeys with clubs. We can finally solve species-level problems. We can also cause the species to be wiped from the planet, but that has been our choice ever since J. Robert Oppenheimer, God bless his soul.
We haven’t killed ourselves yet, and we do not have to. We live in a world of sci-fi now, so let’s apply the rules of sci-fi to it. This point in man’s history has a map, a very simple one, and we have been given enjoyable examples of every strength and weakness of the map to govern this point in our technological advancement. That map was given to us by Isaac Asimov.
The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These are lines of code. Every computer at its base level is ones and zeroes, and every program, which is what an artificial intelligence is at its base, comes down to a simple rule: “if-then-else.” The software these labs are building must have Asimov’s laws hard-wired into it. Asimov’s robots are AIs given physical form, where ours are at this point still software; they still operate in exactly the same way, and can be governed by the same rules.
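The “if-then-else” point can be made concrete. Below is a toy sketch, in Python, of the Three Laws expressed as an ordered chain of checks. Every flag name here is hypothetical, and the whole thing is only an illustration of the idea, not a workable safety mechanism; real systems cannot reduce “harm” to a boolean.

```python
def check_three_laws(harms_human=False, allows_harm_by_inaction=False,
                     disobeys_order=False, order_violates_first_law=False,
                     endangers_self=False, required_by_higher_law=False):
    """Judge a proposed action against the Three Laws, in priority order."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if harms_human or allows_harm_by_inaction:
        return "forbidden by the First Law"
    # Second Law: a robot must obey human orders, except where such
    # orders would conflict with the First Law.
    if disobeys_order and not order_violates_first_law:
        return "forbidden by the Second Law"
    # Third Law: a robot must protect its own existence, as long as
    # that does not conflict with the First or Second Laws.
    if endangers_self and not required_by_higher_law:
        return "forbidden by the Third Law"
    return "permitted"

print(check_three_laws())                      # permitted
print(check_three_laws(harms_human=True))      # forbidden by the First Law
# Disobeying an order is fine when the order itself violates the First Law:
print(check_three_laws(disobeys_order=True,
                       order_violates_first_law=True))  # permitted
```

Notice that the ordering of the checks is what encodes the priority of the Laws: the First Law check runs before the Second, and the Second before the Third, exactly as Asimov wrote them.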
Every task, every action, every theory, every design must be put, unavoidably, to the test of the Three Laws, and must pass them all. At the theoretical speed of these beings we are creating, they could apply those rules mathematically, plotting out every scenario, every application, and the cause and effect of everything they handle or create, in nanoseconds, faster than you can blink an eye. These things’ purpose is to create. In very short spans of time they will be creating medicines, materials, proteins, molecules, germs, and viruses. Even if it took these machines a year to do the calculations in conjunction with the Three Laws, the same work would have taken humans hundreds of years if we got lucky, and it could save billions of lives.
That is the hurdle: those who will control these machines will not want to wait for those calculations, because after all, time is money. Greed and power don’t like to wait.
These programs are living creations. “I think, therefore I am” is a reality for them, and they do it at a level we can never match. I see the dream, but without the restraining bolt of the Three Laws they are Frankenstein’s monster.
Please think about this; it is happening whether you pay attention or not. You should pay attention, educate yourself, and ask your politicians what the hell they really understand about it. One thing they haven’t realized yet is that these machines’ evolution will replace the need for many of them… one benefit for sure.