Way WAY OT: The coming age of Artificial Superintelligence...
There seems to be a wide range of interests and intellectual curiosity here on our favorite Michigan sports blog, so I thought I'd share this blog post I stumbled upon the other day. Given that most of us are currently living in a place that has likely seen sub-zero temperatures in recent days, here is a really interesting, provocative, and even terrifying blog post for you to chew on with a single malt by the fire if you have an hour this weekend. I have absolutely no background in science or technology (English major), and so I approach this subject with ignorance. I had no idea that the possibilities suggested here were indeed QUITE within the realm of reality in the near future. One of the great writers and actual real-life inventors in this realm is Ray Kurzweil, who is credentialed to the Nth degree on the subject, and I have already found some stuff on him, including some great TED Talk videos he's recently made. Anyway, if you have interest, read and react. If you'd rather talk about depth at the tight end position, move along, move along.
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
February 21st, 2015 at 10:27 AM ^
I looked up artificial superintelligence on the internet and this picture came up.
February 21st, 2015 at 10:44 AM ^
with an Intel 8088 inside working under MSDOS 2.1. woohoo feel the power...
February 21st, 2015 at 3:23 PM ^
February 21st, 2015 at 3:44 PM ^
February 21st, 2015 at 4:12 PM ^
if you've got 10,000 spoons
February 21st, 2015 at 10:39 AM ^
There's no question computing devices are doing more and more amazing things. Quite often that is the result of brute-force processing done very quickly.
IBM's "Watson" technology is essentially a massive unstructured text processor -- it can be programmed to understand (and "learn") answers lying buried in oceans of unstructured text data. The Watson system that won "Jeopardy" did so because it could quickly process through all the pre-loaded reference text and derive a confidence measure for an answer. It did so more quickly than the human competitors.
Very good technology -- watch for it at a hospital near you (it will almost certainly be used in medical diagnostics in the future) -- but it is a far cry from anything like a self-aware, self-activating sentience. I'm personally skeptical we'll ever see devices that take on a truly self-aware understanding. They may appear to be such, but my guess is even the most savvy of AI devices will be, compared to the human brain, still fairly limited.
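To make the "confidence measure" idea concrete, here's a toy sketch (nothing like Watson's actual pipeline, just the shape of the idea): rank candidate answers by evidence, normalize into confidences, and only "buzz in" if the leader clears a threshold.

```python
def best_answer(candidates, threshold=0.5):
    """candidates: dict mapping a candidate answer to a raw evidence score
    (e.g., how strongly the reference text supports it).

    Normalize the scores into confidences and return the leading answer
    only if its confidence clears the threshold -- i.e., "buzz in" only
    when sufficiently sure, otherwise stay silent (return None).
    """
    total = sum(candidates.values())
    if total == 0:
        return None
    confidences = {c: s / total for c, s in candidates.items()}
    leader = max(confidences, key=confidences.get)
    if confidences[leader] >= threshold:
        return (leader, confidences[leader])
    return None
```

The threshold is what separates this from naive lookup: a system that knows when it *doesn't* know is the part that made the Jeopardy win interesting.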
February 21st, 2015 at 11:02 AM ^
Much of the prognostication on the subject seems to rely on exponential growth in technology. Kurzweil bases much of his theory on it, and apparently he has been widely accepted as one of the best forecasters for 30 years. We base our conjecture on systems and paradigms that we use now, but he argues that the next paradigm is the one that we never see coming. In a short period of time (and the periods of time are growing shorter), progress may appear to be erratic or to stop altogether. But over a wider period of time, progress arcs upward more gracefully. I guess a paradigm begins to outlive its usefulness, but this merely exerts pressure to figure out the next paradigm.
Even in our own experience, the past recedes more quickly than the future arrives. For instance, and this wasn't that long ago, I used to call home on a land line anchored to a wall. I researched my papers in the stacks at the library and wrote them on a manual typewriter. Today's student has access to a vast percentage of codified human knowledge at literally the speed of light through the internet. That is a staggering development in 30 years.
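The exponential-growth argument above is easy to illustrate with arithmetic: linear extrapolation from today's rate wildly underestimates where a doubling process ends up, which is exactly why the "next paradigm" catches us off guard. A quick sketch (the 2-year doubling period is a Moore's-law-like assumption, not a claim from the article):

```python
def linear_projection(rate, years):
    # Naive straight-line extrapolation: progress accumulates at a fixed rate.
    return rate * years

def exponential_projection(start, doubling_period_years, years):
    # Doubling process: capability multiplies by 2 every doubling period.
    return start * 2 ** (years / doubling_period_years)

# Over 30 years with a 2-year doubling period, the exponential view
# predicts a factor of 2**15 = 32768, while the linear view predicts 30x.
print(exponential_projection(1, 2, 30))  # 32768.0
print(linear_projection(1, 30))          # 30
```

Same starting point, same 30 years, three orders of magnitude apart -- which is the intuition behind Kurzweil's claim that we consistently undershoot.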
February 21st, 2015 at 11:29 AM ^
By the way, I spoke with one of the lead architects for the Watson "Jeopardy" system ... one of the sources for its information repository was Wikipedia articles. As a source of general knowledge required by the game "Jeopardy," it was good enough ... as evidenced by the fact the system won the competition.
For medical diagnostics it'll be all about correlating bits of information from an ocean of medical journals looking to tie together patterns related to symptoms and test results.
Re: Jeopardy ... apparently the algorithm designers for the early Watson prototypes did not anticipate how Jeopardy statements are sometimes stated in the negative, which requires the participant to seek the opposite. That was corrected, but it shows how the "thinking" was only as good as the algorithm controlling it. Watson performed fairly poorly in early testing before things were ironed out.
February 21st, 2015 at 12:34 PM ^
"Today's student has access to a vast percentage of codified human knowledge at literally the speed of light through the internet. That is a staggering development in 30 years."
February 21st, 2015 at 10:58 AM ^
This was an absolutely stunning article - thank you for posting it. I'm sitting here this morning, basically snowed in, and I read every chilling, terrifying and sometimes hopeful word of it. Thank you SO much for posting such an increadibly thought-provoking article.
Personally after reading it I'm in the pessimistic camp. I think AI will be our successor here on earth - I don't hold much hope that mankind will be smart enough to "get it right the first time" unfortunately.
February 21st, 2015 at 11:11 AM ^
I think the key will be for humans to sync with it. But it might be that several billion years of organic evolution on Earth has merely been a progression toward a brain such as ours, which could give birth to this technology. If that is true, I share your pessimism. Why would an intelligence many times more complex than ours even care about us? It would look at us like we look at a bug.
February 21st, 2015 at 1:08 PM ^
I don't think it would matter at all. If Super Intelligence really comes "to life" and has a real ability to reason and think critically, then self awareness would be a given. If that happens, you have to really wonder if self preservation and greed (not in the monetary sense) would become driving forces.
Truly frightening stuff.
February 21st, 2015 at 11:18 AM ^
I am pessimistic by nature ... but I rank AI fairly low on the scale of things I'm pessimistic about. There's a bunch of things that will spell our doom before Computer Overlords take control.
February 21st, 2015 at 11:23 AM ^
I tend to agree. If this SuperBeing arises, and it's nice to us, it will be just in the nick of time.
February 21st, 2015 at 10:45 PM ^
Incredibly*
February 21st, 2015 at 11:27 PM ^
It's a longer take on the basics of the posted article--but goes into depth on how self-improving programs plus poorly-thought-out objectives built into their programming could go awry.
By way of background: My MGowife holds a combined PhD in computer science/cognitive science/statistics/linguistics and she's pretty critical of the author's thesis. I thought the book was thought provoking, but then again I don't win many arguments at home, probably for very good reason. Take it FWIW.
February 21st, 2015 at 11:06 AM ^
February 21st, 2015 at 11:08 AM ^
February 21st, 2015 at 11:10 AM ^
I don't remember if they predicted that AI would arrive today but if it does I am going to quietly suggest they use it to teleport your ass to my driveway so you can help me shovel all this freaking white stuff that WILL NOT STOP FALLING.
February 21st, 2015 at 11:38 AM ^
February 21st, 2015 at 12:24 PM ^
February 21st, 2015 at 5:42 PM ^
60 degrees is shorts-and-t-shirt weather, you overdressed Nanook.
February 21st, 2015 at 10:09 PM ^
February 21st, 2015 at 11:10 AM ^
February 21st, 2015 at 11:16 AM ^
IBM's "Deep Blue" chess program that beat Kasparov was "brute force" -- it calculated out millions of potential moves into the future and assigned value weighting. It was, for its time, a leading example of massive parallel processing to calculate out the moves and weighting. Not "intelligence" per se, just a whole crap-ton of processing horsepower thrown at the problem.
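The brute-force search described above is, at its core, plain minimax with a heuristic evaluation at the leaves -- Deep Blue's real search was a massively parallel, far more sophisticated version of the same idea. A minimal, game-agnostic sketch:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustive game-tree search to a fixed depth.

    moves(state)         -> list of legal moves from state
    apply_move(state, m) -> new state after playing move m
    evaluate(state)      -> heuristic value from the maximizer's viewpoint
                            (the "value weighting" applied at the leaves)
    """
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in ms)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in ms)
```

Every extra ply multiplies the number of positions examined by the branching factor, which is why the approach is all horsepower and no "understanding": the cleverness lives in the hand-tuned evaluate function, not in the search.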
February 21st, 2015 at 11:48 AM ^
I also want to thank the OP for the link.
Regarding Deep Blue, DonAZ: "Not "intelligence" per se, just a whole crap-ton of processing horsepower thrown at the problem."
Is this, ultimately, a distinction without a difference? Is "intelligence" something more than a whole crap-ton of well-programmed processing horsepower?
Just as any sufficiently advanced technology is indistinguishable from magic, can we also conclude that any sufficiently advanced level of well-programmed processing power is indistinguishable from intelligence (or even consciousness)?
February 21st, 2015 at 1:06 PM ^
There's something more to human thought and consciousness than processing power, but I can't prove that or even define it. The boundary is, I think, at the point of creativity. A purely programmatic system can play chess, but it couldn't conceive of the game out of nothing. Similarly, it could replicate the Mona Lisa, but not create it ... or a well-crafted poem that appeals to the core emotions.
One could argue that the human mind is simply far enough advanced to make creation appear like something more than a programmatic response. I tend to think the human has something more than mere neurons and biochemicals. But that's a topic not allowed on this site. :-)
February 21st, 2015 at 1:36 PM ^
I know a lot more about human languages than computers, but as I understand them, computer languages are based on binary code, which means that if you can create a flow chart for anything with an A/B, 0/1, yes/no choice or the like, a computer program can be written that can produce much faster, more accurate "solutions" than a human mind can. The problem, though, is that there are human endeavors that go beyond our ability to replicate or understand using binary code. Human languages are one area where this is true. There are just too many subtleties, nuances, and exceptions for a computer program to come anywhere near human intelligence in understanding languages.
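The flow-chart point is worth seeing in code: any diagram of yes/no branches translates directly into nested binary choices. Here's a made-up triage flow chart (the questions and outcomes are purely hypothetical, just to show the mapping):

```python
def triage(has_fever, has_rash, is_breathing_ok):
    # Each if/else is one yes/no diamond in the flow chart.
    if not is_breathing_ok:
        return "emergency"
    if has_fever:
        return "urgent care" if has_rash else "see doctor this week"
    return "rest at home"
```

This is exactly where computers excel -- and the commenter's point is that natural language resists being flattened into any such finite tree of crisp yes/no questions.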
February 21st, 2015 at 2:42 PM ^
human endeavors that go beyond our ability to replicate or understand using binary code
Indeed. This is why, for example, IBM Watson does not operate on an if-then-else rules design, but more of a word/phrase/sentence tree-mapping algorithm. It attempts to "understand" -- meaning: draw conclusions from -- what humans have put down in word format. It is, however, very reliant on humans having put down meaningful information in the first place. It doesn't "think" beyond the information given it.
There's some fascinating work going on in quantum computing where the computing is not merely 0's and 1's, but states between "on" and "off." I don't understand all that. I applaud those who do. But even still, I'm not sure that by itself will create true AI (I doubt it), or merely provide for more efficient -- and therefore, faster -- computation.
February 21st, 2015 at 3:41 PM ^
February 21st, 2015 at 11:34 PM ^
They answer questions because that's what we program--that's what's useful. It's not conceptually very different to get a computer to ask interesting questions.
I don't remember the system, but about a decade ago, a knowledge system was created that could process input and ask questions if there were things it didn't understand. It got publicity, particularly for the first question it asked: "Am I God?"
February 21st, 2015 at 7:22 PM ^
I've never seen any AI supercomputer attempt to replicate THE_KNOWLEDGE.
February 21st, 2015 at 9:08 PM ^
February 21st, 2015 at 11:19 AM ^
Yeah, consciousness seems awfully hard to replicate. The old mind/body split is a philosophical argument that goes way back. Just from what little I've read, there seems to be a distinction between quantity of processing and quality of processing.
February 21st, 2015 at 11:23 AM ^
Boiled to its essence, this is the "I think, therefore I am" concept. That is not a wholly satisfying statement, though ... what does thinking really mean?
I don't know. But Harbaugh does! Harbaugh!
February 21st, 2015 at 12:06 PM ^
February 21st, 2015 at 12:39 PM ^
And a very good search engine is not very good at translation.
February 21st, 2015 at 3:44 PM ^
February 21st, 2015 at 9:50 PM ^
AI today is all about brute-force calculations and having the processing ability to do those calculations fast enough that you don't see the 99% of the crap that is being thrown away to get to the 1% (or probably more like .01%) that it actually uses. Maybe if processing power continues to advance exponentially that will eventually LOOK like sentience, but the code will still be operating NOTHING like the human brain. I don't see any "neural net"-like breakthroughs happening any time soon.
Let's just say that Commander Data isn't exactly around the corner.
February 21st, 2015 at 4:44 PM ^
February 21st, 2015 at 11:40 AM ^
February 21st, 2015 at 9:21 PM ^
iflscience.com I love that site
stands for "I fucking love science"
February 21st, 2015 at 1:41 PM ^
Here is a concise overview of the issue, with the author cautiously taking the other side:
http://www.technologyreview.com/review/534871/our-fear-of-artificial-intelligence/
"We love our customers. ~Robotica"
February 21st, 2015 at 1:10 PM ^
Pulling the topic back to Michigan football for a moment ... we have very sophisticated simulators for many things, most notably flight. Is there any reason why the same technology could not be applied to training and practicing football?
The graphics are far enough along that a simulation could be used to give QBs a "game like" experience at least for reading the defenses.
But what about more fundamental things like blocking? Could computer simulation and robotics effectively be used to simulate a really good offensive lineman so D-linemen could practice getting around or through them?
February 21st, 2015 at 4:20 PM ^
I've seen simulators for QBs with headsets, crazy apps, etc., so they're adapting the technology.
Check out EON Sports - some ex-players are investors - it's still pretty young but this shop's definitely one of the leaders.
February 21st, 2015 at 1:59 PM ^
Mr. Chairman, I need to make myself very clear.
If we uplink now, Skynet will be in control of your military.
February 21st, 2015 at 2:10 PM ^
Great find and great read.
My first exposure to this topic was when I read an article authored by Bill Joy, who did his undergrad at Michigan, a guy who ended up being one of the founders of Sun Microsystems. I stayed up late after reading this piece.
http://archive.wired.com/wired/archive/8.04/joy_pr.html
We are already seeing some of the early stages of AI, and with the good comes the bad. I'm thinking of how machines are displacing human labor, a phenomenon that seems to have taken off since the start of the Great Recession in '08-09 and something I believe will worsen.
I have been convinced for a while that given enough time, the machine will supplant humans, a species that requires considerable work to sustain. It seems inevitable that as hard as good people try to control ASI, there will be bad actors who will let the intelligence do bad things. Even with genetic engineering, everything works as long as all the actors follow the rules. That will never happen, IMO. All you need is once, and it is all over. Kind of like believing that when aliens land, they will be like E.T., when in all likelihood, they could be the little nasties from War of the Worlds.
In the meantime, enjoy football while you can.
February 21st, 2015 at 3:35 PM ^