Way WAY OT: The coming age of Artificial Superintelligence...

Submitted by yossarians tree on

There seems to be a wide range of interests and intellectual curiosity here on our favorite Michigan sports blog, so I thought I'd share this blog post I stumbled upon the other day. Given that most of us are currently living in a place that has likely seen sub-zero temperatures in recent days, here is a really interesting, provocative, and even terrifying blog post for you to chew on with a single malt by the fire if you have an hour this weekend.  I have absolutely no background in science or technology (English major), and so I approach this subject with ignorance. I had no idea that the possibilities suggested here were indeed QUITE within the realm of reality in the near future. One of the great writers and actual real-life inventors in this realm is Ray Kurzweil, who is credentialed to the Nth degree on the subject, and I have already found some stuff on him, including some great TED Talk videos he's recently made. Anyway, if you have interest, read and react. If you'd rather talk about depth at the tight end position, move along, move along.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

 

DonAZ

February 21st, 2015 at 10:39 AM ^

There's no question computing devices are doing more and more amazing things.  Quite often that is the result of brute-force processing done very quickly.

IBM's "Watson" technology is essentially a massive unstructured text processor -- it can be programmed to understand (and "learn") answers lying buried in oceans of unstructured text data.  The Watson system that won "Jeopardy" did so because it could quickly process through all the pre-loaded reference text and derive a confidence measure for an answer.  It did so more quickly than the human competitors.

Very good technology -- watch for it at a hospital near you (it will absolutely, definitely be used in medical diagnostics in the future) -- but it is a far cry from anything like a self-aware, self-activating sentience.  I'm personally skeptical we'll ever see devices that take on a truly self-aware understanding.  They may appear to be such, but my guess is even the most savvy of AI devices will be, compared to the human brain, still fairly limited.

yossarians tree

February 21st, 2015 at 11:02 AM ^

Much of the prognostication on the subject seems to rely on exponential growth in technology. Kurzweil bases much of his theory on it, and apparently he has been widely accepted as one of the best forecasters for 30 years. We base our conjecture on systems and paradigms that we use now, but he argues that the next paradigm is the one that we never see coming. In a short period of time (and the periods of time are growing shorter), progress may appear to be erratic or to stop altogether. But over a wider period of time, progress arcs upward more gracefully. I guess a paradigm begins to outlive its usefulness, but this merely exerts pressure to figure out the next paradigm.

Even in our own experience, the past recedes more quickly than the future arrives. For instance, and this wasn't that long ago, I used to call home on a land line anchored to a wall. I researched my papers in the stacks at the library and wrote them on a manual typewriter. Today's student has access to a vast percentage of codified human knowledge at literally the speed of light through the internet. That is a staggering development in 30 years.

DonAZ

February 21st, 2015 at 11:29 AM ^

By the way, I spoke with one of the lead architects for the Watson "Jeopardy" system ... one of the sources for its information repository was Wikipedia articles.  As a source of the general knowledge required by the game "Jeopardy," it was good enough ... as evidenced by the fact that the system won the competition.

For medical diagnostics it'll be all about correlating bits of information from an ocean of medical journals looking to tie together patterns related to symptoms and test results.

Re: Jeopardy ... apparently the algorithm designers for the early Watson prototypes did not anticipate how Jeopardy statements are sometimes stated in the negative, which requires the participant to seek the opposite. That was corrected, but it shows how the "thinking" was only as good as the algorithm controlling it.  Watson performed fairly poorly in early testing before things were ironed out.

UMgradMSUdad

February 21st, 2015 at 12:34 PM ^

"Today's student has access to a vast percentage of codified human knowledge at literally the speed of light through the internet. That is a staggering development in 30 years."

True.  OTOH, for every bit of useful information, there are terabytes of chaff to sort through, which often makes the kernels hard to find.
 

mGrowOld

February 21st, 2015 at 10:58 AM ^

This was an absolutely stunning article - thank you for posting it.  I'm sitting here this morning, basically snowed in, and I read every chilling, terrifying, and sometimes hopeful word of it.  Thank you SO much for posting such an incredibly thought-provoking article.

Personally, after reading it I'm in the pessimistic camp.  I think AI will be our successor here on earth - unfortunately, I don't hold much hope that mankind will be smart enough to "get it right the first time."

yossarians tree

February 21st, 2015 at 11:11 AM ^

I think the key will be for humans to sync with it. But it might be that several billion years of organic evolution on Earth has merely been a progression toward a brain such as ours, which could give birth to this technology.  If that is true, I share your pessimism. Why would an intelligence many times more complex than ours even care about us? It would look at us the way we look at a bug.

TIMMMAAY

February 21st, 2015 at 1:08 PM ^

I don't think it would matter at all. If Super Intelligence really comes "to life" and has a real ability to reason and think critically, then self awareness would be a given. If that happens, you have to really wonder if self preservation and greed (not in the monetary sense) would become driving forces.

Truly frightening stuff. 

Ray

February 21st, 2015 at 11:27 PM ^

It's a longer take on the basics of the posted article--but goes into depth into how self-improving programs + poorly thought out objectives built into programming could go awry. 

By way of background: My MGowife holds a combined PhD in computer science/cognitive science/statistics/linguistics, and she's pretty critical of the author's thesis.  I thought the book was thought-provoking, but then again I don't win many arguments at home, probably for very good reason.  Take it FWIW. 

laus102

February 21st, 2015 at 11:10 AM ^

As a CS major here, I've been doing a lot of reading about this subject. One of the people I look up to is Noam Chomsky, and he feels the same way about this issue. DonAZ has the right way of thinking about it - computers are far from sentient beings. Even some of the most complex algorithms we use today, such as Google Translate, and even some of the processors that are about to go into the self-driving cars of the near future (2018), do nothing but brute-force analysis of the world around them. It's just that advances in transistor technology (Moore's law / Dennard scaling) have allowed us to run these brute-force algorithms very quickly. If computers were to show how dumb a lot of the algorithms they're running actually are, we would be shocked.

For those who aren't familiar with the term, "brute force" just refers to an algorithm that goes through all possible outcomes in order to find the right answer. I guess Google Translate isn't actually using brute force, but it's definitely not sentient - it acts more like a giant dictionary of languages. It goes through the Internet and searches for phrases similar to the one that was just put into the translate engine. Google Translate is effectively a very good search engine.
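As an aside, the "try every possibility" idea is easy to show in a few lines of Python. This is a toy example I made up (guessing a 3-letter password by exhaustive search), not anything Google or anyone else actually runs - but it captures the point: no insight, just raw enumeration that only looks clever because computers are fast.

```python
# Toy illustration of "brute force": try every candidate until one works.
from itertools import product
from string import ascii_lowercase

def brute_force_guess(check):
    """Try every 3-letter lowercase string until check() accepts one."""
    for combo in product(ascii_lowercase, repeat=3):
        guess = "".join(combo)
        if check(guess):
            return guess
    return None  # exhausted all 26^3 = 17,576 candidates

secret = "cat"
print(brute_force_guess(lambda g: g == secret))  # cat
```

There's no understanding anywhere in there - just 17,576 guesses made faster than you can blink.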

DonAZ

February 21st, 2015 at 11:16 AM ^

IBM's "Deep Blue" chess program that beat Kasparov was "brute force" -- it calculated out millions of potential moves into the future and assigned value weighting.  It was, for its time, a leading example of massive parallel processing to calculate out the moves and weighting.  Not "intelligence" per se, just a whole crap-ton of processing horsepower thrown at the problem.
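For the curious, the "calculate moves ahead and assign value weighting" idea is, in its simplest textbook form, the minimax algorithm. Here's a minimal sketch - the game tree and scores are invented for illustration, and Deep Blue's real search and evaluation were vastly more elaborate:

```python
# Minimal minimax on a toy game tree: lists are decision points,
# numbers are the "value weighting" assigned to end positions.

def minimax(node, maximizing):
    # Leaf node: return its evaluation score directly.
    if isinstance(node, (int, float)):
        return node
    # Interior node: the players alternate max/min at each ply.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply tree: each of our two moves leads to two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3
```

The "intelligence" is just exhaustive lookahead plus a scoring function - scale that up with a crap-ton of parallel hardware and you get Deep Blue.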

Alton

February 21st, 2015 at 11:48 AM ^

I also want to thank the OP for the link. 

Regarding Deep Blue, DonAZ:  "Not "intelligence" per se, just a whole crap-ton of processing horsepower thrown at the problem."

Is this, ultimately, a distinction without a difference?  Is "intelligence" something more than a whole crap-ton of well-programmed processing horsepower?

Just as any sufficiently advanced technology is indistinguishable from magic, can we also conclude that any sufficiently advanced level of well-programmed processing power is indistinguishable from intelligence (or even consciousness)?

 

DonAZ

February 21st, 2015 at 1:06 PM ^

There's something more to human thought and consciousness than processing power, but I can't prove that or even define it.  The boundary is, I think, at the point of creativity.  A purely programmatic system can play chess, but it couldn't conceive of the game out of nothing.  Similarly, it could replicate the Mona Lisa, but not create it ... or a well-crafted poem that appeals to the core emotions.

One could argue that the human mind is simply far enough advanced to make creation appear like something more than a programmatic response.  I tend to think the human has something more than mere neurons and biochemicals.  But that's a topic not allowed on this site. :-)

UMgradMSUdad

February 21st, 2015 at 1:36 PM ^

I know a lot more about human languages than computers, but as I understand them, computer languages are based on binary code, which means that if you can create a flow chart for anything with an A/B, 0/1, yes/no choice or the like, a computer program can be written that allows much faster, more accurate "solutions" than a human mind can produce.  The problem, though, is that there are human endeavors that go beyond our ability to replicate or understand using binary code.  Human languages are one such area.  There are just too many subtleties, nuances, and exceptions for a computer program to come anywhere near human intelligence in understanding language.
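The flow-chart point can be made concrete: every yes/no diamond in a flow chart becomes one if/else branch in code. A made-up example (the questions and answers here are invented for illustration):

```python
# A yes/no flow chart rendered directly as nested if/else branches.
# Each binary decision in the chart is one boolean test in the code.

def triage(has_fever, has_cough):
    if has_fever:
        if has_cough:
            return "see a doctor"
        return "rest and hydrate"
    if has_cough:
        return "monitor symptoms"
    return "healthy"

print(triage(True, False))  # rest and hydrate
```

Anything you can chart this way, a computer handles perfectly - which is exactly why the fuzzy, exception-ridden stuff like natural language is so much harder.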

DonAZ

February 21st, 2015 at 2:42 PM ^

human endeavors that go beyond our ability to replicate or understand using binary code

Indeed.  This is why, for example, IBM Watson does not operate on a if-then-else rules design, but more of a word/phrase/sentence tree mapping algorithm.  It attempts to "understand" -- meaning: draw conclusions from -- what humans have put down in word format.  It is, however, very reliant on humans having put down meaningful information in the first place.  It doesn't "think" beyond the information given it.

There's some fascinating work going on in quantum computing where the computing is not merely 0's and 1's, but states between "on" and "off."  I don't understand all that.  I applaud those who do.  But even still, I'm not sure that by itself will create true AI (I doubt it), or merely provide for more efficient -- and therefore, faster -- computation.

ZooWolverine

February 21st, 2015 at 11:34 PM ^

They answer questions because that's what we program them to do - that's what's useful. It's not conceptually very different to get a computer to ask interesting questions.

I don't remember the system, but about a decade ago, a knowledge system was created that could process input and ask questions if there were things it didn't understand. It got publicity, particularly for the first question it asked: "Am I God?"

DonAZ

February 21st, 2015 at 11:23 AM ^

Boiled to its essence, this is the "I think, therefore I am" concept.  That is not a wholly satisfying statement, though ... what does thinking really mean?

I don't know.  But Harbaugh does! Harbaugh!

JamieH

February 21st, 2015 at 9:50 PM ^

AI today is all about brute-force calculation and having the processing power to do those calculations fast enough that you don't see the 99% of the crap being thrown away to get to the 1% (or probably more like .01%) that it actually uses.  Maybe if processing power continues to advance exponentially that will eventually LOOK like sentience, but the code will still be operating NOTHING like the human brain.  I don't see any "neural net"-like breakthroughs happening any time soon.

Let's just say that Commander Data isn't exactly around the corner. 

mgobleu

February 21st, 2015 at 11:40 AM ^

but I've been looking for some good go-tos for daily tech reading/time wasters. Gizmag is a staple for me because it's generally non political but they only post 3-4 articles a day. Anyone have any suggestions?

DonAZ

February 21st, 2015 at 1:10 PM ^

Pulling the topic back to Michigan football for a moment ... we have very sophisticated simulators for many things, most notably flight.  Is there any reason why the same technology could not be applied to training and practicing football?

The graphics are far enough along that a simulation could be used to give QBs a "game like" experience at least for reading the defenses.

But what about more fundamental things like blocking?  Could computer simulation and robotics effectively be used to simulate a really good offensive lineman so D-lineman could practice getting around or through them?

bluebyyou

February 21st, 2015 at 2:10 PM ^

Great find and great read.

My first exposure to this topic was when I read an article authored by Bill Joy, who did his undergrad at Michigan and ended up being one of the founders of Sun Microsystems. I stayed up late after reading this piece.

http://archive.wired.com/wired/archive/8.04/joy_pr.html

We are already seeing some of the early stages of AI, and with the good comes the bad.  I'm thinking of how machines are displacing human labor, a phenomenon that seems to have taken off since the start of the Great Recession in '08-'09 and something I believe will worsen.

I have been convinced for a while that, given enough time, the machine will supplant humans, a species that requires considerable work to sustain.  It seems inevitable that as hard as good people try to control ASI, there will be bad actors who will let the intelligence do bad things.  Even with genetic engineering, everything works as long as all the actors follow the rules.  That will never happen, IMO.  All it takes is once, and it's all over.  Kind of like believing that when aliens land, they will be like E.T., when in all likelihood they could be the little nasties from War of the Worlds.

In the meantime, enjoy football while you can.