OT: Google creates artificial "imagination".

Submitted by Chris-sirhC on July 31st, 2015 at 2:12 PM

Artificial neural networks have been the dominant model for "machine learning" for quite a while now. They are essentially computer code inspired by the way the central nervous system works in animal brains. For more information on artificial neural networks, see https://en.wikipedia.org/wiki/Artificial_neural_network

Well, Google recently created a "Deep Learning" algorithm based on research into artificial neural networks and "trained" it with huge databases of pictures of mammals, people, buildings, cars, etc. For example, the network is shown a thousand pictures of a duck and told repeatedly that this is a duck. When it learns the million things that make a duck distinctly a duck, they move on to other objects. Rinse and repeat, rinse and repeat. Once a large database of information is established, they can ask the network to carry out different tasks with its newly acquired information. To quote Google's blog (http://googleresearch.blogspot.ie/2015/06/inceptionism-going-deeper-into-neural.html):
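For the curious, the "show it pictures and tell it the answer" loop above can be sketched in a few lines of plain Python. This is a deliberately toy version: the "pictures" are made-up two-number feature vectors, the "network" is a single artificial neuron, and none of it comes from Google's actual code — it just illustrates the supervised-training idea.

```python
import math
import random

random.seed(0)

# Toy stand-ins for image features: each "picture" is a pair of numbers,
# and the label says whether it is a duck (1) or not (0).
ducks     = [(0.9, 0.8), (0.8, 0.9), (1.0, 0.7)]
non_ducks = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)]
data = [(x, 1) for x in ducks] + [(x, 0) for x in non_ducks]

# A single artificial "neuron": a weighted sum squashed into (0, 1).
w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

# "Told repeatedly that this is a duck": show every example many times
# and nudge the weights toward the correct answer (gradient descent).
for _ in range(1000):
    for x, label in data:
        err = predict(x) - label
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b    -= 0.5 * err

print(predict((0.85, 0.85)))  # duck-like input  -> close to 1
print(predict((0.15, 0.15)))  # non-duck input   -> close to 0
```

Google's real networks do this with millions of weights across many layers instead of one neuron, but the "guess, compare to the label, adjust" cycle is the same.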

 

"Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations. If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere."
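The feedback loop Google describes boils down to gradient *ascent* on the image itself: measure how strongly a layer responds, then nudge the pixels in whatever direction increases that response, and repeat. Here is a minimal pure-Python sketch of that loop. Everything in it is a stand-in I made up for illustration — the "layer" is just a fixed random linear map, not a trained network — but the amplify-what-you-see mechanic is the same one the quote describes.

```python
# Toy version of the DeepDream feedback loop: repeatedly nudge the
# "image" so that whatever the layer detects gets amplified.
import random

random.seed(1)

N = 8                                   # a tiny 8-pixel "image"
image = [random.uniform(-0.1, 0.1) for _ in range(N)]
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def layer(x):
    # One linear layer standing in for "whatever the network detected".
    return [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]

def activation_energy(x):
    # How strongly the layer is responding overall.
    return 0.5 * sum(a * a for a in layer(x))

# "Whatever you see there, I want more of it!": gradient ASCENT on the
# layer's response. For 0.5*||Wx||^2 the gradient w.r.t. x is W^T(Wx).
start = activation_energy(image)
for _ in range(20):
    a = layer(image)
    grad = [sum(W[i][j] * a[i] for i in range(N)) for j in range(N)]
    norm = max(abs(g) for g in grad) or 1.0
    image = [x + 0.05 * g / norm for x, g in zip(image, grad)]

end = activation_energy(image)
print(end > start)  # each pass makes the layer "see" its pattern more
```

Swap the random linear map for a deep trained network and the faint pattern being amplified becomes "a cloud that looks a little bit like a bird."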

 

More recently, they released the code (http://googleresearch.blogspot.ie/2015/07/deepdream-code-example-for-visualizing.html) so people could see what the trained neural networks were seeing in any image they wanted.

Inevitably, the internet has done awesome things with it so far, and that is what this post is about. This app (https://dreamscopeapp.com/create), created by a user on reddit, lets users upload any image they want and run it through the "Deep Dream" algorithm. The results are creepy, awesome, trippy, and eerily resemble what MANY LSD users report seeing while on acid (seriously, they seem amazed by the similarity). I took the liberty of running a few pictures through the algorithm and posted the results. Feel free to do the same with your own pictures, or simply discuss the topic at hand.

Are our brains on LSD essentially treating visual input exactly the way this neural network perceives still images? Is this computer taking a picture and letting its imagination run wild, similar to how a child looks at clouds in the sky? Are our brains actually just an advanced artificial intelligence?

I'm sure it will be a good time. Enjoy. Happy Off-Season.

 

Comments

umbig11

July 31st, 2015 at 3:54 PM ^

We have some "Cyber Machine Learning Algorithms" that were developed about 5 years ago. They have become one of the hottest products in our market.

Sac Fly

July 31st, 2015 at 3:49 PM ^

There's a lot of good stuff in there, but the author seemed to go down the path most go down when trying to picture HLMI (human-level machine intelligence), which is to imagine a really, really, really fast computer.

That's where most people stray, because the difference between a supercomputer and what you would call HLMI is consciousness.

We don't even know what consciousness is, let alone how to create it and put it in a form for it to exist. That's why I believe we still have a hundred years or so before our HLMI overlords wipe us off this rock.

Michigan Arrogance

August 1st, 2015 at 9:21 AM ^

I don't think consciousness is required for super intelligence - just a very good recursive learning algorithm + processor speed + energy + memory.
 
We already have processor speed exceeding that of biological brains (by 9+ orders of magnitude). We likely already have memory that exceeds biological brains (though not at a price point that makes it likely we develop HLMI in less than 20 years). I think the learning algorithm is inevitable and likely very soon (10-20 years?) to be written. The key will be energy efficiency - the 20 W the human brain uses is minuscule compared to the megawatts a machine would require.
 
I was unconvinced that we should worry about this based solely on the consciousness and machine morality arguments, but that article negated both of those for me. The scariest part is not that the machines would likely kill us all (amorally or not); it's that either way it goes ("good" for humans or bad), it will lead to a fundamental change in how life is lived (and defined, really) on this planet. So significant a change that we could barely recognize it as life as we know it today.
 
Also, I am now firmly in the camp that alien intelligent life greater than ours does not exist.
 
 

bluebyyou

July 31st, 2015 at 4:02 PM ^

I was just about to post a link to that piece...actually two articles.  Great stuff, just shouldn't be read before you go to bed.

Here's the first item that caught my attention: a piece written a while back by Bill Joy, a founder of Sun Microsystems and a highly respected computer scientist, as well as a Michigan alum:

http://archive.wired.com/wired/archive/8.04/joy_pr.html

Sac Fly

August 1st, 2015 at 5:09 AM ^

Not a ton of thought/research put into that section. A doomsday scenario involving self-replicating nanotechnology is extremely unlikely. So much so that Eric Drexler, the guy who coined the phrase, has retracted most of his hypothesis.

First, there isn't really a reason for it to exist in the first place. A process that works properly would be extremely inefficient, because it would need a massive fuel supply to meet demand for an endless number of machines.

If the fuel were to run out the machine would either die or consume itself. The result would be a machine that's very complex, expensive and non-renewable.

In short, it's primitive technology that won't destroy the earth because it won't be built.

Blazefire

July 31st, 2015 at 3:02 PM ^

This is not creativity. This is standard deductive reasoning. The computer simply compares images provided to a vast library of images it knows are things, finds the most similar class of thing, and enhances those qualities. Creativity is representing something you've never seen before, or altering it in a way that is counter to reasoning.

It's cool, but the computer is attempting to turn something it doesn't recognize into something it does. That's like, the opposite of creativity.

Sent from MGoBlog HD for iPhone & iPad

pescadero

July 31st, 2015 at 3:40 PM ^

This isn't necessarily true.

 

Things like self modifying code and evolutionary programming can very much lead to a computer processing information in a very different way than humans programmed it.

 

There is also the argument that creativity IN humans is merely an algorithmic process.

 

For some commercial examples of machine creativity see: 

 

EMI (David Cope)

Iamus

Jape (pun generator)

Song of the Neurons (musical work created by neural net culling/modifying self created hooks based on visually observing listener reactions)

Shimon (robot capable of musical improvisation)

Chris-sirhC

July 31st, 2015 at 3:49 PM ^

"There is also the argument that creativity IN humans is merely an algorithmic process."

I almost went down that path but didn't want to open that can of worms. But I agree completely. There are forms of creativity that require a mind to use information it has already gathered to create something new (like this network is doing). Cloud watching would be an example.

Chris-sirhC

July 31st, 2015 at 3:32 PM ^

Of course it's not creativity, nobody said it was. Your last paragraph hit the nail on the head in that respect. Nobody was insinuating that computers could now create works of art. I thought I went into the details on how it works enough in the OP, but thanks for clarifying.

It IS cool though, and in my opinion it is revolutionizing computing. With technology like this, the standard CAPTCHA on websites will not be enough to distinguish man from computer anymore. They will have to start creating CAPTCHAs that tie into human emotion or something of that nature.

goblue81

July 31st, 2015 at 6:06 PM ^

So, essentially they computerized Hunter S. Thompson's psyche. But in all seriousness, all this really amounts to is a randomized filter or template. I mean, in Photoshop I can take a picture of Bo Ryan and apply ripples, or I could write a script to open pictures in Photoshop and, via RNG, select random filters and settings to modify a picture. It's not really imagination. Imagination is a blank canvas that evolves into something tangibly unique.

maineandblue

August 1st, 2015 at 1:43 PM ^

As a psychologist with a neuroscience background, I'm not impressed. We're still at a very early stage of understanding how the brain works...and this is certainly not how it works.