"Rodrick Williams Jr.'s 10-month-old, 2-foot-long savannah monitor, named 'Kill,' gets the RB some strange looks when they go for walks together."
The Ashley Madison hack is the gift that keeps on giving (unless you were a member). Turns out there are a lot of .edu email addresses registered on the site. This site ranked the top 10 .edu addresses.
There you go, Dantonio, you finally got that #1 ranking. #SpartansWill cheat.
With only a few weeks left, and football news still coming in pretty slow, I'd like to start one of the last OT threads of the summer.
The Mgoblog community is diverse and intelligent, with lots of great interests. So my question: what geeky thing are you into? It can be games, books, movies, cars, a historic timeframe, some collectible you're into.
Here are some of mine:
I played D&D growing up with friends and brothers. I actually highly recommend any parents with kids support this, as there is lots of storytelling and character development in the gameplay. We were actually able to play some last year.
X-Wing Miniatures: a tabletop game with models where you simulate a dogfight between Star Wars spaceships. I've recently gotten into this, and I'm currently obsessed.
Reading: I've read a lot of books, though some are probably not considered as nerdy or geeky now that their popularity has grown: Ender's Game, the X-Wing series, the A Song of Ice and Fire series (GoT), Harry Potter.
This can also apply to Michigan Football, as I'd consider WolverineDevotee, Mathlete, and WolverineHistorian to have taken their fandom to geek status with the amount of knowledge and devotion they've brought to their hobby.
Artificial neural networks have been a leading model for "machine learning" for quite a while now. They are essentially computer code inspired by the way the central nervous system works in animals (brains). For more information on artificial neural networks, you can go to: (https://en.wikipedia.org/wiki/Artificial_neural_network)
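To make the idea concrete, here is a minimal sketch of what such a network looks like in code: a few "neurons" that sum weighted inputs and pass the result through a squashing function. The layer sizes, weights, and input values here are made up purely for illustration; real networks are vastly larger and have their weights learned rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashing "activation" function, loosely analogous to a neuron's firing rate.
    return 1.0 / (1.0 + np.exp(-x))

# Random weights connecting 3 inputs -> 4 hidden "neurons" -> 1 output.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    hidden = sigmoid(x @ W1)    # each hidden neuron sums its weighted inputs
    output = sigmoid(hidden @ W2)
    return output

x = np.array([0.5, -1.0, 2.0])
print(forward(x))  # a single value between 0 and 1
```

That's the whole trick at its smallest: stacked layers of weighted sums and nonlinearities. Everything interesting comes from how the weights get adjusted, which is what "training" means.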
Well, Google recently created a "Deep Learning" algorithm based on research done with artificial neural networks and "trained" it with huge databases of pictures of mammals, people, buildings, cars, etc. For example, the network is shown a thousand pictures of a duck and told repeatedly that this is a duck. When it learns the million things that make a duck distinctly a duck, they move on to other objects. Rinse, repeat and rinse, repeat. Once a large base of learned information is established, they can ask the network to carry out different tasks with its newly acquired knowledge. To quote Google's blog here: (http://googleresearch.blogspot.ie/2015/06/inceptionism-going-deeper-into-neural.html)
"Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations. If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere."
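The feedback loop Google describes in that quote can also be shown in miniature: freeze the weights and adjust the *image* instead, climbing the gradient so that whatever the chosen layer already responds to gets amplified. The "image" below is just an 8-element vector and the "layer" a single random linear+ReLU layer rather than a trained vision model, so this is only a sketch of the mechanism, not the DeepDream code itself.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 8))   # frozen "layer" weights (stand-in for a trained net)
img = 0.1 * W[0]              # start with a faint input that one filter responds to

def layer_activation(x):
    return np.maximum(W @ x, 0.0)  # ReLU activations of the layer

before = layer_activation(img).sum()

for _ in range(100):
    act = W @ img
    # Gradient of sum(relu(W @ img)) with respect to the image itself:
    grad = W.T @ (act > 0).astype(float)
    img += 0.01 * grad        # "Whatever you see there, I want more of it!"

after = layer_activation(img).sum()
print(before, "->", after)    # the layer's response grows with each pass
```

Each pass makes the input look a little more like what the layer detects, which makes the layer respond more strongly on the next pass; on a real trained network, run over a photo, that same loop is what conjures the birds out of clouds.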
Recently, they released the code (http://googleresearch.blogspot.ie/2015/07/deepdream-code-example-for-visualizing.html) so people could see what the trained neural networks "see" in any image they want.
Inevitably, the internet has done awesome things with it so far, and that is what this post is about. This app (https://dreamscopeapp.com/create), created by a user on reddit, lets users upload any image they want and run it through the "Deep Dream" algorithm. The results are creepy, awesome, trippy, and eerily resemble what MANY LSD users report seeing while on acid (seriously, they seem amazed by the similarity). I took the liberty of running a few pictures through the algorithm and posted the results. Feel free to do the same for your own pictures, or simply discuss the topic at hand.
Are our brains on LSD essentially treating visual input exactly how this neural network perceives still images? Is this computer taking a picture and letting its imagination go wild? Similar to how a child looks at clouds in the sky? Are our brains actually just an advanced artificial intelligence?
I'm sure it will be a good time. Enjoy. Happy Off-Season.
The trailer for the latest 007 installment has been released. Daniel Craig reprises his role as Bond in Spectre. The movie is scheduled for release on November 6.
Missed last week, sorry about that. Anyway, I know we're all thrilled to see where the swoosh goes, but back to important topics:
What is your next car purchase? Why?
Could be a beater for the kids in high school, that new 'Slade to tow your boat, etc. etc. What's next?
So, late again. My mistake. This week - what was your favorite car advertisement of all time? Promo? Print? TV? Radio? Was it great because the car was great? Because the ad was outstanding?
I have a few that come to mind. I really enjoyed the Honda Rube Goldberg commercial, but the one out right now that gets me is embedded in the first comment.