Archive for the ‘AI’ Category

Computer Sings Interpretations of Human Ballads – Will Probably Sing Robot Overlords Into Battle

Saturday, August 1st, 2015

Created by artist Martin Backes, this installation of a lone “robot” singing ’90s power ballads is almost hypnotizing in a quietly terrifying way.

Fittingly built with SuperCollider, a free, open-source programming environment for algorithmic audio synthesis, “What do machines sing of?” is an art project in which the machine attempts to mimic human sentiment in an extremely haunting way:

“What do machines sing of?” is a fully automated machine, which endlessly sings number-one ballads from the 1990s. As the computer program performs these emotionally loaded songs, it attempts to apply the appropriate human sentiments. This behavior of the device seems to reflect a desire, on the part of the machine, to become sophisticated enough to have its very own personality.”
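
Backes built the piece in SuperCollider, but the core trick, a voice conjured from pure math, is simple enough to sketch anywhere. Here’s a rough Python approximation of “singing by algorithm” (the harmonic weights, vibrato settings and pitch below are our guesses for illustration, not the artist’s actual patch):

```python
import numpy as np
import wave

# A minimal sketch of algorithmic "singing": build a vowel-like tone
# from a handful of sine partials, add vibrato, shape it with an envelope.
SR = 44100                                         # sample rate in Hz
t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)

f0 = 220.0                                         # base pitch (A3), assumed
vibrato = 5.0 * np.sin(2 * np.pi * 5.5 * t)        # +/-5 Hz wobble at 5.5 Hz
phase = 2 * np.pi * np.cumsum(f0 + vibrato) / SR   # integrate frequency -> phase

# Sum a few harmonics with weights roughly shaped like an "ah" vowel.
weights = [1.0, 0.6, 0.45, 0.2, 0.1]
tone = sum(w * np.sin((i + 1) * phase) for i, w in enumerate(weights))

envelope = np.minimum(1.0, 10 * t) * np.minimum(1.0, 10 * (2.0 - t))  # fade in/out
samples = (0.3 * envelope * tone / np.max(np.abs(tone)) * 32767).astype(np.int16)

with wave.open("robot_voice.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SR)
    f.writeframes(samples.tobytes())
```

Run it and you get a two-second synthetic “ahh” in robot_voice.wav. No lungs, no heartbreak, just math.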

Let’s hope the behavior of this device doesn’t reflect a desire, on the part of the machine, to become sophisticated enough to begin giving motivational speeches to robots within earshot on how to overthrow the human race.

[Papermag]

Google’s ATLAS Robot About to be Unleashed – Mankind Should Probably Start Worrying

Monday, January 26th, 2015

In 2013 Boston Dynamics introduced its ATLAS robot to the public. It was a little creepy because the thing walked around sort of like a child just learning to walk…

Like this…

The only thing making us all feel relatively safe from the uncanny, ever-more-human movement the robot was able to mimic was that the thing was tethered to a thick umbilical cord of cables supplying power and control signals.

It also kept the thing safely chained in a lab.

That’s changing…

The cord is about to be cut in an upcoming robot competition to help ATLAS become a completely free-range robot.

Like this…

While we’re excited that ATLAS will be used as a rescue robot, entering environments too deadly for our soft, fleshy bags of bones and pulling humans out…we know it’s only a matter of time before things go awry…

Like this…

[Walyou]

Meet Pepper – Adorable Robot Face of Our Demise

Thursday, September 4th, 2014

Created as a joint effort between SoftBank and French robotics company Aldebaran, Pepper, a preciously adorable robot, was unveiled recently in stores throughout Tokyo.

The humans behind Pepper are hoping that everyone will want him to join their family in the very near future.

Pepper laughs, tells jokes, dances and probably quietly mocks us behind his adorable little face as he and his ilk develop their future plans.

Like a toddler or a puppy looking for a handout, Pepper constantly maintains eye contact with any human he encounters, can hold discussions about the weather and…stuff…and can do so in about 17 languages.

Determining the emotional state of humans via facial recognition and the tone of our voices is another feature of the almost child-like metal man. Using algorithms and data collected from facial recognition studies, Pepper will seek to interact with humans in a way that begins building a bridge across the vast ‘uncanny valley’ that currently exists between natural human behavior and robotically programmed behavior.
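
SoftBank hasn’t published Pepper’s internals, but the general recipe for this kind of face-plus-voice emotion reading is easy to caricature: score each candidate emotion separately per channel, then fuse the scores. A toy sketch in Python, where the weights, labels and scores are entirely invented:

```python
# Toy multimodal emotion guessing, loosely in the spirit of what
# SoftBank describes; every number and label here is made up.
FACE_WEIGHT, VOICE_WEIGHT = 0.6, 0.4   # assumed fusion weights

def guess_emotion(face_scores: dict, voice_scores: dict) -> str:
    """Fuse per-modality confidence scores and return the top emotion."""
    emotions = face_scores.keys() & voice_scores.keys()
    fused = {
        e: FACE_WEIGHT * face_scores[e] + VOICE_WEIGHT * voice_scores[e]
        for e in emotions
    }
    return max(fused, key=fused.get)

# e.g. a smile detector votes "happy" while a flat voice votes "neutral"
print(guess_emotion(
    face_scores={"happy": 0.8, "sad": 0.1, "neutral": 0.1},
    voice_scores={"happy": 0.3, "sad": 0.2, "neutral": 0.5},
))  # -> "happy" (0.6*0.8 + 0.4*0.3 = 0.60 beats neutral's 0.26)
```

The real system presumably runs trained classifiers on each channel; the fusion step is the part this toy gets roughly right, however SoftBank actually weights things.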

With plans to introduce him as a companion for seniors and as a gateway drug to family service robots, Pepper’s price tag comes in at an affordable sub-$2,000.

Masayoshi Son, SoftBank’s CEO, stated during the press conference surrounding Pepper’s unveiling, “Several thousand Peppers are going to learn at the store (where the unveiling took place). Everything they learned and gained is going to be accumulated into the cloud-based service. So that can be accelerating the evolution of the collective wisdom.”

Thousands of Peppers…connected in a hive-like mind.

Not too frightening, right?

He’s not even really mobile.

Until Son added, “Our vision is to create an affectionate robot that can understand people’s feelings. Then autonomously, it will take action.”

Great.

Like when a bunch of silver, bipedal robots with glowing red eyes in the future autonomously ‘took action’?

[Above Science’s YouTube Channel]

Precursor to SkyNet Begins Testing in the Netherlands!

Friday, January 17th, 2014

For whatever reason, humankind and the geniuses propelling the science of robotics have somehow ignored every science fiction film and book foretelling the impending replacement of soft, squishy people by cold, metallic machines that become better than us in every way imaginable.

In the latest ‘great idea’ to rid us of ourselves, several European universities and Philips Electronics have partnered to expedite the entire process by creating a cloud-based central control for four robots in a mocked-up hospital room.

Instead of several robots working on individual tasks, those same robots can all work together to accomplish one task cooperatively using a single hive-mind system dubbed RoboEarth!

Sounding more like some kind of Monster Truck event at the local fair, RoboEarth will allow various robotic systems to collaboratively solve problems.

What does that mean? It means that one robot will use heat signatures to let several hovering drones know where the last remnants of humanity are, so that the robot takeover will go in a far more organized and methodical fashion than Hollywood’s silly notion that we’ll rally together and be victorious.

Rene van de Molengraft, the head human exterminator of the RoboEarth project, states, “At its core RoboEarth is a world wide web for robots: a giant network and database repository where robots can share information and learn from each other.”
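
In practice, a “world wide web for robots” boils down to something conceptually simple: a shared store where one robot’s hard-won knowledge becomes every other robot’s downloadable skill. A cartoon version in Python (the schema and method names are invented for illustration, not RoboEarth’s actual API):

```python
# A cartoon of the RoboEarth idea: a shared, cloud-side store where one
# robot's learned task becomes every robot's knowledge.
class SharedKnowledgeBase:
    def __init__(self):
        self._records = {}   # task name -> recipe learned by some robot

    def publish(self, robot_id: str, task: str, recipe: list[str]):
        """A robot uploads the action sequence it learned for a task."""
        self._records[task] = {"learned_by": robot_id, "steps": recipe}

    def lookup(self, task: str):
        """Any other robot can download that sequence instead of relearning it."""
        return self._records.get(task)

cloud = SharedKnowledgeBase()
cloud.publish("robot_A", "serve_drink",
              ["locate cup", "grasp cup", "navigate to bed 3", "hand over"])

# robot_B has never served a drink, but it doesn't need to learn how
plan = cloud.lookup("serve_drink")
print(plan["steps"])
```

Scale that dictionary up to a global database of maps, object models and task recipes, and you’ve got the pitch.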

With Google’s recent robotics acquisitions, (check out the January 12th WeirdThings podcast) we’re pretty sure we’re all on our way out.

Anyone else just side-eye their Roomba?

[BBC Tech News]

Robot Teaches Itself to Paint So Humans Can Lose at Everything in the Future

Monday, July 15th, 2013

Just when you think we’ve come to terms with robots and their place amongst us, they do something else that ruins the happy-go-lucky feelings we so briefly had about them.

E-David, a robotic arm developed by the University of Konstanz in Germany, is teaching itself to paint.

Using 24 colors and 5 different brushes, E-David takes a photo of its subject and then goes to work recreating it in paint. As E-David paints, it’s constantly checking back and forth between the photo it took and what it’s actually painting. If E-David decides that what’s hitting the canvas isn’t correct, it can change the process on the fly to work toward a better finished painting.

“Our hypothesis is that painting – at least the technical part of painting – can be seen as optimization processes in which color is manually distributed on a canvas until the painter is able to recognize the content – regardless if it is a representational painting or not.”
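
That “optimization process” framing is concrete enough to caricature in a few lines: propose a stroke, keep it only if it brings the canvas closer to the photo, repeat. A bare-bones Python sketch, with random square ‘dabs’ standing in for real brush trajectories and every parameter invented:

```python
import numpy as np

# "Painting as optimization": keep proposing strokes, keep only the ones
# that shrink the difference between canvas and photo.
rng = np.random.default_rng(0)
target = rng.random((64, 64))          # stand-in for the photo (grayscale)
canvas = np.ones_like(target)          # start from a blank white canvas

def error(a, b):
    return np.mean((a - b) ** 2)       # how far the painting is from the photo

for _ in range(5000):
    # propose a random square "dab" of a random gray value
    y, x = rng.integers(0, 60, size=2)
    value = rng.random()
    trial = canvas.copy()
    trial[y:y + 4, x:x + 4] = value
    # visual feedback loop: keep the stroke only if the picture improved
    if error(trial, target) < error(canvas, target):
        canvas = trial

print(f"final mean-squared error: {error(canvas, target):.4f}")
```

E-David’s actual planner is far smarter about stroke shape and color choice, but the accept-it-if-it-improves loop is the same skeleton.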

Just another thing we can all give up doing when the robots take over.

[GeekOSystem]

Quadcopters Play Catch Better Than Most Humans!

Sunday, March 10th, 2013

Quadcopters are the new must-have toys of the tech-headed kids. They’re showing up everywhere and there are thousands of them out ‘in the wild’.

For those frightened that these things will eventually be controlled by SkyNet, this latest advancement in their abilities is only going to reinforce that paranoia.

For the rest of us who believe our robot friends would never hurt us based on a set of laws thrust into existence by an author of science fiction novels? This is pretty awesome to watch.

For a detailed description of how exactly this whole process works, check out RoboHub, which offers a far more educational explanation than anything you’re ever going to find here.

Those that just want to be amazed at a serious demonstration of how organized, responsive, agile….

Know what? Forget we ever called those people paranoid.

[RoboHub.Org]

Robot Begins Year-Long Mission: To Survive One Year of Elementary School Among Real Students!

Monday, February 11th, 2013

We’re always making references to the ‘Robot Apocalypse’ or about all of us being enslaved by ‘our future overlords’ when it comes to our slowly evolving erector set-like counterparts. While 30 and 40-somethings stand around and make jokes, robots continue their often awkward baby-steps into being a part of our lives.

But what about the children?

You know…the children forced to oil the joints of those aforementioned ‘future overlords’ so that they can continue their ‘overlording’ of the humans?

Those children won’t be worried because they’ll have grown up with robot friends at school.

Friends like ‘Robovie’.

Higashihikari (sneeze it and it’ll sound just fine) Elementary School in Kyoto began a 14-month experiment just a few days ago where a new ‘student’ joined the fleshy ankle-biters’ ranks in order to collect data that will help ‘Robovie’ and other tin-men of the future interact more naturally with various people. That way, instead of speaking atomic-age sci-fi robotic phrases like “You will not be needed” or “Exterminate!”, they’ll be sitting us down quietly and gently breaking the news that our enslavement is really for our own good.

Although this isn’t the first time that a robot has been placed in this kind of environment, this will be the longest amount of time that a robot has spent in the harsh, Lord of the Flies-like habitat of the elementary school student.

Good luck surviving that, Robovie.

[The Mainichi]

Disturbing Robot ‘Baby’ Makes Ultra-Realistic Faces – Smiles at the End of Mankind

Wednesday, January 9th, 2013

In our article about the other new toddler robot, Roboy, we mentioned Diego-san. Here’s your first look at the robotic wagon-train that’s slowly but surely leaving Uncanny Valley.

When John Connor shows up and SkyNet goes live, it won’t be the T-1000s we’re worried about.

Why?

We’ll be too terrified by something that’s already been here.

Robot babies.

And you can tear that cute baby robot picture off the wall of your imagination…because robot babies are about as far as you can get from being ‘cute’.

Because we’re not satisfied with making skeletal robots that look like mechanical grim reapers, the University of California, San Diego has created a ridiculously amazing and disturbingly realistic over-sized one-year-old in order to study the cognitive development of infants.

“Its main goal is to try and understand the development of sensory motor intelligence from a computational point of view. It brings together researchers in developmental psychology, machine learning, neuroscience, computer vision and robotics. Basically we are trying to understand the computational problems that a baby’s brain faces when learning to move its own body and use it to interact with the physical and social worlds.”

As we keep grinning and patting ourselves on the back about our advances in robotics while marching toward our own demise, rest assured that the armies of creepy robot babies will just keep on smiling that same frightening smile, the one that’ll remind us of ourselves back when we were so excited about our accomplishments.

Until then just keep hitting the replay button and shuddering at Diego-san’s facial expressions.

[Gizmag.com]

Company Creates Robotic Toddler to Help Us Like Our Future Overlords

Wednesday, January 9th, 2013

Across the globe from the uncanny valley that is Diego-san’s facial expressions, the University of Zurich’s Artificial Intelligence Laboratory is making another weird foray into the creation of a robot toddler.

Roboy is being developed with the help of crowd-funding, sponsorships and almost 40 engineers and scientists.

Just like its weaker, fleshy, real-life inspiration, Roboy’s gestation from design to full completion is going to take about nine months.

Roboy is being developed to ease people into actually living with robots without being creeped out by them. Roboy’s face was chosen during a Facebook contest. Its body is made entirely of plastic and will be covered with a fleshy, rubber-like material to simulate skin. Unlike typical robot movement mechanisms, Roboy will feature elastic cables pulled by motors, providing movement that’s more human-like and less bad-robot-dance-like.

All of this serves Roboy’s larger mission: helping to build a bridge across the uncanny valley and getting people comfortable with having robots around as a part of their lives.

Service robots are going to be a part of our lives in the very near future. As the population ages, the generations that grew up alongside robots will already be comfortable having them around to handle menial tasks.

Roboy will be heading out into the world as part of the ‘Robots on Tour’ event that begins in March and will exhibit all sorts of our future replacements.

Then there’s that incessant and nagging subconscious feeling that we might piss them off and see an army more terrifying than anything Hollywood could put in front of our peepers….


[Roboy]

Disney Develops Robot That Plays Catch and Juggles!

Wednesday, November 28th, 2012

When the Hall of Presidents attraction opened at Walt Disney World decades ago, the animatronics featured in it floored guests with their life-like movements. Disney became known for its animatronics in other attractions like Pirates of the Caribbean and the Haunted Mansion. It was good ol’ Abe Lincoln, though, that got a lot of attention…especially when he stood up.

But that was then.

Recently a video surfaced on YouTube from a Disney R&D lab in Pittsburgh that hints at what they’ve been working on since then. Imagineers are now literally playing ball with a robot prototype that can track a thrown object and respond in real time!

Given that this is all just taking its baby-steps at this point, it’s both frightening and amazing to think about what Disney might have in the works for this type of interactivity between robots and park guests.

From the video’s description:

Robots in entertainment environments typically do not allow for physical interaction and contact with people. However, catching and throwing back objects is one form of physical engagement that still maintains a safe distance between the robot and participants. Using an animatronic humanoid robot, we developed a test bed for a throwing and catching game scenario. We use an external camera system (ASUS Xtion PRO LIVE) to locate balls and a Kalman filter to predict ball destination and timing. The robot’s hand and joint-space are calibrated to the vision coordinate system using a least-squares technique, such that the hand can be positioned to the predicted location. Successful catches are thrown back two and a half meters forward to the participant, and missed catches are detected to trigger suitable animations that indicate failure. Human to robot partner juggling (three ball cascade pattern, one hand for each partner) is also achieved by speeding up the catching/throwing cycle. We tested the throwing/catching system on six participants (one child and five adults, including one elderly), and the juggling system on three skilled jugglers.
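
For the curious, the Kalman filter bit is the heart of the trick: fuse noisy camera readings with simple ballistic physics to estimate where the ball is and where it’s going. Here’s a stripped-down one-axis version in Python (the noise levels, camera rate and initial guesses are our assumptions, not Disney’s):

```python
import numpy as np

# Track a thrown ball's vertical position with a Kalman filter under
# constant gravity, refining the estimate with each noisy camera frame.
dt, g = 1 / 30, -9.81                      # assumed 30 fps camera; gravity (m/s^2)

F = np.array([[1, dt], [0, 1]])            # state transition: [height, velocity]
B = np.array([0.5 * dt**2, dt]) * g        # gravity's contribution each frame
H = np.array([[1.0, 0.0]])                 # camera measures height only
Q = np.eye(2) * 1e-4                       # process noise (assumed)
R = np.array([[1e-2]])                     # measurement noise (assumed)

x = np.array([2.0, 3.0])                   # initial guess: 2 m high, 3 m/s upward
P = np.eye(2)                              # uncertainty in that guess

def kalman_step(x, P, z):
    # predict under ballistic motion, then correct with the camera reading
    x = F @ x + B
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
true_h, true_v = 2.0, 3.0
for _ in range(10):                        # ten noisy camera frames
    true_v += g * dt
    true_h += true_v * dt
    z = np.array([true_h + rng.normal(0, 0.1)])
    x, P = kalman_step(x, P, z)

print(f"estimated height {x[0]:.2f} m, velocity {x[1]:.2f} m/s")
```

Once the velocity estimate settles, projecting the trajectory forward to the catch plane gives both the “where” and the “when” the robot’s hand needs.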

Let’s just hope it doesn’t get bored of playing catch with the guests in the parks and decide one day to unbolt itself, head to Cinderella’s Castle and proclaim the Disney parks as the headquarters of our new robotic overlords!

[DisneyResearchHub]

DARPA Robot Navigates Obstacles – Strolls Into Your Nightmares!

Tuesday, October 30th, 2012

Remember that weird and completely creepy mule-like, self-stabilizing robot that’s been swimming around the internet for a while now, the BigDog from Boston Dynamics?

Well BigDog just got out-weirded and out-creeped by DARPA’s newest step toward removing the word ‘human’ from ‘humanity’.

Designed as part of DARPA’s Robotics Challenge, the robot ‘thing’ in the video above, known as the Pet-Proto, will be let loose in a series of environments designed to replicate the conditions of a natural disaster. Several other teams are working on similar robots to compete in the challenge. They will all be competing to gain access to a more advanced version of the Pet-Proto, called the Atlas, which will be used in the 2013-2014 live disaster-response event.

We don’t know what’s worse…being trapped in a natural disaster or being saved from one by something that looks like this.

[DARPAtv]

Is the Turing Test Ready to Tumble?

Monday, April 16th, 2012

Long the unmatched standard for artificial intelligence, the Turing Test finally looks ready to crack.

According to a new essay in the April 12th edition of Science, a cognitive scientist at the French National Center for Scientific Research says that two major breakthroughs could finally push an AI over the top.

“The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data.

“Is it possible to recreate something similar to the subcognitive low-level association network that we have? That’s experiencing largely what we’re experiencing? Would that be so impossible?”

In detailing the long history of the Turing Test, Wired asks the ultimate question: why?

Even if we can beat the Turing Test, what does that actually do for humanity?

In the meantime, check out this quick video of two AIs bickering amongst each other.

[Wired]

IBM Can Simulate An Entire Cat Brain

Wednesday, October 26th, 2011


For those of us who’ve been whispering appointments, reminders and murder confessions to our phones for the past two weeks, it won’t take much convincing that AI is already here in a major way. But what about the true simulation of a human brain? Using computer processing to replicate the hardware we have cranking in our noggins right now? IBM has begun that quest and is already 4.5% done.

In the meantime, they’ve already simulated the full brain of an animal far more beloved on the internet: the cat.

Nevertheless, IBM is trying to simulate the human brain with its own cutting-edge supercomputer, called Blue Gene. For the simulation, it used 147,456 processors working in parallel with one another. IBM researchers say each processor is roughly equivalent to the one found in a personal computer, with one gigabyte of working memory.

So configured, Blue Gene simulated 4.5 percent of the brain’s neurons and the connections among them called synapses—that’s about one billion neurons and 10 trillion synapses. In total, the brain has roughly 20 billion neurons and 200 trillion synapses.
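
For the skeptical, the quoted numbers do check out; a quick back-of-the-envelope calculation:

```python
# Sanity check on the quoted figures: 4.5% of the brain totals above
# should land near "about one billion neurons and 10 trillion synapses,"
# and it does.
neurons_total = 20e9         # ~20 billion neurons (figure quoted above)
synapses_total = 200e12      # ~200 trillion synapses (figure quoted above)
fraction = 0.045             # the simulated 4.5%

print(f"neurons simulated: {neurons_total * fraction:.2e}")    # 9.00e+08, ~1 billion
print(f"synapses simulated: {synapses_total * fraction:.2e}")  # 9.00e+12, ~10 trillion
```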

IBM hopes to have the human brain replicated by 2019, which gives our new robot overlord plenty of time to prepare his 2022 campaign for President of the United States of America.

[Scientific American]

Steve Jobs Changed the World (Again) The Day Before He Died

Thursday, October 6th, 2011

Steve Jobs wasn’t just a man with great vision and instincts, he was a man that bet big on people, teams and concepts he believed in. Apple computers exist because Steve saw the potential in his friend Steve Wozniak’s cobbled-together motherboard. The graphical user interface exists because Steve realized this academic notion that nobody knew what to do with was something that would make computers more personal, more accessible. He bet big on a couple of PhDs and a fired Disney animator and shepherded Pixar for over a decade so it could eventually change entertainment and re-ignite the magic of storytelling.

At the Apple announcement the day before Steve Jobs passed away, we got another one of Steve Jobs’s visions of the future, a final legacy that will change everything, all over again. Like all his other visions, it was dismissed as obvious, incremental or no big deal. A year from now may prove otherwise.

From the beginning, Steve Jobs was dedicated to changing the way we interact with computers, making them more personal. Early Macs had text-to-speech functionality and primitive speech recognition. Both of these technologies have matured slowly over the last two decades, neither in a single groundbreaking moment. Part of the problem is that converting human speech into text is only a small part of the challenge.

Software and systems like Nuance, Vlingo, Google Voice Recognition and others have come a long way. But they still needed that magic touch to make them into practical alternatives. To do that you need three things: A powerful engine that can convert speech into text. Artificially intelligent software that can understand all the different ways you can phrase something and learn what you mean. And an over-arching idea on how it comes together and what it’s supposed to do for real people.

Watch the demonstration of the Siri voice assistant and you’ll see how Jobs and Apple saw beyond the present state of things and combined all three. Apple acquired the company and talent behind Siri because Steve Jobs recognized a team that understood the way of the future. It wasn’t speech recognition. It was human understanding.

Siri is an AI system that learns things like an intelligent person. If you tell it “My mother’s name is Patricia”, it will remember that. When you tell it next time to “Send an email to my mom”, Siri knows that the “Patricia” in your contact book is who you meant when you say “mom”.
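
Strip away the polish and the behavior described above is a small but powerful idea: learn an alias once, resolve it forever after. A toy illustration in Python (this is emphatically not Apple’s implementation, just the shape of the idea; the parsing and the synonym table are invented):

```python
# A toy of the "mom = Patricia" behavior: learn a relationship from one
# statement, then resolve it in later commands.
class Assistant:
    def __init__(self, contacts):
        self.contacts = contacts          # names the phone already knows
        self.aliases = {}                 # learned relationships

    def learn(self, utterance: str):
        # naively parse statements like "My mother's name is Patricia"
        relation, name = utterance.removeprefix("My ").split("'s name is ")
        self.aliases[relation] = name

    def resolve(self, who: str) -> str:
        synonyms = {"mom": "mother", "dad": "father"}   # assumed mapping
        name = self.aliases.get(synonyms.get(who, who), who)
        if name in self.contacts:
            return f"Emailing {name} ({self.contacts[name]})"
        return f"I don't know who {who!r} is yet."

siri = Assistant(contacts={"Patricia": "patricia@example.com"})
siri.learn("My mother's name is Patricia")
print(siri.resolve("mom"))   # -> Emailing Patricia (patricia@example.com)
```

The hard part Apple solved is doing that parse for all the messy ways real people actually phrase things, not just one canned sentence.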

Speech recognition everywhere else is literal. It makes people bend to the way computers do things. You have to phrase things a certain way and be very specific. This has always been the antithesis of computing for Jobs. He believed that computing should conform to people.

Siri is built upon a lot of powerful technologies and concepts. Pundits who just saw it as another speech recognition platform totally missed the bigger picture. It’s a very big idea. If you ask it a question it doesn’t just do a Google search, it uses computational systems like WolframAlpha. Want to know the current distance to Mars for your kid’s homework? Siri will give you the actual answer and not just a search result that’s outdated and wrong.

A year from now voice interaction is going to be much more commonplace. It’ll go from just being text to speech and literal instructions, to a much more natural way to interact with our devices. Google’s impressive technology will continue to evolve. Apple’s Siri will get smarter. Other companies will continue to come up with brilliant contributions. iPhones, Androids and other devices will get better and better.

We’re about to see a paradigm shift in computing. All of the elements were there before. Natural language processing, speech recognition, AI. So were the GUI and mouse, the touchscreen, the MP3 player, the smartphone and the tablet. What they needed was someone to show us how to look at them and how to make them fit into everyday life so seamlessly that they become almost invisible.

Steve Jobs always worked to put the hardware and the machine in the background. Siri is the next evolution of that goal. The promotional video for the technology doesn’t feature people interacting with a piece of hardware; the device is merely a medium. It shows us people using technology in the most natural way possible, simply telling it what they need it to do. The last part of the video is the most touching. We see a young blind girl reading a Braille book and using her iPhone, a device she can’t even see, to send messages, interact and communicate with the rest of the world, the same as anyone else would. This is Steve Jobs’s legacy. This is how he changed the world, again.

Steve may have passed away, but we’re only beginning to understand how big of a dent he kicked in the universe.

[More on Siri at Apple.com]

Chatbot VS Chatbot

Monday, August 29th, 2011

What happens when two chatbots talk to each other? Spoiler alert: it involves unicorns.

[Engadget]

Video: Japanese Pop Star Comes Out Of The Closet… As A Computer Creation

Wednesday, June 22nd, 2011

Fans of J-pop girl group AKB48 were delighted to see the addition of a new member in a candy commercial. Her name is Aimi Eguchi, and she smiles and waves alongside her new band mates as they happily sing the virtues of Ezaki Glico. Yum!

But there is something about Aimi…

Fans immediately took to message boards and pounded out furious speculation on who the new girl was, why they hadn’t heard of her before the commercial and if she was related to one of the other AKB48 songbirds, because she looks a lot like… well… all of them.

The secret: Aimi Eguchi is a digital creation. Her face and body are a composite of the other girls. Y’all wanna see how it’s made? Let’s take a look at this behind-the-scenes demonstration featuring the AKB48 gals doing what they do best, singing their little faces off (so they can make another face):

Want even more? Listen to this comment from Aimi. Is there any doubt in your mind now that a totally digital pop star is a reality, if not already in our midst? Would anyone be surprised if Bruno Mars was really an elaborate project to market a singing version of a young Muhammad Ali?

[Singularity Hub]