For a LOT of the more nerdy kids out there, we’re betting that many of you pretended you could throw fireballs, move objects by waving your hands and occasionally even tried in vain to channel the Force.
That was all fun, lots of pretending and wishful thinking…wasn’t it?
Not any more.
Thalmic Labs, the company behind Myo, thinks that with the help of a special armband it could make a lot of those things a reality.
‘As a company, we’re interested in how we can use technology to enhance our abilities as humans – in short, giving us superpowers,’ Stephen Lake, co-founder and CEO of Thalmic Labs said.
Myo is a gesture-control armband that registers the electrical activity of your muscle movements, interprets that signal as a command and wirelessly sends it to your phone, television, kitchen or even your personal drone, instantly.
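The pipeline described above (raw muscle electrical activity in, a discrete command out) can be sketched in a few lines of Python. This is purely a toy illustration of the general EMG-classification idea, not Thalmic’s actual algorithm; the channel count, thresholds and gesture names are all invented:

```python
def classify_gesture(emg_window):
    """Map one window of 8-channel EMG samples to a gesture label.

    emg_window: list of 8-tuples of raw sensor readings.
    Computes per-channel RMS activation, then matches the activation
    pattern against crude hand-tuned templates. Purely illustrative.
    """
    channels = list(zip(*emg_window))
    rms = [(sum(s * s for s in ch) / len(ch)) ** 0.5 for ch in channels]
    total = sum(rms)
    if total < 0.5:                       # barely any muscle activity
        return "rest"
    # Which side of the forearm is doing the work?
    inner, outer = sum(rms[:4]), sum(rms[4:])
    if inner > 2 * outer:
        return "wave_in"
    if outer > 2 * inner:
        return "wave_out"
    return "fist"                         # everything firing at once
```

A real device would add filtering, calibration per wearer and a trained classifier, but the basic shape (windowed signal, feature extraction, label out) is the same.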
At that price, why waste time running around a swamp with an 800-year-old, green, raisin-skinned wizard clinging to your back and nagging at you, or visiting some weird old desert hermit, when you can just drop a little cash and skip the middleman?
Let’s just hope people remember to remove it when they’re doing…uh…private stuff involving a lot of gesturing.
We’re pretty sure that this thing can’t help us ‘unsee’ things yet.
We’re always making references to the ‘Robot Apocalypse’ or to all of us being enslaved by ‘our future overlords’ when it comes to our slowly evolving Erector Set-like counterparts. While 30 and 40-somethings stand around and make jokes, robots continue their often awkward baby-steps into being a part of our lives.
But what about the children?
You know…the children forced to oil the joints of those aforementioned ‘future overlords’ so that they can continue their ‘overlording’ of the humans?
Those children won’t be worried because they’ll have grown up with robot friends at school.
Friends like ‘Robovie’.
Higashihikari (sneeze it and it’ll sound just fine) Elementary School in Kyoto began a 14-month experiment just a few days ago in which a new ‘student’ joined the fleshy ankle-biters’ ranks in order to collect data that will help ‘Robovie’ and other tin-men of the future interact more naturally with various people. That way, instead of speaking atomic-age sci-fi robotic phrases like “You will not be needed” or “Exterminate!”, they’ll be sitting us down quietly and gently breaking the news that our enslavement is really for our own good.
Although this isn’t the first time that a robot has been placed in this kind of environment, this will be the longest amount of time that a robot has spent in the harsh, Lord of the Flies-like habitat of the elementary school student.
Not sure about you…but pretty sure that having the ability to create our own bad-ass appendages like He-Man’s Trap-Jaw would take precedence over things like eating…sleeping…everything…well almost everything.
Ivan Owen created a mechanical hand prop and posted it on YouTube. A couple of days later he was contacted by Richard Van As, an amputee and craftsman who admired Owen’s work. Once they put their brains together, they created a prosthetic finger for Richard. After many more prototypes, the two learned of an awesome kid named Liam who was born missing the fingers of his right hand. The two men decided to help Liam out.
A few more prototypes later and “Robohand” was born. Crafted for Liam, it took only a few days for him to get adjusted to using his prosthetic.
Most bare-bones, low-end prosthetics can easily set someone back $600 and take weeks to go through the fitting, customizing and refitting process.
Using a 3D printer, Owen and Van As stripped those weeks down to a matter of hours, and that $600 for an arm that was nothing more than a stick with a glove on the end was whittled down to a prosthetic with individually moving fingers for the pocket change of $20.
During the course of a single day and a couple more twenties? Someone’s eventually going to start tossing out ‘what ifs’.
Next day? Someone’s going to be sporting a grappling hook, a flame-thrower, a buzz-saw, a built-in paintball gun, a slingshot or some kind of ridiculously awesome combination we haven’t even imagined yet.
When John Connor shows up and Skynet goes live, it won’t be the T-1000s we’re worried about.
We’ll be too terrified by something that’s already been here.
And you can tear that cute baby robot picture off the wall of your imagination…because robot babies are about as far as you can get from being ‘cute’.
Because we’re not satisfied with making skeletal robots that look like mechanical grim reapers, the University of California, San Diego has created a ridiculously amazing and disturbingly realistic over-sized one-year-old in order to study the cognitive development of infants.
“Its main goal is to try and understand the development of sensory motor intelligence from a computational point of view. It brings together researchers in developmental psychology, machine learning, neuroscience, computer vision and robotics. Basically we are trying to understand the computational problems that a baby’s brain faces when learning to move its own body and use it to interact with the physical and social worlds.”
As we continue grinning and patting ourselves on the back about our advances in robot technology and march ourselves into our own demise, you can rest assured that the armies of creepy robot babies are just going to keep on smiling that same frightening smile that’ll remind us of ourselves when we were so excited about our accomplishments in robotics.
Until then just keep hitting the replay button and shuddering at Diego-san’s facial expressions.
Harry Potter had one. Frodo Baggins had one. Even Max from Disney Channel’s Wizards of Waverly Place had one.
In fact, just about every single geek on the planet at some point in their life has probably hypothesized about how cool it would be to have some kind of a cape or blanket that you could cover yourself in and become instantly invisible.
Well that might soon become a reality.
While we’re still going to have to keep to our hypothetical invisible scenarios in our grinning heads, it won’t be long until soldiers, special ops agents and even…uh…submarines begin using something called ‘Quantum Stealth’ to get all Predator-like.
Guy Cramer, the president and CEO of Hyperstealth Biotechnology in Canada, is vaguely but loudly declaring that he’s developed an invisibility cloak-like material!
After checking his site and looking at the ‘mock-up’ photos on display, we’re secretly hoping this is a serious technology that’s about to put old-school camouflage in the closet. Poking around online to see if there was ANY hint at what Cramer is developing turned up nothing that actually shows off the technology. He’s claiming that if a soldier were wearing his top secret material you wouldn’t know he was there until you tripped over him.
When the Hall of Presidents attraction opened at Walt Disney World decades ago, the animatronics featured in it floored guests with their life-like movements. Disney became known for its animatronics in other attractions like Pirates of the Caribbean, the Haunted Mansion and others. It was good ol’ Abe Lincoln, though, that got a lot of attention…especially when he stood up.
But that was then.
Recently a video has surfaced on YouTube from a Disney R&D lab in Pittsburgh that hints at what they’ve been working on since then. Imagineers are now literally playing ball with a robot prototype that can track object movement and respond in real-time!
Given that this is just taking its baby-steps at this point, it’s both frightening and amazing to think about what Disney might have in the works for this type of interactivity between robots and park guests.
From the video’s description:
Robots in entertainment environments typically do not allow for physical interaction and contact with people. However, catching and throwing back objects is one form of physical engagement that still maintains a safe distance between the robot and participants. Using an animatronic humanoid robot, we developed a test bed for a throwing and catching game scenario. We use an external camera system (ASUS Xtion PRO LIVE) to locate balls and a Kalman filter to predict ball destination and timing. The robot’s hand and joint-space are calibrated to the vision coordinate system using a least-squares technique, such that the hand can be positioned to the predicted location. Successful catches are thrown back two and a half meters forward to the participant, and missed catches are detected to trigger suitable animations that indicate failure. Human to robot partner juggling (three ball cascade pattern, one hand for each partner) is also achieved by speeding up the catching/throwing cycle. We tested the throwing/catching system on six participants (one child and five adults, including one elderly), and the juggling system on three skilled jugglers.
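The description above name-checks the key trick: a Kalman filter smooths the noisy camera detections and predicts where and when the ball will arrive. Here’s a minimal sketch of that predict/update loop, assuming simple 2D ballistic motion; the noise parameters and structure are invented for illustration, since Disney’s actual implementation isn’t public:

```python
import numpy as np

def predict_landing(observations, dt=1/30, catch_y=0.0, g=-9.81):
    """Estimate where a tossed ball will cross the catch plane.

    Runs a constant-acceleration Kalman filter over noisy (x, y)
    observations, then extrapolates the ballistic arc to find the
    x-position where the ball falls back to height `catch_y`.
    State: [x, y, vx, vy]; gravity enters as a control input.
    """
    F = np.array([[1, 0, dt, 0],          # state transition per step
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    u = np.array([0, 0.5 * g * dt**2, 0, g * dt])  # gravity contribution
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)      # we only observe position
    Q = np.eye(4) * 1e-4                           # process noise (assumed)
    R = np.eye(2) * 1e-4                           # measurement noise (assumed)

    x = np.array([*observations[0], 0.0, 0.0])     # initial state guess
    P = np.eye(4)
    for z in observations[1:]:
        x = F @ x + u                              # predict
        P = F @ P @ F.T + Q
        y = np.asarray(z) - H @ x                  # update with innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P

    # Extrapolate: solve y(t) = catch_y on the descending branch.
    px, py, vx, vy = x
    t_hit = (-vy - np.sqrt(vy**2 - 2 * g * (py - catch_y))) / g
    return px + vx * t_hit, t_hit
```

Feed it a fraction of a second of ball positions and it returns where to put the hand and how long you have to get it there, which is essentially the game the robot is playing.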
Let’s just hope it doesn’t get bored of playing catch with the guests in the parks and decide one day to unbolt itself, head to Cinderella’s Castle and proclaim the Disney parks as the headquarters of our new robotic overlords!
Star Wars uses tractor beams as frequently as newly graduated college kids use U-Haul trailers.
Imagine if, just like in the movies, you could hook up those U-Hauls with a tractor beam instead of trying to get one of those ball-and-cup trailer hook-ups?
That day might not be as far away as it once was, according to research at the Department of Physics and Centre for Soft Matter Research at New York University. Although it’s on a significantly smaller scale than trying to yank the Millennium Falcon into your garage using a flashlight, scientists have recently used a beam of light to pull a particle in a line. While researchers in the field known as ‘soft matter’ have used laser ‘tweezers’ to pull particles along before, using light alone to move something verged on magical.
By varying the relative phase of the two beams, this technique traps the particle in a moving hologram they call an ‘optical conveyor’ which allows ‘bi-directional transport in three dimensions’.
New Scientist explains how projecting the beams in this way creates a pattern of alternating bright and dark regions. By fine-tuning the beam, photons in the bright regions that flow past the chosen particle can be made to scatter backwards, hitting the particle and knocking it on towards the next bright region.
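In one-dimensional sketch form, the ‘optical conveyor’ amounts to two-beam interference. With two coaxial beams of wavenumbers \(k_1\) and \(k_2\) and an adjustable phase offset \(\varphi\) (a simplification that ignores the beams’ transverse structure), the intensity along the axis is:

```latex
I(z) \propto \left| e^{i k_1 z} + e^{i (k_2 z + \varphi)} \right|^2
     = 2\left[\, 1 + \cos\big( (k_1 - k_2)\, z - \varphi \big) \,\right]
```

Bright fringes sit where \((k_1 - k_2) z - \varphi = 2\pi n\), i.e. at \(z = (\varphi + 2\pi n)/(k_1 - k_2)\), so ramping the relative phase \(\varphi\) slides the whole fringe pattern, and any particle trapped in a bright region, along the axis in either direction. That is the ‘bi-directional transport’ the researchers describe.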
Watch the guy in the video explain it in an endearingly enthusiastic nerdy manner…and then explain what he’s talking about to us.
Well BigDog just got out-weirded and out-creeped by DARPA’s newest step toward removing the word ‘human’ from ‘humanity’.
Designed as a part of DARPA’s Robotics Challenge, the robot ‘thing’ in the video above, known as the Pet-Proto, will be let loose in a series of environments designed to replicate the conditions of a natural disaster. Several other teams are working on similar robots to compete in the challenge. They will all be competing to gain access to a more advanced version of the Pet-Proto called the Atlas which will be used in the 2013-2014 live disaster-response event.
We don’t know what’s worse…being trapped in a natural disaster or being saved from one by something that looks like this.
Because there’s not enough tension already between North and South Korea, a company has now developed what’s being hailed as a ‘super gun’ to help keep an eyeball on the demilitarized zone at the heart of the Hatfield/McCoy-style rivalry between the two countries.
The Super aEgis II is one of the most intimidating weapons ever to back up someone’s ‘No Trespassing’ policy. It features a thermal camera and a laser range-finder, and can nail and destroy a human-sized target from almost 2 miles away. Because it’s designed as a modular system, the aEgis II’s ‘gun pod’ can be swapped out and fitted with various other life-destroying joys like surface-to-air missiles or similar goodies yet to be revealed by its manufacturer.
What’s disturbing about the Super aEgis II isn’t that it can destroy a target before the target’s even aware it’s being destroyed…it’s that once Skynet takes over or some 12 year-old hacker decides to add them to their toybox? We’re all in a lot of trouble.
Because some people just can’t get the job done while locked in a room by themselves with some fun magazines or just some mental photography, some genius in China has developed something to help those people out…
The lonely Chinese scientist who created this was probably suffering from Carpal Tunnel Syndrome and couldn’t even hold a tablet that was playing his favorite movies any longer without discomfort.
(Insert your sad-face pervy scientist emoticon here)
Now this once-sad scientist has solved ALL of his problems! This thing even has adjustable controls and a built-in DVD player so you can watch your favorite ‘films’.
Like the krill in Finding Nemo, there’s nowhere for your little swimming future-yous to go but in the perpetually slurping maw of a robot that looks like the original Pong arcade game’s second-cousin from the hills.
Clicking play on that video above will either bring laughter, what some like to call ‘cringy-I-smelled-poop’ face or a look of awe and wonder and possibilities to your precious little faces.
The director of the urology department at Zhengzhou Central Hospital said the machine was being used by infertility patients who are finding it difficult to retrieve sperm the old-fashioned way.
A website selling the machine for $2,800 promotes it by stating ‘it can give patients very comfortable feeling.’
Is this the end of prostitution? As newer versions of this machine hit the market, will the older ones find their way into dark alleys and those fun-smelling booths in the back of porn shops or will they start showing up in brothels to replace human workers as the recession keeps taking a chunk from EVERYONE’S budget?
Only time and enough oddly satisfied customers will tell.
Tech-heavy eyewear has always been something that seems like too-good-to-be-true science fiction. Various accessories promising amazing visuals for your peepers have included everything from those vintage comic-book ads for X-Ray Specs to the recently buzzing Google Glass to quantum HUD displays contained in a single drop of saline dripped onto a contact lens.
Currently in the testing phase, 2AI Labs is developing a pair of glasses that lets you see something early testers have a hard time believing until they actually put these things on…their own veins glowing.
The O2Amp glasses are the creation of neurobiologist Mark Changizi who came up with the idea while studying the development of color vision in primates at CalTech.
So how does this work exactly? Bionics? Special computer-controlled lenses? By bellowing latin phrases and waving a wooden stick?
Nope. Our eyes, using certain filters, are able to do this all on their own. Turns out we just have to amplify the process.
Changizi explains that “color vision evolved to sense oxygenation modulations in the hemoglobin under the skin. Once one understands the connection between our color vision and blood physiology, it’s possible to build filters that further amplify our perception of the blood and the signals it provides.”
There are currently three different filters for the glasses:
– A vein-finder, or oxygenation-isolator, that amplifies perception of oxygenation modulations under the skin (and eliminates perception of variations in the concentration of hemoglobin),
– A trauma-detector, or hemoglobin-concentration-isolator, that amplifies perception of hemoglobin concentrations under the skin (and eliminates perception of variations in oxygenation), and
– A general clinical enhancer, or oxygenation-amplifier, that combines the best features of the first two; it eliminates neither signal (i.e., it retains perception of both variation in Hemoglobin oxygenation and concentration), and only amplifies perception of oxygenation.
Unlike Google’s somewhat infamous video of promises regarding its magical glasses, these amazing goggles are already out in the world, mainly in the medical field, being tested by real people working in a real environment.
The results and feedback from those that’ve worn them? Most are ready to order.
Something straight out of a science fiction story is becoming a reality in Yokohama, Japan right now: regenerative organs.
There have been tons of attempts, theories and even a small handful of groundbreaking studies concerning the regeneration of new organs, veins, tissue and even blood using stem cell research. It often sounds almost fantastical at times, considering the small amount of work that’s actually come out of the field.
Japanese researchers revealed at the International Society for Stem Cell Research last week that they’ve reproduced a liver-like tissue in a dish.
Their findings have yet to be published but there is a lot of buzz taking place on the internet this morning about this news release.
Our imaginations and the media will probably go crazy talking about the possibilities of this breakthrough. The reality is that this is about as crude an example of a regenerated organ as one could possibly get. It’s still got a long way to go.
Using various cell types and what reads like a hipper, less late-night grave-diggy version of Frankenstein, researchers have basically taken human skin cells back to an ‘embryonic state’, reprogrammed them, let them begin to grow, added various other cells to the process and created a very primitive ‘liver bud’, a very early stage of liver development.
As primitive as this ‘liver’ is right now, the tissue does contain blood vessels that worked when the tissue was transplanted under the skin of a mouse.
There’s no doubt where this amazing technology is headed, and its goal of recreating human organs is going to be reached given time.
George Daley, the director of the stem-cell transplantation program at the Boston Children’s Hospital in Massachusetts, was in charge of last week’s session.
“Hello world. I am tony nicklinson, I have locked-in syndrome and this is my first ever tweet.”
With the exception of a mention of something called ‘locked-in syndrome’, this isn’t a ground-breaking or particularly interesting tweet, right?
But after doing a search to find out what ‘locked-in syndrome’ is? It gets very interesting.
The Twitter post came from Tony Nicklinson (@TonyNicklinson), a 57-year-old man who suffered a major stroke seven years ago that left his body completely paralyzed with one small exception…his eyes.
Using special hardware and software that follow his eye movements, Nicklinson is able to use only his eyes to construct his posts to Twitter.
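Eye-typing of this sort usually works on ‘dwell time’: a virtual key counts as pressed once the gaze has rested on it long enough. A toy sketch of that idea, with made-up timing values (the actual hardware and software Nicklinson uses are far more sophisticated):

```python
def dwell_type(gaze_samples, dwell_ms=800, sample_ms=50):
    """Turn a stream of gaze fixations into typed text.

    gaze_samples: sequence of key labels, one per eye-tracker sample
    (e.g. 20 Hz -> sample_ms=50). A key is 'pressed' once the gaze has
    rested on it continuously for dwell_ms. Illustrative only; real
    eye-typing systems add prediction, smoothing and error correction.
    """
    needed = dwell_ms // sample_ms      # consecutive samples required
    text = []
    current, run = None, 0
    for key in gaze_samples:
        if key == current:
            run += 1
        else:
            current, run = key, 1
        if run == needed:
            text.append(current)        # fire once per completed dwell
            run = 0                     # a fresh dwell is needed to repeat
    return "".join(text)
```

The dwell threshold is the whole trade-off: too short and every stray glance types a letter, too long and composing a single tweet becomes exhausting.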
Nicklinson’s main purpose in learning to do this is somewhat heartbreaking and precedent-setting…he wants to die.
Before his stroke, Nicklinson was a doer. He now believes his life is “dull, miserable, demeaning, undignified and intolerable”. He began using Twitter to appeal to the high court that a doctor should be allowed to lawfully end his life.
In less than three days, Nicklinson’s Twitter account has gained over 12,000 followers who are watching this heart-wrenching yet inspiring story unfold.
While many people are still haters of the insanely sky-rocketing advancement of technology right now? Watch the video to see how advanced tech is doing something awesomely weird that simply couldn’t be done before… giving a human being in this condition a voice.
GOOGLE PROJECT GLASS LOOKS AS PROMISING AS MICROSOFT’S VISION OF THE TABLET PC IN 2001…
Google has only released one video and little else on their project for bringing augmented reality to the masses, so it’s hard to cast aspersions on what’s the most vapory of vaporware. That said, I’ll pick apart the video anyway: even with the use of After Effects and the potential to show us anything, their vision of the future seems rather timid.
Like the silly Nintendo Power Glove in Minority Report (far less impressive than Microsoft’s Kinect and ideas in the labs when the movie was made), we get a vision of the future that feels dated before it happens.
The future is not run on Nintendo Power Gloves...
Google’s glasses appear to just be a screen in front of your face with eye tracking. And I don’t mean that in the ‘the iPhone is just a screen you touch’ way. It feels like Microsoft’s attempt at tablets in the early 2000s. They figured your finger would just be a pointing device for Windows. Substitute ‘eye’ for ‘finger’ here and you get a shortsighted vision of the potential for this technology.
Google’s Glass doesn’t do anything different than what we do now. The screen is just in a different place. Think of how the iPad changed the way we interact with software or how Microsoft’s Kinect changed gaming. Augmented Reality could be bigger than all of this.
Touch interfaces took off when you realized that the medium had changed. Google’s Glass doesn’t feel that way. I don’t think they get their medium. My first case in point is the map feature:
How does Google envision using augmented reality to show us a map? They just float a regular map in front of you. Why not lay the map over your field of view and actually show you a path to follow?
Second, let’s look at the trip to the book store. Obviously, Google doesn’t want to scare off brick-and-mortar partners by flashing deals to buy the book elsewhere. But why not use their already solid image recognition technology to hover reviews of the book, or show us augmented publisher information? The same for the concert poster. Make the thing move. Show us what a connected world looks like.
Third, the apps were disappointing. When the girlfriend calls, why not make it look like she’s on the top of the building with him? Don’t just overlay reality, blend it. Why not create artificial elements in real space?
Fourth, show us virtual objects. What does a virtual ebook or magazine look like to read? I’d love to see what Google thinks the future of virtual items will be like with augmented reality. I have to imagine it’s more than a transparent screen.
That said, I’m excited that Google is taking the initiative on this. I’ll leave them with the words of Tom Hardy’s Eames in Inception, “You mustn’t be afraid to dream a little bigger, darling.”
This iBook commercial from the very early Return of Jobs era for Apple demonstrates something very rare for Mac products. Not the telekinetic ability to move physical objects with a mouse or empty your garbage with a click of your OS9 drop down menu.
No, this is an example of one of the last Apple ads where a feature that isn’t actually available is demonstrated. Apple’s recent ads have all religiously opted against metaphoric messages. Instead they’ve highlighted stylized versions of actual usage. Even when Santa is using Siri, everything he does is something a new customer could do right out of the box. Although we aren’t sure if iCal can handle 3.7 billion contacts in one day.