Aquila running multiple time-scales recurrent neural network on iCub humanoid robot

This video demonstrates how Aquila trains and runs multiple time-scales recurrent neural networks used to control the humanoid robot …


15 Responses to Aquila running multiple time-scales recurrent neural network on iCub humanoid robot

  1. JPxKillz says:

    Stfu dude. (Must admit I did laugh a little though.)

  2. huntmatuk says:

    GPU seems to be the way to go then? I have heard this from people who work on computational fluid dynamics.

  3. Erin Viera says:

    I'm not so sure robots being able to learn is such a great idea; frankly, it's scary as hell. It's magnificent what you people have accomplished, but did you ever stop to think whether you should? But what do I know. I see the pros and really fear the cons. Good luck, and make sure you mass-produce personal EMPs before you mass-produce learning robots lol

  4. Martin Peniak says:

    Hi. 1. Yes. 2. You can think about it that way; the sequences recorded during the demonstration were used for backpropagation training. 3. The Aquila toolkit; an improved version 2.0 is coming out in a few days. Google "Aquila cognitive robotics toolkit" or visit the Facebook page. 4. All of it runs on an external GPU computer. 5. Both; please visit my website listed above. Thanks for your interest!
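
    A minimal sketch of the training setup described in answer 2: the recorded demonstration sequences become next-step prediction targets for a recurrent network trained by backpropagation through time. PyTorch and every name and size here are illustrative assumptions, not Aquila's actual GPU implementation.

    ```python
    # Toy next-step prediction on recorded joint sequences (BPTT).
    # All names, sizes, and the use of PyTorch are illustrative assumptions.
    import torch
    import torch.nn as nn

    seqs = torch.randn(8, 100, 16)   # 8 demonstrations, 100 steps, 16 joints

    rnn = nn.RNN(input_size=16, hidden_size=64, batch_first=True)
    readout = nn.Linear(64, 16)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()),
                           lr=1e-3)

    for epoch in range(200):
        hidden, _ = rnn(seqs[:, :-1])        # states for steps 0..T-1
        pred = readout(hidden)               # predicted joints for steps 1..T
        loss = nn.functional.mse_loss(pred, seqs[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```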

  5. vortexZXR says:

    Hello. 1) Are you training the robot by example (demonstration)? If so, then isn't this imitation learning? 2) What cognitive architecture are you using, if any? How do you represent knowledge? 3) Is the robot just the embodiment? Are you running the A.I. on a computer? 4) Do you have any more videos or a paper written on those experiments?

  6. kingcole219 says:

    grab dick go up and down a few times then put it on repeat.

  7. reptile202 says:

    Can he learn without the wires plugged into his back?

  8. Photon98 says:

    By watching this video, I can imagine how robots will learn things from us, and that will surely change the way we live!

  9. Martin Peniak says:

    iCub's proprioception is represented via a self-organising map (SOM) that is used to activate the input neurons. The network then steps, and its output depends on the MTRNN's current state, which is the result of its previous activations over time. The MTRNN predicts the next movement and generates new SOM activity, which is converted back into joint positions. Your questions are very good, including the one about softer material; we have thought about using feedback from force sensors to tackle that problem.
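
    A minimal sketch of that perception-action loop, with made-up shapes and helper names (read_joints, send_joints, and the mtrnn object are hypothetical placeholders, not Aquila's API):

    ```python
    # One control step of the loop described above:
    # joints -> SOM activity -> MTRNN -> predicted SOM activity -> joints.
    # Shapes and helper names are hypothetical, not Aquila's API.
    import numpy as np

    def som_activation(joint_angles, som_weights, sigma=0.1):
        """Population-code the current joint configuration: each SOM unit
        fires in proportion to its closeness to the joint vector."""
        dists = np.linalg.norm(som_weights - joint_angles, axis=1)
        act = np.exp(-dists ** 2 / (2 * sigma ** 2))
        return act / act.sum()               # normalised MTRNN input

    def decode_joints(som_activity, som_weights):
        """Map predicted SOM activity back to joint positions as an
        activity-weighted average of the SOM reference vectors."""
        return som_activity @ som_weights

    # x = som_activation(read_joints(), som_weights)   # read_joints: placeholder
    # y = mtrnn.step(x)                                # mtrnn keeps its own state
    # send_joints(decode_joints(y, som_weights))       # send_joints: placeholder
    ```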

  10. Martin Peniak says:

    Thank you, I was quite happy with the GPU speedup too; however, there is much more scope for optimisation. I created one big MTRNN that learned all of those actions, which is possible via parametric bifurcation. A CTRNN could do a similar job, but I preferred the MTRNN since Jun Tani showed that it lets the network self-organise and segment motor primitives in its fast-firing neurons, while the slow-firing neurons can combine these primitives to create novel sequences of previously unlearned behaviours.
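
    A minimal sketch of the multiple-timescales idea, assuming the standard discrete-time leaky-integrator update used in CTRNN/MTRNN models; the layer sizes and time constants below are illustrative:

    ```python
    # Discrete-time leaky-integrator update shared by CTRNN/MTRNN units:
    # u(t+1) = (1 - 1/tau) * u(t) + (1/tau) * (W @ tanh(u(t)) + x(t)).
    # In an MTRNN, fast units (small tau) hold motor primitives while slow
    # units (large tau) sequence them. Sizes and constants are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n_fast, n_slow = 40, 10
    n = n_fast + n_slow

    tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 50.0)])
    W = rng.normal(scale=0.1, size=(n, n))   # recurrent weights
    u = np.zeros(n)                          # membrane potentials

    def mtrnn_step(u, x):
        y = np.tanh(u)                       # firing rates
        return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + x)

    for t in range(100):                     # free-running dynamics
        u = mtrnn_step(u, x=np.zeros(n))
    ```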

  11. CodeJeffo says:

    How does it work then? Did you create one huge MTRNN, or is it a network of networks, one per learned skill? Can you store one learned skill in the MTRNN and, on command, ask this robot to repeat it, let's say 5 times …? Does your iRobot (sorry 😛) sense that an object was touched? How is the pressure on the object controlled? What if I create the same box (size, colour) but make it from a delicate material? The GPU acceleration looks really impressive. Congrats!

  12. Martin Peniak says:

    Thanks for your comment. I am currently extending the model with a biologically inspired vision system that uses a log-polar transformation, so in this video there is no vision yet, just proprioceptive feedback. The purpose of the video was to demonstrate the speedup when using GPUs, as well as to show some of my preliminary tests with the MTRNN, which learned 8 different sequences. A simple recurrent neural network could not do this, since you need changing activity over time :)
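
    A minimal sketch of a log-polar transform, the foveated image mapping mentioned above, using OpenCV's warpPolar on a synthetic frame; this is a generic illustration, not the actual vision front-end being built:

    ```python
    # Log-polar mapping of a synthetic frame with OpenCV: resolution is
    # high near the fixation point (fovea) and coarse in the periphery.
    # Generic illustration only, not the actual Aquila vision front-end.
    import cv2
    import numpy as np

    img = np.zeros((240, 320, 3), dtype=np.uint8)    # stand-in camera frame
    cv2.circle(img, (200, 120), 40, (0, 255, 0), -1)

    h, w = img.shape[:2]
    center = (w / 2, h / 2)                          # fixation point
    max_radius = min(w, h) / 2

    logpolar = cv2.warpPolar(img, (w, h), center, max_radius,
                             cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR)
    ```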

  13. CodeJeffo says:

    Hi Martin. Does your robot generalize captured knowledge? I mean, what will happen if you change the colour or size of the box, or just the distance? And what is the advantage of using an MTRNN rather than just a simple kind of recurrent ANN? Thank you and good luck!

  14. Martin Peniak says:

    Nothing needs to be stored in memory apart from the neural network itself, which involves a list of floating-point numbers representing thousands to millions of synaptic connections between neurons. This also includes storing the connection weights of the self-organising map that is part of the MTRNN system. Yes, iCub has a dual-core computer (PC104) running Linux, which, as you said, is directly connected to a server. This server connects to many other servers with GPU cards.
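
    A minimal sketch of that storage point: the learned "knowledge" is just the weight arrays, so persisting the model means persisting floats (the file name and array sizes here are made up):

    ```python
    # The learned "knowledge" is just weight arrays, so persisting the model
    # means persisting floats. File name and array sizes are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    mtrnn_weights = rng.normal(size=(500, 500)).astype(np.float32)  # synapses
    som_weights = rng.normal(size=(100, 16)).astype(np.float32)     # SOM vectors

    np.savez("model.npz", mtrnn=mtrnn_weights, som=som_weights)     # save all

    restored = np.load("model.npz")                                 # reload
    assert np.allclose(restored["mtrnn"], mtrnn_weights)
    ```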

  15. Ryuuken24 says:

    If you use a GPU to do most of the thinking work, what about memory? Does it save every motion, I/O command, visual input, and motor signal to a hard drive? If it has learned a certain task, can it perform the same task when asked, by simply accessing the stored information on a random basis? Does iCub have an onboard computer, or is the robot linked directly to a server-like computer, since you can apply GPU and CPU work? Do you think a biped robot benefits from having a dual-core CPU? Thank you!
