1. [Project RoboPainter] Initial Project Idea

[This post has been moved from the Work page. Last edited: Jan 2018]

What is RoboPainter? RoboPainter is a robot arm capable of drawing and painting shapes. 

Why build RoboPainter? Building RoboPainter combines my passion for robotics, artificial intelligence, and painting. I am also inspired by the annual RoboArt Competition, whose goal is to produce something visually beautiful using robotics. Submissions have been due each April since the competition began in 2016, and I am preparing an entry for the 2018 competition.

How are you building RoboPainter? To build RoboPainter, I will use the uSwift Pro robot arm with a gripper end-effector to hold a pen or brush, combined with an Intel RealSense camera connected to my Linux box for collecting visual images. I will then implement an off-the-shelf algorithm for vision-based, multi-task manipulation using end-to-end learning from demonstration [1]. Following this, I will spend hours training the model on tasks relevant to drawing and coloring shapes. Finally, I will test the algorithm and evaluate how well it performs each task; this may require multiple cycles of modification and re-evaluation. Once that works, I plan to build a modular program that applies transfer learning from one robot to another, so that a model learned on one robot can be used to derive a policy for another robot.
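To make the pipeline concrete, here is a minimal sketch of the kind of image-in, joint-command-out loop I have in mind, written in PyTorch. This is my own illustration, not the architecture or code from [1]; the small CNN-plus-LSTM network, the joint count, and the capture_frame() / send_joint_command() helpers are all assumptions, with the helpers standing in for the RealSense capture and uArm command calls.

```python
import torch
import torch.nn as nn


class VisuomotorPolicy(nn.Module):
    """Minimal CNN encoder + LSTM controller, loosely in the spirit of LfD setups like [1]."""

    def __init__(self, num_joints=4, hidden_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_joints)

    def forward(self, images, state=None):
        # images: (batch, time, 3, H, W) -> joint targets: (batch, time, num_joints)
        b, t = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1)).view(b, t, -1)
        out, state = self.lstm(feats, state)
        return self.head(out), state


def control_loop(policy, capture_frame, send_joint_command, steps=200):
    """Run a trained policy on live camera frames; the two helpers are placeholders
    for the RealSense capture and uArm command calls."""
    state = None
    with torch.no_grad():
        for _ in range(steps):
            frame = capture_frame()                     # expected: float tensor of shape (3, H, W)
            action, state = policy(frame[None, None], state)
            send_joint_command(action[0, -1].tolist())  # send predicted joint targets to the arm
```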

Why this specific method? Groups at CMU, UCF, and UC Berkeley have shown that it is possible to train robots end-to-end, using only raw images as input, to autonomously accomplish a variety of tasks. There are several challenges in applying end-to-end learning to robotics. First, it is data hungry, and collecting data for robotics tasks is expensive; since this is a one-person project, it was important for me to find a data-efficient method for training painting tasks. Second, a robust and resilient control strategy is extremely difficult to hand-craft across multiple manipulation tasks. However, the authors of [1] demonstrated that, through learning from demonstration (LfD), it is possible to train robots with a manageable number of demonstrations. Finally, the code for [1] is available open source, so it can serve as a baseline for this project.


Here is a list of relevant concepts. I will briefly describe each concept over time.

Convolutional Neural Net (CNN)

Recurrent Neural Net (RNN)

Long Short-term Memory (LSTM)

Variational Autoencoder (VAE)

Generative Adversarial Network (GAN)

Autoencoding with Learned Similarity (VAE/GAN)

Neural Autoregressive Distribution Estimator (NADE)

Imitation Learning; Learning from Demonstration (LfD)

Observations of an expert produce a set of demonstration trajectories (sequences of states and actions) that are used either to shape a reward or to derive the desired policy directly, so that the robot reproduces the demonstrated behavior.

Behavioral Cloning

A policy learned by supervised learning, using demonstration trajectories as ground truth to map states directly to actions (see the sketch after this list).

Inverse Reinforcement Learning

A policy learned without direct supervision, by using demonstrations to infer latent rewards or goals and then training the controller under those rewards to obtain the policy.

Visuomotor Learning
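
To illustrate behavioral cloning from the list above, here is a minimal training sketch in PyTorch. It is my own illustration, not the method from [1]: the network size, the state and action dimensions, and the mean-squared-error loss are assumptions chosen only to show the supervised mapping from demonstrated states to demonstrated actions.

```python
import torch
import torch.nn as nn


def behavioral_cloning(demos, state_dim=8, action_dim=4, epochs=50, lr=1e-3):
    """demos: list of (states, actions) pairs of shape (T, state_dim) and (T, action_dim)."""
    policy = nn.Sequential(
        nn.Linear(state_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, action_dim),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    states = torch.cat([s for s, _ in demos])    # stack all demonstration states
    actions = torch.cat([a for _, a in demos])   # matching demonstrated actions (ground truth)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(policy(states), actions)  # fit states -> actions
        loss.backward()
        optimizer.step()
    return policy


# Toy usage with a random "demonstration", only to show the expected shapes.
demo = (torch.randn(100, 8), torch.randn(100, 4))
policy = behavioral_cloning([demo])
```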


References

[1] Rahmatizadeh et al. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. CoRL 2017. [https://goo.gl/XT6jAU]

[2] Larsen et al. Autoencoding beyond pixels using a learned similarity metric. ICML 2016.

[3] Pinto and Gupta. Learning to push by grasping: Using multiple tasks for effective learning. 2016.

[4] Larochelle and Murray. The neural autoregressive distribution estimator. AISTATS 2011.

[5] Pastor et al. Learning and generalization of motor skills by learning from demonstration. IEEE ICRA 2009.

[6] Kingma and Welling. Auto-encoding variational Bayes. ICLR 2014.

[7] Goodfellow et al. Generative adversarial nets. NIPS 2014.