2. Reflecting on My Time at a Korean Startup, and What's Next

Last June, I was fortunate to meet three co-founders of CMES, a computer-vision company specializing in inspection and scanning. When I met them, they had already done industrial-automation work with very well-known global companies and were expanding into other industrial robotics applications. In August 2018, I happily joined as a robot vision engineer.

1. What I did

Over the last few months, I designed and developed software that, given a structured point cloud from an RGBD image, returns the detected objects' positions in the camera's coordinate frame. The software can also return the objects in a pre-defined picking order. It only needs a JSON file describing 2-dimensional top-view arrangements of objects, with each object or group of objects numbered in the order it should be picked. The program compares these arrangements with the positional pattern of the detected objects and chooses the appropriate model to define a picking sequence for each identified object or group of nearby objects. My program is used by the main server, which captures an image and sends positional and grasping commands to the robot through a PLC.
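As a rough illustration of the pattern-matching and ordering step described above, here is a minimal C++ sketch. It is not the production code: the struct names, the scoring heuristic, and all values are hypothetical, and the actual JSON parsing is omitted.

    // Hypothetical sketch (not the production code): match detected object
    // positions against pre-defined top-view patterns and return them in
    // picking order. All names and values are illustrative only.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <limits>
    #include <utility>
    #include <vector>

    struct Detection { double x, y, z; };               // position in camera frame
    struct PatternSlot { double x, y; int pickIndex; }; // one numbered slot
    using Pattern = std::vector<PatternSlot>;           // one arrangement from the JSON file

    // Sum of distances from each slot to its nearest detection: lower = better fit.
    double matchScore(const std::vector<Detection>& dets, const Pattern& pat) {
        double score = 0.0;
        for (const auto& slot : pat) {
            double best = std::numeric_limits<double>::max();
            for (const auto& d : dets)
                best = std::min(best, std::hypot(d.x - slot.x, d.y - slot.y));
            score += best;
        }
        return score;
    }

    // Pick the best-matching pattern, then sort detections by the pick index
    // of the slot each one falls closest to.
    std::vector<Detection> orderForPicking(const std::vector<Detection>& dets,
                                           const std::vector<Pattern>& patterns) {
        const Pattern* best = nullptr;
        double bestScore = std::numeric_limits<double>::max();
        for (const auto& p : patterns) {
            double s = matchScore(dets, p);
            if (s < bestScore) { bestScore = s; best = &p; }
        }
        std::vector<std::pair<int, Detection>> tagged;
        for (const auto& d : dets) {
            int idx = 0;
            double nearest = std::numeric_limits<double>::max();
            for (const auto& slot : *best) {
                double dist = std::hypot(d.x - slot.x, d.y - slot.y);
                if (dist < nearest) { nearest = dist; idx = slot.pickIndex; }
            }
            tagged.push_back({idx, d});
        }
        std::sort(tagged.begin(), tagged.end(),
                  [](const auto& a, const auto& b) { return a.first < b.first; });
        std::vector<Detection> ordered;
        for (const auto& t : tagged) ordered.push_back(t.second);
        return ordered;
    }

    int main() {
        std::vector<Pattern> patterns = {
            {{0.0, 0.0, 1}, {0.3, 0.0, 2}},                   // two-object row
            {{0.0, 0.0, 1}, {0.3, 0.0, 2}, {0.15, 0.3, 3}},   // triangle
        };
        std::vector<Detection> dets = {{0.29, 0.01, 0.8}, {0.01, -0.02, 0.8}};
        for (const auto& d : orderForPicking(dets, patterns))
            std::printf("pick at (%.2f, %.2f, %.2f)\n", d.x, d.y, d.z);
    }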

Before building this software, I had built a few prototypes. One was an object detector that returns the location of an object in an image using deep learning techniques; I tried a few open-source methods such as Fast R-CNN and RetinaNet. The other was an object detector built with traditional computer vision methods: I first detected line segments with the Hough transform and computed their intersections. From those intersection points, I generated all possible object hypotheses and filtered out the improbable ones to localize the correct objects. Defining objects by their edges was possible because each object had a specific geometric shape and the camera's view was normal to its surface. Building these two prototypes helped me gain intuition and identify the challenging parts of creating an object detector.
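A rough sketch of that classical pipeline, assuming OpenCV is available (this is illustrative, not the prototype itself; the image path and all thresholds are placeholders):

    // Illustrative sketch of the classical pipeline:
    // edges -> Hough line segments -> pairwise intersections as corner candidates.
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <cstdio>
    #include <optional>
    #include <vector>

    // Intersection of the two infinite lines through the given segments, if any.
    static std::optional<cv::Point2f> intersect(const cv::Vec4i& a, const cv::Vec4i& b) {
        cv::Point2f p(a[0], a[1]), r(a[2] - a[0], a[3] - a[1]);
        cv::Point2f q(b[0], b[1]), s(b[2] - b[0], b[3] - b[1]);
        float cross = r.x * s.y - r.y * s.x;
        if (std::abs(cross) < 1e-6f) return std::nullopt;   // near-parallel lines
        float t = ((q.x - p.x) * s.y - (q.y - p.y) * s.x) / cross;
        return cv::Point2f(p.x + t * r.x, p.y + t * r.y);
    }

    int main() {
        cv::Mat img = cv::imread("boxes.png", cv::IMREAD_GRAYSCALE);  // placeholder path
        if (img.empty()) return 1;

        cv::Mat edges;
        cv::Canny(img, edges, 50, 150);                               // edge map

        std::vector<cv::Vec4i> segments;
        cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 80, 40, 10); // line segments

        std::vector<cv::Point2f> corners;                             // corner candidates
        for (size_t i = 0; i < segments.size(); ++i)
            for (size_t j = i + 1; j < segments.size(); ++j)
                if (auto pt = intersect(segments[i], segments[j]))
                    if (pt->x >= 0 && pt->y >= 0 && pt->x < img.cols && pt->y < img.rows)
                        corners.push_back(*pt);

        std::printf("%zu line segments, %zu intersection candidates\n",
                    segments.size(), corners.size());
    }

From corner candidates like these, object hypotheses can be formed and the improbable ones filtered out, as described above.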

Figure 1. The work cell at the customer's logistics center

2. Challenges I faced

I had two main objectives during development. First, my software should function as the client intended and requested. Second, my code should be readable, maintainable, and flexible.

Some challenges I faced came from a lack of experience with specific libraries. It's difficult to search online for things you don't know that you don't know. I hadn't grasped all the available resources, such as classes and functions I could have used to write more efficient code. I also wasn't aware of many algorithms that would have helped solve some of my problems, such as iterative closest point (ICP) and principal component analysis (PCA), until I discussed my solutions with senior colleagues. In those cases I had come up with solutions of my own, but they were usually less efficient, more complicated to execute, and less robust to variation in the data. Finally, I was applying deep learning, a technology that was still relatively new to me. Over the past year I had diligently spent many hours studying the foundations and prototyping from tutorials to become more familiar with the concepts, but it was much more challenging to apply them to a real-world scenario and develop software that also meets industrial standards.
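To give one concrete example of why such textbook algorithms tend to beat ad-hoc solutions: PCA yields the principal axes of a point cluster in a few lines, which is handy for estimating an object's orientation from its points. Below is a minimal, illustrative sketch assuming Eigen is available; it is not the code I wrote at work, and the data is a toy example.

    // Minimal PCA sketch with Eigen: estimate the principal axes of a 3-D
    // point cluster, e.g. the points belonging to one detected object.
    #include <Eigen/Dense>
    #include <cstdio>
    #include <vector>

    int main() {
        // Toy cluster: points lying roughly along a tilted line segment.
        std::vector<Eigen::Vector3d> pts;
        pts.emplace_back(0.0, 0.00, 0.00);
        pts.emplace_back(0.1, 0.05, 0.00);
        pts.emplace_back(0.2, 0.10, 0.01);
        pts.emplace_back(0.3, 0.15, 0.00);
        pts.emplace_back(0.4, 0.20, -0.01);

        // Mean-center the points.
        Eigen::Vector3d mean = Eigen::Vector3d::Zero();
        for (const auto& p : pts) mean += p;
        mean /= static_cast<double>(pts.size());

        // 3x3 covariance matrix of the centered points.
        Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
        for (const auto& p : pts) {
            Eigen::Vector3d c = p - mean;
            cov += c * c.transpose();
        }
        cov /= static_cast<double>(pts.size());

        // The covariance's eigenvectors are the principal axes; the one with
        // the largest eigenvalue is the dominant direction of the cluster.
        Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
        Eigen::Vector3d major = solver.eigenvectors().col(2);  // eigenvalues ascend

        std::printf("major axis: (%.3f, %.3f, %.3f)\n", major.x(), major.y(), major.z());
    }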

Furthermore, since I was writing my software from scratch, I had great flexibility in architecting my code. That also meant I felt more responsibility to start right. I wanted to write clean code that was maintainable and flexible. It mattered to me because I believe a good software engineer should care about the people who will read and modify the codebase later, and my company also planned to extend my application for other potential customers. Because I had recently switched fields to software engineering and admittedly lacked experience, there were times I was unsure which best practices to follow, or didn't even realize I was following a bad practice until code review.

3. How I overcame my challenges

I spent many hours researching other people's approaches. I wanted to make sure I wasn't reinventing the wheel, and even if I were, I wanted to learn from others. I'm very fortunate to live in a time with plenty of resources and supportive communities online. Some references I found helpful were open-source libraries' documentation pages and community sites, GitHub repositories and their issue pages, Stack Overflow, and the book "Clean Code". I also reached out for discussion when I felt stuck, or when I sensed my approach wasn't the best solution but couldn't yet come up with a better one on my own. Those discussions were how I learned about useful algorithms like ICP and PCA. I also engaged with deep learning communities: besides following well-known scientists online, I attended events such as the recent NVIDIA AI Conference and the Jetson meetup group in Seoul to gain insights and network with machine learning and deep learning practitioners. I also took advantage of some job interviews, where I got to meet machine learning scientists and engineers and ask them questions. Interestingly, some of my issues were left unanswered by many of the practitioners as well. Ironically, that helped me work with some uncertainty, knowing it wasn't something I faced due to a lack of expertise or experience but due to the nature of the field at this time.

I also learned a lot through code review with my seniors. We didn't have scheduled code reviews, but I asked for one occasionally. It helped me identify bugs and architect and refactor the code for more efficiency and scalability. Additionally, confirmation of the good practices and styles I was already following was very encouraging. Overall, code review helped me correct and improve my code while exposing me to the knowledge of more experienced engineers.

Figure 2. A mock work cell at the customer's site for intermediate testing before installation at the real site.

4. What I learned

Overall, I think I made the most of my five months at the startup.

Two initial goals I achieved are:

  1. Integrating deep learning techniques into a commercial product for a real-world application (i.e., industrial automation)

  2. Designing and developing fully functioning software in C++

Additional highlights are:

  1. Experience working with a client and an external collaborator

  2. Experience working on-site: installation, testing, and debugging

  3. Experience working at a startup: less structure than at bigger companies or teams, a quick pace and late-night work culture, and a wide range of coding and non-coding responsibilities

5. What I want to improve

First, I'd like to better understand how a computer works and how an operating system works. I also want to spend more time deepening my grasp of C and C++, learning to use them more efficiently, and reading more style guides.

Second, I'd like to focus on better understanding the algorithmic side of machine learning and to conduct foundational research. To that end, I plan to review my foundations in mathematics and, ultimately, in machine learning. Specifically, I'm interested in deep reinforcement learning (DRL) and complementary topics such as neuroevolution for robotics applications. In the long term, I'd like to participate in building human-level intelligence, or artificial general intelligence, that will expand the capabilities of robots.

6. Why?

My interest in robotics began with an introduction to a surgical robot, the da Vinci Surgical System, during high school. Now I see value in its everyday applications as well as in industrial and medical ones. The potential benefits robotics can bring to our society inspire me. But those benefits usually require more mobile and smarter robots. A robot should be able to communicate with others, human or machine, sense its environment, reason for itself, and make decisions. Additionally, it should be able to manipulate itself and the objects around it safely and swiftly.

I am especially mesmerized by the mobility of robots. It's thrilling to watch one move and do something useful, and robots doing something dexterous have always captivated me. It might be because the movement itself symbolizes all the possibilities intelligent robots hold for our society. That is why topics like deep reinforcement learning for autonomous manipulation interest me.

To summarize, my goal is to make an impact in robotics research and push the capabilities of robots for the benefit of our society - towards a healthier, more productive, and more equal society. I believe machine learning holds one of the keys to expanding robot applications, and I would like to invest my career in this area.

7. What's next?

I feel fortunate to have had this opportunity to contribute and grow as a software engineer. It's bittersweet to close this chapter of the journey now that I've gotten to know the team better, but I'm also excited to start a new chapter as a slightly more mature engineer armed with newly gained experience. I am joining the Advanced Robotics Lab at LG Electronics as a research engineer. January 21st, 2019 is my first day. :)