Supervised Learning is Critical to the Future of Automation


Robots will continue to get smarter, but sometimes they will need to learn from the expert…us

The primary goal of automation is to free people up to be more productive. Instead of a person doing a dull or repetitive task, a robotic system does it, allowing the person to accomplish more by focusing on tasks that require thought, creativity, and problem-solving – tasks that ultimately generate more value. When a robotic system can accomplish its task with some amount of autonomy, for instance picking from a bin of randomly placed parts rather than requiring neat stacks, people are freed up even more. However, robots still don’t get everything right. For instance, the best vision algorithms, which let a robot see and interact with its environment, are still not 100% accurate.

These algorithms, whether they are detecting an object or learning to hold a metal part, learn by repeating the same action over and over. The more they perform the action, the better they get. However, this only works when the task never changes: when the parts don’t vary and are presented in the same location and orientation. When any single aspect of the task varies, the algorithm may not be able to cope with the change in circumstances.

Mentorship of a Robot

To improve this performance and deal with these new situations, many algorithms employ a strategy called supervised learning. Many of these robots are deployed in environments where the people who used to perform the robot’s task are still nearby. The idea behind supervised learning is that when the robot encounters a condition it has never seen before – a part it has never seen, two parts stuck together, a jam in the machine – a person can provide the missing information the robot needs. That information might be a new label (yes, that is still a bottle of soap, even though it’s upside down) or a new demonstration (you need to pick up this new part here). The person demonstrates, provides the additional information, and the robot is back to work.
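
As a rough sketch of how that fallback might look in software (the class names, confidence threshold, and operator prompt here are illustrative, not taken from any particular product), a vision routine can route low-confidence detections to the operator and keep the corrected label for retraining:

```python
import random
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, the robot asks a person for help


@dataclass
class Detection:
    label: str
    confidence: float


class SupervisedPicker:
    def __init__(self):
        # operator-corrected examples accumulate here for later retraining
        self.labeled_examples = []

    def classify(self, image) -> Detection:
        # stand-in for a real vision model: here we just fake a detection
        return Detection("soap_bottle", random.uniform(0.5, 1.0))

    def ask_operator(self, image) -> str:
        # in a real cell this prompt would appear on the operator's HMI
        return input("Robot is unsure. What is this part? ")

    def identify(self, image) -> str:
        detection = self.classify(image)
        if detection.confidence >= CONFIDENCE_THRESHOLD:
            return detection.label
        # the supervised-learning step: a person supplies the missing label,
        # and the corrected example is kept so the model can improve
        label = self.ask_operator(image)
        self.labeled_examples.append((image, label))
        return label
```

The last two lines of `identify` are the point of the pattern: every correction becomes a training example, so the more the robot is mentored, the better it gets.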

This type of corrective assistance is also common in how people transfer skills to each other. In many trades, this kind of mentorship has happened for thousands of years and is still a common way new workers are taught (as opposed to, or in addition to, a classroom setting). There are even companies using AI to better capture this mentorship.

So do people have time for this robot mentorship? By freeing people from repetitive tasks, automation creates exactly the resource it needs: knowledgeable workers who are already alongside the robotic system and can provide that mentorship to improve performance. Imagine a worker who previously spent all their time at a single machine and now manages a team of robots, occasionally tweaking each robot’s performance with extra information from their own know-how. They are more empowered because their productivity is higher, they are freed from performing an undesirable task themselves, and they are still using their considerable knowledge to continuously improve the robotic systems through mentorship.

Usability is the Key

A key element of designing a robotic system that can be trained effectively through supervised learning is ensuring that the interface the worker uses to provide that extra information is easy to use. This is especially true when those providing the supervision are machine operators (who are common) rather than robotics engineers (who are exceedingly rare).

If we want to leverage the fact that people are familiar with (and good at) mentorship, and apply that to making automated systems better, then the interfaces used to provide that information to the robot need to be similar to how people train each other. For instance, workers often demonstrate motions in order to teach a skill: you need to hold the tool like this, and press here. With the availability of “collaborative robots”, people can now demonstrate a motion directly by moving the arm of the robot. We are also starting to see vision systems that can watch the motions a person makes, such as painting with a paint sprayer, and use that demonstration to define the motions of the robot. This works well because, again, it is very similar to how people demonstrate motions to each other. Teaching the robot this way is natural, and it enables even those without extensive robotics experience to train robots. This is key to enabling more rapid adoption of automation.
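
To make the hand-guiding idea concrete, here is a minimal sketch that assumes a hypothetical robot driver exposing `enable_freedrive()`, `read_joint_positions()`, and `move_to()`. Real collaborative-robot SDKs name these things differently, but the record-then-replay pattern is the same:

```python
import time


class DemonstrationRecorder:
    """Records joint positions while an operator physically guides the arm."""

    def __init__(self, robot, sample_rate_hz: float = 10.0):
        self.robot = robot                 # hypothetical robot driver object
        self.period = 1.0 / sample_rate_hz
        self.waypoints = []                # recorded joint-space poses

    def record(self, duration_s: float):
        # put the arm in gravity-compensated "freedrive" so a person can move it
        self.robot.enable_freedrive()
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            self.waypoints.append(self.robot.read_joint_positions())
            time.sleep(self.period)
        self.robot.disable_freedrive()

    def replay(self):
        # the recorded demonstration becomes the robot's motion
        for joints in self.waypoints:
            self.robot.move_to(joints)
```

A vision-based version would be the same loop, with the poses coming from a camera tracking the person’s tool instead of from the robot’s own joint sensors.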

From a learning perspective, a worker might need to show, at least once, how a task is done so the robot can learn from it. The worker defines a set of steps to complete a task and represents those steps in some logical flow. Workers commonly use flow charts to teach other workers how a process works step by step. There is now No Code software that provides this same flow-chart interface for programming the robot. This means that instead of learning to code, a worker can pick up a conversational interface built on something they are already familiar with, with significantly less effort.
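
Under the hood, a flow-chart program is just a set of named steps, each pointing at the step that follows (or branching on an outcome). A minimal sketch, with illustrative step names rather than any vendor’s actual format:

```python
# A task expressed the way a flow chart draws it: named steps, each with
# an action and a pointer to the next step (or a pass/fail branch).
task = {
    "start":   {"action": "wait_for_part", "next": "inspect"},
    "inspect": {"action": "check_part",    "pass": "pick", "fail": "reject"},
    "pick":    {"action": "pick_part",     "next": "place"},
    "place":   {"action": "place_part",    "next": "start"},
    "reject":  {"action": "discard_part",  "next": "start"},
}


def inspect_result() -> str:
    return "pass"  # stand-in: a vision check would return "pass" or "fail"


def execute(start: str, max_steps: int = 8) -> None:
    step_name = start
    for _ in range(max_steps):
        step = task[step_name]
        print(f"{step_name}: {step['action']}")  # a real system moves the robot here
        # follow the branch on inspection steps, otherwise the single arrow
        step_name = step[inspect_result()] if "pass" in step else step["next"]


execute("start")
```

The worker only ever sees the boxes and arrows; the software keeps the graph and walks it.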

No Code software, with its significantly decreased learning curve, also makes learning about robots easier. The educational barrier to robotics is very high, and much of that stems from the time required to learn to program in the robot’s native programming language. When programming can be learned rapidly instead, those precious hours of upskilling can be geared toward learning the best way for the robot to do the task. How does the robot best hold parts? How does it deal with variability? How is it most likely to fail at the task? The more the worker can get inside the robot’s “head”, the more they can help guide that robot to success.

Adapting to New Situations

A key piece of a robot’s ability to learn is whether that knowledge can be applied in a new situation.

Now, when we talk about the robot’s ability to learn, we are really talking about the software or algorithm’s ability to learn. For this type of learning to be truly useful, once a skill is learned it should work no matter what robot – or hardware – is doing the task. This is a simple idea, but it seldom works this way in practice.

For example, let’s say an algorithm learned to pick cylinders from a bin using a small industrial robot with a vacuum gripper. It knows what the part looks like and where to place the vacuum cup so the part is held securely. Now the user wants to use a larger robot with a two-finger gripper. The algorithm still knows how to manipulate the cylinder, but it needs new information about this bigger arm and where to position the two-finger gripper for the best grasp. This process of learning the fundamentals of a skill, agnostic of the particular hardware, is called abstraction, and people do it all the time. Hand a skilled carpenter five different hammers and they will still be able to hit a nail, because they know the task beyond the constraints of a particular tool.

So now imagine a machine operator who is in charge of multiple such robots. They must provide this additional information to allow each robot to grab the part under these new circumstances. However, this operator must deal with many such machines and situations, potentially with robots from different brands. It is essential that this user has a consistent interface for providing this extra information, meaning the robots must be running the same overlaying software. This common software layer also benefits the algorithm, since the algorithm now has a standardized interface to the robot, and a representation of that robot that is agnostic of its size and configuration. After all, the robot can still move and grab; it just happens to be larger and have a different hand.
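
One plausible way to structure that separation (a sketch, not any specific vendor’s architecture) is to keep the skill hardware-agnostic and push gripper-specific knowledge into interchangeable adapters:

```python
from abc import ABC, abstractmethod


class Gripper(ABC):
    """The hardware-specific part: each gripper knows its own grasp strategy."""

    @abstractmethod
    def grasp_pose_for(self, part: str) -> str:
        ...


class VacuumGripper(Gripper):
    def grasp_pose_for(self, part: str) -> str:
        return f"suction cup centered on the flat face of the {part}"


class TwoFingerGripper(Gripper):
    def grasp_pose_for(self, part: str) -> str:
        return f"fingers on opposite sides of the {part}, then close"


class PickSkill:
    """The hardware-agnostic part: the skill knows *what* to do.

    It asks whatever gripper it is given *how* to grasp, so the same
    learned skill runs on a small vacuum robot or a larger two-finger one.
    """

    def __init__(self, gripper: Gripper):
        self.gripper = gripper

    def pick(self, part: str) -> None:
        pose = self.gripper.grasp_pose_for(part)
        print(f"Picking {part}: {pose}")  # a real system plans and moves here


# The same skill on two different robots:
PickSkill(VacuumGripper()).pick("cylinder")
PickSkill(TwoFingerGripper()).pick("cylinder")
```

Swapping the gripper class is the only change needed to move the skill to the bigger robot; the extra information the operator provides lives in the adapter, not in a rewritten skill.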

Recent technological advances allow a learning algorithm like the one described above to work on any brand of robot, because all the robots run on a common underlying software platform. Think Windows for PCs: one program can run on many different computers, and the underlying software just works. This common platform makes every robot look the same in the eyes of the algorithm, making the translation of skills between robots much easier.

Conclusion

Robots that can figure things out on their own still have a lot to learn from us, and supervised learning is a way to impart that knowledge. However, to truly enable supervised learning, systems need to be usable by the people providing that extra knowledge. User interfaces need to resemble how people already teach each other, and be easy enough to use that showing a robot the ropes is as natural as showing another person. Additionally, common software platforms for robot hardware let skills learned through supervised learning transfer much more quickly to new situations, because the robot and its tools appear the same to the algorithm even when they differ in size and configuration. Put it all together, and with the right architecture for supervised learning (a conversational, intuitive robot programming interface, a common interface across robot models and brands, and software optimized for supervised-learning input and teaching), robots don’t just become more effective. They can handle tasks that would have been very difficult to program, and the machine operators using them are fully leveraged – enabling previously impossible levels of productivity.