Cobots Are Collaborators. AI Will Make Them Partners
Article 7 of our Cobot Series: Cobots have already staked their claim as a “must-have” for industrial automation, but they’ve only just scratched the surface of what they can achieve.
This is the seventh article of an eight-part series that explores how collaborative robots, or cobots, are transforming industrial workspaces. It surveys the technologies that have converged to make cobots possible, unpacks the unique engineering challenges they pose, and describes the solutions that are leading the way to the cobots of the future.
The articles were originally published in an e-magazine, and have been substantially edited by Wevolver to update them and make them available on the Wevolver platform. This series is sponsored by Mouser, an online distributor of electronic components. Through their sponsorship, Mouser Electronics supports spreading knowledge about collaborative robots.
Introduction
Cobots have come a long way in the few years since they were identified as a truly useful addition to manufacturing. At first, they could perform only a single repetitive task, but today they can perform multiple sets of complex tasks dedicated to specific workstations on a production line. They’ve even been touted as having the ability to “learn”—that is, in the dictionary sense of acquiring knowledge by being taught or through experience. Cobots have mastered the first method of learning, being taught, but not the second, experience-based method, as it requires artificial intelligence (AI) capabilities that cobots do not yet possess but soon will.
Programming and training
Industrial robots have traditionally been programmed to perform a single function composed of multiple steps. This is accomplished by writing lots of code, which takes a long time, requires formidable programming expertise, and severely limits a robot’s ability to quickly adapt to new situations. Cobots are very different. Although they too must be programmed, the process can be so simple that anyone can do it, much like creating a macro on a PC keyboard.
Rather than writing code, programming a cobot can be as straightforward as a human guiding the cobot through a series of steps, using a smartphone or tablet app to mark waypoints and save the results. Armed with multiple skill sets, the cobot can be moved from workstation to workstation on a production line, and the routine for each station can be recalled at the press of a button.
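In software terms, this teach-and-repeat workflow boils down to recording poses and replaying them. The sketch below is a minimal illustration of the pattern; the class, method names, and six-joint pose representation are hypothetical, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Routine:
    """A named sequence of recorded waypoints (joint angles in degrees)."""
    name: str
    waypoints: list = field(default_factory=list)

class CobotTeachPendant:
    """Hypothetical teach-and-repeat controller: hand-guide the arm,
    mark waypoints, then replay the routine at the press of a button."""

    def __init__(self):
        self.routines = {}             # routine name -> Routine
        self.current_pose = [0.0] * 6  # six joint angles for a 6-axis arm

    def mark_waypoint(self, routine_name):
        """Record the arm's current pose as the next waypoint."""
        routine = self.routines.setdefault(routine_name, Routine(routine_name))
        routine.waypoints.append(list(self.current_pose))

    def replay(self, routine_name):
        """Drive the arm through the saved waypoints in order."""
        for pose in self.routines[routine_name].waypoints:
            self.move_to(pose)

    def move_to(self, pose):
        # Placeholder for the motion-control layer; a real cobot would
        # interpolate between poses under speed and force limits.
        self.current_pose = pose
        print(f"moving to {pose}")

pendant = CobotTeachPendant()
pendant.current_pose = [10, -45, 90, 0, 45, 0]  # operator hand-guides the arm
pendant.mark_waypoint("pick_and_place")
pendant.current_pose = [10, -30, 60, 0, 30, 0]
pendant.mark_waypoint("pick_and_place")
pendant.replay("pick_and_place")                # the one-button recall
```

The operator hand-guides the arm, marks each waypoint, and later triggers the whole routine with one call, the software equivalent of the one-button recall described above.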
Cobots today have only a rudimentary form of learning, as their only autonomous capability is provided by sensors that measure distance, speed, proximity, force, and perhaps other variables to keep nearby human partners safe and to perform tasks precisely. Now, imagine what cobots could do, as the Scarecrow said, “If [they] only had a brain,” and could build on their original capabilities by making their own decisions.
With brain-like power, they could dynamically modify a step (or steps) in a routine based on what they “see” in real time, autonomously optimizing a production process rather than simply following the steps they were taught. They could also move from place to place, even in a large facility, by continuously noting changes to their environments. If a situation stumped them, they could simply ask their human partners what to do and receive the answer verbally rather than through the complex, non-intuitive process of coding or manual instruction. Through the various elements of AI, this and much more is what the cobotics industry hopes to achieve.
Autonomy and cost
Robots with varying levels of autonomy aren’t new, as Sony’s Aibo, Honda’s ASIMO, NASA’s Spirit and Opportunity rovers, iRobot’s PackBot military robots, and many others have been performing impressive feats of autonomy for years now. However, that autonomy came at prices ranging from thousands of dollars for a consumer robot like Aibo to hundreds of millions for a Mars rover. To be commercially viable, a cobot generally needs to cost closer to tens of thousands of dollars. Delivering high-end performance at that price point is thus an enormous challenge. However, as technologists have demonstrated time after time, where there is a lucrative market, the money and talent to serve it will appear. This is already taking place in academia and industry, with “coboticists” working overtime to make cobots all that they can be.
Not So “Artificial” Intelligence
Although AI is typically bandied about as a single entity, it’s actually a discipline encompassing a large family of problem-solving subsets: Reasoning, planning, learning, verbalization, perception, localization, manipulation, and others—some or all of which are required depending on the application. The discipline itself draws from fields ranging from statistics, mathematics, general science, and economics to philosophy, psychology, neurobiology, and linguistics. Fortunately, cobots don’t need to address all these areas, at least not yet.
The primary AI-derived skill required of cobots is machine learning: The ability to progressively improve their skills through experience gained over time. Machine learning uses algorithms that, by learning from data, enable cobots to make predictions and, thus, their own decisions.
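As a toy illustration of what “learning from experience” means in code, consider a cobot that predicts whether a pick will succeed and updates its estimate after every attempt. Everything here, the task, the features, and the model, is invented for the example; it is only the machine-learning loop in miniature.

```python
import math

class PickSuccessModel:
    """Toy online logistic-regression model: predicts the probability that
    a pick succeeds from a pair of invented features, and improves with
    every observed outcome."""

    def __init__(self, n_features, learning_rate=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = learning_rate

    def predict(self, features):
        """Estimated probability of success for the given feature vector."""
        z = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, succeeded):
        """One gradient step after observing the real outcome."""
        error = (1.0 if succeeded else 0.0) - self.predict(features)
        self.weights = [w + self.lr * error * x
                        for w, x in zip(self.weights, features)]
        self.bias += self.lr * error

model = PickSuccessModel(n_features=2)
# invented features: [grip force (normalized), part offset from center]
model.update([0.8, 0.1], succeeded=True)
model.update([0.2, 0.9], succeeded=False)
print(model.predict([0.7, 0.2]))  # the estimate improves as attempts accumulate
```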
The next requirement is perception, the ability of a cobot to use the data generated by its sensors to create a “vision” of the world around it. This is crucial for cobots because, unlike their caged counterparts, they function “hand in hand” with humans; without this ability, safety would be severely compromised.
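One concrete safety pattern that perception enables is speed-and-separation monitoring: the closer a detected person is, the slower the arm is allowed to move. The sketch below is a simplified illustration of the idea; the distance thresholds are invented for the example, and real systems derive theirs from risk assessments and standards such as ISO/TS 15066.

```python
def allowed_speed_fraction(min_human_distance_m):
    """Scale arm speed by the nearest detected person's distance.
    Thresholds here are illustrative only, not values from any standard."""
    if min_human_distance_m < 0.5:
        return 0.0   # protective stop: person is inside the stop zone
    if min_human_distance_m < 1.5:
        # linear ramp from 0 at 0.5 m up to full speed at 1.5 m
        return (min_human_distance_m - 0.5) / 1.0
    return 1.0       # no one nearby: full programmed speed

# fuse readings from several proximity/vision sensors into one worst case
sensor_distances = [2.3, 0.9, 1.7]  # meters, one reading per sensor
speed = allowed_speed_fraction(min(sensor_distances))
print(f"run at {speed:.0%} of programmed speed")
```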
Motion and manipulation are baseline skills that allow a cobot to handle and employ objects using the grippers and other tools at the end of its arm. Although AI is not strictly necessary for these manipulative functions, it is essential for mobile cobots, which must perform navigation, localization, mapping, and planning. As with perception, these skills rely heavily on sensors.
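To give the flavor of the planning side, the sketch below runs a breadth-first search over a simple occupancy grid to find a route between two cells. It is a deliberately simplified stand-in for the far more capable planners real mobile cobots use.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = blocked).
    Returns the list of cells from start to goal, or None if unreachable."""
    frontier = deque([start])
    came_from = {start: None}       # each visited cell's predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk predecessors back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

floor = [[0, 0, 0, 1],   # a toy floor plan: 1s are walls or obstacles
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print(plan_path(floor, start=(0, 0), goal=(2, 3)))
```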
Natural language processing (verbalization) is the subject of intense development, as it represents one of the ultimate goals of AI robotics: To allow cobots to converse with and thus learn from their human partners. Although verbalization skills have been demonstrated, much more work must be done in this area, as achieving it draws on many other areas of AI.
Artificial neural networks and deep learning
Two other areas are important for the future of cobotics: Artificial neural networks and deep learning. An artificial neural network is designed to achieve advanced learning skills without the need for any type of programming, effectively attempting to mimic many of the abilities of the human brain. The goal of this immensely complex discipline is to allow robots to emulate the ability of humans to smoothly integrate inputs with motor responses, even when their environment changes. Like most other AI subsets, neural networks use other resources within the cobot to achieve their goals.
Deep learning is the most advanced form of machine learning; like neural networks, it is concerned with algorithms inspired by the structure and function of the brain. Deep learning gets its name from its large number—or “depth”—of layers and is basically a “deep” neural network. It has the potential to make creating and using algorithms much easier. Viewed another way, deep learning is a pipeline of trainable modules: recognizing an object takes many stages, all of which are trained. For cobotics, deep learning is a future goal rather than an immediate prospect, as it requires truly massive amounts of processing power and data; the more it has of each, the better it performs.
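To make “depth” concrete, the sketch below pushes an input through a small stack of layers in plain Python with NumPy. Each layer is one trainable stage of the pipeline; the layer sizes and random weights are arbitrary, chosen only to show the structure, and training (which would adjust every stage) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One trainable stage: a linear transform followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ weights + bias)

# a "deep" network is simply many such stages composed in sequence;
# the sizes here are arbitrary, chosen only to show the idea
sizes = [16, 32, 32, 32, 8]   # input -> three hidden layers -> output
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, sizes[0]))  # e.g., a flattened sensor reading
for w, b in params:                  # training would adjust every stage
    x = layer(x, w, b)
print(x.shape)                       # (1, 8): the network's output
```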
Bleeding-edge Cobotics
The autonomous mobile service robots developed by researchers led by Manuela Veloso at Carnegie Mellon’s Robotics Institute are superb examples of the state of the art. For more than four years, they have been navigating the facility’s multi-floor office buildings, including corridors, elevators, and open areas, covering more than 800 miles. They can transfer loads from cobot to cobot, ask people for help to compensate for their limitations, and arrange their delivery schedules to optimize efficiency. As service robots, they have no hands; they use a basket to pick up and deliver objects and can escort people.
Their scheduling and communication capabilities are remarkable. For example, if a robot is blocked in a hallway, it will wirelessly inform the other robots of its predicament so they can revise their routes to avoid it. If a robot detects a closed door at an office where another robot is scheduled to make a pickup or delivery, it will tell that robot, and the scheduler will delay the stop until the office’s occupant has returned.
To accommodate changes in the buildings’ physical environments, such as cafeterias and atriums where tables and chairs are frequently moved and people are constantly on the move, the team developed an algorithm based on Episodic non-Markov Localization (EnML). The algorithm makes assumptions about objects without storing massive amounts of data, as a static map would require. The EnML approach also eliminates the need to store a robot’s complete observation and functional history from the time of its first deployment.
To empower a cobot’s verbalization capability, sensor data is converted to natural language, enabling the robot to describe its experiences. Using 2,400 spoken statements collected during programming, the cobots can learn a language model with greater than 70 percent accuracy. A cobot then uses this model to predict what verbalization a person expects and further refines its prediction through continued dialog.
Real-time 3D printing
The Fraunhofer Institute for Computer Graphics Research in Germany is another cobot powerhouse. It has developed the first cobot that can autonomously, optically, and three-dimensionally scan parts and fabricate them with a 3D printer in real time. Because 3D printers can economically produce parts in small numbers, they are well suited to making parts, for classic cars for example, that are no longer commercially available.
In this case, the original part is placed on a turntable below the robotic arm that houses the scanner. The robot then moves the arm around the part to map its geometry, simultaneously using algorithms to create a three-dimensional image of the object. After simulation tools verify the scan’s accuracy, the part is printed. All of this takes place without any programming, manual training, or computer-aided design (CAD) tools.
A Heady Future for Cobotics
It took 54 years, from the time George Devol patented the first industrial robot arm in 1954 to the 2008 birth of the UR5 cobot at Universal Robots, to form the realm of cobotics. The next big cobot achievements will arrive much faster, and for that, the industry will have AI to thank. Still, considering AI is a work in progress, it will likely enter cobots one step at a time, with incremental advances adding intelligence to the machines, ushering in a new generation that will promote them from collaborators to full partners.
This article was originally written by Barry Manz for Mouser and substantially edited by the Wevolver team. It's the seventh article of a series exploring collaborative robots.
Article One introduces collaborative robots.
Article Two describes the history of industrial robots.
Article Three gives an overview of collaborative robots sensor technologies.
Article Four examines the balance between cobot safety and productivity.
Article Five discusses the development of cobot applications in manufacturing.
Article Six explores the challenges of motion control of robotic arms.
Article Eight discusses the social impact of cobots.
About the sponsor: Mouser Electronics
Mouser Electronics is a worldwide leading authorized distributor of semiconductors and electronic components for over 800 industry-leading manufacturers. They specialize in the rapid introduction of new products and technologies for design engineers and buyers. Their extensive product offering includes semiconductors, interconnects, passives, and electromechanical components.