The Use Of Machine Vision For Robot Guidance Offers New Possibilities

12 Aug 2022 SICK

By: Jim Anderson

It feels as though robots have been part of the automation landscape forever, but that is only true for a small set of applications. As robots gained popularity in more industries, the recent pandemic made it clear they could be used in many more situations. One of the pandemic's greatest impacts on the business world has been on the labor force. With unemployment nearing pre-pandemic lows, every industry is still looking for labor: there are over 10 million job openings and too few people to fill them. Enter the robot. The term robot comes from the Czech word robota, generally translated as “forced labor,” which describes most robots fairly well. Most robots in the world are designed for heavy, repetitive manufacturing work, handling tasks that are difficult, dangerous, or dull for human beings. Now robots are being added wherever they add value, so companies can deploy their human employees in higher-value areas and in tasks still beyond the scope of machines.

An industrial robot with six joints closely resembles a human arm — it has the equivalent of a shoulder, an elbow, and a wrist. Typically, the shoulder is mounted to a stationary base structure rather than to a movable body. This type of robot has six degrees of freedom, meaning it can pivot in six different ways. A human arm, by comparison, has seven degrees of freedom.
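A robot's pose follows from its joint angles through forward kinematics. As a minimal illustration of degrees of freedom, the sketch below models just a planar "shoulder" and "elbow" (two of the six joints); the link lengths are illustrative assumptions, not figures from any particular robot.

```python
# Forward kinematics for a planar two-joint arm: given the shoulder and
# elbow angles, compute where the wrist ends up. Link lengths are assumed.
import math

def forward_kinematics(theta1, theta2, l1=0.5, l2=0.4):
    """Return the (x, y) wrist position for shoulder angle theta1 and
    elbow angle theta2 (radians), with link lengths l1 and l2 in metres."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm lies straight along the x-axis, so the wrist sits at the sum of the link lengths; a full six-joint arm extends the same chain of rotations into three dimensions.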

Your arm’s job is to move your hand from place to place. Similarly, the robotic arm’s job is to move an end effector from place to place. You can outfit robotic arms with all sorts of end effectors suited to a particular application. One common end effector is a simplified version of the hand that can grasp and carry different objects. Robotic hands often have built-in pressure sensors that tell the computer how hard the robot is gripping a particular object. This keeps the robot from dropping or breaking what it is carrying. Other end effectors include blowtorches, drills, and spray painters.
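The pressure-sensor feedback described above amounts to a simple control loop: close the gripper a little at a time until the measured grip force is high enough to hold the part, but no higher. The sketch below assumes hypothetical `read_pressure` and `set_closure` interfaces; a real gripper would use its vendor's API.

```python
# Illustrative pressure-feedback gripping loop. The sensor and gripper
# interfaces are hypothetical, not a specific vendor API.

TARGET_PRESSURE = 5.0   # grip force (N) considered a secure hold (assumed)
MAX_CLOSURE = 100       # gripper closure in percent
STEP = 2                # closure increment per control cycle

def grip(read_pressure, set_closure):
    """Close the gripper until the pressure sensor reports a secure hold.

    read_pressure: callable returning current fingertip pressure (N)
    set_closure:   callable commanding gripper closure (0-100 %)
    """
    closure = 0
    while closure < MAX_CLOSURE:
        if read_pressure() >= TARGET_PRESSURE:
            return closure          # object held without over-squeezing
        closure += STEP
        set_closure(closure)
    raise RuntimeError("gripper fully closed without detecting the object")
```

Stopping at a target force rather than a fixed position is what lets the same gripper handle objects of different sizes and fragility without crushing them.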

The sensitivity of gripper arms is becoming increasingly important, especially when it comes to flexible production and batch size 1. Intelligent object detection is the key to complex automated applications. Light sources and detection functions can be customized and preset in line with special object properties relating to material, surface, or form, for instance.

A 2D camera mounted to the robot's end-of-arm tooling. The PLOC2D reports the position and size of the part so the gripper can pick up the object.

Most industrial robots work on auto assembly lines, putting cars together. Robots can do much of this work more efficiently than human beings because they are so precise: they always drill in the same place and always tighten bolts with the same amount of force, no matter how many hours they have been working. In recent years, though, the use of robots and vision has expanded dramatically into other areas. In packaging and food processing, robots sort and grade bulk fruits and vegetables. In health care, robots keep the inventory of medical supplies in patient rooms accurate so that staff can concentrate on providing care.

As applications for robots increase, the ability to perceive and interact with surrounding objects and the environment becomes critical to robotic functionality, especially where robots work alongside humans.

The use of machine vision for robot guidance offers new possibilities for industrial operations, but also creates complex safety design challenges. This is because it is applied to machinery that operates with high repeatability and low latency, especially in complex industrial sectors. Machine vision systems must deal with critical industrial issues like lighting conditions, texture-less surfaces, and undetermined and moving objects. Therefore, effective robot guidance requires specific machine vision sensors and techniques.

Image-based solutions that enlarge the field of vision of the robots are increasingly becoming the focus. Thanks to vision technology, the robot localizes and identifies defined objects in advance and decides by itself how to grip the respective part. Beyond localization, the same system can carry out measurements and quality inspections, adding even more value to the overall solution.
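For the robot to grip a part the vision system has localized, the detection must be translated from image coordinates into the robot's coordinate frame. The sketch below uses a simple 2D scale-rotate-translate calibration with placeholder values; production cells use a full hand-eye calibration rather than these assumed constants.

```python
# Map a camera detection (pixels) into robot coordinates (metres) with a
# simple 2D affine calibration. The scale, rotation, and offset values are
# placeholders, not measured calibration data.
import math

def pixel_to_robot(u, v, scale=0.0005, angle=math.radians(90),
                   tx=0.250, ty=-0.100):
    """Convert a pixel coordinate (u, v) to a robot-frame (x, y) position.

    scale: metres per pixel; angle: camera-to-robot rotation;
    (tx, ty): camera origin expressed in the robot frame -- all assumed.
    """
    # scale to metres, then rotate and translate into the robot frame
    cx, cy = u * scale, v * scale
    x = cx * math.cos(angle) - cy * math.sin(angle) + tx
    y = cx * math.sin(angle) + cy * math.cos(angle) + ty
    return x, y
```

The same transform chain (scale, rotate, translate) underlies 3D guidance as well, with a third axis and a full rotation matrix in place of the single angle.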

Structured light 3D system for robot guidance and bin picking.

Human and robot collaborative workstation with sensors for item detection and for safe operation to improve overall production.

There are numerous approaches to robot guidance using 3D machine vision techniques, such as stereo vision and photogrammetry, time of flight, structured light, light coding, and laser triangulation. Some applications lend themselves better to 2D machine vision techniques, as the parts or tasks require inspections based on contrast in the image rather than differences in height.

The technique ideal for a specific application depends on its requirements. To determine the ideal machine vision technique to apply, consider the application’s goal and its type of robot, as well as factors including:

  • The distance the camera will be from the inspection object. The working distance and the environment can have a significant impact on the sensor and techniques used.
  • The accuracy and resolution needed to meet the application requirements. The hardware chosen, along with accessories such as lighting and the lens, can greatly impact the real resolution of the system; it is not determined solely by the number of pixels in the image sensor.
  • The weight of the system. Many applications are best solved with the vision system mounted on the robot near the end effector. Robots have a finite payload capacity and typically cost more as payload and reach increase, so it is important for the vision solution to use as little of that capacity as possible.
  • The processing speed. The processing power of the system can affect the overall throughput of the application. Some applications, such as 3D CAD matching for bin picking, are so computationally intensive that images are often processed on off-board controllers or PCs, while many others allow direct communication between the robot and the camera, making the system architecture much less involved.
  • The overall budget. This is often a major decision-making criterion when picking the ideal system. Sometimes multiple solutions offer different performance levels, and the choice often comes down to a “good enough” option (one that meets all stated application requirements) versus the best option (one that meets all needs and wants, or can handle potential future applications) but costs more.
  • The level of safety required. Robots often handle applications that are dangerous because of the task itself (metal forming, pipe bending, moving high-temperature objects) or because of the repetition involved (pallet loading or unloading, picking parts from a bin), which could cause boredom or injury in a human operator. As more robots work side by side with humans, the need for safety considerations is increasing.
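Weighing these factors against each other can be made explicit with a simple scoring exercise. The sketch below ranks a few candidate techniques by weighted criteria; the candidates, the 1–5 ratings, and the weights are all illustrative assumptions, not vendor data, and a real selection would come out of the requirements analysis described above.

```python
# Toy decision helper over the selection criteria above. Candidates,
# ratings (1-5, higher is better), and weights are illustrative assumptions.

CRITERIA_WEIGHTS = {       # how much each factor matters for this application
    "working_distance": 2,
    "resolution": 3,
    "weight": 2,
    "speed": 2,
    "cost": 3,
}

CANDIDATES = {             # rough per-technique ratings (assumed)
    "2D contrast imaging": {"working_distance": 4, "resolution": 3,
                            "weight": 5, "speed": 5, "cost": 5},
    "structured light 3D": {"working_distance": 3, "resolution": 5,
                            "weight": 3, "speed": 3, "cost": 2},
    "time of flight":      {"working_distance": 5, "resolution": 2,
                            "weight": 4, "speed": 4, "cost": 3},
}

def rank_techniques(candidates, weights):
    """Return technique names sorted by weighted score, best first."""
    def score(ratings):
        return sum(weights[c] * ratings[c] for c in weights)
    return sorted(candidates, key=lambda name: score(candidates[name]),
                  reverse=True)
```

Adjusting the weights, say, tripling the safety or resolution weight for a collaborative cell, can reorder the ranking, which is exactly the trade-off the bullet list describes.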

There are many ways to keep robot applications safe, which is why it is important to work with a person or company that can provide a documented process and solution, including a risk assessment of the entire application and of the areas where the solution will interact with people and with other processes in the work area. Working with a company that is certified and has access to the appropriate monitoring devices and software can help reduce the chances of an accident.

These factors are a starting point for selecting the ideal technique for an industrial vision-guided robot application. Further research can surface additional techniques that might also work, and it is this combination of new application needs and technical creativity that is unlocking even more possibilities for robotic applications. Close and, at the same time, safe collaboration between humans and robots is the prerequisite for high productivity, increased efficiency, and improved ergonomics. Machine vision can give robots the flexibility to work everywhere from dangerous factory jobs to the logistics and food-prep applications now coming online as the need for employees increases.

The ability to perceive and interact with surrounding objects and the environment is critical to robotic functionality, especially in applications working alongside humans. This can be achieved with an adaptive perception of the environment with the aid of intelligent, rugged, and reliable sensors and safe systems. As technology advances, there will undoubtedly be a need for increased robotics education and literacy, as well as for robotics technicians, machine vision technicians, and safety experts to keep everything running smoothly.
