Snoeks Automotive is a company that delivers modification concepts for manufacturers of light commercial vehicles, making it possible for car manufacturers to adapt individual vehicles to their clients' wishes.
Its products are produced in the Czech Republic, which reduces production costs thanks to lower labour costs. As a result, most of the work is done manually. The biggest issue is the manual sorting and bagging of all kinds of small parts used for assembling the products. Each bag needs to contain a specific number of different parts.
Snoeks Automotive asked the students of Smart Manufacturing and Robotics to automate this operation with a co-bot so that it is more precise than a human.
All sorts of different parts, from bolts to pins to protection strips, are placed in a bag. These bags are used for assembling the products, so each bag has to contain a specific number of parts.
This job is currently done manually by employees, but that causes accuracy problems: the employee grabs a rough handful of parts and puts it in the bag. Sometimes this means there are too few parts, which leads to unsatisfied clients.
The robot needs to pick up all sorts of parts, and the right amount of each. A job consists of a combination of different kinds of parts. Jobs can change daily, so it has to be easy to adjust the system for the right job. The bagging of all the parts is out of scope. For this project, a system is made for three parts at a time. The parts can differ in shape and size.
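A job like the one described above can be thought of as a simple mapping from part type to the required quantity per bag. The sketch below illustrates this idea; the part names and the `remaining` helper are illustrative assumptions, not taken from the actual system.

```python
# A "job" maps each part type to the number of pieces required in one bag.
# Part names are illustrative; the real system identifies parts by the
# QR code on their box.
job = {
    "bolt_m6": 4,
    "pin_small": 2,
    "protection_strip": 1,
}

def remaining(job, counted):
    """Return how many pieces of each part still need to be picked,
    given the counts picked so far."""
    return {part: qty - counted.get(part, 0) for part, qty in job.items()}
```

Because a job is just data, switching to a different daily job only means loading a different mapping.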
First, custom-made boxes are filled, each with only one kind of part. These boxes have a pushing mechanism inside and a QR code on top. Once filled, the boxes needed for a job (or for multiple consecutive jobs) can be placed in a grid, so that they stay in place. Under the pushing mechanism of each box sits a pneumatic cylinder, and every box has its own backlit plate underneath. When the cylinder pushes up the mechanism inside the box, parts fall out onto the plate. The camera detects those parts and the robot picks them up one at a time until it has the requested amount for that job. The parts are placed on a slide that leads to the bag, where they are counted to verify that every part ends up in the bag; the robot receives a signal once the right amount has passed. Jobs can easily be added or adjusted in the HMI, and this solution works for a wide range of differently shaped parts.
The custom-made boxes with the QR codes
The robot used for this system is a co-bot from Universal Robots. The gripper is from Robotiq; the gripper cushion is handmade from a soft, spongy material with a rubber cover around it. This makes it easier to grip the parts with enough friction.
A device that can run Python interfaces with the robot and the PLC.
The vision system is built on OpenCV for Python. A regular camera is placed above the boxes and the light plate.
First, the vision system looks for QR codes and reads them. It checks whether all the right QR codes are present and where they are placed. If a box is missing, the robot stops when it reaches the part in question and reports an error until the right box is placed.
The same camera and system are used for detecting the parts on the plate. A background image is taken with no parts on the plate, after which another picture is taken with the objects present; subtracting one image from the other leaves only the parts visible. The system gives the coordinates of the detected parts to the robot arm, which picks them up one by one.
An infrared sensor is used for counting the number of parts.
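Counting with such a beam sensor amounts to edge detection on the sensor signal: a part is counted the moment the beam first becomes blocked. A minimal sketch, assuming the sensor is sampled as a stream of boolean readings (the function and signal names are illustrative):

```python
def count_parts(samples, target):
    """Count parts passing an IR beam sensor (True = beam interrupted).
    A part is counted on each rising edge, i.e. when the beam first
    becomes blocked. Returns True once `target` parts have passed,
    which is the signal for the robot to stop picking."""
    count = 0
    previous = False
    for blocked in samples:
        if blocked and not previous:
            count += 1
            if count >= target:
                return True
        previous = blocked
    return False
```

In the real system this logic would run continuously on the PLC rather than over a finished list of samples.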
The lightplate with parts on it
Human machine interface (HMI)
The HMI lets the machine operator communicate with the machine. The main screen displays the current status of the robot and the jobs, and the user can navigate to the other screens from there. In the screen “Job selection” it is possible to select different jobs and press “Transfer” to send the information to the PLC; when the start button on the main screen is then pressed, the robot executes that specific job. In the screen “Job management” it is possible to edit, rename or delete existing jobs, or to create a new job by filling in the amount for each part. In the system screens the robot can be calibrated; during this calibration the QR codes are read and the background photo is taken. For the system screens and “Job management” the user needs to be logged in as administrator, so that jobs cannot be modified by a regular user. Users can be managed in the user administration screen, which can be found under the system screens.
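The job-management operations (create, rename, edit, delete) described above boil down to simple dictionary manipulation. The sketch below is a hypothetical illustration of that backing logic; the class and method names are not from the actual HMI:

```python
class JobStore:
    """Minimal sketch of the HMI's job management: create, rename,
    edit and delete jobs. Each job maps part names to amounts."""

    def __init__(self):
        self.jobs = {}

    def create(self, name, parts):
        self.jobs[name] = dict(parts)

    def rename(self, old, new):
        self.jobs[new] = self.jobs.pop(old)

    def edit(self, name, part, amount):
        self.jobs[name][part] = amount

    def delete(self, name):
        del self.jobs[name]
```

The administrator-only restriction would then simply gate access to these methods behind the HMI login.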
The main goal of this project was to make a prototype to check the feasibility of automating the manual pick-and-place job. The prototype that is delivered is capable of this and can be scaled up. The boxes can be placed randomly on the ‘grid’, and when a job is started the correct amount and type of parts will be picked up.
Upscaling the prototype does come at the cost of requiring a lot of room. To successfully pick up parts of varied sizes, many extra peripherals are required, which makes the initial idea of having just a robot arm, a workbench and some boxes infeasible.
In the end, a complete workspace setup will be required to automate the handling of so many varied parts.