Students: Gianni Meesters, André Hoogesteger, Diederik Ploem
The original situation
At the moment, Bakon uses a static spray head to apply release agent to baking molds. The release agent ensures that the molds release the product after baking; if this does not happen correctly, the product is damaged. The release agent also forms a protective layer over the mold, shielding it from the baking process.
Because the release agent sprayers are static, there is no easy way to cover a mold completely and effectively with release agent.
The solution consists of two parts: a vision system and a robot.
The robot has a spray gun mounted at the end of its arm, which sprays the molds passing by. In the industrial process this happens inline, so the robot is mounted at a conveyor belt that keeps running while the robot operates. The process is triggered by a signal from a sensor in the conveyor. This signal starts the script on the robot and initiates communication with the PC and PLC. The operator selects a “type” for the molds on the conveyor; the PLC reads this selection and sends it to the PC via the MQTT protocol. The PC in turn triggers the camera and uses the information from the PLC to detect the angle and position of the mold. This is converted into a path for the robot to follow, which is then sent to the robot over a socket. The robot executes the path and sprays accordingly. After the robot has finished spraying, the next mold can trigger the sensor.
After the sensor in the conveyor is triggered, a message is sent to the laptop. This message activates the camera through a Python script, and the camera returns a color frame and a depth frame. Depending on the selected program, the Python code uses either the color frame or the depth frame. For the turban tray and the bread tray, the depth frame is used: a binary image is created by making white all pixels whose depth matches the upper surface of the baking tray. This yields a black-and-white image containing the contours of the trays. A couple of OpenCV functions are applied to make the image reliable and free of noise, after which a few functions recognize and draw the outer contour of the tray. To obtain a consistent estimate of where the corners of the tray are, a rectangle is fitted as closely as possible over the contour. The angle of the tray is then calculated from the corner coordinates. With the calculated angle and one corner point, a path for the robot to move is computed. In the case of the turban trays, a circle is fitted around the contour and the center of the circle is sent to the robot. Because the flat baking tray has no height profile, the depth frame cannot be used to extract its corner coordinates; instead, the color frame is used. The color frame is first converted to grayscale, then a contour-finding function is applied. Because the baking tray is uniformly colored, it appears as one big contour. This contour is drawn in a binary image, and from that point the corner coordinates are extracted in the same way as for the bread trays.
During production, we had a lot of difficulty with the communication between the PLC and the PC. We eventually settled on MQTT after the PLC refused to work with Snap7. MQTT is a lightweight messaging protocol mainly used for IoT applications, but here it is used to speed up an industrial process.
We also had trouble connecting to the robot from the PC. We tried URX libraries, but these were not up to the task. We decided to implement our own strategy via socket communication, as our robot did not support real-time data exchange.
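In outline, the socket approach amounts to opening a TCP connection to the UR controller's secondary interface (port 30002) and sending URScript commands as plain text. A minimal sketch, where the robot IP and the motion parameters are placeholders:

```python
import socket

def format_movel(pose, a=0.5, v=0.25):
    """Build a URScript movel command; pose = (x, y, z, rx, ry, rz)."""
    # URScript expects a pose as p[...] in metres and radians
    return ("movel(p[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}], "
            "a={}, v={})\n".format(*pose, a, v))

def send_move(pose, robot_ip="192.168.0.20", port=30002):
    """Send one movel command to the controller's secondary interface."""
    with socket.create_connection((robot_ip, port), timeout=2.0) as sock:
        sock.sendall(format_movel(pose).encode("ascii"))
```

The spray path computed by the vision code can then be sent as a sequence of such commands.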
We also struggled with the rotation of the spray head. At first we could not figure out rotation vectors, but we eventually found a way to convert RPY (roll-pitch-yaw) angles to rotation vectors and fixed it that way.
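One way to do that conversion is via the rotation matrix: build R from the RPY angles, then extract the axis-angle (rotation vector) form that the UR controller expects. A minimal sketch, assuming the Z-Y-X Euler convention and ignoring the singularity near a 180° rotation:

```python
import numpy as np

def rpy_to_rotvec(roll, pitch, yaw):
    """Convert roll-pitch-yaw (radians) to a UR-style rotation vector."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    # Axis-angle extraction: angle from the trace, axis from the
    # antisymmetric part of R (breaks down when sin(theta) ~ 0)
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * axis
```

For example, a pure 90° yaw maps to the rotation vector (0, 0, pi/2), which is the form accepted in the rx, ry, rz fields of a URScript pose.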
We would like to thank the support staff of the Smart Manufacturing and Robotics minor, as well as Bakon Food Equipment and its employees, for their support.