Update May 2020: The Kickstarter campaign for the Make A Robot Kit featured in this article is now live! Check it out here.
First article in a series about Line Following and Lane Following with CV
Original article by Dmitry Maslov published on Hackster.io
Things used in this project:
- Seeed Sipeed MAIX-I module WiFi version
This is going to be a series of articles about using computer vision for line following and lane keeping. We’ll use the K210-based cyberEye board (a modified version of Maixduino), which supports OpenMV and also features a KPU, a neural network inference accelerator. As the reference mobile platform we will use M.A.R.K. (short for Make A Robot Kit; I’ll just call it MARK in this article 🙂), a robotics platform for educators and makers. I am on the team currently developing it, so I decided to share some of its inner workings with my readers. MARK is an open source project, and although the examples in these articles are written for cyberEye and MARK, you can certainly apply them to a different platform with some modifications.
Let’s buckle up and begin!
I think for most people who are into robotics, one of our first projects was a simple IR sensor line follower. There are different implementations of the algorithm, depending on the number of IR sensors, but they all boil down to the same principle: measuring the intensity of IR light bouncing back from the surface.
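The classic IR follower logic can be sketched in a few lines. This is a toy illustration, not MARK's code; the threshold value and the command names are assumptions for the sake of the example.

```python
# Minimal sketch of a two-sensor IR line follower.
# THRESHOLD and the sensor scale (0-1023 ADC counts) are illustrative assumptions.
THRESHOLD = 500  # readings above this mean "dark surface" (sensor is over the line)

def steer(left_ir, right_ir):
    """Return a steering command from two IR reflectance readings."""
    left_on = left_ir > THRESHOLD
    right_on = right_ir > THRESHOLD
    if left_on and right_on:
        return "forward"    # both sensors over the line: keep going straight
    if left_on:
        return "turn_left"  # line drifted toward the left sensor: steer back
    if right_on:
        return "turn_right"
    return "search"         # line lost entirely

print(steer(800, 820))  # both sensors over the dark line -> "forward"
```

With more sensors the same idea generalizes to a weighted average of sensor positions, which is conceptually very close to what the CV version below does with regions of the camera image.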
One step in complexity above that is computer vision line following. For MARK there are two options to use CV line following:
- Graphical programming in Codecraft
- Micropython in MaixPy IDE
The first option is well suited for education, since it allows students to explore the available parameters and the internal mechanics of the algorithm without needing coding skills.
You can have a look at it at https://ide.tinkergen.com/ . Select MARK(cyberEye) from the devices menu and go to the Machine Vision tab; you’ll see three blocks related to CV line following:
- Set line identification color to black(0-64)/white(128-255)
- Set line identification region weight A: B: C:
- Turn angle
The complete CV line following code looks as follows (we add a servo block to tilt the camera):
The Set line identification region weight A: B: C: block is useful when you have a dotted line or particularly sharp angles, and also to account for different camera angles. It tweaks how sensitive the Turn angle is to the line’s deviation from the center of the screen in three different regions of interest (ROIs). Let’s look at two examples to gain an empirical understanding of how it works.
In the first two images, a straight line produces a turn angle of 0 degrees, which means go straight. In images 3 and 4, the camera is facing a turn on the map, and the output of the turn angle function is 3, which means a slight turn to the right. Isn’t it supposed to be a turn left? It’s because with our default region weights (A:70 B:50 C:30), the turn angle is influenced most by the line segment in region A – and as we can see, the black line segment in region A is on the right! Finally, in the last picture, I changed the region weights to (A:30 B:50 C:70) and now the turn angle outputs -9, which means a medium-speed turn to the left. That’s because now the turn angle is influenced most by the line segment in region C.
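The effect of the weights can be reproduced with a toy calculation. This is not MARK's actual code; the per-region deviation values are made up to mimic the screenshots (line to the right in region A, near center in B, to the left in C).

```python
# Toy illustration of how region weights bias the overall turn direction.
def weighted_deviation(deviations, weights):
    """Weighted average of per-region line deviations (pixels from image center)."""
    total = sum(d * w for d, w in zip(deviations, weights))
    return total / sum(weights)

# Hypothetical deviations for regions A, B, C (positive = line is to the right):
devs = [20, 0, -15]

print(weighted_deviation(devs, [70, 50, 30]))  # A dominates -> positive (turn right)
print(weighted_deviation(devs, [30, 50, 70]))  # C dominates -> negative (turn left)
```

The same deviations flip the sign of the result purely because of the weights, which is exactly the A-weighted vs. C-weighted behavior shown in the screenshots.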
Here is a video of basic CV line following and some more advanced examples, all programmed in Codecraft.
For MicroPython, you can use the MARK high-level API, which is what powers the Codecraft blocks. The CV line following part lives in camera.py and consists of the following functions:
These functions directly correspond to blocks in Codecraft.
If you need to tweak the algorithm further, you can find it in the camera.py file. We based it on the grayscale line following algorithm from OpenMV, which you can find here:
The OpenMV team did a terrific job of explaining the algorithm behind CV line following in the comments of the script, and I would like to supplement their text explanation with pictures for people who, like me 🙂, understand graphical representations better.
Here we specify the ROIs (regions of interest) in the picture and assign a “weight” to each one. We also calculate the weight divisor as the sum of all weights; in our case that’s 0.7 + 0.5 + 0.3 = 1.5.
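In the OpenMV script this is a list of `(x, y, w, h, weight)` tuples. The pixel coordinates below are illustrative values for a 160x120 image, not copied from MARK's camera.py; only the weights (0.7/0.5/0.3) match the numbers above.

```python
# ROI definitions in the style of the OpenMV grayscale line-following script:
# each tuple is (x, y, w, h, weight). Coordinates are illustrative assumptions.
ROIS = [
    (0,   0, 160, 20, 0.7),  # region A, weight 0.7
    (0,  50, 160, 20, 0.5),  # region B, weight 0.5
    (0, 100, 160, 20, 0.3),  # region C, weight 0.3
]

# The weight divisor is simply the sum of all ROI weights:
weight_sum = sum(r[4] for r in ROIS)
print(weight_sum)  # 0.7 + 0.5 + 0.3 = 1.5
```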
For every region of interest we
a) find the blobs
b) find the largest blob
c) add its weighted x-coordinate to the centroid sum
In the end we have the center_pos variable, which tells us the deviation of the line from the center of the image.
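Steps a) to c) can be sketched as follows. In the real script, `img.find_blobs()` does the blob detection per ROI; here the blob results are stand-in dictionaries carrying only the fields this step needs (pixel count and centroid x), and the ROI coordinates are illustrative assumptions.

```python
ROIS = [  # (x, y, w, h, weight) -- illustrative values for a 160x120 image
    (0,   0, 160, 20, 0.7),
    (0,  50, 160, 20, 0.5),
    (0, 100, 160, 20, 0.3),
]

def center_position(blobs_per_roi, rois):
    """Weighted centroid of the largest blob in each ROI, divided by the weight sum."""
    centroid_sum = 0
    for blobs, roi in zip(blobs_per_roi, rois):
        if blobs:                                            # a) blobs were found
            largest = max(blobs, key=lambda b: b["pixels"])  # b) pick the largest blob
            centroid_sum += largest["cx"] * roi[4]           # c) add its weighted cx
    return centroid_sum / sum(r[4] for r in rois)            # divide by weight sum

# One fake blob per region, all slightly right of the 80 px image center:
fake = [[{"pixels": 500, "cx": 90}],
        [{"pixels": 400, "cx": 85}],
        [{"pixels": 300, "cx": 82}]]
print(center_position(fake, ROIS))  # a value a bit above 80: line is to the right
```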
In our case it comes out to 86.33.
Okay, when you see this you might think to yourself, “whatever came before was quite clear, but what is this black magic?” Here is the description from the OpenMV script comments:
The equation below is just computing the angle of a triangle where the opposite side of the triangle is the deviation of the center position from the center and the adjacent side is half the Y res. This limits the angle output to around -45 to 45. (It’s not quite -45 and 45).
It’s difficult to grasp at first, but once you do the drawing, it makes perfect sense. We are trying to find an angle of a right triangle, with its right angle (90 degrees) located at the center of the screen. When the deviation is 0, the angle is 0 as well, which means go straight. The bigger the deviation, the larger the angle.
If we plug our numbers from above into the formula, we get:
-atan((86.33 - 80)/60) = -atan(0.1055) ≈ -0.105 rad, or about -6.0 degrees
That’s a slight turn to the left, which sounds reasonable given the image we are seeing.
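You can verify the computation with a couple of lines of Python, plugging in the article's numbers: center_pos = 86.33 on a 160x120 image, so the center x is 80 and half the Y resolution is 60.

```python
import math

# Turn-angle formula from the OpenMV script, with the numbers used above.
center_pos = 86.33
deflection_angle = -math.atan((center_pos - 80) / 60)

print(round(deflection_angle, 4))                # -0.1051 (radians)
print(round(math.degrees(deflection_angle), 2))  # -6.02 (degrees)
```

Because atan saturates, even a line at the very edge of the frame produces an angle of under 45 degrees, which is why the script comments note the output is limited to roughly -45..45.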
Stay tuned for more articles from me and updates on the MARK Kickstarter campaign. In the next article of The Road from Line Following to Lane Following series, we will look at lane keeping algorithms and apply deep learning to that task as well.
Until the next time and stay safe from the coronavirus!