Veedah
Veedah is a low-power, cost-efficient robot. Built from parts from my high school's robotics club, it uses only two webcams to perceive depth and guide itself to a destination. Well, by destination I mean ...going in a straight line, for now. (Fun fact: you can't perceive depth using just one eye. Try catching a ball with one eye closed; it'll be a tad hard. Be safe ;) )
The goal for this project was to make something as cool as Spot, the robot made by Boston Dynamics. Spot is surely a work of art, but it uses pretty expensive hardware to operate, and I live in Africa, so that wasn't possible. But I didn't give up; I made a very basic version of it. Here's the catch: it has wheels on the bottom. Beat that, Boston Dynamics!!!
The way this robot works is that instead of the actuated legs Spot uses, Veedah uses simple servo motors on the hinges. Since it is just hinge-operated, it will be hard to turn, right? To solve that problem, I put wheels on the bottom. It would be a great experiment to use omnidirectional wheels instead; probably quite a scene to watch. Furthermore, the 'million-dollar' sensor suite used on Spot is replaced on Veedah by just two cameras.
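To show how bottom wheels solve the turning problem, here is a tiny differential-steering sketch. None of these names or numbers come from the actual build; it's just my illustration of mixing a forward command and a turn command into two wheel speeds:

```python
# Hypothetical differential-drive mixer for the bottom wheels.
# Values are illustrative, not from the real Veedah.

def wheel_speeds(forward, turn):
    """Mix forward (-1..1) and turn (-1..1) into (left, right) wheel speeds.

    A positive turn speeds up the left wheel and slows the right one,
    so the robot pivots without needing its hinged legs to articulate.
    """
    left = max(-1.0, min(1.0, forward + turn))
    right = max(-1.0, min(1.0, forward - turn))
    return left, right

print(wheel_speeds(0.5, 0.0))  # (0.5, 0.5)  -> straight ahead
print(wheel_speeds(0.0, 0.5))  # (0.5, -0.5) -> spin in place
```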
How do two webcams compare to Spot's LIDAR sensor? Are they reliable?
Hands down, Spot's is better, but in terms of price and efficiency, the two cameras win. Also, I don't have the budget for the LIDAR sensor Spot uses... SO :|.
Depth perception using stereo vision is simple trigonometry. You take two cameras with the same focal length and image quality and mount them on a common platform, pointing in the same direction. The same point in the scene shows up at slightly different pixel positions in the two images; that horizontal shift is called the disparity. Using the known distance between the two cameras (the baseline) together with the disparity, you can compute the distance from the platform to the point. For this you also need the camera's parameters: the focal length in pixels, the image dimensions, and the lens distortion. These can be calculated manually or found with camera calibration; I used the camera calibration tools provided by MATLAB.
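The triangulation above boils down to one formula: depth = focal length (in pixels) × baseline / disparity. Here is a minimal sketch; the focal length and baseline numbers are made up for illustration, not Veedah's actual values:

```python
# Depth from stereo disparity: a minimal sketch.
# The 700 px focal length and 6 cm baseline below are assumed values.

def depth_from_disparity(focal_px, baseline_cm, disparity_px):
    """Distance from the camera platform to a point seen by both cameras.

    The point appears at x_left and x_right in the two images;
    disparity = x_left - x_right. Then depth = f * B / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("point must be in front of both cameras")
    return focal_px * baseline_cm / disparity_px

# A 35 px disparity at 700 px focal length and 6 cm baseline:
print(depth_from_disparity(700, 6.0, 35))  # 120.0 (cm)
```

Note how depth falls as disparity grows: nearby objects shift a lot between the two views, faraway ones barely at all, which is exactly why one eye (one camera) isn't enough.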
After the image calibration was working, my second step was to figure out how to get the decision-making software to work. For this, I took inspiration from Jabrils's work on his flying drone: the box method. You draw a virtual box on the screen and then try to keep the target inside that box by moving the robot.
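The box method can be sketched in a few lines. This is my own simplified illustration, not the code from the build; the frame width and box coordinates are assumed:

```python
# Sketch of the "box method": steer so the target's x-position
# lands inside a virtual box in the middle of the frame.
# FRAME_WIDTH and the box bounds are assumed values.

FRAME_WIDTH = 640
BOX_LEFT, BOX_RIGHT = 270, 370  # virtual box around the frame center

def steer(target_x):
    """Return a command that pushes the target back into the box."""
    if target_x < BOX_LEFT:
        return "turn_left"   # target drifted left of the box
    if target_x > BOX_RIGHT:
        return "turn_right"  # target drifted right of the box
    return "forward"         # target is inside the box: keep going

print(steer(100))  # turn_left
print(steer(320))  # forward
```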
[Image: the output of the camera calibration]
The problem was that my camera setup was a bit unstable, and, well, it just didn't work out at first. So I added more glue to make it stable, and it worked!!! Not the best rig to make judgments from, but it worked!!! ...Well, kinda.
I changed the box method slightly: if there is a strongly red area inside a certain box, the robot moves a specific part and runs the corresponding routine.
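The red-area check could look something like this sketch, assuming frames arrive as NumPy RGB arrays; the thresholds and box coordinates are made up, not the values from the build:

```python
# Sketch of a "really red area" test on a region of an RGB frame.
# Thresholds (150, 60) are assumed, not tuned values from the project.
import numpy as np

def red_fraction(frame, box):
    """Fraction of pixels inside `box` that look strongly red.

    frame: H x W x 3 uint8 RGB image
    box:   (top, bottom, left, right) pixel bounds
    """
    top, bottom, left, right = box
    roi = frame[top:bottom, left:right].astype(int)
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    # "really red": red channel high and clearly above green and blue
    mask = (r > 150) & (r - g > 60) & (r - b > 60)
    return mask.mean()

# Tiny synthetic frame: a 4x4 image whose top half is pure red
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2, :, 0] = 255
print(red_fraction(frame, (0, 4, 0, 4)))  # 0.5
```

The robot would then act whenever `red_fraction` for a given box crosses some threshold, with a different action wired to each box.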
At the end of the project the process surely worked, but it was really power- and GPU-intensive, so I had to use my computer as the brain. That meant the robot had to stay tethered to a wire at all times, since the image processing took too much time and energy. Even then, the robot wasn't working perfectly and had issues making decisions.
This was my first project involving cameras and mechanical parts beyond LEGO Mindstorms, so I believe it went very well. I will surely look back at this project in the future and improve on it. But until then, I've got other interesting projects piled up. So let's hit those!!!