Our overarching goal is to explore why robots are limited to factory environments and why we don’t have them cooking for us and folding our clothes yet.
In our journey, we plan to publish our progress, tutorials, code, and workshops.
Right now, we’re building software/hardware for a robot arm to play chess.
From this, we hope to validate our fundamental understanding of robot arm design, control (using control systems and/or reinforcement learning), sensor systems (vision, encoders, tactile), and mechanical actuator systems (servos, steppers, gearboxes).
We currently have two hardware efforts:
- Designing our own arm, Mr. Janktastic, from scratch
- Building the open-source BCN3D Moveo arm to give the software side a more reliable hardware platform for experimenting with vision and RL systems
We're working on a variety of problems in vision, control, and high-level planning (RL soon!). See the Software Docs for a deeper dive.
What have we done?
- Got object detection working on chess pieces with 90%+ accuracy using YOLOv5, usable from ROS
- Put together a 500+ chess-piece dataset for detection
- Prototype gripper fingers that can pick up chess pieces decently well
- Created Gazebo simulation that is controlled by MoveIt pipeline, including a simulated camera, chessboard, and chess pieces
- Got an early version of protoarm to stack some boxes using IK and trajectory planning from MoveIt and ROS
- Designed a prototype 6-DOF arm
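To give a feel for how detections can feed the chess logic, here's a minimal sketch of mapping a detector's bounding-box center to a board square. This is not the project's actual code: the function names, the assumption that the board is axis-aligned in the image, and the pixel bounds are all illustrative.

```python
# Hypothetical helper: map a detection's bbox center (in pixels) to a
# chess square, assuming the board is axis-aligned in the image and its
# pixel bounds are known, with a1 at the image's bottom-left corner.

def square_of(center, board_px):
    """center: (x, y) pixel center of a detection's bounding box.
    board_px: (x0, y0, x1, y1) pixel bounds of the board."""
    x0, y0, x1, y1 = board_px
    file_idx = int((center[0] - x0) / (x1 - x0) * 8)  # 0..7 -> files a..h
    rank_idx = int((y1 - center[1]) / (y1 - y0) * 8)  # image y grows downward
    file_idx = min(max(file_idx, 0), 7)               # clamp edge detections
    rank_idx = min(max(rank_idx, 0), 7)
    return "abcdefgh"[file_idx] + str(rank_idx + 1)

# Example: an 800x800-pixel board at the image origin
board = (0, 0, 800, 800)
print(square_of((50, 750), board))   # a1 (bottom-left of the image)
print(square_of((450, 350), board))  # e5
```

In practice the board won't be perfectly axis-aligned, so a homography from detected board corners would replace the simple linear mapping, but the square-indexing step stays the same.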
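The box-stacking milestone leaned on MoveIt for IK and trajectory planning. As a toy illustration of the kind of problem those solvers handle, here is a closed-form IK for a 2-link planar arm; the link lengths and the elbow-down convention are assumptions for the sketch, not parameters of protoarm.

```python
# Toy closed-form inverse kinematics for a 2-link planar arm,
# illustrating what IK solvers compute for the full 6-DOF arm.
import math

def ik_2link(x, y, l1, l2):
    """Return (shoulder, elbow) joint angles in radians that place the
    end effector at (x, y); elbow-down solution via the law of cosines.
    Raises ValueError if the target is out of reach."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                           # elbow angle
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)   # shoulder angle
    return theta1, theta2

# Sanity check: forward kinematics should recover the target
t1, t2 = ik_2link(1.0, 1.0, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
print(round(fx, 6), round(fy, 6))  # 1.0 1.0
```

MoveIt does this numerically in 6-DOF joint space with collision checking on top, but the core question (which joint angles reach a given pose?) is the same.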