AprilTags 
We also began working on detecting AprilTags and determining their positions using the python apriltag library. We wrote a script that detects multiple AprilTags from a camera feed and reports their IDs and positions in the image. On the left is a screenshot of the output video feed, which outlines each AprilTag and labels it with its ID. On the right is part of the terminal output for one of the tags, listing the positions of its center and corners. This information is printed for every tag and updates continuously with their live positions.
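The detection loop can be sketched as below. This is a minimal sketch assuming the `apriltag` and `opencv-python` packages; the camera index and the window/drawing details are assumptions, not our exact script.

```python
def describe(tag_id, center, corners):
    """Build the per-tag summary we print to the terminal:
    tag ID plus center and corner coordinates."""
    return {
        "id": tag_id,
        "center": tuple(float(v) for v in center),
        "corners": [tuple(float(v) for v in pt) for pt in corners],
    }

def main():
    # Imports kept local so the helper above is usable without a camera.
    import apriltag
    import cv2

    detector = apriltag.Detector()  # defaults to the tag36h11 family
    cap = cv2.VideoCapture(0)       # camera index 0 is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for det in detector.detect(gray):
            # Each detection carries tag_id, center, and 4 corner points.
            print(describe(det.tag_id, det.center, det.corners))
            for x, y in det.corners:  # outline the detected tag
                cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)
        cv2.imshow("AprilTags", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```

Calling `main()` runs the live loop; each frame re-prints every visible tag, which is what makes the terminal output update with live positions.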

Kinematic Math
Even though the kinematic math is abstracted away by the Interbotix API, we still developed the following blog post that explains how the WidowX works. We felt that it was important to understand what was going on behind the scenes even if we weren't manually completing the calculations, so we went through all of the math and how it's abstracted by the API.
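To give a flavor of the math the API handles for us, here is a toy two-link planar arm with forward and inverse kinematics. The link lengths are made-up values, not the real WidowX dimensions, and the real arm has more joints; this just illustrates the kind of calculation happening behind the scenes.

```python
import math

# Illustrative link lengths in meters (assumed, not the WidowX's).
L1, L2 = 0.20, 0.15

def forward(theta1, theta2):
    """End-effector (x, y) from joint angles: each link contributes
    its length rotated by the accumulated joint angles."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """One (elbow) inverse-kinematics solution via the law of cosines."""
    r2 = x * x + y * y
    c2 = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for float safety
    theta1 = math.atan2(y, x) - math.atan2(
        L2 * math.sin(theta2), L1 + L2 * math.cos(theta2)
    )
    return theta1, theta2
```

Round-tripping `inverse(*forward(t1, t2))` recovers the original angles (for the chosen elbow configuration), which is a handy sanity check on the math.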


Looking forward! 
Though we made good progress in the latest sprint, we still have a ways to go! To make sure we stay on track, we decided to break our remaining tasks into MVP tasks and stretch tasks. For our MVP we would like to accomplish the following:
- Get everyone computationally set up
- Camera module
    - Figure out where we want to put AprilTags (on pieces? chessboard corners?)
- 8x8 array of robot arm poses, one for each square
- Pickup heights for each piece
- Figure out how to pick up pieces on the very corners of the board without the arm collapsing (MVP?)
- GUI to integrate the chess engine
    - Convert an engine move to a physical move
    - Figure out which move the opponent made

In our MVP, the robot would only use the camera to identify the opponent's move. It would then calculate its own move and execute it by going to pre-saved poses.
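A plausible shape for the pre-saved pose table is an 8x8 array indexed by standard chess coordinates. The pose values below are placeholders; in the real MVP they would be recorded poses, but the square-to-index lookup would work the same way.

```python
FILES = "abcdefgh"

def square_to_index(square):
    """'e4' -> (file, rank) indices into the 8x8 pose array."""
    f = FILES.index(square[0])
    r = int(square[1]) - 1
    return f, r

# 8x8 array of (x, y, z) poses, one per square. These spacing values
# are placeholders; real entries would be taught/recorded arm poses.
poses = [[(f * 0.05, r * 0.05, 0.10) for r in range(8)] for f in range(8)]

def pose_for(square):
    """Look up the pre-saved arm pose for a square like 'a1'."""
    f, r = square_to_index(square)
    return poses[f][r]
```

With a table like this, executing an engine move reduces to two lookups: the pose for the source square and the pose for the destination square.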

For our stretch goals we would like to look at the following: 
- AprilTags on the edges of the board to get a geometric map of the chessboard (known distances between each square; pieces can be abstracted to the center point of each square)
- Array of poses for the graveyard
- Figure out how to read move status from the server so we don't have to do janky time.sleep stuff
- Camera
    - Select which camera we're using
    - Post-MVP: mounting location for the camera
    - Physical camera mount design/fab(?)
- Functional AprilTag code in Python: geometric poses between AprilTags (on the four corners of the chessboard) relative to the camera
- AprilTag code for transforming the camera frame to the arm frame
- Post-post-post-MVP: finger detection
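The corner-tag idea can be sketched as a fit from pixel coordinates to board coordinates. This sketch assumes a roughly top-down camera, so a least-squares affine fit is adequate; a full homography would be needed to handle significant perspective. All point values here are invented for illustration.

```python
import numpy as np

def fit_affine(pixel_pts, board_pts):
    """Least-squares affine map from pixel coordinates to board
    coordinates, fit from the four corner-tag centers."""
    P = np.hstack([np.asarray(pixel_pts, float), np.ones((len(pixel_pts), 1))])
    B = np.asarray(board_pts, float)
    A, *_ = np.linalg.lstsq(P, B, rcond=None)  # 3x2 transform matrix
    return A

def pixel_to_square(pt, A):
    """Map a pixel to (file, rank) on an 8x8 board measured in
    units of one square, truncating to the containing square."""
    x, y = np.array([pt[0], pt[1], 1.0]) @ A
    return int(np.clip(x, 0, 7)), int(np.clip(y, 0, 7))
```

For example, with corner tags seen at the four corners of an 800x800-pixel board region mapped to board coordinates (0,0) through (8,8), a piece seen at pixel (450, 150) lands on file 4, rank 1. Since pieces are abstracted to the center of their square, this is enough to tell which square the opponent touched.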


In this implementation we could dig deeper into the machine vision aspect of the project: not only analyzing the board with machine vision, but also finding more elegant, sensor-based methods of localization.

As we go into the next week, Mia and Dan will be working on the GUI, Eddie will be working on the pose array, Kate will continue working with AprilTags, and Will will be working on the literal edge cases of the chess board. We plan to have our MVP completed by Wednesday so that we have time to give our website some undivided attention.