Design Page
This page showcases the design aspects of the Sockrates project.
Design Criteria
Our project needed to meet three requirements: sensing, planning, and actuation. The desired functionality of each is described below.

Drafted Plan
- Actuation - Folding: Our folding actuation is driven by forward kinematics: we command Sawyer through specific joint configurations to perform each fold. To keep Sawyer's gripper from colliding with the table, we added a piece of hardware to the table that acts as an elevated platform with three prongs (see image). Sawyer starts by picking up the top end of the sock and dragging it over itself toward the bottom end. Using forward kinematics with pre-defined joint angles gave us the most consistent motion. Sawyer then picks the folded sock up from the middle and raises it to the external USB camera to begin the sensing phase.
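The folding routine boils down to stepping through a fixed sequence of joint-angle waypoints. The sketch below is illustrative only: the joint angles are made-up placeholders (the real values were hand-tuned on Sawyer), and on the robot `move_fn` would wrap `intera_interface.Limb.move_to_joint_positions` rather than a stub.

```python
# Sketch of the folding routine as a fixed sequence of joint-angle waypoints.
# The angles below are illustrative placeholders, NOT the tuned values used on Sawyer.

SAWYER_JOINTS = ["right_j0", "right_j1", "right_j2", "right_j3",
                 "right_j4", "right_j5", "right_j6"]

# Hypothetical waypoints: tuck -> grasp top of sock -> drag over -> lift folded sock to camera
FOLD_WAYPOINTS = [
    dict(zip(SAWYER_JOINTS, [0.00, -1.18, 0.00, 2.18, 0.00, 0.57, 3.14])),  # standard tuck
    dict(zip(SAWYER_JOINTS, [0.35, -0.60, 0.10, 1.40, -0.05, 0.90, 3.10])),  # grasp top end
    dict(zip(SAWYER_JOINTS, [0.35, -0.45, 0.10, 1.10, -0.05, 1.10, 3.10])),  # drag over itself
    dict(zip(SAWYER_JOINTS, [0.10, -0.80, 0.05, 1.70, 0.00, 0.75, 3.12])),  # lift to camera
]

def execute_fold(move_fn):
    """Step through each waypoint in order; move_fn would wrap
    intera_interface.Limb.move_to_joint_positions on the real robot."""
    for waypoint in FOLD_WAYPOINTS:
        move_fn(waypoint)
    return len(FOLD_WAYPOINTS)
```

A dry run can pass a recording stub as `move_fn` to verify the waypoint sequence without touching hardware.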
- Sensing - Color Detection: Using an external USB webcam attached to the monitor, we detect the color of the sock. We launch the usb_cam package and subscribe to the image_raw topic, which lets us parse the image the camera sees. At this point, Sawyer has picked up the sock and is holding it up to the camera. We then apply an HSV mask using OpenCV to extract the pink and green pixels in the image; if there are more green pixels than pink, we return green, and vice versa.
- Planning - AR Tag Tracking: Given the color (pink or green) returned by the sensing step, we plan our trajectory accordingly. If this is the first sock of its color (i.e. the first green or first pink sock), we move Sawyer to a position from which it can see the entire table, with both AR tags in view. We then move Sawyer along a linear trajectory to the relevant AR tag, using PID control to keep the motion smooth. We drop the sock at the correct AR tag and save the trajectory from the start position to the drop position. Saving this trajectory is essential because the growing pile of folded socks blocks Sawyer's view of the AR tag for later socks of the same color. For each remaining sock, we replay the saved trajectory for its color, so Sawyer still knows which path to take when sorting a color it has seen before.
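The save-and-replay logic is essentially a per-color cache: plan a trajectory to the AR tag the first time a color appears, then reuse the stored plan for every later sock of that color. A minimal sketch, where `plan_to_tag` is a hypothetical hook standing in for the real AR-tag lookup and linear-trajectory planner:

```python
class SockSorter:
    """Caches the first planned trajectory per color, since the growing
    sock pile hides the AR tag from Sawyer's camera on later iterations."""

    def __init__(self, plan_to_tag):
        # plan_to_tag(color) -> trajectory; hypothetical planner hook that
        # would locate the AR tag and plan a linear path under PID control.
        self._plan_to_tag = plan_to_tag
        self._saved = {}

    def trajectory_for(self, color):
        if color not in self._saved:
            # Only possible while the tag is still visible (first sock of this color).
            self._saved[color] = self._plan_to_tag(color)
        return self._saved[color]
```

The planner is called exactly once per color; every subsequent sock of that color replays the cached trajectory.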
Design Tradeoffs
When formulating our design, we chose to create a master script that runs multiple iterations of our sock folding algorithm. The main trade-off was that we had to integrate many different packages for the script to run properly, which proved exceedingly tedious. The result, however, was a more practical way to run the entire sock sorting pipeline from a single terminal (plus terminals for the necessary servers).

We also chose to fold the sock in a specific orientation: we placed the sock a certain way on our elevated platform, and the locations of the prongs limited where we could put it. We initially tried inverse kinematics but decided against it because the trajectories it produced were varying, inconsistent, and potentially unsafe. In comparison, the forward kinematics trajectory from Sawyer's standard tuck position to the folding positions was consistent and safe. Although this was a significant tradeoff, we prioritized safety and consistency to meet the goals of this project, which led to our use of precise joint angles and forward kinematics for the folding.

Finally, we chose to place the sock directly on the AR tag, which meant saving the first trajectory Sawyer planned for each color. This ensured consistency across repeated foldings in one cycle. However, it also meant the AR tags had to stay in the same place every time, so we lose some flexibility to move the tags around after the first cycle.

Sawyer Setup with External Camera and Raised Platform
Real-world Robustness?
In a real engineering application, our design could use some upgrades. While it performs reliably across multiple iterations, it is not the most efficient or durable, for two main reasons:
1. Occasionally, Sawyer's right-hand camera fails to detect the AR tag, which causes our system to end prematurely.
2. The pink hue of Sawyer's own body has, on more than one occasion, caused color sorting inaccuracies (i.e. incorrectly classifying a sock as pink) when the external camera is positioned in certain orientations.
We attempt to mitigate this error by cropping the external camera image to its center third before classification and by centering the camera as best we can on where the sock will be held, but this may not be 100% consistent. Thus, in a real engineering application, our design may not be entirely foolproof given these limitations. However, these are all design tradeoffs with feasible solutions that we could implement if we planned to turn this project into a commercially viable product.
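The center-third crop is a simple array slice; a sketch of how it keeps colored objects at the frame edges (such as Sawyer's pink hue) out of the classifier's input, assuming only that the image is a height x width x channels array:

```python
import numpy as np

def center_third(image):
    """Keep only the middle third of the image, both vertically and
    horizontally, so colored objects near the frame edges don't bias
    the pink/green pixel counts used for classification."""
    h, w = image.shape[:2]
    return image[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
```

The cropped array is then passed to the HSV classifier in place of the full frame.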