"Waldo" - Robotic 3D printed sensor actuated project
For my MEAM 5100 course at the University of Pennsylvania, I developed a robotic hand (the Waldo system) capable of mimicking human hand movements. The project combined hardware design and software integration to achieve precise motion control.
Hardware Design:
Actuation: Five SG90 servos drove the fingers through fishing-line tendons, with elastic paracord providing the return force, recreating natural hand movements.
Materials: 3D-printed PLA components for lightweight yet durable construction.
Input Mechanism: Flex sensors mounted on a glove provided the user input.
Circuitry: A custom PCB with op-amp impedance buffers conditioned the flex-sensor readings for accuracy (a signal-chain sketch follows this list).
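To make the signal chain concrete, here is a minimal sketch of converting a buffered flex-sensor divider voltage into a bend angle. The supply voltage, divider resistor, and sensor resistance range are assumed placeholder values, not the actual board's:

```python
# Sketch of the flex-sensor signal chain. All component values are assumed:
# a 10 kΩ divider resistor and a sensor sweeping ~25–100 kΩ over a 0–90° bend.
VCC = 3.3          # supply voltage (V)
R_FIXED = 10_000   # fixed divider resistor (Ω), assumed
R_FLAT = 25_000    # sensor resistance when flat (Ω), assumed
R_BENT = 100_000   # sensor resistance at full bend (Ω), assumed

def voltage_to_bend_angle(v_out: float) -> float:
    """Convert the buffered divider voltage into an approximate bend angle."""
    # Divider: v_out = VCC * R_FIXED / (R_FIXED + r_flex); solve for r_flex.
    r_flex = R_FIXED * (VCC - v_out) / v_out
    # Linearly interpolate between the flat and fully bent resistances.
    frac = (r_flex - R_FLAT) / (R_BENT - R_FLAT)
    return 90.0 * min(max(frac, 0.0), 1.0)

print(voltage_to_bend_angle(0.6))  # ≈ 24° for a partially bent finger
```

The op-amp buffer (voltage follower) matters here because a flex sensor's high resistance would otherwise load the ADC input and distort the divider voltage.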
Challenges and Innovations:
Transitioned from hinge/bolt joints to elastic mechanisms, reducing friction and improving reliability.
Optimized servo torque and fishing wire tension to ensure full finger flexion.
Software Integration:
Developed control algorithms that translate glove movements into servo actuation (see the mapping sketch after this list).
Integrated power-efficient circuits for simultaneous operation of input and output systems.
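As a sketch of the mapping logic, the snippet below linearly maps raw flex-sensor readings onto servo angles. The calibration bounds and the sample readings are hypothetical; the actual firmware ran on a microcontroller, but the mapping is the same:

```python
# Glove-to-servo mapping sketch, one flex sensor per finger.
FLEX_MIN, FLEX_MAX = 300, 800   # raw ADC readings at open/closed hand (assumed)
SERVO_MIN, SERVO_MAX = 0, 180   # servo command range in degrees

def flex_to_servo(raw: int) -> int:
    """Linearly map a raw flex-sensor reading onto a servo angle."""
    raw = min(max(raw, FLEX_MIN), FLEX_MAX)   # clamp to the calibration range
    frac = (raw - FLEX_MIN) / (FLEX_MAX - FLEX_MIN)
    return round(SERVO_MIN + frac * (SERVO_MAX - SERVO_MIN))

# One update cycle: read five sensors, command five servos (values fabricated).
readings = [310, 540, 790, 420, 660]
print([flex_to_servo(r) for r in readings])
```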
Results:
Functional robotic hand inspired by designs like the Ada and InMoov robotic hands.
Smooth, responsive motion that closely mimics human hand dynamics.
"Final Project - Throckinator" Autonomous Self driving ground robot
The Rise and Fall of Throckinator
Team 11 Final Project Report – MEAM 5100, University of Pennsylvania
For our MEAM 5100 final project, our team designed and built a competitive robotic system named Throckinator, optimized for the course's challenge-based competition.
1. Mechanical Design:
Platform: Adapted a two-wheel-drive system with a caster wheel for enhanced maneuverability and stability.
Motors: Upgraded to 12V, 100 RPM DC motors with magnetic encoders for precision and torque, overcoming the limitations of high-RPM motors from earlier iterations.
Custom Bumper: Designed a robust front bumper for pushing, defending, and button-pressing tasks, with a personalized “The Rock” design.
Sensor Mounts: Strategically positioned Time-of-Flight (ToF) and sonar sensors for wall following and autonomous navigation.
2. Electrical Design:
Control Board: Developed a modular proto-board with Molex connectors to simplify wiring and ensure reliable connectivity.
Power Systems: Utilized separate power sources: a 5 V LiPo power bank for the ESP32 microcontroller and sensors, and an 11.1 V LiPo battery for the motors and peripherals.
Noise Reduction: Mitigated ground noise from motors, ensuring cleaner sensor readings and improved system reliability.
3. Software Integration:
Microcontroller: Chose the ESP32-S2 Saola microcontroller for its GPIO capabilities, PWM support, and Wi-Fi functionality.
Control Interface: Designed a web-based control panel to manage robot modes (manual, idle, wall-following, and autonomous attack).
Functionality: Integrated PID control for precise motor actuation and developed algorithms for wall-following and autonomous attack strategies (a PID sketch follows this list).
Collaboration Tools: Implemented a scheduler, structured file system, and a shared Google Drive for effective teamwork.
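For illustration, here is a minimal discrete PID loop of the kind used for motor speed control. The gains, output limit, and example values are assumptions, not the tuned values from the robot:

```python
# Discrete PID controller sketch for wheel-speed control.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, out_limit: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))  # clamp to PWM range

# e.g. hold 100 RPM given an encoder-derived speed estimate (assumed gains)
pid = PID(kp=2.0, ki=0.5, kd=0.05, out_limit=255)
duty = pid.update(setpoint=100.0, measured=87.0, dt=0.01)
```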
4. Competition Strategy:
Focused on reliable wall-following to navigate the arena and execute attack modes effectively (see the steering sketch after this list).
Combined ToF sensor data and Vive position tracking for precise autonomous actions, successfully achieving target engagement.
I2C Communication: Addressed conflicts with multiple devices by assigning unique addresses and re-wiring connections.
Integration: Overcame issues with noise, power management, and code stability through iterative testing and troubleshooting.
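As an illustration of the wall-following idea, the sketch below steers so a right-side ToF reading tracks a target standoff distance. The target distance, gain, and normalization are assumed values, not the competition tuning:

```python
# Proportional wall-following sketch using one side-mounted ToF sensor.
TARGET_DIST_MM = 150.0   # desired standoff from the right-hand wall (assumed)
BASE_SPEED = 0.5         # nominal forward command, normalized to [0, 1]
KP_STEER = 0.004         # proportional steering gain (assumed)

def wall_follow_step(side_tof_mm: float) -> tuple[float, float]:
    """Return (left, right) wheel commands from one right-side ToF reading."""
    error = TARGET_DIST_MM - side_tof_mm   # > 0 means too close to the wall
    correction = KP_STEER * error
    left = max(0.0, min(1.0, BASE_SPEED - correction))
    right = max(0.0, min(1.0, BASE_SPEED + correction))
    return left, right

print(wall_follow_step(120.0))  # too close: right wheel speeds up, robot veers left
```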
This project was a collaborative effort with team members Ethan Sanchez and Mateusz Jaszczuk. Together, we applied mechatronics principles to create a competitive robot, enhancing our skills in system design, integration, and problem-solving. Special thanks to my teammates for their contributions to mechanical, electrical, and software subsystems, making this challenging project a rewarding experience.
Video: Autonomous ground robot navigating the course using sonar and Time-of-Flight (ToF) sensors
Left: control, state, and code-flow diagram. Right: wiring diagram of the robot.
Robotic Pick-and-Place System – MEAM 5200 Final Project
Course: MEAM 5200: Introduction to Robotics – University of Pennsylvania
Team: Andrik Puentes, Benjamin Aziel, Solomon Gonzalez, Mateusz Jaszczuk
Instructor: Prof. Cynthia Sung
Fall 2024
As part of UPenn’s MEAM 5200 course, our team designed and implemented a robotic system capable of autonomously picking and stacking blocks in a competitive setting using the Franka Emika Panda robot and a vision-based perception pipeline. The project required handling both static and dynamically moving objects with precision, speed, and robustness in both simulation (Gazebo) and hardware environments.
Inverse & Forward Kinematics:
Developed a full-stack motion planning pipeline using custom FK and IK solvers with null space optimization to center joints while achieving target end-effector poses. Implemented real-time joint velocity control via the pseudoinverse of the Jacobian matrix for smooth, safe motion.
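A minimal numpy sketch of that control law, assuming a 7-DOF arm such as the Panda; the Jacobian, joint-center configuration, and gain below are placeholders for the custom solvers described above:

```python
# Pseudoinverse velocity control with a null-space joint-centering term.
import numpy as np

Q_CENTER = np.zeros(7)   # mid-range joint configuration (placeholder)
K_NULL = 0.5             # null-space gain (assumed)

def joint_velocities(J: np.ndarray, v_desired: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Map a desired 6D end-effector twist to 7 joint velocities."""
    J_pinv = np.linalg.pinv(J)                  # 7x6 pseudoinverse of the 6x7 Jacobian
    primary = J_pinv @ v_desired                # track the desired end-effector twist
    N = np.eye(7) - J_pinv @ J                  # projector onto the Jacobian's null space
    secondary = N @ (K_NULL * (Q_CENTER - q))   # center joints without disturbing the task
    return primary + secondary

# Example with a random Jacobian standing in for the Panda's.
J = np.random.randn(6, 7)
dq = joint_velocities(J, v_desired=np.array([0.05, 0, 0, 0, 0, 0]), q=0.3 * np.ones(7))
```

Because the secondary term is projected through the null space, it nudges the joints toward mid-range without perturbing the commanded end-effector motion.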
Vision-Based Perception with AprilTags:
Designed an end-effector mounted vision system to detect AprilTags on blocks. Transformed detected poses from the camera frame to the robot’s global coordinate system, enabling accurate grasping and manipulation.
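That transformation is a composition of homogeneous transforms: T_base_tag = T_base_ee · T_ee_cam · T_cam_tag. A minimal sketch, assuming T_base_ee comes from forward kinematics and T_ee_cam from the camera-mount calibration (identity placeholders here):

```python
# Lift an AprilTag detection from the camera frame into the robot base frame.
import numpy as np

def tag_pose_in_base(T_base_ee: np.ndarray,
                     T_ee_cam: np.ndarray,
                     T_cam_tag: np.ndarray) -> np.ndarray:
    """Compose 4x4 homogeneous transforms to express the tag in the base frame."""
    return T_base_ee @ T_ee_cam @ T_cam_tag

# Identity matrices stand in for real FK, calibration, and detector output.
I = np.eye(4)
print(tag_pose_in_base(I, I, I))
```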
Static Block Manipulation:
Created a modular routine to detect, approach, and stack static blocks using preconfigured joint positions. Accounted for sensor offsets, gripper dynamics, and detection inconsistencies between simulation and hardware.
Dynamic Block Manipulation & Prediction:
Implemented real-time trajectory prediction for moving blocks on a rotating turntable. Accounted for angular displacement and environmental delays to predict future poses and time the robot’s grasping motion.
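A simplified sketch of that prediction, modeling the block as rotating about the turntable center at a constant angular speed; the speed, latency, and coordinates are assumed values:

```python
# Predict where a block on the turntable will be after a lookahead interval.
import numpy as np

OMEGA = 0.2     # turntable angular speed (rad/s), assumed
LATENCY = 0.4   # detection + planning delay (s), assumed

def predict_block_xy(p_now: np.ndarray, center: np.ndarray, t_ahead: float) -> np.ndarray:
    """Rotate the block's current xy position about the turntable center."""
    theta = OMEGA * (t_ahead + LATENCY)   # total angular displacement
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return center + R @ (p_now - center)

p_future = predict_block_xy(np.array([0.55, 0.10]), np.array([0.50, 0.00]), t_ahead=1.0)
```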
Stacking Algorithm:
Designed a logic-based system to calculate stack heights and place blocks with consistent alignment. Dynamically adjusted stacking heights based on the number of blocks placed and ensured collision-free placements.
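A minimal sketch of the stack-height bookkeeping, with assumed block dimensions standing in for the real ones:

```python
# Compute the release height for the next block in the stack.
BLOCK_H = 0.05      # block height (m), assumed
BASE_Z = 0.20       # z of the platform's top surface (m), assumed
CLEARANCE = 0.005   # small drop height above the stack to avoid collisions (m)

def place_height(n_placed: int) -> float:
    """Target z for releasing the (n_placed + 1)-th block."""
    return BASE_Z + n_placed * BLOCK_H + CLEARANCE

print([round(place_height(n), 3) for n in range(4)])  # rising place targets
```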
Hardware Integration & Tuning:
Tuned joint speeds, camera calibration, gripper force, and sensor tolerances. Addressed real-world challenges such as camera view angle, transformation matrix discrepancies, and turntable misalignments.
Results:
Advanced to the quarterfinals of the final competition against other teams.
Achieved a 100% success rate in static block stacking in simulation and in most hardware runs.
Reached a 57% success rate in dynamic block stacking on hardware, with results that improved with additional calibration.
Developed a modular, team-agnostic codebase adaptable to either robot station (blue/red) and capable of real-time adjustments.
Key Takeaways:
Bridging the gap between simulation and hardware requires significant calibration, especially for camera transforms and object detection.
Real-time robotics systems demand robust timing control, hardware awareness, and fail-safe logic for edge cases such as failed grasps or block misalignment.
Modular design and separation of the perception and control pipelines enable quick iteration and debugging.
Map of the virtual environment in ROS Gazebo
Red and Blue Task Configurations with Manipulability Indices
Panda manipulator pick-and-stack sequence for a static block in simulation. From left to right: the manipulator approaches the detection configuration, approaches the block, grasps it, moves above the placing tower, and gently stacks the block on top.
Panda manipulator pick-and-stack sequence for a dynamic block during hardware testing. From left to right: the manipulator detects the block, predicts its trajectory, aligns with it, grasps it, and lifts it above the turntable.
Panda manipulator pick-and-stack sequence for a static block during the competition. From left to right: the manipulator approaches the detection configuration, grasps the block, gently stacks it on top of the tower, and retreats to a safe position above the tower.
Formal technical report of the final project