2X-RHex - TRUSSES Independent Study - UPenn MEAM MS
For my independent study at the University of Pennsylvania, I designed and built 2X-RHex, a two-times-scaled, RHex-inspired hexapod, as a mission-relevant testbed for cooperative, payload-enabled robotics. The platform was developed to support TRUSSES-style heterogeneous robot collaboration, with a focus on lunar-analog rescue and regolith (sand) traversal, where robots may need to dock, brace, pull, push, or recover one another in challenging terrain.
Platform Design
2x-scaled XRHex mechanical + electrical system designed for increased payload-to-body-weight capability and robust field operation.
Iterative leg development (RHex-style compliant leg concepts) to balance traction, sinkage resistance, and durability in sandy environments.
Lightweight chassis + protective enclosure/cladding with design decisions guided by engineering robustness and testing needs.
Compute + autonomy architecture centered around ROS2 for modular control and inter-robot communication.
Mobility + Cooperation Roadmap
Forward alternating tripod gait planned as the baseline locomotion mode.
Designed toward cooperative behaviors (e.g., brace-and-pull, push-and-drag, anchor-assisted recovery) to enable rescue-style scenarios.
Exploratory work toward a holonomic/efficient-turning variant via leg/drivetrain geometry changes or hybrid leg modules.
Scaling the RHex lineage for “doing work,” not just locomotion: moving from mobility-as-an-end to a platform that can meaningfully support payload-bearing and cooperative force transfer.
Design-build-test iteration: rapid refinement of leg and chassis designs to improve performance while maintaining simplicity and mechanical reliability.
System integration for collaboration: building the software and hardware foundation needed for multi-robot docking, coordination, and cooperative gaits.
Completed mechanical and electrical design of the 2× platform and validated core platform performance through initial testing.
Gathered early experimental data on mobility capabilities and design tradeoffs (leg geometry, chassis robustness, and terrain interaction).
Implemented ROS2 integration to support future cooperative behaviors and system-level demonstrations.
Build and tune a stable alternating tripod gait and integrate it into the full system.
Run controlled sandbed/regolith-style tests (including single-leg studies) to evaluate sinkage and failure modes across traversals.
Demonstrate docking + cooperative rescue behaviors (brace/pull, push/drag, recovery) as a system-level milestone.
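The baseline alternating tripod gait above is commonly parameterized in the RHex literature as a "Buehler clock": each leg sweeps slowly through a small stance arc and recirculates quickly through the rest of the circle, with the two tripods half a cycle out of phase. A minimal sketch of that idea follows; the period, stance arc, duty fraction, and tripod grouping are illustrative placeholders, not tuned 2X-RHex values.

```python
import math

def buehler_clock(t, period=1.0, phi_s=0.8, duty=0.6, offset=0.0):
    """Map time to a leg angle (rad) for one RHex-style leg.

    During the stance fraction `duty` of the cycle the leg sweeps slowly
    through the slow arc `phi_s`; during flight it recirculates quickly
    through the remaining 2*pi - phi_s. All parameters are illustrative.
    """
    phase = ((t / period) + offset) % 1.0
    if phase < duty:  # stance: slow sweep from -phi_s/2 to +phi_s/2
        return -phi_s / 2 + (phase / duty) * phi_s
    # flight: fast recirculation through the rest of the circle
    f = (phase - duty) / (1.0 - duty)
    return phi_s / 2 + f * (2 * math.pi - phi_s)

def tripod_targets(t):
    """Alternating tripod: legs {0, 3, 4} vs {1, 2, 5}, half a cycle apart."""
    return [buehler_clock(t, offset=0.0 if i in (0, 3, 4) else 0.5)
            for i in range(6)]
```

The key property is continuity at the stance/flight handoff: both branches meet at +phi_s/2, so the commanded angle never jumps.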
2X-RHex at AMES testing - January 2026
Derived a GRF-to-structure pipeline to size leg cross-sections using von Mises + bearing constraints across dynamic load cases.
Assumptions:
Leg modeled as a rectangular beam section (stacked plates)
Loads include bending and torsion from tangential + normal ground reaction forces (GRFs)
Constraints: allowable stress (via factor of safety) + bearing load + width limit
Impact:
This analysis directly informed the leg plate stack used in the V1 prototype and set conservative load envelopes for gait and payload testing
Chosen design sits just under the allowable stress to minimize mass; higher λ_dyn/FOS shifts the feasible region toward thicker plate stacks.
GRF Leg Design Sweep (2X-RHex)
Modeled bending + torsion loads from GRFs and ran a discrete search over waterjet plate stacks to minimize leg mass subject to von Mises + bearing constraints.
Selected design (λ_dyn=1): t=4.763 mm, b_eff=4.76 mm, h=26.15 mm → 0.112 kg/leg, σ_vm=264.9 MPa < σ_allow=275 MPa.
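The discrete search described above can be sketched as follows. Everything numeric here is an illustrative placeholder, not the actual 2X-RHex analysis: the material properties, GRF magnitudes, moment arm, torsion offset, plate thicknesses, and the thin-rectangle torsion approximation all stand in for the real values and models used in the study.

```python
import math
from itertools import product

# Illustrative material / load assumptions (aluminum-like; NOT the real values)
SIGMA_ALLOW = 275e6      # Pa, allowable stress after factor of safety
RHO = 2810.0             # kg/m^3
F_N, F_T = 400.0, 160.0  # N, normal / tangential GRF (lambda_dyn * static)
L = 0.165                # m, effective leg moment arm
R_OFF = 0.02             # m, lateral offset producing torsion from F_T
B_MAX = 0.030            # m, width limit on the plate stack
PLATE_T = [0.003175, 0.004763, 0.006350]  # m, waterjet plate stock

def von_mises(b, h):
    """Peak von Mises stress in a b x h rectangular section (b = stack width)."""
    M = math.hypot(F_N, F_T) * L            # combined bending moment
    sigma = 6 * M / (b * h**2)              # outer-fiber bending stress
    T = F_T * R_OFF                         # torsion from offset tangential GRF
    tau = T / (0.208 * h * b**2)            # thin-rectangle torsion approximation
    return math.sqrt(sigma**2 + 3 * tau**2)

# Discrete sweep over plate count, plate thickness, and section height,
# keeping the lightest stack that satisfies stress and width constraints.
best = None
for n, t, h in product(range(1, 5), PLATE_T, [0.020, 0.026, 0.032]):
    b = n * t
    if b > B_MAX or von_mises(b, h) > SIGMA_ALLOW:
        continue
    mass = RHO * b * h * L                  # prismatic-beam mass proxy
    if best is None or mass < best[0]:
        best = (mass, n, t, h)
```

A bearing-load check per pin would filter the same candidate set in the real pipeline; it is omitted here for brevity.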
"Waldo" - Sensor-actuated, 3D-printed robotic hand project
For my MEAM5100 course at the University of Pennsylvania, I developed a robotic hand (Waldo System) capable of mimicking human hand movements. The project combined hardware and software integration to achieve precise motion control.
Hardware Design:
Actuation: Five SG90 servos, driving the fingers through fishing line and elastic paracord, recreated natural hand movements.
Materials: 3D-printed PLA components for lightweight yet durable construction.
Input Mechanism: Flex sensors mounted on a glove provided the user input.
Circuitry: Custom PCB with op-amp impedance buffers conditioned the flex-sensor readings for accuracy.
Challenges and Innovations:
Transitioned from hinge/bolt joints to elastic mechanisms, reducing friction and improving reliability.
Optimized servo torque and fishing wire tension to ensure full finger flexion.
Software Integration:
Developed control algorithms to translate glove movements into servo actuation.
Integrated power-efficient circuits for simultaneous operation of input and output systems.
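The core of the glove-to-hand mapping is a per-finger calibration from flex-sensor ADC reading to servo angle. The sketch below shows that mapping in Python for clarity (the real firmware ran on a microcontroller); the ADC calibration endpoints are hypothetical values for one finger, not measured Waldo numbers.

```python
def flex_to_servo(adc, adc_straight=520, adc_bent=780,
                  angle_min=0.0, angle_max=180.0):
    """Map a flex-sensor ADC reading to a servo angle in degrees.

    adc_straight / adc_bent are hypothetical calibration endpoints for one
    finger; each finger gets its own pair in the real glove.
    """
    # Clamp to the calibrated range, then linearly interpolate.
    adc = max(adc_straight, min(adc_bent, adc))
    frac = (adc - adc_straight) / (adc_bent - adc_straight)
    return angle_min + frac * (angle_max - angle_min)
```

Clamping first means a noisy reading outside the calibrated range saturates the finger rather than commanding the servo past its travel.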
Results:
Functional robotic hand inspired by designs like the Ada and InMoov robotic hands.
Smooth, responsive motion that closely mimics human hand dynamics.
Autonomous Self-Driving Ground Robot
Team 11 Final Project Report – MEAM 5100, University of Pennsylvania
For our MEAM5100 final project, our team designed and developed a competitive robotic system named Throckinator, inspired by the mechanics of autonomous systems and optimized for the course’s challenge-based competition.
1. Mechanical Design:
Platform: Adapted a two-wheel-drive system with a caster wheel for enhanced maneuverability and stability.
Motors: Upgraded to 12V, 100 RPM DC motors with magnetic encoders for precision and torque, overcoming the limitations of high-RPM motors from earlier iterations.
Custom Bumper: Designed a robust front bumper for pushing, defending, and button-pressing tasks, with a personalized “The Rock” design.
Sensor Mounts: Strategically positioned Time-of-Flight (ToF) and sonar sensors for wall following and autonomous navigation.
2. Electrical Design:
Control Board: Developed a modular proto-board with Molex connectors to simplify wiring and ensure reliable connectivity.
Power Systems: Utilized separate power sources: a 5V LiPo power bank for the ESP32 microcontroller and sensors, and an 11.1V LiPo battery for motors and peripherals.
Noise Reduction: Mitigated ground noise from motors, ensuring cleaner sensor readings and improved system reliability.
3. Software Integration:
Microcontroller: Chose the ESP32-S2 Saola microcontroller for its GPIO capabilities, PWM support, and Wi-Fi functionality.
Control Interface: Designed a web-based control panel to manage robot modes (manual, idle, wall-following, and autonomous attack).
Functionality: Integrated PID control for precise motor actuation and developed algorithms for wall-following and autonomous attack strategies.
Collaboration Tools: Implemented a scheduler, structured file system, and a shared Google Drive for effective teamwork.
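A minimal sketch of the PID loop used for motor actuation, with output clamped to a PWM-style range; the gains and limits here are illustrative, not the tuned Throckinator values, and the simple saturation back-off stands in for whatever anti-windup the real code used.

```python
class PID:
    """Minimal PID for wheel-speed control; gains are illustrative."""

    def __init__(self, kp, ki, kd, out_limit=255.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit   # e.g. 8-bit PWM duty range
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        if abs(out) > self.out_limit:
            self.integral -= err * dt  # undo integration while saturated
            out = max(-self.out_limit, min(self.out_limit, out))
        return out
```

The same structure serves wall-following by treating the ToF-measured wall distance as the process variable and steering correction as the output.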
4. Competition Strategy:
Focused on reliable wall-following to navigate the arena and perform attack modes effectively.
Combined ToF sensor data and Vive position tracking for precise autonomous actions, successfully achieving target engagement.
5. Challenges:
I2C Communication: Addressed conflicts with multiple devices by assigning unique addresses and rewiring connections.
Integration: Overcame issues with noise, power management, and code stability through iterative testing and troubleshooting.
This project was a collaborative effort with team members Ethan Sanchez and Mateusz Jaszczuk. Together, we applied mechatronics principles to create a competitive robot, enhancing our skills in system design, integration, and problem-solving. Special thanks to my teammates for their contributions to mechanical, electrical, and software subsystems, making this challenging project a rewarding experience.
Video: Autonomous ground robot navigating the course using sonar and Time-of-Flight sensors
Left: Control, state and code flow diagram Right: Wiring diagram of robot
Robotic Pick-and-Place System – MEAM 5200 Final Project
Map of virtual environment using ROS Gazebo
Red and Blue Task Configurations with Manipulability Indexes
Course: MEAM 5200: Introduction to Robotics – University of Pennsylvania
Team: Andrik Puentes, Benjamin Aziel, Solomon Gonzalez, Mateusz Jaszczuk
Instructor: Prof. Cynthia Sung
Fall 2024
As part of UPenn’s MEAM 5200 course, our team designed and implemented a robotic system capable of autonomously picking and stacking blocks in a competitive setting using the Franka Emika Panda robot and a vision-based perception pipeline. The project required handling both static and dynamically moving objects with precision, speed, and robustness in both simulation (Gazebo) and hardware environments.
Inverse & Forward Kinematics:
Developed a full-stack motion planning pipeline using custom FK and IK solvers with null space optimization to center joints while achieving target end-effector poses. Implemented real-time joint velocity control via the pseudoinverse of the Jacobian matrix for smooth, safe motion.
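One resolved-rate step of the scheme described above can be sketched as follows: the pseudoinverse of the Jacobian tracks the desired end-effector velocity, and the null-space projector pulls the joints toward a centering pose without disturbing the task. The function names and the centering gain are illustrative, not the project's actual API.

```python
import numpy as np

def ik_velocity_step(J, q, v_des, q_center, k_null=0.5):
    """One resolved-rate step with null-space joint centering.

    J: (m, n) Jacobian, q: (n,) joint angles, v_des: (m,) task velocity.
    Returns joint velocities dq such that J @ dq tracks v_des while the
    secondary centering objective acts only in the null space of J.
    """
    J_pinv = np.linalg.pinv(J)
    dq_task = J_pinv @ v_des                     # primary: task tracking
    N = np.eye(len(q)) - J_pinv @ J              # null-space projector
    dq_null = k_null * (q_center - q)            # secondary: joint centering
    return dq_task + N @ dq_null
```

Because N projects onto the null space of J, the centering term changes joint motion without changing the commanded end-effector velocity.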
Vision-Based Perception with AprilTags:
Designed an end-effector mounted vision system to detect AprilTags on blocks. Transformed detected poses from the camera frame to the robot’s global coordinate system, enabling accurate grasping and manipulation.
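The camera-to-global transformation above is a chain of homogeneous transforms: base ← end-effector (from FK) ← camera (fixed mount extrinsic) ← tag (from the detector). A minimal numpy sketch, with hypothetical helper names:

```python
import numpy as np

def make_T(R, p):
    """Build a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def tag_pose_in_base(T_base_ee, T_ee_cam, T_cam_tag):
    """Chain transforms: base <- end-effector <- camera <- AprilTag.

    T_ee_cam is the fixed camera-mount extrinsic; all inputs are 4x4.
    """
    return T_base_ee @ T_ee_cam @ T_cam_tag
```

Getting T_ee_cam right is exactly the calibration step that differed between simulation and hardware, which is why the extrinsic is kept as an explicit input rather than baked into the math.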
Static Block Manipulation:
Created a modular routine to detect, approach, and stack static blocks using preconfigured joint positions. Accounted for sensor offsets, gripper dynamics, and detection inconsistencies between simulation and hardware.
Dynamic Block Manipulation & Prediction:
Implemented real-time trajectory prediction for moving blocks on a rotating turntable. Accounted for angular displacement and environmental delays to predict future poses and time the robot’s grasping motion.
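The prediction step reduces to rotating the block's current position about the turntable center by the angle it will sweep during the lookahead window (detection latency plus motion time). A sketch under that assumption, with the angular rate and lookahead as placeholders rather than measured course values:

```python
import numpy as np

def predict_block_xy(p_now, center, omega, t_ahead):
    """Predict a block's planar position on a rotating turntable.

    Rotates p_now about `center` by omega * t_ahead radians, where
    t_ahead budgets perception latency plus the robot's travel time.
    """
    theta = omega * t_ahead
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return center + R @ (np.asarray(p_now) - np.asarray(center))
```

The grasp is then timed so the gripper arrives at the predicted pose just as the block does, rather than chasing its current pose.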
Stacking Algorithm:
Designed a logic-based system to calculate stack heights and place blocks with consistent alignment. Dynamically adjusted stacking heights based on the number of blocks placed and ensured collision-free placements.
Hardware Integration & Tuning:
Tuned joint speeds, camera calibration, gripper force, and sensor tolerances. Addressed real-world challenges such as camera view angle, transformation matrix discrepancies, and turntable misalignments.
Advanced to Quarterfinals in the final competition against other teams.
Achieved a 100% success rate in static block stacking in simulation and in most hardware runs.
Reached a 57% success rate in dynamic block stacking on hardware, with promising results that improved with additional calibration.
Developed a modular, team-agnostic codebase adaptable to different robot stations (blue/red) and capable of real-time adjustments.
Bridging the gap between simulation and hardware requires significant calibration, especially for camera transforms and object detection.
Real-time robotics systems demand robust timing control, hardware awareness, and fail-safe logic for edge cases like failed grasps or block misalignment.
Modular design and separation of perception and control pipelines enable quick iteration and debugging.
Panda manipulator pick-and-stack sequence for a static block in simulation. From left to right: the manipulator approaches the detection configuration, approaches the block, grasps it, moves above the placing tower, and gently stacks it on top.
Panda manipulator pick-and-stack sequence for a dynamic block during hardware testing. From left to right: the manipulator detects the block, predicts its trajectory, aligns with the block, grasps it, and lifts it above the turntable.
Panda manipulator pick-and-stack sequence for a static block during the competition. From left to right: the manipulator approaches the detection configuration, grasps the block, gently stacks it on top of the tower, and retreats to a safe position above the tower.
Formal technical report of the final project