Myoelectric Bluetooth-Enabled Animatronic Fairy Wings
Project Overview
Our group, consisting of Logan Rogge, Brandon Rho, and Xiaohai Yang, created a set of fairy wings equipped with programmable RGB lighting, wing movement driven by MyoWare sensors that pick up the electrical signals emitted by the muscles in our arms, and speech recognition provided by the microphone built into the Arduino TinyML kit. With Dr. Dorothy Wang as our team’s advisor, our project is a technical application of the concepts and theories taught by the Washington University in St. Louis Electrical and Systems Engineering department. Our goal for this project was to demonstrate the electrical and computer engineering skills we have gained over the course of our time at the school. The techniques used in this project are foundational for more career-focused applications such as robotic prosthetic arms and modern fitness watches.
Techniques
MyoWare Muscle Sensor
Using the MyoWare sensors, we can detect the small voltages that appear across the skin when a muscle is flexed. Each sensor has three electrodes and processes the detected voltages into an analog signal. This signal passes through an amplifier with adjustable gain, a first-order bandpass filter, a rectifier, and an envelope detector, and the Arduino’s 10-bit ADC reads the resulting output as values ranging from 0 to 1023. The gain can be adjusted to account for different muscle strengths across users or muscle groups. The sampling rate can be adjusted when programming the board, but the recommended rate is one sample every 0.05 seconds (20 Hz).
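A minimal reading sketch along these lines is shown below; the analog pin A0 and the use of the serial monitor for calibration are assumptions for illustration, not our exact wiring.

```cpp
// Minimal sketch: sample the MyoWare output every 50 ms (assumed pin A0).
const int MYOWARE_PIN = A0;           // analog input from the MyoWare sensor
const unsigned long SAMPLE_MS = 50;   // recommended rate: one sample per 0.05 s

unsigned long lastSample = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long now = millis();
  if (now - lastSample >= SAMPLE_MS) {
    lastSample = now;
    int emg = analogRead(MYOWARE_PIN);  // 0-1023 from the 10-bit ADC
    Serial.println(emg);                // stream readings for gain calibration
  }
}
```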
Servo Motors
We used SG-5010 servo motors to move the wings. These high-torque motors can rotate through a full 180 degrees, but in our setup they were limited to a range of roughly 0 to 75 degrees. The motor details are shown below.
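A simple flapping sketch under these constraints could look like the following; the signal pin (9) and the use of the standard Arduino Servo library on the Nano 33 BLE core are assumptions.

```cpp
// Sketch: sweep a wing servo between its closed and open positions.
#include <Servo.h>

Servo wingServo;
const int SERVO_PIN   = 9;   // assumed signal pin
const int WING_CLOSED = 0;   // degrees
const int WING_OPEN   = 75;  // mechanical limit of the wing linkage

void setup() {
  wingServo.attach(SERVO_PIN);
}

void loop() {
  wingServo.write(WING_OPEN);   // flap out
  delay(1000);
  wingServo.write(WING_CLOSED); // flap back
  delay(1000);
}
```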
LED Strips
The RGBW NeoPixel strips were attached to the border of the wings and had to be cut and resoldered in some places to fit the shape of the wings. The strips had five wires: one for power and four channels for the red, green, blue, and white lights. Each channel was connected through a transistor and resistor to a different digital pin on the Arduino Nano. Since these NeoPixel strips were a newer model, they were not compatible with many of the Arduino libraries normally used to program them. Because of this, we ultimately drove the lights with the command analogWrite(LED, value), which sends a PWM signal to the pin with a value that determines the brightness of that color channel. This method worked well and was simple, but it had some limitations since it did not let us address individual LEDs.
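A sketch of this per-channel PWM approach is shown below; the four pin numbers and the example colors are placeholders rather than our actual wiring.

```cpp
// Sketch: drive the four RGBW channels through their transistors with PWM.
const int RED_PIN   = 3;   // assumed PWM pins; actual wiring may differ
const int GREEN_PIN = 5;
const int BLUE_PIN  = 6;
const int WHITE_PIN = 9;

// Set the brightness (0-255) of each color channel for the whole strip.
void setWingColor(int r, int g, int b, int w) {
  analogWrite(RED_PIN, r);
  analogWrite(GREEN_PIN, g);
  analogWrite(BLUE_PIN, b);
  analogWrite(WHITE_PIN, w);
}

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
  pinMode(WHITE_PIN, OUTPUT);
}

void loop() {
  setWingColor(255, 0, 128, 0);  // magenta glow, no white
  delay(2000);
  setWingColor(0, 0, 0, 255);    // white channel only
  delay(2000);
}
```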
Power
We used two 9V batteries to power our system. We had some issues when trying to power both motors at the same time as the LEDs, so going forward we would use an additional 9V battery so that the motors and LEDs are powered separately. The sensors had their own power shields that snap onto the back of each sensor.
Bluetooth Connection
We used two Arduino Nano 33 BLE Sense boards. Bluetooth transmits signals and commands from the central board on the arm, which is connected to the sensors, to the peripheral board mounted on the back, which drives the servo motors and LED strips. The values carried over Bluetooth are integers in the range 0-255, and the connection remained stable with up to about 30 cm between the two boards.
Figure: BLE connection between two boards
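The peripheral side of this link can be sketched with the ArduinoBLE library as below; the service and characteristic UUIDs and the device name "FairyWings" are placeholders, not our actual values.

```cpp
// Peripheral-side sketch: expose one byte (0-255) that the central board writes.
#include <ArduinoBLE.h>

BLEService wingService("180C");                                 // placeholder UUID
BLEByteCharacteristic commandChar("2A56", BLERead | BLEWrite);  // one command byte

void setup() {
  Serial.begin(9600);
  if (!BLE.begin()) {
    while (true);               // halt if the BLE radio fails to start
  }
  BLE.setLocalName("FairyWings");
  BLE.setAdvertisedService(wingService);
  wingService.addCharacteristic(commandChar);
  BLE.addService(wingService);
  BLE.advertise();              // wait for the central board on the arm
}

void loop() {
  BLEDevice central = BLE.central();
  if (central) {
    while (central.connected()) {
      if (commandChar.written()) {
        byte cmd = commandChar.value();  // integer in the range 0-255
        Serial.println(cmd);             // dispatch to servos / LEDs here
      }
    }
  }
}
```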
Background Interaction
The OV7675 camera from the TinyML kit takes a photo of the background every 2 seconds and calculates the average color over the whole frame. The average color is converted from RGB565 format to standard RGB888 format and sent to the peripheral board via Bluetooth, where it is used as the color of the LED strips; a sketch of the conversion follows the figures below.
Figure: OV7675 camera
Figure: An example frame of a laptop screen captured by the OV7675
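The RGB565-to-RGB888 conversion and frame averaging can be sketched as follows; it assumes each pixel arrives as a single 16-bit RGB565 word (byte order may need swapping for the actual camera buffer), and the function names are illustrative.

```cpp
// Sketch: expand RGB565 pixels to RGB888 and average a whole frame.
#include <stdint.h>

// Expand one RGB565 pixel into 8-bit R, G, B components.
void rgb565ToRgb888(uint16_t pixel, uint8_t &r, uint8_t &g, uint8_t &b) {
  r = ((pixel >> 11) & 0x1F) << 3;  // 5 bits -> 8 bits
  g = ((pixel >> 5)  & 0x3F) << 2;  // 6 bits -> 8 bits
  b = ( pixel        & 0x1F) << 3;  // 5 bits -> 8 bits
}

// Average the whole frame; the result is sent over BLE as three 0-255 values.
void averageFrame(const uint16_t *frame, int width, int height,
                  uint8_t &avgR, uint8_t &avgG, uint8_t &avgB) {
  uint32_t sumR = 0, sumG = 0, sumB = 0;
  uint32_t count = (uint32_t)width * height;
  for (uint32_t i = 0; i < count; i++) {
    uint8_t r, g, b;
    rgb565ToRgb888(frame[i], r, g, b);
    sumR += r; sumG += g; sumB += b;
  }
  avgR = sumR / count;
  avgG = sumG / count;
  avgB = sumB / count;
}
```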
Voice Control
The voice control was built using a combination of the on-board microphone and a model trained in Edge Impulse. One of the main features of the machine learning model is that it can recognize the spoken numbers “one”, “two”, “three”, and “four”, each of which is mapped to a different LED pattern on the wings. While the model accurately predicts the listed words, it is less accurate in the presence of background noise or complete silence, because it is difficult to account for frequencies that come from sources other than the speaker’s voice. Behind the scenes, the algorithm assigns each keyword a confidence score between 0 and 100%, and the recognized command is then sent over Bluetooth to the other Arduino for processing.
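How a keyword prediction might be turned into a wing command is sketched below; the 0.8 confidence threshold and the keywordToPattern() helper are illustrative, and the Edge Impulse inference call itself is omitted. The returned pattern index would be written to the BLE command characteristic from the earlier sketch.

```cpp
// Sketch: map the top keyword prediction to an LED pattern command (1-4).
#include <string.h>

const char *KEYWORDS[] = {"one", "two", "three", "four"};
const float CONFIDENCE_THRESHOLD = 0.8f;  // ignore low-confidence guesses

// Returns 1-4 for a recognized keyword, or 0 when no command should be sent.
int keywordToPattern(const char *label, float confidence) {
  if (confidence < CONFIDENCE_THRESHOLD) {
    return 0;
  }
  for (int i = 0; i < 4; i++) {
    if (strcmp(label, KEYWORDS[i]) == 0) {
      return i + 1;
    }
  }
  return 0;
}
```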
Results
In conclusion, this project not only achieved most of its intended goals but also provided our team with invaluable insights into the practical challenges of developing interactive technological solutions. The knowledge and experience gained from overcoming these challenges will undoubtedly aid in our future endeavors in engineering design and application. Further development and refinement of the existing prototype could lead to more stable and robust applications, potentially extending beyond artistic installations to include practical assistive technologies.
About Us
- Advisor: Dorothy Wang, Lecturer, Department of Electrical and Systems Engineering
- Engineer: Xiaohai Yang, Candidate for B.S. in Computer Engineering, M.S. in Robotics
- Engineer: Logan Rogge, Candidate for B.S. in Electrical Engineering
- Engineer: Brandon Rho, Candidate for B.A. in Economics and Engineering, B.S. in Computer Engineering, M.S. in Computer Engineering