I just got this video in an email - http://www.youtube.com/watch?v=go2uQT_J2XA . Freddy is an animatronic robot I built for a client three or four years ago, one of my first, and he still performs live at shows (mostly in Hong Kong). Here’s an older video I have that shows off his personality better - http://www.youtube.com/watch?v=emIjQTGqTlI
What’s really cool about Freddy is that he’s controlled entirely by an MP3 file that has the movements embedded alongside the song – this way his actions are always perfectly synced to the music.
So how do you actually go about syncing the movements to the audio?
These are the questions I faced when I first attempted this project as a 15-year-old. The solution is simple, yet clever. MP3 files as we know them are stereo: one channel for the left ear and one for the right. My solution uses one of those channels for data and the other for the music.
Using any audio editor you can then sync data to music easily: drop a command into the data channel at the exact moment something happens in the music, and voilà, perfect syncing every time.
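The two-channel trick is easy to see in code. Here's a minimal Python sketch (my illustration, not part of the original build) that writes a stereo WAV with a placeholder "music" tone on the right channel and a short command tone dropped onto the left channel at the one-second mark; the file name, frequencies, and durations are all made up for the example.

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def sine(freq, dur, rate=RATE, amp=0.5):
    """Generate one mono sine tone as a list of float samples in [-1, 1]."""
    return [amp * math.sin(2 * math.pi * freq * n / rate)
            for n in range(int(dur * rate))]

def write_stereo(path, left, right, rate=RATE):
    """Interleave left (data) and right (music) into a 16-bit stereo WAV."""
    n = max(len(left), len(right))
    left = left + [0.0] * (n - len(left))    # pad the shorter channel
    right = right + [0.0] * (n - len(right))
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for l, r in zip(left, right):
            frames += struct.pack("<hh", int(l * 32767), int(r * 32767))
        w.writeframes(bytes(frames))

# "Music": a 2-second 440 Hz tone on the right channel (stand-in for a song).
music = sine(440, 2.0)
# Data: one second of silence, then a 100 ms command tone at t = 1.0 s.
data = [0.0] * RATE + sine(697, 0.1)
write_stereo("freddy_track.wav", data, music)
```

In an audio editor you'd do the same thing visually: scrub to the beat you care about and paste the command tone into the data channel at that spot.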
But what are the commands? How do you store a command?
DTMF tones - the same tones your phone plays when you push the number buttons. That's the easy way (MIDI is a slightly more complicated but more capable alternative). The right channel carries the music, and the left channel carries a DTMF tone at the exact moment an action should happen.
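For context, a DTMF tone is nothing exotic: each key is just two sine waves summed, one low "row" frequency and one high "column" frequency, which is what makes it so easy to generate and detect. A quick Python sketch using the standard DTMF frequency table:

```python
import math

RATE = 8000  # telephone-grade sample rate is plenty for DTMF

# Standard DTMF pairs: (row frequency, column frequency) per key, in Hz.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_tone(key, dur=0.1, rate=RATE):
    """A DTMF tone is two sine waves summed: one row + one column frequency."""
    lo, hi = DTMF[key]
    return [0.5 * math.sin(2 * math.pi * lo * n / rate)
            + 0.5 * math.sin(2 * math.pi * hi * n / rate)
            for n in range(int(dur * rate))]

# A 100 ms "5" tone, ready to be pasted into the data channel.
cmd = dtmf_tone("5")
```

Sixteen keys (the full grid adds an A-D column) gives you sixteen distinct commands per tone, and you can chain tones for more.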
There is a wide array of DTMF decoder chips out there (the MT8870 is a common one). Essentially you hook a decoder up to your microcontroller, feed the left (data) channel into the decoder, and you're done. The microcontroller reads commands from the decoder and interprets them to perform whatever actions you prescribe. Perfect syncing every time, and all the commands live in the MP3 file itself, so all you need is an audio connector and an iPod. Want to change the actions for a song, or add a new one? No need to reprogram the microcontroller - just change the tones in the audio file.
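If you're curious what the decoder chip is actually doing, here's a software sketch of the same job (my own illustration, not how any particular chip is built internally): detect which key is present in a block of samples using the Goertzel algorithm, the standard way to measure the strength of one frequency at a time.

```python
import math

RATE = 8000
ROWS = (697, 770, 852, 941)      # DTMF row frequencies, Hz
COLS = (1209, 1336, 1477, 1633)  # DTMF column frequencies, Hz
KEYS = ["123A", "456B", "789C", "*0#D"]  # keypad laid out row by row

def goertzel(samples, freq, rate=RATE):
    """Magnitude of a single frequency component (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return math.sqrt(s1 * s1 + s2 * s2 - coeff * s1 * s2)

def decode_dtmf(samples, rate=RATE):
    """Pick the strongest row and column frequency; return that key."""
    row = max(ROWS, key=lambda f: goertzel(samples, f, rate))
    col = max(COLS, key=lambda f: goertzel(samples, f, rate))
    return KEYS[ROWS.index(row)][COLS.index(col)]
```

A hardware decoder does this continuously and raises a "valid tone" line plus a 4-bit key code, which is what your microcontroller polls; from there it's just a switch statement mapping key codes to servo moves.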