August 19, 2021 at 1:23 pm
My name is Ronald J. Fontenot. I have a new account here, though I've visited this site over the years. I have a BFA in electronic art and have been designing logos and graphics for small contract jobs while I look for steady employment. I am also the creator of the YouTube channel "Ronald J. Fontenot's MUSIC Channel", where I post songs and compositions I've written, as well as creative content I've made over the past 8+ years (on this channel and two previous ones). Nice to meet you!
Now that I've introduced myself, I'd like to ask about a scenario I recently faced. I animated a dancing merchant and his assistant (a pixel animation I made in Aseprite) and rendered out an image sequence that I synced to music.
Because it's a dance, I need certain moves at certain times. So I manually placed each image in the sequence (around 200 images) along the soundtrack in Movie Studio 17. Here are the obvious problems:
1. It’s tedious, and even more so if I want to smooth or change the animation. Smoother animation means more frames = more images in the sequence to manually place.
2. I don’t have as much control. I wanted to render in layers, but that just means I’d have a set of images to manually place for the merchant, a different set for the assistant, and then the background.
3. It's inefficient for editing and making changes.
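To show what the manual placement step amounts to, here is a minimal Python sketch of the arithmetic involved (Movie Studio isn't scriptable this way, and the BPM and frames-per-beat values are made-up examples): mapping each image in the sequence to a timeline position. This is the kind of list that would have to be placed by hand, 200 times over.

```python
# Hypothetical sketch: compute where each image of a rendered sequence
# lands on the timeline, given the song's tempo and how many animation
# frames fall on each beat. Example values only; this just illustrates
# the arithmetic behind the manual placement described above.

def frame_times(num_frames: int, bpm: float, frames_per_beat: int) -> list[float]:
    """Return the timeline position (in seconds) of each frame."""
    seconds_per_frame = 60.0 / bpm / frames_per_beat
    return [i * seconds_per_frame for i in range(num_frames)]

# Example: 200 frames at 120 BPM, 4 animation frames per beat
times = frame_times(200, 120.0, 4)
print(times[:4])   # [0.0, 0.125, 0.25, 0.375]
print(times[-1])   # 24.875 — the last frame lands just under 25 s in
```

Each extra frame of smoothing adds one more entry to place by hand, which is exactly problem 1 above.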
I wrote the song first, so I can't really make the animation first and then tailor the audio to it. To view my final animation, please see it here:
I've received feedback saying it's awkward that some parts of the dance have more frames than others; however, I don't really know a more efficient way I could have gone about this.
How would you have approached this project in terms of sound syncing? Thank you!
Ronald J. Fontenot
EDIT: I actually cut the animation short because I'd like to find a better way of doing this. The song has a second verse and runs around 4 minutes in total.
August 24, 2021 at 2:30 pm
What comes to mind is this: your dancer only has a limited number of moves, so once each move is rendered out, you could copy/paste it into place on the beat you want, everywhere you need that move. Combining the moves then becomes pretty easy. Or am I missing something? I don't know the software you're using, so that's possible.
I have a project on the back burner that imitates your 8-bit look in FCPX, going for the look of early-'80s Midway arcade games. But I'm cheating: I take normal green-screen footage, posterize it, reduce the frame rate with an fx plug-in, then add the 8-bit blockiness with a filter. What you're doing, from scratch, is much more complicated.
August 25, 2021 at 6:25 pm
Thanks, Mark, for the reply. You are correct: once I sync a dance move, copying and pasting those frames along the soundtrack wherever the move repeats does work and makes things easier. However, I have more control over the animation if each section has its own frames rather than repeats. If, say, I repeat frames 1-10 for a dance move three times, I'm bound to keep those frames identical whenever I make changes: if I edit the character to blink and smile, he'll do it the exact same way at the exact same time all three times, because frames 1-10 are shared. But if I render frames 1-30 and give frames 1-10, 11-20, and 21-30 the same timing, the timing stays consistent while I keep the freedom to edit each repetition of the move differently.
So again, if I want the merchant to smile in frames 1-10, blink in frames 11-20, and wear sunglasses in frames 21-30, I maintain the control to do that.
That's why I render all 200 frames, and not, say, 50 frames repeated along the soundtrack. It's about control.
I think, ideally, Aseprite (the pixel-art editor) needs to incorporate sound syncing. Then, as I animate, I could place exactly what I want where I want it. Working across two programs is essentially the issue, I conclude. So I'm going to write to the company and keep my fingers crossed 🙂
August 26, 2021 at 2:35 pm
So I took a look at a demo of the software you're using and had an idea: since it has multiple track layers, could you create a very simple layer with symbols or colors synced to your music track, then use that as a reference for syncing your other moves?
I've done something like that before in my music-based edits, where I cut to the music. I'll often cut up a video track by laying down marks on the beat and fill the slots with solid colors at first, to get a sense of the pacing, then go back in and replace the colored sections with actual clips later. It also shows me on the timeline, quickly and visually, how close I am to completion as I work through the different areas.
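The colored-placeholder trick above can be sketched in a few lines. This is a hypothetical Python illustration (BPM and song length are made-up values), computing the beat-length segments that the colored stand-in clips would occupy before real footage replaces them:

```python
import math

# Hypothetical sketch of the colour-block technique: slice a song into
# beat-length (start, end) segments, one per placeholder clip on the
# reference layer. BPM and duration below are example values only.

def beat_segments(duration: float, bpm: float) -> list[tuple[float, float]]:
    """Return (start, end) pairs in seconds, one per beat, covering the song."""
    beat = 60.0 / bpm
    n = math.ceil(duration / beat)
    return [(i * beat, min((i + 1) * beat, duration)) for i in range(n)]

# A 4-minute song at 120 BPM -> 480 beat-length placeholders
segs = beat_segments(240.0, 120.0)
print(len(segs))        # 480
print(segs[0], segs[-1])
```

As the thread notes, this works for moves timed to a beat; lyric-driven cues (like the sunglasses moment below) would still need markers placed by ear.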
August 26, 2021 at 11:48 pm
Hey Mark, in answer to your question: yes, I believe some kind of sound guide or markers would help. In my case, though, I'm not just moving the characters to a beat; they're reacting to the song lyrics. When the song says "Through sunshine…", the merchant puts on sunglasses, for example.
So really, what needs to happen is Aseprite needs to incorporate sound syncing into their software.
If they do this, then I can import the music first and simply animate to it, which would be ideal: when I'm finished, I'll know the entire animation is synced exactly as I created it. Then I just render the image sequence, import it into a video editor (Movie Studio) along with the music track, render, and I'm done!!
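For what it's worth, that last assembly step (numbered image sequence plus finished song) doesn't strictly need an editing suite at all: ffmpeg can mux them in one pass. Here is a small Python sketch that builds such a command; the filenames, frame rate, and codec choices are assumptions for illustration, not from this thread.

```python
# Hypothetical sketch: assemble an ffmpeg command line that combines a
# numbered image sequence with the finished song. Filenames, frame
# rate, and codecs here are example assumptions.

def mux_command(pattern: str, audio: str, fps: int, out: str) -> list[str]:
    """Build an ffmpeg argv for image-sequence + audio muxing."""
    return [
        "ffmpeg",
        "-framerate", str(fps), "-i", pattern,   # the rendered frames
        "-i", audio,                             # the song
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest",              # stop at the shorter stream
        out,
    ]

cmd = mux_command("frame%04d.png", "song.mp3", 12, "dance.mp4")
print(" ".join(cmd))
# Run it with: subprocess.run(cmd, check=True)
```

This only helps once the frames are already timed correctly, of course; the syncing itself would still happen in Aseprite, as hoped for above.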
I messaged them on their community board, and someone posted this has already been suggested and they’re (hopefully) working on it.
But yes, what you suggest works when moving to a rhythm or a beat.