Activity › Forums › AJA Video Systems › Sync all over the place
-
Paul Harb
March 24, 2010 at 8:54 pm
I'm not sure how to see the frame rate on an R3D file? But when I look at the QT that it creates from the RED files, they are 23.98, not 23.976 or 24 fps. I did move the card to another slot yesterday; honestly, I can't really tell yet if the sync is random anymore. It doesn't seem to be. Now it seems to be a constant drift against my master audio, which was also used as playback and is what's being heard on my camera scratch audio; it was a 44.1k 16-bit file. The camera was being run at 23.98 fps. When I line these up in the timeline, there is a drift, and it seems constant now. I swear it's driving me nuts, because it was a moving target, but maybe moving the card solved one issue and I was actually dealing with two issues all along.
Paul
Paul Harb-Producer/Director
Wrong Beach Multimedia
Dual 3.2 GHz Quad/10.5.5/8GIG RAM/FCP 6.0.4/QT 7.5.5 -
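[Editor's note: a small aside on the "23.98 vs. 23.976" question above. Both labels name the same NTSC-friendly rate, 24000/1001 fps, just rounded to different precision in the display. A quick generic Python check, nothing specific to FCP or the KONA tools:]

```python
from fractions import Fraction

# NTSC-friendly "24p" is exactly 24000/1001 frames per second.
# "23.976" is that value rounded to three decimals;
# QuickTime's "23.98" is the same value rounded to two.
ntsc_24 = Fraction(24000, 1001)

print(float(ntsc_24))             # ~23.976
print(round(float(ntsc_24), 2))   # 23.98 -- what QT displays
```

So the two labels do not, by themselves, indicate a mismatched shot rate.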
Mark Spano
March 25, 2010 at 5:12 am
[Paul Harb] "I'm not sure how to see the frame rate on an R3D file?"
Wait – didn’t you say you transcoded the R3D files to ProRes? What did you use to do that? RedCine-X would probably show you the metadata of the R3D files and you could see the shot frame rate. Also – what device played back your music on set? This whole thing could point to that device playing back at the wrong speed.
Other than those things (and what we’ve already discussed in this thread), I can’t think of what would cause the drift. You might have to start at the beginning. Try stuff like doing it on another machine (if you have one) – transcode a clip, slide your audio in sync and see what happens. Uninstall/reinstall KONA drivers. Trash prefs for FCP. Log in as another administrator level user. I don’t know – gotta start somewhere. Sorry it’s not easily apparent, but that’s sometimes how it is with problems like this. Good luck.
-
Paul Harb
March 25, 2010 at 5:42 pm
Hey Mark,
We figured it out! I was getting wrong information: my playback guy did play a 48k 24-bit file on set, which is what the camera recorded as scratch. They sent me the file they used, and it all syncs up. I did try exporting the file myself and transcoding to 48k, but that didn't fix it. The good news is it's better now 🙂
Thanks again for all your help! Coming on here and exhausting everything we could come up with made me go back to the playback guy, and that's when it all came together. Thanks again.
Paul
Paul Harb-Producer/Director
Wrong Beach Multimedia
Dual 3.2 GHz Quad/10.5.5/8GIG RAM/FCP 6.0.4/QT 7.5.5 -
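[Editor's note: for readers chasing similar constant drift, a little arithmetic separates a subtle 0.1% pulldown mismatch from a gross sample-rate misread. A rough sketch in plain Python; the rates are generic illustrations, not taken from Paul's project files:]

```python
# Sanity-check how much drift various clock mismatches produce.
def drift_per_minute(nominal_rate, actual_rate):
    """Seconds of drift accumulated per real minute when material made at
    actual_rate is played back as if it were nominal_rate."""
    return 60.0 * (actual_rate / nominal_rate - 1.0)

# Classic 0.1% NTSC pulldown mismatch: true 24 fps treated as 24000/1001.
print(drift_per_minute(24000 / 1001, 24.0))   # ~0.06 s/min (about 1 frame every 42 s)

# Gross sample-rate misread: 48 kHz audio interpreted as 44.1 kHz.
print(drift_per_minute(44100, 48000))         # ~5.3 s/min -- obvious almost immediately
```

A slow, steady slip over a long take points at the first kind of error; audio that is unusably out within a minute points at the second.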
Ken Glaza
March 31, 2010 at 5:30 pm
I am not sure what RED is, but I have had problems like that when the audio files were separated from the corresponding video files in some programs. They could read at different timings. See if the audio is interlaced or not. Just a guess.
-
Gary Adcock
March 31, 2010 at 11:19 pm
[Ken Glaza] "See if the audio is interlaced or not. Just a guess."
What is interlaced audio?
gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL
https://blogs.creativecow.net/24640
-
Ken Glaza
April 1, 2010 at 10:35 am
Interlaced means that the audio is broken into pieces and put with each frame of picture. Some softwares separate them into two files. Pictures only and sound only. The only common point is that they both start at the same time. Kinda like the way film and sound start at the movie set. The slate is the only sync point in common (click visual with click in sound to match the frame at the beginning). Then the sprocket holes keep them together in time as long as they turn at the same rate.

Look at where you store your intermediate workfiles. If they are separated, then you may be having an issue with the sample rates of the audio. Your video frame rates may match, but the audio recorded time values (sample and bit depth) may not match the playback values, so that it is reading audio at different rates than when first recorded or sampled. Kinda like you got extra or are missing sprocket holes in the audio. Check for drop frame option too.

I still don't know what you are using as software and hardware. Call me for a free consultation 2485578276EST. I do forensic audio and video, and this is interesting to me for archival reasons.
-
Gary Adcock
April 1, 2010 at 2:36 pm
"Interlaced means that the audio is broken into pieces and put with each frame of picture"
I believe that to be an inaccurate use of the term and technically incorrect.
Audio is not stored at the frame level in any video format that I know of, in either the video world or the courtroom. (I do forensic and evidentiary work on a regular basis.)
” Some softwares separate them into two files. Pictures only and sound only.”
Actually, you have that backwards: some software marries the two file types together during capture.
Audio is a separate entity from video or film; it does not technically carry timecode, does not have or support "frames," and can exist on its own. But the history lesson was good for some.

"the audio recorded time values (sample and bit depth)may not match the playback values so that it is reading audio at different rates then when first recorded or sampled."
This can also happen when, as in this case, the audio internal to the recording is at a different sample rate than the editing software was expecting/demanding. This is a very common mistake for many users attempting new workflows.
“Check for drop frame option too.”
There is no such thing as drop-frame audio; it does not exist.

[Ken Glaza] "Call me for a free consultation. I do forensic audio and video"
No thank you.
gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL
https://blogs.creativecow.net/24640
-
Ken Glaza
April 2, 2010 at 12:40 am
I really didn't want to get into this level of complexity. There is no room to teach here. I like a brisk conversation, but can we dispense with the slams, Mr. Moderator?
Audio: a single-file example!
"MP3 files are segmented into zillions of frames, each containing a fraction of a second's worth of audio data, ready to be reconstructed by the decoder." Left and right audio are interlaced ("to unite, interwoven, interlocked, intermixture"; Webster's Third, c. 1966) according to the codec to create a single file. Look at this simple example:
https://www.mp3-converter.com/mp3codec/mp3_anatomy.htm
Video and Audio in one file!
This is a bit more complex, because these are described as a container type of file, used to provide a single file of audio and video to the user.
“A container or wrapper format is a meta-file format whose specification describes how data and meta-data are stored (not coded). By definition, a container format could wrap any kind of data. Most container formats are specialized for the specific requirements of the data. For example, a popular family of containers is found among multimedia file formats. Since audio and video streams can be coded and decoded with many different algorithms, a container format can be used to provide a single file to the user.”
https://www.answers.com/topic/container-format-digital
I have provided a link so that you can see a graphic of the single file structure that a popular type of AV file is made of.
https://graphcomp.com/info/specs/ms/editmpeg.htm
In this container format, it is almost always, without exception, the case that audio is interwoven with picture as the single file is created. Some do it similar to the MP3 frame-by-frame method, and others use big groups of pictures (GOP) with different compression schemes. Some codecs break the sound into frame-by-frame timed pieces so that editing is easiest. But it is up to the secret, sometimes pricey codecs used to encode and decode the interwoven audio-and-video files.
Once again, maybe the encoder or decoder is missing "container" and coding information, and the settings need to be looked at. Or, if they are separate audio and video files, it might be a few settings that are not matched as well. I am suggesting checking drop vs. non-drop frame for video compatibility, sample rates, even bit depth, audio and video, and who knows what else may be an option. Granted, I could have said that "some software keeps audio and video in separate files." Nothing lost or gained here, nothing reversed.
Truly Your Forensic Expert Ken Glaza -
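[Editor's note: a toy sketch of the muxing being argued about here. In a container, the audio and video remain separate elementary streams; only their packets are interleaved in file order by timestamp. This is illustrative Python, not modeled on any real container format:]

```python
# Toy mux: interleave audio and video packets by presentation timestamp.
# The streams stay logically separate; only their file order is interleaved.
video = [("V", i / 24.0) for i in range(4)]             # 4 frames at 24 fps
audio = [("A", i * 1024 / 48000.0) for i in range(8)]   # 1024-sample chunks at 48 kHz

muxed = sorted(video + audio, key=lambda pkt: pkt[1])   # file order by timestamp
print([f"{kind}@{ts:.3f}" for kind, ts in muxed])
```

A demuxer undoes exactly this, handing each stream to its own decoder, which is why "interleaved storage" and "one stream per file" describe packaging choices rather than different kinds of audio.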
Gary Adcock
April 2, 2010 at 1:25 pm
[Ken Glaza] "I like a brisk conversation but can we dispense with the slams Mr. Moderator?"
I did not slam anything but your statements, which I still believe to be inaccurate, here in a manufacturer's forum on professional video hardware editing tools and the formats used therein.
They have little to do with the issue the original poster was referring to, and this has now become an obtuse discussion without regard to solving someone's editing problems with a specific piece of hardware.
There is no way I know of to incorporate audio into a SINGLE frame of video, be it 1/24 of a second or 1/60 of a second, as then it would no longer be a frame.
Now, a muxed (multiplexed) signal can have a length of a single frame and incorporate video and audio, but then it would no longer be a 'frame' in the video sense.
The term "interlaced" in this space is more commonly used for the alternating line-by-line playout of video, based on the decay of phosphors, dating from when the specifications for color TV were established by the National Television System Committee in 1953.
I can find no mention of anything such as interlacing of audio in the current NTSC specs, nor can I find any listing whatsoever in either the EBU or the SMPTE standards manuals, the two organizations that govern video and audio transmission for broadcast.

gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL
https://blogs.creativecow.net/24640