Forum Replies Created
-
Ht Davis
March 2, 2015 at 10:57 pm in reply to: Best way to edit 1080p 29.97fps DSLR footage in 720p sequence
Don't forget, you want to keep an eye on field order. If your customer wants Blu-ray, Adobe doesn't support the AVCHD spec for output to disc. Encore will write it to disc, no problem, but it will not transcode to it. AVCHD does have options for 30 fps at 720, but the strict Blu-ray encoder in Encore and in AME is limited to 1080i at 30 fps (drop-frame) and 720p at 60 fps or 24/25 fps, all drop-frame. What does the customer want for an output medium, a FILE or a set-top playable disc? If a set-top disc is what they want, you'll have to conform to that, or do a two-step encode. Step 1: output your project in all its glory to an uncompressed format. Step 2: use a Windows-compatible AVCHD encoding program to encode the video to an AVCHD stream, with a separate transcode of the audio to AC3. You can then use Encore to output the disc… You should be able to menu it and use the Premiere sequence as a starting point to make chapters and so on; then just point the transcode to the AVCHD file, place the audio, and fire off a disc.
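A quick sanity check of the limits described above, as a sketch (the accepted-combo table just restates what the post says the strict Encore/AME Blu-ray encoder allows; `blu_ray_legal` is a hypothetical helper, not an Adobe API):

```python
# (height, scan, fps) combos the post says the strict encoder accepts
BLURAY_LEGAL = {
    (1080, "i", 30),   # 1080i at up to 30 fps (drop-frame, i.e. 29.97)
    (720, "p", 60),    # 720p60 (59.94)
    (720, "p", 24),
    (720, "p", 25),
}

def blu_ray_legal(height, scan, fps):
    """True if the combo matches a legal target from the post's run-down."""
    return (height, scan, round(fps)) in BLURAY_LEGAL

# 1080p29.97 DSLR footage is NOT directly legal; 1080i29.97 is,
# which is why the two-step encode (or a conform) comes into play.
```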
-
Ht Davis
March 2, 2015 at 10:46 pm in reply to: Question about using 1080p in a 720p sequence for reframing in a multicamera edit
Create a new 1080p sequence, copy/paste all the work from the 720 to the 1080, and don't adjust the sequence to fit video. Select all video and set it to scale to frame size. It should already be at 1080, but just in case…
Why / the logic:
If you have a 1080 sequence with cameras, your video is 1080;
Nest it into sequence A; sequence A is then an output module for your multicam (a control deck with an output preview). If sequence A is set to 720, check the preview size. If it is 720p as well, you will see 720 quality.
Do the same for sequence B, re-nest the content from sequence A, and voila. The same rule applies: it will pull the video from the sources in your multicam, recode them for your preview first, and then you just output that sequence to whatever you want.
Bottom line:
You start with 1080, you can end with 1080. You're just piping some of it into 720 in one sequence. If you already downscaled your multicam sequence (the starting point), you can get around this problem as well, the same way…
First…
Copy everything in your original multicam wrapper sequence into another sequence at your desired size (1080). First group it all, then select and copy it as it is, and paste it as it is.
Now import your 1080 footage (if you haven’t already). Ungroup the footage in the new multi cam, and replace each piece with one from the bin.
Next step is tedious…
If you have multiple video/audio tracks in your original nested sequence, you need to duplicate that structure. If you've made any sub clips at 720, you'll need to make them at 1080. This is moot if you simply offline the 720 files and link instead to your 1080, then have them resize to fit your 720 sequence…
Now that you have all your stuff in order… …2 ways to go.
You can start by copying from your 720, or you can start by going from your new multi cam and placing in your new 1080 sequence.
Copy the edits in your 720 to your 1080. You can do this by saving effects, fades, etc. as presets all along your timeline; that way you only have to activate the preset for each clip/edit. You will also have any spare clips from your 720 timeline. You should replace these with their counterparts from the BIN, but do this AFTER you create a preset from any effects. When you've finished, your 1080 should match your 720. Now to fix audio… You shouldn't have to worry much about audio presets; you'll simply have to make your edits across both audio and video tracks the same way. Once you've made sure all the edits across both are done and have presets, you can move on to the next steps.
Remove the 720 multicam video track completely. This will take its audio along with it. Now you have a duplication of your 720 video and you can move on with your life. -
Mark the areas in your multicam sequence timeline that you want to sub clip with edit marks (razor) for both audio and video. Select this with a drag select within those marks (it will select all the clips you touch without needing to select whole clips). Right click and select Nest. It will make a new sequence for you, with your chosen video, and the clips are already multicam-enabled in most cases. If not, just Ctrl-select all the video, right click, and select Multi-Camera > Enable. Do the same for the audio and you should be golden. You have to do this one at a time, but it does work. Done it in CS6 myself.
If you want to use each sub clip as a separate "camera", you may be out of luck. I haven't even thought of trying this, but… you might do just like above, then render and bring the new files in as source clips. Another method: open the multicam sequence in the source monitor, use in/out marks and the Make Subclip command, which will create sub clips of that source with the in/out as start and end marks. Now you should be able to use those like reference clips, and place them the same as a normal sub clip into a sequence, like cameras. If it doesn't work right away, try rendering out the whole of the previews for the original sequence (the multicam sequence you sub clipped) and try again. Follow the same multicam enable as I showed above. The downside here is that you will not be able to select from your original cameras; instead, you'll have to select from your sub clips. In order to have a different view from your actual camera source, you'd have to make the change, render it out, then go back to your sub clip, find out which virtual camera it fills, and select that for your new multicam sequence. Again, I don't think it would work, but it's possible.
-
yeah, multicaming nested sequences is a no-go. But that doesn’t stop you from making sub clips of the sequence… …depends on your order of operations really.
If you are like most prosumer types and shoot in MTS or M2TS, you have one large file with all your video, and it mixes several sources or several "cam" sequences together; you will have to sub clip the file before you can actually do anything with it. You can create sub clips of any clip with the Make Subclip command, but you do have to specify the start and end. I suggest you mark several places where you want to clip. Open the clip in the source monitor, select in/out points, and use the Make Subclip command (the sub clips will show up in the project window as new clips, but they are not rendered files). This will give you separate clips to use. Place them in a sequence and sync them how you wish. Nest the sequence in another sequence, Ctrl-click (left click) the video track to select it, right click, and select Multi-Camera > Enable. You can now select your video source from any in the original sequence. Do the same for the audio and you'll be able to multisource your audio much the same way.
If you can ingest your video in a format that gives you separate, discrete files, you are much better off than with a single file; you can multicam a lot more easily. When you want to sync multiple cameras, place them all in the same sequence, on different tracks. Make sure each video lines up with its audio. Sync with your favorite method.
I've tried using multiple multi-source sequences in a sequence. It can be done, but I wouldn't recommend setting them as separate cameras for another run at multicam; it doesn't work as far as I've seen.
Once you have your video ready, you can cut and clip to your heart's content. Sub clip it with the source monitor; edit/cut with the sequence timeline. If you have to render the sub clips to their own files, use edit marks (razor) on the timeline to outline the area in both audio and video, then use a MARQUEE SELECT (like a box selection in Photoshop) and drag over the areas within the marks; right click and select Nest. This will actually create the sequences for you, with your sub clips inside, without removing them from your other timeline. You can render each of these how you wish. For cataloguing, some actually go one step further: take your sub clip timelines and place them into a new timeline for your gag reels or your highlights, place markers and such, then render the one sequence out to a file or send it to Encore.
-
You’re funny… Remember this adage as long as you edit: Always scale down at the end. When you use a proxy video, you want the same resolution as your source.
Here’s my Suggestion…
I import the video into the project panel in full res. I place each in its own sequence (I create each sequence and drop the video into it), at normal size for the output, but for the PREVIEW (at the bottom)… …I set a proxy file to use. This will allow you to make edits using the resolution you want to work in, while outputting the resolution you want to end up with. Now you can render your previews, and offline your large files. At this point you should be able to continue your work. If you have trouble with multi cam… This issue has been faced before.
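To see why the proxy-and-offline step matters, here is the data-rate arithmetic as a sketch (the 35 Mbps proxy figure is an illustrative assumption, not from the post):

```python
# Rough comparison: uncompressed full-res video vs. a compressed proxy
# at the same resolution.

def uncompressed_mbps(width, height, fps, bits_per_pixel=24):
    """Raw data rate in megabits per second (8-bit RGB ~ 24 bits/pixel)."""
    return width * height * bits_per_pixel * fps / 1_000_000

raw = uncompressed_mbps(1920, 1080, 30)   # ~1493 Mbps off the disk
proxy = 35                                # e.g. an intraframe proxy at 35 Mbps
savings = raw / proxy                     # ~40x less data to move per second
```

That is why the same-resolution compressed proxy keeps edit playback smooth even on modest hardware: the machine reads a fortieth of the data.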
When you render your proxy video, use the same resolution, but drop the quality, or use a COMPRESSED PROXY. This means you use a file with a similar extension, but in its compressed form. This cuts down on the data speed needed, and should produce great quality while keeping things fast. I work from a Core 2 Duo laptop with 4 GB RAM and a 256 MB video card. I still get great results using both methods with up to 4 cameras and 1 external audio source. I just have to render effects every two or three cuts to make sure the previews are updated. Gives me time to do my chores.
My chores:
1. Make coffee
2. Pour coffee
3. Drink coffee
4. Relieve myself of coffee
5. Repeat 2-4
6. Repeat 2-4
7. Repeat 1
8. Repeat all -
Profiles don't have any direct connection to Blu-ray (they are not from that standard, but they are described within it, the same way butter in a recipe isn't actually created by the people who wrote the recipe or connected with it in any way other than a certain amount being called for).
Your profile level is a compression modifier. "Legal" Blu-ray is limited to using up to 4.2 for a level. Each profile is a list of several combined options that include "profile-legal" pixel values (like a 1080 value), maximum limitations for bitrate (the speed of data being read from the disc), and of course field order (how the images are handled getting to the screen: 1080i is interlaced 1080, which uses 2 fields for every frame and displays those fields one after the other; 1080p is progressive scan, which displays the whole frame in a single operation). Profiles are also largely locked to certain frame rates, mainly to keep each profile within its own bitrate maximum. The higher the bitrate, the higher the quality of the frames on playback. This is because you are COMPRESSING the data from the movie, most directly by grabbing "keyframes" (frames that are kept at fullest quality) while all the other frames only track changes from one frame to the next as they are stored in the stream. A keyframe, if opened in Photoshop, would be a full picture, but the others would only have those pixels that changed significantly since the previous frame. This is how H.264 works, as well as VC-1 and AVC: they all COMPRESS THE DATA. The bitrate you choose affects this estimation, as well as the quality of the keyframe.
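The keyframe/delta idea above can be shown with a toy sketch. Real codecs use motion vectors and transform blocks, but the principle is the same: the keyframe stores every pixel, and each following frame stores only the pixels that changed.

```python
def delta_encode(frames):
    """First frame is the keyframe; the rest store (index, value) changes."""
    key = frames[0]
    stream = [("key", list(key))]
    prev = key
    for frame in frames[1:]:
        changes = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        stream.append(("delta", changes))
        prev = frame
    return stream

def delta_decode(stream):
    """Rebuild full frames by applying each delta to the previous frame."""
    frames = [list(stream[0][1])]
    for _, changes in stream[1:]:
        frame = list(frames[-1])
        for i, v in changes:
            frame[i] = v
        frames.append(frame)
    return frames
```

A mostly static shot produces tiny deltas (high quality at low bitrate); fast motion changes many pixels per frame, which is why bitrate matters so much for motion.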
Level 4.2 has a maximum bitrate of 40,000 kbps (40 Mbps). So does 4.1, but the difference between the two is that with 4.2 the sound component can push over the 40,000 (the sound is mixed into the file as a secondary stream in most cases anyway, but with this profile it is a separate stream when read by the player, and it is less compatible than 4.1). The compression profiles conform your video to a bitrate, in kbps or Mbps, which is the same measure used for internet speed. You see where this is going?
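The bitrate ceilings are easiest to keep straight in one unit, and the internet-speed comparison falls out of the same arithmetic (the 90-minute example is mine, for illustration):

```python
def kbps_to_mbps(kbps):
    """40,000 kbps and 40 Mbps are the same ceiling in different units."""
    return kbps / 1000

def seconds_to_stream(megabytes, mbps):
    """How long a file of a given size takes to deliver at a given rate."""
    return megabytes * 8 / mbps

# A 90-minute feature encoded right at the 40 Mbps ceiling:
size_mb = 40 / 8 * 90 * 60   # 5 MB/s * 5400 s = 27,000 MB, i.e. ~27 GB
# ...which is why a single-layer BD-25 forces a lower average bitrate,
# and why a 40 Mbps stream needs a 40 Mbps connection to play in real time.
```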
These profiles allow you to encode the video for use as a stream of data, which is what Blu-ray uses anyway. So using them, you can encode your file for use with a Blu-ray authoring program like Encore. There aren't that many on Mac, but it doesn't matter that much… I typically use Encore, but occasionally switch to some PC action with Parallels… Then I build the folder structure from the PC side, with menus and such, and finally I bring it to Toast or just a command-line burn. I'll even use the PC to burn because it often just works better than the Mac burn apps. (Mac doesn't like discs anymore; they keep telling people that discs are dead, but how can they be when the entire industry uses them as the medium for exchanging the product between hands? Yes, even the great Jobs had his flashes of genius that didn't quite make the cut for the rest of us. May the Jobs be with you… and may he rest peacefully, never to know some of the horrors the new kids are visiting on us from his old chair.)
Profiles of 5.1 or 5.2 enable more frame rates and modes, but are not "legal" for some programs to use with Blu-ray. Note that some PC programs wouldn't give a rat's ass, and would let you build the folder structure, burn it, and even play it back; but most "common" or non-smart players would not play it. PS3 and above are usually able to play the "common" disc at 4.1, but no higher. Thus you are kept to using that if you want to burn in your described situation. Here's a run-down:
4.1 profile:
1080i (this is max quality; no progressive video here, only fields, unless you install the x264 encoder and encode from a command line with the --fake-interlaced flag. The flag splits the frames into interlaced fields but forces both to be displayed simultaneously on newer players, while older players play the fields interlaced at the closest comparable speed; this mode is called PsF, or progressive segmented frame).
720i/p (this is a direct, exact downscale of the image from 1080, which means it fills the screen when blown up; this uses fewer pixels and less data, and produces better quality at times, mainly because it can use higher frame rates and progressive scan while still fitting into the same bitrates).
Frame rates
For 1080, use a max of 30i. For 720, use a max of 60p. The difference: 30i is 30 interlaced frames per second; 60p is 60 progressive frames per second. 30i is roughly a quarter of the quality of 60p for motion, but 60p usually looks so sharp it's almost a detriment to the perception of the video.
For all the profiles, you'll see 24p settings across all the sizes. That's because 24p was the old celluloid film standard (broadcast was actually more like 24i played over the air or cable). This format looks more like the older film standard for motion and some other attributes. It also keeps more quality at the same bitrates, meaning you won't see as much of a drop in quality as you output to a Blu-ray-compatible format. By allowing progressive frame rates, the visuals are much clearer than old celluloid playback or VHS, which played back to CRT TV screens in an interlaced format (the data was similar to progressive video, but the CRT split the image into scan lines and displayed them in fields). Remember that Blu-ray is a TV-playable format, and the profiles it uses are designed to look better on TV screens than on computer screens. Why, especially when TV screens are now just computer screens that play TV? TVs operate at 120 Hz to play interlaced fields; computer screens run at 60 Hz for progressive. Higher values would be detrimental to the video capabilities of the computer screen. Since the TV's only job is the display of video, it can put all its processing into that, and produce similar visual quality with different methods of data packaging. Most broadcast is interlaced, but some is transmitted as PsF, a progressive format that is packaged as separate fields, with the fields drawn simultaneously. You'll see it in movies, and you'll see it in on-demand services. It produces better quality, and usually has a price tag. Many find this to be the best, not only because it allows the same bandwidth and data rates to play higher-quality motion video, but because the visual quality of that playback can be better on TVs and occasionally, with the right frame rate, on computer screens.
To get it, you will need a hardware encoder, or a piece of software that supports the --fake-interlaced mode for encoding (the free command-line x264 with the --bluray-compat and --fake-interlaced flags, producing an output file with a .264 extension, which is raw H.264 data that won't be re-encoded by Encore or other programs; set the level to 4.1 or 4.2, but remember 4.2 is less compatible; the rest you can find online, mostly in Doom9's forums). -
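A sketch of that x264 invocation, built as a command list (the flag names are from the x264 CLI; the input and output filenames and the 25 fps value are placeholders, so adjust to your footage):

```python
import subprocess

cmd = [
    "x264",
    "--bluray-compat",       # enforce Blu-ray encoding restrictions
    "--fake-interlaced",     # tag progressive frames as interlaced (PsF)
    "--profile", "high",
    "--level", "4.1",        # the more compatible level, per the post
    "--fps", "25",           # placeholder frame rate
    "-o", "out.264",         # raw .264 stream Encore should not re-encode
    "input.y4m",             # placeholder source file
]
# subprocess.run(cmd, check=True)  # uncomment only if x264 is installed
```

Keeping the command as a list (rather than one shell string) avoids quoting problems when filenames contain spaces.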
Ht Davis
February 14, 2015 at 2:26 am in reply to: Interlacing progressive footage – how to really do it
Unfortunately, doing this all at once is almost impossible on current hardware. You'd have to know some programming.
If you know AME at all, you can have one tower set to do all the conversion on one piece of hardware, outputting to a folder on an external drive or a shared folder, and then have the other machine "watch" that folder in AME and simply output to another, separate location for the final step. (Watching a folder lets you apply automatic queuing and rendering with a set preset, so once the AME on the first machine finishes the first step, the second is applied without you even being there; it still works if you use hardware encoders in AME. And yes, it still requires two workstations.)
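The watch-folder idea is simple enough to sketch outside AME: poll a shared folder and hand every new file to a processing step, the way AME applies a preset to whatever lands there. This is a minimal illustration, not AME's actual mechanism; `process` stands in for whatever encode you'd run.

```python
import os
import time

def watch(folder, process, poll_seconds=5, once=False):
    """Poll `folder`; call `process(path)` on each file not seen before."""
    seen = set()
    while True:
        for name in sorted(os.listdir(folder)):
            path = os.path.join(folder, name)
            if os.path.isfile(path) and path not in seen:
                seen.add(path)
                process(path)       # e.g. kick off the second-stage encode
        if once:                    # single pass, handy for testing
            return seen
        time.sleep(poll_seconds)
```

In the two-machine setup above, machine A writes finished intermediates into the shared folder and machine B runs the watcher, so the second pass starts unattended.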
From what I see:
You start out in 720p, so work with the frame size first. Upscale the frame size with everything else remaining the same. Now make any other adjustments to quality in your favorite editor, and then have it export (to do the same to each one you may have to script it). I'm unfamiliar with FCP and Compressor, but I know Adobe has very little on the ball for this in Premiere. You might be able to script it for AE, have it queue up, then render them all out in AME.
With 50 videos, you'll want to just try one: apply a Gaussian blur or the like to it and check a preview; just a couple of minutes of it would do. Once you get that to your liking, take down the blur specifics and you should be able to create it as a filter for AME somewhere (eh, still haven't figured that one out). Do the watch-folder thing and let it go.
Once you've done the frame size, you can play with the interlace. You need to set this to upper or lower field. I've found that lower field generally looks better on TVs, but you can do it however you want; it's a preference thing. -
You're all close… but most aren't answering the question, only offering their own workaround.
Here’s what guys who actually present playable media do…
Remembering that the biggest factor in “Motion” quality is frame rate, and the second biggest is field order, you have to think about what you are getting when you actually render out your project.
Any NLE will let you set up a timeline based on frame rate, not field order (they don't care about field order ON THE TIMELINE ITSELF); but there is a preview video created (a proxy file that you can size to your heart's content and format to your liking) that does care about field order, as it is what you see when you play back your effects. When you render out the final project, you typically want to keep the frame rate of your timeline, but occasionally you can just use your preview video if the quality is to your spec. This means you want to choose your timeline based on frames, but choose your preview based on your output spec, so you can get an idea of what your product will look like.
For a 60p source, start with a 60i preview at full frame size (1080). If you have to output a disc, you may want to leave the timeline at 60p but use a 720p target for the preview. You could, at the end, nest your final cut in another timeline with the same frame setting, set the preview to 30i at full size, and see if that still suits you. If so, go with it.
For a 30p source, stick to 30p output for online or computer-based playback. You can squeeze this into Blu-ray with an x264 plugin, or HandBrake and the free x264 (FFmpeg)… but you are best off using a 30i transcode for the 1080 preview, then setting up a new timeline with a 720p preview and seeing which you like best across all screens (put a compressed preview or transcode onto USB and play it back on a TV).
For 24p, remember not to move around too much, and you should be able to go right to 1080p blu-ray, or full frame youtube, but you will lose more quality at that sizing. You may want to shift to 720p target, if for no other reason than that you want to use less data rate while keeping fine edge quality. Scaling up from that is easier to do at the screen than on the server where it is stored.
I like to use uncompressed for source, and occasionally proxy, as playback on a computer looks better. Rendering out several work areas and playing through to see my work is the only way I've been able to get it done with a MacBook Pro 2.16 GHz Core 2 Duo, an older GeForce 8600M with 256 MB VRAM, and 4 GB DDR2.
side note:
If you do any slow-mo, you really want a camera with a high frame rate, like 60p, that you can then cut down to 24p; this is called a pulldown. Since 60 and 24 share several common factors (1, 2, 3, 4, 6, 12), the frames can be retimed to 24p with better results than many other combinations. The key is the ratio: for every 5 frames at 60 fps, there are 2 at 24 fps. Playing the footage straight across would drop 3 of those 5 frames; but if instead you stretch the time it takes to play them, so that every captured frame is shown at 24 fps, you slow the motion. That's 24/60 = 40% speed, less than half speed, so two and a half times the duration: 60 frames of capture becomes 60 frames played over 2.5 seconds at 24 fps. By stretching the frames over more time, you reinterpolate the motion and slow it down. Take a 60 fps clip, slow it to 24, and then render the section to a 60 fps preview: you'll see it in a 60p/i container, but the bits that define what is actually playing will show the drop in frame rate. This will still output just fine from any encoder, to any container with a 60 frame rate. -
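The retime arithmetic above, worked out exactly with rational numbers so nothing rounds away:

```python
from fractions import Fraction

capture_fps = 60
timeline_fps = 24

speed = Fraction(timeline_fps, capture_fps)   # 2/5 -> plays at 40% speed
stretch = 1 / speed                           # 5/2 -> clip lasts 2.5x longer
frames_kept_per_5 = 5 * speed                 # 2 of every 5 frames, if dropping
# 60 captured frames shown at 24 fps take 60/24 = 2.5 seconds on screen.
```

The same two lines answer any capture/timeline pair; 60-to-30 gives a clean 1/2, which is why 60p is such a flexible acquisition rate.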
For projectors with a PC attached: find out what the input line for the projector is. For VGA, drop to 30p, or just code as 60i. VGA runs at 60 Hz for the primary standard in the US (or with an NTSC projector), and later versions will support progressive modes (higher data rates, more visual data to process and output). If the projector supports DVI, check the refresh rate; you may find 120 Hz, but I'd be surprised to find many business models at that level, as most I've seen max out at 80 Hz. 60 Hz means code to 60i or 30p for full compatibility; if you can test a short video, try 60p. If you are using HDMI, 60 Hz is the lower end; 120 Hz is standard now, with motion estimation, and if you use 60p it will work as if coded in 120i, estimating the motion and smoothing it out as it projects. Note that most projectors have weak motion estimation, and you are better off turning it off in some instances (like when you've coded in progressive mode).
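The guidance above boils down to a small lookup. This table just summarizes the post's advice (it is not a formal spec, and individual projectors vary), with the most conservative choice listed first per input type:

```python
# delivery formats per projector input, safest first (summary of the post)
SAFE_FORMATS = {
    "vga":  ["30p", "60i"],          # VGA at 60 Hz
    "dvi":  ["30p", "60i", "60p"],   # most business models near 60-80 Hz
    "hdmi": ["60p", "30p", "60i"],   # 60 Hz minimum; 120 Hz sets handle 60p
}

def safest(input_type):
    """Most conservative delivery format for the given connection."""
    return SAFE_FORMATS[input_type][0]
```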
-
It's no bug; I've had the same trouble. In AE you have to turn on transparency, and in Premiere there isn't a switch for it. In Premiere, when you want something to overlay, put it on a video track above the main one (Video 2, 3, 4, etc.) and treat the tracks as layers going up: the top video is overlaid on the others and will play over them. With transparent backgrounds, remember to size your graphic to the frame size: right click them all and choose Scale to Frame Size. Now they should show up, but beware squeezing and stretching; if they don't have the same exact aspect ratio, they might distort or shift up/down or left/right. If you don't match to frame, they may be of a size beyond the frame, sitting outside the viewing area. By resizing to the frame, you bring them to a size that matches.
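What scale-to-frame does, in numbers: it applies one uniform scale factor that fits the graphic inside the frame, never two different factors (which is what would distort it). A sketch of that math:

```python
def scale_to_frame(src_w, src_h, frame_w, frame_h):
    """Uniform scale factor that fits the source inside the frame."""
    return min(frame_w / src_w, frame_h / src_h)

# A 3840x2160 graphic in a 1920x1080 frame scales by 0.5 exactly.
# A 1280x1024 graphic in the same frame scales by 1080/1024 (not 1920/1280):
# it fits vertically and leaves empty space at the sides, rather than
# stretching, which matches the shift/pillarbox behavior described above.
```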