Forum Replies Created
-
OMG… …You guys are great…
Okay everybody listen up. I’m happy to hop in and dance in the same disco, so I hope you don’t think me rude. AME has the option to encode video and audio together from an image sequence. Add the audio file to the list of files, make sure you import it as a sequence, and it will detect the audio file (CC2014 and above). Alternatively, get the sequence into AME, then check your export settings, and check the audio box at the top. If it doesn’t open a dialogue for you to select an audio file, try adding the audio file to your sequence of pics. Once detected, you should be able to render the file as a movie.
If you started with a video, renderfarmed, and processed, then you should simply add the result as a new comp and apply your audio there.
If you were in Premiere originally, you can duplicate your main sequence, unlink the video from the audio, and replace the video with the comp from AE, then export flat from Premiere or queue to AME. It's not perfectly intuitive, but it will let you view your work in perfect clarity, add a few more effects and save on preview time… I'm assuming that by farming it out, you want to continue to work, either on another project or the same one. If the same one, do the AE comp and drop the comp into the duplicate sequence. You can re-render this whenever you want, replacing frames in AE as necessary. By dynamically linking, you are creating a decompressed master set of frames, then linking those files to a new comp that will dynamically update back in the duplicate. If you want to keep duplicating markers for edits, chapters, etc., then just re-dupe and drop in the comp for each change, but only render the changed frames. Boom… …Blew your mind huh?
This has worked well for me since CS6. By rendering previews of the comp sequence in Premiere to get a good look at sections where I want to place FX (and setting the work area to those spots only), it takes minutes to render and preview a change. Honestly, I like to add the FX to a comp set of frames so I can use one sequence, then re-render the images and render out for my targets quickly. By rendering to the folder first, I can then add the project to AME, set 3 or 4 targets and process them all in parallel. Overnight I hit every target, and everything stays in sync.
For those with Final Cut and Compressor… …Compressor will farm out renders to computers with Compressor… …currently After Effects doesn't care if you're PC or Mac…
Compressor is great at dumping a folder of images from interlaced video so I can output progressive. The only reason I keep it… Besides pro-logic stereo upscaling of audio… -
That's a bit overblown there, Mike.
Simpler way with a few machines:
Using After Effects to render your video is the key. You'll need a drive big enough to hold every single frame in JPEG format, and you'll need After Effects installed on every machine (it doesn't require activation, it just needs to be installed). This also installs the After Effects render engine. Install the same plugins on every rendering machine (if you use any specialty plugins, you can render up to a frame before and then after the plugin sections… …and so on… …then let your main machine handle those in a second render pass to save on plugin costs). You'll have to set up the render engine on each machine so it can access the same watch folder. In current CC (2015 or higher, to my knowledge), you can even do this across platforms… …PC/Mac makes no difference, so long as they can all see the files and the job in the watch folder.
Create the watch folder in a shared location (meaning all computers can see it; it can be a network drive, a NAS, or a drive on a computer, but I recommend a tower with a multi-port NIC, linking them all to it, and a USB3 or Thunderbolt RAID drive enclosure). Set up another location for the output (this will be a folder of images), and place a rip of your audio track into the output folder. Now go to the other computers and start the render engine (not After Effects itself) on each one, making sure each can read and write to the watch folder and the output folder. In the render engine, set the watch folder; the output folder will be picked up automatically. Once that's set, tell them to start scanning or watching the watch folder. There's nothing there yet, but there will be soon. The same goes for either Windows or Mac. You'll need the same plugins if you added any, or you can render the portions without plugins first, then render the pluginned portions in a second, single-machine pass (I've tried it and it works well enough, without a lot of extra overhead, when the plugin area is short; it requires setting up multiple render jobs, but do it enough and you get old hat).
Now go to the first machine and tell After Effects to COLLECT files (usually found in the File menu), and tell it to place them in the watch folder. Depending on the size of your project, this could take some time… …unless the watch folder is the same location where you placed your files initially. It's a catch-all step to make sure all the computers can access the files. Now start the job. You can add and subtract machines as necessary. If you put the audio into the output folder and run it through AME, you can compress the video back down with the audio attached in a lot less time.
This works all the way down to CS4, but back then you needed a license for every machine. At CS5, you paid a half license and entered a special key. In CS6, you just add a txt file to a specific location; look up render farming for CS6… …cheap way to get it done. However, it wasn't cross-platform until Adobe bought a farm engine protocol that remapped the project directory so the commands from each machine went to the right location. If you have 2 or 3 machines and CS6, you can run up to 4 render engines before they want you to add a license, and that still equates to 4 times the power. With plugins, you might not be so lucky.
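If you'd rather drive a render node from the command line instead of clicking around in the render engine GUI, the aerender program that installs alongside After Effects can do it. A minimal sketch only; the install path, project path, and comp name are made-up examples, and the "Multi-Machine" templates named here are the stock ones AE ships for farm-style rendering (the render settings template turns on "Skip existing files"), so substitute whatever templates you've actually saved:

"/Applications/Adobe After Effects CC 2015/aerender" \
  -project "/Volumes/SHARED/watchfolder/MyProject.aep" \
  -comp "Main Comp" \
  -RStemplate "Multi-Machine Settings" \
  -OMtemplate "Multi-Machine Sequence" \
  -output "/Volumes/SHARED/output/frame_[#####].tif"
# match the file extension to whatever format your output module template actually writes

Point every machine at the same project and output folder and, with skip-existing-files on, they'll divide the frames between them, which is the same idea as the watch folder, just scriptable.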
While it’s perfectly legal to install some older plugins on multiple machines and have them render just fine, some will reject the rendering engine if the key provided doesn’t support it, or if it detects the same key somewhere else on the same network. Check your plugins and check the license agreement.
ETHERNET is best costwise, especially if you get GIGABIT, and yes it can work between several computers as an intranet (not internet like you see in your browser, but contained to those machines only), and it will be faster; you’ll need a server style NIC in at least one machine that will serve as the central hub.
You're better off speed-wise with a fiber-connected network (you can find these NICs online now, but they're expensive), or, if you can find them, Thunderbolt/USB3 crossover networking cables. These let you connect computers together directly at speeds reaching 10 Gb/s. 10 Mb Ethernet can work, but it's pretty slow by comparison.
Finally…
You won't want to be working on the master machine while it's handling the render job, or on any of the others. However, if you must do some work, you can get away with it. Because you're outputting only the images, decompressing the video into frames, you're not using as much of your computing power. Compression takes a lot more calculation, since it has to compare each frame with those around it, drop unnecessary data, and place the new image into the video file's data stream. If you have audio to go with it, it's even tougher. If your images are compressed video already… …um, glacial bowel syndrome comes to mind as a description… Split up those processes, do the decompression on many machines and the recompression on only one, and you could cut the processing time by up to half in theory (though you'd have to have one computer for every frame, a fast network, and be able to set up the render engine on all of them at once… …yeah… …maybe not). Two or three computers working on this cuts down the time greatly.
And an FYI… …GPU cards only help you process playback data for most operations, unless you have a special engine installed for them or you know how to hack your settings to utilize them. Most often, the primary rendering of the frames is done by your main processor. Your copies of AE need to match, your plugins need to match, all machines need access to the data, and your processor/OS type has to match (x64 to x64). Other than that, you can go cross-platform with this, get some cheap PCs and build a farm that'll keep the hot delivery under 30 minutes, all run from a Mac… …then laugh your face off, grab some coffee, sit back, and pray your chair don't break.
I've done this with CS6, with old and new machines working together. I borrowed the new ones from pals and set up CS6 for rendering. They have CC or don't even use Adobe, but don't mind the space being used. I link them with an ASUS gigabit router or two, plus two 2 TB drives. I work from external drives on my home machine, but I use disk images to move the data around. It's so fast to move one big file to an area where it can be loaded by multiple machines as if local; then I can run my CS6 render engine. It finds the location locally, which routes to the networked drive, and outputs to another networked drive. Then I just encode from there. Easy peasy.
A 4-hour HD video with transitions and high-end audio takes roughly 3-4 hours to render for output media. Since I go to DVD/Blu-ray anyway for most things, I can even multi-target and run them all in parallel on my fastest machines to get 2, 3, 4… …10 different outputs with different settings so I can check quality… …Now that's service. All with CS6: one really old Core 2 Duo, 2 to 4 i7s, all with 4-8 GB of RAM, and only my 4 GB Core 2 Duo MacBook Pro with a video card… …It's pretty fast if I don't do a lot of weird effects. And for the work I do with non-profit groups, it's fast enough that it doesn't make me wanna puke.
I'm updating to CC 2015 now, and hope to include some cheap PCs really soon so I can work even faster. I cannot handle 2016 or 2017 on my laptop, but 2015 I can do, and I can install it on two more besides… …no prob, Bob. I'm there. Two i3 or i5 PCs with decent networking, SSDs for the main drive, CC 2015 and solitaire… For ProRes rendering, just use AME on the Mac for best results. For everything else, go either way with AME CC 2015. It handles so many formats, and the advanced tools are really powerful by comparison. Just outstanding.
-
Ht Davis
August 27, 2016 at 12:04 am in reply to: Using a program like "Alien Skin Blow Up" on video layers in PS
For those who don't know:
When you upscale a single image for processing, it’s USUALLY for a printer output, but OCCASIONALLY for a screen output. Eh…. ….that was about 10 years ago… …Let’s try again, shall we?
When you're matching a single image to an output medium, you have to remember a few things. First, PANDA: Pixels Are Not Dots. Printers use dots per inch as a measure. If you set your output resolution to match your pixels to your dots, remember this: if a dot is placed for every pixel, then the printer has to create the color for EVERY pixel by placing ink on it, which can cause problems with some printers, where the color smears as the page moves. This has been a fight for so long that people still don't realize printers actually account for this smudging to create the pixels' actual color variation and transition. So don't try to resample an image to match your dots per inch; aim for about 1/2 to 1/4 of that, and you'll see significant improvement. I've had great results with values like 266 ppi for a cheap 800 dpi printer.
PPI used to mean points per inch, to define the images used for text characters. With modern imaging, it can stand for both points and pixels, depending on your application. If you use points, a point is usually a square section of pixels on older 4:3 screens with rectangular (non-square) pixels, or equivalent to 1 actual pixel (or a perfect-square value like 4 or 9) on modern screens, depending on your program's design. Modern programs can use 1 point per pixel, making the terms interchangeable.
This lets you resample an image to a larger or smaller size by following simple steps to provide the transitional data, then stretching it. First up-rez the PPI and let the program cut the pixels for you. Now blow up the image by a factor and let the program resample at the current PPI. Then set the PPI to your output medium's data size. Screens are usually 72-120 ppi; video is 72. People shout "You can't CREATE detail!" all the time. They are right in the sense that you cannot rebuild what wasn't there to begin with. However, you can create a more pleasing transition, or a finer edge, to add clarity and depth.
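A quick back-of-the-napkin check of that math (bc is just standing in for a calculator, and the 3000-pixel width is an arbitrary example):

# a ppi target at roughly 1/3 of the printer's dpi, and the resulting print width
echo "scale=1; 800/3" | bc        # ~266 ppi target for an 800 dpi printer
echo "scale=1; 3000/266" | bc     # a 3000-px-wide image prints roughly 11 inches wide at 266 ppi

Pixels divided by ppi gives you inches; nothing more mysterious than that.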
If you want to blow up your video by going frame by frame, DO NOT START WITH VIDEO DATA. Render a JPEG or TIFF version of each frame, work on one or two, and find the common best-fit settings for the process described above. Your ultimate sizing should be 150 to 300 ppi. Many ask why, when video displays at 72 ppi. Well… …it's complicated. When resampling the images back down for video files, you want to start with as much info as possible and drop down during processing. That resampling will drop more data, but it will keep more of your sharper edges and transition areas if there's more pixel data to use in the calculation.
Once you have the settings, write them down, then record an action. Apply this action to a whole folder of your video frames. Now go and have fun for 6-24 hours, because this will take some serious time and processing power. If you know scripting and can do this in a script that sends you an email when it finishes the lot, great, do it and use that knowledge, baby! Now you can set it and forget it until your pants vibrate awkwardly on that special date.
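Even if you don't script the resize itself, you can script the "tell me when it's done" part. A rough sketch, assuming your action or droplet reads frames from one folder and writes results to another, and that the machine can send mail with the standard mail command (folder names and the address are placeholders):

#!/bin/bash
# watch the output folder and mail me when every source frame has a processed twin
SRC="$HOME/frames_in"
DST="$HOME/frames_out"
TOTAL=$(ls "$SRC" | wc -l | tr -d ' ')
while [ "$(ls "$DST" 2>/dev/null | wc -l | tr -d ' ')" -lt "$TOTAL" ]; do
  sleep 300   # check every 5 minutes
done
echo "Upres batch finished: $TOTAL frames." | mail -s "Frames done" you@example.com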
Once you resize the frames and save them to another location, they should be 2-4 times the data size. You'll need enough space for about 5x the size of your uncompressed video (the JPEG frames). Keep the originals until you're done; they serve as a backup and open up some other possibilities.
Once you have your resized frames, you can wrap them in a video or place them in a timeline for cutting (extremely slow playback); or you can compress a working video from them and swap in the uncompressed version later, which saves time and retains impressive detail.
Here are some options for you:
If you only blow up by one full step (e.g. 720×480 to 1280×720), you can employ a technique called Detail Matching. This technique requires the following:
1. You place your new large files as the odd frames, and your originals (simply scaled up) as the even frames, into a video file running at double your original frame rate.
2. You output a file at your original frame rate, blending the two sets of frames together.
If you sharpened and applied some grain before upsizing, and retained a lot of sharp detail, this will smooth some of it out, as the color and transition areas blend slightly. The result is flattering and more visually appealing.
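Here's a rough sketch of step 1, assuming you've rendered both sets as JPEG sequences with matching file names (folder names are made up); it just copies them into one folder with the detailed frames on odd numbers and the plain scaled-up originals on even:

#!/bin/bash
BIG=./blowup        # the resampled/sharpened frames
ORIG=./scaledup     # the originals, simply scaled to the same size
OUT=./interleaved
mkdir -p "$OUT"
i=1
for f in "$BIG"/*.jpg; do
  n=$(basename "$f")
  cp "$f"        "$OUT/$(printf 'frame_%06d.jpg' $i)"; i=$((i+1))   # odd = detailed
  cp "$ORIG/$n"  "$OUT/$(printf 'frame_%06d.jpg' $i)"; i=$((i+1))   # even = original
done

Import the result as a sequence at double your frame rate, then render out at the original rate so the pairs blend.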
Another technique requires that you completely over-sharpen and high-pass each frame, then create a DETAIL MAP from that (a black-and-white edge map, a bit like a sketch, with the edges all very dark black and everything else perfect white), then create a VECTOR DRAWING of that map, save it, and apply it to the blown-up version of the original as a detail enhancement. Play with a few images and blending modes to get it close, then create an action and apply it (now you need 6-7x the space of your uncompressed video). By blending the vectorized details, you'll retain more of the detail of the image, scaled almost perfectly. Bring them back with subtlety, as it'll look awful with too much. Use this method and simply make an output video to work with in sequence. Again, keep the PPI at 300.
Why a vector drawing? It resizes perfectly, even if it is a huge file. And with only 2 colors, one of them a 0000 value, it will generally be a little smaller, saving some space. Blowing up a raster creates natural breaks in some lines, but vectors calculate exact pixels and will connect them, keeping shapes and curvature a little more true. By blending the adjustment into your blown-up original, you'll be affecting areas that were smoothed over, applying a sort of grainy sharpness back to them. That's why you should do this with subtlety, as it can pixelate the image if not done carefully.
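If you want to rough out the detail map and its vector version outside of Photoshop, ImageMagick plus potrace can stand in for one frame. This is only a sketch of the idea, not the exact recipe above, and the threshold value is something you'd tune per shot:

# black edges on a white background, then traced to SVG
convert frame.jpg -colorspace Gray -edge 1 -threshold 15% -negate detail_map.pbm
potrace detail_map.pbm -s -o detail_map.svg    # -s writes SVG output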
Of course, you can apply both methods. If you get a grainy look from the second, do the odds/evens method using your originals, and the grain will smooth over. This doesn't CREATE detail; it simply duplicates existing micro-contrast as a vector map, then applies that map to a larger frame. Because this is so easy to overdo, blending in a resampled, scaled, smoothed version of the original acts as a kind of smooth filler, pulling the effect back a bit.
This isn’t an overnight kind of operation. It’s DAYS upon DAYS of rendering and action building. It WILL NOT CREATE DETAILED UPRES of ANY video. IT WILL RETAIN MUCH MORE DETAIL WHEN UPRES-ING MOST video.
This is for those with varying video rendering setups, especially with older software packages with perpetual license ownership (you know, like CS6 or 5).
I can confirm some decent results with the detail retention of AFTER EFFECTS CC at its current version. Premiere gets OK results, but AE gets excellent results. It does so by changing the PPI without changing the actual number of pixels in the image. If I have a 6 ppi image that is 300 pixels across, its effective width is 50 inches. If I drop the sampling to 3 ppi, I have 100 effective inches, but the detail in those 3 pixels per inch is drastically smoothed over. This is where resampling takes over and recalculates the color values of the pixels. Done this way, some colors are blended to create an easier transition area with smooth edges. If you then raise the pixel count of the image while keeping the rest of the sizing (inches) the same, you can bring the PPI back up, dropping the effective size in inches back to the original output value, with a larger image. This stretches and smooths the transition areas a little more, and squeezes some areas together, creating hard edges. Afterward, a sharpening algorithm applies more contrast to edge areas (unsharp masking with a high threshold and a narrow radius). Where there's already a hard edge, the threshold applies some drop in contrast/noise to reach the average value, and voila: you have a clean edge, sharpened for screen use and blown up to a larger output size.
That's essentially the way the plugin functions. However, in Premiere the engine is less precise with values, and it will only work about one step up. After Effects can pull 720p to full 1080 or 3K with relative clarity. I'm told it incorporates high-pass filtering that maps details and blends them back in.
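If you want to see roughly what that recipe does without the plugin, ImageMagick can approximate it on a single frame. This is a stand-in for the general idea, not the plugin itself, and the filter choice and unsharp numbers are just starting points:

# upscale with a smooth resample, then unsharp-mask with a narrow radius and a real threshold
convert frame_0001.jpg -filter Lanczos -resize 150% \
        -unsharp 0x0.8+0.8+0.08 frame_0001_up.jpg

-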
There may not be, from what I've seen, but you can do as you have done, or use the audio sample display to move within milliseconds to make your cuts. If you've got a 29.97 frame rate, you've got 59.94 fields per second. So divide a second by 59.94 and you'll have the timing of one field: roughly 0.0167 seconds. Trim about that much and, generally, you cut one field. It's not a good idea, but you can do it. You can also nest a clip first, and end its last frame with one full field, doubled onto the track above. For a single frame, it won't cause any real visual harm. But if you're reversing a clip:
Start a new sequence with reversed field order. Place the clip, reversed, into this sequence on two tracks; right-click and set the field options so the top plays only the upper field and the bottom only the lower. Each will play its own field only while showing ALPHA on the opposing one. Clip one field from the appropriate track using the audio-timing method (you can show audio units from the Program monitor's panel menu). You should now have a clip with a 1-field handle.
This isn’t really a decent method for clipping arbitrarily between shots. It really requires some kind of transition.
There is a really simple alternative. Clip out a whole frame, export that frame as a video, and set it to Progressive format; this will grab only a single field. Now take that one frame and place it between the two clips, on an upper track. You have a choice: if you set that one frame over them and remove a field from both, you can use that frame to fill them both. Place it twice, as one field for the first clip and one for the second, nudging the two together and keeping the single frame above them both. This will create a smoother transition in the cut. -
I know this thread is old, but I can honestly say I’ve seen the Angle modifier in areas in Encore, but haven’t figured out how to make use of it.
“Unsupported”–basically, they can’t really help you with it, but it’s there.
Since Encore was mostly a bought engine with an Adobe GUI, they haven't been able to do much with it in recent times. However, I've seen some Properties that sport an ANGLE tag. I wondered then if there was a way to import a second video/sequence to the timeline and work it as an alternate.
From the DVD spec, Alternate Angles was adopted pretty early, but only with low-quality 540 video running at 4 Mbps max. Later, when more players had broader memory chunk support (about a year after the first DVD players built for progressive video, and 1.5 years after the first interlaced players, in the US anyway), there still weren't many big titles using it. However, it had expanded with the addition of progressive video capability.
Now, with 4 angles, you'll need some light overhead to maintain the stream, along with a second or two for the shift after the control is pressed. This means the player must load two chunks of data at once into memory (usually it has about 9.4 Mbps of total video bitrate to work with, with chunks 1-4 megabytes in size; most players have 12-16 megabytes of active memory for the Java implementation and interface, commonly 16). If you load two chunks of the whole video, you have 3-4 chunks of data in flight (video and audio are read into separate streams in memory, though from the same chunk file on disc). Using the reference-movie VOB structure, aligned chapter or break points, and a common audio track makes it easy to call up the same location in each movie angle; however, the groups of pictures in the data have to be set up the same way as well, so the frame calculations in the compression are similar. Changing angle happens at the next GOP, meaning each GOP must be closed, occasionally increasing the amount of data required for the same quality if there's a lot of motion.
Here’s where it’s not simple:
4 angles -> each angle maxes out at 8 Mbps total bitrate (including audio)
5-9 angles -> each angle maxes out at about 7.2 Mbps total bitrate (including audio)
It drops even further with more angles.
The reason is simple: while the chunks of data are the same general size, the amount of data needed to play out a switch is one full GOP from each angle. Since this takes up extra space in the reading of the disc, the read speed of any single video is slowed. With more angles to choose from, the track split on disc gets larger, and the read is even slower. I think this is actually built into the auto track-split feature for burning.
If you bake your own transcodes, you should be able to designate a primary angle timeline, a secondary, and a tertiary. The disc flowchart should allow you to set that up easily enough. Make sure all chapter points are in the same locations (exact frame), all video is the same format (interlaced/progressive) for the same frames, and, of course, the same audio is part of every track. When you designate other angles, you can apply a choice page for the DVD at the start to prevent orphaned files, setting them on their own pathway. Changing the angle for the video lets the user simply press the angle button, and another timeline should play. This means the timelines are where you'd create multiple angles: start with multiple timelines and set each to a different angle. Any chapters should be attached to all timelines, with the chapter breaks pointing to the same time. When you burn the disc, you should be able to switch angles. -
Ht Davis
August 8, 2016 at 6:59 am in reply to: Interlacing progressive footage – how to really do it
Forgive my last. Here's a better understanding of what you'll need to do.
First, double every frame. That's right, double the number of frames; you can do this in AE, or by copying a folder of rendered images. Duplicate the folder, place all the images into one folder, renaming one set as necessary, and render that to a file. Place it in a new sequence set to the higher frame rate you need, but keep the sequence in interlaced format (60i); both copies of the video should be progressive, on two separate tracks. Right-click one and tell it to display itself as the first field you need; do the same with the other copy, but set it to the second field. Now you can output this video. You have twice the fields you need and twice the frames, and every field and frame is played twice.
How do we undo this? Please… if I have to explain everything to you… So far, we've only just begun. Nest this video in a proper sequence at 60i and speed it up 200%. Now it will blend every field to its solid when you output, but display the fields properly staggered for playback. You'll be running them at 120 Hz, which is 60i's playback frame rate. If you have 60 progressive frames, you can roll them out as 30 interlaced frames (two fields each), and they should look decent.
It just takes time to run the files out. Don't count on any digital software to do this all for you. You have to render images first, double each one, put them onto two tracks, designate each track to play a single field, then dump it all into another sequence, speed it up to 200%, and add the audio at the end. The amount of time it takes should be about the same. It will blend frames together if it has to… ;p Blending two of the same frame yields the SAME FRAME! It will take time to render the output as it processes each step, but if you start with a folder of images you can get a lot more done. You could resize the images to your desired viewing size fairly easily with a droplet, or apply a sharpening action to one copy of the images so that when they blend it looks more natural. It's all about how you want to build your workflow.
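If you'd rather do the doubling outside of AE, a few lines of shell will copy a rendered JPEG sequence into a new, continuously numbered one where every source frame appears twice (folder names are hypothetical):

#!/bin/bash
SRC=./frames_30p
OUT=./frames_doubled
mkdir -p "$OUT"
i=1
for f in "$SRC"/*.jpg; do
  cp "$f" "$OUT/$(printf 'frame_%06d.jpg' $i)"; i=$((i+1))
  cp "$f" "$OUT/$(printf 'frame_%06d.jpg' $i)"; i=$((i+1))
done

-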
I like your doubling method, but I don't like using the 50% drop. We're talking FIELDS, which are placed every other line, with nothing in between. The awesome part? When you overlay the frames, they should line right up. This also opens up some new ideas on combining multiple low-res cams into one large-res image.
I’ve seen a guy test this and get decent results. It takes serious craziness and a hunger for lunacy and really huge frames, but it seems to work fine.
You need multiples of the same camera. If you want to shoot progressive frames out of interlaced cams, you can get a 60p frame rate and a large amount of frame. By lining them up just right, with just over half overlap horizontally and only 1/3 overlap vertically, you can get a progressive large frame from a lot of cheap small cameras in an array, synchronized to some frame-accurate timecode. Horizontally, every other camera should be flipped upside down. For every 2 fields you'll get 2 frames. If you process it the same way as you said, but overlay each cam properly and erase the areas of distortion, you'll end up with a longer frame. You just have to process it correctly, and do so in After Effects. The upside-down cams capture the lower field of their neighbors, hence the overlap of just over 1/2. With the timecodes all synchronized somehow, you can flip the upside-down cams over in AE and stitch it right up.
With 2 cameras, you'll get about 50% of the frame size of the camera (50% of 1080p kind of sucks), and at 3 cams, you'll get about 66-75% of full frame. You add 2/3 to 3/4 of a frame size across for every camera you add horizontally after 3. The outside area will be interlaced and needs to be cut off. If you do this for several vertical rows and then stitch them, you'll gain 2/3 vertical extension for each step up, and you'll be stitching progressive to progressive. Do the same in each vertical row that you just did; they should match in direction, camera for camera, upside down to upside down. They need a max of 1/3 overlap with the frame below and should be vertically aligned. When you stitch progressive to progressive, you'll be overlaying full frames one on top of another, and there will be an area of distortion you'll have to remove from the top.
You could also just output the JPEGs, have a script grab from each vertical row, send them to a Photoshop droplet that stitches them, and drop the results into a folder. You could watch that folder in AME and have it render to a video file afterward. At 3 cameras horizontal, you should have nearly a full frame, about 2/3. This means you can make your own 4K video camera out of about 6×6. The horizontals should be very close together, no more than a few feet apart; in fact, it may be best to keep them only a few inches from one another. With 6 cameras about 6 inches apart, you'll span 3-4.5 ft with tripods alone; and with a straight-line rig, you can place other cameras on top of those. I'd put them no more than 2 ft apart, top to top. You'll reach 12 ft in a hurry, and you're best off with another tripod behind the others with an angle stabilizer holding your whole rig. You won't be able to move it, but it works.
It’s a cheap and dirty way of getting huge sensors. -
The issue with video is that the frames aren't always in perfect sync, even with the same camera model. If the cameras are started at a given moment, one may be halfway into or out of a frame while the other is not. While this isn't a total processing nightmare, it adds complexity.
First, to make this as simple as possible, you'll need the same model of camera, zoomed to exactly the same focal length (which means only 2 options, all the way in or all the way out, unless you can control them from an app that can give them the same focal-length adjustment input). Then you'll need to synchronise the shooting. Again, 2 options: you can use a timecode calculation and clock to set them to start at the same time, or a frame apart, so they line up exactly; or you can simply synchronise them afterward, by audio if they share the same source, or by lighting values in certain areas if they're color-corrected and you own a program that will synchronise by the lighting at the common edges, then cut them down to the same exact beginning and end, nesting each out of the sync sequence and into separate output sequences for export to new video files. Both options work; the second works because it extends frames at the ends so they line up when you cut them at the same frame of the sequence. Even with all that, you now have two video files, and there's one more thing to worry about… …THE SHOOTING ANGLE. Both cameras need to shoot along the same 180-degree plane, with about 1/3 of the frame overlapping.
Here's where it can get interesting and long. I haven't yet seen the panorama function available on its own in AE. I have seen it in Photoshop, and I've even automated it using a Photoshop action for short sets of files. What you'll need to do is output each video to frames, then automate the stitching of each matching pair of frames.
There's a second alternative not many people talk about. Set up a comp that is sized 2x - (1/3)x, where x is the pixel width of your video. Set each video to its side and line them up; then, for the one that overlays on top of the other, you'll need to remove about 1/6 to 1/4 of that video's width from the overlap area, which is where it would distort. Now all you need to do is apply any warp or straight-line adjustment, and they should line up nicely. Try to cut the top one at an area where there aren't any finely detailed objects or people, at least most of the time. Where people actually cross, you may see distortion; go to those frames, and selectively add the top overlay on the first few frames of their crossing, until they pass into the video underneath.
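To put numbers on that comp-size formula, for two 1920-wide clips with roughly a third of the frame overlapping:

echo "2*1920 - 1920/3" | bc    # 3200 px wide comp for two 1920 px cameras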
This is the general process I’m afraid. Time consuming, hard work, but worth it for meshing large frames together for crystal clear extended video. Another rule of thumb:
Instead of overlapping the photographic 1/3 or so, you'll want to overlap at least two areas of minimal detail/action along the axis of combination (the horizontal for panoramic horizontal stretching, the vertical for stretching along the vertical). This can really cut into the size you can utilize. However, I've seen it done, and I'm not advocating going out and doing it, just describing the experience: 6-12 1080i cameras were used, all forcing opposing interlace (one camera would cross half into another for visual width but be flipped upside down, the next right side up, and so on, in a multi-row fashion). The furthest outlying range of the video was single-field only, but from about 3/4 of the single-cam size inward there was a perfect match-up of two fields, and it was output from a single comp as full frames in JPEG before being wrapped into a comp of 60 fps progressive frame data for each row. When the rows were overlaid and stitched, they yielded between 6K and 8K progressive-frame video when placed into a sequence in Premiere, which was later used for punch-ins at 2K. I was shocked as hell to see it. The guy called the apparatus his "Camera Wall", and it used a tripod with a C-bracket screwed to it, another mono-bar attached to the top of the bracket, another C-bracket, and cameras placed at regular intervals of 4-6 feet, across long bars on top of 4 tripods. The rendering alone took nearly a week, but it worked. -
Does anybody else see it? If you can remove the vocals cleanly enough, output the file that way ("But but but…" STOW it and read on). Now you should have a file with vocals and a file without. Split the tracks of both into mono. Invert the phase of one set and mix them together. At this point, your vocals should be back, with some noise left.
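For the "file without vocals" step, if your mix is a normal stereo file with the vocal panned dead center, SoX's out-of-phase-stereo effect does the quick-and-dirty removal (just one way to get that file; it won't help if the vocal isn't centered):

sox mixed.wav novocals.wav oops    # oops = L-R difference, which cancels the centered vocal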
I'd suspect your problem has to do with mic distance, how hot the mic was, the type of mic, and/or how loud the person was speaking/singing. When this kind of thing happens, those are usually the culprits. When you have to fix it, note the following:
Is the noise as loud as the person speaking, or louder? Remove the vocals, then phase-mix to bring them back, and work from there.
If the noise is softer than the vocals, option 1 works, but you can also resignal the whole thing. It’s frowned upon in some circles, but it’s saved my butt in several instances. ReSignal is an old cleanup method to get rid of amp noise or process noise. It works because it is an ELECTRICAL process, not a digital one.
Resignal makes use of the following standards for measure:
75% on any knob for volume is considered a 0, or no amplification; anything below that attenuates the signal down, anything above boosts it up, with most knobs adding +6 or +10 at 100%.
Most equipment will produce some noise when amplifying a signal, but not when de-amplifying it. Some equipment, however, will still add a little resistance noise. You'll need to know the zero points on your input equipment, and have 2 DAW stations set up, or 1 DAW and another recorder with output capability. Resignalling requires enough time to play back the signal multiple times, and enough hard drive space to hold each pass multiplied by how many times you route back and forth.
Explanation: to resignal, you'll output from one station, through an output card or box (XLR or TRS), something with a knob attenuated down from 75% to a bit lower, like 65 or 60%, out to the input of the other computer, set at 75% or 0 adjustment. That track will output to the first computer again, from a device (amp, I/O box interface, or mixer console) set at a similar adjustment below 75%, with the first computer set the same way on the input (0 adjust). Most I/O boxes have several inputs and outputs; if you have stereo audio, you'll need 2 inputs for each send. This attenuates the actual signal you record and passes it back out at a slightly lower level. It smooths out the fundamentals and basic shake of the noise, so long as it's far enough below the good audio (a few dB). The vocals can get to a point where they're barely audible and still produce enough signal to be usable. Just remember to offset the original track so it gives about 12-15 seconds of dead air you can use for noise printing.
What to watch for: if you can play the original, or a slightly lower-volume track, through a set of headphones while you are recording a pass, watch the meters on the other tracks (which should be connected to your other inputs). The meter should move only when your vocals pass through or when the noise is just loud enough (you can attenuate by a smaller amount on a final pass); the difficulty will be watching and working 2 screens at once.
Once the noise is at a low enough level, we'll clean it off. Gather the 12-15 seconds of dead air from the front of each pass, combine it into one sound on its own track, save it to a file, open it up, select it, use the noise reduction plugin, and capture and save the noise print. Open the final audio track you chose in file mode (not in the multitrack editor anymore) and run the noise reduction on it, but keep it between 80 and 95%. Now reverse the process you just did, boosting the knobs by only a small amount, using your new de-amplified file as the starting point. Repeat the noise reduction process as before, then do a new noise print on any remaining background noise. Don't ask why, I just know it works.
-
The best noise reduction is amplitude dropouts, especially with non-repeating noise. Why is simple enough: if you repeatedly drop your amplitude using I/O signal reduction, you will eventually get your noise to near-zero or zero signal. The downside? As you boost back to your nominal level, you will undoubtedly add repeating signal noise. This is removable: just record dead air through the same pass-through for 12 seconds, save the file, and use that WAV to create a noise reduction pattern. Run the noise reduction on the file you created from your pass-through and it will clean up nicely. This is an old radar-signal technique adapted to sound. In radar, there's a law of averages applied as an adjustment to the whole thing, since exactness isn't required; sound is different. If you record the same amount of dead air as your audio and then phase it, you can effectively wipe it out, but you might drop some sound values. The noise reduction works better, as it can calculate those differences using an accurate balance.
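The same dead-air idea works outside Audition too; SoX ships a noise profiler that is basically the same two-step process (the 12-second figure and the file names are just examples):

sox recording.wav -n trim 0 12 noiseprof room.prof     # build a noise profile from the dead air at the head
sox recording.wav cleaned.wav noisered room.prof 0.2   # apply a mild (~20%) reduction using that profile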
Also, in light of recent problems found with Audition, consider the following:
Make a batch file in Windows that will:
1. Take a dropped file as input (your session file will do).
2. Get the parent folder (the whole structure is important). This is the folder where your session sits, along with the folder containing the audio files you record.
3. Periodically and non-destructively (as in, not deleting files that are no longer in the first folder) mirror the contents of the parent folder to another location on any drive (every 2-5 minutes works). Make sure you set this folder, or simply use the parent folder of the original file and add a Backup folder inside it.
4. Then just keep making backups in a forever loop (a condition that is always true, like While 1==1). You can close it when you close Audition by simply closing the Command Prompt window. You won't lose everything even if you crash; after all, Audition cannot delete what it knows nothing about.
On Mac, open Automator:
Start by making an Application; call it BackupStart. Create 2 path variables, SessionFile and ParentFolder; these will be what you use to grab your files. Create more path variables with names like BackupFolder#, where # is the number of that backup folder. Now add a Set Variable Value action (in the Actions library) for SessionFile as your first action (this will catch the path of the file you drop on the app). Next, add a Run Shell Script action in Bash (in Utilities or System) and set STDIN to Arguments. Clear everything in the shell and type: dirname "$1" exactly as shown (don't replace dirname; it's a command that grabs the path of the parent folder of the file you just dropped in). Add another Set Variable action for ParentFolder.
Now use an Ask for Finder Items action, look in its options for "Ignore this action's input", and check the box. Add a Set Variable for BackupFolder1. Repeat for each BackupFolder variable you have (ask for a folder, set the variable). Now add a Get Variable action, go to its options, select "Ignore input" again, and make sure you are getting the ParentFolder variable. Add another Get Variable for BackupFolder1, but DON'T ignore input (you want the two to pass into one another and continue on). Repeat that for each BackupFolder variable, leaving "Ignore input" unchecked to group them all together. Now add a Run Workflow action and turn off "Wait for workflow to finish". Save this file, leave it open, and go to File > Duplicate. Rename the duplicate BackupLoop1, then File > Convert this document to a Workflow.
Delete the SessionFile variable from this document, along with all but the very last of the actions (Run Workflow); the other variables are still necessary, and everything we add should sit above the Run Workflow action. Use a Get Variable on ParentFolder and, as before, do not check "Ignore input"; you need this to run straight through from the first document. Add the Get Variable actions for your backups. Then add a Run Shell Script in Bash, with STDIN set to Arguments. Clear the script box and type:
rsync -vau "$1/" "$2/"
rsync -vau "$2/" "$3/"
The first line copies your parent folder's contents; the second copies the first backup to the second. You can continue this until you have handled every backup in the script. Apply a Pause action for a few seconds. Now add a Loop action, set it to run 50 or more times (this applies a wait until it finishes), and set it to use the same input. This will continually back up all your data as you record, and when you hit stop, you should get a copy of your audio almost immediately after, done by your system, turning the raw file data into a finished file set.
Now add the Get Variable set again for all your variables, ignoring the input of the first one but keeping it for the others, and point them into the last action, Run Workflow. Again, duplicate the document and call it BackupLoop2. Change the Run Workflow in this file so it points to where you saved BackupLoop1. Change the Run Workflow in BackupLoop1 to point to BackupLoop2. Change the Run Workflow in BackupStart to point to BackupLoop1, and place BackupStart in your Dock.
When you get ready to record, drop the session file onto your Dock icon, pick your backup folder(s), and then let it go. Hit record, and when you hit stop, wait a few seconds for it to close out the raw file tags. Now check your backup folder; you should have a perfect WAV capture there. If Audition crashes, you can drag the files in your backup folder back to the original place and continue.
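And if Automator feels like overkill, the same safety net is a ten-line shell script built on the rsync lines above; the paths here are placeholders, and you'd just Ctrl-C it when you're done recording:

#!/bin/bash
# usage: ./backup_loop.sh /path/to/session.sesx /Volumes/Backup1 [/Volumes/Backup2]
SESSION="$1"
PARENT="$(dirname "$SESSION")"
B1="$2"
B2="$3"
while true; do
  rsync -vau "$PARENT/" "$B1/"               # mirror the session's parent folder
  [ -n "$B2" ] && rsync -vau "$B1/" "$B2/"   # optional second backup of the backup
  sleep 120                                  # every 2 minutes
done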