Forum Replies Created

  • Blaise Douros

    March 11, 2015 at 5:39 pm in reply to: memory loss
  • Never use your boot drive for storage or as a scratch disk. If you really want to keep the scratch separate from the media, you’ll want to get an external drive.

    I would suggest that you get external drives, or a RAID array, to store your media on. Then, a dedicated drive (maybe your internal HDD) for the project and scratch.

    Of course, a regular backup schedule will require that you get more drives than this.

  • Blaise Douros

    March 3, 2015 at 6:38 pm in reply to: Time to upgrade?

    Note, though, that x-particles doesn’t work with C4D lite–you have to have the big kid version.

  • Sorry, I realize that Faithful is a Canon standard setting. But still, what happens when you switch to a regular picture style?

  • That’s some weird shit. It looks to me like it’s a codec problem, like the camera’s processor is having trouble rendering the contrast of the thin, dark branches against the bright sky.

    I note you’re using a third-party picture style–what happens when you switch to one of the standard ones like Faithful?

  • Blaise Douros

    February 28, 2015 at 12:57 am in reply to: I need Professional Advice

    The video from a Canon HF-G10 will be sharper out of the box than a DSLR’s. DSLR footage tends to need some post-processing to sharpen it up.

    If by “crisp” you mean “more cinematic,” then shallow depth of field is what’s going to do it–that’s what DSLRs are good at. Unfortunately, this means you need a real live person pulling focus for you–even the 70D and 7DII’s autofocus is going to be MUCH slower and probably less accurate than your G10.

    Everyone bags on small sensor camcorders, but their autofocus is fast and they have a ton of depth of field, so everything is in focus.

    I would re-evaluate what you really need in order to improve your image. Better/more lighting and good audio are going to go way farther than a new camera. The HF-G10 is fine for what you describe.

  • Hah, thanks! I THINK it’s true, at least from the perspective of a guy who is not a programmer or software engineer…just a longtime user!

    Your summary is just about right. The only correction I’d point out is that compression in video is more tied up with changes to the frame over time, rather than jpeg-like blocks within the image (though there is some of that, too). With the All-I-Frame stuff you’re working with, that’s not an issue, but regular h.264 IPB compression hinges on key frames.

    Think of it like this: a key frame is a fully detailed image, and then the compressor takes over and records the changes in the image for every frame since the key frame. So you have less information, because it only stores data about what’s different from the key frame. The amount of compression depends on how many key frames per second are being used; All-I-Frame compression has a key frame for every image, so it’s less compressed. On the other end of the scale, when you compress something for web streaming, you can sometimes end up with a key frame only every 96 frames. You can see why the detail tends to break down a little. (There’s a little toy example of this at the bottom of this reply.)

    There is, of course, compression even on the I-frames of your DSLR footage–the chroma is subsampled to 4:2:0, for example, and it’s 8-bit.

    But the bottom line is that Tero’s comment is right–your eye is a good indicator in a case like this. Civilians will usually not be as sensitive to these kinds of things, but sometimes they surprise you–my boss is not a videographer or photographer, but he’s got a really good eye, even if he doesn’t know the tech behind it.
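    If it helps to see the key-frame idea in miniature, here’s a toy Python sketch. It’s purely illustrative–the frame data and the encode function are mine, and real h.264 IPB encoding uses motion-compensated prediction rather than raw pixel differences–but the bookkeeping is the same: full images at key frames, only the changes in between.

```python
# Toy sketch of key-frame (I-frame) vs. delta-frame storage.
# Illustrative only: real h.264 uses motion-compensated prediction,
# not raw per-pixel differences, but the idea is the same.

def encode(frames, keyframe_interval):
    """frames: list of lists of pixel values. Returns the stored records."""
    stored = []
    key = None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            # Key frame: store the fully detailed image.
            key = frame
            stored.append(("I", list(frame)))
        else:
            # Delta frame: store only what changed since the key frame.
            diff = {j: px for j, px in enumerate(frame) if px != key[j]}
            stored.append(("P", diff))
    return stored

clip = [[0, 0, 0, 0], [0, 0, 9, 0], [0, 0, 9, 9]]

# All-I-frame: every frame is a key frame, nothing relies on other frames.
print(encode(clip, keyframe_interval=1))

# Long-GOP (e.g. a key frame only every 96 frames for web streaming):
# one full image, then tiny diffs -- far smaller, but detail depends on the GOP.
print(encode(clip, keyframe_interval=96))
```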

  • Gotcha. The best person to answer this would be one of the Adobe guys (are you reading this, Todd Kopriva?), but from what I understand, it basically works something like this (and I apologize if this is overly basic or if you already understand how this works):

    When you have a 1920×1080 clip on a 1920×1080 canvas, the pixels are represented on a 1:1 basis. When you scale it up, you of course lose some pixels to cropping, but the ones that remain are resampled across the 1920×1080 canvas. Let’s use a single black pixel on a white surface as our clip example: when you increase the size of the clip to 200%, that single black pixel gets spread across four canvas pixels (not two, because remember it doubles in both x and y dimensions). Now you have four black pixels on a white field.

    This is why resolution loss becomes obvious pretty quickly–when you double the size to 200%, you’re not actually cutting your onscreen pixels in half–you’re cutting them down to a fourth.

    Now, let’s say we didn’t scale that image up to 200%, but only to 150% (100% is the native size of the clip). Theoretically, that black pixel is now one-and-a-half pixels wide; but it’s not that simple, because the size increases in both x and y dimensions, so it has to cover a bit more than two pixels’ worth of area. That means the canvas pixels it lands on each only get partial coverage from the black dot. So depending on that dot’s position, you might have one central black pixel with the ones around it registering grey as a transition, or maybe four pixels showing dark grey. This process is called interpolation, and there are different algorithms that can be used for different results. Basically, it’s the software trying to average out the values for each pixel, based on what’s around it. (There’s a quick code sketch at the bottom of this post if you want to see it in action.)

    Of course, the other factor that we’re not taking into account is the quality of your initial footage. Super clean, 4:4:4 ProRes is going to look a lot different than highly compressed 4:2:0 DSLR footage, which loses some detail to compression. Highly compressed footage introduces additional artifacts that reduce your resolution in a different way–by decreasing the amount of information in the frame. So uncompressed footage is going to hold up to your scaling a lot better than compressed footage will.

    So, I suspect that the answer to your question of “is there a hard limit” is…it depends. It depends on the footage, on the scaling algorithm the NLE is using, and on how much you want to scale it up. Like I said, with AVCHD footage, I can usually get away with 10%.

    Don’t you love it when the answer is “it depends?” I sure do 🙂 But knowing about all the factors definitely helps when making an educated guess.
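    If you want to see the single-black-pixel example from above actually happen, here’s a quick Python sketch using Pillow. The 4×4 image, the scale factors, and the choice of filters are just assumptions for the example–your NLE almost certainly uses its own (fancier) resampling algorithms:

```python
# How one black pixel spreads out when you scale up: nearest-neighbour keeps
# hard blocks, bilinear averages the neighbours into grey transitions.
# Requires Pillow (pip install Pillow). Sizes and filters are just examples.
from PIL import Image

img = Image.new("L", (4, 4), 255)   # a tiny 4x4 white "clip"
img.putpixel((1, 1), 0)             # one black pixel

for factor in (2.0, 1.5):           # the 200% and 150% cases from above
    size = (int(img.width * factor), int(img.height * factor))
    nearest = img.resize(size, Image.NEAREST)    # blocky: pixels just repeat
    bilinear = img.resize(size, Image.BILINEAR)  # interpolated: greys appear
    print(f"{int(factor * 100)}% nearest:  {list(nearest.getdata())}")
    print(f"{int(factor * 100)}% bilinear: {list(bilinear.getdata())}")
```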

  • Blaise Douros

    February 25, 2015 at 11:54 pm in reply to: Canon XA20 Professional HD Camcorder

    Nice! I hadn’t heard of this camera. Looks very interesting. I’d love to have 4:2:2 again; I do hate grading 4:2:0 AVCHD footage…

    Only thing I’d be wary about is the “planned” 4K upgrade. On the FS-700, didn’t the upgrade entail buying an external recording unit?

  • It all depends on how you’re mastering. Are you going out to 1080P? Then be really careful–that 10-15% number is a bit high; in my completely unscientific and unsupported personal opinion, I wouldn’t go any higher than 10%, otherwise it gets pretty noticeable.

    Are you mastering to 720P? Then you’ve got a little room to play around. If mastering to 720P, theoretically you can zoom in to 150% and not lose resolution. The key to this is to bring your 1080P footage into a 720P timeline–you have to reduce it to 66% of its size to fill the screen exactly, but if you don’t, you’ll already be zoomed in at the max resolution, since the edges of the footage will be outside of your canvas. So: if you’re scaling up in a 1080P timeline and mastering to 720P, scale the footage up to no more than 150%. If you’re putting 1080P in a 720P timeline, it will come in already zoomed in, or you can scale it down as far as 66%, where it exactly fits the frame. (There’s a bit of quick arithmetic on this at the bottom of this post.)

    Mastering to 480P SD? You can zoom in by like, 200, 250% easily.

    Mastering to a 360×270 animated GIF? It’s your birthday, and you should thank Eddie the Editor, God of Post Production, for his many blessings. Zoom in as far as you damn well want.

    Now, if the question you’re asking is “can I uprez 1080P footage to 4K without losing detail,” the answer is, objectively, no. However, you have to sit reeeeeeally close to a big monitor to see the difference between 4K and HD–I believe Fast and Furious 6, or whatever the latest installment was, used DSLRs as crashcams and mixed quick shots into the regular 4K stuff. As long as you don’t linger on the uprezzed stuff, it should be OK.
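    And if you want the arithmetic behind those numbers, here’s a tiny Python helper. The function name and the presets are mine, just for illustration–and remember the math ignores compression quality, which matters at least as much:

```python
# Rough "how far can I scale up before I drop below 1:1 pixels?" math.
# Illustrative only; codec quality and your own eye matter just as much.

def max_scale_percent(source_height, delivery_height):
    """Max scale (in percent) for footage sitting in a timeline at its
    native resolution, before the delivered frame is starved of source pixels."""
    return 100 * source_height / delivery_height

for label, delivery in [("1080P", 1080), ("720P", 720), ("480P SD", 480)]:
    print(f"1080P footage delivered at {label}: up to ~{max_scale_percent(1080, delivery):.0f}%")

# 1080P -> 1080P: 100% (anything beyond this is eyeball territory)
# 1080P -> 720P:  150%
# 1080P -> 480P SD: 225%
```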

