Creative Communities of the World Forums

The peer to peer support community for media production professionals.

VEGAS Pro Forum: Turn 480p clips into 720p: is it possible?

  • John Rofrano

    March 3, 2012 at 3:11 pm

    You can try using an upscaler plug-in like Boris Continuum Complete 7 (BCC7) UpRez. You can purchase it separately as a Continuum Unit and I would highly recommend that you download the free trial and see if it will help you. Here is a tutorial I made for Boris TV on how to use it:

    Episode #86: UpRez SD footage to HD in Sony Vegas Pro 10

    or watch it here on the COW:

    UpRez SD footage to HD in Sony Vegas Pro

    ~jr

    http://www.johnrofrano.com
    http://www.vasst.com

  • Dave Haynie

    March 4, 2012 at 9:04 am

    [John Bean] “Based on *visual comparison* only, you can verify that a YouTube 1080p video will not *decode* to 1920×1080 as good as a DVD 480p video can *upscale* to 1920×1080 at bit-rates equal-to or greater than YouTube’s 6 Mb/s for 1080p.”

    Actually, you can very much verify that a well encoded 1080p video at 6Mb/s in AVC will look better than a well encoded DVD/480p at the typical 6-8Mb/s upscaled to 1080p. That’s not to say you’ll find all that many videos on YouTube that compare in quality to commercially mastered DVDs, but that’s an entirely different issue.

    [John Bean] “And this is because a 720x480p-8Mb/s video has *potentially* more information per frame to accurately decode to 720×480 and then up scale to a higher resolution like 1920×1080.”

    I don’t think you understand how video compression works. A 6Mb/s AVC video has about as much useful information as a 12Mb/s MPEG-2 video. I can certainly present material that would look better on DVD in MPEG-2, and I can certainly present material that would look better in AVC/HD at a slightly lower bitrate. Obviously, the AVC encoding will break down a little sooner on high motion video, but MPEG-2 isn’t far behind… that’s why amateur MPEG-2 or AVC nearly always looks terrible, while professional video mastering engineers will run a low pass filter over high motion parts of a video, crank up the bitrate (no commercial videos are encoded CBR), and maybe even apply motion blur, to eliminate visible macroblocks.

    -Dave

  • John Bean

    March 4, 2012 at 5:24 pm

    I understand very well how compression works and that AVC is a much better compressor than MPEG-2.

    If you are talking about 720×480 pixels per frame of video, you don’t need a great compressor!

    For a 720×480 video, to achieve a TARGET BIT-RATE of 8 Mb/s using a *LOSSLESS* compression only requires about a 1.032 to 1 compression of the bits! That is practically 1 to 1 – no compression at all!

    An *UNCOMPRESSED* 720x480p (23.976 fps, 8 bpp) video only needs an 8.236 Mb/s bit-rate! That is UNCOMPRESSED! This works out to be only 346 kb/frame. To compress this video to 8 Mb/s works out to be 334 kb/frame only!

    346-334 = 12 kb difference!

    In other words, if you set the TARGET BIT-RATE to 8.236 Mb/s, a 720x480p (23.976 fps, 8 bpp) doesn’t even need compression at all! Setting a TARGET BIT-RATE of higher than 8.236 Mb/s would be redundant too!

    A 1920x1080p (23.976 fps, 8 bpp) video in its *UNCOMPRESSED* form has a BIT-RATE of 398 Mb/s.

    But if you wanted to compress this 1920x1080p (23.976 fps, 8 bpp) video to a TARGET BIT-RATE of 6 Mb/s, that requires a great compressor!

    Yeah, 398 Mb/s is significantly larger than 6 Mb/s! That’s a 392 Mb/s difference! The only way to go from 398 Mb/s to 6 Mb/s is to throw away a lot of information using a LOSSY codec like AVC.

    A 1920x1080p (23.976 fps, 8 bpp) in its UNCOMPRESSED form contains 16588.8 kb/frame or 16.6 Mb/frame.

    For a 1920x1080p (23.976 fps, 8 bpp) video, if you use a constant bit-rate of 6 Mb/s, then the compression codec is limited to 250 kb/frame or 0.25 Mb/frame! That is a 16.35 Mb/frame difference!

    That is a lot of compression needed!

    Since AVC is an INTERFRAME compressor, depending on your AVC compression settings (like how many reference frames), it can recreate the current frame using information from the previous reference frames. So the amount of information an AVC decoder has to recreate a frame may be larger than 250 kb for some frames for a TARGET BIT-RATE of 6 Mb/s.

    For example, in our case scenario, frame#0 is limited to 250 kb. But frame#1 uses frame#0 as a reference. Let’s assume frame#1 contains 250 kb of new information. So in total, frame#1 will be recreated from 500 kb of information.

    But even still, this is still a far cry from 16.6 Mb/s (uncompressed).

    A 1920x1080p (23.976 fps, 8 bpp) at 6 Mb/s does not contain 1920×1080 pixels of information. It contains far less information than is needed to accurately decode back to 1920×1080 uncompressed. On average, it will only have about 250 kb/frame of information to work with.

    Whereas a 720×480 (23.976 fps, 8 bpp) at 8 Mb/s essentially does not need any compression at all. At 8 Mb/s, it will contain on average about 334 kb/frame of information to work with.

    334 kb/frame > 250 kb/frame

    That is why in a lot of cases, you will get better results *upscaling* a DVD 720×480-8Mb/s video to 1920×1080 than you do *decoding* a YouTube 1920×1080-6Mb/s video back to 1920×1080.

    Cheers!

  • Dave Haynie

    March 4, 2012 at 8:39 pm

    [John Bean] “I understand very well how compression works and that AVC is a much better compressor than MPEG-2.”

    I don’t think y’do… otherwise, you wouldn’t come up with such horribly incorrect calculations.

    [John Bean] “An *UNCOMPRESSED* 720x480p (23.976 fps, 8 bpp) video only needs a 8.236 Mb/s bit-rate! That is UNCOMPRESSED! This works out to be only 346 kb/frame. To compress this video to 8 Mb/s works out to be 334 kb/frame only!”

    Nope. Full 720×480 video is 24bits/pixel (or more, but let’s stick to the standard stuff), otherwise dubbed 4:4:4 color, at ~24fps. Each frame is 8,294,400 bits, 24 of these per second yields 199,065,600 b/s, or ~200Mb/s, or ~25MB/s for uncompressed video. This is a bit simplistic, since it’s leaving out format overhead and other things. So I can easily point you to existing uncompressed standards, which will illustrate this for you.
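    The raw-rate arithmetic here is easy to check with a few lines of Python (a quick sketch only, assuming 4:4:4 color and a flat 24 fps as in the paragraph above):

    ```python
    # Sketch: verify the uncompressed 4:4:4 SD numbers quoted above.
    width, height = 720, 480
    bits_per_pixel = 24          # 8 bits per channel, 3 channels (4:4:4)
    fps = 24                     # ~24 fps, as assumed in the post

    bits_per_frame = width * height * bits_per_pixel
    bits_per_second = bits_per_frame * fps

    print(f"{bits_per_frame:,} bits/frame")     # 8,294,400 bits/frame
    print(f"{bits_per_second / 1e6:.1f} Mb/s")  # ~199.1 Mb/s uncompressed
    ```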

    The first common uncompressed digital standard was SMPTE D1 (aka ITU-R 601, Rec. 601), which runs 720×480, 30fps NTSC video in a number of different uncompressed (aside from color decimation) modes. The standard 4:2:2 D1 is 173Mb/s… a bit lower than my calculation due to the gain in size from 30fps versus the loss in size due to the 4:2:2 subsampling. Feel free to look this up… it’s normal, everyday stuff I suspect most folks here have been working with for years (I started in digital video back in the 80s… it wasn’t pretty). The peak D1 rate is around 275Mb/s for 4:2:2:4 video, this includes an alpha channel. There’s a PDF of the spec here: https://www2.rohde-schwarz.com/file_6272/7BM19_0E.pdf.

    Take the D1 format, keep the 4:2:2 encoding, compress each frame using a mild JPEG-like compression of about 3.3x (yeah, a factor of 3.3, which is actually pretty small), and you get the DV50 standard used in many higher-end SD camcorders. D1 actually encodes a bit of the off-screen stuff you don’t actually need (it was intended to be exactly digital video, not computer video), so they cut that out for the camcorder formats, thus, the peak uncompressed rate is slightly lower than D1.

    Now take that same 24-bit YCrCb SMPTE D1, subsample 4:1:1, compress using a JPEG-like intraframe compression algorithm (discrete cosine transform lossy filtering with Huffman encoding… I’m sure you know all this, right?), and you get to the DV25 standard… 25Mb/s for standard 5:1 compressed digital video, the foundation of the camcorder industry prior to HD catching fire.

    Which of course should lead you to realize that standard definition at 8Mb/s in MPEG-2 is compressed about 15:1 from RAW, or 25:1 from the original uncompressed source. That’s significant, sure, but once you really grok the difference between intraframe-only and interframe compression, it’s no huge surprise that DVD video, as we all know, can actually look better than DV video, when well encoded from higher resolution sources.

    [John Bean] “A 1920x1080p (23.976 fps, 8 bpp) in its UNCOMPRESSED form contains 16588.8 kb/frame or 16.6 Mb/frame.”

    As with the SD, you need to check your math here. A 1920×1080 frame with the usual 8-bits per color contains 49,766,400 bits per frame, or 1,194,393,600 bits/second worth of information, or roughly 1.2Gb/s. Easy math… 6x as much resolution as SD, 6 x 200Mb/s = 1.2Gb/s. The HDV specifications don’t encode full HD; they store 1440×1080/30p at 25Mb/s using MPEG-2… that’s 48:1… almost twice the compression of DVD. And yet, there’s absolutely no question that HDV looks better on-screen than upscaled DV.

    The US broadcast standard for ATSC MPEG-2 is 19.4Mb/s, though it’s rare that anyone actually broadcasts just one stream in that channel. So leaving room for an SD channel or two, that’s usually a max of about 15Mb/s, including an audio stream. So broadcast HDTV is roughly 80:1 compressed from the original source material… and again, no one’s going to argue that it doesn’t look better on-screen than upscaled SD. I assume you’ve watched broadcast HD and aren’t trying to debate this, either, even if you didn’t understand the math.
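    The HDV and broadcast compression ratios quoted above follow directly from the ~1.2 Gb/s raw HD figure computed earlier in the post; a sketch (the 24 bpp, ~24 fps baseline is carried over from that calculation):

    ```python
    # Sketch: the compression ratios cited above, from the raw HD rate.
    raw_hd = 1920 * 1080 * 24 * 24      # bits/s at 24 bpp, 24 fps: ~1.19 Gb/s

    hdv_ratio = raw_hd / 25e6           # HDV: 25 Mb/s MPEG-2
    atsc_ratio = raw_hd / 15e6          # typical broadcast budget: ~15 Mb/s

    print(f"HDV  ~ {hdv_ratio:.0f}:1")  # ~48:1
    print(f"ATSC ~ {atsc_ratio:.0f}:1") # ~80:1
    ```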

    If you did nothing more than move from professionally encoded MPEG-2 at 15Mb/s to equally well encoded AVC, you could drop to 7.5Mb/s with no visual difference. Not all AVC is necessarily that well done; in particular, camcorders don’t always have the compute budget to fully employ everything you can within the AVC specifications. But that’s not an issue when preparing material for broadcast, or even for YouTube, necessarily. AVC really does have twice the coding efficiency of MPEG-2, conservatively, if done right.

    All this is about is psychovisual encoding… what does your brain care about, what doesn’t it care about. It’s certainly true that, all things equal, you will have more compression artifacts when you encode at 80:1 versus 25:1… no question about it. But all is not equal. When I run MPEG-2 on an SD image, I’ll get 45×30 macroblocks. On my 72″ screen, that means each macroblock in SD is 1.4″ x 1.1″. I’m going to very definitely see even a small bit of compression noise here. That full HD image delivers 120×68 macroblocks… each block is 0.5″ x 0.5″ … there could be far more per-block noise and it would be less visible. Not only that, but for the same image, each block is covering less area… so the likelihood of compression noise is far less.
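    The macroblock geometry above can be sketched the same way (the 16:9 aspect ratio is an assumption here; the post only gives the 72″ diagonal):

    ```python
    import math

    # Sketch: on-screen macroblock size for a 72" diagonal, 16:9 panel.
    diag = 72.0
    w = diag * 16 / math.hypot(16, 9)   # ~62.8" wide
    h = diag * 9 / math.hypot(16, 9)    # ~35.3" tall

    sd_blocks = (720 // 16, 480 // 16)  # 45 x 30 macroblocks (16x16 px each)
    hd_blocks = (1920 // 16, 68)        # 1080/16 = 67.5; the post rounds to 68

    print(f'SD block: {w/sd_blocks[0]:.1f}" x {h/sd_blocks[1]:.1f}"')  # ~1.4" x 1.2"
    print(f'HD block: {w/hd_blocks[0]:.2f}" x {h/hd_blocks[1]:.2f}"')  # ~0.52" x 0.52"
    ```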

    And that’s all for MPEG-2… AVC can do far more complex things with macroblocks, very, very long GOPs (I-Frames only when you need them, etc). Thus, the same visual quality at half the bitrate.

    -Dave

  • John Bean

    March 5, 2012 at 6:49 am

    My bad @Dave. I mistakenly used 8 bpp when it is supposed to be 24 bpp (8 bits per RGB channel = 8×3 = 24). A simple slip of the mind that there are 3 channels of color to account for.

    But at least now I know you took the time to carefully go over my analysis. :p

    Even still, correcting that mistake still does not change the conclusion because the BITS PER PIXEL cancel out in the final overall calculation anyway. I’ll get back to that in a second.

    What you said about the MPEG-2 and AVC codecs is correct in regards to quality (ie. macro-block size etc). AVC is indeed much better than MPEG-2.

    Your claim that a 1920x1080p (23.976 fps, 24 bpp) video at 6 Mb/s is of higher quality than a 720x480p (23.976 fps, 24 bpp) video at 8 Mb/s would be correct … only if the compression was LOSSLESS.

    But you cannot achieve LOSSLESS compression at those high rates for 1920x1080p-6Mb/s videos!

    I am not sure if you understand the role and importance of the BIT-RATE setting.

    The target BIT-RATE setting you select during encoding (compression) limits how much bits the encoder can use to encode all the FRAMES that make up a video PER SECOND.

    The encoding codec must find a way to divide the allowable BITS PER SECOND among all the FRAMES that make up a video PER SECOND. For example, for 24 fps and a target bit-rate of 6 Mb/s, the encoder has to divide 6 Mb among the 24 frames in each second of the video in order to satisfy the target bit-rate requirement.

    The target BIT-RATE setting is how you control the file size of the video. A higher bit-rate leads to a bigger file size.

    The target BIT-RATE setting defines the minimum processing rate at which a playback device has to operate in order to process those video frames for display without frame dropping. For example, if your video has a BIT-RATE of 25 Mb/s but the playback device can only process 10 Mb/s, there will be lots of dropped frames (choppy video).

    Here’s another example. You can burn a Bluray ISO image onto a DVD disc so long as your videos have a BIT-RATE less than 28 Mb/s because that is the maximum BIT-RATE that can be read from a DVD disc. Conversely, Bluray discs limit videos to 40 Mb/s.

    In other words, the BIT-RATE controls the video’s BANDWIDTH requirements.

    So if you follow, then the BIT-RATE is the setting that constrains the allowable BITS PER FRAME.

    If your target BIT-RATE is CONSTANT, then the BITS PER FRAME is CONSTANT too. If your target BIT-RATE is VARIABLE, then the BITS PER FRAME is VARIABLE too.

    For VARIABLE BIT-RATE, the encoder takes bits away from frames with less activity (ie. low frequency information) and gives it to frames with high activity (ie. high frequency information). Frames with more high frequency information have more information to encode and hence require more bits.

    So a higher BIT-RATE means the encoder has more bits to work with … ie. it can encode with more BITS PER FRAME.

    If you follow, then there is just no way an encoder can compress a 1920x1080p (23.976 fps, 24 bpp) video to a bit-rate of 6 Mb/s without having to restrict the allowable BITS PER FRAME to a very small value. The compression ratio is just way too high!

    And as I’ll show again below with the corrected calculation … a 1920x1080p (23.976 fps, 24 bpp) video with 6 Mb/s will have a very small BITS PER FRAME value … that is also much smaller than the BITS PER FRAME for a 720x480p (23.976 fps, 24 bpp) video with 8 Mb/s.

    So here are the calculations again after fixing the bits per pixel to 24 bpp:

    1920x1080p (23.976 fps, 24 bpp) compressed to a target bit-rate of 6 Mb/s
    {uncompressed video}=49766.4 kb/frame ->[compression 200 to 1]-> 250 kb/frame={compressed video}
    or in reverse ..
    {compressed video}=250 kb/frame ->[decompression 1 to 200]-> {uncompressed video}=49766.4 kb/frame

    720x480p (23.976 fps, 24bpp) compressed to a target bit-rate of 8 Mb/s
    {uncompressed video}=8294.4 kb/frame ->[compression 25 to 1]-> 334 kb/frame={compressed video}
    or in reverse ..
    {compressed video}=334 kb/frame ->[decompression 1 to 25]-> 8294.4 kb/frame

    => a 720x480p (23.976 fps, 24bpp) video compressed to a target bit-rate of 8 Mb/s will have:
    1. a smaller compression ratio than a 1920x1080p (23.976 fps, 24 bpp) video compressed to a target bit-rate of 6 Mb/s

    2. a larger bits per frame of compressed data than a 1920x1080p (23.976 fps, 24 bpp) video compressed to a target bit-rate of 6 Mb/s

    Again, this can be visually verified that for the majority of the cases, a high quality DVD movie looks visually clearer and better than any YouTube 1920x1080p video.

    * * * * *

    Here’s the analysis again corrected for 24 bpp.

    For Video#1: 1920x1080p (23.976 fps, 24 bpp) @6 Mb/s

    bits per frame (uncompressed)
    = pixels per frame * bits per pixel
    = (1920×1080 ppf) * 24 bpp
    = 49766400 bits/frame
    ~= 49766.4 kb/frame
    ~= 49.8 Mb/frame

    bits per second (uncompressed) = bits per frame * frames per second
    = 49766400 bpf * 23.976 fps
    = 1,193,199,206.4 bps
    ~= 1193.19 Mb/s
    ~= 1.19 Gb/s

    =>for a 1920x1080p (23.976 fps, 24 bpp) video, in its uncompressed form, it has a BIT-RATE of about 1193 Mb/s.

    compression ratio = target bit rate / uncompressed bits per second
    = 6 Mbps / 1193.19 Mbps
    = 6,000,000 bps / 1,193,199,206.4 bps
    = 0.0050285
    = 198.87 to 1
    ~= 200 to 1

    =>to achieve a TARGET BIT-RATE of 6 Mb/s, for a LOSSLESS compression, a video stream of 1920x1080p (23.976 fps, 24 bpp) requires a compression ratio of about 200 to 1.

    That is, for LOSSLESS compression, roughly 200 bits of uncompressed video information must get encoded to 1 bit to achieve a TARGET BIT-RATE of 6 Mb/s.

    Or stated differently, for LOSSLESS compression, 1 bit of compressed video information must decode to 200 bits of uncompressed (original) video information, if the desired TARGET BIT-RATE is 6 Mb/s.

    On a PER FRAME basis, a 1920×1080 video frame compressed by a factor of 0.0050285, gives:
    => bits per frame * compression ratio
    = 49766400 bpf * 0.0050285
    ~= 250250 bits/frame
    ~= 250 kb/frame (unchanged from the last calculation)

    =>for LOSSLESS compression, a video stream of 1920x1080p (23.976 fps, 24 bpp) at 6 Mbps has 250 kb/frame of information.

    =>to obtain a TARGET BIT-RATE of 6 Mb/s for LOSSLESS compression, the encoding video codec must compress 49766.4 kb/frame of ORIGINAL UNCOMPRESSED video information into about 250 kb/frame.

    {uncompressed video}=49766.4 kb/frame->[compression]->250 kb/frame={compressed video}

    For Video#2: 720x480p (23.976 fps, 24 bpp) @8 Mb/s

    bits per frame (uncompressed) = pixels per frame * bits per pixel
    = (720×480) ppf * 24 bpp
    = 8294400 bits/frame
    = 8294.4 kb/frame
    ~= 8.29 Mb/frame

    bits per second (uncompressed) = bits per frame * frames per second
    = 8294400 bpf * 23.976 fps
    = 198,866,534.4 bits/s
    ~= 198.9 Mb/s
    ~= 0.199 Gb/s

    =>for a 720x480p (23.976 fps, 24 bpp) video, in its uncompressed form, it has a BIT-RATE of about 198.9 Mb/s

    compression ratio = target bit rate / uncompressed bits per second
    = 8 Mbps / 198.9 Mbps
    = 8,000,000 bps / 198,866,534.4 bps
    = 0.040228
    ~= 25 to 1

    =>to achieve a TARGET BIT-RATE of 8 Mb/s, for a LOSSLESS compression, a video stream of 720x480p (23.976 fps, 24 bpp) requires a compression ratio of about 25 to 1.

    That is, for LOSSLESS compression, roughly 25 bits of uncompressed video information must get compressed to 1 bit to achieve a TARGET BIT-RATE of 8 Mb/s.

    Or stated differently, for LOSSLESS compression, 1 bit of compressed video information must decode to 25 bits of uncompressed (original) video information, if the desired TARGET BIT-RATE is 8 Mb/s.

    On a PER FRAME basis, a 720×480 video frame compressed by a factor of 0.040228, gives:
    => bits per frame * compression ratio
    = 8294400 bpf * 0.040228
    ~= 333667 bits/frame
    ~= 333.7 kb/frame (unchanged from last calculation)

    =>for LOSSLESS compression, a video stream of 720x480p(23.976 fps, 24 bpp) at 8 Mb/s has 334 kb/frame of information

    =>to obtain a TARGET BIT-RATE of 8 Mb/s for LOSSLESS compression, the encoding video codec must compress 8294.4 kb/frame of ORIGINAL UNCOMPRESSED video information into 334 kb/frame.

    {uncompressed video}=8294.4 kb/frame->[compression]->334 kb/frame={compressed video}

    Cheers!

  • Dave Haynie

    March 5, 2012 at 9:47 am

    [John Bean] “My bad @Dave. I mistakenly used 8 bpp when it is suppose to be 24 bpp (8 bits per RGB channel=8×3=24). A simple slip of the mind that that there 3 channels of color to account for.”

    Actually, the calculations you did were at best for a monochrome image. You reported 345600 !BITS! for an SD frame. That’s the number of pixels, not the number of bits.

    [John Bean] “Your claim that a 1920x1080p (23.976 fps, 24 bpp) video at 6 Mb/s is of higher quality than a 720x480p (23.976 fps, 24 bpp) video at 8 Mb/s would be correct … only if the compression was LOSSLESS.

    But you cannot achieve LOSSLESS compression at those high rates for 1920x1080p-6Mb/s videos!”

    I said absolutely nothing about lossless compression, other than the fact that you need about 200Mb/s for lossless SD and 1.2Gb/s for lossless HD. NO ONE IS DELIVERING LOSSLESS VIDEO. Get over it. Outside of some very high end professional editing systems, or the output of your HDMI connector, you are never going to find lossless video.

    Sure, you can render it from Vegas if you like, but unless it’s from a lossless source, there’s little point.

    You still seem to have not the slightest clue about how psychovisual compression algorithms work, or much of anything else about video and compression. There is just far, far more you need to know to even start having a useful conversation about this here.

    [John Bean] “If you follow, then there are just no way an encoder can compress a 1920x1080p (23.976 fps, 24 bpp) video to a bit-rate of 6 Mb/s without having to restrict the allowable BITS PER FRAME to a very small value. The compression ratio is just way too high!”

    While you reproduced my uncompressed/RAW numbers properly this time (after being given the answer, twice), you still have no idea how MPEG, and in particular, AVC compression work. You really don’t, and you know it.

    It is clearly pointless suggesting to you where you can find real-world examples that refute your thesis, such as broadcast HD video (which is far better looking than DVD) or lower-bitrate modes on pretty much every AVCHD or MPEG camcorder, which often go as low as 6Mb/s, and still, on many subjects, will look better than upscaled SD. Not always… again, you really have to understand how MPEG works to understand why all of my claims are true.

    Enough on this thread. Learn some things before speaking about them. There are a fair share of beginners here, and you’re already misleading them on numerous issues.

    -Dave

  • John Bean

    March 5, 2012 at 10:09 pm

    Dave,

    Please don’t get upset here.

    Your claim that a YouTube 1920x1080p video at low bit-rates like 6 Mb/s is far superior to a DVD video at standard MPEG bit-rates like 8 Mb/s just doesn’t make sense mathematically, nor can it be validated with any visual comparisons.

    Most YouTube 1080p videos are closer to 3.5 Mb/s too. Since AVC is better than MPEG-2, at best, you may only be able to claim that they are comparable in quality, i.e. equal.

    Just think about this: one uncompressed 1920×1080 video frame requires ~50 Mb (1920×1080 * 24 bpp).

    One frame alone from a 1920×1080 video already exceeds the low bit-rate of 6 Mb/s. Low bit-rates of 3-6 Mb/s that you typically find with 1080p streaming websites and consumer camcorders just cannot give you enough bits to store great quality 1080p video.

    Whereas one frame from a 720×480 video requires only 8.29 Mb (720×480 * 24 bpp). Then, using the INTER-FRAME encoding algorithms of the codec, subsequent frames can be recreated without many more new bits of information. This is all well within the standard bit-rates of 7-9 Mb/s for DVD videos.

    Cheers!

  • Dave Haynie

    March 6, 2012 at 9:05 am

    [John Bean] “Your claim that a YouTube 1920x1080p video at low bit-rates like 6 Mb/s is far superior than a DVD video at standard MPEG bit-rates like 8 Mb/s just doesn’t make sense mathematically, nor can it be validated with any visual comparisons.”

    Do not put words in my mouth: you know very well that’s not what I said. What I said is that a 6Mb/s 1080p video may well be visually superior to an 8Mb/s DVD, when viewed on a high resolution display. It actually does make sense mathematically, but even more sense when discussing the cognitive psychology aspect. Since I minored in Cog-Psych, and have a BS degree in Math (and EE), I feel pretty qualified to claim this does make sense.

    However, sir, I see your gauntlet thrown, and I happily pick it up. A battle of wits, to the death! The game starts here: https://forums.creativecow.net/readpost/24/945522. It ends when you choose, and we both drink!

    -Dave

  • John Bean

    March 6, 2012 at 5:36 pm

    From this post: https://forums.creativecow.net/readpost/24/945296

    Dave wrote:
    That’s not right. He’s got very high quality 480p, but it’s still just 480p. If you upload very good quality 1080/24p to YouTube (I’ve sent them 20Mb/s AVC in the past), they will render it using the excellent x264 AVC CODEC to 1080/24p at about 6Mb/s.

    While this isn’t anything to write home about, it’s actually pretty close to the lowest bitrate setting on my Panasonic HMC40. AVC at 6Mb/s is going to suffer terribly on fast motion, but it’s going to look much better than anyone’s 480p when displayed on a good television… not to mention the fact that the 1080p doesn’t need to be upscaled, only the 480p does.

    I’m pretty sure in this quote you are claiming a 1080p video at 6 Mb/s is superior to a 480p video at 23 Mb/s. And here we are talking about a 480p video encoded using AVC too!

    In regards to Dave’s understanding of BITRATE, from the same post, Dave wrote:
    The bitrate is going to be a very big factor when it comes to the quality of upscaling, which he’s got to do here. When you upscale compressed video, you’re upscaling the compression artifacts, too. The Canon 480p should look better upscaled than regular DV-class video, as long as he’s not shooting overly fast motion video.

    Regardless of what video codec you are using, the BITRATE is a big factor when it comes to DECODING because it limits the number of bits the encoder can use to encode all of the frames in any given 1 second interval of the video.

    A 1080p video at 6 Mb/s is limited to 6 Mb per second regardless of what codec is being used.

    It is then up to the codec to adjust its parameters accordingly so as to satisfy the desired bitrate requirement. Maybe it means using bigger blocks. Maybe it means less sensitivity to temporal changes. Who knows?!

    But in the extreme cases like we have here, it does not matter because it is not necessary to know the details of the codec’s inner workings.

    In the context of this thread it is not necessary to get into the gritty details of any particular codec. Hence, it is not necessary to have a degree in Cog-Psych and Math to understand what is going on. Just basic math and an understanding of video structure.

    In the best case scenario for an INTER-FRAME codec, if all the frames in 1 second are exactly the same, then the encoder can use almost all the available 6 Mb for the first frame and copy it over for all the remaining frames. But 6 Mb is still far short of the 50 Mb required for one frame of 1080p video. If your 1080p video was just a picture (repeated over and over), you still could not achieve LOSSLESS compression with a bitrate of 6 Mb/s.

    In the worst case scenario for an INTER-FRAME codec, if all the frames in 1 second are totally different from one another, the encoder must divide the 6 Mb among each of these frames. In the case of 30 fps, that is 200 kb for each frame (6 Mb / 30). In this case, this INTER-FRAME codec essentially behaves like an INTRA-FRAME codec.

    In other words, a 1080p video at 6 Mb/s will have LOTS and LOTS of compression artifacts from its worst case scenario to its best case scenario.

    Best case scenario for a 480p video: if all the frames are exactly the same in 1 second, only 8.29 Mb is required for LOSSLESS compression. Yes, LOSSLESS compression! This 8.29 Mb/s is well within the range for standard DVD videos. If your video is just one picture frame (repeated over and over), you can set your MPEG-2 encoder to LOSSLESS compression!

    Worst case scenario for a 480p video: if all the frames are completely different in 1 second, then for 30 fps at 8 Mb/s, each frame gets 267 kb (8 Mb / 30). Now consider the bit-rate of the 480p-23 Mb/s video in this thread. That is 767 kb per frame in the worst case scenario (23 Mb / 30).
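    The worst-case per-frame budgets in the last two paragraphs are a single division each; as a sketch:

    ```python
    # Sketch: worst-case per-frame budgets when every frame differs and the
    # 30 fps bit budget must be split evenly across the second.
    fps = 30
    cases = [("1080p @ 6 Mb/s", 6), ("480p @ 8 Mb/s", 8), ("480p @ 23 Mb/s", 23)]
    for label, mbps in cases:
        kb_per_frame = mbps * 1_000_000 / fps / 1000
        print(f"{label}: {kb_per_frame:.0f} kb/frame")
    # -> 200, 267, 767 kb/frame respectively
    ```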

    So it’s really not hard to see that a 1080p video at 6 Mb/s will have far more compression artifacts than a 480p video with a very high bit-rate.

    So the debate really boils down to this: upscaling algorithms vs compression algorithms

    Can a 480p video at very high bit-rates upscale to 1080p and be of higher quality than a 1080p video encoded at low bit-rates like 6 Mb/s?

    My position is that a 480p video encoded at very high bit-rates will have far fewer compression artifacts. Because the compression artifacts are indeed low, when you then upscale to 1080p, the compression artifacts will still be much lower than a 1080p video encoded at very low bit-rates like 6 Mb/s.

    Cheers!

  • Aleinyo Kims

    July 10, 2015 at 6:50 am

    John Bean, you are a genius, it is making more sense now.

