You have to run the numbers on the resolutions, which is hard when terms like ‘HD’, ‘2K’ and ‘4K’ are thrown around without anyone specifying what the actual resolution is. Such terms are just umbrella terms describing a range of resolutions.
HD usually means 1920 × 1080 (or ‘Full HD’), but sometimes 1280 × 720 is also called HD.
2K can mean Native (2048 × 1080), Flat cropped (1998 × 1080), or Cinemascope cropped (2048 × 858). I’m going to assume that if you’re delivering for web, they want Native. But it’s best to confirm with the client exactly what resolution they want.
4K can mean Full Frame (4096 × 2160), Flat cropped (3996 × 2160), or Cinemascope (4096 × 1716). You should check with the DOP exactly what resolution they are shooting.
Once you have confirmed the exact pixel resolution you’re shooting at and the exact resolution you’re finishing at, then you can run the numbers. So let’s assume they want a 2048 × 1080 delivery, and the DOP is shooting 4096 × 2160. Shooting resolution divided by finishing resolution is 2, meaning you can scale the footage up to 200% without losing any quality at all.
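The calculation above can be sketched as a tiny helper. This is just an illustration; the function name and interface are my own invention, and it assumes you compare widths of footage with matching aspect handling.

```python
# Hypothetical helper: how far you can scale footage up in the timeline
# before dropping below native resolution. 100% = no scaling.

def max_lossless_upscale(shoot_width: int, finish_width: int) -> float:
    """Maximum upscale (as a percentage) with no quality loss."""
    return shoot_width / finish_width * 100

# 4096 × 2160 footage delivered at 2048 × 1080:
print(max_lossless_upscale(4096, 2048))  # 200.0
```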
The general rule of thumb is that you can scale up an additional 10–20% and the quality loss will be negligible (i.e. imperceptible to the naked eye). I personally don’t love doing that, though: even if the quality loss is imperceptible, it can still compromise the colorist’s or VFX artist’s ability to work, and you never know how someone further down the track will compress your film. Any compression will be aggravated by quality loss.
If you’re finishing in an HD/2K format and shooting 8K, you’re laughing. 8K covers a bunch of different resolutions which I can’t be bothered listing, but it generally just means a pixel width of approximately 8000 pixels. So you could essentially scale up 300–400% or thereabouts without any quality loss. This is approximate, and again, you need to confirm the precise resolutions before making precise calculations.