
The Bitrate Bible

Hi, recently I made this bitrate table, which illustrates how bitrates and resolutions increase, both for audio & video (a geometric progression). It displays plenty of in-between steps that people are usually unaware of (the red and purple steps) and puts forward the most logical steps to use (the green steps). So, after reading IgorC's topic on Opus bitrates, I thought it could be useful to share it, to avoid discussing steps that are only approximations. The green steps highlighted in yellow are somewhat subjective, as they underline the best quality/size ratio. Depending on whether you care more about size or quality, you should lean toward 128Kbps + 1080p@8Mbps (size efficient) or 160Kbps + 1080p@12Mbps (quality/transparency efficient). This table unifies all bitrates, audio & video (just like Maxwell unified electromagnetism ;) ). I took the old MP3 CBR bitrate table that we all know (128-160-192-224-256 etc.), deconstructed it to understand the logic behind it, and reconstructed the whole picture like a puzzle; then I built the most exact equivalent for video. I hesitated between posting this here or on Doom9 ... but I fear that all those Doom9 people using -crf (Constant Rate Factor) would not have understood the purpose of the table. I am a two-pass user & I built this table to find the logical bitrates to use for video encoding.

This table also highlights a very important detail when it comes to choosing the right bitrate: there is a logical reason why people hesitate between 128Kbps and 160Kbps nowadays ... something happens between these two steps: the stepping of the geometric progression switches from +8 to +16, which means that 128Kbps is the last of the +8Kbps steps & 160Kbps is the first of the +16Kbps steps. It's not just about listening tests; as the transparency point is a zone rather than a point, the final choice is affected by the way this table is built. If you are size-conservative, you will naturally be reluctant to switch to the next stepping (but it is really worth it IMHO, quality-wise).
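The stepping rule can be sketched in a few lines. This is only my reading of the table (step = one eighth of the current octave's base value); the exact in-between values are an assumption, not the author's actual spreadsheet:

```python
def bitrate_ladder(start=64, stop=320):
    """Geometric bitrate ladder as described: within each octave
    [v, 2v) the step is v/8, so the step doubles every octave
    (+8 up to 128 Kbps, +16 up to 256 Kbps, +32 beyond)."""
    ladder, v = [], start
    while v <= stop:
        ladder.append(v)
        octave_base = 2 ** (v.bit_length() - 1)  # largest power of two <= v
        v += octave_base // 8
    return ladder

ladder = bitrate_ladder()
# [64, 72, ..., 120, 128, 144, 160, 176, ..., 256, 288, 320]
# 128 closes the +8 run; 144 is one of the "in-between" steps;
# 160 is the familiar step from the classic MP3 CBR table
```

Running it shows exactly the break described above: the difference into 128 is 8, the difference out of it is 16.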

By the way, this table also shows that the engineer who chose 1920x1080 as the 2K standard instead of 2048x1152 should be fired ;)

Hope it helped ;)


Re: The Bitrate Bible

Reply #1
I think trying to assign quality levels to audio above about 100 kbps is kind of meaningless. Stereo in this range is generally transparent, so discussing abstract quality doesn't make sense. Or at least you're describing a thing that probably only exists in your imagination and not the real world.

A better way to think about it may be in terms of the probability that an entire track will be transparent. It is high at 128k, but doesn't reach 1 until lossless bitrates.

Re: The Bitrate Bible

Reply #2
Well, we use this scale every day, so it has an impact in real life ... now I agree that transparency depends on the input complexity, so the very concept of bitrate steps is debatable ... the thing is: I didn't really invent this table, it was already hidden there (most of the data is objective, so it's not just "in my mind", except the yellow underlining); I just tried to put in plain sight how it works. Feel free to use whatever step you like, or to use TVBR or -crf. Units & standards are handy nevertheless ;)

Re: The Bitrate Bible

Reply #3
Mind sharing which codec we are talking about for the video part of this graph? The graph makes no sense if it's codec-universal, considering how differently MPEG2, h.264 and h.265 would perform at a given bitrate and resolution.

I sometimes toy with concepts like: if VCD were a format coming out now, using Opus and h.265, what bitrates and resolution would it get away with? Same with DVD.
Would VCD be able to offer anything more than stereo? (Most probably yes)
Would VCD be able to offer anything better than SD quality? (I have no idea)
What about DVD? Could it offer proper FullHD (or more) with stellar multichannel 5.1 audio? (I guess yeah)

Re: The Bitrate Bible

Reply #4
This table was built with x264 slower & x265 superfast (2 presets which have comparable speed) & Opus/iTunes CVBR in mind. But it doesn't matter.

Bitrates are bitrates; no matter the codec, the scale remains the same, and this scale has been there since the beginning of MPEG codecs. An efficient (modern) codec will indeed need a lower step in the scale to achieve transparency, but that is not what I intended to discuss. I just wanted to reveal the full picture of the geometric logic behind the bitrate scale. Maybe I should not have highlighted some steps in yellow (they only reflect my own use) ... it obviously misleads people.

 

Re: The Bitrate Bible

Reply #5
I just want to say that the "Unnoficial K Name" column is incorrect.

1K is Full HD = 1080p 
2K is WQHD = 1440p (twice the area of 1K)
4K is UHD = 2160p  (four times the area of 1K)

Re: The Bitrate Bible

Reply #6
Quote
1K is Full HD = 1080p
2K is WQHD = 1440p (twice the area of 1K)
4K is UHD = 2160p  (four times the area of 1K)

I'd hate to be "that guy" but nope, you're wrong. https://en.wikipedia.org/wiki/2K_resolution
The "K" moniker is based on the width, or horizontal resolution.
1K would be equivalent to 1024x768, and 2K is 1920x1080 (most call it 1080p, though I sometimes call it 2K).
3K would be 2560x1440 or 3440x1440 (neither makes much sense, but the "K" moniker never truly made sense anyway); I've never seen 3K displays advertised as such though.
And 4K is obviously 3840x2160, as that is what popularised it, although technically speaking 2K is 2048x1080 and 4K is 4096x2160.

If they had used an existing standards-based value, I guess 4096x2160 should have been called 4KiB(i) resolution?

K is so easy to type though. 2K, 4K, 8K. But using p is more correct (it's actually exact); Youtube uses "p", and I seem to recall they used "K" but have now dropped it.
If somebody says 2160p without mentioning the aspect ratio then assume it's 3840x2160 resolution.

The "K" thing should probably be dropped from marketing stuff as it confuses even the likes of you, JAZ :P

The math is such: Round(width / 1000.0)=K
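That rounding rule is trivial to encode; a quick sanity check against the resolutions above (the function name is mine):

```python
def k_name(width):
    # The "K" moniker rounds the horizontal resolution to the nearest 1000
    return f"{round(width / 1000.0)}K"

# k_name(1920) and k_name(2048) both give "2K";
# k_name(3840) and k_name(4096) both give "4K";
# k_name(1024) gives "1K", k_name(2560) and k_name(3440) give "3K"
```

So DCI 2048x1080 and consumer 1920x1080 really do land on the same "2K" label, which is exactly why the moniker confuses people.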

Re: The Bitrate Bible

Reply #7
Bitrate being just one of n parameters that make up audio or video encoding, I honestly don't see much practical generic use for such tables. Not to mention it being called a "bitrate bible"...

Besides, unless one's trying a new CODEC, doesn't one usually settle for this or that bitrate once they're happy with it?

edit: extra second sentence.
Listen to the music, not the media it's on.
União e reconstrução

Re: The Bitrate Bible

Reply #8
2 pass instead of crf, why?
PANIC: CPU 1: Cache Error (unrecoverable - dcache data) Eframe = 0x90000000208cf3b8
NOTICE - cpu 0 didn't dump TLB, may be hung

Re: The Bitrate Bible

Reply #9
smok3: the answer is complex so hang on ;)

2pass and crf output the same quality at the same size.
2pass is slower but filesize is predictable, while crf is faster but filesize is unpredictable.
If you want predictable filesize you need to use 2pass.

crf is a "trick" to achieve the best quality possible the fastest way possible, but try crf with different inputs and you will realize that the output bitrate varies a LOT, even with input files that share similarities (same resolution/framerate/color depth ...)

crf is better than 2pass as long as you assume that slower presets output better quality, because with the time you save using only 1 pass you can increase quality by using a slower preset (which is true for x264 but, sadly, not for x265).

So, I do understand your question, which is a clever one.

But as soon as this assumption is false (that a slower preset outputs better quality), the main advantage of crf over 2pass is lost.

Warning: the following claim needs to be proven.
The problem is that x265 is actually flawed; there are in fact 2 codecs in one: ultrafast & superfast on the one hand, & the other presets on the other. Starting at veryfast, x265 becomes blurry, & the assumption that a slower preset gives you better quality is false.

Due to this hidden flaw, for a very long time I thought x264 was better/sharper than x265, until I realized x265 was better if you don't use its blurry presets.

At some point (when I have time) I will report this on the x265 development thread on doom9 & prove my claim with screenshots.
It just seems that people on doom9 rely too much on metrics & simply don't watch the output.

Furthermore, many people scorn the ultrafast & superfast presets, as they score lower in SSIM & the same crf outputs a bigger file with these two presets. They deem this result proof that these presets are worse. If you watch the output video on a 4K screen & compare screenshots with StaxRip, you will soon realize that the exact opposite happens: these 2 presets may score lower in SSIM, but they output better quality in real life.

The flaws I am talking about are almost unnoticeable on my 24-inch 2K screen, but they become blatant on my 48-inch 4K screen.

I don't know the technical explanation for this; maybe it's not even a bug. Maybe it's the H265 features used in the slower presets that naturally create this blurriness.
I have read plenty of people saying bad things about SAO (I am not alone in complaining about x265 blurriness), so I tested with/without SAO to see if it was the culprit; unfortunately the answer was no, SAO is not solely responsible for this sudden loss of sharpness.

Explaining this on doom9 would take me ages as I would have to prove my claims.

In the end, I don't encode video, I just buy new hard drives ... so I don't care. (But in the past vorbis aoTuV already had a commit due to one of my bug reports, so trust who you want ;) I am a real bitch when it comes to quality ... I wouldn't say that if I wasn't confident that I am right.)

I hope it's clearer now ... I don't claim that crf is bad, but I am convinced that, as of now, there is a bug/problem that makes crf with x265 almost useless, as you cannot improve quality at high bitrates by using a slower preset than superfast.

You may say that even sticking to superfast, I could use crf to achieve the same quality faster. You would be right. In the end, I like predictable filesize & I am ready to sacrifice speed for it.

Re: The Bitrate Bible

Reply #10
CRF is better, because it estimates the required bitrate for you. With only two-pass mode, you have to spend some time analyzing the video first for the amount of detail it contains. You can't be precise enough in the assessment if you use only your eyes and memory. At the same resolution and framerate, two videos can differ much more in the required bitrate, compared to sound, assuming it has some activity in all channels.

For example, let's say you're encoding a TV season, and there is one episode with rainstorm and ocean waves for the majority of its duration, and the others are mostly static scenes with talking heads. With average bitrate, you'd have to overcode most episodes to leave enough margin for quality, or maybe encode the entire season at once, but I am not aware of any tool doing this.

I'd say a DVD would comfortably fit 720p, which has twice the number of pixels (assuming they are all filled in) as standard definition, and h.264 is roughly twice as efficient. I have no experience with h.265. How would you have a power-hungry CPU from the future capable of h.265 (de)compression, and yet be stuck with the capacity of a CD... Maybe you could have the old standard 1CD sub-SD rip with decent stereo sound.

According to the bitrate table, SD requires only about 1 Mbit? Party like it's 1999?

I don't see anything notable in the fact that the bitrate steps grow. As numbers get bigger we don't care as much about precision, and computers like doublings rather than smooth growth.

Re: The Bitrate Bible

Reply #11
CRF is faster, not better. CRF is only "better" because we have a limited amount of resources. So using CRF we can trade speed for quality. In a world with unlimited CPU resources, with no encoding time limitation, we would all use 2pass.

It is a misunderstanding that this table says anything about quality by itself.

It only does because I added my opinion through my yellow underlining.

This table is neutral & codec-less. It is only maths (except the yellow cells, which are my opinion). The right way to use this table is to do your own listening tests & visual tests and see how they match the scale. It is not to search the scale for a perfect bitrate without any personal codec testing.

Quote: According to the bitrate table, SD requires only about 1 Mbit? Party like it's 1999?
You're expecting a higher bitrate because you're mixing MPEG2 with AVC/HEVC.
This table doesn't say anything by itself about quality. But this table, mixed with my own testing, tells me (and only me, unless you blindly trust me) that with x264/x265 at 432p (a 16/9 resolution) quality will be decent at 1Mbps, optimal at 1.5Mbps & starts to become overkill at 2Mbps (... but 432p will be crap anyway, even if you max out the possible quality for the resolution).

With MPEG2, I dunno, I never tested MPEG2. I just know that the bitrate will have to be higher, obviously.

How can I predict that? Because I thoroughly tested at 1080p with x264/x265, & for me at 2K (on a 4K screen) quality is decent at 8Mbps, optimal at 12Mbps & starts to become overkill at 16Mbps.

(Note: people who have only tested 1080p on a small 2K screen are likely to like lower bitrates better than I do ... because, well, "ignorance is bliss"; they'll realize their mistake when they upgrade their screen ...)

As the table is proportional, I can extrapolate all this info. But I repeat: it's not the table itself that teaches you that, it's a mixture of this objective mathematical table & your own subjective codec tests. Nobody can test codecs for you.
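The extrapolation itself is plain proportionality: scale the bitrate by the pixel count. A sketch under that assumption (the function and the 768x432 resolution are my illustration, not the table's):

```python
def scale_bitrate(kbps, from_res, to_res):
    # Proportional extrapolation: bitrate scales with the pixel count
    (w1, h1), (w2, h2) = from_res, to_res
    return kbps * (w2 * h2) / (w1 * h1)

# 1080p has 1920*1080 = 2,073,600 pixels; 432p (16/9) has 768*432 = 331,776,
# a ratio of 6.25, so 8 Mbps at 1080p scales to 1.28 Mbps at 432p and
# 12 Mbps scales to 1.92 Mbps -- the same ballpark as the figures above
print(scale_bitrate(8000, (1920, 1080), (768, 432)))  # 1280.0
```

Whether bitrate should scale exactly linearly with pixel count is itself debatable (codecs get relatively more efficient at larger frames), but it is the proportionality the table assumes.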

The only moment this table tells you something about quality is when there is a stepping change, because at that point the quality increase is not linear anymore; there is a stall, & being aware of this gap is interesting quality-wise.

Re: The Bitrate Bible

Reply #12
Quote
How can I predict that? Because I thoroughly tested at 1080p with x264/x265, & for me at 2K (on a 4K screen) quality is decent at 8Mbps, optimal at 12Mbps & starts to become overkill at 16Mbps.

If the implication here is that using a higher screen resolution than the source material and upsampling makes spotting compression artifacts easier, then I disagree.

Re: The Bitrate Bible

Reply #13
Quote
Furthermore, many people scorn the ultrafast & superfast presets, as they score lower in SSIM & the same crf outputs a bigger file with these two presets. They deem this result proof that these presets are worse. If you watch the output video on a 4K screen & compare screenshots with StaxRip, you will soon realize that the exact opposite happens: these 2 presets may score lower in SSIM, but they output better quality in real life.
You can only compare two files with extremely similar bitrate (otherwise it's apples and oranges imho), so in that case you would actually have to do a 2 pass version of each of the presets.

I use crf x264 stuff daily for 1080p stuff and bitrate varies a lot (from 4 to 20k average), however most of my stuff are short clips, so that may account for that.

Quote
2pass is slower but filesize is predictable, while crf is faster but filesize is unpredictable.
Yes, but the important part is that with crf, quality is at least slightly predictable, while with two-pass it's hit or miss. For example, 10 Mbps may be something you decide will cover most stuff just fine, but in reality stuff on disk will take unnecessary space or will look fugly (depending on the source).

I haven't done much with x265, are you doing 10 bit encodes?

Re: The Bitrate Bible

Reply #14
That's exactly what I've done: I compared each preset at 2pass, same bitrate (and then each green step against the others within a preset ... it took me a week because I also did it both for x264 & x265, 2pass & all presets at all crf ... which is ... hundreds of encodes, but the extract was only 30 sec) & I found that superfast was better than the slower presets, which was unexpected to say the least (so I double-checked with another sample & found the same result) ... as of now I don't care much, as I cannot afford to use the slower presets on 4K anyway ... so I just hope it will be fixed in a future x265 release ... I was sad & happy at the same time when I discovered how to avoid x265 blurriness ... happy because I finally found an x265 setting that would beat x264 slower (by a very slight edge) all the time ... and sad because it meant that x265 could be even better without that bug/bad behavior.

In the end, I use x265 superfast 2pass 12Mbps 10bit for 2K Blu-rays. If the source has a high bitrate but looks bad (this happens with bad cameras or TV captures) I transcode to 8Mbps or 6Mbps depending on how bad it looks. I tend to avoid 6Mbps because that is really, really lossy.

My lossy encoding strategy is to encode at one step in the scale above the step at which I can witness clear artefacts when I focus ... so, as I can regularly ABX killer samples at 128Kbps & see skin details washed out at 8Mbps ... I use 160Kbps+12Mbps ... the way I see it, 160Kbps is 128Kbps with a 32Kbps bulletproof vest ... and 12Mbps is 8Mbps with a 4Mbps bulletproof vest ... I like 128Kbps (audio) + 8Mbps (video) for streaming, but not for archival.

If that bulletproof vest is not enough, experience has shown me that increasing the bitrate higher is usually not worth it anyway.

As for 10 bit, I tested 10bit vs 8bit on a couple of 8bit BD sources; 10bit was clearly better IMHO. I was even surprised that it was such a clear-cut improvement; thinking it was all about rounding, I didn't expect it to be visible ... I don't do 8bit anymore.

A big problem with crf is that it's not handy for transcoding ... say you have an 8Mbps source & you want to transcode it to something close to 6Mbps; with crf you can never be sure that the result will actually be lower than 8Mbps ... crf works great as a guideline for beginners, but it's not precise at an advanced level if you know exactly where to hit to save space.

Re: The Bitrate Bible

Reply #15
Quote
say you have an 8Mbps source & you want to transcode it to something close to 6Mbps
So is there actually a real-world example of when/where/how one would decide to go that route?

Re: The Bitrate Bible

Reply #16
I am a CRF user as you feared. I use CRF (variable bitrate) because it gives a constant quality no matter what you throw at it. And yes, for each movie the resulting size will vary a lot! I don't care what the size of the movie is, because it will easily fit on my HDD.

This is the opposite of Constant Bitrate and 2 pass, where you either give too many bits (waste) or too few bits (quality loss).
Constant Bitrate should (only) be used when you have a bandwidth restriction. 2 pass should (only) be used when you have a specific size in mind (DVD/Blu-ray disc).
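For that fixed-size use case, the 2-pass target bitrate is just arithmetic. A sketch (the helper name and the zero-overhead assumption are mine; real muxing adds a percent or two of container overhead):

```python
def two_pass_video_kbps(target_size_gib, duration_s, audio_kbps=160):
    """Video bitrate (Kbps, 1 Kbps = 1000 bit/s) that fills a target
    size, ignoring container overhead (an assumption -- subtract a
    couple of percent in practice)."""
    total_bits = target_size_gib * 8 * 1024**3  # GiB -> bits
    return total_bits / duration_s / 1000 - audio_kbps

# A 2-hour movie onto a 4.37 GiB single-layer DVD leaves roughly
# 5000 Kbps for video next to a 160 Kbps audio track
print(two_pass_video_kbps(4.37, 2 * 3600))
```

This is exactly the number you feed to the encoder's 2-pass mode; CRF gives you no direct way to hit it.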


btw. I use CRF20 with x264 and x265. If I want better quality I use CRF18.

Re: The Bitrate Bible

Reply #17
No, 2pass & crf give the same quality at the same bitrate. I don't dislike crf, I just like 2pass better. 2pass is the quality (of the CRF mode) + control over size (of the bitrate mode), albeit at the expense of encoding time.

You're mixing up CBR with 2pass; these are not the same.

By the way I also disagree with this:
"If the implication here is that using a higher screen resolution than the source material and upsampling makes spotting compression artifacts easier, then I disagree. "

... I mean, provided the upscaling is a classic one that doesn't try to "improve" things (no post-processing). Switching to a 48" 4K screen (and watching a 2K source on it) is like staring at codec shortcomings with a magnifying glass, compared to a 24" 2K screen.

Re: The Bitrate Bible

Reply #18
CRF and 2pass will give similar quality IF the bitrate is the same.
But with 2pass you cannot know what bitrate is required in advance, as each movie requires a different bitrate. You're just guessing with your Bitrate Bible.

With CRF you don't have to guess the bitrate. And you save encoding time because it's only one pass.


CBR/AVG/2pass are all the same: you control (restrict) the size in advance, while each movie requires a different size because each movie is different. So CBR/AVG/2pass results in quality differences between movies.

Although the quality does not change for different scenes WITHIN ONE MOVIE with 2pass, quality will be different when comparing different movies.

Re: The Bitrate Bible

Reply #19
No.

"But with 2pass you cannot know what bitrate is required in advance, as each movies requires a different bitrate."

No, because no video "requires" a bitrate. For instance, if you use a crf that outputs 6000Kbps on average, there is no input video with a complexity that "requires" 6000Kbps, because there is no video codec that is transparent at 6000Kbps (for 2K). It only appears transparent because it's moving too fast for you to spot the artefacts. Take screenshots and compare; only then will you realize what you lost. So when you say it "requires" a bitrate, it requires that bitrate to achieve what? At best, the best answer you could give me is: to achieve "constant quality" according to the crf level I have chosen.

Even with this answer, this is unsatisfying ... by selecting a crf you are deciding how much information you are throwing out of the window, just like when you select a bitrate with 2pass ... this is not much different ... the fact that rate control will give more bitrate to hard-to-encode extracts doesn't change this, as 2pass does exactly that too ...

Again, you are mixing up 1pass CBR without rate control & 2pass CBR with rate control ... you're mixing apples with oranges ... the problem is not CBR against VBR ... the problem is "with rate control" or "without rate control" ... and both 2pass & crf do rate control ...

... so when you are trying to teach me that crf is better than 2pass, you're telling me that an orange is better than an orange. This is absurd.

Do the test: watch the videos you encode, take screenshots (many), compare 2pass with crf ... then you will realize that what you are saying is nonsensical. Two videos encoded with 2pass at the same bitrate DO have very consistent quality, & the bitrate adjustment of rate control is only necessary in hard-to-encode extracts (like when there are plenty of little moving objects, snow for instance) & two pass does exactly that: it gives more bitrate to these hard-to-encode extracts, just as crf does.

crf is not magic; the idea that crf provides constant quality is based on metrics like SSIM, & this is not completely reliable. It's just better than nothing ... it smooths the rough edges, but if your crf is too weak, it's just like choosing a low bitrate ... the result will be crap.

So you claim that I am guessing with my table ... yes, using 12Mbps (for 2K), I am betting on a horse that I know will win 99% of the time.

crf will never make you save bitrate, because the transparency point of video is much higher than the usual crf settings people use. With video encoding you're always in the lossy zone, so you cannot really say it will make you save bitrate; save bitrate compared to what? At best, it will make you save bitrate to achieve the subjective quality level you chose when you chose your crf level. Does it really matter if it's a little more lossy here & a little less lossy there? The truth is, it only really matters for complex passages. In the end, it's rate control that you should praise. A clever rate control mode will avoid temporary bitrate starvation in complex scenes, & two pass does just that.

Saying that crf will make you save bitrate is an abuse of language IMHO (or you should add "compared to 1 pass"). It's not crf itself that will make you save bitrate, it's selecting the right crf ... just like selecting the right bitrate in 2pass mode does ...

Again, I am not telling you that crf is bad; crf is good if you want to trade file-size predictability for speed, but this has nothing to do with quality.

Re: The Bitrate Bible

Reply #20
The problem is that your bible only takes resolution into account. The point I'm trying to make is that no two movies are the same; the content of a movie makes a huge difference.

Let's take two 1920x1080 movies as an example:
1) A clean computer animated movie
2) A noisy old action movie

According to your bible both movies must be encoded at 7Mbps.

In reality the animated movie will look transparent at a much lower bitrate.
And the action movie will look worse than the original source, because all the action and noise requires a higher bitrate to be transparent.


With crf you don't even have to think about bitrate. You get the same quality always.

Re: The Bitrate Bible

Reply #21
Yes, but comparing anime & movies is like comparing a 256-color GIF with a 16.7-million-color JPEG ... at some point the comparison is unfair.

The overall difference in complexity between two regular classic movies is far less than that. Redistributing bitrate through rate control is (very) interesting for short complex passages but, provided the extract is long enough, the overall bitrate will be the key parameter of the overall quality of the encode. You can fool yourself as much as you want ... in the end, behind crf, there is bitrate.

You're over-estimating the real quality variation; outside of specific cases, there is no sudden drop in quality that happens for no reason ...

Nothing prevents you from encoding anime at 6Mbps while using 8Mbps for movies (for 2K). You're the final judge on how to make clever use of the table. The purpose of this table is to show that some bitrate steps are more logical to use than others; but among the rational steps (the green ones), use them as you wish.

Personally, I dislike the way anime is usually encoded ... I think it deserves better than 720p@2Mbps, even if it's a kind of content that is easy to encode ... if you follow this logic then 4K anime content is useless because there are no details & poor anime encodes upscale easily ... I couldn't disagree more ...

Re: The Bitrate Bible

Reply #22
Do note that in some software, CRF may not mean rate control but instead refer to quantization or quality control.

Targeting say 95% quality would in theory mean that at any given moment you are only throwing away 5% detail.

It also doesn't help that LAME has V0 and V1 where a lower number means higher quality, or JPEG where, on an app-per-app basis, a percentage is given that may mean 85% compression (15% quality) or 85% quality (15% compression).

In ffmpeg, CRF is supposed to be a constant quality mode. For making masters (for archival or uploading to Youtube) CRF is what you want. For live streaming you want CBR (though in the case of Youtube they can actually switch you from one CBR to another, as the video is delivered in chunks based on your current bandwidth).

If you plan to have the video re-encoded then you want to aim for quality rather than size, always.

Are you claiming that VBR gives better quality than CRF?
This contradicts your own statements earlier, where you claim 2pass and crf at the same bitrate have the same quality. Yet you say that 2pass is better quality than crf.
Or are you saying that 2pass is better than crf at lower bitrates? Then wouldn't crf at a lower bitrate still have the same quality?

You are making claims that would make sense with CBR vs VBR vs ABR (with, let's say, MP3s).
But CRF is neither VBR nor CBR. And I'm not sure CRF can even be compared to ABR.
http://slhck.info/video/2017/02/24/crf-guide.html
"Constant Rate Factor is a little more sophisticated than that. It will compress different frames by different amounts, thus varying the QP as necessary to maintain a certain level of perceived quality."
So we have the visual variant of psychoacoustics involved now (psychovisuals?).

You also claim that no video codec is transparent if you freeze-frame; they aren't, nor should they be. Video is animated in motion, and only needs to be transparent when frames are played in rapid succession. Also note that a lot of software does not handle reversing frames; heck, a few struggle with forwarding frames, as they do not cache past or future frames, and frame-accurate skipping may not be directly supported (or may not be precise).

There is another issue with your table: it may actually be correct, but ONLY for your sample video. Unless your 30-sec sample is the average representative of all the video you process, the table will end up wrong.
That table will only remain consistent for your test sample. This is why codec developers and listening (and viewing) tests use as many varying inputs as possible, and in the case of psychoacoustics/visuals, as many test viewers as possible, to make sure their numbers are correct.

What you can do is use a few shorter test samples (a clip with calm camera pans, a clip with lots of shaky cam, a clip with lots of grain, a clip with sharp angles, clips with no lens flares vs ones with lots, old Disney animations vs new ones, lossless original vs lossy source, etc.) to dial in an acceptable average. And THEN you can do tests with actual full video content to see whether your assumptions are correct.

Then you have the issue of re-compression; crf vs vbr may re-compress differently, and either may look better or worse than the other, possibly even on a per-film basis. For Youtube, I myself use similar CRF settings for certain videos and check that the bitrate is at least 50Mbps and not too much above 100Mbps when I upload (it varies based on the frame rate and resolution I upload at). I basically do a "manual" two pass.
If I'm happy with the resulting size of the CRF setting I use that; if the bitrate is too low I reduce the CRF value; if the size is too large I increase the CRF value.

I never noted down my exact settings, but if I recall, I've mostly used CRF settings that resulted in approximately 1GB per minute. This means 1 hour of video at 3840x2160@60fps would take up around 60GB (~133Mbps), so half the fps would be 30GB, 1080p at 30fps would be 15GB, etc.
I care less about disk size for shorter clips than longer clips, obviously. And it also varies a little with what the clip is about; if it has a lot of detail, the camera is calm and viewers need to read stuff, then a higher bitrate (and thus lower CRF) is needed.
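The Mbps figure is straightforward unit arithmetic; a quick check (decimal GB assumed, which is what the encoder reports):

```python
def gb_per_min_to_mbps(gb_per_min):
    # 1 GB = 10**9 bytes, 1 Mbps = 10**6 bit/s (decimal units assumed)
    return gb_per_min * 10**9 * 8 / 60 / 10**6

# 1 GB per minute is about 133 Mbps; with binary GiB it would be ~143
print(gb_per_min_to_mbps(1))
```

Either way it lands in the "at least 50, not much above 100" window only for the lower-fps, lower-resolution clips, which matches scaling the CRF per clip as described.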

Also remember that, in the case of uploading to Youtube, while their suggested bitrates are "OK", often what gets uploaded to Youtube has been compressed with a lossy codec at least twice before Youtube compresses it again (the camera or capture software compresses; then an interim format may be used for editing, so another compression; then after editing it's exported, so yet another compression).

Soon Youtube will start encoding to AV1, which has recently had its bitstream frozen; this means Youtube can start to slowly re-encode the original uploads to AV1, and good quality uploads will benefit from this vs low quality ones: the cleaner the original, the "easier" the work of the encoder.

Note! I know I'm probably way overkill on my bitrates, but I have an upload bandwidth and clip sizes that make that not totally insane currently, and the better the quality of the original, the better the quality of the video the viewer sees.