This is a board for topics that don't fit on other boards, but that are still otaku/hobby related.

File 14851056887.png - (512.28KB , 908x738 , 44100.png )
29901 No. 29901 [Edit]
Apparently, you're supposed to listen to music at 44.1kHz.
Computers default to 48kHz (the standard for film and such), and without some research there's no obvious indication that this isn't how it's supposed to be.
With how much people listen to music, I feel I should have come across this sooner; it should be more common knowledge. It's not hard at all to change, either.
>> No. 29902 [Edit]
Why's that?
>> No. 29903 [Edit]
>>29902
my guess is jewish conspiracy
>> No. 29905 [Edit]
>>29902
A higher sample rate doesn't mean better quality when it comes to audio processing; it's more about matching the source's sample rate so it can be played back without resampling.
I guess it's like taking an image that fills half your screen in its original form and stretching it to fill your entire screen.
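To make the stretching analogy concrete, here is a minimal sketch of what naive resampling does: each output sample is linearly interpolated between the two nearest input samples. This is an illustration only; a real resampler would apply a proper low-pass (sinc/FIR) filter rather than plain linear interpolation.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample `samples` from src_rate to dst_rate by linear interpolation."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # Position of this output sample on the input time axis.
        pos = i * src_rate / dst_rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Four samples "stretched" onto a 48 kHz grid: the output values fall
# between the original ones, like stretched pixels.
print(resample_linear([0.0, 1.0, 0.0, -1.0], 44100, 48000))
```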
>> No. 29906 [Edit]
I guess I'm lucky then, but mine was set to 44.1kHz by default.
>> No. 29932 [Edit]
>>29903
Most likely. I'm usually really sensitive to minute changes in tone, pitch, etc., but I didn't notice any difference at all when changing it.

It's still interesting information, though.
>> No. 29945 [Edit]
>>29932
I noticed it heaps. In fact, at first I wasn't sure whether what I was hearing was good or not, because it was so different from what I was used to.
It's less dense in 44.1, in that the sound is more spread out, so you can hear more of each instrument.
>> No. 29946 [Edit]
>>29945
And I don't mean spread out as in stretched thin, but spread out as in laid perfectly flat, where before it was squished.

I think it may be more pronounced at higher volumes. I tested it first on speakers which I turn to a moderate-loud volume.
>> No. 29954 [Edit]
>>29945
I think I noticed a slight increase in clarity for songs with a very high pitch, but I wasn't 100% certain if it was legitimate or only seemed that way because I was listening for it.

Then again, it could boil down to a difference in our sound systems. I'm running some fairly old speakers.
>> No. 29998 [Edit]
>>29954
My computer's sound system is basic stuff as well.
I also don't hear much of a difference on my computer; actually, the first time I tried it and heard the amazing difference was when I changed it on my PS3.
My computer is stock from an office, so the sound card is pretty trash, I learnt recently, but the PS3 actually has a very good sound chip, so that's why I noticed such an improvement on it.
>> No. 40586 [Edit]
Audio/video synchronization is a weird rabbit hole to get into. For video, most displays have a refresh rate of 60 Hz. If you have a 30 fps video source this works nicely, but most anime is 24 fps, so in 3:2 pulldown some frames end up being repeated more than others. More advanced techniques here might involve frame blending or interpolation plus resampling.
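A quick sketch of the uneven repetition: mapping each 60 Hz refresh tick back to a 24 fps source frame shows the alternating 3,2,3,2 hold pattern of 3:2 pulldown.

```python
def pulldown_schedule(n_source_frames, fps=24, refresh_hz=60):
    """Map each display refresh tick to the source frame index shown on it."""
    ticks_per_frame = refresh_hz / fps  # 2.5 ticks per frame for 24 fps on 60 Hz
    n_ticks = int(n_source_frames * ticks_per_frame)
    return [int(t * fps / refresh_hz) for t in range(n_ticks)]

# Four source frames over ten refreshes: held for 3, 2, 3, 2 ticks.
print(pulldown_schedule(4))  # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
```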

Then you get to audio. Most soundcards will be configured to accept a given range of sample rates and provide an audio clock to synchronize with (usually defaulting to 48 kHz). If your audio driver is smart it can switch the soundcard to the appropriate mode; otherwise it has to resample the input to the output (interpolating if necessary). I don't think either macOS or Windows dynamically sets the soundcard's sample rate [1]. The 44.1 kHz -> 48 kHz upconversion might technically introduce some artifacts depending on the specific filter used (because in the real world you are limited to finite impulse response filters), but I really doubt you'd be able to perceive any difference.
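One concrete detail behind that upconversion: 44.1 kHz -> 48 kHz is a rational rate change, and reducing the ratio gives the up/down factors a polyphase resampler would use.

```python
from math import gcd

# The 44.1 kHz -> 48 kHz conversion reduced to its rational resampling ratio:
# upsample by `up`, low-pass filter, then decimate by `down`.
src, dst = 44100, 48000
g = gcd(src, dst)          # 300
up, down = dst // g, src // g
print(up, down)            # 160 147
```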

Finally you have audio/video sync. I don't really understand under what conditions this happens. In an ideal world, if both audio and video clocks were stable and we were able to keep up with them (each stream plays successfully on its own without drops), then I don't think there would be any sync issues, since you could just let them play independently. But in the real world, if we can't keep up with the clocks we are given (maybe we're doing some post-processing on the video and can't make it in time for the next tick), or a frame can't be decoded at all, then desync might occur if they played independently. So we have to tie them together, and the easiest naive solution is to just skip video frames as needed to match the audio (this works nicely if the audio clock is 48 kHz and the video is 24 fps).

[1] https://lists.apple.com/archives/coreaudio-api/2008/Jan/msg00280.html
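The naive "skip frames to match audio" strategy above can be sketched in a few lines; the audio clock is the master, and on every refresh we show whichever frame the audio's current playback position calls for. The function name here is made up for illustration.

```python
def frame_for_audio_pos(samples_played, sample_rate=48000, fps=24):
    """Video frame index that should be on screen at this audio position."""
    audio_time = samples_played / sample_rate  # seconds of audio played so far
    return int(audio_time * fps)

# After one second of audio (48000 samples) we should be on frame 24.
print(frame_for_audio_pos(48000))   # 24
# If video decoding stalled, the next call simply jumps ahead,
# silently skipping the frames in between.
print(frame_for_audio_pos(72000))   # 36
```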
>> No. 40588 [Edit]
>>40586
And then there are other things, like the fact that just because you deliver samples to the soundcard at time X doesn't mean they get delivered to the user (i.e. played) at time X. There's usually some latency there, which will be higher for wireless headphones. So a good audio driver must also estimate the latency to the user, and then present this to the software so it can delay the video by the same amount. Similarly, there might also be a lag between data delivered to the GPU and the actual display on the screen, which should also be factored in if you want the utmost accuracy.

So even if you don't have any fancy interpolation, getting A/V sync right is non-trivial. The simplest case I can reason about is when you use the audio clock to drive things. If we assumed we had perfect speakers with zero delay between soundcard and speaker output (speaker_latency), and zero delay between sending audio samples and sending video frames (code_latency), then all we have to do is send a new frame every 1/fps seconds. But to account for real-world delay, we actually end up waiting "1/fps + speaker_latency - code_latency", since a large speaker latency means we need to delay the video by the same amount, and conversely a large delay between sending audio and sending video means we need to hurry up and send video sooner.

Post edited on 29th Sep 2022, 9:56pm
>> No. 41218 [Edit]
Does flac or ogg sound better than mp3?
>> No. 41220 [Edit]
>>41218
>flac
This is lossless compression, so theoretically it is as close to the source as you can get.

>ogg
Offers perceptual transparency at lower bitrates.

In practice you won't be able to tell a difference by A/B-comparing a LAME 320kbps or V0 mp3 against a FLAC. (In fact, V0 is slightly better than 320kbps, as it allows better use of the bit reservoir.) And most of the time, unless you play on some expensive setup, you probably won't even be able to tell a difference between a 192kbps mp3 and a FLAC. Most of the difference will be in the high end of frequencies.
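The honest way to settle "can I hear it?" is a blind ABX test rather than a sighted A/B comparison. Here is a minimal trial scheduler for that idea; the function names are made up, and the "guesser" stands in for a human listener identifying the hidden clip X as A or B. Scoring well above 50% over many trials suggests a real audible difference.

```python
import random

def run_abx(n_trials, guesser, rng=None):
    """Return how many of n_trials the guesser identifies X correctly."""
    rng = rng or random.Random(0)       # fixed seed for a repeatable demo
    correct = 0
    for _ in range(n_trials):
        x = rng.choice(["A", "B"])      # hidden assignment of X
        if guesser() == x:              # the guesser never sees x
            correct += 1
    return correct

# A listener with no real preference lands near chance (50%):
print(run_abx(1000, lambda: "A"))
```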