Last post by chrisdukes -
fbuser, you literally replied while I was writing this new post; awkward timing, but deeply appreciated nonetheless. I've made a bunch of changes since my last post, but I'm going to read your reply and see what further changes I should make.
SELECT
  CASE
    WHEN genre LIKE '%Score%' THEN 'Score'
    WHEN genre LIKE '%Stage & Screen%' THEN 'Stage & Screen'
  END AS Category1,
  CASE
    WHEN genre LIKE '%Score%'
         AND genre_mv IN ('Anime', 'Movie', 'Musical', 'Television', 'Video Game')
      THEN genre_mv
    WHEN genre LIKE '%Stage & Screen%'
         AND genre_mv NOT IN ('Soundtrack', 'Score', 'Stage & Screen')
      THEN genre_mv
  END AS Category2,
  album
FROM tabletest
GROUP BY Category1, Category2;
I still have a lot to learn about SQL, so thanks for any help!
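For anyone who wants to experiment with this query, here is a minimal, self-contained sketch that runs it against an in-memory SQLite database. The table name and columns come from the post; the sample rows are made up purely for illustration. (Note that SQLite happily accepts the bare `album` column alongside `GROUP BY`, picking an arbitrary row per group; stricter engines may reject it.)

```python
import sqlite3

# Hypothetical in-memory table mimicking the post's tabletest schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tabletest (genre TEXT, genre_mv TEXT, album TEXT)")
conn.executemany(
    "INSERT INTO tabletest VALUES (?, ?, ?)",
    [
        ("Score", "Movie", "Album A"),            # -> ('Score', 'Movie')
        ("Stage & Screen", "Musical", "Album B"),  # -> ('Stage & Screen', 'Musical')
        ("Stage & Screen", "Score", "Album C"),    # -> ('Stage & Screen', NULL)
    ],
)

rows = conn.execute("""
    SELECT
      CASE
        WHEN genre LIKE '%Score%' THEN 'Score'
        WHEN genre LIKE '%Stage & Screen%' THEN 'Stage & Screen'
      END AS Category1,
      CASE
        WHEN genre LIKE '%Score%'
             AND genre_mv IN ('Anime', 'Movie', 'Musical', 'Television', 'Video Game')
          THEN genre_mv
        WHEN genre LIKE '%Stage & Screen%'
             AND genre_mv NOT IN ('Soundtrack', 'Score', 'Stage & Screen')
          THEN genre_mv
      END AS Category2,
      album
    FROM tabletest
    GROUP BY Category1, Category2
""").fetchall()

for row in rows:
    print(row)
```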
Last post by Kraeved -
The difference in size between FSLAC -2 and -3 never ceases to amaze me, but my hand still reaches for -3, because its spectrogram is almost indistinguishable from the original's, which makes it well suited to further lossy compression for a portable player. Of course, a spectrogram isn't everything; you need to check with your ears. Still, it would be unpleasant to upgrade the hardware tomorrow and discover that the -2 collection has flaws. Curiously, achieving the same visual indistinguishability with WavPack in hybrid mode is not easy: it stubbornly retains some content (quantization noise, I'd guess) in the upper band.
Is there anyone present here who would take a stab at "debunking" this supposed critical weakness of cones, and thus advantage of ESLs?
Sure. I'll start with a simple omission in one of the statements:
"Interestingly and importantly, the way an ESL converts an audio signal to sound is the exact inverse of how a recording microphone converts sound into an audio signal. In a microphone, pressure creates voltage, and in an ESL, voltage creates pressure. This contributes to the exceptional accuracy of ESLs. It is not the case for cone speakers, where electrical current supplies non-linear force to a multiple spring-mass-damper system."
This ignores that electrostatic speaker membranes are usually rather large. Microphone membranes are often less than a centimeter (under half an inch) in diameter, and with larger membranes one often sees a roll-off in the high frequencies. The size of the membrane matters, so one cannot say an ESL is a direct inversion of a microphone.
More generally: sure, electrodynamic speakers are not perfect. Tuning a mass-spring-damper system is a compromise, that is correct. But in return, electrostatic speakers are large and thus deviate from the ideal point source. The lack of a crossover actually works against electrostatics in this respect: the higher the frequency, the more the size of the driver matters. So a two-way speaker system can use a small element to come as close as possible to a point source for those high frequencies that need it the most, while using a larger driver for low frequencies, whose wavelengths are long enough that size matters much less.
As with most technologies, there is compromise. Just because one technology outperforms the other in one aspect doesn't mean it is better, as it might be flawed in another.
Finally: almost all nearfield studio monitors, which are built for maximum accuracy in sound production, are electrodynamic, not electrostatic. That is not a coincidence. You could also say: the recording you're listening to has probably been approved by the musician while listening to an electrodynamic speaker. There is no reason to assume electrostatic loudspeakers will be closer to the artist's intent if that artist worked on the recording using electrodynamic speakers.
Last post by magicgoose -
What counts as loss of information? How much of it is too much? If there is an imperfection that can be near perfectly negated by some equalization, is that also loss of information? I would trust measurements more than intuitive reasoning. Are there any such measurements to back up this argument?
Last post by Hakan Abbas - @Kraeved Codecs developed by certain teams (Google, Cloudinary, Apple, AOM), such as WebP, AVIF, HEIF and JXL, select code paths according to the processor architecture (SSE, AVX, AVX2). Of course, this option could have been added, but it was lower in my order of priorities, because it was already hard enough for me to keep up with such powerful image codecs on speed/ratio/memory/multithreading.
HALIC can run a little faster when compiled in AVX2 mode. However, despite requests, I did not do this, in order to support slightly older architectures; I thought AVX (2011) would be enough. Until now, no one apart from you had requested this. In any case, it only takes me a few minutes to prepare a version for older processors. You can access the SSE2 version I compiled for HALIC from the link below. https://github.com/Hakan-Abbas/HALIC-High-Availability-Lossless-Image-Compression-/releases
HALIC is by far the best lossless image codec according to the "F_Score (universal score)": F_Score = C + 2·D + (S + F)/10⁶. "Here, C and D are respectively the total compression and decompression execution time (in seconds), S is the total compressed size in bytes, and F is the submission packet size."
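To make the scoring concrete, here is a minimal sketch of the F_Score formula as stated above. The input numbers are made up for illustration and are not actual HALIC measurements; lower scores are better, decompression time is weighted double, and the byte sizes effectively contribute in megabytes.

```python
def f_score(c_seconds: float, d_seconds: float, s_bytes: int, f_bytes: int) -> float:
    """F_Score = C + 2*D + (S + F) / 10**6, per the definition quoted in the post.

    C, D: total compression/decompression time in seconds.
    S: total compressed size in bytes; F: submission packet size in bytes.
    """
    return c_seconds + 2 * d_seconds + (s_bytes + f_bytes) / 10**6

# Illustrative (made-up) numbers only:
score = f_score(c_seconds=10.0, d_seconds=8.0, s_bytes=3_000_000, f_bytes=500_000)
print(score)  # -> 29.5 (10 + 2*8 + 3.5)
```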