
Recent Posts

1
General Audio / Re: Merge two audio files (ADX extension)
Last post by kode54 -
ADX is a lossy ADPCM format, and decodes to 16 bits per sample. No need to use dithering when mixing them, but I would advise using whatever lossless format you want as an intermediate format, and possibly using something like VGAudio to encode the result back to ADX, or to HCA.

If you turn off looping in VGMStream before converting them to WAV, mix them, then encode the resulting mix to ADX or HCA with the correct loop offsets, you can have a looping track with minimal fuss. Or you could look up on hcs64.com how to tag an Ogg Vorbis track with loop info, and rename it to .logg for VGMStream.
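(As a concrete picture of what "loop offsets" mean: the file just stores a loop start and a loop end position in samples, and the player jumps back to the start point when playback reaches the end point. A minimal, generic sketch of that idea, with made-up names and no tie to any particular format or tool:)
Code: [Select]
#include <cstddef>
#include <vector>

// Hypothetical illustration only: render a looped mono track by wrapping the
// read position back to loopStart whenever it reaches loopEnd.
// Assumes loopStart < loopEnd <= track.size().
std::vector<short> render_looped(const std::vector<short> & track,
                                 std::size_t loopStart, std::size_t loopEnd,
                                 std::size_t samplesToRender)
{
    std::vector<short> out;
    out.reserve(samplesToRender);
    std::size_t pos = 0;
    for (std::size_t i = 0; i < samplesToRender; ++i) {
        out.push_back(track[pos]);
        if (++pos == loopEnd) pos = loopStart; // this jump is what the loop offsets encode
    }
    return out;
}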
2
General Audio / Re: Merge two audio files (ADX extension)
Last post by DVDdoug -
What format do you usually listen to?

FLAC is often the best format.  It's lossless, compresses to roughly half the size of WAV, and tagging (metadata) is better supported than in WAV.  The only downside is that not every computer will play it without installing a codec.  You may want to save a FLAC archive even if you want to listen to an MP3 version.

MP3 (or AAC) can often be transparent (sound identical to the original) so there's nothing wrong with choosing it.

Quote
Should I use Dither as well?
Dither is for reducing the bit depth.  But dither (or the effects of dither) is not audible at 16 bits or more under any reasonable conditions, so in reality it doesn't matter one way or the other.
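(For the curious, here is a minimal sketch of what dithering does when reducing to 16 bits: a little triangular (TPDF) noise is added before rounding. The function name and structure are made up for illustration.)
Code: [Select]
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

// Hypothetical sketch: quantize a floating-point sample (-1.0 .. +1.0) to 16-bit
// with +/- 1 LSB triangular (TPDF) dither added before rounding.
int16_t quantize16(double sample, std::mt19937 & rng)
{
    std::uniform_real_distribution<double> half_lsb(-0.5, 0.5);
    double dither  = half_lsb(rng) + half_lsb(rng);  // sum of two uniforms -> triangular PDF
    double scaled  = sample * 32767.0 + dither;      // scale to the 16-bit range, add dither
    double clamped = std::clamp(scaled, -32768.0, 32767.0);
    return static_cast<int16_t>(std::lround(clamped));
}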

What you should look out for is clipping.  Mixing is done by addition (summation), so you can get clipping.  If you get clipping, or if you're not sure, run Amplify or Normalize after mixing and before exporting.  Audacity uses floating-point internally/temporarily so it can go over 0 dB, but normal WAV or FLAC files are hard-limited to 0 dB and you can get clipping when you export.  If you have peaks over 0 dB, Amplify will default to whatever negative gain (attenuation) is required to bring the peaks down to 0 dB.
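(To make the "mixing is addition" point concrete, here is a minimal sketch, with made-up names, of summing two tracks in floating point, checking the peak, and attenuating if the result would clip on export:)
Code: [Select]
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: mix two equal-length float tracks by summation,
// then scale the mix down if its peak would exceed 0 dBFS (1.0).
std::vector<float> mix_and_normalize(const std::vector<float> & a,
                                     const std::vector<float> & b)
{
    std::vector<float> mix(a.size());
    float peak = 0.0f;
    for (std::size_t i = 0; i < mix.size(); ++i) {
        mix[i] = a[i] + b[i];                       // mixing really is just addition
        peak   = std::max(peak, std::fabs(mix[i])); // track the highest absolute peak
    }
    if (peak > 1.0f) {                              // would clip in a fixed-point file
        const float gain = 1.0f / peak;             // same idea as Amplify/Normalize
        for (float & s : mix) s *= gain;
    }
    return mix;
}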

 
3
General - (fb2k) / Re: New HDD and Mass lossless conversion
Last post by wcs13 -
OK, here's the Bit Compare result: "4 out of 34290 track pairs could not be compared."

How can I find those 4 track pairs in that huge text log?... What kind of text string am I supposed to search for?
And what is wrong with them? Why couldn't they be compared? @Peter, are you listening?
foo_bitcompare's log needs some work in order to be really useful, IMHO.

Please help. Thank you.
4
General Audio / Merge two audio files (ADX extension)
Last post by notbugme -
Hello everyone,

I have two ADX files from a game, vocals and instruments, of the same song. They are 48 kHz / 16-bit.
After importing both into Audacity, I only cut the last part of the vocals (which is just silence), and then export them as a single file.

What's the best way to do it?
Is FLAC the recommended way to save them to preserve quality, even with a doubled file size, or should I just use MP3 or another lossy format? Should I use Dither as well?

Thanks for reading.
5
Here
Code: [Select]
void instantiate(service_ptr_t<output> & p_out, const GUID & p_device, double p_buffer_length, bool p_dither, t_uint32 p_bitdepth) {
    p_out = new service_impl_t<output_asio_instance>(p_device, p_buffer_length, p_dither, p_bitdepth);
};
you need to return an instance of a class derived from output (or output_v2), not from output_entry_asio.
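(A simplified, self-contained illustration of why the compiler complains, using mock stand-in types rather than the real SDK classes: a pointer to something that only derives from the entry/factory side cannot be stored in a pointer to the output side.)
Code: [Select]
#include <memory>

// Mock stand-ins, only to show the inheritance relationship that instantiate() needs.
struct output       { virtual ~output() = default; };       // the playback-service side
struct output_entry { virtual ~output_entry() = default; };  // the entry/factory side

struct instance_wrong : output_entry {}; // derives from the entry side only
struct instance_right : output {};       // derives from output, as the out-pointer requires

int main()
{
    std::shared_ptr<output> p_out;
    // p_out = std::make_shared<instance_wrong>(); // error: unrelated types (same idea as C2440)
    p_out = std::make_shared<instance_right>();    // fine: instance_right is-an output
}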
6
How about splitting these files using the Converter (at least the lossless ones, so conversion will not change audio quality)?

Friend, you did not understand my problem. I need to embed the cue sheets into my single-file albums in batch, or at least read the CUE file and update tags like GENRE and DATE in the single lossless file; it is not feasible to do it by hand, as I demonstrated with foobar2000. And I need to keep these albums as single files with a CUE, because many of them are live shows or continuous works (Pink Floyd, The Alan Parsons Project, and so on), and the vast majority of media servers/players insert the infamous gap between separate tracks, which makes it unfeasible to listen to those albums.

Regards,

Druid®.
7
I hope somebody can help me with the following issue:

I have installed foobar v1.3.16 and added the foo_spdifer_1.0 component in order to play DTS 5.1 audio via the S/PDIF (optical) output.
I also added the WASAPI output component, but only stereo audio appeared on my external Samsung player.
Can somebody tell me what I should do in order to set up the correct 5.1 output?

(VLC player can send the 5.1 signal via optical output, but I would like to use foobar instead)
8
I personally have converted all my cuesheet combined files to separate files using foobar2000. Works flawlessly.

Friend, your response is totally out of context for what I need, but thank you for responding. No, I do not need what you did: there are many media servers/players on the market that have the infamous gap between separate tracks, which is not acceptable for concert albums and albums with continuous tracks (Pink Floyd, The Alan Parsons Project, etc.), so I preserve many albums as single files with their corresponding CUE.

Regards,

Druid®.
9
Hi,

Thank you for your answer; that part was already done in the meantime.
I think I'm not far from the goal.

However, when I try to register the class, I get the errors below; my current code follows.
The compiler suggests a reinterpret_cast, but I'm surprised that would be needed just to register the output.
Any idea?

Regards.

Code: [Select]
c:\sdk\fb2k\pfc\primitives.h(186): error C2440: 'return': cannot convert from 'service_impl_t<output_asio_instance> *' to 'output::t_interface *'
1>  c:\sdk\fb2k\pfc\primitives.h(186): note: Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
1>  c:\sdk\fb2k\sdk\service.h(82): note: see reference to function template instantiation 't_ret *pfc::safe_ptr_cast<T,t_source>(t_param *)' being compiled
1>          with
1>          [
1>              t_ret=output::t_interface,
1>              T=output::t_interface,
1>              t_source=service_impl_t<output_asio_instance>,
1>              t_param=service_impl_t<output_asio_instance>
1>          ]

Code: [Select]
// ********************************** /
// class output_asio_instance
//

class output_asio_instance : public output_entry_asio
{
public:


output_asio_instance(const GUID & p_device, double p_buffer_length, bool p_dither, t_uint32 p_bitdepth)
{
Init();
OpenDriver = false;
}

~output_asio_instance(void)
{
if (OpenDriver) ParamMsg(MSG_CLOSE_DRIVER).Call();
}

static void g_advanced_settings_popup(HWND p_parent, POINT p_menupoint) {};
static int g_advanced_settings_query() { return output_entry::flag_needs_advanced_config; };
static int g_needs_dither_config() { return output_entry::flag_needs_dither_config; };
static int g_needs_bitdepth_config() { return output_entry::flag_needs_bitdepth_config; };
static int g_needs_device_list_prefixes() { return output_entry::flag_needs_device_list_prefixes; };
static const GUID g_get_guid(void)
{
// {3A5EDE8E-840D-497c-9774-156A12FC4275}
static const GUID guid =
{ 0x3a5ede8e, 0x840d, 0x497c,{ 0x97, 0x74, 0x15, 0x6a, 0x12, 0xfc, 0x42, 0x75 } };

return guid;
}

static const char* g_get_name(void)
{
return NAME2;
}

virtual const char* get_config_page_name(void)
{
return NAME2;
}

void instantiate(service_ptr_t<output> & p_out, const GUID & p_device, double p_buffer_length, bool p_dither, t_uint32 p_bitdepth) {

p_out = new service_impl_t<output_asio_instance>(p_device, p_buffer_length, p_dither, p_bitdepth);
};

void enum_devices(output_device_enum_callback &) {};
GUID get_guid() { return g_get_guid(); };
const char *get_name(void) { return g_get_name(); };
void advanced_settings_popup(HWND, POINT) {};
t_uint32 output_entry::get_config_flags(void) {};

void
output_asio_instance::open(const t_samplespec &) {};
void
output_asio_instance::pause(bool) {};
void
output_asio_instance::on_flush(void) {};
void
output_asio_instance::volume_set(double) {};
void
output_asio_instance::on_update(void) {};
void
output_asio_instance::write(const audio_chunk &) {};
t_size
output_asio_instance::can_write_samples(void) {
t_size val;
return val;
};
t_size
output_asio_instance::get_latency_samples(void) {
t_size val;
return val;
};


int
output_asio_instance::open_ex(int srate, int bps, int nch, int format_code)
{
const int RetCode = ParamMsg(MSG_OPEN, srate, format_code, bps, nch).Call();
OpenDriver = RetCode != 0;
return RetCode;
}

int
output_asio_instance::can_write(void)
{
return ParamMsg(MSG_CAN_WRITE).Call();
}

int
output_asio_instance::write(const char* data, int bytes)
{
return ParamMsg(MSG_WRITE, bytes,
reinterpret_cast<unsigned char*>(const_cast<char*>(data))).Call();
}

int
output_asio_instance::get_latency_bytes(void)
{
return pPcmAsio->MsgGetLatency();
}

void
output_asio_instance::pause(int state)
{
ParamMsg(MSG_PAUSE, state).Call();
}

void
output_asio_instance::force_play(void)
{
ParamMsg(MSG_PLAY).Call();
}

int
output_asio_instance::do_flush(void)
{
return ParamMsg(MSG_FLUSH).Call();
}

int
output_asio_instance::is_playing(void)
{
return !!pPcmAsio->MsgGetLatency();
}

};

output_factory_t<output_asio_instance> foo2;
10
The comment preceding class output_entry_impl_t in SDK/output.h states that "output_entry methods are forwarded to static methods of your output class". And in its code you can find calls like
Code: [Select]
if (T::g_advanced_settings_query()) flags |= output_entry::flag_needs_advanced_config;
'T' there is your output_asio class. In other words, your output_asio implementation should contain static methods like
Code: [Select]
class output_asio {
public:
  static bool g_advanced_settings_query() { ... }
  static bool g_needs_bitdepth_config() { ... }
  // etc.