This chip has a rather peculiar sample format: 13-bit floating point, with 10 bits for the mantissa and 3 bits for the exponent. The chip requires a 3.58 MHz clock input and uses 8 data bits and one address bit. I used a cheap FPGA board to generate the clock for the chip and fed the digital samples back to the FPGA. I converted the floating-point numbers into regular 16-bit integers and used a software delta-sigma modulator running at 50 MHz to convert back to an analog signal.
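As a rough illustration, the float-to-integer conversion might look like the following minimal sketch. The offset-binary mantissa interpretation (512 as the zero level) and the exponent acting as a power-of-two scale are my assumptions for illustration, not taken from the datasheet:

% Hedged sketch: convert one 13-bit sample (10-bit mantissa m, 3-bit
% exponent e) to a 16-bit signed integer. Assumes the mantissa is
% offset binary (512 = zero level) and the exponent is a power-of-two
% scale; the real bit semantics must be checked against the datasheet.
sample16 = int16((double(m) - 512) * 2^double(e));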
Contrary to the datasheet's 16 bits, the chip seemed to output 18 bits during a single SYNC cycle (SYNC is cyan in the image above, CLK is yellow). The first 5 extra bits were irrelevant; according to the datasheet, only the first 3 bits were supposed to be irrelevant. The chip also seemed to work fine at 3.3 V, whereas the datasheet lists the minimum supply voltage as 4.5 V. Operating it at 3.3 V was convenient, as my FPGA supplied this voltage and could also only accept 3.3 V logic.
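For illustration, picking the 13 sample bits out of such an 18-bit capture could look like this. The 5-bit skip matches my observation above, but the bit order and the mantissa/exponent field placement here are assumptions only:

% Hedged sketch: 'frame' is an 18-element 0/1 vector captured during
% one SYNC cycle, assumed MSB first. Skip the 5 irrelevant leading
% bits, then split the remaining 13 bits into the two fields (the
% field order is an assumption).
bits = frame(6:18);                 % drop the 5 irrelevant bits
m = bits(1:10) * 2.^(9:-1:0)';      % 10-bit mantissa (assumed first)
e = bits(11:13) * 2.^(2:-1:0)';     % 3-bit exponent (assumed last)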
The above is the signal measured directly from the 1-bit digital FPGA pin: a first-order delta-sigma modulated digital-to-analog conversion of the signal generated by the YM3812 chip. The sampling frequency is so high (50 MHz) that it looks like a very good analog signal when low-pass filtered.
Taking a running average (a kind of low-pass filter) of the delta-sigma modulated signal yields an analog signal. The higher the "oversampling" rate, the more faithful the filtered signal is to the original (up to some high-frequency cutoff).
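In Matlab, such a running average can be taken with a boxcar FIR filter, for example as in this sketch. It assumes a 1-bit stream in the variable out, as produced by the demo code further below; the window length N is arbitrary here:

% Running average over N samples: a simple low-pass FIR filter that
% recovers an analog-looking waveform from the 1-bit DSM output.
N = 64;
filtered = filter(ones(1, N)/N, 1, out);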
Second-order delta-sigma modulation shapes the noise to improve the in-band signal-to-noise ratio even further, at the expense of the S/N at higher frequencies.
The effect of higher-order DSMs on quantization noise.
Below is demonstration code in Matlab for a 1-bit 1st-order DSM and a 4-bit 2nd-order DSM.
% 1-bit 1st-order delta-sigma modulator: the integrator accumulates
% the error between the input sine and the fed-back 1-bit output.
t = 0:0.001:10;   % time base; the small step gives a high oversampling ratio
out = 0;          % out(1) is the initial feedback value
int = 0;
for x = 1:length(t)
    int = int + sin(pi*t(x)) - out(x);  % integrate input minus feedback
    if int > 0                          % 1-bit quantizer
        out(x+1) = 1;
    else
        out(x+1) = -1;
    end
end
%%
% 4-bit 2nd-order delta-sigma modulator: two cascaded integrators push
% the quantization noise toward high frequencies. The input is scaled
% by 6 so the multi-level (~4-bit) quantizer has several levels to use.
out2 = 0;         % out2(1) is the initial feedback value
int1 = 0;
int2 = 0;
for x = 1:length(t)
    int1 = int1 + 6*sin(pi*t(x)) - out2(x);  % first integrator
    int2 = int2 + int1 - out2(x);            % second integrator
    out2(x+1) = round(int2);                 % multi-level integer quantizer
end
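One way to see the noise shaping described above is to compare the spectra of the two outputs. A sketch, assuming the Signal Processing Toolbox is available for pwelch; the 2nd-order stream is rescaled by its input gain of 6 so the two traces are comparable:

% Compare quantization-noise spectra: the 2nd-order output should show
% noise rising more steeply toward high frequencies, leaving the low
% band cleaner than the 1st-order output.
[p1, w] = pwelch(out);      % 1-bit 1st-order stream
[p2, ~] = pwelch(out2/6);   % 4-bit 2nd-order stream, rescaled
semilogx(w, 10*log10(p1), w, 10*log10(p2));
legend('1st order', '2nd order');
xlabel('normalized frequency (rad/sample)');
ylabel('PSD (dB)');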