We also need to expose an AudioParam to control the pan value. A ConstantSourceNode looks very promising for that job. Unfortunately it needs to be started somehow and there is no functionality of the StereoPannerNode which we could abuse to start it.
A better approach is to use a WaveShaperNode that is connected to our input. We give it a DC curve (which is actually a line) of [ 1, 1 ]. This guarantees that the output of that WaveShaperNode is always 1 no matter what its input is. We chain that WaveShaperNode with a GainNode and expose its gain AudioParam as the pan AudioParam. We end up with a simulated AudioParam that feeds its current value into the internal graph.
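To see why the DC curve works it might help to simulate the WaveShaperNode's behavior in plain JavaScript. The spec defines the curve to be applied with linear interpolation, which could be sketched like this (applyCurve is a made-up helper for illustration, not part of the Web Audio API):

```javascript
// Plain JavaScript simulation of how a WaveShaperNode applies its curve.
// The input sample is mapped from [ -1, 1 ] onto the curve's indices and
// the two closest values are linearly interpolated.
const applyCurve = (curve, sample) => {
    const position = ((sample + 1) / 2) * (curve.length - 1);
    const lowerIndex = Math.min(Math.floor(position), curve.length - 2);
    const fraction = position - lowerIndex;

    return curve[lowerIndex] * (1 - fraction) + curve[lowerIndex + 1] * fraction;
};

// The DC curve maps every input sample to 1.
const dcCurve = [ 1, 1 ];

console.log(applyCurve(dcCurve, -1)); // → 1
console.log(applyCurve(dcCurve, 0.3)); // → 1
console.log(applyCurve(dcCurve, 1)); // → 1
```

Since both curve values are 1, the interpolation always yields 1 regardless of the input sample.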
Next we need to apply the formula from the panning algorithm. The pan value needs to be transformed a bit to obtain the factors which then get multiplied with the left and right channels respectively.
// left channel
sample * Math.cos(((pan + 1) / 2) * Math.PI / 2)
// right channel
sample * Math.sin(((pan + 1) / 2) * Math.PI / 2)
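Evaluating these formulas for a few pan values shows the expected equal-power behavior (a quick sketch; leftGain and rightGain are just illustrative names):

```javascript
// Gain factors derived from the mono panning formulas above.
const leftGain = (pan) => Math.cos(((pan + 1) / 2) * Math.PI / 2);
const rightGain = (pan) => Math.sin(((pan + 1) / 2) * Math.PI / 2);

console.log(leftGain(-1), rightGain(-1)); // 1 0 (hard left)
console.log(leftGain(0), rightGain(0)); // ≈ 0.7071 0.7071 (center)
console.log(leftGain(1), rightGain(1)); // ≈ 0 1 (hard right)
```

For any pan value the squares of the two factors sum to 1, which keeps the overall power constant while panning.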
The algorithm requires the pan value to be mapped to a value between 0 and 1. This is what the ((pan + 1) / 2) part of the formula stands for. That value then gets fed into the cosine (or sine) function to produce the final factor. This can be achieved by using a WaveShaperNode with a lot of precomputed values. A reduced version of the curve might look like this:
[
1,
1,
1,
1,
Math.cos(0 * Math.PI / 2),
Math.cos(0.25 * Math.PI / 2),
Math.cos(0.5 * Math.PI / 2),
Math.cos(0.75 * Math.PI / 2),
Math.cos(1 * Math.PI / 2)
]
The values of a WaveShaperNode's curve always cover the input range from -1 to 1. As you can see that would waste a lot of values in our curve. All values from -1 up to (but not including) 0 are never actually used because the range of possible input values starts at 0. A nice shortcut is therefore to not map the pan value from [ -1; 1 ] to [ 0; 1 ] in the first place and instead spread the curve across the whole range.
[
Math.cos(0 * Math.PI / 2),
Math.cos(0.125 * Math.PI / 2),
Math.cos(0.25 * Math.PI / 2),
Math.cos(0.375 * Math.PI / 2),
Math.cos(0.5 * Math.PI / 2),
Math.cos(0.625 * Math.PI / 2),
Math.cos(0.75 * Math.PI / 2),
Math.cos(0.875 * Math.PI / 2),
Math.cos(1 * Math.PI / 2)
]
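A real curve would of course contain far more precomputed values. It could be generated like this (the curve size of 8192 is an arbitrary choice for this sketch):

```javascript
const CURVE_SIZE = 8192; // arbitrary; a larger curve yields a more accurate result

// Build the cosine curve for the left channel. The index is mapped directly
// onto [ 0, 1 ] which implements the shortcut described above.
const createCosineCurve = () => {
    const curve = new Float32Array(CURVE_SIZE);

    for (let i = 0; i < CURVE_SIZE; i += 1) {
        curve[i] = Math.cos((i / (CURVE_SIZE - 1)) * Math.PI / 2);
    }

    return curve;
};

const cosineCurve = createCosineCurve();

console.log(cosineCurve[0]); // → 1
```

The sine curve for the right channel would be generated the same way with Math.sin instead of Math.cos.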
This little trick allows us to use the full range of the WaveShaperNode's curve and also saves us from doing some unnecessary value mapping. However we still need two of those WaveShaperNodes as the computation for the right channel uses the sine instead of the cosine function. Those two WaveShaperNodes are abbreviated as WSN in the following diagram.
We also create a GainNode for each channel. The WaveShaperNodes are not connected to the GainNodes directly; instead they control their gain AudioParams. In other words, their values get multiplied with the input signal.
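What a GainNode does with a signal connected to its gain AudioParam can be expressed as simple per-sample arithmetic. Signals connected to an AudioParam are summed with its intrinsic value, so this sketch assumes the intrinsic gain is set to 0 to let the WaveShaperNode's output take full effect:

```javascript
// A GainNode computes output = input * gain for each sample. With the
// intrinsic gain value set to 0, the connected WaveShaperNode's output
// becomes the effective gain factor.
const computeGainNodeOutput = (inputSample, intrinsicGain, connectedSignal) =>
    inputSample * (intrinsicGain + connectedSignal);

// A sample of 0.5 multiplied with a gain factor of 0.25 coming from the
// WaveShaperNode.
console.log(computeGainNodeOutput(0.5, 0, 0.25)); // → 0.125
```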
And with that we have built a fully working clone of the StereoPannerNode. I think it's a nice example which shows what is already possible without using an AudioWorklet. But at the same time it also stresses the need for the AudioWorklet as there is no way to know if an input signal is mono or stereo without it. The only solution I can think of is requiring the channelCountMode to be 'explicit' for now. This ensures that the channelCount is predictable and can't be changed dynamically by the input signal.
The algorithm for a stereo signal is a bit more complicated but can be built with the same technique. For the curious, the implementation of the stereo algorithm can be looked up in the source code.
It's worth noting that the accuracy of this method depends on the size of the curves used for the WaveShaperNodes.
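That dependency can be quantified by comparing curves of different sizes against the exact formula. The helper below applies the curve with linear interpolation the way a WaveShaperNode would (a rough sketch for illustration; the sizes and step width are arbitrary):

```javascript
// Measure the worst-case deviation of an interpolated cosine curve of the
// given size from the exact formula across the whole pan range.
const maxCurveError = (curveSize) => {
    const curve = Array.from(
        { length: curveSize },
        (_, i) => Math.cos((i / (curveSize - 1)) * Math.PI / 2)
    );
    const applyCurve = (sample) => {
        const position = ((sample + 1) / 2) * (curveSize - 1);
        const lowerIndex = Math.min(Math.floor(position), curveSize - 2);
        const fraction = position - lowerIndex;

        return curve[lowerIndex] * (1 - fraction) + curve[lowerIndex + 1] * fraction;
    };

    let maxError = 0;

    for (let pan = -1; pan <= 1; pan += 0.001) {
        const exact = Math.cos(((pan + 1) / 2) * Math.PI / 2);

        maxError = Math.max(maxError, Math.abs(applyCurve(pan) - exact));
    }

    return maxError;
};

console.log(maxCurveError(16) > maxCurveError(8192)); // → true
```

A tiny curve of 16 values already stays within roughly a thousandth of the exact value, but a larger curve shrinks the error by several orders of magnitude.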
An interesting fun fact is that the algorithm for mono signals will actually modify the signal even if the pan value is zero, but that is absolutely intentional.