Description
Hello,
I'm trying out the "Sound Field Synthesis - Circular loudspeaker arrays - Point source" examples (for WFS and NFC-HOA), and I have found that the amplitude of the synthesized sound field is much stronger than the amplitude of the simulated sound field of a point source (sfs.fd.source.point). I know that the simulated sound field of a point source is normalized, i.e. multiplied by 1/(4*pi), but even after undoing that normalization the amplitude mismatch still holds.
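For reference, here is a quick back-of-the-envelope check (my own, not part of the toolbox examples, and assuming a speed of sound of c = 343 m/s): a free-field point source is the 3D Green's function exp(-j*omega/c*r) / (4*pi*r), so with the virtual source at xs = [0, 15, 0] its magnitude at the array center is simply 1/(4*pi*15):

```python
import numpy as np

c = 343.0                      # speed of sound in m/s (assumed)
frequency = 300                # Hz, as in the example below
omega = 2 * np.pi * frequency  # angular frequency
r = 15.0                       # distance from xs = [0, 15, 0] to the origin

# Free-field Green's function of a point source: G = exp(-j*omega/c*r) / (4*pi*r)
p = np.exp(-1j * omega / c * r) / (4 * np.pi * r)
print(abs(p))                  # ≈ 0.0053, i.e. 1 / (4*pi*15)
```

So the expected unnormalized point-source amplitude in the plotted region is on the order of 0.005, which is what I compare the synthesized fields against.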
Why is this happening? Is this to be expected by the nature of the methods (WFS and NFC-HOA)? Or maybe there is a missing normalizing factor in the implementation?
The code that I used is almost the same as in the examples:
```python
import matplotlib.pyplot as plt
import numpy as np
import sfs

number_of_secondary_sources = 40
frequency = 300  # in Hz
radius = 5  # in m
r = 1
xs = [0, 3 * radius, 0]  # position of virtual point source in m
grid = sfs.util.xyz_grid([-r * radius, r * radius],
                         [-r * radius, r * radius], 0, spacing=0.01)
omega = 2 * np.pi * frequency  # angular frequency


def sound_field(d, selection, secondary_source, array, grid, tapering=True):
    if tapering:
        tapering_window = sfs.tapering.tukey(selection, alpha=0.3)
    else:
        tapering_window = sfs.tapering.none(selection)
    p = sfs.fd.synthesize(d, tapering_window, array, secondary_source,
                          grid=grid)
    sfs.plot2d.amplitude(p, grid, xnorm=[0, 0, 0])
    sfs.plot2d.loudspeakers(array.x, array.n, tapering_window)
    plt.show()


array = sfs.array.circular(number_of_secondary_sources, radius)

# WFS
d, selection, secondary_source = sfs.fd.wfs.point_25d(omega, array.x, array.n, xs)
sound_field(d, selection, secondary_source, array, grid)

# NFC-HOA
d, selection, secondary_source = sfs.fd.nfchoa.point_25d(omega, array.x, radius, xs)
sound_field(d, selection, secondary_source, array, grid)

# Simulated point source (denormalized by 4*pi)
p = sfs.fd.source.point(omega, xs, grid)
normalization = 4 * np.pi
sfs.plot2d.amplitude(normalization * p, grid)
plt.show()
```
The synthesized sound fields for WFS and NFC-HOA and the simulated sound field of the point source are:
Thank you,
Pedro Izquierdo L.