Amplitude mismatch between simulated point source and synthesized sound field #173
import sfs
import numpy as np
import matplotlib.pyplot as plt

number_of_secondary_sources = 2**7
frequency = 343  # in Hz
radius = 5  # in m
r = 1
xs = [0, 100 * radius, 0]  # position of virtual point source in m
normalization = 4 * np.pi * np.linalg.norm(xs)  # consider also the distance, not just 4*pi!
grid = sfs.util.xyz_grid([-r * radius, r * radius], [-r * radius, r * radius], 0, spacing=0.01)
omega = 2 * np.pi * frequency  # angular frequency


def sound_field(d, selection, secondary_source, array, grid, tapering=True):
    if tapering:
        tapering_window = sfs.tapering.tukey(selection, alpha=0.3)
    else:
        tapering_window = sfs.tapering.none(selection)
    p = sfs.fd.synthesize(d, tapering_window, array, secondary_source, grid=grid)
    p *= normalization  # normalize to compensate the 1/(4*pi*distance) decay
    sfs.plot2d.amplitude(p, grid, xnorm=None)
    sfs.plot2d.loudspeakers(array.x, array.n, tapering_window)
    plt.grid()
    plt.show()


array = sfs.array.circular(number_of_secondary_sources, radius)

# 2.5D WFS
d, selection, secondary_source = sfs.fd.wfs.point_25d(omega, array.x, array.n, xs)
sound_field(d, selection, secondary_source, array, grid)

# 2.5D NFC-HOA
d, selection, secondary_source = sfs.fd.nfchoa.point_25d(omega, array.x, radius, xs)
sound_field(d, selection, secondary_source, array, grid)

# reference: simulated point source, scaled by the same normalization
p = sfs.fd.source.point(omega, xs, grid)
sfs.plot2d.amplitude(normalization * p, grid)
plt.show()

The code above is probably what you want to have. I slightly changed the general parameters: with the virtual source placed 100 times the array radius away, its wavefront is almost plane inside the listening area, which makes it more convenient to see what is going on. The changes needed to solve your question are the ones marked in the code: the normalization factor 4*pi*norm(xs), which accounts for the distance of the virtual source and not just for the 4*pi, is applied both to the synthesized sound field and to the simulated point source.
We should probably modify the examples and docstrings to cover these normalizations for a more convenient user experience. Thanks for bringing this up!
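As a quick numeric sanity check (a sketch added for illustration, not part of the original comment; it continues the script above, recomputes the WFS driving function, and introduces the helper names twin, p_wfs, p_ref and centre), the synthesized field and the reference point source, both scaled by the same normalization, should have comparable magnitude near the centre of the array:

# continues the script above (uses sfs, np, omega, array, xs, grid, normalization)
d, selection, secondary_source = sfs.fd.wfs.point_25d(omega, array.x, array.n, xs)
twin = sfs.tapering.tukey(selection, alpha=0.3)
p_wfs = normalization * sfs.fd.synthesize(d, twin, array, secondary_source, grid=grid)
p_ref = normalization * sfs.fd.source.point(omega, xs, grid)
# compare magnitudes roughly at the centre of the array, away from the loudspeakers
centre = (p_wfs.shape[0] // 2, p_wfs.shape[1] // 2)
print(abs(p_wfs[centre]), abs(p_ref[centre]))  # both values should be of order 1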
Thank you very much, Frank, for your quick and complete answer!
FYI,
Hello,
I'm trying out the "Sound Field Synthesis - Circular loudspeaker arrays - Point source" examples (for WFS and NFC-HOA), and I have found that the amplitude of the synthesized sound field is much stronger than the amplitude of the simulated sound field of a point source (sfs.fd.source.point). I know that the simulated sound field of a point source is normalized, i.e. multiplied by 1/(4*pi), but even after undoing that normalization the amplitude mismatch still holds.
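(As a rough sense of scale, not part of the original report: a unit-strength point source at distance r has free-field amplitude 1/(4*pi*r), so the level of the simulated point source depends on the source distance as well as on the 4*pi factor. The distances below are arbitrary examples.)

import numpy as np

# free-field amplitude of a unit-strength point source at a few example distances
for r in (1.0, 5.0, 10.0):
    print(f"r = {r:4.1f} m  ->  1/(4*pi*r) = {1 / (4 * np.pi * r):.4f}")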
Why is this happening? Is this to be expected given the nature of the methods (WFS and NFC-HOA), or is there perhaps a missing normalization factor in the implementation?
The code that I used is almost the same as the one in the examples:
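(The snippet itself is not reproduced above; the following is a minimal sketch of what such an example looks like, reconstructed from the toolbox calls used elsewhere in this thread. The parameter values are assumptions rather than the original ones, and no normalization is applied to either field.)

import sfs
import numpy as np
import matplotlib.pyplot as plt

number_of_secondary_sources = 2**6   # assumed value
frequency = 343                      # in Hz, assumed value
radius = 1.5                         # array radius in m, assumed value
xs = [0, 2.5, 0]                     # virtual point source position in m, assumed value
omega = 2 * np.pi * frequency
grid = sfs.util.xyz_grid([-2, 2], [-2, 2], 0, spacing=0.02)

array = sfs.array.circular(number_of_secondary_sources, radius)

# WFS-synthesized sound field of the virtual point source (no extra scaling)
d, selection, secondary_source = sfs.fd.wfs.point_25d(omega, array.x, array.n, xs)
twin = sfs.tapering.tukey(selection, alpha=0.3)
p_wfs = sfs.fd.synthesize(d, twin, array, secondary_source, grid=grid)
sfs.plot2d.amplitude(p_wfs, grid)
sfs.plot2d.loudspeakers(array.x, array.n, twin)
plt.show()

# simulated point source, which carries the 1/(4*pi*distance) factor
p_point = sfs.fd.source.point(omega, xs, grid)
sfs.plot2d.amplitude(p_point, grid)
plt.show()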
The synthesized sound fields for WFS and NFC-HOA and the simulated sound field of the point source are:
Thank you,
Pedro Izquierdo L.