
Concert Hall sound


Recommended Posts

1 hour ago, Fitzcaraldo215 said:

Nonsense, as usual, Frank.

 

Fas is not entirely wrong. I would agree with a modified version of his post, as follows:

 

Quote

 

As people have been pointing out, mics don't discriminate; they pick up all the sound energy that is impinging on their diaphragms ... we are the ones that separate the content, and choose to focus on what's important, to us ...

 

Mics being closer to the "instruments" doesn't stop the pickup of the acoustics dead - it just means that the contribution of the latter is lower in level, and that requires a higher standard of replay for those acoustics to register when listening. To appreciate this, one needs a rig working at a level where one can play a recording that is a studio mix of various instruments, all recorded separately - and one can turn one's focus to each of those sounds in turn and "see" the acoustic in which each one was located; a complex recording is a layering of acoustics, each of which still retains its identity within the whole.

 

Rule of thumb: poorer quality playback == listening room is everything; convincing standard playback == listening environment is irrelevant.

 

 

 

6 hours ago, Fitzcaraldo215 said:

I want my listening to be based on a more comprehensive and accurate transfer of the sonic information from the recording venue to my listening room.

 

Excellent write-up. In concert halls, the ambient acoustics are everything: reflected energy reaches our ears at more than eight-fold the direct energy. Although there is some research suggesting we are particularly sensitive to the 45-degree frontal reflection, I personally feel the rear-half reflections are more important for orchestra. However, the front half is useful for smoothing the room response, provided the reflection itself is perfect.
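The "eight-fold" figure can be sanity-checked with the classic diffuse-field critical-distance formula, d_c ≈ 0.057·√(V/RT60); beyond that distance the reverberant field dominates the direct sound. The hall volume, RT60, and seat distance below are illustrative assumptions, not measurements of any particular hall:

```python
import math

def critical_distance(volume_m3: float, rt60_s: float) -> float:
    """Distance at which direct and reverberant energy are equal
    (diffuse-field approximation, omnidirectional source)."""
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

def reverb_to_direct_db(listener_m: float, d_c: float) -> float:
    """Reverberant-over-direct energy ratio in dB: direct energy falls
    as 1/r^2 while the reverberant field is roughly uniform."""
    return 10 * math.log10((listener_m / d_c) ** 2)

# Illustrative large hall: 20,000 m^3, RT60 = 2.0 s, seat at 20 m
d_c = critical_distance(20_000, 2.0)      # 5.7 m
ratio = reverb_to_direct_db(20.0, d_c)    # ~10.9 dB
print(f"critical distance = {d_c:.1f} m, reverb/direct = {ratio:.1f} dB")
```

At roughly 11 dB, reverberant energy exceeds direct energy by a factor of about twelve at a mid-hall seat, which is in the same ballpark as the eight-fold claim.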

 

Despite falling in love with multichannel SACDs in the early 2000s, I never pursued that direction due to the lack of material at the time. However, now with some of the 2L 5.1 releases, I think the rear ambience could be reproduced more accurately via hall impulse responses using stereo, and presented much better by using both the front and rear channels of 5.1. Even with stereo, I would still say a proper setup to recreate concert-hall ambience requires over 30 channels.

4 hours ago, Fitzcaraldo215 said:

I see the apparent logic of what you are trying to say.  But, I don't think it squares with listening experiments.  People just don't prefer listening in dead, anechoic rooms, regardless of the recordings.  I personally have heard many rooms that were too dead to be enjoyable.  Toole talks about "listening through the room", where the brain compensates for the room's reverb and reflections, making the room "disappear".  But, there are obviously finite limits to that.  Some rooms are better than others.

 

I think you're missing my point, my friend. It wouldn't be a "dead room" when the music is playing, because recordings designed to be played in such an ideal listening room would bring with them an approximation of the venue acoustics in which the performance was played. That is to say, the acoustics of a standard living room would be replaced by the acoustics of the place where the recording was made. Ostensibly, the room would then be a place that transported the listeners to the performance rather than bringing the performance into one's living room, where the acoustics of the performance are overlaid by that room's acoustics (good or bad).

 

4 hours ago, Fitzcaraldo215 said:

There is also an excellent case to be made for speaker/room setups where, while the room is not dead in terms of reflections, the frequency response is smoothly downward sloping, with bass frequencies EQed to be flat or near flat to eliminate room modal variations, which are often huge.  Toole prefers EQing just the bass below about 500 Hz and using speakers with smooth directivity so that room reflections do not alter frequency response.  Others, like me, use full range DSP EQ to control frequency response at the listening position.  But, Toole is no fan of that approach.

With today's recording paradigm, that is a useful way to do things. What I was referring to is an ideal listening environment where the room acoustics don't have to be dialed out of the equation with DSP, because such a listening room has no acoustics of its own. My room is, like yours, "neutralized" (to the extent possible) by DSP. The main thing I find this does is make the crossover between my main speakers and my powered subwoofers absolutely seamless. As far as altering my room acoustics goes, switching the DSP in and out of my system while listening yields only a very subtle and often indistinguishable difference (except for the subwoofers).
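For readers curious what "EQing the bass" amounts to in practice, a parametric room-mode cut is typically a peaking biquad. Below is a minimal sketch using the well-known RBJ Audio-EQ-Cookbook formulas; the 45 Hz mode frequency, the 6 dB cut, and the Q are hypothetical values chosen for illustration:

```python
import math

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float):
    """RBJ Audio-EQ-Cookbook peaking filter; returns (b, a)
    coefficient lists normalized so a[0] == 1."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_db(b, a, fs: float, f: float) -> float:
    """Evaluate |H(e^jw)| of the biquad in dB at frequency f."""
    w = 2 * math.pi * f / fs
    z1 = complex(math.cos(-w), math.sin(-w))
    z2 = complex(math.cos(-2 * w), math.sin(-2 * w))
    h = (b[0] + b[1] * z1 + b[2] * z2) / (a[0] + a[1] * z1 + a[2] * z2)
    return 20 * math.log10(abs(h))

# Hypothetical 6 dB cut at a 45 Hz room mode, Q = 4, 48 kHz sample rate
b, a = peaking_biquad(48_000, 45.0, -6.0, 4.0)
print(f"gain at 45 Hz: {magnitude_db(b, a, 48_000, 45.0):+.1f} dB")  # -6.0
```

The filter is surgical: far from the mode (say, at 10 kHz) the response returns to 0 dB, which is the sense in which bass-only EQ leaves the rest of the room's sound untouched.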

George

22 minutes ago, Ralf11 said:

how about: While imperfect, mics do not actively pay attention in the way a human listener does.

 

 

by "pay attention" I mean the sorts of things cognitive psychos study in attentional processes

 

- subject to change when Siri becomes self-aware

 

IMO, the psychoacoustic aspect of human hearing is more about how sound is processed in our brain than about attention. Take, for example, the precedence effect: a microphone is capable of capturing sound as accurately as our ears, but we lose the ability to discern two different sources if they arrive within a few ms of each other.
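The precedence-effect point can be illustrated with a toy signal: a microphone's capture retains a click and its 3 ms "reflection" as two distinct events, while a listener would fuse them into one. The delay and level below are illustrative:

```python
SAMPLE_RATE = 48_000
DELAY_MS = 3.0          # reflection delay, inside the few-ms fusion window
REFLECTION_GAIN = 0.6   # illustrative reflection level

delay = int(SAMPLE_RATE * DELAY_MS / 1000)   # 144 samples
n = delay + 100

direct = [0.0] * n
direct[0] = 1.0                              # unit click at t = 0

mic = direct[:]                              # what the diaphragm "hears"
mic[delay] += REFLECTION_GAIN                # delayed copy = reflection

# Both events are plainly present in the captured waveform:
events = [i for i, s in enumerate(mic) if abs(s) > 0.1]
print(events)  # [0, 144] -> two distinct arrivals, 3 ms apart
```

The fusion into a single perceived click happens in the brain, not in the microphone, which is exactly the distinction being drawn here.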

1 hour ago, Ralf11 said:

no my post is not in jest - one hopes that yours are

 

no microphone is perfect

 

It may not be "perfect", but it is an exceedingly simple device - sound energy will move the diaphragm, unless it somehow is stuck in its motion, and an extremely sensitive electrical transducing mechanism registers that movement. Now, the translation to an electrical signal may not be 100% correct, but it will still produce a replica of the air's vibrational motion.

1 hour ago, STC said:

 

Fas is not entirely wrong. I would agree with a modified version of his post, as follows:

 

 

 

 

At one level I find this amusing - people are so earnest in their rejection of the concept that the quality of playback is critical to how the presentation registers subjectively ... as someone who has heard this changing of the subjective qualities literally thousands of times over the years, for decades, on numerous systems of my own, and of those belonging to others - it's like saying there is no such thing as colour TV because I haven't actually come across one yet; black and white is all that's possible - so there!!!

1 hour ago, Ralf11 said:

how about: While imperfect, mics do not actively pay attention in the way a human listener does.

 

 

by "pay attention" I mean the sorts of things cognitive psychos study in attentional processes

 

- subject to change when Siri becomes self-aware

 

What a playback rig has to do is regurgitate absolutely everything the microphone 'heard', hopefully not too contaminated by the following stages of processing - which gives the ear/brain all the necessary data, allowing the human listener "to actively pay attention" - the recording/playback chain is a conduit for putting the ear back where the microphone was; a sound time machine.

2 hours ago, Ralf11 said:

he said would but meant wouldn't...

 

see:

mics always discriminate; they never pick up all the sound energy that is impinging on their diaphragms

Yeah, they do, Ralf11; it's just that they usually don't do it accurately or uniformly. That depends on the design of the mike. For instance, omnidirectional mikes are supposed to pick up their frequency range equally from all directions, but they don't. Cardioid mikes are supposed to pick up sounds only from the front while attenuating sounds coming from the back - except that the ability of the mike to make that distinction is frequency-dependent. A cardioid mike might be a textbook uni-directional mike at some frequencies and damn near an omnidirectional one at others.

George

16 minutes ago, fas42 said:

....as someone who has heard this changing of the subjective qualities literally thousands of times over the years, for decades, on numerous systems of my own, and of those belonging to others....

 

Fas, it is hard to accept or even understand your statement when you also conclude that what you hear is

 

Quote

 ear/brain has accepted the illusion that one is hoping the rig can trigger - a switch has gone on inside one's head, "I'm happy to keep being fooled ... ".

 

5 hours ago, STC said:

 

Fas, it is hard to accept or even understand your statement when you also conclude that what you hear is

 

 

 

What I'm saying is that a particular setup can be in a mode of convincing sound at one point in time, and not so at another point in time - it all depends on the state of tune of the overall chain. What the specific components are in that chain is nowhere near as important as the integrity of the chain in key areas of behaviour - I've had a rig switch between the two states numerous times on a single day, depending upon precisely what I've changed, stabilised, and quite often stuffed up by accidentally moving or disturbing, during that period.

 

If the rig is unconvincing, then it just sounds like the normal stereo we all know and 'love', ^_^; a plateau of performance has to be reached which triggers the "fool you!" illusion - there is no conscious control over this, I can't "decide" that the SQ is sufficient; it's either good enough, or it ain't ...

40 minutes ago, fas42 said:

What the specific components are in that chain is nowhere near as important as the integrity of the chain in key areas of behaviour - I've had a rig switch between the two states numerous times on a single day, depending upon precisely what I've changed, stabilised, and quite often stuffed up by accidentally moving or disturbing, during that period.

 

I have not reached that stage yet.

14 hours ago, fas42 said:

 

As people have been pointing out, mics don't discriminate; they pick up all the sound energy that is impinging on their diaphragms ... we are the ones that separate the content, and choose to focus on what's important, to us ...

 

Mics being closer to the "instruments" doesn't stop the pickup of the acoustics dead - it just means that the contribution of the latter is lower in level, and that requires a higher standard of replay for those acoustics to register when listening. To appreciate this, one needs a rig working at a level where one can play a recording that is a studio mix of various instruments, all recorded separately - and one can turn one's focus to each of those sounds in turn and "see" the acoustic in which each one was located; a complex recording is a layering of acoustics, each of which still retains its identity within the whole.

 

Rule of thumb: poorer quality playback == listening room is everything; convincing standard playback == listening environment is irrelevant.

 

Close-mic'ing increases the level difference between direct and reflected sound (or ambience) to a point where the acoustic cues are no longer audible, which is why engineers use extra mics for ambience.
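The effect of mic distance on the direct-to-reflected balance follows from inverse-square spreading: the direct sound's advantage over a reflection grows as 20·log10 of the path-length ratio. A small sketch with illustrative distances:

```python
import math

def direct_vs_reflected_db(mic_to_source_m: float, reflected_path_m: float) -> float:
    """Level advantage of the direct sound over a single reflection,
    from 1/r^2 spreading alone (surface absorption ignored)."""
    return 20 * math.log10(reflected_path_m / mic_to_source_m)

# A hall mic 4 m from the source vs a close mic at 0.2 m,
# with a nominal 20 m reflected path in both cases:
print(f"hall mic:  {direct_vs_reflected_db(4.0, 20.0):.1f} dB")   # 14.0 dB
print(f"close mic: {direct_vs_reflected_db(0.2, 20.0):.1f} dB")   # 40.0 dB
```

With the close mic, the room cues sit some 40 dB below the direct sound, which is consistent with the point that they become effectively inaudible without dedicated ambience mics.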

 

The large majority of studio recordings are close-mic'ed, and many studios are semi-anechoic environments, so there are no ambience cues - only a soundstage fabricated through pan-potting and reverb...
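The "fabricated soundstage through pan-potting" usually means a constant-power pan law: each mono track gets left/right gains whose squares sum to one, so loudness stays constant as a source is placed across the stage. A minimal sketch (the [-1, 1] pan convention is an assumption; consoles express this in various ways):

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain) with l^2 + r^2 == 1."""
    theta = (pan + 1) * math.pi / 4          # map [-1, 1] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

for pan in (-1.0, 0.0, 1.0):
    l, r = constant_power_pan(pan)
    print(f"pan {pan:+.1f}: L={l:.3f} R={r:.3f} power={l*l + r*r:.3f}")
```

At center both gains are ≈0.707 (-3 dB each), which is exactly the -3 dB center pan law; the "position" of the source is entirely a level trick, with no acoustic cues involved.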

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

13 hours ago, STC said:

 

Excellent write-up. In concert halls, the ambient acoustics are everything: reflected energy reaches our ears at more than eight-fold the direct energy. Although there is some research suggesting we are particularly sensitive to the 45-degree frontal reflection, I personally feel the rear-half reflections are more important for orchestra. However, the front half is useful for smoothing the room response, provided the reflection itself is perfect.

 

Despite falling in love with multichannel SACDs in the early 2000s, I never pursued that direction due to the lack of material at the time. However, now with some of the 2L 5.1 releases, I think the rear ambience could be reproduced more accurately via hall impulse responses using stereo, and presented much better by using both the front and rear channels of 5.1. Even with stereo, I would still say a proper setup to recreate concert-hall ambience requires over 30 channels.

Thank you.  

 

As far as channel count is concerned, yes, we know that real images from speakers are more precise than phantom images between speakers, and also that phantom images are better and more precise over a smaller distance, i.e. a narrower angle between the speakers. Hence, 5.1/7.1 offer an improved frontal soundstage over stereo or '70s Quad due to the center channel. That is in addition to better handling of dialogue in relation to the screen on video material than can be done with phantom center imaging.

 

But there are sharply diminishing returns from adding additional channels. Toole's experiments show the channel count has to be increased substantially in order to make much perceptible difference to listeners. So your guess of 30 channels would be consistent with that. The downside, of course, is the difficulty of recording that way discretely and of distributing files with such large channel counts.

 

Artificial simulation of that increased surround channel count is possible, but can it be done in a natural way, without artifacts or an imposed single sonic signature? So far, I don't find artificial upmixing algorithms to be the equal of discrete Mch recordings, and I distrust upmixing.

 

There is another aspect to this also.  Contrary to popular belief, the surround channels in discrete Mch recordings do not just contain diffuse reflected sound.  If you think about it from the perspective of a distant surround mike, what is picked up is inseparably both the direct sound from that perspective plus the diffuse reflected sound from that perspective.  This is clear from listening to surround channels up close.  On playback, that combination of direct plus reflected sound creates phantom images heard by the listener as a result of interaction with the front channels and other surround channels.  

 

The effect is to not only bring the diffuse hall sound to the listener, but also to enhance the frontal soundstage in terms of width, depth and dimensionality.  I personally find that discrete Mch recordings pull the frontal soundstage out into the room somewhat in front of the plane of the front speakers.  A sense of added natural sounding depth and dimensionality over stereo playback is notably apparent to me, since the front speaker array can, like stereo, simultaneously produce phantom depth behind the front speaker plane.

 

My point is it may not be quite so simple to introduce added algorithmic channels that naturally convey the sound in the hall.  One must consider both the direct sound output of those channels and their interaction via phantom imaging with other speaker channels in the array.

2 hours ago, semente said:

Close-mic'ing increases the level difference between direct and reflected sound (or ambience) to a point where the acoustic cues are no longer audible, which is why engineers use extra mics for ambience.

 

Don't forget, semente, that there's another tool for doing that: the mike's pickup pattern. For instance, a stereo pair of the "coincident" type (X-Y, A-B, ORTF, Blumlein, etc.) generally uses cardioid-patterned mikes. These pick up from the front, with lobes to the sides, but with highly attenuated pickup from the rear. This gives the best stereo and is particularly useful in a live concert where there's an audience, because the lack of sensitivity from the rear attenuates audience noise. In a situation where there is no audience, one can instead record with a "coincident" pair of bi-directional, or figure-of-eight, patterned mikes. This not only allows the mikes to pick up the ensemble in true stereo, it also allows the "backs" of the mikes (which have the same pickup characteristics as the front lobes) to capture the hall sound. Then the only difference between the amplitude of the direct sound from the players and the reflected sound from the hard surfaces in the hall is the difference in distance; i.e., ostensibly, the musicians are a lot closer to the front lobes of the stereo pair than the hard surfaces reflecting the sound back to the rear lobes of the bi-directional mikes. Of course, omnidirectional mikes cannot be used for coincident miking: the overlapping patterns are so complete, and the mikes so close together on a stereo "T-bar", that the listener's ears hear each mike's output as the same - i.e., mono.
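The patterns George describes are all members of one first-order family, s(θ) = a + (1 − a)·cos θ: a = 1 gives an omni, a = 0.5 a cardioid, a = 0 a figure-of-eight. A quick sketch of their idealized front and rear sensitivity (real mikes, as he notes, deviate with frequency):

```python
import math

def pattern_gain(a: float, theta_deg: float) -> float:
    """Idealized first-order mic sensitivity: a + (1 - a) * cos(theta).
    a=1 omni, a=0.5 cardioid, a=0 figure-of-eight."""
    return a + (1 - a) * math.cos(math.radians(theta_deg))

for name, a in [("omni", 1.0), ("cardioid", 0.5), ("figure-8", 0.0)]:
    front = pattern_gain(a, 0.0)      # on-axis
    rear = pattern_gain(a, 180.0)     # directly behind
    print(f"{name:9s} front={front:+.2f} rear={rear:+.2f}")
```

The cardioid's rear null (gain 0 at 180 degrees) is what suppresses audience noise, while the figure-of-eight's full-sensitivity, polarity-inverted rear lobe (gain -1) is what lets a bi-directional pair capture the hall sound behind it.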

George

2 minutes ago, gmgraves said:

Don't forget, semente, that there's another tool for doing that: the mike's pickup pattern. For instance, a stereo pair of the "coincident" type (X-Y, A-B, ORTF, Blumlein, etc.) generally uses cardioid-patterned mikes. These pick up from the front, with lobes to the sides, but with highly attenuated pickup from the rear. This gives the best stereo and is particularly useful in a live concert where there's an audience, because the lack of sensitivity from the rear attenuates audience noise. In a situation where there is no audience, one can instead record with a "coincident" pair of bi-directional, or figure-of-eight, patterned mikes. This not only allows the mikes to pick up the ensemble in true stereo, it also allows the "backs" of the mikes (which have the same pickup characteristics as the front lobes) to capture the hall sound. Then the only difference between the amplitude of the direct sound from the players and the reflected sound from the hard surfaces in the hall is the difference in distance; i.e., ostensibly, the musicians are a lot closer to the front lobes of the stereo pair than the hard surfaces reflecting the sound back to the rear lobes of the bi-directional mikes. Of course, omnidirectional mikes cannot be used for coincident miking: the overlapping patterns are so complete, and the mikes so close together on a stereo "T-bar", that the listener's ears hear each mike's output as the same - i.e., mono.

 

Thanks for that. Very educational.


15 hours ago, gmgraves said:

I think you're missing my point, my friend. It wouldn't be a "dead room" when the music is playing, because recordings designed to be played in such an ideal listening room would bring with them an approximation of the venue acoustics in which the performance was played. That is to say, the acoustics of a standard living room would be replaced by the acoustics of the place where the recording was made. Ostensibly, the room would then be a place that transported the listeners to the performance rather than bringing the performance into one's living room, where the acoustics of the performance are overlaid by that room's acoustics (good or bad).


As is true in the concert hall, what we hear in our rooms is an inseparable combination of direct plus reflected sound.  Speakers have a dispersion pattern, a directivity index which is very much part of what we hear.  Eliminating that reflected component by deadening all reflections produces sound that is unpleasant and not preferred, even with music.  Experiments have established  this with listeners.  Scientists and engineers who work with speakers in anechoic chambers do not go home and try to make their listening rooms anechoic.
 
Yes, aspects of uncontrolled room reflections can be detrimental, such as room modal variations, floor bounce, glare at certain frequencies, etc. But other aspects of room reflections are positive contributions to the sound that we like. And listening anechoically would require much more amplifier power and speaker efficiency, in addition to being costly and potentially ugly.
 
Attenuating the bad aspects of the reflections while keeping the good aspects does result in the venue acoustics on the recording overriding and masking the listening room acoustics.  Partly, this is helped by our brain adapting to and "listening through" the room, which we do quite naturally and subconsciously.  It produces a very good and enjoyable "you are there" illusion of the concert hall in Mch, much more so than in stereo, I have found.
 
 
 
1 hour ago, Fitzcaraldo215 said:

Eliminating that reflected component by deadening all reflections produces sound that is unpleasant and not preferred, even with music.

Yet many audiophiles prefer listening through headphones.


1 hour ago, semente said:

Yet many audiophiles prefer listening through headphones.

And cross feed is employed by many of those.  

 

Cross feed is not exactly like reflections, though.

 

I've recorded some music via close miking. I can do a simple mix where everything is full right, full left, or dead center; that isn't uncommon. It sounds pretty good over speakers but odd over headphones, according to me and other people who listened over phones. If I add about 20% from one channel to the other, over speakers it isn't a big difference, but over headphones everyone was much happier with the result.
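The 20% bleed described above amounts to a zero-delay crossfeed. A proper crossfeed filter also delays and low-passes the opposite channel to mimic head shadowing, but the basic mix can be sketched as follows (the 0.2 figure is from the post; the normalization to prevent clipping is my own assumption):

```python
def simple_crossfeed(left, right, bleed: float = 0.2):
    """Mix a fraction of each channel into the other, then rescale
    so a full-scale input cannot clip."""
    scale = 1.0 / (1.0 + bleed)
    out_l = [(l + bleed * r) * scale for l, r in zip(left, right)]
    out_r = [(r + bleed * l) * scale for l, r in zip(left, right)]
    return out_l, out_r

# A hard-panned source (all left) acquires a quiet right-channel image:
l, r = simple_crossfeed([1.0, 0.5], [0.0, 0.0])
print(l, r)  # left ~ [0.833, 0.417], right ~ [0.167, 0.083]
```

Over speakers the bleed is largely masked by the natural acoustic crosstalk between the two speakers and both ears; over headphones, where each ear hears only one channel, it makes the hard-panned image far less unnatural, matching the experience described.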

 

Headphones lean toward the dead acoustic. One problem with phones is imaging inside your head rather than out in front of you. I've read that listening to stereo in an anechoic chamber with your head kept still sounds much like that, with in-the-head imaging. If people are allowed to move their heads, the imaging pops out front. Apparently even minor head movement gives our ears the ability to process sound as coming from out in front in a way they can't with no reflections present and no head movement.

And always keep in mind: Cognitive biases, like seeing optical illusions are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed about, but it is something that affects our objective evaluation of reality. 

1 hour ago, semente said:

Yet many audiophiles prefer listening through headphones.

Do they prefer it because they have lousy speakers, a bad room, grouchy neighbors, kids who go to bed early, etc.? Or, do they really think it sounds, in the context of this thread, more like "concert hall realism" than even a decent stereo?  If it is the latter reason,  all I can do is shake my head and wish them listening enjoyment.  I don't think most of them listen to classical music, or if they do, they have little experience with live concerts.

 

I used headphones in the '60's for many of the former reasons.  I quickly outgrew it never to return as I acquired a decent stereo.  It simply does not image properly with normal, non-binaural recordings.  Plus, it is fatiguing.

4 hours ago, Fitzcaraldo215 said:
As is true in the concert hall, what we hear in our rooms is an inseparable combination of direct plus reflected sound.  Speakers have a dispersion pattern, a directivity index which is very much part of what we hear.  Eliminating that reflected component by deadening all reflections produces sound that is unpleasant and not preferred, even with music.  Experiments have established  this with listeners.  Scientists and engineers who work with speakers in anechoic chambers do not go home and try to make their listening rooms anechoic.
 
Yes, aspects of uncontrolled room reflections can be detrimental, such as room modal variations, floor bounce, glare at certain frequencies, etc. But other aspects of room reflections are positive contributions to the sound that we like. And listening anechoically would require much more amplifier power and speaker efficiency, in addition to being costly and potentially ugly.
 
Attenuating the bad aspects of the reflections while keeping the good aspects does result in the venue acoustics on the recording overriding and masking the listening room acoustics.  Partly, this is helped by our brain adapting to and "listening through" the room, which we do quite naturally and subconsciously.  It produces a very good and enjoyable "you are there" illusion of the concert hall in Mch, much more so than in stereo, I have found.
 
 
 

I have no dispute with that, but ideally the best result would be multi-channel recordings (pure stereo for the two front channels; ambience only in the side, rear, and overhead channels; separate subwoofer channels optional) played in a room with no acoustical signature of its own. Such a listening experience would be totally immersive and, from the standpoint of the concert experience, startlingly real. Of course, binaural sound does that to some degree. I've been listening to the BBC Proms from the Royal Albert Hall in London streamed in binaural AAC sound. At times the result is almost spooky, but you can't move your head or the illusion disappears, like being awakened from a dream.

George
