RogerJoensson wrote:
Laila wrote: I find it hard to interpret the text in any other way than that it concerned confirmed deviations that could be detected in the frequency range 20 Hz-20 kHz.
Hm. A follow-up question, then: how did they isolate the difference at, or only above, 20 kHz? Are we talking about sine tones, or noise/music plus a filter (and if so, how narrow-band a filter)? And so on.
I have left out some material, such as anecdotes and other things that are not essential. Since the article is in the AES library, there are restrictions on whether whole articles may be copied or distributed; I hope this is OK. The article is considerably longer than this. Under "Programme source" in the quote below he discusses programme material.
Paul Frindle in AES wrote: Historically, the measurement and quality assessment of audio systems has provided much controversy. The process of determining the factors which constitute good audio reproduction, and which system performance parameters can provide measurements that reflect this, has been a difficult process that has evolved over many years. The advent of digital audio systems, with a new set of possible error mechanisms, has refuelled this debate. This paper aims to explore some of the historical issues that have arisen from this evolution and provide a view of some lessons to be learnt from it. Examples drawn from personal experiences are presented to illustrate these issues and provide evidence to support an open-minded and balanced approach to the problem. In particular, the benefits of properly conducted comparative listening tests are presented. A general philosophy is presented that utilises the salient points from the many, often divided attitudes that exist in this field, in order to arrive at a working practice which can provide benefits to sound quality research.
Introduction.
The debate over the factors contributing to audio quality has raged for many years. The most intractable of these debates has been the audio community's effort to reconcile what is actually experienced with the accepted measurement science of the time. The process has seemed like a constant battle for pioneering engineers to obtain acceptance for each new parameter suspected of affecting audio quality. Over the years a distinct and damaging split has occurred between the 'listeners' and the technical community, since the latter tend to regard the methods of the former as unscientific.
To worsen matters, the growing presence of vested interest of manufacturers under pressure from 'specmanship' and the increasingly powerful role of HiFi reviewers and 'Gurus' has added a dimension of distrust. Many of the real issues are clouded by the ever increasing 'jargonism', spilling over from the Hi-Fi fraternity into the professional domain. In the highly competitive and cynical environment currently provided by media business, a serious researcher could be forgiven for suspecting that every new technical issue is invented primarily for the purposes of marketing alone. Since this attitude is seen as contrary to strict scientific ethics, many of the views from the Hi-Fi community tend to be discounted before any serious testing is actually done.
This environment, and the increasingly polarised arguments conducted in public, initially in Hi-Fi press, and more recently in professional industry periodicals, has produced an atmosphere of scepticism amongst many audio professionals. The potentially important contribution provided by these people is thus declining as they become reluctant to join the argument.
All of the above issues have made it difficult for researchers to determine the important aspects of measurement in the audio field.
Open minded approach.
In any truly scientific approach, all input, from whatever source, should be regarded as potential input to research. In reality, the comments and observations of the practical users of audio equipment are a rich source of information. As such, it is quite wrong to discount this evidence because it is subjective or arrived at by unscientific means in the strictest sense. Even professional users do not always have the facility, time or technical background to pursue detailed research into the causes of phenomena. Therefore, the validity of their claims should not be dismissed, and the opinions formed even in the absence of firm data can provide valuable clues as to where problems may reside.
In the same vein, it would be unreasonable to expect a designer or researcher to have the continuous access to the use of equipment, in the actual environment that the user has. Therefore, it would be equally wrong to assume that the opinions formed in this more rigorously scientific situation are necessarily entirely correct, or are the result of complete data. In practice, it is all too easy to develop dogmatic ideas in the laboratory simply as a result of isolation from the outside world.
Many long held and theoretically plausible opinions I have held have been shattered by subsequent investigation into reported phenomena. I have needed to offer many apologies during my career.
New understanding, and subsequent measurement criteria, emerge most often as a direct result of investigation into unexpected phenomena. It could be argued that subjective results from practical experience are the main 'fuel' for research into audio quality, as much as in any other scientific research.
It is also worth noting that technical dogma can be inappropriate and unhelpful when applied to the real system limitations imposed on users. It is of little value, for instance, to chastise researchers who attempt to quantify the effects of signal cables on audio by simply announcing that they are foolish to be using any at all. Whilst everyone would agree that cables are a potential source of problems and should be minimised, no one as yet can do without any, since no system is infinitely small! The effect of this kind of analysis, despite being strictly correct, is to further alienate users and restrict the flow of opinion.
"At each point in my career that I thought I knew all the factors affecting sound quality in my design, I was amazed to find yet another issue I would not have imagined I could hear."
Above all, it is a complete mistake to consider that all is understood in the field of audio quality and become too comfortable with one’s own beliefs.
Audio Testing.
“All sound that emanates from a loudspeaker is the result of electrical stimulus. Therefore, any effect due to electronic circuitry, that we hear, can ultimately be measured. The problem is; when do you know you've measured everything that you can hear?”
The process of audio measurement must start and finish with listening. By definition we are concerned specifically and only with that which can be heard. The problem over the years of audio measurement has been the continuous re-assessment of what we can hear. The sensitivity of the human hearing system repeatedly surpasses expectation and therefore the technical specifications of the day have not adequately defined the system. For instance, measurements that gave valid comparisons in the analogue era, became misleading at the arrival of digital audio since assumptions implicit in the specifications were either not appreciated or forgotten.
A good example of this is the THD+N measurement, which served well as a method of comparison for analogue systems. In this case an assumption of the test was that harmonic content due to distortion would reduce in proportion with signal level. Although this was generally true for analogue systems, it was horribly misleading in a digital system, where the linearity profile versus signal level was complex and generally disadvantageous at low levels, where noise energy dominated the measurement.
Another misconception along the same lines was the specification of digital system resolution in 'bits'. Signals were said to be subject to 'quantisation noise' at the level of the least significant bit present in the system. It took quite a while for people to realise that the said 'noise' was actually comprised almost entirely of harmonic distortion, something that escaped the THD+N test but definitely did not escape the ears. Although this issue has largely been put to rest by work into statistical dithering by researchers such as Stanley Lipshitz, the memory of the sound provided by undithered systems persists in the industry. Indeed, some equipment still does not make adequate provision for dithering. Many phenomena noted by users, when trying differing combinations of equipment, are caused by dither violations, and the misconception that the audio quality of a digital system can be defined as resolution in 'bits' remains popular even today.
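Frindle's point, that undithered quantisation error is correlated harmonic distortion rather than benign noise, is easy to demonstrate numerically. The following is a minimal sketch, not taken from the paper: the bit depth, tone level and analysis bin are illustrative assumptions. It quantises a low-level sine with and without TPDF dither and measures the error power at the third harmonic of the tone.

```python
import math
import random

FS = 48000.0                 # sample rate (assumed)
N = 8192                     # analysis length
F0 = 171 * FS / N            # tone placed exactly on a DFT bin (~1002 Hz)
STEP = 2.0 / 65536           # 16-bit quantiser step (full scale = +/-1)
AMP = 2.5 * STEP             # low-level tone, about 2.5 LSB peak

def quantize(x):
    # mid-tread rounding quantiser
    return STEP * round(x / STEP)

def tpdf_dither(rng):
    # triangular-PDF dither, +/-1 LSB peak, in the style of Lipshitz et al.
    return (rng.random() - rng.random()) * STEP

def power_at(signal, freq):
    # power of `signal` at a single DFT frequency (normalised)
    re = sum(s * math.cos(2 * math.pi * freq * n / FS) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / FS) for n, s in enumerate(signal))
    return (re * re + im * im) / (N * N)

rng = random.Random(0)
x = [AMP * math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]
plain = [quantize(s) for s in x]
dithered = [quantize(s + tpdf_dither(rng)) for s in x]

# error power at the 3rd harmonic: concentrated (distortion) without dither,
# spread out as broadband noise with it
h3_plain = power_at([q - s for q, s in zip(plain, x)], 3 * F0)
h3_dith = power_at([q - s for q, s in zip(dithered, x)], 3 * F0)
```

Without dither the error concentrates at harmonics of the tone; with TPDF dither the total error energy is slightly higher but signal-independent and spread across the band, which the ear tolerates far better, and which a single THD+N figure does not distinguish.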
Both the above issues were caused by a basic underestimation both of the ear's ability to differentiate between noise and low-level harmonic distortion, and of our ability to differentiate between the complex harmonic signatures produced by musical instruments. In other words, it got fixed only after people heard it.
What is a listening test?
The above anecdote is of course the complete reverse of what we actually need in a rigorously comparative listening test. The listening test as such has received bad press mostly as a result of the apparent subjectivity of the kind described in Hi-Fi reports. It is perfectly reasonable to conduct a listening test to decide which equipment sounds more pleasing, but this does not constitute evidence supporting the accuracy of a system. To the contrary, it has been demonstrated that some classes of error can enhance the perceived audio experience under certain conditions.
The role of the listening test, when used as a tool to determine what can be heard and what we are hearing, is to eliminate all subjective analysis in favour of comparison only. In other words, we are trying to detect only a difference between systems. We can therefore confidently state that if we can hear a difference, something is wrong and the system under test is not sonically transparent.
As designers of audio equipment, we can then utilise the listening test set up to discover the causes of the quality differences experienced, attempt to fix them, and verify that we were correct by listening again.
Subjective comparisons only occur after this process when compromises are required between known artefacts in non-ideal systems. In this case we want to find an optimum balance between parameters such as circuit cost, complexity and perceived quality.
Setting up a listening test environment.
In order to be fully confident that differences heard in tests are actually the result of the equipment under test, it is necessary to provide a test system that exhibits no external difference effects of its own. The absolute quality of the switching system is less important, since both A and B signals will be subject to the same changes.
A design we have used consisted of a switching box utilising mercury-wetted relays and high-quality stereo level controls to ensure good left-right level matching. The internal amplifier for buffering the output from the inputs was specially designed, using a hybrid op-amp and transistor stage, and was itself subject to ABX tests before we accepted it.
"After it took me three days to design a simple buffer amp that was transparent in an ABX test, I began to wonder if a transparent A/D converter would ever be a realistic goal!"
The test system is verified when a wire link connects the A and B inputs and no difference can be heard between them.
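Frindle mentions ABX testing the buffer amplifier itself but gives no scoring details. A common way to decide whether a listener genuinely hears a difference, shown here as a hedged sketch rather than Frindle's own method, is a one-sided binomial test: how likely is the observed number of correct identifications under pure guessing? The trial counts below are illustrative.

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial test: probability of getting at least `correct`
    right answers out of `trials` ABX trials by guessing (p = 0.5 each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# illustrative session: 14 correct out of 16 trials
p = abx_p_value(14, 16)   # guessing is very unlikely to do this well
```

A small p-value (conventionally below 0.05) supports the claim that the listener is detecting a real difference; a score near half the trials is consistent with guessing.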
Programme source.
The selection of suitable material for a listening test requires some consideration. For instance, a digital recording could already have artefacts similar to those of a digital system under test, masking the differences to some degree. In general, the widest variety of music and recording types, from clean classical to loud popular music, can show up different varieties of problem.
The use of sine waves, square waves and noise sources can also help to narrow down specific effects. We have found that pink noise is particularly useful in identifying response and stereo imaging problems caused by differential delays.
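The paper does not say how its pink noise was generated. A well-known way to produce approximate 1/f noise with only the standard library is the Voss-McCartney algorithm, sketched here; the row count and uniform amplitude are my assumptions, not values from the paper.

```python
import random

def pink_noise(n, rows=16, seed=1):
    """Approximate 1/f (pink) noise via the Voss-McCartney algorithm:
    `rows` white-noise sources, where row k is refreshed only every
    2**k samples; summing them weights low frequencies more heavily."""
    rng = random.Random(seed)
    values = [0.0] * rows
    out = []
    for i in range(n):
        for k in range(rows):
            if i % (1 << k) == 0:          # row k is due for a new value
                values[k] = rng.uniform(-1.0, 1.0)
        out.append(sum(values) / rows)     # normalise into [-1, 1]
    return out

samples = pink_noise(4096)
```

Fixing the seed makes the noise reproducible, which matters in a comparison test: both signal paths must receive the identical stimulus.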
Conducting the test.
The single most important requirement for a valid test is to set the relative levels between the direct path and the system under test very accurately. A difference of just 0.1 dB in level will be readily heard, but it will not necessarily manifest itself as a level change. We have shown that with differences in level below 0.2 dB, we subconsciously search for the cause. Any listener will tell you there is a difference, but each will interpret it as a different artefact. These interpretations seem to change with attention and can come up as anything ranging from frequency response differences to stereo image shifting. Once the listener has convinced himself what the problem seems to be, all further listening reinforces this erroneous interpretation.
As a point of interest, no listener ever believes that it is only a level change. Therefore, it takes much self-discipline for even the most experienced listener to make himself check levels if effects occur.
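To put the 0.1 dB figure in perspective, it corresponds to an amplitude ratio of barely 1.2 %. A small sketch of the arithmetic involved in checking and trimming levels from RMS measurements (the function names are illustrative, not from the paper):

```python
import math

def level_difference_db(rms_a, rms_b):
    """Level difference between two paths, in dB, from their RMS values."""
    return 20.0 * math.log10(rms_a / rms_b)

def trim_gain(rms_a, rms_b):
    """Linear gain to apply to path B so its level matches path A."""
    return rms_a / rms_b

# the audibility threshold Frindle cites: 0.1 dB is only ~1.16 % in amplitude
ratio_01db = 10.0 ** (0.1 / 20.0)
```

In other words, the matching accuracy needed is well beyond what can be set by ear against a meter; it has to be measured and trimmed deliberately.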
Learning the difference.
Even with the degree of care previously described, a listening test can be arranged to hide subtle differences if conducted too hastily. We have found that the only fundamental difference between people of normal hearing faculties, in their ability to detect audio artefacts, is experience. Even a trained professional may not be sensitive to an artefact he has not heard before. For these reasons, it is essential that any double-blind test be preceded by a lengthy period during which the operator knows which source he is auditioning, can compare freely, and can familiarise himself with the sound of both sources.
This effect goes some way to explaining why it is often audio professionals who highlight problems in a system that others have missed: long-term familiarisation through constant exposure.
This is particularly important if testing any new technology since the artefacts produced can be entirely new even to the designer.
"When a visiting engineer was invited to do an AB comparison of a system, the wrong test coefficients were inadvertently loaded into the reconstruction FIR filters of the DAC converters; these were copies of those used in a popular multitrack tape machine. He was not only able to reliably tell the difference, but could even correctly tell me which tape machine model it resembled."
Best regards
Peter