Which of the following describes the change in stimulus strength required to detect a difference between the stimuli?

Vision, Psychology of

L.O. HarveyJr., in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.5 Psychometric Function

The most immediate consequence of noisy internal representations is that there is no single stimulus intensity above which one will always see a stimulus and below which one will not see it. Rather, as stimulus intensity is increased, the probability that an observer will say that the stimulus is visible increases. This relationship between the probability of a response and the stimulus intensity is called a psychometric function. Psychometric functions are S-shaped (ogive) when stimulus intensity is plotted on a logarithmic axis, as is illustrated at the right side of Fig. 2.
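The ogive can be made concrete with a minimal sketch (not from the article itself) of one common parametric choice: a logistic function of log intensity. The parameter names alpha (threshold), beta (slope), and gamma (guess-rate floor) are illustrative assumptions, not the author's notation.

```python
import math

def psychometric(log_intensity, alpha, beta, gamma=0.0):
    """Probability of a 'yes' response as a function of log stimulus intensity.

    A logistic ogive is one common choice; alpha is the midpoint (threshold)
    on the log axis, beta controls the slope, gamma is a guess-rate floor.
    """
    core = 1.0 / (1.0 + math.exp(-beta * (log_intensity - alpha)))
    return gamma + (1.0 - gamma) * core

# P('yes') rises smoothly from ~0 to ~1 around the threshold alpha = 1.0
for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"log I = {x:.1f}  P(yes) = {psychometric(x, alpha=1.0, beta=4.0):.3f}")
```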

Figure 2. Five psychometric functions from the three authors of the classic 1942 experiment by Hecht, Shlaer, and Pirenne. The smooth functions on the left are the psychometric functions predicted by assuming that internal representations of the stimuli are Poisson probability distributions and that the observer needs a fixed number of quanta or more in order to say ‘yes’

The psychometric function is a powerful tool for inferring the properties of the sensory process and the internal representation. In 1942, Hecht, Shlaer, and Pirenne sought to answer the question ‘how many quanta have to be absorbed by rod receptors in order to say “yes, I saw the stimulus”?’ The processes of generating and absorbing quanta are described by Poisson probability distributions, the theoretical Poisson psychometric functions have shapes that depend on the number of quanta required to ‘see’ (smooth curves on the left side of Fig. 2), and only a fraction of the quanta striking the cornea of the eye are absorbed by photoreceptors (the quantum efficiency). Both the number of quanta required for seeing and the quantum efficiency may therefore be estimated by finding the theoretical psychometric function that best fits the observed psychometric functions. In Fig. 2, the data of Hecht, Shlaer, and Pirenne have been shifted to the left by an amount corresponding to the quantum efficiency of each observer so that the data coincide as well as possible with a Poisson psychometric function. Each set of data fits extremely well with one, and only one, of the theoretical Poisson functions. The conclusion is that from four to ten quanta are needed, and that the quantum efficiency is about 5 percent.
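A small sketch of this Poisson reasoning, assuming the observer says ‘yes’ whenever k or more quanta are absorbed and that absorption follows a Poisson distribution with mean equal to quantum efficiency times the flash intensity. The illustrative values (a criterion of 6 quanta, 5% efficiency) fall within the ranges the text reports but are otherwise our own choices.

```python
from math import exp

def poisson_psychometric(mean_quanta_at_cornea, quantum_efficiency, k):
    """P(observer says 'yes') = P(at least k quanta absorbed), with the
    number absorbed Poisson-distributed with mean q * I."""
    lam = quantum_efficiency * mean_quanta_at_cornea
    # P(N >= k) = 1 - sum_{n < k} e^-lam * lam^n / n!
    p, term = 0.0, exp(-lam)
    for n in range(k):
        p += term
        term *= lam / (n + 1)
    return 1.0 - p

# Probability of 'seeing' rises with flash intensity (quanta at the cornea)
for flash in [50, 100, 150, 200, 300]:
    print(flash, round(poisson_psychometric(flash, 0.05, 6), 3))
```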

How one interprets the probability distributions depends on the model of the detection process one adopts. All models of detection have a sensory process and a decision process. One widely held, but now thoroughly discredited, model of the sensory process is the high threshold model, in which the sensory process has a threshold that must be exceeded by the stimulus before the sensory process generates an internal representation of the stimulus. With such a model, the psychometric function represents the integral of the underlying probability distribution of the fluctuating threshold values.

The decision process of the high threshold model says ‘yes’ on test trials where an internal representation was generated and, on trials where no representation was generated, says ‘yes’ by guessing some of the time. In order to estimate the guessing rate, experimenters introduced two types of detection trials: those in which a stimulus was presented (signal trials) and those without any stimulus (blank trials). Performance on such a detection task requires two measures to characterize it fully: the hit rate (the probability of saying ‘yes’ when the signal is present) and the false alarm rate (the probability of saying ‘yes’ when the signal is absent). The relationship between the hit rate and the false alarm rate as the decision process changes its decision strategy is called the receiver operating characteristic (ROC). The high threshold model predicts that the ROC will be a straight line, illustrated in the ROC insert in the upper panel of Fig. 3. Hit rate–false alarm rate pairs from actual experiments in which decision strategy is manipulated form a bow-shaped ROC, shown in the upper panel of Fig. 3 by the nine filled circles. The fact that the high threshold model fails to predict the correct ROC is one reason it has been rejected as a viable model of the sensory process.
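The straight-line prediction follows from the guessing correction: if the sensory process detects with probability p_detect and the observer guesses ‘yes’ on some fraction of the remaining trials, eliminating the guess rate leaves a line from (0, p_detect) to (1, 1). A minimal sketch with hypothetical names:

```python
def high_threshold_roc(p_detect, false_alarm_rate):
    """High threshold model: hits come from true detections (probability
    p_detect) plus guesses on the remaining trials; false alarms are pure
    guesses. Eliminating the guess rate gives a straight-line ROC."""
    return p_detect + (1.0 - p_detect) * false_alarm_rate

# The predicted ROC is linear from (0, p_detect) to (1, 1):
for fa in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(fa, high_threshold_roc(0.6, fa))
```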

Figure 3. Three variations of the dual-Gaussian, variable-criterion signal detection model. The Gaussian distributions are the internal representations generated by the sensory process. The vertical lines are the decision criteria used by the decision process to generate responses. Upper panel: single criterion generates two detection responses. The insert shows the receiver operating characteristic (ROC). Middle panel: four decision criteria generate five responses expressing confidence that the signal was present. Lower panel: four decision criteria and five different stimuli

The widely accepted replacement for the high threshold model is the signal detection model. The sensory process of this model has no sensory threshold and is always generating an output even with no stimulus: any nonzero stimulus adds to this output. The decision process uses one or more decision criteria to decide what response to generate. The ROC predicted by the dual-Gaussian, variable-criterion signal detection model is plotted as the smooth curved line in the insert in the upper panel of Fig. 3 and provides an excellent fit to the data.
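A compact sketch of how the dual-Gaussian, variable-criterion model traces out a bowed ROC: with the noise and signal representations taken as unit-variance Gaussians, sweeping a single criterion generates (false alarm, hit) pairs. The parameter names are illustrative, not the chapter's notation.

```python
from math import erf, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def roc_point(d_prime, criterion):
    """Equal-variance Gaussian model: noise ~ N(0,1), signal ~ N(d',1).
    A 'yes' is given whenever the internal value exceeds the criterion."""
    hit = 1.0 - Phi(criterion - d_prime)
    fa = 1.0 - Phi(criterion)
    return fa, hit

# Sweeping the criterion traces out the bowed ROC for a fixed sensitivity
for c in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    fa, hit = roc_point(1.5, c)
    print(f"c={c:+.1f}  FA={fa:.3f}  hit={hit:.3f}")
```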

Three variations of the Gaussian model are illustrated in Fig. 3. Two measures of sensory process sensitivity are widely used: the distance between the mean of the no-signal distribution and the mean of the signal distribution, d_a, and the area under the ROC generated by the two distributions, A_z. The model in the bottom panel of Fig. 3 describes detection data from an experiment in which on each trial an observer was presented with one of five visual targets, each of different intensity, or no stimulus at all. The observer rated their confidence that a visual target had been presented. The ROCs for detecting these stimuli are shown in the inserts on the graphs of Fig. 4. As stimulus intensity increases, the ROC for that target becomes increasingly bowed. The psychometric function of the sensory process is formed using either d_a or A_z (Fig. 4), since in the model they represent the sensitivity of the sensory process to the stimuli. The variability contributed by the decision process to actual performance is effectively removed.
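For the equal-variance case, the area measure can be computed directly from the distance measure via the binormal relation A_z = Φ(d_a/√2); a short sketch under that assumption:

```python
from math import erf, sqrt

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def area_under_roc(d_a):
    """For Gaussian representations with equal unit variances, the area
    under the ROC is A_z = Phi(d_a / sqrt(2)); it runs from 0.5 (no
    sensitivity) toward 1.0 as the distributions separate."""
    return Phi(d_a / sqrt(2.0))

for d in [0.0, 0.5, 1.0, 2.0, 3.0]:
    print(f"d_a={d:.1f}  A_z={area_under_roc(d):.3f}")
```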

Figure 4. Two forms of the psychometric function of the sensory process derived from the signal detection model. The upper panel shows that stimulus detectability increases as a linear function of log stimulus intensity. The lower panel shows that the classic S-shaped psychometric function is obtained when the area under the ROC is used to characterize the sensory process. The inserts show the ROC for each stimulus intensity

In the past 20 years, the application of the signal detection model of noisy internal representations has expanded well beyond its humble beginnings as a model of the sensory and decision processes of vision and audition. Fields in which this model has been shown to provide extremely good descriptions of observed data include recognition memory, medical diagnosis, weather forecasting, lie detection, clinical evaluation, drug detection, parole decisions, and computer-guided decision making.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767014686

Sensation Seeking: Behavioral Expressions and Biosocial Bases

M. Zuckerman, in International Encyclopedia of the Social & Behavioral Sciences, 2001

6 Psychophysiology

Differences in the psychophysiological responses of the brain and autonomic nervous system as a function of stimulus intensity and novelty have been found and generally replicated (Zuckerman 1990). The heart rate response reflecting orienting to moderately intense and novel stimuli is stronger in high sensation seekers than in lows, perhaps reflecting their interest in novel stimuli (experience seeking) and disinterest in repeated stimuli (boredom susceptibility).

The cortical evoked potential (EP) reflects the magnitude of the brain cortex response to stimuli. Augmenting–reducing is a measure of the relationship between the amplitude of the EP and the intensity of the stimulus. A high positive slope (augmenting) is characteristic of high sensation seekers (primarily those of the disinhibition type), whereas very low slopes, sometimes reflecting a reduction of response at the highest stimulus intensities (reducing), are found primarily in low sensation seekers. These EP augmenting–reducing differences have been related to differences in behavioral control in individual cats and strains of rats analogous to sensation seeking behavior in humans (Siegel and Driscoll 1996).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767017721

Mathematical Psychology

J.-C. Falmagne, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3.3 Signal Detection Theory

Response strategies are often available to a subject in a psychophysical experiment. Consider a situation in which the subject must detect a low intensity stimulus presented over a background noise. On some trials, just the background noise is presented. The subject may have a bias to respond ‘YES’ on some trials even though no clear detection occurred. This phenomenon prevents a straightforward analysis of the data because some successful ‘YES’ responses may be due to lucky guesses. A number of ‘signal detection’ theories have been designed for parsing out the subject's response strategy from the data. The key idea is to manipulate the subject's strategy by systematically varying the payoff matrix, that is, the system of rewards and penalties given to the subject for his or her responses. These fall into four categories: correct detection or ‘hit’; correct rejection; incorrect detection or ‘false alarm’; and incorrect rejection or ‘miss.’ An example of a payoff matrix is displayed in Fig. 2. (Thus, the subject collects four monetary units in the case of a correct detection, and loses one such unit in the case of a false alarm.)

Figure 2. An example of a payoff matrix. The subject collects 4 monetary units in the case of a correct detection (or hit).

For any payoff matrix θ, we denote by ps(θ) and pn(θ) the probabilities of a correct detection and of a false alarm, respectively. Varying the payoff matrix θ over conditions yields estimates of points (pn(θ), ps(θ)) in the unit square. It is assumed that (except for experimental errors) these points lie on a ROC (Receiver Operating Characteristic) curve representing the graph of some ROC function ρ: pn(θ) ↦ ps(θ). The function ρ is typically assumed to be continuous and increasing. The basic notion is that the subject's strategy varies along the ROC curve, while the discriminating ability varies across these curves. The following basic random variable model illustrates this interpretation. Suppose that to each stimulus s is attached a random variable Us representing the effect of the stimulus on the subject's sensory system. Similarly, let Un be a random variable representing the effect of the noise on that system. The random variables Us and Un are assumed to be independent. We also suppose that the subject responds ‘YES’ whenever some threshold λθ (depending on the payoff matrix θ) is exceeded. We obtain the two equations

(8)  ps(θ) = P(Us > λθ),  pn(θ) = P(Un > λθ)

In this model, however, the combined effects of detection ability and strategy on the subject's performance can be disentangled. Under some general continuity and monotonicity conditions, and because Us and Un are independent, we get

(9)  P(Us > Un) = ∫−∞∞ P(Us > λ) dP(Un ≤ λ) = ∫01 ρ(p) dp

with ρ the ROC function, after changing the variable from λ to pn(λ) = p. Thus, for a fixed pair (s, n), the area under the ROC curve, which does not depend on the subject's strategy, is a measure of the probability that Us exceeds Un. Note that Eqn. (9) remains true under any arbitrary continuous strictly increasing transformation of the random variables. For practical reasons, specific hypotheses are often made on the distributions of these random variables, which are (in most cases) assumed to be Gaussian, with expectations μs = E(Us) and μn = E(Un), and a common variance equal to 1. Replotting the ROC curves in (standard) normal–normal coordinates, we see that each replotted ROC curve is a straight line with a slope equal to 1 and an intercept equal to μs − μn.
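A quick numerical check of that last claim, assuming unit-variance Gaussians as in the text; z(·) denotes the standard normal coordinate (inverse CDF), and the means are illustrative values of our own:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf              # probability -> z coordinate
mu_s, mu_n = 1.2, 0.0                 # assumed illustrative means
Us, Un = NormalDist(mu_s, 1.0), NormalDist(mu_n, 1.0)

# For any criterion, z(ps) - z(pn) equals mu_s - mu_n: the replotted
# ROC is a straight line with slope 1 and intercept mu_s - mu_n.
for lam in [-0.5, 0.0, 0.5, 1.0, 1.5]:
    ps = 1.0 - Us.cdf(lam)            # hit rate at this criterion
    pn = 1.0 - Un.cdf(lam)            # false alarm rate
    print(f"lam={lam:+.1f}  z(ps)-z(pn)={z(ps) - z(pn):.3f}")
```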

Obviously, this model is closely related to Thurstone's Law of Comparative Judgements. Using derivations similar to those leading to Eqns. (5) and (6) and defining d′(s, n) = μs − μn, we obtain P(Us > Un) = Φ(d′(s, n)/√2), an equation linking the basic signal detectability index d′ and the area under the ROC curve. The index d′ has become a standard tool not only in sensory psychology, but also in other fields where the paradigm is suitable and the subject's guessing strategy is of concern. Multidimensional versions of the Gaussian signal detection model have been developed. Various other models have also been considered for such data, involving either different assumptions on the distributions of the random variables Us and Un, or even completely different models (such as ‘threshold’ models). Presentations of this topic can be found in Green and Swets (1974), still a useful reference, and Macmillan and Creelman (1991) (see Signal Detection Theory).
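The link between the area under the ROC and d′ can be verified numerically; the sketch below integrates the ROC function of Eqn. (9) by the midpoint rule and compares the result with Φ(d′/√2), using assumed illustrative means:

```python
from statistics import NormalDist

mu_s, mu_n = 1.2, 0.0                  # assumed illustrative means
Us, Un = NormalDist(mu_s, 1.0), NormalDist(mu_n, 1.0)

# Left side of Eqn. (9): for independent unit-variance Gaussians,
# P(Us > Un) = Phi((mu_s - mu_n) / sqrt(2)) = Phi(d' / sqrt(2)).
direct = NormalDist().cdf((mu_s - mu_n) / 2 ** 0.5)

# Right side: integrate rho(p) = P(Us > lambda(p)) over p in (0, 1),
# where lambda(p) is the criterion giving false alarm rate p.
n = 20_000
area = 0.0
for i in range(n):
    p = (i + 0.5) / n                  # midpoint rule
    lam = Un.inv_cdf(1.0 - p)          # P(Un > lam) = p
    area += (1.0 - Us.cdf(lam)) / n

print(f"Phi(d'/sqrt(2)) = {direct:.4f}, area under ROC = {area:.4f}")
```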

Mathematical models for ‘multidimensional’ psychophysics were also developed in the guise of Geometric representations of perceptual phenomena, which is the title of a recent volume on the topic (Luce et al. 1995; see in particular Indow's chapter).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767005908

Advances in Imaging and Electron Physics

Zofia Barańczuk, ... Peter Zolliker, in Advances in Imaging and Electron Physics, 2010

2.1 Human Vision

First, color is not a physical parameter but a sensation (e.g., see Schmidt et al., 2005; Sekuler and Blake, 2002; Wandell, 1994). Deriving a quantitative color specification requires correlating the intensity of a perceived light stimulus with the magnitude of its sensation. Some aspects, in particular lightness, follow general principles, such as the Weber–Fechner law or Stevens's power function, but the most interesting one, namely chroma, does not. Additionally, human sensation depends strongly on the actual viewing conditions. For that reason, it is not surprising that the development of device-independent color spaces is a tedious process that cannot be considered complete. However, before going into detail we briefly discuss the retina and its influence on vision.

The retina includes several layers of neural cells, most notably the photoreceptors: the rods and cones. The rods contribute mostly to vision at low luminance levels (i.e., less than 1 cd/m²), whereas the cones serve vision at higher levels. The ability of photoreceptor cells to adapt their visual sensitivity to the luminance level of the considered scene is called dark adaptation. Roughly speaking, humans differentiate between day and night, which may involve luminance ratios of a factor of 10,000 or more. Consequently, color coordinates usually do not involve physical dimensions. In fact, human vision discriminates relative differences in stimulus intensity. Because there is a lower bound of 1% to 2% for this ability, humans can distinguish only between 50 and 100 grey levels in one scene. Hence, an 8-bit encoding of color coordinates is sufficient.
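One way to read the arithmetic behind the 50–100 level claim (an assumption about the authors' reasoning, since the text does not spell it out): with a relative discrimination limit of 1–2%, roughly 1/0.02 = 50 to 1/0.01 = 100 just-distinguishable levels fit in a scene, comfortably under the 256 codes of an 8-bit encoding.

```python
# Crude estimate: if adjacent grey levels must differ by a fixed relative
# step to be distinguishable, about 1/step levels fit in one scene.
for weber in (0.01, 0.02):
    levels = 1.0 / weber
    print(f"{weber:.0%} threshold -> ~{levels:.0f} levels (8-bit codes: 256)")
```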

URL: https://www.sciencedirect.com/science/article/pii/S1076567010600018

Neural Plasticity of Spinal Reflexes

M.M. Patterson, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3.1 Sensitization and Habituation

Researchers have long recognized that spinal reflexes could show some activity alterations. Sherrington (1906) described spinal fatigue or decreases in spinal reflex activity secondary to repeated activation in spinalized animals. The decrease in spinal function was transient, however, and recovered soon after stimulus cessation. The converse process was also recognized; with more intense stimulus inputs, the reflex response could increase, rapidly returning to baseline after the stimulus was discontinued. Groves and Thompson (1970) summarized much of this work and proposed a neural model. With repetitive inputs to the spinal circuits, relatively low intensity stimuli would activate ‘H-type’ interneurons that decreased their activity, resulting in decreased output. Stronger stimuli would also activate an additional circuit whose interneuron activity would increase, resulting in increased output. The resultant behavior would be an algebraic summation of the two output streams and could be decreasing behavior (habituation), no change, or increasing behavior (sensitization).

A variant of sensitization studied for many years is termed spinal fixation (Patterson 1976, 2001). A strong stimulus given to a spinalized animal for 15–45 minutes will produce an alteration of spinal postures that is retained for weeks. This nonassociative behavioral alteration has been shown to be stimulus intensity dependent and to occur as a neural excitability alteration in the interneuron circuits of the cord. It appears to be a longer-term variant of sensitization that occurs with high-level nociceptive stimuli. Its behavioral consequences remain obscure, but it may be a process underlying some forms of chronic pain syndromes.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767036238

Delayed Interpretation, Shallow Processing and Constructions: the Basis of the “Interpret Whenever Possible” Principle

Philippe Blache, in Cognitive Approach to Natural Language Processing, 2017

1.2 Delayed processing

Different types of delaying effects can occur during language processing. For example, at the brain level, it has been shown that language processing may be affected by the presentation rate of the input. This phenomenon has been investigated in [VAG 12], which claims that when the presentation rate increases and becomes faster than the processing speed, intelligibility can collapse. This is because the language network seems to operate on a constant timescale: the authors show that cortical processing speed is tightly constrained and cannot easily be accelerated. As a result, when the presentation rate increases while the processing speed remains constant, a blocking situation can suddenly occur. Concretely, this means that when the presentation rate is accelerated, part of the input stream has to be buffered. Experiments show that the rate can be accelerated by up to 40% before intelligibility collapses. This situation occurs when the buffer becomes saturated, and it is revealed at the cortical level by a sudden drop in the activation of the higher-order language areas (which are said to reflect intelligibility [FRI 10]), showing that the input signal has become unintelligible.

This model suggests that words can be processed immediately when presented at a slow rate, in which case the processing speed is that of the sensory system. However, when the rate increases and words are presented more rapidly, the processing speed limit is reached and words cannot be processed in real time anymore. In such a situation, words have to be stored in a buffer, from which they are retrieved in a first-in-first-out manner, when cognitive resources become available again. When the presentation rate is higher than the processing speed, the number of words to be stored increases. A lock occurs when the maximal capacity of the buffer is reached, entailing a collapse of intelligibility.
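A toy first-in-first-out buffer, not the model of [VAG 12] itself, illustrates the lock: with a fixed processing rate, any presentation rate above it grows the backlog linearly until capacity is exceeded. All rates and the capacity below are invented for illustration.

```python
def simulate_buffer(rate_in, rate_proc, capacity, steps):
    """Toy first-in-first-out buffer: words arrive at rate_in and are
    consumed at the fixed processing rate rate_proc (words per second).
    Intelligibility 'collapses' once the backlog exceeds the capacity."""
    backlog = 0.0
    for t in range(steps):
        backlog = max(0.0, backlog + rate_in - rate_proc)
        if backlog > capacity:
            return f"buffer saturated at t={t + 1}s (backlog={backlog:.1f})"
    return f"kept up (final backlog={backlog:.1f})"

# Processing speed fixed at 4 words/s; presentation accelerated past it
for rate in [3.5, 4.0, 5.0, 6.5]:
    print(f"{rate} words/s:", simulate_buffer(rate, 4.0, capacity=10, steps=60))
```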

Besides this buffering mechanism, other cues indicate that the input is probably not processed linearly, word by word, but rather only from time to time. This conception means that even in normal cases (i.e., without any intelligibility issue), interpretation is only done periodically, the basic units being stored before being processed. Several studies have investigated such a phenomenon. At the cortical level, the analysis of stimulus intensity fluctuation reveals the presence of specific activity (spectral peaks) after phrases and sentences [DIN 16]. The same type of effect can also be found in eye movements during reading: longer fixations are observed when reading words that end a phrase or a sentence. This wrap-up effect [WAR 09], together with the presence of different timescales at the cortical level described above, constitutes a cue in favor of a delaying mechanism in which basic elements are stored temporarily and an integration operation is triggered when enough material becomes available for the interpretation.

Figure 1.1. Illustration of the bottleneck situation, when the presentation rate exceeds the processing speed

(reproduced from [VAG 12])

At the semantic level, other evidence also shows that language processing, or at least language interpretation, is not strictly incremental. Interesting experiments have been performed which reveal that language comprehension can remain very superficial: [ROM 13] showed that, in an idiomatic context, access to the meaning of words can be completely switched off, replaced by global access at the level of the idiom. This effect has been shown at the cortical level: when a semantic violation is introduced within an idiom, there is no difference between hard and soft semantic violations (which is not the case in a comparable non-idiomatic context); in some cases, processing a word does not mean integrating it into a structure. Rather, in this situation there is a simple shallow process of scanning the word, without doing any interpretation. The same type of observation has been made in reading studies: depending on the task (e.g., when only very simple comprehension questions are expected), the reader may apply a superficial treatment [SWE 08]. This effect is revealed by the fact that ambiguous sentences are read faster, meaning that no resolution is done and the semantic representation remains underspecified. Such variation in the level of processing thus depends on the context: when the pragmatic and semantic context carries enough information, the interpretation is predictable and the complete processing mechanism becomes unnecessary. At the attentional level, this observation is confirmed in [AST 09], which shows that the allocation of attentional resources to certain time windows depends on their predictability: minimal attention is allocated when information is predictable and, on the contrary, maximal attention is involved in case of mismatch with expectations. The same type of variation is observed when the listener adapts their perceptual strategy to the speaker, applying perceptual accommodation [MAG 07].

These observations are in line with the good-enough theory [FER 07] for which the interpretation of complex material is often considered to be shallow and incomplete. This model suggests that interpretation is only done from time to time, on the basis of a small number of adjacent words, and delaying the global interpretation until enough material becomes available. This framework and the evidence on which it relies also reinforce the idea that language processing is generally not linear and word-by-word. On the contrary, it can be very shallow and delayed when necessary.

URL: https://www.sciencedirect.com/science/article/pii/B9781785482533500019

Stevens, Stanley Smith (1906–73)

R. Teghtsoonian, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Stevens's Psychophysical Power Law

Scholars familiar with his work could quickly agree in making a list of Stevens's major achievements, but could and do disagree about their relative importance and their lasting value. For some the single most important accomplishment was his discovery of the psychophysical power law that sometimes does and always should carry his name (Stevens 1957). He not only revived a view broached by some in the nineteenth century, but also created a body of evidence that firmly established what is sometimes called the Psychophysical Power Law and sometimes Stevens's Law. It concerns the relation between the strength of some form of energy, such as the sound pressure level of a tone, and the magnitude of the corresponding sensory experience, loudness in this example. It is easily discovered that sensation strength is nonlinearly related to stimulus intensity (two candles do not make a room seem twice as bright as one), but it is harder to say what that relation is. For over a century before Stevens approached the problem, the prevailing view, Fechner's Law, asserted that, as stimulus strength grew geometrically (by constant ratios), sensation strength increased arithmetically. This logarithmic relation had little supporting evidence but remained in many texts for lack of a good alternative. Stevens came to a different conclusion on the basis of data obtained by asking observers who were presented with stimuli varying in intensity to make numerical judgments of their subjective experience. In the case of loudness he found that judged loudness grew as a power function of sound pressure (with an exponent of about 0.6), not as the logarithmic relation predicted by Fechner (Stevens 1961). A similar relation was found to hold between judged brightness and luminance, and eventually, such a relation was found for dozens of other continua. In each case, as the amount or intensity of a stimulus grew by ratio steps, so too did the observer's estimate of his subjective experience of it. Stevens's Law seemed applicable to all intensive perceptual continua.
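A short sketch of the power relation itself; the 0.6 loudness exponent is the value cited in the text, while the constant k and the sample intensities are arbitrary:

```python
def stevens(intensity, k=1.0, exponent=0.6):
    """Stevens's Law: judged magnitude grows as a power of stimulus
    intensity, psi = k * I**exponent (0.6 is the loudness exponent
    for sound pressure cited in the text)."""
    return k * intensity ** exponent

# Ratio invariance: doubling the stimulus multiplies the judged magnitude
# by 2**0.6 ~ 1.52, regardless of the starting level.
for base in [1.0, 10.0, 100.0]:
    print(f"I={base:>5}: psi ratio = {stevens(2 * base) / stevens(base):.3f}")
```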

Equally interesting was the discovery that for each perceptual continuum, there was a distinctive value of the exponent relating numerical judgment to stimulus intensity. In hearing, for example, a huge range of sound pressures was compressed into a much smaller range of loudness, whereas for electric shock applied to the fingertips, a quite small range of currents between detectability and pain was expanded into a larger range of perceived intensities.

Although Stevens originally argued that the method of magnitude estimation provided a ‘direct measure’ of sensation (see Sensation and Perception: Direct Scaling) and that the power law described how energy in the environment was transformed through the relevant sensory apparatus into sensation, critics soon showed this assumption to be untenable. What remained was an empirical principle relating judgment to stimulus intensity.

Stevens went on to show that numerical magnitude estimation was a special case of a more general paradigm, cross-modal matching. If the observer is asked to match loudness, for example, by manipulating the luminance of a light source until its brightness matches the level of the target loudness, the power relation is again the result. And, neatly enough, the exponent of that function is exactly predictable by the exponents identified through magnitude estimation of brightness and loudness alone (Stevens 1969). It is this ability of observers to match perceived magnitudes across many different perceptual attributes, and the discovery that in all cases the result is described by a power function, that are the two pillars on which Stevens's Law rests. Despite the many controversies over its meaning, this simple empirical principle stands securely on a mountain of evidence and must be accommodated by any theory of psychophysics.
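The prediction works by equating the two sensation magnitudes: if loudness grows as pressure**a and brightness as luminance**b, a cross-modal match implies luminance ~ pressure**(a/b), so the matching exponent follows from the two magnitude-estimation exponents alone. A sketch with a = 0.6 from the text and b = 0.33, a commonly cited brightness exponent assumed here for illustration:

```python
# Cross-modal matching: setting k1 * pressure**a == k2 * luminance**b
# and solving for luminance gives luminance ~ pressure**(a / b), so the
# matching exponent is the ratio of the two estimation exponents.
a = 0.6    # loudness exponent (from the text)
b = 0.33   # brightness exponent (assumed illustrative value)
print(f"predicted cross-modal matching exponent: {a / b:.2f}")
```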

URL: https://www.sciencedirect.com/science/article/pii/B0080430767003417

Context effects at the level of the sip and bite

Armand V. Cardello, in Context, 2019

3.7 Temporal contextual effects in chemosensory perception

All of the contextual and interaction effects outlined above involve the effect of one sensory stimulus or attribute on another when the stimuli/attributes occur simultaneously. However, context effects also occur when stimuli are temporally separated. For example, the same sensory stimulus, when presented following a more intense stimulus, or within a series of higher-intensity stimuli, will be perceived as less intense than when that same stimulus is presented following, or within, a series of less intense stimuli. This is commonly referred to as “contextual contrast,” that is, the standard stimulus is contrasted with the more or less intense stimuli in the temporal series. Such contrast effects are commonly reported with taste stimuli (Conner, Land, & Booth, 1987; Hallowell, Parikh, Veldhuizen, & Marks, 2016; Lawless, 1983; Lawless, Horne, & Spiers, 2000; Lee & O'Mahony, 2007; Marks, Shepard, Burger, & Chakwin, 2012; Rankin & Marks, 1991; Riskey, 1982; Schifferstein & Frijters, 1992; Schifferstein & Oudejans, 1996), odor stimuli (Nakano & Ayabe-Kanamura, 2017; Pol, Hijman, Baaré, & van Ree, 1998), auditory stimuli (Marks & Warner, 1991; Arieh & Marks, 2003, 2011), and visual stimuli (Arieh & Marks, 2002). In addition, this effect can occur with extremely long intervals between the standard and contextual stimuli, that is, up to 25 minutes in the case of odor (Pol et al., 1998), and even across test sessions in the case of sucrose intensity (Vollmecke, 1987), capsaicin burn (Stevenson & Prescott, 1994), and the flavor of raspberry cordials (Walter & Boakes, 2009). A related contrast effect, known as condensation, has also been demonstrated, in which less intense stimuli within a context of more intense stimuli are less discriminable from one another (Parker, Murphy, & Schneider, 2002; Ward, Armstrong, & Golestani, 1996).

Contrast effects are often explained in terms of adaptation-level theory (Helson, 1964), which postulates that each stimulus in a series is compared relative to an average of all of the stimuli in the contextual range. Thus, a control stimulus presented within a range of high-intensity stimuli will be judged lower than when the same stimulus is presented within a range of lower-intensity stimuli. Alternatively, contrast effects can be explained through range-frequency theory (Parducci, 1963, 1983), which postulates that such effects result from a combination of subjects dividing the entire stimulus range into a finite set of equal subjective intervals (range principle), and subjects assigning the same number of stimuli to each perceptual category (frequency principle). However, contrast effects are not the only type of temporal contextual effects that have been reported. In certain cases, assimilation has been reported, that is, the standard stimulus is perceived as being more intense when presented within a series of greater intensity stimuli than when presented within a series of less intense stimuli. Assimilation effects are relatively rare in taste, but have been reported (Schifferstein & Kuiper, 1997). However, assimilation effects are common in other sensory modalities, such as in hearing and vision (Stewart, Brown, & Chater, 2005; Ward, 1979, 1982, 1985). Assimilation effects can be explained using integration theory (Anderson, 1981), which states that the rating for any stimulus is a weighted average of its own value, and the value of all other stimuli in its contextual range.
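A toy sketch contrasting the two accounts (simplified far beyond the cited theories; the averaging rules, weights, and stimulus values are illustrative assumptions): a judgment made relative to the context mean, in the spirit of adaptation-level theory, yields contrast, while a weighted average of the stimulus's own value and the context, in the spirit of integration theory, yields assimilation.

```python
def adaptation_level_judgment(stimulus, context):
    """Adaptation-level flavor: the stimulus is judged relative to the
    mean of the contextual series, so a high context pushes the judgment
    of a fixed standard down (contrast)."""
    level = sum(context) / len(context)
    return stimulus - level

def integration_judgment(stimulus, context, self_weight=0.7):
    """Integration-theory flavor: the rating is a weighted average of the
    stimulus's own value and the context mean, pulling the judgment
    toward the context (assimilation)."""
    level = sum(context) / len(context)
    return self_weight * stimulus + (1 - self_weight) * level

std = 5.0
low, high = [2, 3, 4], [6, 7, 8]
print("contrast:", adaptation_level_judgment(std, low), adaptation_level_judgment(std, high))
print("assimilation:", integration_judgment(std, low), integration_judgment(std, high))
```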

One important characteristic that seems to influence whether a context effect occurs, and whether it takes the form of contrast or assimilation, is the similarity between the standard stimulus and the contextual stimuli (Coren & Enns, 1993; Marks & Warner, 1991; Rankin & Marks, 1991). To the extent that the standard stimulus is perceived to be similar to, a part of, or within the same category as the contextual series, a contrast effect is more likely to occur (Marks & Warner, 1991; Schifferstein, 1995a, 1995b; Schifferstein & Oudejans, 1996). To the extent that the standard stimulus is different from, not perceived as part of, or in a different category than the contextual stimuli, this effect is less likely to occur.

Before concluding this section, it is worth mentioning another form of temporal contextual effect. This is the phenomenon of perceptual priming. For example, in the case of odors, it has been shown that presenting a series of food odors to subjects will predispose the subjects to more quickly perceive those same odors as being associated with foods (Koenig, Bourron, & Royet, 2000). This effect is presumed to occur by the first stimulus activating neural patterns that are stored in neural memory and are then more readily activated when the same stimulus is presented a second time. The interested reader is referred to Smeets and Dijksterhuis (2014) and Dijksterhuis (2016) for a detailed discussion of multisensory flavor priming.

URL: https://www.sciencedirect.com/science/article/pii/B9780128144954000039

Visual Perception, Neural Basis of

O. Braddick, in International Encyclopedia of the Social & Behavioral Sciences, 2001

6 Neural Basis of Visual Sensitivity and Thresholds

Much detailed knowledge of vision comes from the analysis of psychophysical thresholds—measures of how sensitive the visual system is in making fine discriminations. If these data are to be integrated with knowledge from neurophysiology, the relation between psychophysical threshold and patterns of neural activity needs to be understood.

‘Threshold’ conveys the suggestion of a stimulus intensity below which no neural activity is elicited. However, in practice, neurons are rarely silent, and their activity, even with a fixed stimulus, shows variability or ‘noise.’ Thus, the ability to detect weak stimuli is determined by the statistical problem of distinguishing a stimulus-driven change in activity from changes due to noise.

Several studies have determined the ‘thresholds’ of single neurons in monkey visual cortex, in terms of the stimulus level which achieves a statistically defined level of reliable response. These can be compared with behavioral performance in detection; in some cases behavioral and neural measures can be made simultaneously. The information carried by single neurons proves to be remarkably close to that implied by the behavioral responses. Remarkable, because the animal has the opportunity to pool the information from thousands of neurons, which would be expected to yield much more reliable information. The data can be reconciled by realizing that although large numbers of neurons may be involved, the random variations in their activity are correlated, so the number of genuinely independent neural signals is much smaller (Newsome et al. 1995).
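The reconciliation rests on a standard calculation: averaging n neurons whose noise shares a pairwise correlation rho reduces variance only as much as pooling n_eff = n / (1 + (n − 1)·rho) independent neurons would. A sketch with rho = 0.2 chosen purely for illustration:

```python
def effective_independent_neurons(n, rho):
    """With pairwise correlation rho among n neurons, the variance of the
    pooled (averaged) signal is sigma^2 * (1 + (n - 1) * rho) / n, which
    matches an average of n_eff independent neurons with
    n_eff = n / (1 + (n - 1) * rho)."""
    return n / (1.0 + (n - 1) * rho)

# Even weak correlations cap the benefit of pooling: as n grows,
# n_eff saturates near 1/rho.
for n in [10, 100, 1000, 10000]:
    print(f"n={n:>5}  n_eff={effective_independent_neurons(n, 0.2):.1f}")
```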

Whether a particular judgment depends on pooling activity in tens, hundreds, or thousands of neurons, there must be many more neurons in the same brain area whose activity is irrelevant because they do not respond selectively to the stimulus concerned. If activity of such neurons was included in the pooling process, it would introduce noise and degrade sensitivity. Thus sensitive discrimination, either in the laboratory or in real-life situations, must depend on selecting a small minority of neurons whose output is informative for the task in hand.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767034963

Evoked Potentials

Leif Sörnmo, Pablo Laguna, in Bioelectrical Signal Processing in Cardiac and Neurological Applications, 2005

4.1.1 Auditory Evoked Potentials

Auditory EPs are generated in response to auditory stimulation, usually produced by a short sound wave. This type of evoked response reflects how neural information propagates from the acoustic nerve in the ear to the cortex. The response can be divided into three different intervals according to latency: the brainstem response, constituting the earliest part, followed by the middle and late cortical responses. Brainstem auditory evoked potentials (BAEPs) have primarily been used for the evaluation of different types of hearing loss (“audiometry”), diagnosis of certain brainstem disorders, and intraoperative monitoring in order to prevent neurological damage during surgery [6].

The waveform characteristics of the middle latency AEP are useful for monitoring the depth of anesthesia during surgery [7, 8]. Since a change in concentration of the anesthetic dose has been found to be closely related to latency, appropriate anesthetic depth can be maintained by continuously tracking changes in latency.

Recording setup.

Auditory EPs are elicited by a short-duration click sound delivered to the subject through a conventional set of stereo headphones. One ear is stimulated at a time, while the other ear is masked with bandlimited noise (“pink noise”). The click sound is usually produced by a 0.1-ms square wave pulse, with a repetition rate of 8–10 clicks per second. The stimulus intensity is commonly defined in units of peak equivalent sound pressure level and can vary between 40 and 120 decibels (dB) [9]; the dB scale is logarithmic, with 0 dB defined as a sound pressure of 20 μPa.
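For reference, the dB scale mentioned here is the usual sound pressure level definition, 20·log10(p/20 μPa); a small sketch converting between the two (the function names are our own):

```python
import math

P_REF = 20e-6  # 0 dB reference: 20 micropascals

def db_spl(pressure_pa):
    """Sound pressure level in dB re 20 uPa (logarithmic scale)."""
    return 20.0 * math.log10(pressure_pa / P_REF)

def pressure_from_db(level_db):
    """Inverse conversion: pressure in pascals from a dB SPL value."""
    return P_REF * 10 ** (level_db / 20.0)

# The 40-120 dB stimulus range spans four orders of magnitude in pressure
for level in [40, 80, 120]:
    print(f"{level} dB SPL = {pressure_from_db(level):.3g} Pa")
```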

Auditory EPs are usually recorded by placing electrodes behind the left and right ear and at the vertex. The placement is identical to that used in EEG recordings, i.e., the standardized 10/20 electrode placement system described in Section 2.3.

Waveform characteristics.

The three parts of the AEP exhibit considerable differences in signal properties. The BAEP has a very low amplitude, ranging from 0.1 to 0.5 μV, and occurs from 2 to 12 ms after the stimulus. Due to its low amplitude, several thousand stimuli are required to achieve an acceptable noise level by averaging. The short duration of the BAEP implies that most of its spectral content is contained in the interval from 500 Hz to about 1.5 kHz [10, 11]. In a normal subject, the BAEP consists of up to seven waves, generated by various neural structures in the auditory pathways. By convention, these waves are labeled with Roman numerals; see Figure 4.3. The loss or reduction of individual waves provides clinically important information, as do absolute and interpeak latencies.
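The need for thousands of sweeps follows from ensemble averaging: uncorrelated background noise shrinks by √N while the stimulus-locked response does not. A rough sketch, where the 10 μV background figure is an assumed illustrative value, not from the text:

```python
import math

def sweeps_needed(noise_uv, signal_uv, target_snr):
    """Ensemble averaging of N stimulus-locked sweeps reduces uncorrelated
    background noise by sqrt(N), so the required N grows with the square
    of the desired SNR gain."""
    return math.ceil((target_snr * noise_uv / signal_uv) ** 2)

# A 0.3 uV BAEP buried in ~10 uV EEG background: thousands of sweeps
print(sweeps_needed(noise_uv=10.0, signal_uv=0.3, target_snr=2.0))
```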

Figure 4.3. Auditory evoked potentials. (a) Recording setup and (b) typical morphology of a brainstem auditory evoked potential.

The middle AEP occurs from 12 to 50 ms and is followed by the late response [6]. The amplitudes of these later components are considerably larger (1–20 μV) than those of the BAEP and increase with latency. Between 100 and 1000 stimuli are usually sufficient for adequate noise reduction. While the early brainstem response is quite reproducible from stimulus to stimulus, the middle and late responses can exhibit considerable variability in morphology.

URL: https://www.sciencedirect.com/science/article/pii/B9780124375529500040

What describes the change in stimulus strength required to detect a difference between the stimuli?

Sometimes, we are more interested in how much difference in stimuli is required to detect a difference between them. This is known as the just noticeable difference (JND) or difference threshold.

What must happen for a difference between two stimuli to be detected by an observer?

The amount of difference that can be detected depends on the size of the stimuli being compared. As stimuli get larger, differences must also become larger to be detected by an observer.
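This is Weber's law in miniature: the just noticeable difference is a roughly constant fraction of the standard stimulus. A small sketch with an illustrative Weber fraction of 2% (assumed, not from the text):

```python
def jnd(intensity, weber_fraction):
    """Weber's law: the just noticeable difference grows in proportion to
    the size of the standard stimulus, delta_I = k * I."""
    return weber_fraction * intensity

# With k = 0.02, a 100 g weight needs ~2 g added to feel different,
# while a 1000 g weight needs ~20 g.
for grams in [100, 500, 1000]:
    print(f"{grams} g -> JND ~ {jnd(grams, 0.02):.0f} g")
```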

What is it called when we are able to detect that a change has occurred in a stimulus?

Sensation is the process of detecting external stimuli and converting those stimuli into nervous system activity.

What describes the conversion of sensory stimulus energy into neural impulses that allow for perception?

The conversion from sensory stimulus energy to action potential is known as transduction. You have probably known since elementary school that we have five senses: vision, hearing (audition), smell (olfaction), taste (gustation), and touch (somatosensation).