J Exp Anal Behav. 2006 Jul; 86(1): 1–10.

Abstract

Twelve pigeons were exposed to negative automaintenance contingencies for 17–27 sessions immediately after brief (14–16 sessions) or extended (168–237 sessions) exposure to positive automaintenance contingencies, or after 4–10 sessions of instrumental training. In all conditions, negative automaintenance contingencies virtually eliminated responding, reducing response rates to an average of 1.3 responses per min. This reduction in response rate was validated by a model that assumed an exponential transition of response rates from one set of contingencies to the next. The model faithfully reproduced cumulative records, and yielded estimates of terminal rates under negative automaintenance that were close to operant level.

Keywords: Negative automaintenance, omission training, persistence, autoshaping, key peck, pigeons

Animals approach stimuli that indicate the availability of reinforcers (Thorndike, 1911). Hearst and Jenkins (1974) called this robust phenomenon sign tracking. It typically has been demonstrated using a preparation introduced by Brown and Jenkins (1968), who repeatedly presented food-deprived pigeons with an illuminated response key followed by brief access to food. This simple Pavlovian delay conditioning paradigm invariably elicited key pecking.

An extraordinary aspect of sign tracking is that it appears to persevere despite operant omission contingencies that discourage it. Williams and Williams (1969) modified the design so that food access was cancelled if a key peck occurred while the response key was illuminated. This modified version is known as negative automaintenance (NA), and is a type of omission training. In contradistinction, Brown and Jenkins's (1968) delay conditioning procedure has come to be called positive automaintenance (PA). Many researchers subsequently provided data suggesting that negative automaintenance, as the name suggests, maintains some—usually low—rate of unreinforced responding. Responding appears to be maintained not only in pigeons (Brownstein & Balsam, 1975; Deich & Wasserman, 1977; Griffin & Rashotte, 1973; Killeen, 2003; Schwartz & Williams, 1972; Wilkie, 1976; Woodard, Ballinger, & Bitterman, 1974), but also in rats (O'Connell, 1979), rabbits (Gormezano & Hiller, 1972), Japanese quail (Crawford & Domjan, 1993), and dogs (Sheffield, 1965).

Sign tracking in NA is a counterintuitive phenomenon: It is maintained not because of any explicit instrumental reinforcement contingency, but despite contingent cancellation of reinforcement. Woodard et al. (1974) argued that Pavlovian contingencies (key light–food pairings) operate when key pecks are absent, and thus maintain key pecking in NA. Herrnstein and Loveland (1972) and Hursh, Navarick, and Fantino (1974) provided an alternative account based on the conditional reinforcement properties of keylight offset.

In instrumental conditioning, both instrumental and Pavlovian contingencies may also operate concurrently but redundantly. Accordingly, NA, which eliminates the positive response–reinforcer contingency, has been used to study the interaction of Pavlovian and instrumental contingencies in instrumental conditioning (Locurto, 1981). Negative automaintenance also has been considered a candidate model for impulsive behavior (Monterosso & Ainslie, 1999) and drug addiction (Tomie, 1995). Omission training has been widely used to reduce undesirable behavior without resorting to punishment (e.g., Vollmer, Ringdahl, Roane, & Marcus, 1997).

Killeen (2003) suggested that the contingencies of reinforcement specified by NA provide an expedient “test bed” for theories of conditioning. In the absence of key pecks, NA is indistinguishable from PA and elicits key pecks. Persistent key pecking in NA, however, degrades the pairing of key light and food, discouraging further key pecking. Once key pecking is eliminated, the key light reemerges as a signal to be tracked, and this cycle of extinction and reconditioning may continue ad infinitum. The continual succession of learning and extinction cycles may be used to evaluate theories of learning and extinction in Pavlovian conditioning. The cycles may involve subtle differentiation of topography from on-key to off-key responding; or they may involve processes that are more general.

The generality of negative automaintenance has been challenged by reports of omission contingencies effectively eliminating the target response. Some of these results may be attributed to the particular species used (crows: Powell & Kelly, 1976; guinea pigs: Poling & Poling, 1978; humans: Pithers, 1985; squirrel monkeys: Gamzu & Schwam, 1974), but other maintenance failures have been reported with the same species in which sustained automaintained responding had been demonstrated (pigeons: Eldridge & Pear, 1987; Lucas, 1975; McSweeney, Swindell, & Weatherly, 1996; Powell & Kelly, 1976; rats: Locurto, Terrace & Gibbon, 1976; Tomie et al., 2003). One of the few explanations offered for these contradictory findings suggests that pretraining with intermittent instrumental reinforcement of key pecks—a frequent condition in experiments with pigeons, but rarely specified or controlled—interferes with the acquisition of behavior directed away from the response key under subsequent NA contingencies (Powell & Kelly, 1976; see also Dickinson, Squire, Varga, & Smith, 1998; for an alternative account, see Griffin & Rashotte, 1973). This may explain the persistent key pecking (8000+ trials) of Killeen's (2003) pigeons, which had extensive histories of reinforcement in his laboratory. Most studies that exposed experimentally naïve pigeons to a large number (roughly 600 or more) of NA trials eliminated responding, whereas those that used experienced pigeons often reported key-peck maintenance.

What is required to generate responding that persists despite the omission contingencies of NA? Is the exposure to an instrumental response–reinforcer relation necessary? If so, what length of exposure would be required? Would a very brief exposure suffice? Or is it possible that the preexposure to only one of the components of the instrumental relation—the pairing of response key and food—suffices to maintain responding under omission contingencies? We attempted to answer these questions by pretraining pigeons with brief and extended exposures to PA contingencies, and with a brief exposure to instrumental contingencies. NA training followed each pretraining condition. Response rates maintained exclusively by NA were then analytically extracted and evaluated separately. Because previous reports of key peck elimination due to NA contingencies indicate that some non-key peck responses oriented towards the key may be maintained by NA (Eldridge & Pear, 1987; Lucas, 1975), we also verified the topography of behavior maintained by NA contingencies.

Method

Subjects

Twelve experimentally naïve adult homing pigeons (Columba livia) served as subjects. The pigeons were housed individually in a room with a 12:12-hr day:night cycle, with the day cycle beginning at 0600 hr. They had free access to water and grit in their home cages. The pigeons' running weights were set at 80% of their free-feeding weights. Each pigeon was weighed immediately prior to an experimental session and was excluded from a session if its weight exceeded its running weight by more than 8%. When required, supplementary feeding of ACE-HI pigeon pellets (Star Milling Co.) was given at the end of each day, no less than 12 hr before experimental sessions were conducted. Supplementary feeding amounts were based 50% on a moving average of the amount fed over the last 15 days, and 50% on current deviations from the target running weight.

Apparatus

Experimental sessions were conducted in three MED Associates modular test chambers (305 mm long, 241 mm wide, and 292 mm high), each enclosed in a sound- and light-attenuating box equipped with a ventilating fan. The front and rear walls and the ceiling of the experimental chambers were made of clear plastic, and the front wall was hinged and functioned as a door to the chamber. The two side panels were aluminum, and the floor consisted of thin metal bars positioned above a drip pan. A transparent plastic response key (25 mm in diameter) was centered horizontally on an intelligence panel, which formed one side of the chamber. The response key was located 70 mm from the ceiling. The key could be illuminated by white light emitted from two diodes that were visible through the key. Each activation of the key initiated a 100-ms period during which no further activations were registered. A rectangular opening (52 mm wide, 57 mm high) located 20 mm above the floor and centered on the intelligence panel provided access to milo when a grain hopper behind the panel was activated (Coulbourn Instruments, part H14-10R). A houselight was mounted 12 mm from the ceiling on the side wall opposite the intelligence panel. The ventilation fan mounted on the rear wall of the sound-attenuating chamber provided masking noise of 60 dB. Experimental events were arranged via a Med-PC® interface connected to a PC running Med-PC IV® software. Hopper training sessions were conducted in a fourth chamber, identical to the experimental chambers except that the front and back walls were covered with pink paper.

Procedure

Outline

The pigeons first were hopper trained and then exposed to positive automaintenance (PA) contingencies. At various stages of the experiment, the pigeons were trained with instrumental contingencies. Data analysis focused on the effect of each pretraining condition on subsequent performance under negative automaintenance (NA). Finally, we directly observed the topography of behavior under NA.

Hopper Training

In these sessions, the hopper was activated for 3.5 s at variable intervals averaging 20 s, sampling with replacement from a flat distribution ranging from 5 s to 35 s. The houselight was lit throughout the session. Sessions ended after 50 hopper activations, and were conducted once a day until all pigeons ate consistently from the hopper. Finally, at least one 1-hr session was conducted, in which the pigeon was placed in the chamber but no food was delivered.
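As an illustration only, the variable-time schedule just described can be sketched in a few lines of Python; the function name and structure are ours, not the authors' control code.

```python
import random

def hopper_training_session(n_activations=50, hopper_s=3.5,
                            iti_min=5.0, iti_max=35.0):
    """Sketch of the variable-time hopper schedule: intervals are drawn with
    replacement from a flat (uniform) distribution ranging from 5 s to 35 s
    (mean 20 s), and the session ends after 50 hopper activations."""
    elapsed = 0.0
    for _ in range(n_activations):
        elapsed += random.uniform(iti_min, iti_max)  # variable interval before food
        elapsed += hopper_s                          # 3.5-s hopper activation
    return elapsed  # total simulated session duration in seconds

print(round(hopper_training_session(), 1))
```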

Following hopper training, all experimental sessions were conducted daily. The chamber houselight was illuminated throughout each session. The unconditional stimulus (US, or reinforcer) was 2.5 s of hopper activation. All pigeons were initially assigned, unsystematically, to one of two compound groups (AB or CD). Both groups were initially trained using PA contingencies, and then moved to other conditions (see Table 1), keeping the number of pigeons in each condition as equal as possible.

Table 1

Chronological order and number of sessions in each experimental condition for each pigeon.

Group     A / B / C / D
Pigeon    39  41  32  42  43  112  114  115  123  117  121  122
Condition (number of sessions per pigeon):
1. Brief PA 15 14 15 16 15
2. Early NA 22 20 21 21 21
3. Extended PA 222 222 230 221 217 174 155 166 156 143 158 168
4. PA retraining 15 15 14 14 15 15 16 15 15
5. Instrumental training 4 10 4 4 4 4
6. Late NA 20 20 21 20 27 19 20 20 17 20 20 23
7. Post-NA instrumental training 4 4 4 4 4 4
8. Re-exposure to NA 21 21 20 20 20 21
9. Direct observation 1 1 1 1

Positive Automaintenance (PA)

Each PA trial was preceded by an intertrial interval (ITI), during which the response key light was off. On each trial, the response key was illuminated for an interval ttrial; the response key was then turned off and food was delivered, followed by the next ITI. Table 2 specifies ttrial and ITI duration (tITI) for each pigeon, in each PA condition and in other experimental conditions where applicable. A variety of ttrial/tITI ratios (1/3 to 1/12) were initially explored unsystematically. Sessions ended after 60 trials when tITI < 192 s; otherwise they ended after 28 trials.
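The trial cycle just described can be summarized in a short Python sketch. This is illustrative only; the helper functions (illuminate_key, darken_key, deliver_food) are hypothetical stand-ins for the events controlled by the MED-PC interface, not part of the original control program.

```python
import time

# Hypothetical stand-ins for the operations controlled by the MED-PC interface.
def illuminate_key(): print("key light on")
def darken_key():     print("key light off")
def deliver_food():   print("hopper raised for 2.5 s")

def pa_trial(t_trial=8.0, t_iti=48.0):
    """One positive-automaintenance (PA) trial: an ITI with the key dark,
    the key lit for t_trial seconds, then key off and food delivery,
    regardless of whether the pigeon pecked."""
    time.sleep(t_iti)     # intertrial interval, response key light off
    illuminate_key()
    time.sleep(t_trial)   # pecks have no programmed consequence under PA
    darken_key()
    deliver_food()        # the reinforcer follows every trial
```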

Table 2

Trial (ttrial) and ITI duration (tITI) in s.

Group     A / B / C / D
Pigeon    39  41  32  42  43  112  114  115  123  117  121  122
Condition (ttrial/tITI, in s, per pigeon):
1. Brief PA 4/12 16/48 8/48 4/48 16/192
2. Early NA 4/12 16/48 8/48 4/48 16/192
3. Extended PA 4/12 16/48 8/48 4/48 16/192 8/48 16/192 4/48 16/48 16/192 16/192 *
4. PA retraining 8/48 8/48 8/48 8/48 8/48 8/48 8/48 8/48 8/48

Positive automaintenance contingencies were in effect for either about 15 sessions (Brief PA, compound group AB only) or about 200 sessions (Extended PA); exact numbers are given in Table 1. After the Extended PA condition, each pigeon was assigned to one of four groups (AB to A or B, CD to C or D). This assignment attempted to balance experienced ttrial/tITI ratios across compound groups AC and BD, and response rates obtained during the last 10 sessions of the Extended PA condition across the four groups. In a subsequent condition (PA retraining), ttrial/tITI was changed to 1/6 (ttrial = 8 s, tITI = 48 s) where necessary, in order to equalize ttrial/tITI ratios across pigeons. This ratio was kept constant for all pigeons thereafter.

Instrumental Training

Sessions consisted of six blocks of 10 trials each. Each trial was preceded by a 10-s ITI, during which the response key was unlit. During each trial the response key was illuminated until a fixed-ratio requirement was completed. During the first block of trials, the fixed-ratio requirement was 1: after one response, the keylight was turned off and food was presented. Each subsequent block doubled the previous fixed-ratio requirement, so that the last block required 32 responses for reinforcement. Sessions ended after all scheduled reinforcers were delivered, or after 90 min, whichever happened first.
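The resulting per-trial requirements (FR 1 through FR 32) can be generated as follows; this is a sketch of the schedule logic, not the original control code.

```python
def fixed_ratio_requirements(n_blocks=6, trials_per_block=10):
    """Per-trial fixed-ratio requirements: FR 1 throughout the first block of
    10 trials, doubling with each new block, up to FR 32 in the sixth block."""
    schedule, ratio = [], 1
    for _ in range(n_blocks):
        schedule.extend([ratio] * trials_per_block)
        ratio *= 2
    return schedule

requirements = fixed_ratio_requirements()
print(requirements[0], requirements[-1], len(requirements))  # 1 32 60
```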

Negative Automaintenance (NA)

Sessions were similar to those in PA conditions, except that: (a) food was delivered after a trial only if the illuminated response key was not pecked during that trial; and (b) sessions finished after 60 deliveries of food or 100 min, whichever happened first. In all NA sessions, ttrial = 8 s, and tITI = 48 s.
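To make the omission rule concrete, here is a minimal Python sketch of an NA session under these contingencies; the bookkeeping and the example peck pattern are hypothetical, not taken from the experiment.

```python
def na_session(peck_pattern, t_trial=8.0, t_iti=48.0,
               max_food=60, max_minutes=100.0):
    """Omission (negative automaintenance) contingency: food follows a trial
    only if the lit key was not pecked during that trial. The session ends
    after 60 food deliveries or 100 min, whichever comes first."""
    food, minutes = 0, 0.0
    for pecked in peck_pattern:        # True if the key was pecked on that trial
        minutes += (t_iti + t_trial) / 60.0
        if not pecked:                 # no peck during the trial -> reinforcer
            food += 1
        if food >= max_food or minutes >= max_minutes:
            break
    return food, round(minutes, 1)

# Example: a pigeon that pecks on every third trial forfeits food on those trials.
print(na_session([i % 3 == 0 for i in range(200)]))
```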

Direct Observation

For 102 to 127 days prior to observation, Pigeons 43, 112, and 121 were kept in their home cages, whereas Pigeon 32 received instrumental contingencies similar to those described in Instrumental training. We expected that a prolonged removal from NA would result in a recovery of sign tracking when pigeons were reexposed to experimental conditions (see Rescorla, 1997). Data collected from prior NA conditions suggested that a substantial reduction in response rate would be evident during the first session of NA reinstatement. Direct observation was conducted in that first session. Direct observation sessions were similar to those in NA conditions, but they always ended after 60 trials. During these sessions the sound- and light-attenuating box was open, and an experimenter with an observation protocol that listed the responses in Table 3 sat approximately 1.5 m diagonally from the box, with the front panel easily observable. Observations were conducted in a dark, quiet room.

Table 3

Response frequency during the first and last 20 trials of the direct observation session (shown as first-last).

Response                                                      Pigeon 32  Pigeon 43  Pigeon 112  Pigeon 121
1. Pecked key.                                                   9-1       14-7        4-1         15-7
2. Contacted wall adjacent to key with beak.                     1-17       7-16       3-0         10-18
3. “Stared” at key for 1 s or more.                             12-20       1-1        1-1          5-8
4. Completed pecking movement towards key without
   contacting the key or adjacent wall.                          2-4       11-7        2-1          2-1
5. Pecked any surface of the chamber other than key
   or adjacent wall.                                             1-0        1-0        5-13         0-0
6. Wing flapping and turning.                                    1-0        0-0        4-4          2-0
7. “Stared” at wall opposite of key for 1 s or more.             0-0        0-0        1-0          1-0
8. Hopper exploration before food delivery.                      2-2        0-0        0-0          0-0

Analysis

Modeling

The impact of negative automaintenance (NA) on the response rate of a pretrained pigeon may be described as a trajectory drawn between two points: an initial rate (R0) and a terminal rate maintained by NA contingencies (RNA). A trajectory between these two rates that is consistent with the simplest models of learning would start at R0 when NA contingencies are first introduced, and move towards RNA at a rate proportional to its current distance from RNA. This implies that response rate [R(t)] changes exponentially from R0 to the base rate of RNA as a function of time (t) of exposure to NA:

R(t) = R_{NA} + (R_{0} - R_{NA}) e^{-t/c}        (1)

where c is a time constant. When t = c, the transition is 1 - e^{-1} ≈ 63% complete. When the terminal rate (RNA) is set to zero, this model is equivalent to Clark's (1959) Equation 2, which accounted for response rates under extinction following variable-interval schedules of reinforcement.

To model a cumulative record, Equation 1 is integrated over time and the cumulative number of responses B is obtained as a function of cumulative trial time t (i.e., ITIs are excluded):

B(t) = R_{NA} t + c (R_{0} - R_{NA}) (1 - e^{-t/c})        (2)
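For completeness, the integration behind Equation 2 is shown below; this intermediate step is routine calculus and is not spelled out in the original text.

B(t) = \int_{0}^{t} R(\tau) d\tau = \int_{0}^{t} [R_{NA} + (R_{0} - R_{NA}) e^{-\tau/c}] d\tau = R_{NA} t + c (R_{0} - R_{NA}) (1 - e^{-t/c})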

This model was fitted to the cumulative number of key pecks obtained under each NA condition using the method of least squares (Brown, 2001). The parameter R0 captured the response rates carried over from preceding conditions (i.e., positive automaintenance or instrumental training). This model permitted us to analytically separate these persisting rates from those maintained by NA alone, RNA. Estimates of parameter c indicated the rate at which NA contingencies gained control over key pecking.
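The authors fitted Equation 2 by least squares using a spreadsheet procedure (Brown, 2001; the Acknowledgments mention Excel's Solver function). As an illustration only, an equivalent fit can be sketched with SciPy; the data, starting values, and routine below are ours, not the article's.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_responses(t, R0, c, RNA):
    """Equation 2: cumulative key pecks as a function of cumulative trial
    time t, given the initial rate R0, the time constant c, and the terminal
    rate RNA maintained by NA contingencies."""
    return RNA * t + c * (R0 - RNA) * (1.0 - np.exp(-t / c))

# Hypothetical cumulative record for illustration (values are made up).
t_obs = np.linspace(0.0, 160.0, 50)
b_obs = cumulative_responses(t_obs, R0=40.0, c=5.0, RNA=1.3)
b_obs = b_obs + np.random.default_rng(0).normal(0.0, 3.0, t_obs.size)  # noise

# Least-squares estimates of R0, c, and RNA (cf. Figure 2).
(est_R0, est_c, est_RNA), _ = curve_fit(cumulative_responses, t_obs, b_obs,
                                        p0=[30.0, 10.0, 1.0])
print(f"R0 = {est_R0:.1f}, c = {est_c:.1f}, RNA = {est_RNA:.1f}")
```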

Results

Pretraining Conditions

All pigeons were responding consistently by the end of their first exposure to positive automaintenance (PA). The median response rate over the last 10 sessions of the first PA exposure (Brief or Extended) was 43.0 rpm; the minimum response rate was 6.0 rpm (Pigeon 122). However, responding in PA after Early NA (Groups A and B) was noticeably reduced; over the last 10 sessions of PA (Extended or retraining), the response rate of all these pigeons but one (# 39) was below 4 rpm; the median response rate was 2.9 rpm.

All 12 pigeons were responsive to instrumental contingencies, even after NA training. All pigeons except one (# 42) obtained all programmed reinforcers in the first session of instrumental training and on every session thereafter. Pigeon 42 started to collect all programmed reinforcers on the fifth session.

Negative Automaintenance (NA)

Cumulative response records obtained during NA conditions following the various pretraining manipulations (Footnote 1) are presented in Figure 1, along with the best fit of Equation 2. Parameters R0, c, and RNA of Equation 2 capture the rise, deceleration, and terminal rate of these curves. Despite the large variability across individual performances, the data were well accounted for by Equation 2.

Figure 1

Cumulative records (thick lines) obtained from NA conditions, and superimposed projections based on Equation 2 (thin lines) for each pigeon.

Pigeon numbers are indicated at the end point of each record. Panel titles indicate preceding training conditions.

Figure 2 shows the parameter estimates of the individual pigeons. The difference in vertical axis scales between the top (R0) and bottom (RNA) panels of Figure 2 indicates the large decrease in response rate due to NA contingencies. The low values of RNA are visible in Figure 1 as the relatively flat terminal slopes of the cumulative response curves. Pooling across conditions, the median RNA/R0 ratio was less than 1/21; NA contingencies effected a 95% reduction in response rates.

Figure 2

Equation 2 parameter estimates used in Figure 1.

Roman numerals on the horizontal axis correspond to panels in Figure 1—i.e., NA preceded by: (I) Brief PA; (II) Extended PA and no instrumental training; (III) Extended PA and instrumental training; (IV) Extended PA, prior NA, and instrumental training. Each pigeon is represented by a unique symbol, and median parameter values are represented by horizontal marks.

Figure 2 also shows the median parameter value for each pretraining manipulation across pigeons (horizontal marks). Using these median values, we generated a set of representative cumulative records (Figure 3). For comparison purposes, we fitted Equation 2 to Williams & Williams's (1969) data from 13 pigeons (Footnote 2), and used the median of those parameter estimates to generate another ideal cumulative record, presented in Figure 3 as Curve V. The relatively low values of RNA seen in Figures 1 and 2 are reflected as relatively flat terminal slopes in Figure 3. Although the curves in Figure 3 rise from the origin with various slopes and differ in curvature, they all stabilize as roughly parallel lines, with the exception of curve IV. The divergence of curve IV (recent preexposure to NA followed by instrumental training) comes from low median estimates of all three parameters (see Figure 2).

Figure 3

Curves I through IV: Representative cumulative records based on Equation 2, and elaborated from median best-fitting parameter values (see horizontal marks in Figure 2).

Roman numerals at the end point of each record indicate pretraining condition: (I) Brief PA; (II) Extended PA and no instrumental training; (III) Extended PA and instrumental training; (IV) Extended PA, prior NA, and instrumental training. Curve V: Representative cumulative record elaborated in the same way as Curves I through IV, but based on parameter estimates that provided the best fit to Williams & Williams's (1969, Figure 1) NA data; the continuous curve stops at the median cumulative exposure to key light (44 min), and is extrapolated as a dashed curve for comparison purposes.

Direct Observation

Changes in the response topography of the 4 pigeons that were later reintroduced to NA contingencies were relatively uniform (Table 3): The number of trials with key pecks was reduced by at least half from the first to the last block of 20 trials. The behavior that substituted for key pecking during the last block of trials was key-oriented behavior that was relatively frequent even during the first block of 20 trials: pecking areas of the front panel adjacent to the key (Pigeons 32, 43, and 121), pecking other surfaces (the junction of the front panel and left wall: Pigeon 112), or holding the head close to and oriented towards the key for more than 1 s (“staring”; Pigeon 32). Only 1 of 4 pigeons (Pigeon 32) engaged in hopper exploration prior to reinforcer delivery (“goal tracking”) during the last block of 20 trials, and only twice.

Discussion

Equation 2 provides an expedient tool for separating the transient effects of pretraining conditions (R0) from response rates maintained by current conditions (RNA). Furthermore, it specifies the time required for transient effects to decline (c): Exponential processes are 95% of the way to asymptote when 3c seconds have elapsed (e^{-3} ≈ 0.05). Equation 2 permitted the analytic separation of R0 and RNA to evaluate changes in the rate of acquisition of a new topography and the long-term effectiveness of negative automaintenance (NA) contingencies. Despite the various pretraining manipulations, omission contingencies were effective in reducing responding to extinction levels; in fact, the key-pecking rate that was maintained by NA contingencies (mean RNA = 1.3 rpm) was very close to typical response rates reported under post-PA extinction in pigeons (1.4 rpm: Rescorla, 2003, Experiment 3; 1.3 rpm: Woodard et al., 1974; in both experiments, the total keylight exposure in extinction was 32 min, and mean rates were calculated over the last extinction session). This result does not support prior reports of key peck maintenance under NA (e.g., Brownstein & Balsam, 1975), but it is consistent with previous failures to maintain responding under omission contingencies (e.g., Locurto et al., 1976).

The archetype of NA (Williams & Williams, 1969) differs from the procedure reported here in two respects: Williams and Williams reported low initial rates and noticeably shorter exposure to NA contingencies. The first difference is probably due to the type of pretraining provided to the pigeons (none in most cases; one session of PA or instrumental training for 3 pigeons); the shorter exposure precludes a direct comparison between the data presented here and Williams and Williams's data. When the median performance of Williams and Williams's pigeons was projected using Equation 2, however, we were unable to differentiate it from the median performance of pigeons following the various pretraining manipulations reported here. Whereas the performance of Williams and Williams's pigeons is usually presented as evidence of persistent sign tracking under NA (see, e.g., Locurto et al., 1976), Figure 3 shows that estimated asymptotic response rates were close to most of our pigeons'—that is, they were close to operant level. A simple linear projection of Williams and Williams's data could mislead one into assuming high maintenance rates. The median value of c (9.1) shows that final rates were within 1% of asymptote, and that asymptote was not significantly different from the asymptotes shown in Figure 3.

The nearly parallel curves in Figure 3 indicate that pretraining conditions had only a transient effect on NA performance. The exception (curve IV) suggests that recent preexposure to NA contingencies could have overridden whatever transient impact instrumental pretraining might have had (compare curves II and III). Furthermore, the combined exposure to NA and instrumental contingencies may have resulted in a long-lasting reduction of response rate, generating a performance relatively invariant across pigeons (in Figure 1, compare Panels II and IV). This effect has been reported previously for alternations between variable-interval schedules and extinction (Clark, 1964).

The results presented here are inconsistent with the notion that prior strengthening of a key–food association through repeated exposure can maintain responding in subsequent omission training. Also, the failure of instrumental pretraining to forestall the elimination of pecking suggests that preacquisition of a key peck–food association alone was not sufficient for maintenance under omission. By ruling out alternative explanations, the present results indicate that what little perseveration is found under omission training depends on extensive training with a positive response–reinforcer contingency (Powell & Kelly, 1976). Although omission training virtually eliminates key pecking, it nonetheless maintains other key-approaching behavior in pigeons (Table 3, and Barrera, 1974; Eldridge & Pear, 1987; Jenkins, 1981; Lucas, 1975; Schwartz & Williams, 1972) and various other responses in rats (Davey, Oakley, & Cleland, 1981; Holland, 1979; Stiers & Silberberg, 1974) and dogs (Herendeen & Shapiro, 1975; Shapiro & Herendeen, 1975). Omission training does not so much eliminate the cue approach generated by the cue–reinforcer relation of PA as redirect it to allow the delivery of reinforcers. Prolonged reinforcement of response repetition may interfere with acquisition of this redirection by generating topographically rigid behavior (Page & Neuringer, 1985) that precludes the development of wall pecks, “staring,” and other key peck-incompatible responses. Negative automaintenance also may have produced a similar rigidity in behavior, to the extent that it hindered the reacquisition of key pecking when PA was reinstated.

This reassessment of sign tracking suggests a predominant plasticity in what frequently is regarded as a highly preorganized behavioral pattern. The topography of Pavlovian conditional responses (CRs) appears to be strongly modulated by the instrumental reinforcement of competing behavior. The sensitivity of CRs to this interference seems to depend on multiple parameters; we have explored just one, the extent of preexposure to cue–food and response–food contingencies. Other parameters to be considered may include intrinsic properties of the US and conditional stimuli (e.g., Holland, 1979; Wilkie, 1976), aspects of CR topography inconsistent with reinforcement (e.g., Davey et al., 1981; Lucas, 1975), level of US cancellation or devaluation contingent on the CR (e.g., US postponement: Allan & Matthews, 1991; Myerson, Myerson, & Parker, 1979), and level of US deprivation (e.g., Poling & Poling, 1978; Rosenthal & Matthews, 1978). The large variability in these parameters across research reports may account for the inconsistent results obtained under NA procedures. From an applied perspective, the sensitivity of CRs to the differential reinforcement of other responses suggests that undesirable CRs (e.g., craving for drugs of abuse) might be reduced or redirected by reinforcing incompatible responses. This effect would be analogous to the competition of instrumentally conditioned responses described by Herrnstein's (1961) Matching Law, which has provided valuable insights for behavior modification (McDowell, 1982). The potential parallel between the theoretical and applied implications of the present results and those of Herrnstein's Matching Law deserves further research.

Acknowledgments

We thank Diana Posadas-Sánchez and Lewis A. Bizo for their contribution to experimental design and management, Douglas Elliffe for introducing us to the Solver function, and Erica Babino, Michelle Gaza, Paul Jellison, Neeley John, Sheena Prakash, Rebecca Rich, Morgan Stanton, Jeffrey Starrick, Eric Thrailkill, and Refina Willie, for collecting the data presented here. Portions of this research were presented at the 2005 Winter Conference on Animal Learning and Behavior, Winter Park, CO.

Footnotes

This research was supported by NIMH grant # 1R01MH066860.

1. A substantial portion of Pigeon 42’s responses in the Early NA condition was not recorded. The magnitude of data loss made the fitted parameter values ambiguous; all data obtained from this pigeon in this condition, therefore, were excluded from analysis.

2. Fitting was conducted only on data collected after the first response of each pigeon in Williams and Williams’s (1969) experiment.

References

  • Allan R.W, Matthews M.T. Turning back the clock in serial-stimulus sign tracking. Journal of the Experimental Analysis of Behavior. 1991;56:427–443. [PMC free article] [PubMed] [Google Scholar]
  • Barrera F.J. Centrifugal selection of signal-directed pecking. Journal of the Experimental Analysis of Behavior. 1974;22:341–355. [PMC free article] [PubMed] [Google Scholar]
  • Brown A.M. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet. Computer Methods and Programs in Biomedicine. 2001;65:191–200. [PubMed] [Google Scholar]
  • Brown P.L, Jenkins H.M. Auto-shaping of the pigeon's key-peck. Journal of the Experimental Analysis of Behavior. 1968;11:1–8. [PMC free article] [PubMed] [Google Scholar]
  • Brownstein A.J, Balsam P.D. Search for conditioned reinforcement effects in negative automaintenance of keypecking. Bulletin of the Psychonomic Society. 1975;6:165–168. [Google Scholar]
  • Clark F.C. Some quantitative properties of operant extinction data. Psychological Reports. 1959;5:131–139. [Google Scholar]
  • Clark F.C. Effects of repeated VI reinforcement and extinction upon operant behavior. Psychological Reports. 1964;15:943–955. [Google Scholar]
  • Crawford L.L, Domjan M. Sexual approach conditioning: Omission contingency tests. Animal Learning & Behavior. 1993;21:42–50. [Google Scholar]
  • Davey G.C.L, Oakley D, Cleland G.G. Autoshaping in the rat: Effects of omission on the form of the response. Journal of the Experimental Analysis of Behavior. 1981;36:75–91. [PMC free article] [PubMed] [Google Scholar]
  • Deich J.D, Wasserman E.A. Rate and temporal pattern of key pecking under autoshaping and omission schedules of reinforcement. Journal of the Experimental Analysis of Behavior. 1977;27:399–405. [PMC free article] [PubMed] [Google Scholar]
  • Dickinson A, Squire S, Varga Z, Smith J.W. Omission learning after instrumental pretraining. Quarterly Journal of Experimental Psychology Section B-Comparative and Physiological Psychology. 1998;51:271–286. [Google Scholar]
  • Eldridge G.D, Pear J.J. Topographical variations in behavior during autoshaping, automaintenance, and omission training. Journal of the Experimental Analysis of Behavior. 1987;47:319–333. [PMC free article] [PubMed] [Google Scholar]
  • Gamzu E, Schwam E. Autoshaping and automaintenance of the key-press response in squirrel monkeys. Journal of the Experimental Analysis of Behavior. 1974;21:361–372. [PMC free article] [PubMed] [Google Scholar]
  • Gormezano I, Hiller G.W. Omission training of jaw-movement response of rabbits to a water US. Psychonomic Science. 1972;29:276–278. [Google Scholar]
  • Griffin R.W, Rashotte M.E. A note on the negative automaintenance procedure. Bulletin of the Psychonomic Society. 1973;2:402–404. [Google Scholar]
  • Hearst E, Jenkins H.M. Sign-tracking: The stimulus-reinforcer relation and directed action. Austin, TX: Psychonomic Society; 1974. [Google Scholar]
  • Herendeen D.L, Shapiro M.M. Extinction and food-reinforced inhibition of conditioned salivation in dogs. Animal Learning & Behavior. 1975;3:103–106. [PubMed] [Google Scholar]
  • Herrnstein R.J. Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior. 1961;4:267–272. [PMC free article] [PubMed] [Google Scholar]
  • Herrnstein R.J, Loveland D.H. Food-avoidance in hungry pigeons, and other perplexities. Journal of the Experimental Analysis of Behavior. 1972;18:369–383. [PMC free article] [PubMed] [Google Scholar]
  • Holland P.C. Differential effects of omission contingencies on various components of Pavlovian appetitive conditioned responding in rats. Journal of Experimental Psychology-Animal Behavior Processes. 1979;5:178–193. [PubMed] [Google Scholar]
  • Hursh S.R, Navarick D.J, Fantino E. “Automaintenance”: The role of reinforcement. Journal of the Experimental Analysis of Behavior. 1974;21:117–124. [PMC free article] [PubMed] [Google Scholar]
  • Jenkins P.E. The determiners of keypeck duration. Animal Learning & Behavior. 1981;9:501–507. [Google Scholar]
  • Killeen P.R. Complex dynamic processes in sign tracking with an omission contingency (negative automaintenance). Journal of Experimental Psychology-Animal Behavior Processes. 2003;29:49–61. [PMC free article] [PubMed] [Google Scholar]
  • Locurto C.M. Contributions of autoshaping to the partitioning of conditioned behavior. In: Locurto C.M, Terrace H.S, Gibbon J, editors. Autoshaping and conditioning theory. New York: Academic Press; 1981. pp. 101–135. [Google Scholar]
  • Locurto C, Terrace H.S, Gibbon J. Autoshaping, random control, and omission training in rat. Journal of the Experimental Analysis of Behavior. 1976;26:451–462. [PMC free article] [PubMed] [Google Scholar]
  • Lucas G.A. Control of keypecks during automaintenance by prekeypeck omission training. Animal Learning & Behavior. 1975;3:33–36. [Google Scholar]
  • McDowell J.J. The importance of Herrnstein's mathematical statement of the law of effect for behavior therapy. American Psychologist. 1982;37:771–779. [PubMed] [Google Scholar]
  • McSweeney F.K, Swindell S, Weatherly J.N. Within-session changes in responding during autoshaping and automaintenance procedures. Journal of the Experimental Analysis of Behavior. 1996;66:51–61. [PMC free article] [PubMed] [Google Scholar]
  • Monterosso J, Ainslie G. Beyond discounting: Possible experimental models of impulse control. Psychopharmacology. 1999;146:339–347. [PubMed] [Google Scholar]
  • Myerson J, Myerson W.A, Parker B.K. Automaintenance without stimulus-change reinforcement: Temporal control of key pecks. Journal of the Experimental Analysis of Behavior. 1979;31:395–403. [PMC free article] [PubMed] [Google Scholar]
  • O'Connell M.F. Temporal distributions of responding during discrete-trial omission training in rats. Journal of the Experimental Analysis of Behavior. 1979;31:31–40. [PMC free article] [PubMed] [Google Scholar]
  • Page S, Neuringer A. Variability is an operant. Journal of Experimental Psychology: Animal Behavior Processes. 1985;11:429–452. [Google Scholar]
  • Pithers R.T. The roles of event contingencies and reinforcement in human autoshaping and omission responding. Learning and Motivation. 1985;16:210–237. [Google Scholar]
  • Poling A, Poling T. Automaintenance in guinea pigs: Effects of feeding regimen and omission training. Journal of the Experimental Analysis of Behavior. 1978;30:37–46. [PMC free article] [PubMed] [Google Scholar]
  • Powell R.W, Kelly W. Responding under positive and negative response contingencies in pigeons and crows. Journal of the Experimental Analysis of Behavior. 1976;25:219–225. [PMC free article] [PubMed] [Google Scholar]
  • Rescorla R.A. Spontaneous recovery after Pavlovian conditioning with multiple outcomes. Animal Learning & Behavior. 1997;25:99–107. [Google Scholar]
  • Rescorla R.A. Protection from extinction. Learning and Behavior. 2003;31:124–132. [PubMed] [Google Scholar]
  • Rosenthal R.L, Matthews T.J. The effects of prefeeding in autoshaping and omission training. Bulletin of the Psychonomic Society. 1978;11:153–156. [Google Scholar]
  • Schwartz B, Williams D.R. Role of response-reinforcer contingency in negative automaintenance. Journal of the Experimental Analysis of Behavior. 1972;17:351–357. [PMC free article] [PubMed] [Google Scholar]
  • Shapiro M.M, Herendeen D.L. Food-reinforced inhibition of conditioned salivation in dogs. Journal of Comparative and Physiological Psychology. 1975;88:628–632. [PubMed] [Google Scholar]
  • Sheffield F.D. Relation between classical conditioning and instrumental learning. In: Prokasy W.F, editor. Classical conditioning: A symposium. New York: Appleton-Century-Crofts; 1965. pp. 302–322. [Google Scholar]
  • Stiers M, Silberberg A. Lever-contact responses in rats: Automaintenance with and without a negative response-reinforcer dependency. Journal of the Experimental Analysis of Behavior. 1974;22:497–506. [PMC free article] [PubMed] [Google Scholar]
  • Thorndike E.L. Animal intelligence. New York: Macmillan; 1911. [Google Scholar]
  • Tomie A. CAM - An animal-learning model of excessive and compulsive implement-assisted drug-taking in humans. Clinical Psychology Review. 1995;15:145–167. [Google Scholar]
  • Tomie A, Di Poce J, Aguado A, Janes A, Benjamin D, Pohorecky L. Effects of Autoshaping Procedures on 3H-8-OH-DPAT-labeled 5-HT1a binding and 125I-LSD-labeled 5-HT2a binding in rat brain. Brain Research. 2003;975:167–178. [PubMed] [Google Scholar]
  • Vollmer T.R, Ringdahl J.E, Roane H.S, Marcus B.A. Negative side effects of noncontingent reinforcement. Journal of Applied Behavior Analysis. 1997;30:161–164. [PMC free article] [PubMed] [Google Scholar]
  • Wilkie D.M. Keypecking under different intertrial intervals in negative automaintenance. Bulletin of the Psychonomic Society. 1976;8:431–432. [Google Scholar]
  • Williams D.R, Williams H. Auto-maintenance in pigeon: Sustained pecking despite contingent nonreinforcement. Journal of the Experimental Analysis of Behavior. 1969;12:511–520. [PMC free article] [PubMed] [Google Scholar]
  • Woodard W.T, Ballinger J.C, Bitterman M.E. Autoshaping: Further study of negative automaintenance. Journal of the Experimental Analysis of Behavior. 1974;22:47–51. [PMC free article] [PubMed] [Google Scholar]

