Individual differences among participants in a random groups design are controlled by balancing them across the conditions of the experiment through random assignment.

I. Key Points

Why Psychologists Conduct Experiments

Researchers conduct experiments to test hypotheses about the causes of behavior.

Experiments allow researchers to decide whether a treatment or program effectively changes behavior.


Logic of Experimental Research

Researchers manipulate an independent variable in an experiment to observe the effect on behavior, as assessed by the dependent variable.

Experimental control allows researchers to make the causal inference that the independent variable caused the observed changes in the dependent variable.

Control is the essential ingredient of experiments; experimental control is gained through manipulation, holding conditions constant, and balancing.

An experiment has internal validity when it fulfills the three conditions required for causal inference: covariation, time-order relationship, and elimination of plausible alternative causes.

When confounding occurs, a plausible alternative explanation for the observed covariation exists, and therefore, the experiment lacks internal validity. Plausible alternative explanations are ruled out by holding conditions constant and balancing.


Random Groups Design

In an independent groups design, each group of subjects participates in only one condition of the independent variable.

Random assignment to conditions is used to form comparable groups by balancing or averaging subject characteristics (individual differences) across the conditions of the independent variable manipulation.

When random assignment is used to form independent groups for the levels of the independent variable, the experiment is called a random groups design.

    Block Randomization

    Block randomization balances subject characteristics and potential confoundings that occur during the time in which the experiment is conducted, and it creates groups of equal size.
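
    As an illustration, the following is a minimal Python sketch of block randomization; the function name and the three-condition setup are hypothetical, not from the source.

```python
import random

def block_randomize(conditions, n_per_condition, seed=None):
    """Build a block-randomized assignment schedule.

    Each block contains every condition exactly once, in a random
    order; with n_per_condition blocks, every condition ends up
    with the same number of subjects.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_per_condition):
        block = list(conditions)
        rng.shuffle(block)  # random order of conditions within each block
        schedule.extend(block)
    return schedule

# Example: 3 conditions, 5 subjects per condition -> 5 blocks of 3;
# subjects are assigned to conditions in the order they arrive.
print(block_randomize(["A", "B", "C"], n_per_condition=5, seed=42))
```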

    Threats to Internal Validity

    Randomly assigning intact groups to different conditions of the independent variable creates a potential confounding due to preexisting differences among participants in the intact groups.

    Block randomization increases internal validity by balancing extraneous variables across conditions of the independent variable.

    Selective subject loss, but not mechanical subject loss, threatens the internal validity of an experiment.

    Placebo control groups are used to control for the problem of demand characteristics, and double-blind experiments control both demand characteristics and experimenter effects.


Analysis and Interpretation of Experimental Findings

    The Role of Data Analysis in Experiments

    Data analysis and statistics play a critical role in researchers' ability to make the claim that an independent variable has had an effect on behavior.

    The best way to determine whether the findings of an experiment are reliable is to replicate the experiment.

    Describing the Results

    The two most common descriptive statistics that are used to summarize the results of experiments are the mean and standard deviation.

    Measures of effect size indicate the strength of the relationship between the independent and dependent variables, and they are not affected by sample size.

    One commonly used measure of effect size, d, examines the difference between two group means relative to the average variability in the experiment.
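
    A minimal worked sketch in Python (the scores are invented for illustration); the pooled standard deviation implements the "average variability" idea above.

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Difference between two group means divided by the pooled
    (within-group) standard deviation."""
    n1, n2 = len(group1), len(group2)
    var1, var2 = stdev(group1) ** 2, stdev(group2) ** 2
    # Pooled SD: weighted average of the two within-group variances.
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

treatment = [12, 15, 14, 16, 13]
control = [10, 11, 12, 9, 13]
print(round(cohens_d(treatment, control), 2))  # about 1.90, a "large" effect
```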

    Meta-analysis uses measures of effect size to summarize the results of many experiments investigating the same independent variable or dependent variable.
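
    In the simplest (fixed-effect) case, a meta-analytic summary is a weighted average of the studies' effect sizes, with larger studies weighted more heavily. A sketch with invented study results:

```python
# Each tuple is (effect size d, total sample size) for one experiment.
studies = [(0.45, 40), (0.62, 120), (0.30, 60)]

# Sample-size-weighted mean effect size: a simple fixed-effect summary.
weighted_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(round(weighted_d, 2))  # 0.50
```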

    Confirming What the Results Reveal

    Researchers use inferential statistics to determine whether an independent variable has a reliable effect on a dependent variable.

    Two methods to make inferences based on sample data are null hypothesis testing and confidence intervals.

    Researchers use null hypothesis testing to determine whether mean differences among groups in an experiment are greater than the differences that are expected simply because of error variation.

    A statistically significant outcome is one that would have a small likelihood of occurring if the null hypothesis were true.
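
    For the common two-group case, an independent-samples t test is one standard NHST procedure. A sketch using SciPy (assumed to be installed), with invented data:

```python
from scipy import stats

treatment = [12, 15, 14, 16, 13, 17, 14]
control = [10, 11, 12, 9, 13, 10, 12]

# Test the null hypothesis that the two population means are equal.
t, p = stats.ttest_ind(treatment, control)
alpha = 0.05  # conventional level of significance
print(f"t = {t:.2f}, p = {p:.4f}")
if p < alpha:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not statistically significant: do not reject the null hypothesis.")
```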

    Researchers determine whether an independent variable has had an effect on behavior by examining whether the confidence intervals for different samples in an experiment overlap. The degree of overlap provides information as to whether the sample means estimate the same population mean or different population means.
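
    A sketch of that comparison (invented data; SciPy supplies the t critical value):

```python
from statistics import mean, stdev
from scipy import stats

def ci95(scores):
    """95% confidence interval for a sample mean."""
    n = len(scores)
    sem = stdev(scores) / n ** 0.5  # standard error of the mean
    margin = stats.t.ppf(0.975, df=n - 1) * sem
    return (mean(scores) - margin, mean(scores) + margin)

treatment = [12, 15, 14, 16, 13, 17, 14]
control = [10, 11, 12, 9, 13, 10, 12]
print("treatment CI:", ci95(treatment))
print("control CI:  ", ci95(control))
# Little or no overlap suggests the samples estimate different
# population means; heavy overlap suggests they may estimate the same one.
```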


Establishing the External Validity of Experimental Findings

The findings of an experiment have external validity when they can be applied to other individuals, settings, and conditions beyond the scope of the specific experiment.

In some investigations (e.g., theory-testing), researchers may choose to emphasize internal validity over external validity; other researchers may choose to increase external validity using sampling or replication.

Conducting field experiments is one way that researchers can increase the external validity of their research in real-world settings.

Partial replication is a useful method for establishing the external validity of research findings.

Researchers often seek to generalize results about conceptual relationships among variables rather than specific conditions, manipulations, settings, and samples.


Matched Groups Design

A matched groups design may be used to create comparable groups when there are too few subjects available for random assignment to work effectively.

Matching subjects on the dependent variable (as a pretest) is the best approach for creating matched groups, but scores on any matching variable must correlate with the dependent variable.

After subjects are matched on the matching variable, they should then be randomly assigned to the conditions of the independent variable.
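
A minimal sketch of that two-step procedure in Python (the pretest scores and the two-condition setup are invented): rank subjects on the matching variable, form matched sets of adjacent subjects, then randomly assign each set's members to conditions.

```python
import random

def matched_assignment(pretest_scores, n_conditions=2, seed=None):
    """Match subjects on a pretest, then randomly assign the members
    of each matched set to the conditions of the experiment."""
    rng = random.Random(seed)
    # Rank subjects by their score on the matching variable.
    ranked = sorted(pretest_scores, key=pretest_scores.get)
    groups = {c: [] for c in range(n_conditions)}
    for i in range(0, len(ranked), n_conditions):
        matched_set = ranked[i:i + n_conditions]
        rng.shuffle(matched_set)  # random assignment within each matched set
        for condition, subject in enumerate(matched_set):
            groups[condition].append(subject)
    return groups

pretest = {"s1": 9, "s2": 14, "s3": 11, "s4": 8, "s5": 13, "s6": 10}
print(matched_assignment(pretest, n_conditions=2, seed=1))
```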


Natural Groups Design

Individual differences variables (or subject variables) are selected rather than manipulated to form natural groups designs.

The natural groups design represents a type of correlational research in which researchers look for covariations between natural groups variables and dependent variables.

Causal inferences cannot be made regarding the effects of natural groups variables because plausible alternative explanations for group differences exist.


II. Glossary

internal validity Degree to which differences in performance can be attributed unambiguously to an effect of an independent variable, as opposed to an effect of some other (uncontrolled) variable; an internally valid study is free of confounds.

independent groups design Each separate group of subjects in the experiment represents a different condition as defined by the level of the independent variable.

random assignment Most common technique for forming groups as part of an independent groups design; the goal is to establish equivalent groups by balancing individual differences.

random groups design Most common type of independent groups design in which subjects are randomly assigned to each group such that groups are considered comparable at the start of the experiment.

block randomization The most common technique for carrying out random assignment in the random groups design; each block includes a random order of the conditions, and there are as many blocks as there are subjects in each condition of the experiment.

threats to internal validity Possible causes of a phenomenon that must be controlled so a clear cause-effect inference can be made.

mechanical subject loss Occurs when a subject fails to complete the experiment because of equipment failure or because of experimenter error.

selective subject loss Occurs when subjects are lost differentially across the conditions of the experiment as the result of some characteristic of each subject that is related to the outcome of the study.

experimenter effects Experimenters' expectations that may lead them to treat subjects differently in different groups or to record data in a biased manner.

placebo control group Procedure in which participants are given an inert, or inactive, substance that resembles a drug or other active treatment.

double-blind procedure Both the participant and the observer are kept unaware (blind) of what treatment is being administered.

replication Repeating the exact procedures used in an experiment to determine whether the same results are obtained.

effect size Index of the strength of the relationship between the independent variable and dependent variable that is independent of sample size.

Cohen's d A frequently used measure of effect size in which the difference in means for two conditions is divided by the average variability of participants' scores (within-group standard deviation). Based on Cohen's guidelines, d values of .20, .50, and .80 represent small, medium, and large effects, respectively, of an independent variable.

meta-analysis Analysis of results of several (often, very many) independent experiments investigating the same research area; the measure used in a meta-analysis is typically effect size.

Null Hypothesis Significance Testing (NHST) A procedure for statistical inference used to decide whether a variable has produced an effect in a study. NHST begins with the assumption that the variable has no effect (see null hypothesis), and probability theory is used to determine the probability that the effect (e.g., a mean difference between conditions) observed in a study would occur simply by error variation ("chance"). If the likelihood of the observed effect is small (see level of significance), assuming the null hypothesis is true, we infer the variable produced a reliable effect (see statistically significant).

statistically significant When the probability of an obtained difference in an experiment is smaller than would be expected if error variation alone were assumed to be responsible for the difference, the difference is statistically significant.

confidence interval Indicates the range of values that we can expect to contain a population value with a specified degree of confidence (e.g., 95%).

matched groups design Type of independent groups design in which the researcher forms comparable groups by matching subjects using a matching variable and then randomly assigns the members of these matched sets of subjects to the conditions of the experiment.

individual differences variable A characteristic or trait that varies consistently across individuals, such as level of depression, age, intelligence, gender. Because this variable is formed from preexisting groups (i.e., it occurs "naturally"), an individual differences variable is sometimes called a natural groups variable. Another term sometimes used synonymously with individual differences variable is subject variable.

natural groups design Type of independent groups design in which the conditions represent the selected levels of a naturally occurring independent variable, for example, the individual differences variable age.

What are the three types of experimental design?

The types of experimental research design are determined by the way the researcher assigns subjects to conditions and groups. There are three types: pre-experimental, quasi-experimental, and true experimental research designs.

What is an independent groups design also known as?

A between-subjects design is also called an independent measures or independent-groups design because researchers compare unrelated measurements taken from separate groups.

What is the independent measures design?

Independent measures design, also known as between-groups, is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.
