Which outcome occurs when subjects in a study do not adequately represent the target population?

Cost-effectiveness

Louise Barnsbee, ... Son Nghiem, in Mechanical Circulatory and Respiratory Support, 2018

Target Population and Subgroups

The target population is the group of individuals in whom the intervention is to be studied and about whom conclusions will be drawn. In cost-effectiveness analysis, characteristics of the target population and any subgroups should be described clearly. The choice of characteristics depends on the medical literature and practices, the objectives of the study, and contextual information. For studies of VAD and ECMO, the key characteristics could be age, gender, risk factors such as smoking, and common health conditions such as heart failure [5]. For example, a target population was briefly described in a cost-effectiveness analysis of VADs as “a cohort of patients with advanced heart failure seen at the Toronto General Hospital” [6].

Randomized controlled trials use two randomly allocated groups: the treatment group (those who receive the intervention) and the control group (those who do not receive the intervention). For cost-effectiveness analyses, participants should ideally be allocated to the two groups at random, as this makes the groups more likely to be comparable in terms of their baseline health status. If random allocation is not possible, then statistical approaches, such as propensity score matching, can be used to make the treatment and comparison groups more comparable [7].
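Where random allocation is not feasible and propensity score matching is used, the basic workflow can be sketched in a few lines. The sketch below is illustrative only; it assumes a pandas DataFrame with a binary treatment indicator and baseline covariates, and all column and function names are assumptions rather than details taken from the cited studies:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_on_propensity(df, treatment_col, covariate_cols):
    """Pair each treated subject with the control subject nearest in propensity score."""
    df = df.copy()
    # 1. Estimate the propensity score: P(treatment | baseline covariates).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariate_cols], df[treatment_col])
    df["pscore"] = model.predict_proba(df[covariate_cols])[:, 1]
    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0]
    # 2. Nearest-neighbour matching on the propensity score (with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]
    # 3. Return the matched sample; costs and outcomes are then compared between groups.
    return pd.concat([treated, matched_controls])

Balance on the baseline covariates should still be checked after matching (for example, with standardized mean differences) before costs and outcomes are compared.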


URL: https://www.sciencedirect.com/science/article/pii/B9780128104910000242

New and Emerging Testing Technology for Efficacy and Safety Evaluation of Personal Care Delivery Systems

David Tonucci, in Delivery System Handbook for Personal Care and Cosmetic Products, 2005

44.4.1 Identifying the Target Population

Identifying the target population and the product use conditions is usually an integral part of the product development process, so this information is typically defined during product conceptualization. It is important to determine the target consumers in order to identify candidates who will be used to evaluate product safety. An understanding of the intended market and use conditions is essential to the development of an appropriate clinical safety plan. To address this issue, the following questions are typically asked and answered.

What is the target age group for the product? In determining the target age group, two general populations are typically chosen: adults (e.g., those over 18 yrs of age) and children (e.g., those under 18 yrs of age). If the product will be used preferentially by small children (typically under 12), the safety testing plan may require additional considerations. A small child has a significantly smaller potential volume of distribution than an adult, and this difference may significantly affect the safety of a product applied to the skin at doses designed for adults: a skin care product applied at concentrations anticipated for adults may raise the absorbed concentration to toxic levels.[12] Likewise, if the product will be used in a generally aged population, the clinical safety program should be constructed to use people in the appropriate cohort wherever possible. This age stratification is used because of differences in the protective and barrier functions of the skin. Typically, older skin is thinner, more fragile, and provides a weaker barrier to penetration of chemicals.[13]-[15] It is also more likely to develop irritation, and less likely to develop sensitization, than younger skin following exposure to products. For very young children, however, the more important consideration is systemic exposure to the product.
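A rough, purely illustrative calculation shows why the same applied dose is a greater concern in small children; the body weights and applied dose below are assumptions chosen for illustration, not figures from this chapter:

applied_dose_mg = 50.0    # hypothetical amount of ingredient applied to the skin
adult_weight_kg = 70.0    # assumed typical adult body weight
child_weight_kg = 10.0    # assumed typical small-child body weight

adult_exposure = applied_dose_mg / adult_weight_kg   # about 0.7 mg/kg
child_exposure = applied_dose_mg / child_weight_kg   # 5.0 mg/kg, roughly 7 times higher

print(f"Adult: {adult_exposure:.1f} mg/kg, child: {child_exposure:.1f} mg/kg")

On a weight-normalized basis, the child's exposure is several times the adult's, which is why an adult-strength application can approach toxic levels in a small child.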

Will the product be used preferentially by a specific ethnic group? The focus continues to expand for products developed for global markets, and this is becoming a major driver of safety testing programs. Evidence suggests that Asian and African skin responds differently than European skin to cosmetic and skin care products.[16]-[18] This is especially true for native Asian populations that maintain traditional dietary customs. Products developed for African or South American populations also need to be tested specifically in those populations, as there are differences in the sensitivity of these populations to certain product types as well as different rates of occurrence of certain skin pathologies (such as “eczema”). If a potential exists for use of the product in markedly different populations (e.g., Europeans and Asians), then separate clinical tests may need to be conducted. This is especially true in the area of sensitization and phototoxicity testing.

Will the product be used by consumers with specific health conditions? Testing in populations of varying health is certainly important for drug products, but it is often overlooked for cosmetic testing. When testing drug products that will be registered with a national health authority, the initial safety tests are always conducted on normal, healthy volunteers. This works because the product will eventually be tested more extensively in the target population, and safety will be extensively monitored. However, safety testing for cosmetic products, or other OTC products, is much less regulated, and there is significant potential to overlook the need to test in a certain population. For example, if a product will be used by people with “atopic” dermatitis, then a significant portion of the safety tests should be conducted in subjects with this condition. Another common example is testing products designed to be used by those with especially dry skin. Although these examples are focused on skin care products, the same principles hold true for any cosmetic or drug product.

How many customers are likely to use the product? The answer to this question will help determine the size of the studies that are conducted. If one is developing a product for a very well-defined population that is easy to quantify, one can size the studies accordingly. However, it is always safe to assume that the product will find its way into the hands of unintended consumers, so it is better to significantly overestimate the size of the target population. Underestimating the size of the target market often leads product developers to reduce the size of the testing program, which in turn often leads to undesirable product liability issues.

Are there certain geographic considerations that will impact product safety? This, too, is often overlooked in designing a clinical safety testing plan. One must determine whether the product will be used preferentially in a climate that affects the condition of the skin (i.e., very humid or very dry) and, therefore, the ability of the product to penetrate or damage the skin. Even if the product will be used in different climates, one must look at the impact of these climate effects on product safety. It is often prudent to consider testing the product under multiple conditions if it will be used in a wide range of environments. This is typically accomplished by seeking out a testing facility in the desired environment, to ensure that the subjects are acclimated and truly representative of the target consumer population.

The important message here is to test the product in the population that will use it, under the anticipated environmental conditions. If a product will truly be used in a global marketplace, then it should be tested in a diverse population.


URL: https://www.sciencedirect.com/science/article/pii/B9780815515043500493

Anthropometry and the design and production of apparel: an overview

D. Gupta, in Anthropometry, Apparel Sizing and Design, 2014

2.4.1 Data set for target population

Anthropometric measures that accurately represent the target population are essential for designing products. However, such data are rarely available. Often, the available anthropometric data are drawn from populations that are markedly different from the target populations (Parkinson and Reed, 2010). Surveys vary in terms of the size of the population, the age group of subjects, the time of data collection, and the procedures used. In most countries, extensive data are available for military populations, and products for civilian use are often designed on the basis of these measures. Even data collected from civilian populations may not be truly representative of the typical user populations. This leads to a mismatch between the dimensions of the product and those of the user. To combat this problem, statisticians have employed techniques such as ‘down sampling’ and ‘weighting’ to modify existing data sets so that they represent the target population better. Parkinson and Reed (2010) have discussed and reviewed these techniques in detail. They then propose an improved weighting procedure that can be applied to existing anthropometric data sets to synthesize new data sets that correlate better with the target population. Their method exploits the correlations among measures to produce better estimates of the distributions of variables than are obtained by typical weighting procedures.
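The basic idea behind such weighting can be sketched simply: for a key dimension, compute the ratio of the target population's share in each size band to the sample's share, and weight each subject accordingly. This is a generic post-stratification sketch, not Parkinson and Reed's procedure, and the cut points and target shares below are assumptions:

import numpy as np

def post_stratification_weights(stature_mm, cut_points, target_props):
    """Return one weight per subject so the binned stature distribution matches the target."""
    # Assumes every size band is represented in the sample.
    bins = np.digitize(stature_mm, cut_points)                       # size band per subject
    sample_props = np.bincount(bins, minlength=len(target_props)) / len(stature_mm)
    ratio = np.asarray(target_props) / sample_props                  # target share / sample share
    return ratio[bins]

# Hypothetical usage: three stature bands with assumed target-population shares.
# w = post_stratification_weights(df["stature_mm"].to_numpy(), [1650, 1800], [0.3, 0.5, 0.2])
# weighted_mean_chest = np.average(df["chest_girth_mm"], weights=w)

Weighted statistics computed with these weights approximate the target population rather than the surveyed one; Parkinson and Reed's method improves on this basic idea by exploiting the correlations among measures.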

To standardise the process of conducting anthropometric surveys in future and make them compatible, a new international standard ISO 15535:2012 (ISO, 2003) has been set up. It lays down the general requirements for establishing anthropometric databases that contain measurements taken as per ISO 7250–1, such as characteristics of the user population, sampling methods, measurement items, database format, anthropometric data sheets and statistics. With this new standard in place, it is expected that in future all anthropometric databases and their associated reports would be available in a standard format and that these various data sets would be fully comparable.


URL: https://www.sciencedirect.com/science/article/pii/B9780857096814500025

A Systematic Approach to Scenario Design

Maxime de Varennes, ... Alexandre Lafleur, in Clinical Simulation (Second Edition), 2019

23.3.1.3 Scenario Placement Within Training

Having determined the learners’ needs and described the target population, the educator must identify the learning outcomes of the program or activity. This is a turning point in the process, after which the identified learning outcomes become the main objectives around which instructional design (ID) is articulated. It is then that scenario placement within the overall training curriculum becomes important, since placement affects the scenario's design. The scenario must indeed be designed as a single unit within a broader learning experience. This reduction in scale entails a restriction in the application of the intended learning outcomes. In fact, even if the authentic setting of the simulation is especially appropriate for competency acquisition, it is likely unrealistic to pursue the development of a complete competency, a “complex know-how”29 composed of a rich set of knowledge, skills, and attitudes, within a learning experience that lasts mere minutes.30


URL: https://www.sciencedirect.com/science/article/pii/B9780128156575000231

How to use the Integrated-Change Model to design digital health programs

Kei Long Cheung PhD, ... Hein de Vries PhD, in Digital Health, 2021

4.3.2 Qualitative research: exploring relevant beliefs

We need to explore the potentially important beliefs of the target population surrounding the relevant determinants of behavior. Explorative open-ended questions can be based on the I-Change Model, addressing knowledge, risk perceptions, cognizance, attitude, perceived social norms, perceived social support, perceived social modeling, self-efficacy, and action planning. For this purpose, qualitative research methods can be used, for instance, via focus groups or interviews. Interviews are often employed to provide insights into the beliefs that may be paramount. For example, beliefs about alcohol consumption could be understood through interviews with people who abuse alcohol and with those who do not. Once these beliefs are extracted, we will be able to choose the matching behavioral determinants to be acted on in our behavior change program so that they foster the healthy behavior.


URL: https://www.sciencedirect.com/science/article/pii/B9780128200773000080

Causal Inference and Medical Experiments

Daniel Steel, in Philosophy of Medicine, 2011

4 External Validity And Extrapolation

Experiments typically aim to draw conclusions that extend beyond the immediate context in which they are performed. For instance, a clinical trial of a new treatment for breast cancer will aim to draw some conclusion about the effectiveness of the treatment among a large population of women, and not merely about its effectiveness among those who participated in the study. Similarly, a study of the carcinogenic effects of a compound on mice usually aims to provide some indication of its effects in humans. External validity has to do with whether the causal relationships learned about in the experimental context can be generalized in this manner. External validity is an especially obvious challenge in research involving animal models, wherein it is referred to as “animal extrapolation” (cf. [Calabrese, 1991; Steel, 2008]). This section will focus on external validity as it concerns animal research, since there is a more extensive literature on that topic.

Any extrapolation is an inference by analogy from a base to a target population. In animal extrapolation, the base is an animal model (say, laboratory mice) while humans are usually the target population. In the cases of concern here, the claim at issue in the extrapolation is a causal generalization, for instance, that a particular substance is carcinogenic or that a new vaccine is effective. The most straightforward approach to extrapolation is what can be called “simple induction.” Simple induction proposes the following rule:

Assume that the causal generalization true in the base population also holds approximately in related populations, unless there is some specific reason to think otherwise.

In other words, simple induction proposes that extrapolation be treated as a default inference among populations that are related in some appropriate sense. There are, however, three aspects of the above characterization of simple induction that stand in obvious need of further clarification. In particular, to apply the above rule in any concrete case, one needs to decide what it is for a causal generalization to hold approximately, to distinguish related from unrelated populations, and to know what counts as a reason to think that the extrapolation would not be appropriate. It seems doubtful that a great deal can be said about these three issues in the abstract — the indicators of related populations, for instance, can be expected to be rather domain specific. But it is possible to give examples of the sorts of considerations that may come into play.

Simple induction does not enjoin one to infer that a causal relationship in one population is a precise guide to that in another — it only licenses the conclusion that the relationship in the related target population is “approximately” the same as that in the base population. It is easy to see that some qualification of this sort is needed if simple induction is to be reasonable. Controlled experiments generally attempt to estimate a causal effect, that is, the probability distribution of an effect variable given interventions that manipulate the cause (cf. [Pearl, 2000, p. 70]).

In biology and social science, it is rare that a causal effect in one population is exactly replicated even in very closely related populations, since the probabilities in question are sensitive to changes in background conditions. Nevertheless, it is not rare that various qualitative features of a causal effect, such as positive relevance, are shared across a wide range of populations. For example, tobacco smoke is a carcinogen among many human and non-human mammal populations. Other qualitative features of a causal effect may also be widely shared; for instance, a drug therapy may promote an outcome in moderate dosages but inhibit it in large ones across a variety of species even though the precise effect differs from one species to the next. In other cases, the approximate similarity may also refer to quantitative features of the causal effect — the quantitative increase in the chance of lung cancer resulting from smoking in one population may be a reasonably good indicator of that in other closely related populations. In the case of extrapolation from animal models, it is common to take into account scaling effects due to differences in body size, since one would expect that a larger dose would be required to achieve the same effect in a larger organism (cf. [Watanabe et al., 1992]). Thus, in such cases, the scaling adjustment would constitute part of what is covered by the “approximately.” Depending on the context, the term “approximate” could refer to similarity with regard to any one of the aspects of the causal effect mentioned above, or other aspects, or any combination of them.
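One common convention for such scaling adjustments, assumed here purely for illustration (it is not claimed to be the approach of Watanabe et al.), is allometric scaling, in which the total dose scales with body weight raised to a fractional power:

def scale_dose_per_kg(dose_mg_per_kg, source_weight_kg, target_weight_kg, exponent=0.75):
    """Scale a per-kilogram dose between organisms, assuming total dose ~ weight**exponent."""
    # If total dose scales as weight**exponent, the per-kg dose scales as weight**(exponent - 1).
    return dose_mg_per_kg * (target_weight_kg / source_weight_kg) ** (exponent - 1)

# Hypothetical example: 10 mg/kg in a 0.25 kg rat corresponds, under the 3/4-power
# assumption, to roughly 2.4 mg/kg in a 70 kg human.
human_dose = scale_dose_per_kg(10.0, source_weight_kg=0.25, target_weight_kg=70.0)

Under this convention the larger organism receives a larger total dose but a smaller dose per kilogram, which is one way of cashing out the “approximately” in quantitative extrapolations.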

Simple induction is also restricted in allowing extrapolations only among related populations, a qualification without which the rule would obviously be unreasonable: no population can serve as a guide for every other. In biology, phylogenetic relationships are often used as a guide to relatedness for purposes of extrapolation: the more recent the last common ancestor, the more closely related the two species are (cf. [Calabrese, 1991, pp. 203-4]). A phylogenetic standard of relatedness also suggests some examples of what might count as a specific reason to think that the base population is not a reliable guide for the target population. If the causal relationship depends upon a feature of the model not shared by its most recently shared common ancestor with the target, then that is a reason to suspect that the extrapolation may be ill founded.

In many biological examples, the simple induction requires only some relatively minimal background knowledge concerning the phylogenetic relationships among the base and target populations, and its chief advantage lies in this frugality of information demanded for extrapolation. Yet the weakness of the simple inductive strategy also lies in exactly this frugality: given the rough criteria of relatedness, the strategy will inevitably produce many mistaken extrapolations. According to one review of results concerning interspecies comparisons of carcinogenic effects:

Based on the experimental evidence from the CPDB [Carcinogenic Potency Database] involving prediction from rats to mice, from mice to rats, from rats or mice to hamsters, and from humans to rats and humans to mice, … one cannot assume that if a chemical induces tumors at a given site in one species it will also be positive and induce tumors at the same site in a second species; the likelihood is at most 49% [Gold et al., 1992, p. 583].

A related challenge for the simple induction is that it is not rare that there are significant differences across distinct model organisms or strains. For instance, aflatoxin B1 causes liver cancer in rats but has little carcinogenic effect in mice [Gold et al., 1992, pp. 581-2; Hengstler et al., 2003, p. 491]. One would expect that extrapolation by simple induction is more frequently justified when the inference is from human to human than when it is from animal to human. But the difference here is likely one of degree rather than kind, since a variety of factors (e.g. gender, race, genetics, diet, environment, etc.) can induce distinct responses to the same cause among human populations. Thus, it is of interest to ask what grounds there are, if any, for extrapolation other than simple induction.

As one might expect, there are more and less optimistic answers to this question in the literature on animal extrapolation. On the more optimistic side, there are discussions of some circumstances that facilitate and some that hinder extrapolation, often presented in connection with detailed case studies. For instance, it has been observed that extrapolation is on firmer ground with respect to basic, highly conserved biological mechanisms [Wimsatt, 1998; Schaffner, 2001; Weber, 2005, pp. 180-4]. Others have observed that a close phylogenetic relationship is not necessary for extrapolation and that the use of a particular animal model for extrapolation must be supported by empirical evidence [Burian, 1993].8 These suggestions are quite sensible. The belief that some fundamental biological mechanisms are very widely conserved is no doubt a motivating premise underlying work on such simple model organisms as the nematode worm. And it is certainly correct that the appropriateness of a model organism for its intended purpose is not something that may merely be assumed but a claim that requires empirical support.

Yet the above suggestions are not likely to satisfy those who take a more pessimistic view of animal extrapolation. Objections to animal extrapolation focus on causal processes that do not fall into the category of fundamental, conserved biological mechanisms. For example, Marcel Weber suggests that mechanisms be conceived of as embodying a hierarchical structure, wherein the components of a higher-level mechanism consist of lower-level mechanisms, and that while lower-level mechanisms are often highly conserved, the same is not true of the higher-level mechanisms formed from them [2001, pp. 242-3; 2005, pp. 184-6]. So, even if one agreed that basic mechanisms are highly conserved, this would do little to justify extrapolations from mice, rats, and monkeys to humans regarding such matters as the safety of a new drug or the effectiveness of a vaccine. Since critiques of animal extrapolation are often motivated by ethical concerns about experimentation on animals capable of suffering (cf. [LaFollette and Shanks, 1996]), they primarily concern animal research regarding less fundamental mechanisms that cannot be studied in such simpler organisms as nematode worms or slime molds. Moreover, noting that the appropriateness of an animal model for a particular extrapolation is an empirical hypothesis does not explain how such a hypothesis can be established without already knowing what one wishes to extrapolate.

The most sustained methodological critique of animal extrapolation is developed in a book and series of articles by Hugh LaFollette and Niall Shanks [1993a; 1993b; 1995; 1996]. They use the term causal analogue model (CAM) to refer to models that can ground extrapolation and hypothetical analogue model (HAM) to refer to those that function only as sources of new hypotheses to be tested by clinical studies. According to LaFollette and Shanks, animal models can be HAMs but not CAMs. A similar, though perhaps more moderate, thesis is advanced by Marcel Weber, who maintains that, except for studies of highly conserved mechanisms, animal models primarily support only “preparative experimentation” and not extrapolation [2005, pp. 185-6]. Weber's “preparative experimentation” is similar to LaFollette and Shanks' notion of a HAM, except that it emphasizes the useful research materials and procedures derived from the animal model in addition to hypotheses [2005, pp. 174-6, 182-3].

LaFollette and Shanks' primary argument for the conclusion that model organisms can function only as HAMs and not as CAMs rests on the proposition that if a model is a CAM, then “there must be no causally relevant disanalogies between the model and the thing being modeled” [1995, p. 147; italics in original]. It is not difficult to show that animal models rarely if ever meet this stringent requirement. A second argument advanced by LaFollette and Shanks rests on the plausible claim that the appropriateness of a model organism for extrapolation must be demonstrated by empirical evidence [1993a, p. 120]. LaFollette and Shanks argue that this appropriateness cannot be established without already knowing what one hopes to learn from the extrapolation.

We have reason to believe that they [animal model and human] are causally similar only to the extent that we have detailed knowledge of the condition in both humans and animals. However, once we have enough information to be confident that the non-human animals are causally similar (and thus, that inferences from one to the other are probable), we likely know most of what the CAM is supposed to reveal [1995, p. 157].

LaFollette and Shanks presumably mean to refer to their strict CAM criterion when they write “causally similar,” but the above argument can be stated independently of that criterion. Whatever the criterion of a good model, the problem is to show that the model satisfies that criterion without already knowing what we hoped to learn from the extrapolation.

Those who are more optimistic about the potential for animal extrapolation to generate informative conclusions about humans are not likely to be persuaded by these arguments. Most obviously, LaFollette and Shanks' criterion for a CAM is so stringent that it is doubtful it could be satisfied even by two human populations. Nevertheless, LaFollette and Shanks' arguments are valuable in that they focus attention on two challenges that any adequate positive account of extrapolation must address. First, such an account must explain how it can be possible to extrapolate even when some causally relevant disanalogies are present. Secondly, an account must be given of how the suitability of the model for extrapolation can be established without already knowing what one hoped to extrapolate.

One intuitively appealing suggestion is that knowledge of the mechanisms underlying the cause and effect relationship can help to guide extrapolation. For example, imagine two machines A and B. Suppose that a specific input-output relationship in machine A has been discovered by experiment, and the question is whether the same causal relationship is also true of machine B. But unfortunately, it is not possible to perform the same experiment on B to answer this question. Suppose, however, that it is possible to examine the mechanisms of the two machines — if these mechanisms were similar, then that would support extrapolating the causal relationship from one machine to the other. Thus, the mechanisms approach to extrapolation suggests that knowledge of mechanisms and factors capable of interfering with them can provide a basis for extrapolation. This thought is second nature among molecular biologists, and some authors concerned with the role of mechanisms in science have suggested it in passing (cf. [Wimsatt, 1976, p. 691]). Although appealing, the mechanisms proposal stands in need of further elaboration before it can answer the two challenges described above. First, since there inevitably will be some causally relevant differences between the mechanisms of the model and target, it needs to be explained how extrapolation can be justified even when some relevant differences are present. Secondly, comparing mechanisms would involve examining the mechanism in the target — but if the mechanism can be studied directly in the target, it is not clear why one needs to extrapolate from the model. In other words, it needs to be explained how the suitability of the model as a basis for extrapolation could be established given only partial knowledge of the mechanism in the target. Further elaboration of the mechanisms approach to extrapolation that addresses these issues can be found in [Steel, 2008].


URL: https://www.sciencedirect.com/science/article/pii/B9780444517876500064

Foreword

C.J. Kirkpatrick, in Standardisation in Cell and Tissue Engineering, 2013

One of the burning questions for publisher and reviewer alike is the intended target population. To whom is the book addressed and how much background knowledge in the field is required to benefit from it? Dr Salih, who not only edits the book but is also an active scientific coauthor, has successfully brought together a group of expert authors who in their vitae represent the heterogeneity of the field itself. Thus, although most are colleagues from academic institutions, they cover the spectrum from pure to applied science and have invested considerable effort in demonstrating how state-of-the-art technologies in the individual sciences, whether cell culture or surface chemical analysis, must be applied in standardisation procedures. This is anything but an academic exercise; rather, it involves a necessary state of mind in approaching cell and tissue engineering. The book is thus highly relevant for those entering the field, especially younger colleagues, whether from the academic or the industrial side. Good practical advice is given, for example, on sources of expert help, both in the form of literature citations and reference to organisations with various types of specialisation. Moreover, this is clothed in a language which is, of course, multidisciplinary, but nevertheless not overloaded with technical jargon. Naturally, as the book was intentionally compiled in compact form, there is no claim whatsoever to be an exhaustive treatise. Thus, the reader should approach it with a view to gaining insight into how established methodologies in the materials and life sciences can be used, and the resulting data interpreted, in the light of standardisation criteria, this process being a pre-requisite for clinical translation. Despite the academic stimulation inevitably generated by interdisciplinary approaches, we should never lose sight of the fact that biomaterials are per definitionem intended for human application. The latter serves to focus our attention on the constant challenges and dangers in extrapolating from even the most sophisticated of standardisation models and technologies to the human application.


URL: https://www.sciencedirect.com/science/article/pii/B9780857094193500169

Physical Therapy and Rehabilitation

Adam D. Goodworth, ... Marko B. Popovic, in Biomechatronics, 2019

12.2 Learning Objectives

At completion of this chapter, students will be able to

1. Understand how different target populations require different technology and approaches.

2. Understand how a human-centered design approach can improve the fit of products for people with disabilities.

3. Describe the current treatment strategies for addressing populations (injured, disabled, and elderly) with some type of need for therapy, in a wide variety of spaces.

4. Describe technical examples that span the therapy field for functional recovery:
   a. upper-limb therapy,
   b. lower-limb therapy, and
   c. balance therapy.


URL: https://www.sciencedirect.com/science/article/pii/B9780128129395000124

Adaptable digital human models from 3D body scans

Femke Danckaers, ... Jan Sijbers, in DHM and Posturography, 2019

1 Introduction

When designing wearables, realistic virtual mannequins that represent the body shapes occurring in a specific target population are valuable tools for product developers. Such tools (digital human models) are already widespread (Caporaso et al., 2017, pp. 479–488; Kakizaki, Urii, & Endo, 2016; Mochimaru, 2017; Shen et al., 2017) but are often an oversimplified representation of the population, based on 1D measurements. As a result, the full 3D shape variation is not incorporated (Blanchonette, 2010; Moes, 2010; van der Meulen & Seidl, 2007): the body shape is modified only by scaling body parts, which is not sufficient for designing products that have to fit tightly to the body (Bragança, Arezes, Carvalho, & Ashdown, 2016).

An alternative way to capture the variability of shapes in a population is to represent these shapes by statistical shape models (SSMs) (Cootes, Taylor, Cooper, & Graham, 1995; Park & Reed, 2015). Statistical shape modeling is a well-known technique in 3D anthropometric analyses for mapping out the variability of body shapes, and it allows a better understanding of the variation in shapes present in a population. SSMs are highly valuable for product designers because ergonomic products for a specific target population can be designed from them. By adapting the parameters of the SSM, a new, realistic shape can be formed. Product developers may exploit SSMs to design virtual design mannequins and to explore the body shapes belonging to a percentile of a target group, for example, to visualize extreme shapes. Moreover, an SSM makes it possible to simulate a specific 3D body shape (Park, Lumeng, Lumeng, Ebert, & Reed, 2015), which is useful for customization in a (possibly automated) workflow.
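As a minimal illustration of how such a model can be built and sampled, the PCA-based sketch below is generic rather than the authors' specific pipeline, and it assumes the scans have already been brought into point-to-point correspondence:

import numpy as np

def build_ssm(shapes):
    """shapes: array of shape (N, V, 3), N registered scans with V corresponding vertices."""
    X = shapes.reshape(shapes.shape[0], -1)           # flatten each scan into one row
    mean_shape = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    modes = Vt                                        # principal shape modes (rows)
    stddevs = s / np.sqrt(X.shape[0] - 1)             # standard deviation along each mode
    return mean_shape, modes, stddevs

def synthesize(mean_shape, modes, stddevs, coeffs):
    """Generate a new body shape from mode coefficients expressed in standard deviations."""
    k = len(coeffs)
    offsets = (np.asarray(coeffs) * stddevs[:k]) @ modes[:k]
    return (mean_shape + offsets).reshape(-1, 3)

# Hypothetical usage: a virtual mannequin two standard deviations along the first mode.
# mean, modes, sd = build_ssm(registered_scans)
# variant = synthesize(mean, modes, sd, coeffs=[2.0])

Varying the coefficients within a plausible range (for example, plus or minus two standard deviations) spans the body shapes the model attributes to the population, which is one way boundary mannequins for design can be generated.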

When scanning people in a standing pose, posture differences may occur across the population. If SSMs are built from these 3D scans, body posture will have a significant and often undesired influence on the shape modes. Even when the subjects are instructed to maintain a standard pose, slight posture changes are unavoidable, especially in the region of the arms. As a result, some shape variances are unintentionally correlated with posture. Such posture changes are, for example, also present in the Civilian American and European Surface Anthropometry Resource (CAESAR) database (Robinette, Daanen, & Paquet, 1999). In addition, this results in a noncompact model, because posture variances lead to large deviations from the mean shape. A more compact SSM, by contrast, significantly reduces computational cost, because fewer modes are necessary to describe the population. In this chapter, we propose a computationally inexpensive framework to build a posture-invariant SSM by capturing and correcting the posture of each instance.


URL: https://www.sciencedirect.com/science/article/pii/B9780128167137000337

Defining the Medical Problem

Joseph Tranquillo PhD, ... Robert Allen PhD, PE, in Biomedical Engineering Design, 2023

3.7.3 Developing Questions

Generating good questions is an art form and is the basis for surveys, interviews, and focus groups. The general goal of a question is to obtain some particular desired information. For example, you may guess from the literature that it takes about 30 minutes to close a surgical site and that the surgeon does the suturing. In an interview, survey, or focus group, rather than asking “does it take you 30 minutes to close the site?” (which leads the subject), you should ask, “who closes the site and how long does it typically take?”

Some additional considerations in developing questions are:

Learn as much as you can about your target population so that you ask questions that they are best suited to answer

Ask open-ended questions that do not prompt a simple yes or no answer, but rather elicit a nuanced judgement or explanation

Be careful not to bias your questions by assuming that there is a “correct” answer. Likewise, be mindful of how you phrase a question; stating that a surgery has a 10% risk of death is perceived very differently than stating that it has a 90% success rate

Generate questions using a tried-and-true method used by journalists: the Five W’s (Who, What, Where, When, and Why)

Develop a long list of questions and then refine, recombine, and rephrase. Make sure each question targets some desired information

Arrange your questions in order of importance, asking the most critical ones first

It is often helpful to begin with an easier question that breaks the ice

Reserve time toward the end of the survey, interview, or focus group for participants to add their own thoughts on how to solve the problem.

Breakout Box 3.7 contains examples of questions that were generated for a survey of patients to better understand noncompliance. Make a similar table to plan questions for your own survey, focus group, or interview.

Breakout Box 3.7

Designing Questions to Study Noncompliance

Imagine you are working with a cardiologist who is studying noncompliance among patients prescribed to use a wearable defibrillator while they wait to have one implanted. The goal of your project is to improve the current design so patients become more compliant. To understand the problem, it is critical to know why patients are noncompliant. Table 3.3 is an example of how you might plan the questions to ask in a survey, interview, or focus group.


URL: https://www.sciencedirect.com/science/article/pii/B9780128164440000031

What is it called when subjects in a study do not adequately represent the target population?

A sampling error occurs when the sample used in the study is not representative of the whole population. Sampling is the process of selecting a number of observations from a larger population for analysis.
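A small simulation makes this concrete; the population and enrollment probabilities below are invented purely for illustration. When one subgroup is more likely to end up in the sample, the sample estimate drifts away from the true population value:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100,000 people, 30% of whom have the condition of interest.
population = rng.random(100_000) < 0.30
true_prevalence = population.mean()                     # about 0.30

# Representative sample: every member equally likely to be chosen.
random_sample = rng.choice(population, size=500, replace=False)

# Non-representative sample: people with the condition are three times as likely to enroll.
enroll_prob = np.where(population, 0.03, 0.01)
biased_sample = population[rng.random(population.size) < enroll_prob]

print(true_prevalence, random_sample.mean(), biased_sample.mean())
# Expected output: roughly 0.30, about 0.30, and about 0.56, respectively.

The representative sample estimates the population value well; the non-representative sample does not, no matter how many subjects it contains.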

What are the 4 essential elements for evaluation of qualitative research?

Integral to the quality framework is the idea that all qualitative research must be: credible, analyzable, transparent, and useful. These four components or criteria are fundamental to the quality framework and its ability to guide researchers in designing their qualitative research studies.

Which measurement would be used to examine the amount of empathy communicated by participants in a study?

In response to that need, the Jefferson Scale of Empathy (JSE) was developed (Hojat et al. 2001, 2002b). The JSE is a 20-item instrument specifically developed to measure empathy in the context of health professions education and patient care for administration to health professions students and practitioners.

Which type of qualitative research questions emphasizes trying to understand the culture of a group people?

Ethnography is used when a researcher wants to study a group of people to gain a larger understanding of their lives or specific aspects of their lives. The primary data collection method is through observation over an extended period of time.