Data function as the lifeblood of behavior analysis. How does a practicing behavior analyst know if a particular intervention worked? Data. By what means do behavior analytic journals evaluate the effectiveness of experiments? Data. And in what manner do insurance companies assess medical necessity for behavior analytic services? If you said data, right again!

Saying that data pervade behavior analysis would evoke nods of agreement from fellow behavior analysts (and maybe get you a beer at the conference if you said it enthusiastically). From Skinner to contemporary behavior analysis, data play a pivotal role in basic research and applied practice.

Yet the sheer amount of data, along with questions about how to use it properly, can overwhelm those entering the field. Functional assessment, single case design, and social validity all require data. And each of these behavioral applications uses data in significantly different ways.

Practitioners of the science of behavior (i.e., BCBAs and RBTs) often work directly with individuals. The BCBA conducts an assessment to determine areas of client needs and strengths. From the assessment data, a behavioral plan or program emerges. The behavior analyst or some other person (e.g., parent, Registered Behavior Technician, teacher) applies the intervention. Someone collects data and evaluates the intervention. However, intervention data and assessment data sometimes get mixed up.

IVs and DVs

What people examine in science can vary considerably. But all scientific experiments share commonalities, including the concept of variables. Behavior analysis qualifies as a science and deals with several kinds of variables: independent variables, dependent variables, extraneous variables, confounding variables, and controlled variables.

The independent variable (IV) and the dependent variable (DV) form the basis of understanding a functional relation (i.e., one variable operates in a specific manner as a function of another variable).

The IV represents the event or variable the behavior analyst attempts to control. In applied practice, those IVs go by the name of “interventions.”

On the other hand, the DV constitutes the variable measured or tested. The DV shows what effects, if any, the IV has. Some example IVs and DVs in behavioral experiments include:

  • A person may smoke fewer cigarettes (DV) when exposed to negative images portraying the terrible health effects of smoking (IV).
  • A child may raise her hand more often in class (DV) when the teacher praises her for hand raising (IV).
  • A telemarketer may keep a potential customer on the phone longer (DV) when he compliments the customer (IV).

The above examples illustrate the ease with which people can identify IVs and DVs. Yet sometimes BCBAs collect so much intervention and assessment data that the lines between the two can blur.

Accuracy Building Interventions

Many behavioral interventions help learners acquire content or become accurate with it: Discrete Trial Instruction (also called Discrete Trial Training and Discrete Trial Teaching), Natural Environment Teaching, and Pivotal Response Treatment, to name a few.

Discrete Trial Instruction (DTI) has become a very popular accuracy building intervention, especially for those working with children with autism. With DTI the behavior analyst implements five components: 1. Presenting the discriminative stimulus; 2. Providing a temporary prompt if necessary; 3. Waiting for the behavior to occur; 4. Providing a reinforcer; and 5. Finishing with a brief pause before beginning the next trial (Mayer, Sulzer-Azaroff, & Wallace, 2012). One discrete trial would capture the application of steps 1 through 5.

A behavior analyst working with a client would have a goal for DTI. The intervention may target color identification, gross motor imitation, or matching kitchen utensils. The behavior analyst would use DTI to help the client attain the goal (often expressed in percent correct such as “The child will imitate 25 two-step chains of motor behavior with 80% accuracy on 2 out of 3 sessions across a variety of trainers”).

The question becomes, what data should the BCBA chart? A review of program books or program binders reveals at least two practices.

  1. Some BCBAs will chart prompt levels. Did the BCBA or RBT use physical (full or partial), modeling, gestural, verbal, or visual prompts?
  2. BCBAs also record plus/minus (i.e., a plus for a correct response, a minus for an incorrect response). The plus/minus data then convert to a percentage. For example, five trials of matching might yield 3 pluses and 2 minuses; the data transform to 60% correct (3 correct out of 5 trials), as the sketch below shows.
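
To make the two recording practices concrete, here is a minimal Python sketch. The Trial record and percent_correct helper are hypothetical names invented for illustration; they simply capture a prompt level and a plus/minus result for each discrete trial and convert the session into the familiar percent-correct score.

    from dataclasses import dataclass

    @dataclass
    class Trial:
        """One discrete trial as it might appear on a data sheet."""
        target: str        # e.g., "match fork"
        prompt_level: str  # e.g., "full physical", "gestural", "independent"
        correct: bool      # plus (True) or minus (False)

    def percent_correct(trials):
        """Convert plus/minus records into a percent-correct score."""
        return 100 * sum(t.correct for t in trials) / len(trials)

    session = [
        Trial("match fork", "independent", True),
        Trial("match spoon", "gestural", True),
        Trial("match knife", "independent", False),
        Trial("match fork", "verbal", False),
        Trial("match spoon", "independent", True),
    ]
    print(f"{percent_correct(session):.0f}% correct")  # 3 of 5 -> 60% correct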

In both of the previous cases, the data tell a story. For the prompt levels, the data speak to BCBA or RBT behavior. Prompts come from the behavior analyst or behavior technician, and the data communicate what the adult did, not what the client did.

In the second example, the percent correct reports the accuracy of the client’s behavior. Specifically, how accurately did the client perform across the set of discrete trials? The client participated in 5 discrete trials and correctly completed 3 of them (60% correct).

The recorded data, once graphed, will show trend, level, and variability. But does any of the data answer the question, “Did the client meet his performance goal?” In other words, if a behavior analyst set a goal for a client that involved matching the five primary colors, do the recorded and graphed data answer the question?
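
As a quick aside on what trend, level, and variability look like as numbers, the minimal sketch below assumes one daily count of corrects. The summarize_chart helper is a hypothetical name; computing the weekly celeration (trend) from a least-squares fit to log10 counts and the high-to-low “bounce” ratio (variability) follows common Precision Teaching conventions, not any particular software’s calculation.

    import math
    import statistics

    def summarize_chart(daily_counts):
        """Estimate trend, level, and variability for daily counts.

        Assumes one count per successive calendar day and counts > 0,
        since counts get charted on a multiply (log) scale.
        """
        days = range(len(daily_counts))
        logs = [math.log10(c) for c in daily_counts]

        # Trend: slope of log10(count) per day, expressed as a
        # multiply-by factor per week (slope * 7 days). Requires Python 3.10+.
        slope = statistics.linear_regression(days, logs).slope
        celeration = 10 ** (slope * 7)

        # Level: the median count for the phase.
        level = statistics.median(daily_counts)

        # Variability: ratio of the highest to the lowest count ("bounce").
        bounce = max(daily_counts) / min(daily_counts)
        return celeration, level, bounce

    corrects = [4, 5, 7, 6, 9, 11, 12]  # one week of daily corrects
    cel, level, bounce = summarize_chart(corrects)
    print(f"celeration x{cel:.2f}/week, level {level}, bounce x{bounce:.2f}")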

Prompt data certainly do not get at how well a client can match the colors. And the discrete trial data reflect progress with the accuracy building intervention itself, not necessarily an independent assessment of client behavior. What better options exist for data-scrupulous BCBAs and RBTs?

Data Options

The behavior analyst must first decide what data to chart. A review of the IVs and DVs may help. Imagine the following experimental question:

Will the accuracy building intervention discrete trial instruction improve a client’s ability to label five primary color swatches?

The previous experimental question offers options. The behavior analyst could record data on the IV (DTI), the DV (labeling the five primary colors), or both. Monitoring data on the DV or IV provides the behavior analyst and behavior tech with different information.

Table 1. Differences between monitoring and analyzing data on the DV versus the IV

Table 1 does indicate advantages to recording, graphing, and analyzing data on the intervention (IV). And while the behavior analyst can learn about the intervention, having no data on the DV means not knowing the intervention’s effects on a client’s behavior. The behavior analyst must decide when, on what, and how much data to collect.
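
The minimal sketch below illustrates the difference in practice. The probe arrangement shown (checking the skill outside the teaching session, without prompts or programmed reinforcement) is one common way to assess the DV independently; it appears here as an assumption for illustration, not a prescribed method.

    def percent(trial_results):
        """Percent of trials scored correct (1 = correct, 0 = incorrect)."""
        return 100 * sum(trial_results) / len(trial_results)

    # IV data: trial-by-trial results recorded during the DTI intervention,
    # with prompts and programmed reinforcement in effect
    dti_trials = [1, 0, 1, 1, 0, 1]

    # DV data: an independent probe of labeling the five color swatches,
    # run outside the teaching session with no prompts or reinforcement
    probe_trials = [1, 0, 1, 0, 0]

    print(f"IV (intervention) accuracy: {percent(dti_trials):.0f}%")    # 67%
    print(f"DV (independent probe):     {percent(probe_trials):.0f}%")  # 40%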

The decision to collect data doesn’t end with a data binder. In Precision Teaching, and in the Chartlytics software, several options for displaying data exist. The display choices include Geometric Mean, First, Last, Stacked, Median, Summative, Best, and Worst. Each will provide different information for the DV and IV.

What does each option mean, and when should a behavior analyst use each? Table 2 provides the answers. Deciding whether to use First, Best, or Geometric Mean resides with the BCBA, the RBT, and in some cases the client. Part of the data process involves different people looking at the data. The options for focusing on one particular aspect of data display will depend on clinical circumstances.

Table 2. Definitions of the different points to display, with advantages of each
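
As a rough illustration of how those options collapse a day of data into a single charted point, consider the sketch below. It assumes the conventional definitions of each statistic and several same-day counts of corrects; the points_to_display helper is a hypothetical name, and for a deceleration target such as incorrects, Best and Worst would invert (lower counts count as better).

    import statistics

    def points_to_display(counts):
        """Collapse several same-day counts into one charted point
        per display option (Stacked instead plots every point)."""
        return {
            "First": counts[0],
            "Last": counts[-1],
            "Best": max(counts),   # for an acceleration target (corrects)
            "Worst": min(counts),
            "Median": statistics.median(counts),
            "Geometric Mean": statistics.geometric_mean(counts),
            "Summative": sum(counts),
        }

    # Corrects from six discrete trial runs in one day
    corrects = [3, 4, 4, 6, 5, 7]
    for name, value in points_to_display(corrects).items():
        print(f"{name}: {value:g}")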

Some other considerations include the following:

  1. Many Precision Teachers use Best when employing “celeration aims.” Using First may also work well when setting daily improvement goals or aims.
  2. Summative may help when multiple observers pass data sheets around. Summative data, as with duration data, have great utility when collected across more than one observer.
  3. Remember, none of the previously mentioned data options matter if the BCBA or RBT collects only one data point per day.

The options for displaying intervention or measurement data make for different narratives. An example of multifaceted data appears in Figure 1. In one session an RBT ran six discrete trials that produced six sets of data points.

The data come from Chartlytics and show the acceleration (corrects) and deceleration (incorrects) data for each trial, as well as the date, time, and person collecting the data. The recorded times provide an account of the pace at which the discrete trials occurred.

Figure 1. Data collected from one day of DTI

As shown in Figure 2 below, the BCBA can inspect the data with any of the previously mentioned options from Table 2. The graphical embodiment of the different “Points to Display” brings into focus the advantages listed in Table 2. Each chart segment has a Count Per Day vertical axis and a Successive Calendar Days horizontal axis. The yellow aim bands convey the aim or goal for the intervention data (i.e., corrects = 20 and incorrects = 1 to 0).

Figure 2. Data represented with different options

Contrast the First and Last data displays. The First discrete trial had more incorrects than corrects, while the Last flips the interpretation. The Geometric Mean and Median look very similar, suggesting the average for the data set lies at 4 or 5 corrects with 4 incorrects. The Best and Worst also differ drastically from each other, demonstrating the margins of improvement within the overall session. And Stacked paints a picture of all performance data in one view. The dispersal of corrects and incorrects reveals the variability and accuracy of the discrete trials.

The data views in Figure 2 all speak to the IV or intervention (DTI) and not an independent assessment of the skill (labeling five primary colors). Seeing all of the different displays and Points to Display contextualizes the data. A behavior analyst gains understanding and insight when inspecting the data with different options.

Conclusion

Focusing on intervention data communicates how special conditions arranged by the behavior analyst may affect client behavior. The answer to how much client behavior actually changes becomes visible with an independent assessment of the target behavior outside of the intervention. Having different options to display IV and DV data leads a data analyst (e.g., BCBA, RBT, client, stakeholder) down a fruitful path: discovering functional relations and what works best for each client.

Rick Kubina, Ph.D., BCBA-D
Director of Research, CentralReach
Professor, The Pennsylvania State University

References

Mayer, G. R., Sulzer-Azaroff, B., & Wallace, M. (2012). Behavior analysis for lasting change (2nd ed.). Cornwall-on-Hudson, NY: Sloan.

For more information see our Guide to ABA Data Collection Methods