  
  1. What are the research questions?
  2. What are the independent and dependent variables?
  3. Why were these statistical analyses chosen to answer the research questions?
  4. What are the answers to the questions and how do you know?

A COMPARISON OF VIDEO MODELING, TEXT-BASED INSTRUCTION, AND NO INSTRUCTION FOR CREATING MULTIPLE BASELINE GRAPHS IN MICROSOFT EXCEL

BRYAN C. TYNER AND DANIEL M. FIENUP QUEENS COLLEGE AND THE GRADUATE CENTER, CUNY


Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed.

Key words: video modeling, task analysis, graphing

The ability to analyze graphed data is a socially significant class of behavior for scientists and professionals in applied behavior analysis. Graphs enable visual analysis, interpretation, and dissemination of behavioral data. Graphs also facilitate the evaluation of experimental control, which may enhance treatment evaluation and application (Fahmie & Hanley, 2008). For these reasons, data organization and graph design are competencies in the Behavior Analyst Certification Board (2012) task list; however, learning to graph can be difficult. Task analyses (TAs) have been published for a variety of graphing methods (e.g., Dixon et al., 2009). Task analyses are typically ordered lists of behavior, often presented with pictures of relevant stimuli. Empirical evidence supports the effectiveness of TAs for graphing instruction. For example, Dixon et al. (2009) observed that participants who used an updated TA constructed graphs in less time and with fewer errors than those who used a TA for an older version of the software. To date, no studies have compared text-based TA instruction to any other instructional format.

The static nature of text may limit outcomes when one is learning dynamic computer tasks. A more dynamic instructional approach may be video modeling (VM): the presentation of target responses in video format. Video modeling promotes accurate responding by demonstrating desired task performance and making relevant stimuli more salient, and it has been found to be effective for teaching a variety of target behaviors, including implementation of behavioral protocols by staff and parents (e.g., Catania, Almeida, Liu-Constant, & DiGennaro Reed, 2009). Although VM is prevalent online for a wide range of computer tasks, its use for complex skill acquisition with typically developing individuals is largely underresearched. Because of the importance of graphing to behavior analysts, the difficulty of graphing-skill acquisition, and the lack of research on graphing instruction, the present study compared the effects of text-based, video-based, and no instruction on the accuracy and speed of constructing multiple baseline graphs.

This research was conducted by the first author in partial fulfillment of the requirements for a PhD in Psychology through the Graduate Center, CUNY. Address correspondence to Daniel M. Fienup, Department of Psychology, Queens College, Flushing, New York 11367 (e-mail: daniel.fienup@qc.cuny.edu). doi: 10.1002/jaba.223

Journal of Applied Behavior Analysis, 2015, 48, 701–706, Number 3 (Fall)

METHOD

Participants and Setting

Sixty-six undergraduate students participated and earned extra credit toward their experimental psychology courses, in which single-subject research design and graphing were competencies. A power analysis, based on data reported in Dixon et al. (2009), revealed that a sample size of 16 participants per group was necessary for at least 80% power. In total, 22 participants were randomly assigned to each group. Instruction took place in a research lab containing seven computer workstations, each equipped with a Windows-based computer that included a 48.26-cm monitor, keyboard, and mouse, and Microsoft Excel 2007 spreadsheet software.
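For readers who want to reproduce this kind of a priori sample-size calculation, a minimal sketch using statsmodels follows. The article does not report the assumed effect size, so the Cohen's f value below is a placeholder assumption, not the value the authors used.

```python
# Hypothetical sketch: a priori power analysis for a three-group one-way ANOVA.
# The effect size (Cohen's f) is a placeholder; the article does not report it.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(
    effect_size=0.45,  # placeholder Cohen's f (a "large" effect by convention)
    k_groups=3,        # text, video, and no-instruction groups
    alpha=0.05,
    power=0.80,
)
per_group = total_n / 3  # solve_power returns the total sample size
print(f"total N = {total_n:.1f}, per group = {per_group:.1f}")
```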

Materials

Previously published TAs (Dixon et al., 2009; Lo & Konrad, 2007) informed the development of a TA for constructing multiple baseline graphs, which was then pilot tested with naive and experienced participants. The final version of the TA described the steps for making a multiple baseline graph, including (a) organizing the data table and inserting the graph, (b) formatting the data series and chart area, (c) changing axis values, (d) aligning data points with tic marks, (e) inserting chart and axis labels and phase-change lines, (f) stacking and grouping graph panes, and (g) copying and pasting the graph and all components as an image for publication submission purposes.
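As an illustrative aside only (the study's procedure used Microsoft Excel 2007, not Python), the sketch below shows the same multiple baseline structure described by the TA, stacked panes with a shared session axis, staggered phase-change lines, and axis labels, drawn with matplotlib on hypothetical data.

```python
# Illustrative sketch: a multiple baseline graph drawn with matplotlib on
# hypothetical data. The study itself taught this layout in Excel 2007.
import matplotlib.pyplot as plt

sessions = list(range(1, 13))
data = {
    "Behavior 1": [2, 3, 2, 8, 9, 9, 10, 9, 10, 10, 9, 10],
    "Behavior 2": [1, 2, 2, 2, 3, 9, 10, 9, 10, 10, 10, 9],
    "Behavior 3": [3, 2, 3, 2, 3, 2, 3, 8, 9, 10, 9, 10],
}
phase_changes = {"Behavior 1": 3.5, "Behavior 2": 5.5, "Behavior 3": 7.5}  # staggered

fig, axes = plt.subplots(len(data), 1, sharex=True, figsize=(5, 7))
for ax, (label, ys) in zip(axes, data.items()):
    ax.plot(sessions, ys, marker="o", color="black", linewidth=1)
    ax.axvline(phase_changes[label], linestyle="--", color="black")  # phase-change line
    ax.set_ylim(0, 11)
    ax.set_ylabel("Responses")
    ax.set_title(label, loc="left", fontsize=9)
    for side in ("top", "right"):            # open axes, single-subject style
        ax.spines[side].set_visible(False)

axes[-1].set_xlabel("Sessions")
fig.tight_layout()
fig.savefig("multiple_baseline.png", dpi=300)  # export as an image for submission
```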

A tutorial was developed using PowerPoint presentation software, Windows Media Player, and Camtasia Studio 7 to present the TA to all groups in one of three formats: (a) text-based instruction, (b) video-based instruction, or (c) no-instruction control. The tutorial first displayed a button with the word "Begin" that recorded the start time of instruction and then presented the first step of the TA. Buttons permitted navigation backward or forward one slide and to a table of contents. After the last instructional slide, a slide presented a button with the words "I'm done" that recorded the end time of instruction.

Text-based tutorial. The text-based tutorial included 41 slides that contained two to 15 sentences each. Twelve slides included screenshot images of the software menus and the graph in progress, as described in the tutorial.

Video tutorial. The video tutorial was identical to the text-based tutorial except that the steps of the TA were narrated in 30-s to 3-min video segments during a screen-capture video recording of the first author performing each step. The video zoomed in and out to focus on relevant stimuli on the screen. Controls were available for participants to play, pause, fast forward, rewind, and adjust the volume.

No-instruction control tutorial. The no-instruction control tutorial presented the same slides as the text version, describing conventions for each graph element; however, text that described how to accomplish formatting changes was omitted.

Demographics and social validity questionnaire. After completing the tutorial, participants completed a questionnaire regarding their previous course, computer, and graphing experience and the acceptability of the goals, procedures, and outcomes of the tutorial. Social validity was assessed using a 5-point Likert-type scale (1 = strongly disagree to 5 = strongly agree).

Dependent Variables

The dependent variables were graphing accuracy and duration to graph completion. Graphing accuracy was defined as the number of graph elements scored correct using a 50-question checklist of graph components linked to the steps of the TA. The checklist was tested by scoring pilot participants' graphs and was revised until interobserver agreement on novel graphs reached at least 90%. Duration to graph completion was defined as the number of minutes that passed from the start to end times.

Procedure

A between-groups design compared text, video, and no-instruction control conditions. Experimenters used block randomization to assign participants to groups.
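The article does not describe how the block randomization was implemented; a generic sketch of blocked assignment to the three conditions (each condition appearing once per block, in shuffled order) might look like the following. All names here are hypothetical.

```python
# Generic sketch of block randomization into three conditions (22 blocks of 3);
# the article does not specify how its randomization was carried out.
import random

def block_randomize(n_blocks, conditions, seed=None):
    rng = random.Random(seed)
    assignments = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)  # each condition appears once per block, in random order
        assignments.extend(block)
    return assignments

schedule = block_randomize(22, ["text", "video", "control"], seed=1)
print(schedule[:6])  # assignment order for the first six participants
```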


Before a participant arrived at the laboratory, the researcher prepared the tutorial so that the "Begin" button was displayed. The researcher arranged computer desktops with the tutorial on the left side of the screen and a new spreadsheet open on the right. After seating participants at a computer station, the researcher provided basic instructions to begin and end the tutorial. No feedback or instructions for graphing were provided other than those presented in the tutorial. When participants completed the graph and clicked the "I'm done" button, the researcher gave them a paper copy of the demographics and social validity questionnaire to complete before leaving. To ensure treatment integrity, researchers followed procedures according to a script and checked off each step as it was completed. Self-reported treatment integrity was 100%.

Interobserver Agreement

A masters-level research assistant independently coded 33% of all graphs. Items on the checklist scored by both observers as correct or incorrect were rated as agreements. Interobserver agreement was calculated for each graph by dividing the number of checklist items scored in agreement by the total number of checklist items and converting the result to a percentage. Mean agreement was 94% (SD = 5.11%; range, 82% to 100%) for all graphs. Only one participant's graph scored 82% agreement; all others were 88% or higher.
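A minimal sketch of the agreement calculation described above (item-by-item agreement on the checklist, expressed as a percentage). The function and the example scores are hypothetical, not the study's data.

```python
# Minimal sketch of the interobserver agreement calculation described above:
# items scored the same (correct/incorrect) by both observers, divided by the
# total number of checklist items, converted to a percentage.
def interobserver_agreement(primary, secondary):
    if len(primary) != len(secondary):
        raise ValueError("Both observers must score the same checklist items.")
    agreements = sum(p == s for p, s in zip(primary, secondary))
    return 100.0 * agreements / len(primary)

# Hypothetical scores for a 10-item excerpt of the 50-item checklist (True = correct).
obs1 = [True, True, False, True, True, False, True, True, True, False]
obs2 = [True, True, False, True, False, False, True, True, True, False]
print(f"IOA = {interobserver_agreement(obs1, obs2):.0f}%")  # 90%
```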

RESULTS AND DISCUSSION

A one-way ANOVA and a chi-square (χ²) statistic indicated no significant differences between groups in preexisting skills and experience (courses completed, computer skills, frequency of Excel use, and number of graphs previously made). Separate one-way ANOVAs (with post hoc Tukey's HSD pairwise comparisons) were conducted to evaluate overall differences between groups in graphing accuracy and duration.

Figure 1 (top) displays the average number of graph elements formatted correctly by instruction group, along with individual performances. An overall significant difference was found for graphing accuracy, F(2, 63) = 15.03, p < .001. On average, participants who used VM formatted significantly more graph elements correctly than those who used text-based instruction, p = .007, and no instruction, p < .001. On average, participants who used text-based instruction formatted more graph elements correctly than those who received no instruction; however, this difference was not significant, p > .05. Figure 1 (bottom) displays minutes to graph completion. There was an overall significant difference for the duration to complete the graph, F(2, 63) = 12.41, p < .001. On average, participants who used VM constructed the graph in significantly fewer minutes than participants who used text (p < .026) and no instruction (p < .001). There was no significant difference in time between text-based and no instruction (p > .05).
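The raw accuracy and duration scores are not published, so the following is only a workflow sketch of the reported analysis (one-way ANOVA followed by Tukey's HSD) on hypothetical per-participant accuracy scores; it will not reproduce the F and p values above.

```python
# Workflow sketch of the reported analysis (one-way ANOVA with Tukey's HSD post hoc
# comparisons) using hypothetical checklist scores; raw data are not published.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
video = rng.normal(45, 3, 22)    # hypothetical checklist scores (maximum = 50)
text = rng.normal(38, 6, 22)
control = rng.normal(30, 8, 22)

f_stat, p_value = stats.f_oneway(video, text, control)
print(f"F(2, 63) = {f_stat:.2f}, p = {p_value:.4f}")

scores = np.concatenate([video, text, control])
groups = ["video"] * 22 + ["text"] * 22 + ["control"] * 22
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```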

Table 1 displays social validity data. Questions 1 through 7 were compared using separate one-way ANOVAs (with Tukey's HSD), and Question 8 was compared using a χ² test. No differences were observed between groups on goal-related questions (Questions 1 and 2). Significant differences were found for all questions regarding instructional methods and outcomes, and pairwise tests revealed significant differences between VM and control and between text and no instruction, but not between VM and text. In other words, participants in all groups agreed that graphing skills are important; however, across all procedure and outcome statements, participants who completed VM and text instruction agreed that the procedures were acceptable and that their performance had improved, whereas no-instruction participants generally disagreed with these statements.
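For Question 8, the No/Yes counts in Table 1 are sufficient to illustrate the overall χ² test of recommendation by group; a sketch with scipy follows (the pairwise group comparisons reported in the table are not shown here).

```python
# Sketch of the chi-square test for Question 8 ("I would recommend the tutorial
# to others"), using the No/Yes counts reported in Table 1.
import numpy as np
from scipy.stats import chi2_contingency

#                   No  Yes
counts = np.array([[17,  5],    # control
                   [ 3, 19],    # text
                   [ 1, 21]])   # video

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```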

The present study extends previous research that has demonstrated the effectiveness of TA graphing instruction (Dixon et al., 2009; Lo & Konrad, 2007). The present study found that participants who received the TA in video format constructed more accurate graphs in fewer minutes than participants who received the same TA as text with images. Performance differences between VM and text were statistically significant, and differences observed between text and no instruction were not significant, demonstrating the relative utility of VM over text-based instruction for graphing-skill acquisition. Furthermore, both graphing accuracy and duration were less variable for participants who received VM compared to those who used text and no instruction (Figure 1), because the scores are distributed much more closely around the mean for VM than for text and no instruction. Therefore, VM may produce more predictable behavior change compared to the other two methods.

Figure 1. The number of correct checklist items (top) and the number of minutes to graph completion (bottom). Black bars represent group means. Data points represent individual participants' scores. Darker data points represent overlapping scores.


Generalizations of these findings to the overall effectiveness of VM may be premature, however, because this study compared only these two instructional methods.

This study benefits applied behavior analysis by providing a novel comparison between two empirically supported interventions for teaching typically developing learners a complex skill. These results may also be useful for guiding decisions about graphing instruction. Participants who used VM demonstrated more socially valid graphing performance. The observed differences could determine whether a figure is publishable, interpreted with ease, or understood correctly. Practitioners who are capable of quickly graphing a client's behavior may be more likely to do so and will be better able to identify behavior–environment and functional relations in their client's behavior (Fahmie & Hanley, 2008), which may improve treatment outcomes.

Additional research is needed to clarify the generality of the reported outcomes. First, participants' preexisting graphing skills were not directly measured. Some students reported having little to no computer skills or graphing experience, and pilot data suggested that completing both pretest and instruction graphs may be too time intensive. Second, practice effects threaten internal validity; given enough time, participants may have learned either correct or incorrect methods for performing each step, potentially limiting the degree to which differences between instructional methods were detected. Demographic data indicated that random assignment balanced preexisting computer skills and experience between groups; however, future research might measure preexisting skills more directly. A third limitation is one inherent to any comparison of multiple treatments: The ideal presentation of each tutorial is unknown. Future research should conduct parametric analyses of both text-based and video instruction to improve the effectiveness of both methods.

Table 1
Social Validity Questions and Responses

Question | Control M (SD) | Text M (SD) | Video M (SD) | Overall | C vs. T | C vs. V | T vs. V
1. Graphing skills are important. | 4.1 (0.5) | 4.4 (0.6) | 4.4 (0.5) | ns | | |
2. Students should be able to graph independently. | 3.7 (1.1) | 4.0 (1.0) | 4.0 (1.0) | ns | | |
3. The tutorial was a good method for teaching graphing. | 1.9 (0.8) | 3.9 (0.8) | 4.5 (1.1) | <.001 | <.001 | <.001 | ns
4. The tutorial improved my graphing skills. | 2.9 (1.2) | 4.9 (0.5) | 4.5 (0.6) | <.001 | <.001 | <.001 | ns
5. I can now construct a similar graph independently. | 2.6 (1.1) | 4.0 (0.8) | 4.0 (0.7) | <.001 | <.001 | <.001 | ns
6. I am better at graphing now than before using the tutorial. | 3.0 (1.0) | 3.9 (0.9) | 4.1 (0.6) | <.001 | <.002 | <.001 | ns
7. I would prefer using this tutorial over classroom lecture. | 2.1 (1.0) | 3.8 (1.0) | 4.3 (0.9) | <.001 | <.001 | <.001 | ns
8. I would recommend the tutorial to others (No/Yes). | 17/5 | 3/19 | 1/21 | <.001 | <.001 | <.001 | ns

Note. Questions 1 through 7 were rated on a 5-point Likert-type scale (1 = strongly disagree, 5 = strongly agree); Question 8 reports counts of No/Yes responses. C = control, T = text, V = video; ns = not significant.


The present study is one step in the continuing evaluation of instructional effectiveness for teaching complex tasks to typically developing adults. The data presented here may specifically inform decisions regarding graphing instruction for instructors and students of applied behavior analysis. Although the response effort for VM development may seem high, once it has been completed the video is a permanent product that can be widely disseminated and reused. The long-term investment may be smaller than it appears and should be considered when deciding whether the performance differences reported in the present study justify the increased response effort. Furthermore, free versions of video software are available that are increasingly simple to use and often come preinstalled on new computers. Future research may also publish tutorials for such software to minimize the response effort for educators.

REFERENCES

Behavior Analyst Certification Board. (2012). Task list (4th ed.). Retrieved from http://www.bacb.com/index.php?page=100165

Catania, C. N., Almeida, D., Liu-Constant, B., & DiGennaro Reed, F. D. (2009). Video modeling to train staff to implement discrete-trial instruction. Journal of Applied Behavior Analysis, 42, 387–392. doi:10.1901/jaba.2009.42-387

Dixon, M. R., Jackson, J. W., Small, S. L., Horner-King, M. J., Mui Ker Lik, N., Garcia, Y., & Rosales, R. (2009). Creating single-subject design graphs in Microsoft Excel 2007. Journal of Applied Behavior Analysis, 42, 277–293. doi:10.1901/jaba.2009.42-277

Fahmie, T. A., & Hanley, G. P. (2008). Progressing toward data intimacy: A review of within-session data analysis. Journal of Applied Behavior Analysis, 41, 319–331. doi:10.1901/jaba.2008.41-319

Lo, Y., & Konrad, M. (2007). A field-tested task analysis for creating single-subject graphs using Microsoft Office Excel. Journal of Behavioral Education, 16, 155–189. doi: 10.1007/s10864-006-9011-0

Received September 10, 2014 Final acceptance May 5, 2015 Action Editor, Mark Dixon

