First Monday

Student perceptions of the effectiveness of group and individualized feedback in online courses by Phil Ice, Lori Kupczynski, Randy Wiesenmayer, and Perry Phillips



Abstract
While an abundance of research exists on best practices in the face–to–face classroom, the same is not true for online learning. In this new and constantly evolving environment, researchers are just beginning to understand what constitutes effective learning strategies. One of the most widely recognized models for explaining online learning is the Community of Inquiry Framework (CoI). However, despite its recent empirical validation, the CoI provides only general indicators of effectiveness, not guides to specific practices. This study looks at a common practice, providing students with feedback, and assesses whether narrowly targeted, individualized feedback or group feedback is more effective. Through mixed methods research, the authors examined student preferences and strategies by student level, finding that while there is no single best solution, there are strategies that appear most appropriate for different learner levels. Suggestions for implementing best practices and directions for future research are also discussed.

Contents

Introduction
The study and its context
Findings
Discussion and conclusions

 


 

Introduction

Researchers in higher education have consistently viewed community as an essential element in achieving the higher levels of learning associated with discourse and collaborative learning (Garrison and Arbaugh, 2007). Continued study of this need, especially in higher education, has produced evidence that community can be created and sustained in online learning (Rovai, 2002; Shea, et al., 2005; Thompson and MacDonald, 2005). Moreover, a sense of community is significantly associated with students’ perceived learning (Richardson and Swan, 2003; Shea, 2006; Shea, et al., 2006). This research resulted in the creation, testing, and validation of the Community of Inquiry Framework (CoI), a framework for investigating an online community through the interaction of three presences — social, teaching, and cognitive (Garrison, et al., 2000; Swan, et al., 2008).

Social presence, in the context of online learning, is described as the ability of learners to project themselves socially and emotionally in order to not only represent themselves but also to be perceived as “real people” in mediated communication (Arbaugh, 2007; Garrison and Arbaugh, 2007). Cognitive presence is the extent to which learners are able to construct and confirm meaning through reflection and discourse and is argued to be a four–stage process of inquiry (Arbaugh, 2007; Garrison, et al., 2001; Garrison and Arbaugh, 2007). However, in the development of an educational online community, social– and content–related interactions between and among learners are not enough to ensure effective online learning (Garrison and Arbaugh, 2007).

To ensure this effectiveness, teaching presence is also necessary. Garrison, et al. (2000) describe teaching presence as the design, facilitation, and direction of cognitive and social processes in order for students to achieve personally meaningful and educationally worthwhile learning outcomes, through the three specific components of instructional design and organization, facilitation of discourse, and direct instruction. Instructional design and organization within teaching presence involves the planning and design of the structure, process, interaction and evaluation aspects of an online course (Anderson, et al., 2001). This may include the design and implementation of course lectures and activities. Facilitating discourse is described as the means by which students interact with and build upon the information provided in the instructional materials (Garrison and Arbaugh, 2007). This could include group assignments and projects. Finally, direct instruction, the key component addressed in this paper, is described as providing intellectual and scholarly leadership from a subject matter expert in order to diagnose comments for accurate understanding, inject sources of information, direct useful discussions, and scaffold learner knowledge to a higher level (Anderson, et al., 2001; Garrison and Arbaugh, 2007). This includes the ability of the instructor to:

  1. provide feedback that helps successfully focus on relevant issues;
  2. provide feedback that helps students understand their strengths and weaknesses; and,
  3. provide feedback in a timely fashion (Garrison, 2007; Swan, et al., 2008).

Despite the validation of the CoI on a multi–institutional basis (Swan, et al., 2008), many questions remain as to what factors influence each of the presences and their sub–components. As the direct instruction component addresses student satisfaction with feedback and the role of feedback in the learning process, it is important to understand what types of feedback strategies may impact the CoI indicators for direct instruction. This study specifically looks at whether feedback provided at the individual level as opposed to feedback provided to learners as a group impacts satisfaction and perceived learning.

 


The study and its context

The study was conducted from the summer of 2005 to the spring of 2008 at two institutions of higher education. In six fully online, graduate–level education courses, individualized and group feedback were provided to students in alternation through discussion threads. Two measures were used to assess the perceived value of each feedback type. First, on the end–of–course survey, students were asked to rate the value of each type of feedback and the degree to which it helped them learn (Appendix A). Second, at the end of each course, volunteers were asked to participate in follow–up interviews. Following suggestions by Patton (2002), an open–ended conversational approach was used to probe students’ satisfaction with the two different types of feedback used. Given the variety of directions the interviews might take, it was deemed most appropriate to let students guide the discussion once they were asked to talk about their preferences for individualized or group feedback. Questions related to perceived learning and the reasons for preferences were injected on a case–by–case basis.

Once the interviews were completed, transcripts were coded for recurring themes that served to explain student satisfaction and perceived learning (Patton, 2002), using the facilitation of discourse and direct instruction indicators of the CoI (Swan, et al., 2008) as a guide. An explanatory mixed methods design (Creswell and Plano Clark, 2007) was then used to merge the data and examine relationships based on student level.

For the study, a total of 89 end–of–course surveys were collected for a response rate of 62 percent. For each course section, eight volunteers were interviewed for a total of 48 interviews.
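As a rough arithmetic check (illustrative only; all counts are taken from the figures reported above), the sample figures are internally consistent:

```python
# Survey responses and response rate reported in the study
responses = 89
response_rate = 0.62

# Implied number of students surveyed (89 / 0.62 is roughly 143.5;
# the exact total is not reported, since the 62 percent rate is rounded)
implied_sample = responses / response_rate
print(round(implied_sample))  # about 144 students across the six courses

# Interview volunteers: eight per course section across six sections
interviews = 6 * 8
print(interviews)  # 48, matching the total reported above
```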

 


Findings

End–of–course surveys

Of the 89 students who completed the end–of–course survey, 61 were master’s students and 28 were doctoral students. Table 1 shows the number of students at each level who preferred individualized feedback, group feedback, or a mixture of feedback types.

 

Table 1
                                                                  Master’s–level students   Doctoral–level students
I prefer individualized feedback to my discussion board postings.   43 (70.49%)               7 (25.00%)
I prefer group feedback to my discussion board postings.             4 (6.56%)                9 (32.14%)
I prefer a mixture of individualized and group feedback to my
discussion board postings.                                          14 (22.95%)              12 (42.86%)

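The percentages in Table 1 follow directly from the raw counts (61 master’s and 28 doctoral respondents, as reported above); the following script is included purely as an illustration of that arithmetic:

```python
# Preference counts from Table 1, by student level
masters = {"individualized": 43, "group": 4, "mixture": 14}
doctoral = {"individualized": 7, "group": 9, "mixture": 12}

for label, counts in (("Master's", masters), ("Doctoral", doctoral)):
    total = sum(counts.values())  # 61 master's, 28 doctoral respondents
    for preference, n in counts.items():
        # Percentages match those shown in Table 1 when rounded to two places
        print(f"{label} {preference}: {n} ({100 * n / total:.2f}%)")
```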
 

Table 2 shows the mean responses to students’ perceptions of the impact of feedback strategies on perceived learning.

 

Table 2
                                                                          Master’s–level students   Doctoral–level students
Individualized feedback to my discussion board postings helped me learn.    4.68                      4.48
Group feedback to my discussion board postings helped me learn.             2.61                      4.55
A mixture of individualized and group feedback to my discussion board
postings helped me learn.                                                   3.80                      4.55

 

Interview data

Of the 48 students who participated in the interviews, 35 were master’s students and 13 were doctoral students. Interview transcripts were coded using indicators of the direct instruction component of teaching presence as a framework. Perceptions of positive impact were recorded. Table 3 illustrates student perceptions related to each of the indicators by feedback strategy and level.

 

Table 3
Indicator                                                     Master’s–level students — Positive impact   Doctoral–level students — Positive impact
Focusing discussion on relevant issues to improve learning.   22 Individual; 6 Group; 7 Mixture           1 Individual; 10 Group; 2 Mixture
Understanding strengths and weaknesses.                       27 Individual; 2 Group; 6 Mixture           0 Individual; 14 Group; 3 Mixture
Providing timely feedback.                                    24 Individual; 10 Group; 0 Mixture          3 Individual; 14 Group; 7 Mixture

 

With respect to master’s–level students, the overwhelming majority believed that individualized feedback was most effective in supporting the three indicators of direct instruction. To understand their reasoning, the following excerpts are informative (pseudonyms have been used to protect student and instructor confidentiality). The first is typical of comments made by the majority of master’s–level students who preferred individualized feedback:

Cindy — “When I got individual feedback it was very clear what I had done wrong. You asked about whether or not a certain type of feedback helped me understand the most relevant things; when I got individual feedback it was very clear whether or not I understood a point. Dr. Killjoy would send back my work and highlight where I had done something wrong or didn’t get the big picture. The same was true when she responded to me on the discussion board. It was always really clear where I was starting to get off course.

The same wasn’t true for when she would give us group feedback. I had to try to read between the lines to figure out how my work compared with the general comments. Then sometimes I still wasn’t sure and had to email Dr. Killjoy and ask if I was right about what I thought she meant.”

In contrast, the following statement was typical of the majority of doctoral students when they were asked about individualized feedback:

Lori — “I didn’t really get much out of individual feedback. It didn’t really consist of anything substantive. For example, individual feedback usually consisted of things like, ‘you make an excellent point here,’ ‘this paragraph shows that you understand the theoretical basis,’ etc. etc. In other words Dr. Ramos didn’t really tell me anything I didn’t already know. There was no depth to it. Now, compare that to the group feedback we received and there is a world of difference. The group feedback really helped me see the big picture.”

For the majority of master’s level students, group feedback was not perceived as being valuable. The following comments were typical of these students:

Bill — “When Dr. Walker put out group feedback it was ok, but the problem was that it was kind of like it was ... . I don’t really know how to say it except that it was vague. It seemed like he was trying to make all of these connections, but I didn’t really see how it applied to what I had done. It just didn’t help me understand where I was doing things right and where I was doing things wrong. Before you asked about which type of feedback helped me learn about relevant issues. When I got the feedback that was intended for everyone I had a hard time figuring out what was relevant and what wasn’t.”

In contrast, the majority of doctoral level students replied in a manner similar to the following:

Jim — “The group feedback was by far the most informative. It allows me to understand the instructor’s perspective on topics through her synthesis of the outcomes of discussion threads. This was extremely valuable since I could then reflect on my own work and compare it to what Dr. Beam gave us in her summaries. I think that was probably what made this course valuable for me. In online classes there is a tendency for the instructors to turn it into a correspondence course where it’s all about just submitting, getting a grade and getting done. The group feedback is more like what happens in a seminar where the prof will help us evaluate entire discussions.”

With respect to a mixture of individual feedback and group feedback, the overall perceptions were reflective of the perceptions of individualized and group feedback on a stand–alone basis. In other words, the majority of master’s students believed that when a mixture of feedback was provided, only the individualized components were perceived as being of value. Conversely, doctoral students believed that when a mixed approach was utilized, the most important component was group feedback and that little additional value was gained from the addition of individualized feedback.

The tables presented earlier in this section show that a significant number of master’s students believed group feedback and a mixture of feedback had a positive impact on the indicators of direct instruction. A review of the data showed that these were students who were nearing the end of their master’s program and, in the majority of instances, were applying to doctoral programs. Likewise, a few doctoral students believed a mixture of feedback had a positive impact on the indicators of direct instruction. A review of the data showed that these students were all in the first year of their doctoral program.

 


Discussion and conclusions

When assessing the best application of feedback strategies to promote student satisfaction with indicators of the direct instruction component of teaching presence, the differences between student levels detected in this study are informative. Specifically, the data reveal that the majority of master’s–level students place a much higher value on individualized feedback and believe it is more effective both in helping them understand relevant topics and in helping them understand their own strengths and weaknesses.

In contrast, doctoral students generally placed a much higher value on group feedback. Analysis of transcripts revealed that this was related to these students’ desire to compare and contrast their work with syntheses provided by the instructor. Notably, the master’s level students generally preferred to have the instructor provide this type of analysis directly.

Though more research is needed, a plausible explanation is that these orientations reflect students’ comfort with their cognitive abilities, specifically the ability to engage in socially mediated learning, meta–reflection, and soft scaffolding (Vygotsky, 1978). From another perspective, the difference in the way students utilize the information can be viewed as progressing from the application level to the synthesis and evaluation levels of Bloom’s Taxonomy (Krathwohl, 2002). Support for this hypothesis can be found in the sub–groups of students who are nearing the end of their master’s program and those students who had only recently started doctoral studies. Among the first sub–group, a higher value was given to group feedback than among their other master’s level peers. Likewise, the students who were just starting their doctoral programs placed a high value on mixed feedback as they still sought direct positive affirmation and interaction with the instructor.

From an applied perspective, this study is beneficial in that it provides generalized guidelines for providing effective feedback in online courses. As an example, when working with large concentrations of master’s–level students, providing individualized feedback will likely yield higher student satisfaction and perceived learning than providing group feedback. Conversely, these findings indicate that doctoral students would likely not perceive individualized feedback as being beneficial. Rather, doctoral students would likely perceive group feedback as being of greater benefit, as they tend to prefer to evaluate instructor syntheses and contrast them with their own perspectives. Finally, with respect to students who are nearing the end of master’s programs and beginning doctoral level work, the instructor may want to incorporate a variety of feedback to help students progress cognitively, vis–à–vis Bloom’s Taxonomy.

While the findings of this study appear compelling, they are limited by a relatively small sample size and by the fact that all students were drawn from a single discipline. More research is needed across other disciplines to determine whether these findings apply to other populations. In addition, this study addressed the impact of group and individualized feedback only in relation to the direct instruction component of teaching presence. More research needs to be done to determine the impact feedback types have on facilitation of discourse, as well as on cognitive presence. Nonetheless, this research is important in that it moves toward an understanding of what constitutes best practice in online learning environments.

 

About the authors

Philip Ice is the Director of Course Design, Research and Development for American Public University System. His research interests focus on two interrelated areas. First, Philip is interested in the Community of Inquiry Framework and how it can be applied to improving the quality of online learning. Second, he is interested in how new and emerging technologies impact Teaching and Cognitive Presence within the CoI. For his work with emerging technologies, Philip won Sloan–C’s 2007 Effective Practice of the Year Award.

Lori Kupczynski is the Instructional Designer at the Center for Online Learning, Teaching and Technology at the University of Texas–Pan American. Her work focuses on training faculty to teach successfully in the online environment so that students and faculty alike feel it has been a high–level learning experience. She is very interested in the online environment, specifically as it applies to the adult learner, and focuses all of her research, design, and teaching in that area.

Randy Wiesenmayer is a professor of curriculum and instruction at West Virginia University. Randy’s research interests are focused on exploring how to more effectively create an awareness of global climatic change issues among pre–service and in–service teachers. With respect to online learning, Randy has been teaching online since 1997 and pioneered a large–scale online continuing education initiative for teachers in West Virginia.

Perry Phillips retired as an associate professor from West Virginia University in 2007. However, he continues to teach online courses there as an adjunct in the Department of Curriculum & Instruction/Literacy Studies. Perry is interested in how to effectively facilitate collaborative projects in the online environment.

 

References

Terry Anderson, Liam Rourke, D. Randy Garrison, and Walter Archer, 2001. “Assessing teaching presence in a computer conferencing context,” Journal of Asynchronous Learning Networks, volume 5, number 2 (September), pp. 1–17.

J. Ben Arbaugh, 2007. “An empirical verification of the community of inquiry framework,” Journal of Asynchronous Learning Networks, volume 11, number 1 (April), pp. 73–85.

John W. Creswell and Vickie L. Plano Clark, 2007. Designing and conducting mixed methods research. Thousand Oaks, Calif.: Sage.

D.R. Garrison, 2007. “Online community of inquiry review: Social, cognitive and teaching presence issues,” Journal of Asynchronous Learning Networks, volume 11, number 1, pp. 61–72.

D.R. Garrison and J. Ben Arbaugh, 2007. “Researching the community of inquiry framework: Review, issues, and future directions,” The Internet and Higher Education, volume 10, number 3, pp. 157–172. http://dx.doi.org/10.1016/j.iheduc.2007.04.001

D.R. Garrison, Terry Anderson, and Walter Archer, 2001. “Critical thinking, cognitive presence, and computer conferencing in distance education,” American Journal of Distance Education, volume 15, number 1 (Spring), pp. 7–23. http://dx.doi.org/10.1080/08923640109527071

D.R. Garrison, Terry Anderson, and Walter Archer, 2000. “Critical inquiry in a text–based environment: Computer conferencing in higher education,” The Internet and Higher Education, volume 2, numbers 2–3 (Spring), pp. 87–105.

David R. Krathwohl, 2002. “A revision of Bloom’s taxonomy: An overview,” Theory Into Practice, volume 41, number 4 (September), pp. 212–218. http://dx.doi.org/10.1207/s15430421tip4104_2

Michael Quinn Patton, 2002. Qualitative research and evaluation methods. Third edition. Thousand Oaks, Calif.: Sage.

Jennifer C. Richardson and Karen Swan, 2003. “Examining social presence in online courses in relation to students’ perceived learning and satisfaction,” Journal of Asynchronous Learning Networks, volume 7, number 1 (February), pp. 68–88.

Alfred P. Rovai, 2002. “Development of an instrument to measure classroom community,” The Internet and Higher Education, volume 5, number 3 (Autumn), pp. 197–211. http://dx.doi.org/10.1016/S1096-7516(02)00102-1

Peter Shea, 2006. “A study of students’ sense of learning community in online environments,” Journal of Asynchronous Learning Networks, volume 10, number 1 (February), pp. 35–44.

Peter Shea, Chun Sau Li, and Alexandra Pickett, 2006. “A study of teaching presence and student sense of learning community in fully online and Web–enhanced college courses,” The Internet and Higher Education, volume 9, number 3, pp. 175–190.

Peter Shea, Karen Swan, Chun Sau Li, and Alexandra Pickett, 2005. “Developing learning community in online asynchronous college courses: The role of teaching presence,” Journal of Asynchronous Learning Networks, volume 9, number 4 (December), pp. 59–82.

Karen P. Swan, Jennifer C. Richardson, Philip Ice, D. Randy Garrison, Martha Cleveland–Innes, and J. Ben Arbaugh, 2008. “Validating a measurement tool of presence in online communities of inquiry,” eMentor, volume 24, number 2 (April), at http://www.e-mentor.edu.pl/, accessed 8 August 2008.

Terri Lynn Thompson and Colla J. MacDonald, 2005. “Community building, emergent design and expecting the unexpected: Creating a quality elearning experience,” The Internet and Higher Education, volume 8, number 3, pp. 233–249. http://dx.doi.org/10.1016/j.iheduc.2005.06.004

L.S. Vygotsky, 1978. Mind in society: The development of higher psychological processes. Cambridge, Mass.: Harvard University Press.

Appendix A

End–of–course survey items used to measure student satisfaction and perceptions of learning effectiveness as a function of feedback strategy.

1. Of the following three items, please check the one with which you most agree.
a. I prefer individualized feedback to my discussion board postings.
b. I prefer group feedback to my discussion board postings.
c. I prefer a mixture of individualized and group feedback to my discussion board postings.

 


Editorial history

Paper received 22 August 2008; accepted 2 October 2008.


Copyright © 2008, First Monday.

Copyright © 2008, Phil Ice, Lori Kupczynski, Randy Wiesenmayer, and Perry Phillips.
