Setting up an Online Learning Experience

As Conrad and Donaldson (2011) succinctly posit, technology drives the online learning experience. Consequently, knowing the technology available to an instructor in an online environment is fundamental to setting up the course properly, and establishing that knowledge is a key activity at the course beginning (Boettcher & Conrad, 2016). It prepares the instructor for the challenges and compatibility issues that may arise during the course. For learners, it is equally important: it forestalls unpleasant surprises later and ensures that everything required in terms of technology is provided for.

Another requirement that helps at the beginning of any course is the need to communicate and clarify expectations (Boettcher & Conrad, 2016). This corresponds to best practice three (BP3) for online learning and is one of the guiding themes for course beginnings. Clarifying expectations helps ensure mutual understanding and shared satisfaction between the instructor and the learner. It also addresses and prevents misunderstandings that may arise from the spatial and/or temporal (time zone) distance between instructor and learner (Boettcher & Conrad, 2016). Such communication can help establish a learning environment marked by trust and harmony, especially when plans do not work out as expected or when life happens to the instructor or the learner (Boettcher & Conrad, 2016).

Other considerations expected of the instructor include planning for adequate social presence and designing a viable learning community that will be nurtured in the subsequent phases of the course. Attention should also be given to other online teaching and learning best practices, as research suggests that employing best practices makes both teaching and learning effective, efficient, and satisfying (Boettcher & Conrad, 2016). It is equally important to devise means of helping students understand processes that may be unfamiliar to them. Furthermore, curiosity should be built into the design of the instruction so that learners look forward to what comes next in the course (Boettcher & Conrad, 2016).

According to the week’s media Learning Resource, successfully launching a course matters because a strong launch is a major factor in reducing attrition. As Dr. Keith Pratt and Dr. Rena Palloff note in that resource, the first week is so significant that it is during that week that learners decide whether or not to stay in the course.

References

Boettcher, J. V., & Conrad, R. M. (2016). The Online Teaching Survival Guide: Simple and Practical Pedagogical Tips. Wiley.

Conrad, R. M., & Donaldson, J. A. (2011). Engaging the Online Learner: Activities and Resources for Creative Instruction. Jossey-Bass.

Plagiarism Detection and Prevention

Academic dishonesty comes in different forms, ranging from cut-and-paste plagiarism (Moore, 2013) to contract cheating, and to the more subtle self-plagiarism of reusing work previously submitted for another course.

A plethora of plagiarism detection software is available to online instructors. My first encounter with such software was as a student at Newcastle University, Newcastle, UK, in 2011, when our major writing assignments and papers had to be submitted through Turnitin, a plagiarism detection tool. Years later, I used the same software as a student at Walden University before the institution switched to SafeAssign (Blackboard SafeAssign | Blackboard Help, n.d.), another plagiarism detection tool. Turnitin is also used by the institution where I work, Caleb University, Lagos, Nigeria.

From the listing by TrustRadius (n.d.), other plagiarism detection software currently in use includes Quetext, CopyScape, Grammarly, PaperRater, ProWritingAid, Dupli Checker, WhiteSmoke, Copyleaks Plagiarism Checker, Plagiarism Detector, Noplag, PlagScan from Ouriginal, Writer, Unicheck, Rytr, and Textai.ai.
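While the commercial tools above differ in their actual (proprietary) algorithms, many rest on some measure of textual overlap between a submission and a reference corpus. The sketch below is purely illustrative and not any vendor’s method: it scores two passages by Jaccard similarity over word trigrams, with all names, sample texts, and parameters being my own choices.

```python
# Toy illustration of the idea behind text-similarity checks:
# compare sets of word trigrams using Jaccard similarity.
# This is NOT how Turnitin, SafeAssign, or any commercial tool actually works.

def ngrams(text, n=3):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Jaccard similarity (intersection over union) of two texts' n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "Authentic assessments focus on a learner's ability to demonstrate practical knowledge."
copied = "Authentic assessments focus on a learner's ability to show practical knowledge."

score = jaccard_similarity(original, copied)
print(f"similarity: {score:.2f}")  # prints "similarity: 0.50"
```

Changing a single word only removes the trigrams that contain it, so lightly paraphrased text still scores high; this is the intuition behind flagging near-verbatim reuse.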

Beyond the use of plagiarism detection software, one way of curbing academic dishonesty online is the deployment of authentic assessments. Authentic assessments focus on a learner’s ability to demonstrate practical knowledge of what has been learned and better reflect the complexities of real-world work situations (Ellis et al., 2020).

As an online instructor, one needs to know why students cheat. According to Dr. Rena Palloff in one of the week’s Learning Resources, some students do not even realize they are engaging in academic dishonesty when they submit, in a new course, their own work previously used in another course. Educating learners on what constitutes academic dishonesty therefore becomes an important facilitation strategy. Another strategy is to administer frequent low-stakes assessments, which act as disincentives to cheating, in lieu of high-stakes assessments (Moore, 2013). This is because students are more likely to cheat when the stakes are high (Moore, 2013).

An additional consideration for online teaching is to let learners know the implications of cheating, warning them of the severity of academic dishonesty and of the fact that the institution does not pardon such an act.

References

Blackboard SafeAssign | Blackboard Help. (n.d.). https://help.blackboard.com/SafeAssign

Ellis, C., van Haeringen, K., Harper, R., Bretag, T., Zucker, I., McBride, S., Rozenberg, P., Newton, P., & Saddiqui, S. (2020). Does authentic assessment assure academic integrity? Evidence from contract cheating data. Higher Education Research and Development, 39(3), 454–469. https://doi.org/10.1080/07294360.2019.1680956

Moore, E. M. A. (2013, December 2). 7 Assessment Challenges of Moving Your Course Online (and a Dozen+ Solutions). Faculty Focus | Higher Ed Teaching & Learning. https://www.facultyfocus.com/articles/online-education/7-assessment-challenges-of-moving-your-course-online-solutions/

Pleasants, J., Pleasants, J. M., & Pleasants, B. P. (2022). Cheating on Unproctored Online Exams: Prevalence, Mitigation Measures, and Effects on Exam Performance. Online Learning, 26(1), 268–284. https://doi.org/10.24059/olj.v26i1.2620

TrustRadius. (n.d.). Unicheck, NoPlag, and Turnitin: Top 3 Plagiarism Checkers for Writers, Students, and Teachers. https://www.trustradius.com/plagiarism-checker

Educational Impact of Multimedia and Technology

Online learning leverages available technology to bring about knowledge construction.

The role and impact of technology and multimedia on the learning environment cannot be overemphasized. The entry of technology into the distance learning domain has significantly transformed learning in its reach, speed, and effectiveness (Mayer, 2014; Boettcher & Conrad, 2016). In addition, multimedia allows for multi-modal teaching (Mayer, 2014).

Technology should be chosen based on how it can engage online learners and support their ability to construct knowledge. Multimedia and technology should not be deployed just for the fun of it, or simply because technology is readily available, which Mayer (2014) calls the technology-centered approach. Rather, due consideration should be given to how technology can foster engagement that enhances learning. Mayer (2014) advises taking the learner-centered approach, which considers how multimedia technology can be adapted to aid human cognition. Other considerations, as captured in the guidelines provided by Boettcher and Conrad (2016), are:

  1. The need to prioritize pedagogy ahead of technology; and
  2. Keeping technology or multimedia use simple.

Technology for online teaching must be both usable and accessible. Usability is the quality that lets users operate a technology with ease and without prior technical knowledge. Accessibility, on the other hand, is the inherent capacity of a chosen technology to cater for learners with and without disabilities.

The most appealing technological tools to me for online teaching would be gaming and simulation tools. While I am not yet proficient in their use, the level of engagement these tools confer can be enormous. Some technology tools also aid in designing usable and accessible instruction.

References

Boettcher, J. V., & Conrad, R. M. (2016). The Online Teaching Survival Guide: Simple and Practical Pedagogical Tips. Wiley.

Mayer, R. (2014). Introduction to Multimedia Learning. In R. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (Cambridge Handbooks in Psychology, pp. 1-24). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139547369.002

Online Learning Communities

According to the video program “Online Learning Communities,” one of the week’s media Learning Resources, when an online community is the vehicle for course delivery, students have a sense of belonging to a group rather than feeling like lone rangers. Each student feels part of a larger group in which their own contribution is required and valued. When students feel that their presence in the social setting, referred to as social presence (Boettcher & Conrad, 2016), is valued, this drives satisfaction and engenders positive learning outcomes.

Still referencing the video program, the essential elements of online community building are the people who come together to form the community, the purpose that unites them, and the process through which the purpose of learning is achieved (i.e., how the course is delivered).

One way of sustaining the learning communities is through facilitation by the instructor.

References

Boettcher, J. V., & Conrad, R. M. (2016). The Online Teaching Survival Guide: Simple and Practical Pedagogical Tips. Wiley.

Performance Assessment Benefits, Challenges, and Work-arounds

One benefit of performance assessment (also referred to as authentic assessment) to the instructor is that it reveals what the learner has actually learned, enabling the instructor to plan interventions based on the level of learning the assessment reveals. An authentic assessment gives a true and fair view of learning and performance (Walden University, 2011).

As indicated by Dr. Rita Conrad in the week’s Learning Resources, a major challenge instructors face is that authentic assessments are difficult to design, especially relative to other forms of assessment. Another challenge, also noted by Dr. Conrad, is that authentic assessments take time to grade and require a scoring plan, rubrics, and checklists.

One benefit of performance assessment to the learner is that a sincere learner will be able to tell whether they have a thorough grasp of the curriculum content; performance assessment gives the learner a true picture of knowledge acquisition. A learner can also benefit from the immediacy of feedback, which can positively impact learning. Furthermore, the student can benefit from the formative feedback and feed-forward (Webb & Moallem, 2016) that may have been incorporated into the design of the performance assessment.

A major challenge for learners participating in performance assessments online could be the knowledge required to use the available technologies. A learner must gain some mastery of the learning technologies in order to demonstrate performance capabilities in an online environment.

To overcome these challenges, Dr. Rita Conrad has suggested steps for developing performance assessments. The three steps suggested are “determination of the capability to be assessed, selecting the performance that will be observed, and developing a scoring plan.”

References

Walden University, LLC. (Producer). (2011). Performance assessments in online environments [Video file]. Baltimore, MD: Author.

Webb, A., & Moallem, M. (2016). Feedback and Feed-Forward for Promoting Problem-Based Learning in Online Learning Environments. Malaysian Journal of Learning and Instruction, 13(2), 1–41.


Enlivening Written Assessments

Technology has tremendously impacted the design, development, implementation, and administration of written assessments, especially with the advent of Web 2.0, which allows greater flexibility in assessment design and content generation and reduces the difficulty of interaction and collaboration between those who create content and those who access it (Frankl & Bitter, 2012). Elliott (2008), in mapping Assessment 2.0 onto Web 2.0, described Assessment 2.0 as personalized, problem-oriented, collaboratively produced, and tool-supported. Of special importance is the openness feature of Web 2.0, which allows online users to share their cognitive surplus as they create and learn (Boettcher, 2011).

While the online environment has increased the effectiveness of the written assessment format because of its appeal to assessment takers, validity depends on how the assessment is designed, that is, on incorporating items that adequately capture the objectives of the learning material. Validity is not conferred simply by the use of a particular technology; rather, it is achieved intentionally by including the various items that need to be taught and assessed. This position is made clear by Dr. Conrad in the week’s Learning Resources and corroborated by Corrigan and Craciun (2013), who stressed the importance of assessments asking the right questions. Corrigan and Craciun (2013) further suggested that current technologies can facilitate generating these right questions through learners’ contributions to test banks.

For the instructor, technology has facilitated assessment design such that as many items as would enhance generalizability can easily be incorporated. Instructors are also aided in giving quick feedback to learners. For learners, the available technologies are appealing and thus reduce the resistance learners might ordinarily show toward traditional assessment. Learners may also have a say in the assessment process by contributing questions that will feature in their assessments.

The role of assessment in the online environment is to foster learning through validity and generalizability in the assessment items. Technological tools make validity possible by allowing the inclusion of as many items as the course objectives require. They likewise make generalizability possible, since a wide range of assessment items, such as multiple-choice or short-answer questions, can easily be incorporated.

References

Corrigan, H., & Craciun, G. (2013). Asking the Right Questions: Using Student-Written Exams as an Innovative Approach to Learning and Evaluation. Marketing Education Review, 23(1), 31–36. https://doi.org/10.2753/MER1052-8008230105

Elliott, B. (2008). Assessment 2.0: Modernising assessment in the age of Web 2.0. Scottish Qualifications Authority. Retrieved May 16, 2012, from http://www.scribd.com/doc/461041/Assessment-20

Frankl, G., & Bitter, S. (2012). Online Exams: Practical Implications and Future Directions. Proceedings of the European Conference on E-Learning, 158–164.

Walden University, LLC. (Producer). (2011). Written assessments in online environments [Video file]. Baltimore, MD: Author.

Towards a Learning Feedback System That Works

I have received a great deal of ineffective feedback in the course of my studies over the years, though not at Walden University. This may be connected with the fact that educational systems differ from country to country. For instance, the Nigerian educational system is only about 180 years old (Nigeria – History Background, n.d.) and has not evolved at par with the American educational system, which has existed for centuries (The American Board, 2019) and, like the British system, has evolved over time. The evolution of an educational system invariably affects all its aspects, including the testing/assessment administration system, the scoring system, and the feedback system.

What I have come to learn is that the assessment system in the US differs somewhat from the British system (at least until COVID-19 and the attendant lockdown forced a change in assessment practices), from which the Nigerian system derives. While open-book exams are less common in the British system, they are more common in the US educational system.

The kind of assessment administered determines the kind of feedback that can accompany it. In other words, the way assessment questions are administered and the nature of the assessment affect the feedback that can be given. When multiple-choice questions (MCQs) are administered, feedback may be given only as scores, because few assessors or instructors will explain the implications of the choices a student made, except where generic feedback is provided as to why one answer is right and another is wrong.

In my case, and perhaps due to the stage of development of my country’s educational system, not many instructors provided the kind of feedback Dr. Rena Palloff spoke about in the week’s Learning Resources. Consequently, I did not receive the kind of feedback that would inform me of the points to improve upon. Where I did receive feedback, it was usually words like “fair,” “good,” “very good,” or “excellent.” Such one- or two-word feedback does little to indicate areas for performance improvement.

Draper (2009) asserted that very small interventions occasioned by the right feedback can have a tremendous impact on learning outcomes. I would therefore suggest a change to the system making it mandatory for assessors and instructors to provide adequate, detailed feedback on learners’ performance, as suggested by Dr. Palloff.

References

Draper, S. W. (2009). What are learners actually regulating when given feedback? British Journal of Educational Technology, 40(2), 306–315. https://doi.org/10.1111/j.1467-8535.2008.00930.

Nigeria – History Background. (n.d.). Nigeria History & Background. Retrieved August 1, 2022, from https://education.stateuniversity.com/pages/1100/Nigeria-HISTORY-BACKGROUND.html

The American Board. (2019, October 17). 11 Facts About the History of Education in America. American Board Blog. Retrieved August 1, 2022, from https://www.americanboard.org/blog/11-facts-about-the-history-of-education-in-america/

Academic Dishonesty in the Online Environment

The issue of academic dishonesty and cheating seems to have no bias for any mode of learning, be it online (virtual), face-to-face (F2F), or hybrid (Weimer, 2015). In the same vein, learners are always “inventing and innovating” newer tactics for cheating across the learning modes, such that the instructor/designer is always having to catch up. Impersonation, the use of prohibited materials such as old materials from students who previously took a course, and taking ideas from one another’s papers are, inter alia, examples of academic dishonesty common to both online and F2F settings.

However, online learning has its peculiar assessment challenges due to the temporal and spatial distance between learners and instructors, and thus requires a special approach to managing, preventing, and responding to cheating. In addition, it is important to consider why learners cheat in order to devise antidotes to academic dishonesty. Lang (2013) alluded to the fact that learners cheat because of the classroom environment and its practices, hinting that students are likely to cheat when under pressure from a high-stakes testing arrangement. Culver (2014), for his part, indicated that some students cheat because they do not know what constitutes cheating.

Managing cheating, therefore, requires that students are taught and oriented on what constitutes cheating (Bart, 2011), with such teaching incorporating the definition and discussion of the constituents of academic dishonesty (Culver, 2014). These constituents should be framed in a fitting, customized honour code developed by the institution, as advised by Bart (2011). Lang (2013) also suggested eliminating high-stakes testing and administering frequent low-stakes assessments. Culver (2014) suggested offering assignment options so that the learner can choose from a variety of assessments, and added that instructors must demonstrate an open-mindedness to continuous education that can enhance better service delivery to learners. Furthermore, Bart (2011) suggested tutorials that constantly remind students of the honour code, PR campaigns that raise the visibility of honour codes otherwise tucked away in student and faculty handbooks, and classroom strategies that incorporate the honour code into the course syllabus and ensure the first class features a talk about why cheating is wrong. Part of the classroom strategy is also the development of open-book assignments. Lastly, in the week’s Learning Resources, Dr. Conrad hinted at the need to minimize learners’ incentives for online cheating by matching assessments to the learner, using performance assessments, and incorporating multiple assessments.


References

Bart, M. (2011, March 21). Five Ways to Tackle Cheating in College. Faculty Focus | Higher Ed Teaching & Learning. Retrieved March 17, 2022, from https://www.facultyfocus.com/articles/effective-teaching-strategies/five-ways-to-tackle-cheating-in-college/

Culver, T. F. (2014). “I’m a Student. . .Again”—Unexpected Lessons from a Professor Returning to School. College Teaching, 62(3), 83–85. https://doi.org/10.1080/87567555.2014.896776

Lang, J. M. (2013). Classroom practice – Are teachers creating classroom cheats?: news. The Times Educational Supplement, (5059), 38. https://www.proquest.com/trade-journals/classroom-practice-are-teachers-creating-cheats/docview/1444758895/se-2

Weimer, M. (2015, November 6). Do Online Students Cheat More on Tests? Faculty Focus | Higher Ed Teaching & Learning. Retrieved July 21, 2022, from https://www.facultyfocus.com/articles/online-education/do-online-students-cheat-more-on-tests/

Baking the Perfect Assessment

The students in the online cooking class spent a week reading about how to make several kinds of cookies. The objective was: “The student will be able to accurately, completely, and without assistance, follow a written recipe to make a batch of one kind of cookie studied this week”.

According to Dr. Conrad in one of the week’s Learning Resources, it is important to align assessment with the performance or learning objective. Three elements of this objective stand out: accuracy, completeness, and independence. A valid assessment would focus on these three important components of the objective.

The learner’s ability to recall the various steps in making cookies is a kind of declarative knowledge that might be tested with multiple-choice questions. Hence, the idea of testing the students with a 10-question multiple-choice quiz that asks them to recall details about specific cookies via the learning management system is not out of place.

Bear in mind that the assessment must align with the objectives of the course. Since the objective is for students to accurately, completely, and independently follow a written recipe to make a batch of one kind of cookie studied during the week, it is first important for students to remember the steps. A test of accuracy would use multiple-choice questions that require the student to identify the cookie-making stages with precision. An example would be a quiz that presents the stages in both correct and incorrect orders and asks learners to choose the correct one.

Second, it is important for students to be able to follow the complete set of steps. A test of completeness would have the learner show or write out all the steps for making the type of cookie.

A test of independence in making cookies would be to observe how the learner follows the entire set of steps without assistance.

In terms of validity, which is a measure of whether the assessment measures what it is intended to measure (Adelstein & Barbour, 2016), the multiple-choice quiz may not be entirely valid for this objective. Likewise, having a student write a short-answer essay describing and defending three cookie-making practices may not measure what is intended: the student could describe three cookie-making practices that are not relevant to the course material, or three practices different from those taught in the online course. In the third scenario, having each student submit a 30-minute video of himself or herself following a cookie recipe from the beginning through the completion of the first batch may actually be valid for showing whether the student can follow the steps appropriately.

I would rather say that a valid test could combine these three assessment activities. According to Abu-Zaid and Khan (2013), procedural knowledge can build on declarative knowledge. While the 30-minute video depicts a way to assess procedural knowledge, the knowledge being recalled can also be knowledge acted out: to act in a complete and accurate fashion, one must know the steps one needs to act out, as with a drama script.

As far as the generalizability of each assessment is concerned, this particular scenario is small in scope. The items to be covered can be listed exhaustively; the main objective, preparing cookies, is limited to a few options, each with a limited list of actions and procedures. In such a scenario, generalizability can easily be achieved. However, where the items are many and the list is broad, it is not easy, as Dr. Conrad said in the week’s Learning Resources, to achieve generalizability. As the scope of instruction increases, capturing the entire knowledge base becomes difficult, and achieving generalizability of assessment becomes tedious.

My recommendation for increasing the validity and generalizability of assessments that align with a particular objective is to look at the components of the objective and ensure that assessments are designed around its key items. As Dr. Conrad indicated, generalizability is not easy to achieve; once the objective is broad, one can only try to take cognizance of the most important things to be assessed within the objective.


References

Abu-Zaid, A., & Khan, T. A. (2013). Assessing declarative and procedural knowledge using multiple-choice questions. Medical Education Online, 18(1), 21132. https://doi.org/10.3402/meo.v18i0.21132

Adelstein, D. & Barbour, M. (2016). Building Better Courses: Examining the Construct Validity of the iNACOL National Standards for Quality Online Courses. Journal of Online Learning Research, 2(1), 41-73. Association for the Advancement of Computing in Education (AACE).