Abigail Perrine
Abigail Perrine is the Director of Business Development & Strategy at Lumina Datamatics, where she is responsible for account development and strategic initiatives and workflows. Abby is an advocate for post-secondary educational reform and is particularly interested in removing barriers to education and ultimately the workforce, whether that is by rethinking accessibility features, improving and expanding remote learning opportunities, or creating new supportive technologies. Abby has worked with more than a dozen educational publishers, universities, and EdTech providers in her 15+ years with Lumina Datamatics.

    DO’S & DON’TS OF ASSESSMENT WRITING
    October 28, 2020

    As educators know, one of the most useful—and challenging—parts of instruction is assessment: the process of determining whether learning goals have been achieved. This post will cover some of the basic Do’s and Don’ts of assessment writing.

    Maybe you’re an instructor who’s making the shift to OER and you want to write your own test questions, but aren’t quite sure where to start. Or maybe you’re a writer or subject matter expert making your first foray into authoring test questions on behalf of an educational content provider. Or maybe you’re simply an assessment enthusiast (and really, aren’t we all?).

    Whether you fall into one of these camps or something else entirely, the most important thing for you to know before getting started is that there is really no one “right” way to author assessment. Different institutions, instructors, writers, and content providers have different opinions on how to think about, write, deploy, and ultimately use assessments.

    But even taking into consideration the highly variable and customizable nature of assessment, there are some definite Do’s—and Don’ts—to keep in mind while writing questions.

    DO:

    • Decide on a structure before you write—and follow it. Each piece of testable content should ideally have roughly the same number of questions—e.g., you don’t want 100 questions testing Chapter 7, and only 15 testing Chapter 8.[1]
    • Use backward design. Think about what knowledge or learning objective you want to test before you write the questions. Ask yourself “What knowledge do I want the student to be proving by answering this question?”
    • Consider complexity of content. Questions for upper-level or graduate students shouldn’t merely test whether they can remember terms and definitions.
    • Include visuals where it makes sense to do so. Mastery within many fields of study requires some understanding and analysis of visuals—whether math objects, scientific processes, software process diagrams, or graphical analysis of data. If reading or analyzing visuals is an important part of understanding the content, don’t be afraid to test it.
    • Consider which item type is the best fit. Consider what question type is the best choice for what it is you want to test—for example, rather than writing a multiple-choice question with only two possible answer choices, consider a true-false. If a question is complex enough, consider an essay or a worked problem solution, rather than an autogradeable item type.
    • Correctly understand—and apply—metadata.[2] When authoring questions on behalf of an educational content provider, EdTech, or your own course or institution, make sure you understand what metadata fields (e.g., Bloom’s taxonomy, difficulty level) are required, if any. These fields are often integral to how questions are sorted and assigned within digital platforms.
    • Keep in mind how the assessments will be deployed. Think about how your assessment questions will be given to and subsequently completed by students. You wouldn’t write a drag-and-drop or hotspot question for a paper test, and you wouldn’t ask a student to upload something when grading successful completion of the task requires physical, in-person analysis by an instructor.
    • Think about how long the test should take. If you’re authoring assessment for your own classroom and intend to use all questions (as opposed to writing a large bank that others may pick and choose from), remember that students have a finite window of time in which to complete the test. Don’t write so many questions that it’s impossible to complete them within that timeframe. In other words, be fair.
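
The metadata point above can be made concrete with a small sketch. The representation below is purely hypothetical—field names like `blooms_level`, `difficulty`, and `learning_objective` are illustrative assumptions drawn from the examples in this post, not any real platform’s schema—but it shows why consistent tagging matters: it’s what lets a digital platform sort and filter a question bank.

```python
# Hypothetical sketch: one multiple-choice question with the kinds of
# metadata tags a digital platform might require. Field names are
# illustrative, not a real platform's schema.
question = {
    "stem": "Which organelle is primarily responsible for ATP production?",
    "choices": ["Mitochondrion", "Ribosome", "Golgi apparatus", "Nucleus"],
    "answer_index": 0,
    "metadata": {
        "chapter": 7,                   # where the answer is covered
        "blooms_level": "Remember",     # Bloom's taxonomy tag
        "difficulty": "easy",           # easy / medium / hard
        "learning_objective": "LO 7.2", # alignment to a stated objective
    },
}

def filter_bank(bank, chapter, difficulty):
    """Return only the questions matching a chapter and difficulty tag."""
    return [
        q for q in bank
        if q["metadata"]["chapter"] == chapter
        and q["metadata"]["difficulty"] == difficulty
    ]
```

With tags applied consistently, pulling “all easy Chapter 7 questions” is a one-line filter—and a question missing its tags simply disappears from every such search, which is why metadata fields are often treated as required.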

    DON’T:

    • Write questions that are all at the same level. Good question banks have a mix of easy, medium, and hard questions, so that student ability can be accurately differentiated. If all of your questions are too easy, everyone will get them right, and you won’t be able to tell who fully comprehends the content. The reverse is also true.
    • Write biased questions. Students will have many differences, including race, ethnicity, socioeconomic status, geographic location, physical abilities, religious background, gender, sexuality, and age (to name a few!). Questions should be inclusive of the (many!) differences that exist in both instructor and learner communities. Keep that in mind as you come up with application questions in particular. Don’t fall into stereotypes.
    • Write questions that can’t be answered by analyzing the content students have been given. As you write questions, remember that students won’t yet have the breadth of knowledge that you, as the expert and writer, do. Make sure you aren’t asking them about something you know, but that isn’t covered in the content and/or instruction they’ve received.
    • Use jargon or terminology that hasn’t been introduced yet. As noted in the previous point, it’s important to be fair to students. If a term isn’t introduced until Chapter 8, it would be unfair to expect students to understand it as part of a question in Chapter 5.
    • Use negative constructions (unless you absolutely have to). Using phrases like “All of the above are correct except” or “Vimal should not do which of the following?” can cause unnecessary confusion. Keep in mind that we’re testing student comprehension, not whether they can wade through convoluted sentence constructions. That said, there may be the rare case where you simply can’t ask the question another way—and that’s OK. Just don’t overdo it.
    • Test trivia. Think about what is important for the student to know—is it important to know the exact date of a particular scientific experiment? Or is it important that they understand how the experiment was conducted, the results, and how that impacted their field of study overall? In other words, don’t test minor details that might not really matter.
    • Write throwaway answer choices. Make sure that the wrong answer options (also called distractors) you provide aren’t too obviously wrong, which could make your question too easy.
    • Try to “trick” students. Avoid using overly elaborate question stem constructions or providing “trick” answer choices. Doing so isn’t in keeping with the goal of assessing student comprehension (and it’s also not very nice).
    • Forget alternative text descriptions for any visuals. Make sure that your content is fully accessible and in alignment with ADA requirements. That means you need to include a written description, or “alt text,” for any visuals you include. Users accessing content via text-to-speech software need a description in order to understand and answer the question. But be careful not to give away the answer in your description. Provide only the details that another student would get by looking at the image itself.
    • Be afraid to consult the pros! If you are in need of guidance or support (or just want to hand the whole thing over to someone else), let us know! Lumina’s team of writers, SMEs, and content experts can help with creating, accuracy checking, or validating assessment packages of all sizes and within all disciplines—or our experts can train your institution’s professors on how to handle these tasks.

    In future articles, we’ll also cover some of the nitty-gritty details of authoring assessment questions, including the parts of an assessment question, using metadata tags, the importance of avoiding bias, data considerations, and more. But for now, the above list is a good introduction to writing your own assessments.

    Interested in learning more about Lumina’s approach to assessment, or want to discuss your upcoming assessment needs? Email Lumina or visit our website to learn more about Lumina Datamatics assessment writing and other content services.

    Notes

    1. Caveat: If you’re writing assessment for your own classroom use, and you intend differing amounts of coverage, it might make sense to have variable question counts. For example, if you will cover all of Chapter 11, you will want a wider set of questions than if you were to cover only one of its five sections.
    2. For those very new to assessment writing, metadata, in this context, refers to information about the question that allows the question to be sorted in some manner. Some examples of assessment metadata might include: page number (where in the text the answer can be found), complexity (easy/medium/difficult), Bloom’s tag, learning objective alignment, industry standards (e.g., NASW), and national educational standards (e.g., Common Core or NGSS).
