
“Survey to Determine Knowledge of Word Meaning” Developed from Cooke et al., 2000

This is a method used to survey a population in order to determine whether that population has an accurate knowledge of a word’s meaning. The method generates seven possible Yes/No questions regarding a word’s meaning. These questions are randomly selected to be given in a survey, and responses are compared with the correct answers to determine knowledge of meaning.

The following table shows an example of questions generated to determine patients’ understanding of the meaning of the word “unconscious”:

Table 1. Questions relating to the term “unconscious”

  Question | Correct answer | n correct (%) | n incorrect | n don’t know | Total
  1. If you are unconscious, can you still talk? | No | 83 (83.8) | 16 | - | 99
  2. If you are unconscious, can you be standing up? | No | 83 (83.0) | 16 | 1 | 100
  3. If you are unconscious, can you still hear? | No | 47 (46.5) | 52 | 2 | 101
  4. If you see the room spinning around, are you unconscious? | No | 85 (85.0) | 15 | - | 100
  5. If you are unconscious, can your eyes be open? | Yes | 41 (41.0) | 59 | - | 100
  6. If you are unconscious, do you stop breathing? | No | 87 (87.0) | 13 | - | 100
  7. Can you remember things that happen while you are unconscious? | No | 83 (83.0) | 16 | 1 | 100

 

This method allows researchers to quantify a population’s understanding of word meanings, which is useful in studies seeking to make comparisons with other data. Both yes-answer and no-answer questions should be generated from a standard definition, which must be documented in the study.
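
As a sketch of how responses might be tallied against the answer key (the question IDs, sample responses, and the percent_correct helper below are illustrative, not part of Cooke et al.’s procedure):

```python
# A minimal sketch of tallying survey responses against an answer key.
# The questions, key, and responses below are illustrative only.

answer_key = {
    "Q1": "No",   # If you are unconscious, can you still talk?
    "Q5": "Yes",  # If you are unconscious, can your eyes be open?
}

# Each respondent's answers, keyed by question ID.
responses = [
    {"Q1": "No", "Q5": "No"},
    {"Q1": "No", "Q5": "Yes"},
    {"Q1": "Yes", "Q5": "Don't know"},
]

def percent_correct(answer_key, responses):
    """Return {question_id: percent of respondents answering correctly}."""
    results = {}
    for qid, correct in answer_key.items():
        answered = [r[qid] for r in responses if qid in r]
        n_correct = sum(1 for a in answered if a == correct)
        results[qid] = round(100.0 * n_correct / len(answered), 1) if answered else 0.0
    return results

print(percent_correct(answer_key, responses))
# {'Q1': 66.7, 'Q5': 33.3}
```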

Cooke, M. W., S. Wilson, P. Cox, and A. Roalfe. “Public Understanding of Medical Terminology: Non-English Speakers May Not Receive Optimal Care.” J Accid Emerg Med 17 (2000): 119-121. Web.


Bloom, Benjamin (Ed.). (1956). Taxonomy of Educational Objectives, Handbook 1: Cognitive Domain. New York, NY: Longman.

Bloom’s taxonomy has become widely known in education and related fields. The taxonomy sorts educational objectives into three domains, each of which is further divided into subcategories. The domains are cognitive, affective, and psychomotor. The cognitive domain is the most widely known, and it consists of six categories:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

Bloom’s taxonomy can be used in assessment, as suggested by Blaine Worthen and James Sanders in Educational Evaluation: Alternative Approaches and Practical Guidelines. Bloom’s taxonomy or a modification thereof (see below) could be used in writing program and writing center assessment.
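
As a rough, hypothetical sketch of how the cognitive domain might be put to work in writing program or writing center assessment, one could tag assessment items with the level they appear to target and summarize the distribution (the items, tags, and helper below are invented, not drawn from Worthen and Sanders or from Bloom):

```python
# A hypothetical sketch of tagging assessment items with Bloom's cognitive
# levels and summarizing the distribution; not an established instrument.
from collections import Counter

BLOOM_LEVELS = ["Knowledge", "Comprehension", "Application",
                "Analysis", "Synthesis", "Evaluation"]

# Example: writing-program assessment items, each tagged by a rater with
# the level it seems to target (items and tags are illustrative).
tagged_items = [
    ("Define the rhetorical situation of this assignment.", "Knowledge"),
    ("Summarize the author's argument in your own words.", "Comprehension"),
    ("Compare the evidence used in these two drafts.", "Analysis"),
    ("Judge which revision plan better serves the audience.", "Evaluation"),
]

def level_distribution(items):
    """Count how many items target each level of the cognitive domain."""
    counts = Counter(level for _, level in items)
    return {level: counts.get(level, 0) for level in BLOOM_LEVELS}

print(level_distribution(tagged_items))
```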

Despite its possible applications and wide popularity, Bloom’s taxonomy has been criticized for oversimplification and for imposing a false hierarchy, and several revisions of the taxonomy have been suggested, including Robert J. Marzano and John S. Kendall’s The New Taxonomy of Educational Objectives and Lorin W. Anderson, David R. Krathwohl, Peter W. Airasian, and Kathleen A. Cruikshank’s A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. Krathwohl was an editor of the original taxonomy.

“Questioning the Author” is a method of curriculum development used to help students interrogate a text. The classroom talk portion of this method provides six Discussion Moves that allow the instructor to provoke and foster student discussion. The Discussion Moves are as follows:

  • Marking: Drawing attention to, and emphasizing the importance of, an idea that a student has raised.
  • Revoicing: Interpreting what students are struggling to express and rephrasing the ideas so that they can become part of the discussion.
  • Turning back: Turning responsibility back to the students to reconsider, elaborate on, or reconnect with the text in question.
  • Recapping: Summarizing the discussion so far in order to transition to another topic or point.
  • Modeling: Thinking aloud to show students how the instructor’s mind actively interacts with the ideas in the text.
  • Annotating: Providing information the students might not have.

When applied as a coding scheme, these moves can be used to analyze an instructor’s discursive strategies, making it possible to assess what provokes the most “successful” discussion. The scheme can also be useful for comparison with students’ written interrogation of a text, since written composition can incorporate very similar moves.
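
As a minimal sketch of that kind of analysis, assuming a transcript in which each instructor turn has already been coded with one Discussion Move, one could count how much student talk follows each move as a rough proxy for provoked discussion (the transcript, coding, and helper below are invented for illustration):

```python
# A sketch of one way to relate Discussion Moves to the student talk they
# provoke: count student turns following each coded instructor move.
# The transcript and coding below are invented for illustration.
from collections import defaultdict

# A transcript as (speaker, move_or_None) pairs; only instructor turns
# carry a Discussion Move code.
transcript = [
    ("instructor", "Marking"),
    ("student", None),
    ("student", None),
    ("instructor", "Turning back"),
    ("student", None),
    ("instructor", "Recapping"),
]

def student_turns_per_move(transcript):
    """Average number of consecutive student turns after each move type."""
    runs = defaultdict(list)
    current_move, run_length = None, 0
    for speaker, move in transcript:
        if speaker == "instructor":
            if current_move is not None:
                runs[current_move].append(run_length)
            current_move, run_length = move, 0
        else:
            run_length += 1
    if current_move is not None:
        runs[current_move].append(run_length)
    return {move: sum(lengths) / len(lengths) for move, lengths in runs.items()}

print(student_turns_per_move(transcript))
# {'Marking': 2.0, 'Turning back': 1.0, 'Recapping': 0.0}
```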

Beck, Isabel L. and Margaret G. McKeown. (2007). “How Teachers Can Support Productive Classroom Talk: Move the Thinking to the Students.” In Rosalind Horowitz (ed.), Talking Texts (207-220). Mahwah, NJ: Lawrence Erlbaum Associates.

The classic research into the top twenty most common errors in English writing, and its follow-up.

In 1985 Connors and Lunsford analyzed 3,000 randomly selected student papers from the 20,000+ papers they solicited from English instructors nationwide. They carefully discuss the logistical questions and issues they faced, as well as concerns about taxonomy and inter-rater reliability.

They also discuss the difficulty of defining some of the errors they found and how they determined what was actually an error and what was a matter of style. Their final findings are the basis for The Twenty Most Common Errors referenced in Lunsford’s handbook.
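
Because inter-rater reliability is a recurring concern in error-coding studies like this one, here is a minimal sketch of one common check, Cohen’s kappa, for two raters labeling the same passages (the error labels and data are invented; this is not Connors and Lunsford’s procedure):

```python
# A sketch of checking inter-rater reliability with Cohen's kappa for two
# raters coding the same passages for error types; the labels and data
# are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of passages given the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater_a = ["comma splice", "wrong word", "comma splice", "no error"]
rater_b = ["comma splice", "wrong word", "fragment", "no error"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.67
```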

In 2005-2006, Andrea Lunsford and Karen Lunsford re-conducted the same research to revise the list of common errors. They attribute most of the changes to the influence of technology in writing and the growing importance of documentation.

Lunsford, Andrea A., and Karen J. Lunsford. “‘Mistakes Are a Fact of Life’: A National Comparative Study.” College Composition and Communication 59.4 (2008): 781-806. PDF file.

Connors, Robert J., and Andrea A. Lunsford. “Frequency of Formal Errors in Current College Writing, or Ma and Pa Kettle Do Research.” College Composition and Communication 39.4 (1988): 395-409. PDF file.

Close reading of comments

Straub, Richard. “The Concept of Control in Teacher Response: Defining the Varieties of ‘Directive’ and ‘Facilitative’ Commentary.” College Composition and Communication 47.2 (1996): 223-251.

This article extends the conversation about teacher response by analyzing what directive comments actually look like. Straub points out that while scholars seem to agree that teachers should not appropriate student writing, they never seem to define what that looks like. He goes on to analyze sample comments on sample papers. This provides a clear reference for the theory behind facilitative comments: generally speaking, directive comments focus on correctness, while facilitative comments focus on larger rhetorical issues such as audience, purpose, and rhetorical situation. He then provides examples of more facilitative comments for an even stronger comparison. He concludes with a call for all writing teachers to step back and carefully look at why they are commenting and what they want from their comments.

Batt, Thomas A. “The Rhetoric of the End Comment.” Rhetoric Review 24.2 (2005): 207–23.

This article takes a close look at two endnotes to student writing. Batt starts with a comprehensive review of the conversations and research about endnotes. He then presents two samples and conducts an in-depth analysis of the strategies used within the comments. He connects the comments to larger discussions of paper ownership, directive versus non-directive response, and evaluation versus facilitation, and he even looks at classic rhetorical concepts. His analysis and discussion are both useful and thought provoking: useful because the article demonstrates effective models for responding, and thought provoking because he connects the models with theoretical concepts and ideas.

In a multi-year FIPSE grant, five colleges [Columbia College Chicago, CSU-Long Beach, CSU-Sacramento, Florida Gulf Coast U, and U of Delaware] developed a writing-assessment rubric intended to work as a grading tool across a range of institutions. “An Inter-institutional Model for College Writing Assessment” (CCC, 2008) includes a description of their versions of the rubric, the findings of their implementation, a rationale for changes to their holistic scoring rubric, and their final version of that rubric.

After trying a version built from the best practices of assessment, and contemplating the changes needed to implement this rubric across a range of schools, the authors (Neil Pagano, Steve Bernhardt, Dudley Reynolds, Mark Williams, and Kilian McCurrie) identified five categories for the assessment of student writing:

  • task responsiveness
  • engagement with the text(s)
  • development
  • organization
  • control of language

Their rubric for these categories, intended to help assessors rate writing on a six-point scale, was implemented at the five colleges above. No validation of the rubric was attempted.

Initial and final rubrics are included as appendices to this article.
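
As a hypothetical sketch of how such a rubric might be recorded and aggregated in a scoring session (the raters, papers, scores, and helper below are invented, not taken from the article or its appendices), each rater scores every category on the six-point scale and scores are averaged across raters:

```python
# A hypothetical sketch of recording and averaging rubric scores on a
# six-point scale; the categories match the list above, everything else
# (raters, papers, scores) is invented for illustration.
CATEGORIES = ["task responsiveness", "engagement with the text(s)",
              "development", "organization", "control of language"]

# scores[paper_id][rater_id] -> {category: score on a 1-6 scale}
scores = {
    "paper_01": {
        "rater_A": {"task responsiveness": 5, "engagement with the text(s)": 4,
                    "development": 4, "organization": 5, "control of language": 3},
        "rater_B": {"task responsiveness": 4, "engagement with the text(s)": 4,
                    "development": 5, "organization": 5, "control of language": 4},
    },
}

def average_scores(paper_scores):
    """Average each category across raters for one paper."""
    raters = list(paper_scores.values())
    return {cat: sum(r[cat] for r in raters) / len(raters) for cat in CATEGORIES}

for paper_id, paper_scores in scores.items():
    print(paper_id, average_scores(paper_scores))
```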

Citation:

Pagano, N., Bernhardt, S.A., Reynolds, D., Williams, M., & McCurrie, M.K. (2008). An inter-institutional model for college writing assessment. College Composition and Communication, 60 (2), 285-320.

Discourse chunking is a simple way to segment dialogues according to how dialogue participants raise topics and negotiate them.  Discourse chunking gives information about the patterns of topic raising and negotiation in dialogue, and where an utterance fits within these patterns.

A simple example is the opening-negotiation-closing chunk of a dialogue, which looks like this:

Hello: The dialogue participants greet each other. They introduce themselves and state their affiliation or the institution or location they are from.

Opening: The topic to be negotiated is introduced.

Negotiation: The actual negotiation, between opening and closing.

Closing: The negotiation is finished (all participants have agreed), and the agreed-upon topic is (sometimes) recapitulated.

Good bye: The dialogue participants say good bye to each other.

This particular chunk is often repeated in a cyclical pattern. The act of beginning a topic of negotiation defines the opening by itself, and the act of beginning a new negotiation entails the closing of the previous one. However, this chunk can be disrupted in interesting ways.

Chunking rules
The chunking rules are as follows:
1. The first utterance in a dialogue is always the start of chunk 1 (hello).
2. The first INIT or SUGGEST or REQUEST_SUGGEST or EXCLUDE in a dialogue is the start of chunk 2 (negotiation).
3. INIT, SUGGEST, REQUEST_SUGGEST, or EXCLUDE marks the start of a subchunk within chunk 2.
4. If the previous utterance is also the start of a chunk, and if it is spoken by the same person, then this utterance is considered to be a continuation of the chunk, and is not marked.
5. The first BYE is the start of chunk 3 (good bye).
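
A minimal sketch of these rules, assuming each utterance arrives already tagged with a dialogue act label (INIT, SUGGEST, REQUEST_SUGGEST, EXCLUDE, BYE, and so on) and a speaker ID; the data structures and function below are illustrative, not Midgley’s implementation:

```python
# A sketch of applying the chunking rules to a dialogue whose utterances
# are already tagged with dialogue acts; not Midgley's code.

NEGOTIATION_STARTERS = {"INIT", "SUGGEST", "REQUEST_SUGGEST", "EXCLUDE"}

def chunk_dialogue(utterances):
    """Label each (speaker, act, text) utterance with its chunk.

    Returns (chunk_label, starts_subchunk, utterance) triples, where
    chunk_label is 'hello', 'negotiation', or 'good bye'.
    """
    labeled = []
    chunk = "hello"                    # Rule 1: the dialogue opens chunk 1
    prev_speaker = None
    prev_was_start = False
    for speaker, act, text in utterances:
        starts = False
        if act == "BYE":
            chunk = "good bye"         # Rule 5: the first BYE opens chunk 3
        elif act in NEGOTIATION_STARTERS and chunk != "good bye":
            chunk = "negotiation"      # Rule 2: these acts open chunk 2
            starts = True              # Rule 3: and each subchunk within it
            # Rule 4: the same speaker directly continuing a just-started
            # (sub)chunk is a continuation, so it is not marked as a start
            if prev_was_start and speaker == prev_speaker:
                starts = False
        labeled.append((chunk, starts, (speaker, act, text)))
        prev_speaker, prev_was_start = speaker, starts
    return labeled

example = [
    ("A", "GREET", "Hello, this is A from the travel office."),
    ("B", "GREET", "Hi, B here."),
    ("B", "INIT", "We need to schedule the trip to Hanover."),
    ("A", "SUGGEST", "How about leaving on the ninth?"),
    ("B", "ACCEPT", "That works for me."),
    ("A", "BYE", "Great, good bye."),
]
for row in chunk_dialogue(example):
    print(row)
```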

Possible Uses:

Discourse chunking can help a researcher think about data collected via interviews (spoken or written), ethnography, or case study. It allows a researcher to break discourse apart in a manageable, usable manner. The goal is to gain insight into how individuals construct dialogue, which could lead to new theories or insights regarding the phenomena a researcher is after.

Midgley, T. Daniel. (2003). “Discourse Chunking: A Tool in Dialogue Act Tagging.” Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Volume 2.


Idea Mapping

If you want to develop free-form maps for your ideas about the data you’re collecting, you should look into the variety of software programs developed for “mind mapping” or “concept mapping.” A number of these software packages are available for free, and Wikipedia maintains a list of them.

http://en.wikipedia.org/wiki/List_of_mind_mapping_software

To see an example of how FreeMind (one of the free programs) works, see:

http://freemind.sourceforge.net/PublicMaps.html

Kristen Moore reminded me yesterday, “You code with your head,” a point that is particularly true in qualitative studies. Our research questions, ideological stance, the issues important to the field, and the work that previous researchers have done with similar data all contribute ideas for coding. Then, as we work through the data, we constantly perform maneuvers such as the following:

  • place data into piles
  • reduce the data
  • try out themes
  • name the themes that seem to stick, giving operational definitions so that others can locate the themes in this or other data
  • make sure that all the needed data has categories it fits into
  • have others check your sorting (coding)
  • name the activity (this is a story about students. . . or technology. . . or work. . . or assessment. . . or)
  • think about whether this is the unit (or units) of analysis that can link the questions, ideology, issues, theories, with the data
  • re-sort

And, of course, we repeat some or all of these moves for what seems like forever.
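
As a small sketch of the bookkeeping behind a few of these moves, namely placing data into piles and making sure all the needed data has a category it fits into (the segments, codes, and helpers below are invented, not from any particular study):

```python
# A sketch of the bookkeeping behind "place data into piles" and "make sure
# that all the needed data has categories it fits into." The segments and
# codes are invented for illustration.
from collections import defaultdict

segments = [
    "Student describes revising after a writing-center visit.",
    "Student mentions using track changes to respond to comments.",
    "Student talks about deadline pressure.",
]

# A first pass at coding: segment index -> list of theme codes.
coding = {
    0: ["revision", "writing center"],
    1: ["technology", "revision"],
    # segment 2 not yet coded
}

def sort_into_piles(segments, coding):
    """Group segment text under each theme code (the 'piles')."""
    piles = defaultdict(list)
    for idx, codes in coding.items():
        for code in codes:
            piles[code].append(segments[idx])
    return dict(piles)

def uncoded_segments(segments, coding):
    """Flag data that still has no category it fits into."""
    return [s for i, s in enumerate(segments) if not coding.get(i)]

print(sort_into_piles(segments, coding))
print(uncoded_segments(segments, coding))
```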

This coding fever can also start more innocuously, rather than being embedded in questions, theory, and ideology. Alex, for example, brought some data she had gathered to class. “It took me a long time to code all these gamer podcasts distributed by iTunes. I hope I can get more out of the work than just the gender.” We poked through the considerable work she had done and started generating other categories out of questions. Some of those were:

  • what stereotypes are there about women gamers, and do these podcast typologies confirm or surprise them? (e.g., women are more likely to do fan-like podcasts)
  • are women podcasters newcomers to gaming podcasts? (look at dates of start/subscribers/frequency of broadcast)
  • what typologies exist among these podcasts and how do genders fall out by type?

Often, we find ourselves moving back and forth among questions and data. This should not be so surprising, as a qualitative study normally has to build its infrastructure.

When asked to comment on the example, Alex focused on the importance of seeing data in different ways:

“what I mean is, before we talked about the types of questions I could ask of my data (what does the data show about the circumstances that gives rise to variations in the gender makeup of gaming podcasts), I had total tunnel vision. I collected the data to answer a pretty simple question. If I hadn’t opened up a space to see what else the data could tell me, I would have missed the most interesting part of my research so far. What is key then about coding is making sure you do draw on all those things you mention (experience, the field, colleagues, etc) as a way of *seeing* differently. I guess that I’m saying is that I am starting to wonder if the importance of coding has nothing to do with the coding at all. Rather, it has to do with finding new ways to see the phenomena present (though perhaps hidden) in the data.”

Well said, Alex Layne.

To pursue this topic further:

One of the best guides for coding educational studies (particularly if they have institutional dimensions) is

  • Matthew B. Miles and Michael Huberman, Qualitative Data Analysis: An Expanded Sourcebook (2nd edition). Sage, 1994.

To learn more about seeing the data in different ways, read chapters 4, 6, and 7 in

  • Patricia Sullivan and James Porter, Opening Spaces. Praeger, 1997.