Category: Tools for Analyzing Data


“Questioning the Author” is a method of curriculum development used to help students interrogate a text. The classroom talk portion of this method provides six Discussion Moves that allow the instructor to provoke and foster student discussion. The Discussion Moves are as follows:

  • Marking: Drawing attention to an idea a student has raised and emphasizing its importance.
  • Revoicing: Interpreting what students are struggling to express and rephrasing the ideas so that they can become part of the discussion.
  • Turning back: Turning responsibility back to students so that they reconsider, elaborate on, or reconnect with the text in question.
  • Recapping: Summarizing the discussion so far in order to transition to another topic or point.
  • Modeling: Thinking aloud to show students how the instructor's mind actively interacts with the ideas in the text.
  • Annotating: Providing information the students might not have.

When applied as a coding scheme, these moves can be used to analyze an instructor's discursive strategies, making it possible to assess what provokes the most "successful" discussion. The scheme can also be compared against students' written interrogation of a text, since written composition can incorporate very similar moves.
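To make the coding-scheme use concrete, here is a minimal Python sketch of tallying Discussion Moves across a coded transcript. The utterances and the codes assigned to them are invented for illustration, not drawn from Beck and McKeown's data.

```python
# Tally how often each Discussion Move appears in a coded transcript.
from collections import Counter

MOVES = {"marking", "revoicing", "turning_back", "recapping", "modeling", "annotating"}

# Each instructor turn is paired with the move(s) a coder assigned to it (hypothetical).
coded_transcript = [
    ("That's an important point about the narrator.", ["marking"]),
    ("So you're saying the author leaves the motive unclear?", ["revoicing"]),
    ("What does the text itself say about that?", ["turning_back"]),
    ("So far we've agreed the tone shifts in chapter two.", ["recapping"]),
]

def tally_moves(transcript):
    """Count occurrences of each Discussion Move in a coded transcript."""
    counts = Counter()
    for _utterance, codes in transcript:
        for code in codes:
            if code not in MOVES:
                raise ValueError(f"unknown code: {code}")
            counts[code] += 1
    return counts

print(tally_moves(coded_transcript))
```

Frequencies like these could then be set against some measure of discussion quality to ask which moves precede the most productive student talk.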

Beck, Isabel L. and Margaret G. McKeown. (2007). “How Teachers Can Support Productive Classroom Talk: Move the Thinking to the Students.” In Rosalind Horowitz (ed.), Talking Texts (207-220). Mahwah, NJ: Lawrence Erlbaum Associates.

Batt, Thomas A. “The Rhetoric of the End Comment.” Rhetoric Review 24.2 (2005): 207–23.

This article takes a close look at two end comments on student writing. Batt begins with a comprehensive review of the conversations and research about end comments. He then presents two samples and conducts an in-depth analysis of the strategies used within the comments. He connects the comments to larger discussions of paper ownership, directive versus non-directive commenting, and evaluation versus facilitation, and he even draws on classical rhetorical concepts. His analysis and discussion are both useful and thought provoking: useful because he demonstrates effective models for responding, and thought provoking because he connects the models with theoretical concepts and ideas.

In a multi-year FIPSE grant, five colleges [Columbia College Chicago, CSU-Long Beach, CSU-Sacramento, Florida Gulf Coast U, and U of Delaware] developed a rubric for assessing writing, intended to work as a grading tool that would span a range of institutions. "An Inter-institutional Model for College Writing Assessment" (CCC, 2008) includes a description of their versions of a rubric for assessing papers, the findings of their implementation, a rationale for changes to their holistic scoring rubric, and their final version of that rubric.

After trying a version built from the best practices of assessment, and contemplating the changes needed to implement the rubric across a range of schools, the authors, Neil Pagano, Steve Bernhardt, Dudley Reynolds, Mark Williams, and Kilian McCurrie, identified five categories for assessing student writing:

  • task responsiveness
  • engagement with the text(s)
  • development
  • organization
  • control of language

Their rubric for these categories, intended to help assessors rate writing on a six-point scale, was implemented at the five colleges above. No validation of the rubric was attempted.

Initial and final rubrics are included as appendices to this article.

Citation:

Pagano, N., Bernhardt, S.A., Reynolds, D., Williams, M., & McCurrie, M.K. (2008). An inter-institutional model for college writing assessment. College Composition and Communication, 60 (2), 285-320.

Discourse chunking is a simple way to segment dialogues according to how dialogue participants raise topics and negotiate them.  Discourse chunking gives information about the patterns of topic raising and negotiation in dialogue, and where an utterance fits within these patterns.

A simple example is the opening-negotiation-closing chunk of a dialogue, which looks like this:

Hello: The dialogue participants greet each other. They introduce themselves and give their affiliation, or the institution or location they are from.

Opening: The topic to be negotiated is introduced.

Negotiation: The actual negotiation, between opening and closing.

Closing: The negotiation is finished (all participants have agreed), and the agreed-upon topic is (sometimes) recapitulated.

Good bye: The dialogue participants say good bye to each other.

This particular chunk is often repeated in a cyclical pattern. The act of beginning a topic of negotiation defines the opening by itself, and the act of beginning a new negotiation entails the closing of the previous one. However, this chunk can be disrupted in interesting ways.

Chunking rules
The chunking rules are as follows:
1. The first utterance in a dialogue is always the start of chunk 1 (hello).
2. The first INIT or SUGGEST or REQUEST_SUGGEST or EXCLUDE in a dialogue is the start of chunk 2 (negotiation).
3. INIT, SUGGEST, REQUEST_SUGGEST, or EXCLUDE marks the start of a subchunk within chunk 2.
4. If the previous utterance is also the start of a chunk, and if it is spoken by the same person, then this utterance is considered to be a continuation of the chunk, and is not marked.
5. The first BYE is the start of chunk 3 (good bye).
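The five rules lend themselves to a simple implementation. Below is a rough Python sketch, assuming each utterance is already tagged with a speaker and a dialogue act; edge cases (for example, an opener suppressed by rule 4 before chunk 2 has begun) are left unhandled.

```python
# Dialogue-act labels follow the tags named in the chunking rules.
OPENERS = {"INIT", "SUGGEST", "REQUEST_SUGGEST", "EXCLUDE"}

def chunk_starts(utterances):
    """For each (speaker, act) utterance, return the chunk label it starts
    ('hello', 'negotiation', 'subchunk', 'goodbye') or None."""
    starts = [None] * len(utterances)
    seen_opener = False
    seen_bye = False
    for i, (speaker, act) in enumerate(utterances):
        if i == 0:
            starts[i] = "hello"            # rule 1: first utterance starts chunk 1
            continue
        if act == "BYE" and not seen_bye:
            starts[i] = "goodbye"          # rule 5: first BYE starts chunk 3
            seen_bye = True
            continue
        if act in OPENERS:
            # rule 4: same speaker continuing a just-started chunk is not marked
            prev_speaker = utterances[i - 1][0]
            if starts[i - 1] is not None and prev_speaker == speaker:
                continue
            if not seen_opener:
                starts[i] = "negotiation"  # rule 2: first opener starts chunk 2
                seen_opener = True
            else:
                starts[i] = "subchunk"     # rule 3: later openers start subchunks
    return starts

utts = [("A", "GREET"), ("B", "INIT"), ("A", "SUGGEST"), ("A", "SUGGEST"), ("B", "BYE")]
print(chunk_starts(utts))  # ['hello', 'negotiation', 'subchunk', None, 'goodbye']
```

The dialogue-act tags themselves would come from a prior annotation pass; the sketch only segments an already-tagged dialogue.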

Possible Uses:

Discourse chunking can help a researcher think about data collected via interviews (spoken or written), ethnography, or case study. It allows a researcher to break discourse apart in a manageable, usable manner. The goal is to gain insight into how individuals construct dialogue, which could lead to new theories or insights regarding the phenomena a researcher is after.

Midgley, T. Daniel. (2003). Discourse chunking: A tool in dialogue act tagging. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Volume 2.


Kristen Moore reminded me yesterday, “You code with your head,” a point that is particularly true in qualitative studies. Our research questions, our ideological stance, the issues important to the field, and the work that previous researchers have done with similar data all contribute ideas for coding. Then, as we work through the data, we constantly perform maneuvers such as:

  • place data into piles
  • reduce the data
  • try out themes
  • name the ones that seem to stick, giving operational definitions so that others can locate the themes in this or other data
  • make sure that all the needed data has categories it fits into
  • have others check your sorting (coding)
  • name the activity (this is a story about students. . . or technology. . . or work. . . or assessment. . . or)
  • think about whether this is the unit (or units) of analysis that can link the questions, ideology, issues, theories, with the data
  • re-sort

And, of course, we repeat some or all of these moves for what seems like forever.
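One of the maneuvers above, having others check your sorting (coding), can be made concrete with a simple agreement check. This Python sketch computes raw percent agreement between two coders; the category labels are hypothetical, and a fuller analysis would use a chance-corrected statistic such as Cohen's kappa.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items to which two coders assigned the same category."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must rate the same non-empty set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes two readers assigned to the same five excerpts.
a = ["students", "technology", "work", "assessment", "students"]
b = ["students", "technology", "assessment", "assessment", "students"]
print(percent_agreement(a, b))  # 4 of 5 items agree -> 0.8
```

Disagreements flagged this way are exactly the places where operational definitions need sharpening before the next re-sort.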

This coding fever can also start more innocuously, rather than embedded in questions, theory, and ideology. Alex, for example, brought some data she had gathered to class. “It took me a long time to code all these gamer podcasts distributed by iTunes. I hope I can get more out of the work than just the gender.” We poked through the considerable work she had done and started generating other categories out of questions. Some of those were:

  • what stereotypes are there about women gamers, and do these podcast typologies confirm or surprise them? (e.g., women are more likely to do fan-like podcasts)
  • are women podcasters newcomers to gaming podcasts? (look at dates of start/subscribers/frequency of broadcast)
  • what typologies exist among these podcasts and how do genders fall out by type?

Often, we find ourselves moving back and forth among questions and data. This should not be so surprising, as a qualitative study normally has to build its own infrastructure.

When asked to comment on the example, Alex focused on the importance of seeing data in different ways:

“What I mean is, before we talked about the types of questions I could ask of my data (what does the data show about the circumstances that give rise to variations in the gender makeup of gaming podcasts), I had total tunnel vision. I collected the data to answer a pretty simple question. If I hadn’t opened up a space to see what else the data could tell me, I would have missed the most interesting part of my research so far. What is key then about coding is making sure you do draw on all those things you mention (experience, the field, colleagues, etc.) as a way of *seeing* differently. I guess what I’m saying is that I am starting to wonder if the importance of coding has nothing to do with the coding at all. Rather, it has to do with finding new ways to see the phenomena present (though perhaps hidden) in the data.”

Well said, Alex Layne.
To pursue this topic further:
One of the best guides for coding educational studies (particularly if they have institutional dimensions) is
  • Matthew B. Miles and Michael Huberman, Qualitative Data Analysis: An Expanded Sourcebook (2nd edition). Sage, 1994.
To learn more about seeing the data in different ways, read chapters 4, 6, and 7 in
  • Patricia Sullivan and James Porter, Opening Spaces. Praeger, 1997.

Informating

Informating is a term coined by Shoshana Zuboff to help researchers and scholars think about the process of translating digital inscriptions or measurements into usable information.

Possible Uses:
Although it is not a tool per se, the theory of informating can be used to build categories. For example, a researcher can assess and classify multiple instances of the process of translating digital inscriptions into information, which may allow the researcher to understand how different writers in different contexts manipulate the process.

Zuboff, Shoshana. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books.

Miall and Kuiken cite Purves’s Literary Transfer and Interest Measure in the development of their Literary Response Questionnaire (LRQ), which provides scales measuring seven different aspects of readers’ orientation toward literary texts: 1) Empathy, 2) Imagery, 3) Vividness, 4) Leisure Escape, 5) Concern with Author, 6) Story-Driven Reading, and 7) Rejection of Literary Values. It consists of 68 items, all positively worded.

Possible Uses:

The LRQ can be used to measure readers’ affective interaction with any text in any of the categories mentioned above.  For example, the empathetic attachment of two given prose styles could be compared.
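Scoring a scale like the LRQ is straightforward: each subscale score is the mean of its items. The Python sketch below assumes a hypothetical item key and 1-5 Likert ratings; the actual LRQ item key appears in Miall and Kuiken (1995).

```python
def scale_scores(responses, key):
    """Average a respondent's Likert ratings within each subscale.

    responses: dict mapping item number -> rating (e.g., 1-5)
    key: dict mapping scale name -> list of item numbers on that scale
    """
    return {
        scale: sum(responses[i] for i in items) / len(items)
        for scale, items in key.items()
    }

# Hypothetical 6-item key and one respondent's ratings.
key = {"Empathy": [1, 4], "Imagery": [2, 5], "Leisure Escape": [3, 6]}
responses = {1: 5, 2: 3, 3: 4, 4: 4, 5: 2, 6: 4}
print(scale_scores(responses, key))
```

Comparing two prose styles would then mean comparing these subscale means across the two groups of readers.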

References:

Miall, D. S., & Kuiken, D. (1995). Aspects of literary response: A new questionnaire. Research in the Teaching of English, 29 (1), 37-58.

The Response Preference Measure (RPM), developed by Purves in 1968, is a 20-item questionnaire focused on measuring affective response to literature. The responses are keyed into subsets of four major categories: 1) engagement/involvement, 2) perception, 3) interpretation, and 4) evaluation. Zaharias and Mertz (1983) modified the questionnaire by using a Likert scaling technique. They further assessed the four components of reader response it measured, confirming its validity.

Possible Uses:

The RPM can be used to compare two texts (and/or prose styles) from an affective standpoint. Participants answer the questionnaire after a reading to measure how, why, and to what degree they affectively interact with a given work.

References:

Purves, A. (1973) Literature Education in Ten Countries. An Empirical Study. New York: Wiley & Sons.

Zaharias, Jane and Maia Pank Mertz. (1983) Identifying and Validating the Constituents of Literary Response through a Modification of the Response Preference Measure. Research in the Teaching of English, 17 (3), 231-241.

Torkzadeh and van Dyke developed and tested a version of the self-efficacy test aimed at measuring individuals’ self-perceptions of their competency with the internet. Their 2001 study reports that they reduced a 24-item scale to 17 items based on testing its reliability and validity. That testing identified three conceptually distinct factors: surfing/browsing, encryption/decryption, and system manipulation.

Use: There’s a need for measures in this area, and several are available. This could be used in technical communication studies, but the language of the survey is too technical for general audiences. [note: We should look to see if there are updates to this measure that make the language less aimed at computer professionals.]

References:

Torkzadeh, Gholamreza, & van Dyke, Thomas P. (2001). Development and validation of an internet self-efficacy scale. Behaviour & Information Technology, 20 (4), 275-280.

Berzonsky’s ISI scale seeks to classify how people respond to decisions. The scale sorts people according to whether they process through information-orientation, normative-orientation, or diffuse-orientation, and most of Berzonsky’s research uses those three dimensions. The third revision (1992) is available for use (see link below); it adds commitment as a category.

Berzonsky describes the types of identity style in this way: “Adolescents who use an informational identity processing style actively seek out and evaluate self-relevant information. . . Those who utilize a normative processing style rely more automatically on the expectations and prescriptions of significant others. . . Adolescents who employ a diffuse-avoidant identity style procrastinate, delay, and attempt to avoid facing up to identity issues and conflicts as long as possible; their behavior is determined mainly by situational factors and hedonic cues” (2004, p. 213).

Sample questions:

  • I’ve spent a great deal of time thinking seriously about what I should do with my life. [info]
  • I’ve more-or-less always operated according to the values with which I was brought up.  [norm]
  • I’m not really thinking about my future now; it’s still a long way off.  [diff]
  • I have some consistent political views; I have a definite stand on where the government and country should be headed.  [commit]

Possible uses:

Because the ISI aims to connect identity with general ways of acting, it might be helpful to those who are studying group work.

Link to the 3rd revision of the survey (and it includes data about reliability and validity studies done on the measure):  http://w3.fiu.edu/srif/ArchivedPagesJK/Berzonsky/BerzonskyISI3.rtf

Berzonsky requests that anyone using the survey send him information via the Psychology Dept. at SUNY-Cortland.

References:

Berzonsky, Michael D. (1994). Self-identity: The relationship between process and content. Journal of Research in Personality, 28, 453-460.

Berzonsky, Michael D. (2004). Identity style, parental authority, and identity commitment. Journal of Youth and Adolescence, 33 (3), 213-220.

Berzonsky, Michael D. (1989). Identity style: Conceptualization and measurement. Journal of Adolescent Research, 4, 267-281.