Monday, August 15, 2011

Free Speech and Academic Engagement

Dan Richard

Office of Faculty Enhancement
drichard@unf.edu


One of the most positive aspects of the academic life is academic freedom. Faculty at colleges and universities are afforded a considerable amount of freedom in the syllabi they construct, the topics they address in the classroom, and the things they say in professional and public arenas.

A recent court case involving a faculty member at Northern Illinois University tests the boundaries of that freedom. An article in the Chronicle summarizes the case and the response from university administrators.

I would like your comments about this case. Have you ever felt concerned for your job because of the things you have said in public? Should there be limits to faculty speech when those faculty are serving in a professional capacity?

Monday, January 18, 2010

Tilting at Silos



Dan Richard

Office of Faculty Enhancement
drichard@unf.edu

Do you work in a silo?
In higher education research, a silo is an enclosed community of scholars who focus on research problems with a singular orientation. Physicists might study global warming with other physicists, for example, or sociologists might study poverty with other sociologists. In a recent article published in Inside Higher Ed (http://www.insidehighered.com/news/2010/01/18/silos), Steve Kolowich discusses the impact of these silos on the coordination of resources at institutions of higher education.

The benefits of working in silos include the ability to progress quickly in one direction, facilitated by a common language and perspective. Very focused programs can secure funding from focused initiatives with less competition, and focused departments can attract quality students who are seeking out signature, productive programs.

Many problems like global warming and poverty, however, are complex. Funding sources are increasingly recognizing this complexity and are interested in funding projects that take an interdisciplinary approach (e.g., NIH and NSF).

One of the major challenges in moving toward interdisciplinary work is the established culture at the institution. Kolowich points out that promotion and tenure guidelines sometimes do not recognize interdisciplinary work on the same level as research by a single author.

What are your thoughts?
Does UNF encourage interdisciplinary work?
Should UNF move toward recognizing and supporting interdisciplinary work, whether to secure extramural funding or to better coordinate IT infrastructure?
Should we stay in our silos to promote focused, signature programs?

Click the "Post a Comment" link below.

Monday, March 30, 2009

What You Can Do for Your Country





Carolyn Williams
Department of History
cwilliam@unf.edu


"Ask not what your country can do for you. Ask what you can do for your country."
As we watch the government struggle to meet enormous national and international challenges, we citizens should rise to the call issued by the newly elected president, John F. Kennedy, in 1961. We should be coming up with strategies and creating resources of our own. Of course, we at the university have a great advantage. Therefore, I suggest the following: we can create our own think tank where significant research and productive discourse take place to produce ideas that can be applied to local situations as well as to the global sphere. To paraphrase another prominent Irish Catholic political leader from the past, not only are all politics local, but so are the problems, and likewise the solutions. We should enlist in a citizens’ army committed to saving the nation by making our world a better place.

Friday, February 13, 2009

Research for What?



Dan Richard

Office of Faculty Enhancement
drichard@unf.edu

Forgive me, but I am a researcher. I remember when I suffered through academic job interviews for assistant professor positions. I was often asked a somewhat strange but, from the perspective of the questioner, hopefully informative question: “If you had to choose only one, would you prefer to be only a teacher or only a researcher?” The interviewer then would look on with a quiet resolve, a Solomon-worthy stare. My answer: “A teacher.”

You might be surprised by my response given the start of my post, but my reasoning was (and still is) that the concept of research is such a part of my world-view, such a part of my values and ideals, that I would be a researcher no matter the context of the rest of my life. If I worked at McDonald’s, I would research the flipping of burgers or the crisping of fries. It is a part of who I am.

This is why a recent question at the ICRSLCE conference gave me pause. The leaders of the conference announced that the theme for next year’s conference would be “Research for What?” This question sparked a number of other questions for me. For now, let me focus on the central question and try to make some sense out of its focus and purpose.

“Research for What?” suggests that, whatever research endeavor we choose, the outcomes of our research should have some purpose or application, that someone should want to know about the outcomes of the research. I believe that every researcher, because of their own intrinsic interest in their work, naturally thinks that their research IS interesting AND important (see 1Richard et al., 2002), and that others would benefit from knowing about their research innovations and findings.

Of course, knowing the impact of one’s work is no easy task. Some publishing companies provide journal impact ratings (to evaluate the impact of articles that have been published within a particular journal), and newer metrics such as the h-index attempt to measure the scholarly impact of the entirety of one’s work. There are citation counts, invited works, and even Google rankings, as well as a myriad of other techniques that researchers use to evaluate the scholarly impact of their work.
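
For the curious, here is a minimal sketch of how an h-index can be computed from a list of per-paper citation counts; the function and the numbers are my own illustration, not a metric tied to any particular publisher or society:

def h_index(citations):
    # Largest h such that at least h papers have h or more citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Seven hypothetical papers with these citation counts yield an h-index of 3.
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3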

But the impact and relevance that these researchers, educators, and community scholars were referring to would not likely be captured by such narrow, bean-counting measures. They were talking about the relevance and impact that one’s research has on the community (any community). So often, in the pursuit of knowledge, of recognition, or of tenure, scientists focus on making discoveries rather than on making a difference. The “new discovery” is valued in science so much that it often comes at the expense of relevance, impact, and importance to the broader community.

This idea is a challenge to my earlier commitment to research. I conduct research because it has value to ME. I do research that is meaningful to ME. I, that’s right, I enjoy research. The question “Research for What?” revealed to me how selfish I am when it comes to research. If the project makes sense to me or to my career, then I will pursue it. How egocentric!

Don’t get me wrong. I am not suggesting that researchers should give up basic research or abandon their passion for the work they do. I am not suggesting that we offer up quality, basic research as some altruistic sacrifice for the god of community.

I am suggesting that researchers, especially those who love what they do, are at risk of doing research that is so relevant to themselves and so irrelevant to anything else, that the quality of the research is undermined. I am suggesting that applied research, research that is informed by purpose, including community purpose, has a much greater chance of being quality research than projects that focus simply on a faculty member’s research agenda, career path, or personal vim and vigor.

Perhaps one conclusion from all of this questioning and personal reflection is that quality research IS research that has value to someone other than oneself, and the highest quality research is research that has value to many other people who share a common fate, those who are attempting to improve themselves -- a community.

So now I have a meaningful question I can ask myself, my colleagues, and my students that will help increase the quality of our research -- “Research for What?”

1Richard, F. D., Bond, C. F., Jr., & Stokes, J. J. (2002). “That’s Completely Obvious . . . and Important”: Lay Evaluations of Social Psychological Findings. Personality and Social Psychology Bulletin, 27, 497-505.

Saturday, November 29, 2008

Clickers in Mind

Adam Carle

Department of Psychology
adam.carle@unf.edu

Lately, you may have noticed that UNF has begun “clicking” its way into the new century. Students use clickers (also known as classroom response systems) to answer a professor’s questions, and the clicker system provides the professor with real-time feedback about the class’ answers. Theoretically, these systems allow students to engage more fully with the material and learn better, especially in large classes. Based on this theory, UNF and its students have invested substantial resources in clickers. For example, during Spring 2007, 1,969 students used clickers. The clickers cost students $20 each, and students spend another $13 each semester to register them. So, in Spring 2007, students spent approximately $65,000 on clickers at UNF.
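
A quick back-of-the-envelope check of that figure, assuming each of the 1,969 students both bought a clicker and paid the registration fee that semester:

students = 1969          # clicker users in Spring 2007 (from the post)
purchase_price = 20      # one-time cost of a clicker, in dollars
registration_fee = 13    # per-semester registration fee, in dollars

total = students * (purchase_price + registration_fee)
print(total)             # 64977 -- roughly the $65,000 cited above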

Astoundingly, little to no research has examined whether students actually learn better when using clickers! Rest assured, research has shown that students and professors both enjoy using clickers, and students indicate that they learn better with clickers. But, until recently, no studies directly assessed whether students demonstrated higher achievement when using clickers. To cut a long story short, Mayer and colleagues (2009) recently addressed this problem. They compared college students in a class where students used clickers to answer 2 to 4 questions per lecture to two other groups of students: students in a class without clickers and students in a class without questions. Though their design limits strong causal conclusions, their results showed that students in the question-clicker class scored significantly higher on course exams than students in the other classes. Great news, considering we’ve already asked students to spend a tremendous amount of money on these systems (and UNF has, too).

However, like most educational tools, the tool’s effectiveness depends on informed implementation. In the same way that simply handing students a book (or a computer for that matter) won’t directly lead to learning, simply handing students clickers won’t necessarily result in increased learning. Mayer et al. (2009) based their technique on a large educational literature. They built on active and generative learning methods that show that students learn better if they: answer conceptual questions while learning, practice taking tests, and engage in self-explanation while learning. Thus, they had all students in the clicker-question classroom respond to questions. Then, one student explained their answer to the class. Finally, the professor explained how to answer the question, and why.

What does this mean for you and your students? It means a fledgling research literature has begun to demonstrate that clickers can lead to increased learning. It means that you might boost your students’ achievement using clickers if you adopt clickers carefully. You should build on your own and your course’s strengths and incorporate clickers in a way that fosters generative and active learning. You should develop your clicker use with the educational literature in mind and utilize evidence-based teaching practices. Finally, you should collect data. See how (or if) your class’ averages change across time as you implement the clicker system. Base your teaching practices on evidence. After all, you wouldn’t buy a $65,000 car without data, or would you?
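
As one sketch of what that data collection could look like (the scores below are placeholders, and a Welch two-sample t-test is just one reasonable choice for the comparison, not a prescribed method):

# Compare exam averages from a pre-clicker semester with a clicker semester.
from statistics import mean
from scipy.stats import ttest_ind

pre_clicker = [72, 68, 81, 75, 70, 77, 74, 69, 80, 73]   # placeholder exam scores
with_clicker = [78, 74, 85, 79, 76, 82, 77, 73, 88, 80]  # placeholder exam scores

t_stat, p_value = ttest_ind(with_clicker, pre_clicker, equal_var=False)  # Welch's t-test
print(round(mean(pre_clicker), 1), round(mean(with_clicker), 1))
print(round(t_stat, 2), round(p_value, 3))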

Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34, 51-57.

Monday, October 6, 2008

Research in Mind

Adam Carle

Department of Psychology
adam.carle@unf.edu

Have you ever tried to define research? As UNF continues to grow and change, research has become an increasingly discussed aspect of UNF’s institutional setting. Our mission statement notes that we “support and recognize research and creative endeavor as essential university functions.” And, heck, I’m told it’s part of our responsibility as faculty members to conduct research. Given that I regularly conduct research, I thought it might be fun to try and define research. I thought I’d try and define research in a way that encompassed as many of the academic fields and sciences as possible. It turns out, I’m having an awfully hard time! In this installment of the blog, I thought I’d ask for your help, and, hopefully, generate some discussion. Please, read my (bad) attempt at defining research below. Then, tell me what you think. What does it miss? What does it overstate? How does it relate to what you do (or don’t do) here at UNF? I can’t wait to read what you have to say.

Research, broadly defined, encompasses systematic and organized approaches to accurately describe, predict, and understand the heterogeneous aspects of existence. These efforts include basic and/or applied settings, quantitative and/or qualitative efforts, and cover the vast array of academic disciplines. Regardless of field, these endeavors use a set of planned procedures to examine their topic and arrive at accurate conclusions.

Programmatic research differs in that it typically seeks to break a large research topic into smaller, more manageable pieces. This often allows more stringent control and more detailed, fine-grained analyses. Programmatic research addresses each piece sequentially in an effort to build an encompassing and coherent picture from the smaller studies’ findings. Programmatic research allows investigators to incorporate their findings into the discipline at large and build upon others’ research. Programs of research support sustained, long-term, focused efforts.

Sponsored research refers to research funded by an external organization. Because of its nature, programmatic research often needs material resources that an investigator or institution cannot easily or continuously supply. Relatedly, sponsors may lack sufficient human capital (knowledge or otherwise). As a result, sponsors, investigators, and investigators’ institutions enter into partnerships to overcome these boundaries. Research and sponsorship often go hand in hand, and sponsorship suggests that a topic and method have achieved recognition, especially when the funding process includes peer review. Nevertheless, not all research needs funding to proceed, nor must funding exist to consider an endeavor research. As a result, though investigators often require sponsorship to conduct research, sponsorship itself proves neither necessary nor sufficient as a definition of research.

Monday, September 29, 2008

A Crisis of Ignorance





Carolyn Williams
Department of History
cwilliam@unf.edu

When the enormity of the economic problem we face began to materialize for me, the first thoughts that popped into my head were “greed” and “lack of oversight.” Now that we have been bombarded with words and images of the scramble for a solution, the other phrase that keeps rolling around in my head is “crisis of ignorance.” From the top down, it seems that no one really knows what is going on or what to do.

So maybe this is the big “wake up call” for us to do what the founders urged, that is, to be informed and engaged citizens. If all of us took more time to understand economic issues, from our daily lives to the big engine that drives the nation, we would make better decisions about our personal finances, gain a real comprehension of government policies, ask pointed questions, demand clear and substantive answers, and hold our representatives to greater accountability.

As we try to avoid a catastrophe, let’s follow the advice of the Founding Fathers, eternal vigilance, to make sure our rights to life, liberty, and property are preserved and protected. Faculty and others with knowledge, insight, and expertise here at the university can contribute to creating a new, vigilant citizenry that can be of use in the present crisis and help prevent future disasters.

Saturday, July 26, 2008

Answers In Mind


Adam Carle

Department of Psychology
adam.carle@unf.edu


Do you believe in common knowledge? I’m beginning to feel that I don’t. It seems a wealth of common knowledge about test taking exists, but the educational psychologist in me knows that too little research has empirically examined much of the common lore. As we enter the final week of summer session, I’m sure a few of our students have tests on their mind. As they study and prepare, many of our students ask us for advice.

“How should I study for the final?” (Don’t cram, space your studying)

“How many questions will it have?” (Enough to reliably and validly measure your achievement)

“Do you have specific advice on how to take the final?” (?)

Common testing lore (among students and professors) would have you answer that last one something like, “Don’t change your answer on multiple-choice tests. Stick with your initial impression.” But, recently, some investigators decided to put this advice to the test. It turns out that this is ill-founded advice. In a series of elegant experiments, Kruger, Wirtz, and Miller (2005) showed that, on average, students who changed their answers on tests fared no worse than if they’d kept their initial answers. In fact, switched answers (when students doubted their initial answer) led, on average, to better scores. This goes against what many of us have learned and what many of us believe we’ve experienced.

How can this happen? Relatively simply. Kruger et al. suggest (and go on to show) that students more easily remember times when they changed a correct answer to an incorrect answer. As a result, they overestimate the number of times this occurs. They often don’t notice when changing an answer led to a correct response. Why should they? Most of us attend more strongly to negative feedback. Students do too. Because students painfully feel switches to wrong answers, students quickly and easily remember their occurrence. They overestimate their prevalence, misestimate their own experience, and subsequently choose a poor test-taking strategy.

What can we do about this? The obvious. We can start telling our students that, when they doubt their answer, they should change it. We can start by telling our students that their initial impression may be wrong. We can start giving our students empirically grounded advice. Unfortunately, Kruger et al. also showed that students didn’t want to change their test-taking strategy, even when professors explicitly suggested the better strategy (switch) and even when professors showed students evidence for switching’s effectiveness. Kruger et al. suggest that, in the end, students find a lifetime’s worth of misleading personal experience hard to conquer.

However, I’m left wondering. What if all their professors gave this advice, not just one? What if they gave it regularly, not just once? What might happen then? Let’s find out.

Kruger, J., Wirtz, D., & Miller, D. T. (2005). Counterfactual thinking and the first instinct fallacy. Journal of Personality and Social Psychology, 88, 725–735.

Friday, June 20, 2008

Communication in Mind

Adam Carle

Department of Psychology
adam.carle@unf.edu


Did you hear the one about the two professors who walked into a bar? The third one ducked…. I know. I know. I can hear you groaning now. Silly me. I love that joke. But, I didn’t write it for that reason alone (though I might do something like that). Rather, I wrote it to make a point. Did you know I had started a joke? If so, how? If not, did you find yourself suddenly concerned for my welfare? More so than usual? (Did you know that I meant that as a joke?)

Why should you care? Well, it turns out, we humans don’t communicate nearly as well as we think we do. In a series of intriguing experiments, Justin Kruger and his colleagues (2005) explored some of the limits of our communicative abilities. Generally, we have a tendency to overestimate the extent to which people understand our expressions, and we simultaneously have a tendency to overestimate our ability to understand others’ communications. They found that this problem becomes particularly pronounced in email communication, i.e., communication where we don't see the other person. To simplify, they found that senders substantially overestimated the extent to which recipients would realize they’d written a joke in an email. And, on average, recipients substantially overestimated their ability to perceive jokes and humor sent via email. Thus, people often failed to realize that someone had sent them a joke. Likewise, people often failed to realize that the implicit joviality of their email had not translated across the electronic medium. Jokes and sarcasm met in one nasty accidental brawl.

So, when communicating via email, we don’t do so well. We think people know what we mean, and we think we know what other people mean. And, here comes the kicker, we respond in kind. We get angry because a colleague, friend, or professor has sent us a ridiculously mean or sarcastic email, never realizing that they intended their statement as a joke or gentle prod. Moreover, the colleague, friend, or professor has no idea that we, the student perhaps, missed their joke and that we’ve responded angrily because we misinterpreted their email.

What does this mean for you? As modern academics we do a tremendous amount of communication without direct interaction. As we move through the summer months, many of us will probably communicate with our peers and students via email more often than usual. Work like Kruger’s suggests that we regularly fail to convey and perceive the jovial nature of communication. We know we’ve told a joke. Surely the recipient knows it too. Moreover, it shows we frequently misinterpret others’ attitudes and emotions that they express to us via email and that we rarely realize we’ve misinterpreted the communication. We know how we would feel if we wrote that; they must feel the same way too. All this work suggests that we should work harder to explicitly state the nature of our communications. When we make a joke, we should preface our statement with a disclaimer noting as much. Moreover, when we receive ‘one of those’ emails from students and we think, ‘they can’t possibly mean that seriously,’ perhaps they don’t! We should ask others what they meant. We should work to clarify our communications as much as possible. And, I suggest, we should cut each other a little slack. We should assume the best in our colleagues and students. We should suspect that the angry note or stunningly awkward statement reflects the inadequate medium of email as communication rather than attributing it to a lack of character on the other’s part. After all, someone needs to pick us up after we walk into that bar....

Kruger, J., Epley, N., Parker, J. & Ng, Z. (2005). Egocentrism Over E-Mail: Can We Communicate as Well as We Think? Journal of Personality and Social Psychology, 89, 925–936.

The “Good” Teacher



Dan Richard

Office of Faculty Enhancement
drichard@unf.edu

I have never been one for labels. Something just feels wrong about putting a label on someone, as if everything they are can be summed up in one word, one category. At dinner parties, when people ask what I do for a living, I often tell them that I am a scientist or that I teach, just to avoid the strange, wide-eyed looks and awkward pauses that follow when I tell people I am a psychologist. It seems that the category of “psychologist” carries with it a host of ideas and responses that, by its invocation, immediately turns every learning theorist, comparative psychologist, biopsychologist, developmentalist (and social psychologist) into a couch-toting, ego-peddling Freudophile.

These cocktail-party experiences, coupled with a symphony of adolescent label-drama (I will spare you the details) and my familiarity with 1Henri Tajfel’s work on categorization and group behavior, have left me with both a healthy respect for and a general dislike of using category labels for people.

It was a surprise to me, then, at a checkout counter in a large membership warehouse store, that I was hoping, anticipating, and even wishing that someone would assign such a label to me.

It started as it usually does, with a quizzical look, a feeling of familiarity and anticipation that comes from an area of your being that is somehow secret and mysterious (I know this person from somewhere, but where?). As these feelings typically resolve themselves, I realized that I was standing face-to-face with a former student. I teach large sections of Social Psychology, so I have many occasions to come in contact with former students. During these times, I never know if they will be happy to see me (i.e., they did well in my class) or if they would rather avoid me (i.e., they did not do so well). The person at the checkout counter was smiling. This was a good sign.

We exchanged recognition statements – “I had you for my psychology class,” and “Yes, social psychology.” Now the real test begins. What will this student say about the class? What will the student say about me? Will I hear the words I so long to hear? Will I be categorized and labeled by this person?

Then, I heard the words, sweet and enchanting: “You’re a good teacher.” He must have noticed the change in my smile, from awkwardly apprehensive to graciously gleaming. He put me in the category of good teachers. All of the ISQ (course evaluation system) ratings and ratemyprofessor.com comments could not accomplish so much in such a short period of time as that one LABEL. Why did this arbitrary label (a label that I am sure the student did not give a second thought) mean so much to me?

Well, it may have something to do with how the brain works. Some models of the brain suggest that concepts operate as nodes (or connecting points) and that each concept is linked (through neural connections) with other concepts. One of the most important and frequently accessed concepts in the human brain is “the self.” We know much about who we are because we spend a whole lot of time with ourselves. We often learn about new concepts in the world by figuring out how our “selves” relate to those concepts (see 2Kihlstrom, Beer, & Klein, 2003, for a discussion of how this works). One thing that people know about who they are is that they are “good.” Humans place a high value on knowing that, as individuals, we are connected to the concept of “good.” I guess that is why this student’s comment meant so much to me. I, and most other people, want to be categorized as “good.”

So, when you receive those not-so-positive ISQ (student rating) results, and you read those not-so-flattering ratemyprofessor.com comments, remember that the uneasy feeling you have is just your brain having trouble connecting your self-concept with those concepts. Too often, negative comments or evaluations can have such a negative impact on our emotions and motivation that we fail to take advantage of constructive criticism. If it helps to keep you motivated during ISQ season, just think back on those times when students made you feel that the world made sense by saying, “You’re a good teacher.”

References
1Tajfel, H., & Forgas, J. P. (2000). Social categorization: Cognitions, values and groups. New York: Psychology Press.

2Kihlstrom, J. F., Beer, J. S., & Klein, S. B. (2003). Self and identity as memory. New York: Guilford Press.

Greenwald, A. G., Banaji, M. R., Rudman, L. A., Farnham, S. D., Nosek, B. A., & Mellott, D. S. (2002). A unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept. Psychological Review, 109, 3-25.