http://chronicle.com/article/Its-Not-How-Much-Student-Data/125255/
November 4, 2010
It's Not How Much Student Data You Have, but How You Use It
Student-assessment reports feature tables, charts, and shining examples of data in action. According to this year's National Survey of Student Engagement, released on Thursday, the University of Nevada at Las Vegas had seen low marks for advising, so it opened an academic-success center. South Dakota's public colleges, worried about weak measures of "active and collaborative learning," had made plans for all students to get tablet PC's, and for faculty members to integrate them into coursework.
Such smiley snapshots come mostly from feedback the survey solicits from participating colleges. Of 643 survey participants last year, 29 percent responded to feedback requests. Whether the rest use or shelve their data, researchers don't know.
Stanley O. Ikenberry, for one, is skeptical. "On too many campuses, NSSE results seem to remain unexamined and without any material consequence," wrote Mr. Ikenberry, a former president of the University of Illinois and the American Council on Education, in the foreword to this year's report.
According to last year's Faculty Survey of Student Engagement, three-quarters of professors said their institutions were significantly involved in assessment projects of some kind; just a third found them useful.
That may be because colleges conduct assessment less for improvement than for accountability. They transmit student-engagement data to the Voluntary System of Accountability, an effort by more than 300 public colleges to provide information about life and learning on their campuses, and to state "dashboards" that display similar information. Both Nessie, as the student-engagement survey is known, and the Cooperative Institutional Research Program's Freshman Survey, the other major national student poll, report that many colleges are motivated by requirements from accreditors. In fact, accreditation is the main driver and use of all assessment, says the National Institute for Learning Outcomes Assessment, known as Niloa.
So institutions may often report results, but not use them to try to improve learning. Higher-education leaders call for "closing the loop," but with heaps of data, some colleges don't know where to begin.
Many just keep collecting. Each year, for example, they administer national or homegrown surveys of students; they gather academic-performance measures; they amass spreadsheets. Colleges in the habit of automatic assessment, experts worry, are compiling data indiscriminately.
"There is a strong tendency, particularly because of the need to do something visible, to say, 'Ready, shoot, aim,'" says Peter T. Ewell, a senior scholar at Niloa and vice president of the National Center for Higher Education Management Systems.
And it's hard to focus meaningfully on a mess of statistics. Mr. Ikenberry compares it to reading a phone book. Campus administrators—and faculty members, where they're involved—tend to look for something to pop out at them. If a college has collected data for general reporting purposes, assessment researchers and consultants say, results typically don't spur action unless a particular finding seems devastating.
Adequate Measures
But what is unsatisfactory, exactly, and what is good enough? On some measures, national benchmarks allow an institution to compare itself with peers, but often no norms exist. Without knowing what percentage of students should pass a certain remedial course, or how many should report satisfaction with a particular program, interpreting the data is tricky.
Concerns about methodology can also let data lie dormant. Statisticians have criticized student-engagement data because survey questions rely on long-term memory and relative terms like "often" or "very much," and experiments can rarely be randomized. (A college generally can't, for instance, compel some students to participate in a learning community and keep a similar population out.)
Without some compromises, assessment would be impossible, says Cynthia B. Tweedell, executive director of the Research Center in Adult Learning, a joint project of Indiana Wesleyan University and the Council for Christian Colleges and Universities. Take "value-added" assessment, which looks at the impact of a particular experience on students by measuring their performance or attitudes before and after. Colleges may opt to use simultaneous populations of freshmen and seniors, making a few statistical adjustments.
Even if an institution tried to poll the same students, as freshmen and again as seniors, some would have dropped out, others transferred in, and the semesters would have rolled by. "It's going to take you four years to do that study," Ms. Tweedell says. "We shouldn't wait that long to find out that our program is ineffective."
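As a rough illustration of that shortcut, here is a minimal sketch (not from the article; the numbers and the admission-test covariate are invented for illustration) of how a cross-sectional "value-added" comparison of simultaneous freshman and senior groups might be adjusted:

```python
# Minimal sketch of the cross-sectional shortcut to "value-added" assessment
# described above: instead of following one cohort for four years, compare
# today's freshmen with today's seniors and adjust for a background measure
# (here, a hypothetical admission-test score) so the gap is less likely to
# reflect who enrolled rather than what the college added.
from statistics import mean

# Hypothetical records: (assessment_score, admission_test_score)
freshmen = [(52, 1050), (61, 1180), (47, 990), (58, 1120), (55, 1080)]
seniors  = [(68, 1100), (74, 1210), (60, 1010), (71, 1150), (66, 1060)]

def adjusted_mean(group, slope, anchor):
    """Mean assessment score after removing the part predicted by admission scores."""
    return mean(score - slope * (admit - anchor) for score, admit in group)

# Pool both groups to estimate how strongly admission scores track assessment
# scores (simple least-squares slope), then center on the pooled admission mean.
pooled = freshmen + seniors
admit_mean = mean(a for _, a in pooled)
score_mean = mean(s for s, _ in pooled)
slope = (sum((a - admit_mean) * (s - score_mean) for s, a in pooled)
         / sum((a - admit_mean) ** 2 for _, a in pooled))

raw_gap = mean(s for s, _ in seniors) - mean(s for s, _ in freshmen)
adj_gap = (adjusted_mean(seniors, slope, admit_mean)
           - adjusted_mean(freshmen, slope, admit_mean))

print(f"Raw senior-freshman gap:    {raw_gap:.1f} points")
print(f"Adjusted (value-added) gap: {adj_gap:.1f} points")
```

The adjustment only partly substitutes for following the same students over four years, which is the compromise Ms. Tweedell describes.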
Maybe colleges don't have to. Charles F. Blaich, director of the Wabash National Study of Liberal Arts Education, conducted a lengthy, statistically rigorous longitudinal study of 19 colleges' students and shared the findings. "There were a lot of things campuses already knew from much simpler studies," he says.
Use It or Lose It
Calls for colleges to go ahead and apply their student-assessment data are getting louder.
The Teagle Foundation, which began giving grants for assessment five years ago, suspected that colleges hadn't sufficiently mined their results. It has shifted the focus to "engaging evidence" in a round of grants to start this month, says Donna Heiland, Teagle's vice president. "We wanted to encourage people to use data they already had."
A particular obstacle with Nessie has been that institution-level information—on faculty-student interaction, for example—isn't necessarily useful, and sample sizes are too small to drill down to individual departments. For the first time this year, the Indiana University Center for Postsecondary Research, which runs the survey, began online polling of all freshmen and seniors at participating colleges.
Researchers are also more aggressively promoting good examples of colleges' using results. Last year the survey published a new series of dispatches, "Lessons From the Field." In February it introduced a database of 500 cases of Nessie use, searchable by categories such as retention and critical thinking. The Freshman Survey, run by the Higher Education Research Institute at the University of California at Los Angeles, started last year to break down colleges' data into themes, including career planning and diversity, to urge institutional researchers to share them with relevant units across the campus.
Group discussions of assessment results have become the norm at Westminster College, in Salt Lake City. This month the college will hold its third annual program-assessment meeting, a half day with administrators, professors, students, and trustees, to decide what to do with its data. In the spring, the group examines campuswide assessments, like the national surveys.
Where such collaborations occur, collecting and using data become part of the campus culture. Then, maybe, the provost will allocate resources expecting to see data at work, says Linda A. Suskie, a vice president of the Middle States Commission on Higher Education. "Boy, is that going to get everybody on the assessment bandwagon real quick."
And accreditors, like Middle States, are driving a harder bargain. "Regional and specialized accreditors have been pushing very, very hard to get institutions not just to collect information about student performance, but to use it," says George D. Kuh, project director at Niloa. In a time of tight budgets and flux, the concept of evidence-informed decisions is starting to pervade publications, conferences, and grant making.
To try to shape that movement, college leaders have formed the New Leadership Alliance for Student Learning and Accountability, which plans to announce on Friday that 71 presidents have already signed its pledge to collect, report, and use assessment data. Not only compliance but good stewardship should drive those practices, says David C. Paris, the group's executive director.
Paltry resources may prove a general challenge. According to Niloa, just a quarter of colleges charge more than one full-time employee with assessment. Last year about half of institutions said the recession wouldn't affect their assessment activities; a fifth foresaw budget cuts.
Assessment at colleges is like research and development in industry, says Mr. Kuh. To do it well—to follow through—demands significant expenditures. "If any manufacturer invested so little," he says, "they couldn't keep up with the competition."
@prof_truthteller -- one discipline, physics, has done a LOT of assessment of its students' learning, found it wanting, and did something about it. As far as I know, they weren't influenced by the right (plus, the work goes back several decades). For example, earlier this week there was a report about how MIT changed its physics instruction due to findings on assessment: http://tech.mit.edu/V130/N49/normandin.html .
More generally, here are the assessments they developed:
http://www.ncsu.edu/per/TestInfo.html . Here are two papers with 1,000+ scholarly citations on (i) students not learning the fundamentals in physics (as determined by an assessment) and (ii) how different teaching methods lead to better learning by students: http://se.cersp.com/yjzy/UploadFiles_5449/200607/20060705142003187.pdf and http://web.mit.edu/rsi/www/2005/misc/minipaper/papers/Hake.pdf .
Leaders at top universities are deeply involved in physics education research: http://www.laspau.harvard.edu/idia/mecesup/readings/Eric_Mazur/Mazur_52364.pdf , http://vodpod.com/watch/2777267-confessions-of-a-converted-lecturer-eric-mazur , and http://www.cwsei.ubc.ca/resources/files/Wieman-Change_Sept-Oct_2007.pdf . The author of the first two is a chaired professor at Harvard; the author of the last one is a Nobel Laureate and the current deputy science adviser to the President (which again suggests that this work is not right-wing inspired).
A great place to start one's investigation of this work is http://www.compadre.org/per/ .