Inside Higher Ed has an interesting article, The New Diagnostics:
Rio Salado uses more than two dozen metrics during that first week to predict how well that student stands to fare over the entire course, but some of the most effective are the most basic: Has the student logged into the course home page during that first week? Did she log in prior to the first day of class? Other predictive metrics, such as whether a student is taking other classes at the same time, whether she has been successful in previous courses, and whether she is retaking the course, are culled from the college’s student information system.
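To make the idea concrete, here is a rough sketch of what a first-week risk flag built from those kinds of signals might look like. This is purely illustrative: the feature names, weights, and thresholds below are invented, and Rio Salado's actual model (reportedly more than two dozen metrics) is not public.

```python
# Hypothetical first-week "at risk" scorer.
# All feature names and weights are invented for illustration;
# they are NOT Rio Salado's actual metrics or model.

def risk_score(student):
    score = 0
    if not student["logged_in_first_week"]:
        score += 3  # the article calls this one of the most effective basic signals
    if not student["logged_in_before_start"]:
        score += 2
    if student["retaking_course"]:
        score += 1
    if not student["passed_previous_courses"]:
        score += 2  # pulled from the student information system in the article
    return score

def risk_category(student):
    s = risk_score(student)
    if s >= 5:
        return "high"
    if s >= 2:
        return "middle"
    return "low"

engaged = {
    "logged_in_first_week": True,
    "logged_in_before_start": True,
    "retaking_course": False,
    "passed_previous_courses": True,
}
print(risk_category(engaged))  # prints "low": all good signals, score of 0
```

A real system would fit the weights statistically (e.g. logistic regression on past cohorts) rather than hand-tuning them, but even this toy version shows why early login behavior is such cheap, powerful data.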
So, they can tell which of my students are going to make it? I can see how that would be a very useful tool.
I am also proud of my alma mater,
Purdue University, which has run a similar predictive modeling program since 2006, and does keep students in the loop. At an "actionable analytics" symposium last month, John Campbell, the associate vice president of Purdue's advanced computing center, said the "at-risk" students generally either took that information as a motivational kick in the rear or were prompted to quickly drop the class, and were grateful in either case. A double-blind study conducted during the first two years of Purdue's program, called Signals, revealed that 67 percent of students who learned they were in the middle- or high-risk categories were able to improve their grades.
Go read the whole article. It is very good.