Graduate education, particularly the training of doctoral students, plays a crucial role in the progress of society. Around 1,500 of the country’s 4,500 or so universities award doctoral degrees. According to the Survey of Earned Doctorates, 55,185 students received doctorates in the United States in 2018.
Matching potential graduate students with graduate programs requires interplay among students, college professors as evaluators, and admissions committees. Many academics are involved in the admissions process for graduate students.
As a college professor, it’s my seasonal duty each fall to write letters of recommendation and rate students based on several criteria in order to help them gain acceptance to graduate schools. Students should ask several professors to evaluate them. Occasionally, I have to tell a student that I would not be able to write a strong recommendation, so it would be better not to ask me. We evaluators combine quasi-objective data (say, grades) and subjective impressions to generate a rating score. Despite their subjective nature, these evaluations are far from random, and college professors don’t have better ways of helping students and graduate programs find a good match. At the individual level, the rating and ranking of applications to graduate schools dramatically influence students’ lives. Admissions committees have a strong interest in ensuring they accept only mature, polite, reliable, and stable people into their programs, and my professional duty is to help them achieve this goal.
One company provides software as a service to many universities for admissions and applications evaluations. Its software uses six criteria to rate students:
- Knowledge in chosen field
- Motivation and perseverance toward goals
- Ability to work independently
- Ability to express thoughts in speech and writing
- Ability/potential for college teaching
- Ability to plan and conduct research
For each criterion, the evaluator picks among five options: exceptional (upper 5%), outstanding (next 15%), very good (next 15%), good (next 15%), and okay (next 50%).
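The five options partition candidates into percentile bands. As a minimal sketch (the function name and thresholds are illustrative, derived only from the percentages above), mapping a percentile rank to a rubric label might look like this:

```python
# Map a percentile rank (0 = bottom, 100 = top) to the rubric's five labels.
# Band boundaries follow the stated percentages:
# top 5%, next 15%, next 15%, next 15%, bottom 50%.
def rubric_label(percentile: float) -> str:
    if percentile >= 95:
        return "exceptional"
    if percentile >= 80:
        return "outstanding"
    if percentile >= 65:
        return "very good"
    if percentile >= 50:
        return "good"
    return "okay"

print(rubric_label(97))  # a student in the top 5% -> "exceptional"
print(rubric_label(60))  # -> "good"
```

Of course, the whole difficulty discussed below is that evaluators rarely have an actual percentile to feed into such a function.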
How do we generate the numbers and check the appropriate rubric? In principle, a micro-rationalist, bottom-up approach would work. Teachers could collect and store data from students over decades, and they might have a formal algorithm for calculating the percentages. I believe it is more likely that many of us apply top-down strategies. We ask ourselves: do I want to grant a set of grades that is all exceptional? Does the applicant have a clear weak point, for which I should check the third or maybe the fourth category? What if I score, say, four outstanding and two exceptional? The table below shows an example:
I thought this student should be supported, because she seemed to be very motivated, independent, and clever. However, I have never rated anybody as “exceptional” in all categories. First, nobody is exceptional in everything; second, such a set of scores would look suspiciously unfair and consciously biased, and so it might be counterproductive. In the case shown in the figure, the candidate had little experience in her chosen field, so I checked the second column. Probably the third or even the fourth column would have been more appropriate. I did not feel I was cheating, but I knew I was a little biased.
Good or bad, decision makers calculate the sum of the scores, analyze their distribution, read the recommendations, and adopt some strategies to reach decisions. (In some versions of the ranking game, weights are assigned to the different features to express their relative importance. University ranking systems famously assign relative percentage weights to each ranking factor.) While rating with numbers contains subjective elements, we don’t have better ways to approximate objectivity.
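The weighted version of this procedure can be sketched in a few lines. The weights below are purely hypothetical, not taken from any real admissions system; they only illustrate how relative importance changes a candidate’s total:

```python
# Hypothetical example: combine six criterion scores (1-5 scale) into a
# weighted total. The weights are illustrative and sum to 1.0.
scores = {
    "knowledge": 4,
    "motivation": 5,
    "independence": 5,
    "expression": 4,
    "teaching": 3,
    "research": 4,
}
weights = {
    "knowledge": 0.25,
    "motivation": 0.20,
    "independence": 0.15,
    "expression": 0.10,
    "teaching": 0.10,
    "research": 0.20,
}

# Weighted sum: each score contributes in proportion to its weight.
total = sum(weights[c] * scores[c] for c in scores)
print(round(total, 2))  # -> 4.25
```

With equal weights this candidate would average about 4.17, so the choice of weights alone can reorder two otherwise similar applicants.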
The notion of objective reality refers to anything that exists as it is independently of any conscious awareness or perceiver. Subjective reality is related to anything that depends upon some conscious awareness or some perceiver. Objectivity is associated with concepts like reality, truth, and reliability. Objectively ranking the tallest buildings in the world is relatively easy, since it is based on verifiable facts, and we have a result that everybody will accept.
If we accept that objectively ranking graduate students is an impossible task, what could and should we do? On the one hand, we could improve our ranking methodologies; on the other hand, we (all stakeholders in the game, from students to professors) should feel that our actions are as fair as possible.
Featured Image by Joshua Golde from Unsplash