So I signed a letter recently against the way that Poets & Writers Magazine ranks creative writing programs. As someone who's taught in creative writing programs for the past ten years, and who has recently directed a program (which, in case you were wondering about any potential sour grapes on my part, ranks among the top five PhD programs in the nation according to P&W), I think it's important to shed a little light on the pitfalls and problems of rankings when applied to writing programs, especially as application season starts to loom. I know many of you out there probably aren't interested in this and just want to know what the hell I've been doing with all that grant money besides spending it on bad apartments and wheels of cheese. To you, I apologize. Please come back in a few days.
Ready?
First, as an ex-program director, I'd like to apologize for NOT filling out the annual Poets & Writers survey. This was not why I was stripped of my title (the directorship at Utah is a revolving duty, and my time thankfully just revolved), and likely this will discount my following remarks on the grounds of administrative laziness (which, frankly, is what it initially was), but my decision was ultimately based on what I thought were good reasons. The first reason was that the survey was mostly based on three distinct sets of questions that all revolved around numbers. Roughly paraphrased, the questions boiled down to the following categories. First: How many students apply each year to your program, and for which degree and in which genres? Second: How many students do you accept in each genre and for each degree? Third: What is the fellowship amount awarded to each student?
These are good questions to ask, I think, because they give the potential applicant a sense of her competition, as well as a basis for understanding just what--monetarily--she is competing for. It's something any applicant would want to know. But sadly, I didn't have numbers accurate enough to report.
Why? you ask. Because every damn year these numbers change.
For instance, one year we'd have over 300 people apply, another year over 400, another year in the high 200s. (Lately, and I think you can guess why, our numbers have been increasing.) In terms of genre, the numbers are all over the map and show no annual consistency. Even the fellowship numbers change, as our MFAs are funded less regularly than PhDs, and occasionally with different packages. (Our MFA program at Utah has a Modular MFA that allows students interested in Book Arts, Environmental Humanities, and the American West Center to take graduate courses in these fields rather than in English, which means these students have different funding streams based on their interests.) We also have new fellowships constantly being added to the list, and as the budget crisis continues to tornado through the university system, it's not always clear just how many overall fellowships we'll have, nor how many applicant spots we can finally offer.
Essentially, our program--like many--is dynamic, and the numbers (useful to have, I admit) are annually unreliable.
Ah, you say. But why not just give the roughest estimate you can while still indicating that it's all in flux? Cave applicantor and all that?
Well, there's the laziness problem, which is significant. But more importantly, outside the funding numbers (which I think are hugely important), I didn't think the other numbers were useful ways to rank programs. Good questions to answer for applicants, but bad questions to use in ranking programs.
Graduate degrees in creative writing are weird beasts--weird even for the humanities, which are chockablock with looney-tune research fields. I sympathize with students trying to pick the right program, considering the variety and amount of information they'll have to process. Generally, they should look to their counterparts in literary studies at the MA or PhD level to see how a successful applicant approaches choosing a program. This is how it goes:
1. The applicant figures out what her discipline is, and likely knows what her specialty is going to be. She learns who the people publishing in this particular specialty are, where they teach, and what kind of graduate courses they offer.
2. The applicant starts to ask the directors, faculty members, alumni and enrolled students of her programs of interest about the program's pedagogy, social life, publishing and mentoring possibilities. She emails frequently. She risks, frankly, being a bit of a pest.
3. The applicant then makes a list of schools that specialize in her field, with the funding package numbers for each of them. She applies to a variety of these schools. She understands she will be choosing her school based on personal interest, financial opportunity and faculty-student mentoring possibilities.
Which is all a very dull way of saying: SHE MAKES HER OWN DAMN RANKING SYSTEM.
Leaving aside for the moment some of the obvious differences between literature and CW degree seekers (organizational skills and the ability to lie effectively being the two main traits that distinguish the successful lit applicant--can I tell you, seriously, how many students approach me about getting a PhD in poetry who confess they don't like to read poetry and have never taken a workshop before? Seriously?), the problem with the P&W list's fascination with numbers is that, ultimately, it treats all MFA and CW PhD programs as the same without allowing for formal and aesthetic specialties. This is NOT the expectation we would have with, say, other literary studies degrees at the graduate level. Certainly, there are ranking systems for graduate degree-conferring universities, including those in English literature (is Yale still #1? Go Bulldogs!), but people pursuing graduate degrees also know that these rankings don't capture the full picture. If you're a medievalist, it's nice to go to Yale, but you REALLY should go to Notre Dame.
The same goes for creative writing. If you are a formalist narrative poet, you would be wasting a reading fee applying to Brown. If you are an experimental fiction writer, Denver is likely your place.
In general, public ranking systems of degree programs exist for a variety of good and bad reasons but, increasingly, I think they largely remain in place to help settle internal college disputes. During departmental hiring debates, it's notable how often ranking numbers come up, and certainly a highly ranked program gets the lion's share of budgetary attention come crunch time in the College of Humanities. But in terms of their public value, ranking systems exist to assure future employers of the status of job candidates' particular degrees. Thus, what these ranking systems--whether for literature or for creative writing--implicitly measure is the degree-holder's marketability.
And therein lies the second problem for me about the P&W ranking system.
Because it's implicitly NOT being used as a method of evaluating the best place to become a writer and artist, but as a measure of particular creative writing degrees within the university and literary marketplace. This is a problem because--and let's all take a deep breath here and reach for a shot--as programs contract and teaching lines dry up, there is no university marketplace. And the literary marketplace doesn't much care about degrees.
I've been on many hiring committees, and we have never, NOT ONCE, looked askance at--or more favorably on--a candidate's application based on where she did her MFA. Hiring is about the quality of the publications, not the school. And no school, no matter how good or highly ranked, can ensure good publications.
I want to take a moment here to admit that my complaint goes beyond what the P&W list is trying to do and is more about the changing use-value of ranking lists in a university system on the verge of collapse. P&W isn't responsible for that, of course, and I do think the attempt to bring some kind of order to the chaos in which we are all working is a noble one. But I wonder, if numbers are important, whether we are looking at the right ones. What--and who--should we really be quantifying? In short, what DOES the P&W list really tell us?
Here's something that struck me about the list. Each year I noticed that the program I was directing kept ending up in the top 5 for the PhD degree, yet disappeared off the charts for the MFA.
Well, you say testily. It's because you never wrote the damn funding packages your MFA students receive into that survey. And while people know about Utah as a PhD program, they aren't interested in the MFA program.
OK, I say. That makes sense. But there's still a little problem in the ranking system.
YOU GET THE SAME INSTRUCTION AS AN MFA STUDENT AS YOU WOULD AS A PHD STUDENT.
Seriously. Same workshops. Same faculty. Same students in the classroom. Same reading series. Same books and paper requirements and mentorship and publishing opportunities. Same focus on studio time. Even some of the same funding packages.
Maybe that's the problem. Applicants want their PhDs to be PhDs and their MFAs to be MFAs. But I have a hard time wrapping my head around that. Really, if the instruction is so good at one level, why is it not valued at the other one? Why the big discrepancy?
Because the ranking system is primarily based on the numbers generated by applicants themselves. Basically, the more people that apply to a program, the higher that program is ranked in the P&W list.
Which means that if people coming out of the gate are primarily applying to MFA programs they've heard of before, those programs get consistently high numbers. It seems that students do know about Iowa and Michigan and Syracuse and USC and Houston (which the ranking numbers imply) and a host of other schools, maybe from the ads in P&W or their time at AWP. So the students go there, their reading lists expand, they get more professionalized, and now they know to apply to Denver and Utah and FSU for a PhD because other people have applied there. What these rankings track, therefore, is less what a student may want or need in a graduate program than what information that student already has available to her when she applies. It's more about program advertising--or lack thereof--than the program itself.
This kind of ranking model seems to me a little like rating McDonald's a great restaurant because over 6 billion other people were once served there.
Something else worries me about the P&W ranking system and what it tells us. The numbers for certain schools are extremely high (given the number of respondents to the P&W questionnaire): disproportionately higher, maybe, than what we might expect in a rational admissions system. (Aside: These numbers aren't UNWARRANTED. That is, the highest-ranking schools are all indeed excellent; the question is whether they are disproportionately represented.) These numbers might make sense if everyone applying were of equal ability and schools made no effort to distinguish based on ability: we just take what comes our way. But we don't. We have a system that admits people based on talent and demonstrated ability. Think about this: Does every person apply to MIT? No. Only students who are interested in the subjects MIT specializes in, and who recognize themselves to be of the caliber of student MIT might admit, will apply. This limits the number of applicants through appropriate self-selection. There's a spread for sure (some people do get lucky or are related to famous people, so why not?), and more people will apply to MIT than to Dumpsville College, sure, but will more people apply to MIT than to Caltech? Or to MIT than to Loyola?
Maybe it's because this is the arts, and we think that art schools are more subjective, so the kind of self-selection I'm talking about (which depends also on great GREs, a strong GPA, great letters of rec, not to mention fabulous critical writing samples) doesn't seem as necessary to weigh as the creative sample. We all want to believe we are great writers, so we all apply in great numbers to all the best programs, hoping for the best. But the fact is that there are better and worse students of creative writing, as there are better and worse students of any subject in any field. TO REALLY RANK THE MERIT OF A PROGRAM, YOU NEED TO RANK THE QUALITY OF THE ADMITTED STUDENTS, NOT THE ASPIRATIONS AND NUMBER OF ITS APPLICANTS. Which brings me to my last two points:
1. What the P&W ranking system accidentally reveals is that people applying for MFAs, unlike people in almost any other academic discipline, aren't appropriately self-selecting. This is kind of interesting and kind of worrisome at once. Rather than focusing on what their own strengths and weaknesses and interests really are, students are turning into admission lemmings and looking at the MFA system as little better than a lottery. And we're letting them do it.
2. The P&W list isn't responsible for the lack of appropriate self-selection, but I think it helps encourage this kind of McDonaldsy applicant behavior. It can't help but do this because the numbers it relies upon are so limited and so endlessly self-reflexive.
We don't have many ways of monitoring "excellence" at the university level--inside or outside of MFA programs--which may be why we fetishize lists of this sort. It suggests someone's done the heavy lifting to ensure that the product we are all clamoring for and serving up to each other in great steaming piles is worth the cost. But to me, the numbers that matter are largely financial: Does the program offer up a fellowship large enough for the student to live on for two or three years? These are numbers that absolutely have to be taken into account, as almost everything else about the degree--including the value of the teaching and the workshops and the writing time itself--can only be individually quantified.
So, as you can see, I have some issues with ranking systems. I understand the need for them, and I'm certainly sympathetic to the compilers of lists and the students who use them. With over 200 programs to choose from, demanding that an applicant research each and every one is a little, well, insane. P&W provided a quick and dirty approach to a system that has itself turned quick and dirty, as the popularity of the creative writing industry in the academy now means that we have two terminal arts degrees duking it out at the MLA every hiring season.
Perhaps now is a good time to remind everyone that a graduate degree in creative writing means, sadly, nothing. It won't give you easily marketable skills. It will take years away from specializing in something more lucrative, perhaps in something even more enjoyable. It doesn't ensure that you will get a teaching job in anything anywhere ever, nor that you will be published. It gives you time to write and the opportunity to learn how to do it. That's it. Whether you went to Iowa or Dumpsville, the degree gets you nothing more than this. WHATEVER STATUS IS INHERENT TO A PARTICULAR MFA DEGREE IS IMAGINARY, BECAUSE THE DEGREE ITSELF HAS NO POWER IN THE MARKETPLACE. Any ranking system, even when well-intentioned, merely tries to whitewash this reality of the post-MFA landscape by implicitly suggesting that there IS something quantifiably of value, something that can later be translated into capital power.
Maybe agents or editors or future employers will care that you got your MFA at this school and not that one. But in my limited experience, agents and editors and employers care about the writing. They care that you wrote a kick ass story or novel or poem; they don't give two shits if you came from a school that was once on the top of a list. (And if they do, maybe you should consider running.)
So this is what I suggest be the "ranking" system, and I propose it knowing full well that Utah won't come out on top. Considering that the only useful and truly quantifiable data for a program is the fellowship information, I propose that P&W simply list the programs, in alphabetical order, that offer money and what amounts they offer. That, to me, is the only series of numbers that means something in the marketplace, and it's a good starting place for students thinking about spinning the wheel of fortune with this degree.