The scores for Romance Writers of America’s (RWA) unpublished manuscript contest, the Golden Heart, have been sent out, and there’s already some discussion on loops about what the scores mean and whether the new system is working.
First, some background. The Golden Heart’s mission from RWA’s website is to:
…promote excellence in the romance genre by recognizing outstanding romance manuscripts
This is a coveted contest final in the Romance world. After finalists are announced, agents regularly send them automatic requests for full manuscripts. At the national conference, the winners are announced and their awards presented during the final gala night in an Oscars-like ceremony.
This is the first year a new scoring system has been used. In the past, the judge gave one overall score based on their overall impression, and there was criticism that this wasn’t tied to any specific criteria. This year, judges were asked to break down their score into the following categories:
The Story/Plot (1-10)
For a total score of 50.
On the loops I belong to, writers are asking what it means when they get a wide range. Some have said it’s as varied as 11 (as a total score!) to 48 for the same manuscript, and others are reporting similarly wide ranges. I’ve also seen some writers say they received a 1 or a 5 for their Writing category. In one instance where a writer got a 5, I had actually beta read that manuscript, and the writing (which should be judged on craft, i.e. grammar, command of language, etc.) was not a 5, IMO.
It’s a true adage that, in contest feedback, large swings in opinion can mean you have a strong voice, and so you’re alienating some folks who just hate your voice.
The Golden Heart this year has taken care of some of the unfairness in these large swings by dropping the lowest AND the highest score, but I’m wondering if there’s more to it than this. I judged entries this year, and here are my thoughts:
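To see why dropping the high and low scores helps, here’s a little sketch (my own illustration, not anything RWA actually runs) of how one outlier judge affects a plain average versus a trimmed one:

```python
def trimmed_total(scores):
    """Average the judges' total scores after dropping the single
    lowest and single highest score (requires at least 3 judges)."""
    if len(scores) < 3:
        raise ValueError("need at least 3 judges to drop high and low")
    trimmed = sorted(scores)[1:-1]  # drop lowest and highest
    return sum(trimmed) / len(trimmed)

# Five hypothetical judges score the same 50-point manuscript;
# one judge just hated the voice.
scores = [46, 44, 11, 47, 45]
print(trimmed_total(scores))      # 45.0 -- the outlier 11 is discarded
print(sum(scores) / len(scores))  # 38.6 -- the plain average it replaces
```

With the plain average, the one 11 could easily knock an entry out of the finals; with the trimmed average, it simply disappears.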
There was no grading scale given to help orient the judge on what a 1, as opposed to a 5 or a 10, means (other than that 1 was on the low end and 10 was a perfect score). Lacking this, I made one up for myself by adapting the grading scales used in local chapter contests. So when I judged the entries, I extrapolated the 1-5 scale used in local contests out to:
9-10 Ready to Publish, no changes needed.
7-8 Almost there.
5-6 Several minor problems.
3-4 This area could be strengthened with some significant rework.
1-2 Major problems in this area.
And for the Romance category, I used this:
17-20 Ready to Publish, no changes needed.
13-16 Almost there.
9-12 Several minor problems.
5-8 This area could be strengthened with some significant rework.
1-4 Major problems in this area.
And so I gave my scores accordingly. But do you see the problem here? I did this on my own. Who knows whether this is what the coordinators had in mind? Who knows what other judges used to assign their numbers?
In many local contests that use a scale, they spell out what the judge should look for in each category. Perhaps a way to improve this would be to give some kind of scale guideline for each category, in order to take this part out of the subjective equation. Because yes, every judge’s opinion is subjective, but how to use the numbering system shouldn’t be subjective.
Also, some folks had high scores in all categories except Romance, with their Romance numbers being 7s and 8s, consistent with what they were getting in the 1-10 categories. It makes one wonder whether the judge realized the scale went up to 20.
I also had an interesting phenomenon happen. There was one entry I judged that I thought was so great, I gave it a perfect score (the only one I gave). The writing was great: sharp prose, sizzling sexual tension (I was literally squirming), and the synopsis was well done in that the plot was crystal clear and plausible, and the characters’ goals and motivations were all clear and made sense (it was the only one that did). I was surprised it didn’t final, and then I saw that it had been published, so it must’ve been disqualified.

Anyway, I bought it so I could read it, and the story completely did NOT hold up. The prose was still technically flawless, but man, the black moment/final climax totally hinged on the character doing something completely out of character as it was written (though the synopsis made it sound totally in character), and the characters never really got fleshed out past cardboard cutouts serving the plot. Just goes to show that reading only the first 50 pages and a synopsis truly doesn’t help pick the best. Jami Gold touched on this yesterday in her post “Why Is Storytelling Ability So Important?”, based on judging a recent contest.
So here’s some thoughts I have on how the contest could be improved for the future:
- Give a scale on what each number means for each category
- If Romance will still count double, perhaps double the score after the scores are turned in. I’m sorry, but there are some people who just don’t pay attention. How many folks missed out on finaling because they were unlucky enough to get more than one judge who didn’t look closely enough to see that the scores went up to 20 in this one category only?
- Unfortunately, I think asking judges to read a full will be too much work, so the problem I found, where the weaknesses only exploded once the full was read, probably can’t be addressed. RWA had a hard enough time getting enough judges this past year.
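The “double it after turn-in” suggestion is simple arithmetic. Here’s a quick sketch (hypothetical category names and scores, just to show the idea): every category is judged on the same 1-10 scale, and the coordinators double the Romance score afterward, so it still counts twice without judges ever needing to notice a 20-point scale.

```python
def weighted_total(scores, double=("Romance",)):
    """Sum the category scores, doubling any category named in `double`.
    Judges only ever see a uniform 1-10 scale."""
    return sum(v * 2 if cat in double else v for cat, v in scores.items())

# A judge scores every category 1-10; Romance is doubled after turn-in.
scores = {"Story/Plot": 8, "Writing": 9, "Romance": 7}
print(weighted_total(scores))  # the 7 becomes 14, for a total of 31
```

The judge who’d have mistakenly given a 7 out of 20 now gives a 7 out of 10, and the doubling happens where nobody can get it wrong.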
What do you think? I see this post as a place to discuss the new scoring system and whether or not it worked. Is it an improvement on the old system? Do you have suggestions on how it could be improved for next year?