Hacking the Bagpiping Judge Part IV: How To Be a Bagpiping Judge—The System Applied
A new system of selecting bagpipe and drumming adjudicators would take into account a candidate’s résumé of real world experience and give it the relevance it deserves. The question “How to be a piping judge?” is answered by deriving a sum of scores from a list of an individual’s top five achievements using the following formula:
(prize weight) X (experience ratio) = achievement score
Each individual would score each of their top five achievements using this formula and simply add the scores together. The best result is the lowest possible sum of those five scores. Those with the lowest scores would sit at the top of the list of those who qualify to be a judge. Anyone can submit their résumé of achievements to be considered for the judges panel under such a system. The method ensures that only those with significant achievements would garner a low score and thus be placed at the top of the list. No exam, no measure of any kind other than actually having “been in the fight.”
As detailed in Part III, the first part of the formula, the prize weight, is a straightforward calculation: rate the grade or class of competition at which the prize was won and multiply that rating by the placing taken at the contest. The prize weight calculation elevates some achievements over others by grading the contests themselves. But what about general years of experience?
The experience ratio is figured using similar thinking. An individual with many years of activity has had more time to accumulate successes than a candidate who has packed significant achievements into a shorter span. Each candidate’s total achievement score is therefore balanced against their overall experience and real world activity. The more years of experience one has, the more likely their top five achievements will reach a low total score compared with someone who may have been very successful, but who has not yet had as much time to rack up the achievements needed for a truly low overall score. Just as in your professional work life, years “on the job” count for something and are rewarded, even if you never worked for a Fortune 500 company or received professional accolades of some sort.
experience ratio = years since achievement / years of actual experience
The ratio balances an individual’s overall years of activity with the relative distance from the time of an achievement, thereby indexing the prize weight.
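The whole calculation can be expressed in a few lines of code. Here is a minimal sketch in Python; the function and parameter names are my own labels, and the contest class exponents are assumed to come from the table described in Part III:

```python
def prize_weight(placing, class_exponent):
    """(prize placing) X 10^(contest class exponent).

    The exponent comes from the contest class table in Part III,
    e.g. 1 for a Grade 1 win at the Worlds in a leadership role.
    """
    return placing * 10 ** class_exponent

def achievement_score(placing, class_exponent, years_since, years_experience):
    """(prize weight) X (experience ratio) = achievement score."""
    experience_ratio = years_since / years_experience
    return prize_weight(placing, class_exponent) * experience_ratio

def total_score(achievements, years_experience):
    """Sum a candidate's top five achievement scores; the lowest total wins.

    `achievements` is a list of (placing, class_exponent, years_since) tuples.
    """
    top_five = achievements[:5]
    return sum(
        achievement_score(placing, exponent, years_since, years_experience)
        for placing, exponent, years_since in top_five
    )
```

A candidate (or anyone testing the system) could score a résumé by listing each prize as a (placing, exponent, years-since) tuple and calling `total_score`.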
In Part II, we set up a hypothetical situation where Jack Lee relocated to Boston, Massachusetts. If he were to submit his “top five” achievements in pipe bands, SFU’s Grade 1 win at the 2008 Worlds would likely be among them. The prize weight for such an achievement would be calculated as follows:
1st Prize Grade 1 Worlds: prize weight = 1 X 10^1 = 1 X 10 = 10
If SFU were to have won at the 2008 Perth Highland Games
1st Prize Grade 1 Perth: prize weight = 1 X 10^2 = 1 X 100 = 100
Thus we classify certain contests in a way that recognizes the significance of the achievement. There is some subjectivity here, but I think we can all agree that certain contests are a step above the rest. A prize at the Northern Meeting at Inverness (a quintessential contest) does not carry the same weight as a prize at the Capital District Scottish Games in Altamont, NY. Likewise, a leadership role in a pipe band that achieves significant prizes is not the same as being a rank piper who plays in the same band. In our Part II hypothetical, we described another piper who had also relocated. His achievements did not so clearly equal Lee’s, but what if he had been a piper in SFU as well? He too could list SFU’s 2008 win on his résumé, and his contest class exponent would also be a “1” per our contest class exponent table, but it would be increased by 1 since he did not act in a leadership role with the band. His exponent would therefore be a “2,” and his prize weight would calculate thusly:
1st Prize Grade 1 Worlds: prize weight = 1 X 10^2 = 1 X 100 = 100
Compared side by side, these two pipers (Jack Lee and his compatriot) score very differently for the same achievement. The formula rightly elevates Lee above the other player who still garners significance for his achievement with a relatively low score, but does not match Lee’s role as Pipe Sergeant in the same achievement.
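The leadership distinction reduces to a one-digit difference in the exponent. A quick sketch of the two calculations side by side (the “+1 for no leadership role” bump follows the hypothetical above):

```python
# Same prize (1st, Grade 1, the Worlds), different roles in the band.
lee_weight = 1 * 10 ** 1         # Pipe Sergeant: class exponent stays at 1
rank_piper_weight = 1 * 10 ** 2  # rank piper: exponent bumped to 2
```

An order of magnitude separates the two prize weights before experience is even considered.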
How would these achievements score through the rest of the calculation? If, for the sake of argument, we assume 30 years of “actual experience” for Jack Lee, the single achievement of a win at the 2008 Worlds, two years removed, would score thus
10 X (2/30) = 0.67
SFU’s 2009 win would score
10 X (1/30) = 0.33
While SFU’s 1999 Worlds win would score
10 X (11/30) = 3.67
Added together, these three scores are already so low (4.67) that it becomes obvious Jack Lee’s résumé alone would place him at the top of the list for pipe band judges on our new judges panel. No exam, no arbitrary requirements. We can also see how time and distance influence the score: SFU’s more recent wins carry greater weight under the system than a similar win 11 years ago. More recent success becomes more relevant to judge selection given his overall years of activity. Similarly, if Lee had half the overall experience, his score for the same 2008 achievement would change.
10 X (2/15) = 1.33
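The arithmetic above is easy to check in a few lines. A sketch, using the hypothetical 30- and 15-year experience figures from this example:

```python
# Jack Lee's three SFU Worlds wins, assuming 30 years of actual experience.
years_experience = 30
years_since_wins = [2, 1, 11]  # 2008, 2009, and 1999 Worlds wins, scored in 2010

# Each win has prize weight 10, so score = 10 X (years since / years of experience).
scores = [10 * (y / years_experience) for y in years_since_wins]
total = sum(scores)       # roughly 4.67, already near the top of the list

halved = 10 * (2 / 15)    # the same 2008 win with only 15 years' experience
```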
Lee’s many more years of experience are rewarded and incorporated into the calculation. A candidate with fewer years of total experience would not score as low for the very same achievement. As stated, anyone can score their own résumé using this system, and the system is easily tested using hypothetical pipers with a mix of imaginary achievements.
To be blunt, the new system (let’s dub it the adjudicator index, or AI) assures that only those who have “walked the walk” score at the top of the judge selection list. To be considered, one would only need to submit a résumé of achievements. Those who have not achieved the same level of success can still be scored, but may not rank as high on the list just yet. Like your professional work résumé, though, your piping résumé would be fluid and change over time. The AI formula has the benefit of re-scoring and measuring individuals over time, taking into account the fluid nature of experience as candidates document more recent success. Individuals who do not rank as high given their scored achievements have the opportunity to increase their ranking by “keeping at it.” The system thus rewards improvement over time as individual judge candidates learn, train, and otherwise develop as players and/or band leaders, and it offers a way to continually measure the quality of piping and drumming adjudicators by measuring the quality of their real world accomplishments, something the current EUSPBA system does not do at all. The AI system is also self-regulating, leaving it to individuals to keep their AI score at the level required to remain on the panel by staying active and working for personal improvement. In other words, the same achievements that put a piper near the top of the list would, over time, not score as well as they recede further into that piper’s personal experience.
What about piobaireachd versus light music, you say? What about pipe bands versus solo piping? The adjudicators who sit on the EUSPBA judges panel now are categorized to judge the various disciplines we find in our competitive sphere, some as a result of the current exam, some not. What has happened, however, is that the majority of the panel can be found judging just about everything there is to judge piping-wise, regardless of the quality of their background. Under the AI system, a candidate’s overall score would be categorized based on the résumé of “top five” achievements that are scored. To qualify to judge a specific discipline, a candidate would simply submit a résumé in that discipline and calculate an achievement score accordingly. For example, a candidate would need to list and score five piobaireachd prizes, five light music prizes, and five pipe band achievements in order to qualify to judge all of those disciplines. It gets trickier with categories such as ensemble and drum judging; more fully fleshed out criteria would be needed that clearly reflect success in those disciplines. But again, the same formula is used, and the lowest calculated scores sit at the top of the list in each discipline. Could a similar list of 15 achievements across disciplines from a random selection of the current EUSPBA judges panel score at the top of a new AI list? Predictably, the list of qualified judges for each discipline would be vastly different, offering more diversity in experience and a broader musical point of view.
Under the AI system of selecting judges, anyone can be a piping or drumming judge. Anyone can submit a résumé of achievements and experience for scoring. Let’s be clear: the actual numbers in the scores are unimportant. What matters is their “lowness” in setting up a hierarchy. Benchmarking the list is a simple matter of setting a required number of adjudicators in proportion to the number of competitors and/or competitions in a given region. The list terminates at the number of judges required to cover the needs of competitions and competitors at any given time, and the bottom of that list becomes the AI score needed to fall within it. It is conceivable that that bottom would change year to year, being higher some years and lower in others. Someone who does not make the list cut-off now could be scored again at a later date and rank higher. Likewise, an individual could rank high on the list at one point but fall lower, perhaps even below the cut-off as it changes, if their real world activity does not measure up. The look of the judges panel could effectively shift year to year as different individuals appear, disappear, and/or reappear. In the end, though, it is assured that the most qualified individuals are always sitting on the panel. The competitive community benefits from the varied real world experience of these pipers and drummers. The judges on the field are not limited in experience, nor are they pigeonholed into a narrow view of what constitutes a qualified judge. Instead, they are encouraged to remain active participants, musically and competitively, building the experience and expertise required to perform competently as bagpiping adjudicators.
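The benchmarking step described above amounts to a sort and a cut. A minimal sketch, with invented candidate names and scores purely for illustration:

```python
# Hypothetical (candidate, total AI score) pairs; lower scores rank higher.
candidates = [
    ("Piper A", 4.67),
    ("Piper B", 12.3),
    ("Piper C", 7.5),
    ("Piper D", 25.0),
]
panel_size = 3  # assumed number of judges the region's contests require

ranked = sorted(candidates, key=lambda c: c[1])  # ascending: best first
panel = ranked[:panel_size]                      # the judges panel this year
cutoff = panel[-1][1]                            # score needed to make the panel
```

Re-running the same sort in a later year, with updated scores, is all it takes for individuals to appear, disappear, or reappear on the panel as the cut-off moves.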
“Hacking the Bagpiping Judge: Prologue”
“Part I: Why Can’t You Be a Piping Judge?”
“Part II: Anyone Can Be a Piping Judge”