Interested in following the debate on "composition-as-research", I regretted that I could not be in London for a panel discussion on this topic a few weeks ago. Luckily, composer Scott McLaughlin did go, and he kindly agreed to report on it. Here are his thoughts.
Scott McLaughlin
Report on Practice-as-Research
discussion at City University London
I’m a composer
and academic at the University of Leeds (UK). I went down to London for the
research forum on Nov. 25th organised by Ian Pace at City University called
'Can Composition and Performance be Research? Critical Perspectives'. A video of
the event is here.
The forum
discussion was planned as a response to John Croft’s article 'Composition is
Not Research' (Tempo, 69/272, April 2015, pp.6–11), and also as a
prelude to the forthcoming edition of Tempo which will include response
articles by Ian Pace and Camden Reeves, as well as a right-of-reply article by
Croft himself. In this research forum, each of the six panelists gave a short response to Croft’s article, followed by the panelists responding to each other, then opening up to audience questions. The panel, listed below, was moderated by Alexander Lingas:
- Christopher Fox (Professor of Composition at Brunel
University and editor of Tempo)
- Miguel Mera (composer and Head of the Department of
Music at City University)
- Annie Yim (pianist and DMA student at City University)
- Christine Dysers (PhD student in Music at City
University)
- Camden Reeves (composer and Head of Music, University
of Manchester)
- Ian Pace (pianist and Lecturer in Music at City
University)
Panel Contributions:
Panelist 1: Christopher Fox’s short talk emphasised the impact of
Croft’s article as re-igniting this long-standing debate across the summer of
2015. Fox’s main point was to raise two important questions that he felt are
central to this debate: (A) what do we mean as academics (in practice areas)
when we say 'I am doing research’? and (B) what are the practical consequences
of research, and what is the impact of losing this status? Research generates money, is an indicator of esteem, and attracts students. If we stop calling composition (and performance) research then there will be consequences for composers who wish to study for PhDs. Fox’s parting point was that if music is a discourse, then why should composers write words about it? They should just compose. Unfortunately there was no time for him to nuance or discuss this further, but I would argue in response that music can be a discourse, yet this discourse (at least, that which is received by the listener) is not necessarily connected (or connectable) to any research that went into its composition.
I agree with Fox
that these two questions are central. For this debate to move forward, I think
a single session of discussion devoted to just the first — what do we mean as academics (in practice areas) when we say 'I am doing research'? — would be time very well spent.
Panelist 2: Miguel Mera framed the discussion in terms of a
disciplinary anxiety about legitimacy. He reframed Fox’s question as ‘why do we
find it so difficult to judge the contributions to knowledge made by
composition and performance on their own terms?’. He noted the definition of
research given by the REF as being very open, 'a process of investigation
effectively shared’, and that the REF does not inscribe any scientistic ideas
of research (such as the OED’s requirement that research ‘establishes facts’)
in its requirements. Mera queried what the phrase 'effectively shared' might mean, but suggested that as a discipline it is our responsibility to define what we consider to be valuable in our practice, and in what ways this may or may not be research. Mera began with Croft’s idea that composition is often not about pre-formed ideas but rather about creating striking responses to musical problems. He agreed that composition research shouldn’t simply report the findings of research questions, and called for playfulness in research, with a possible ideal of compositional research being exploratory and facilitating serendipitous discoveries. He tentatively agreed with composers who think music should not need words to explain itself, but felt that in academia we had a duty to explain our work, though not necessarily in words (see ‘Exegesis’ below): I agree on this point in particular. Summarising his thoughts, Mera emphasised the need for practitioners to make a case for when we ARE doing research, and to highlight our contribution to knowledge.
Panelist 3: Annie Yim, a recently-completed DMA performance student, provided the discussion with a useful shift in perspective, both in terms of discipline and researcher context. Yim’s presentation focused on her experience as a student and the frustrating lack of boundaries and definition between her joint roles as practitioner and researcher. She echoed Mera’s point about overly rigid approaches to practice-research by noting that PaR requires a curiosity that the existing framework (postgraduate study, I presume) is not equipped to handle. Yim also made several points about ‘training’ that I regret I was not able to follow up with questions: I’m interested to know whether she considers the DMA to be professional training for practice, and whether she sees that as antithetical to the researcher role.
Panelist 4: Christine Dysers was not able to make the event in
person, but she had prepared a statement that was read out by Sam MacKay. As with Yim, Dysers, a musicologist, also provided a welcome new context. She found Croft’s definition of research to be too narrow and ‘bureaucratic’, and she echoed Mera’s call for an open approach to practice research, describing composition as a reflexive and non-linear process in which the composer is keen to reflect on findings and communicate them. Dysers’ statement also made some
references to practice research in terms of ‘scientific discovery’, which I
think is a problematic approach to thinking about most research in
composition/performance (see 'Science' below), but I was not able to question
her about this.
Panelist 5: Camden Reeves talked of a sinister attack on
composition where some forms of composing are segregated as not being
research-worthy: an acute example of Mera’s ‘anxiety of legitimacy’ mentioned
above. Reeves compared composition-as-research to the Athenian democracy,
which, despite good intentions, ended up marginalising its people by
successively changing the criteria for being a citizen. Reeves dismissed Croft’s entire question as ‘goofy’ and as mattering only to those in a University — I’m not sure where Reeves is coming from on this point, since the University is the only context in which discussion of ‘research' is relevant — and echoed others in considering Croft’s definition of research too narrow. Reeves also took issue with the attempt by the humanities to mirror STEM research in its definitions and models, arguing that the scientific method is not applicable in the humanities — a point I agree with. Reeves’ closing point was that we, as a discipline, need to decide how to measure composition, but not by calling it research.
Panelist 6: Ian Pace called for more careful distinctions to be
made between the possible relationships of practice and research (practice-as,
practice-led etc.), and also to open the debate to more perspectives, both from other disciplines (theatre, dance, etc., through their extensive engagement with PaR) and from other non-practice perspectives within music. Pace presented some analysis of how much practice-as-research is happening within UK Music Depts: unfortunately the numbers passed by too quickly for me to take appropriate notes, but he promised to make a blog post of this analysis soon. [Update: the link to Ian's blog with the numbers.] Echoing Reeves, Pace identified some worrying trends in PaR where certain types of practice are considered more research-worthy, noting that it appears 'composition mostly IS research if it involves electronics or [compositional?] systems, and performance mostly IS research if it involves extended techniques.’ Pace described how in his own specialism (notated music) there are choices, therefore interpretation, and therefore research is possible. He asked for a critical approach to research and investment in long-form critical research: being open to choices, critically interrogating these choices, and communicating them as research; though he also accepted that communication need not necessarily be through text. Pace warned of the inherent danger of textual exegesis as allowing (or even encouraging) assessors to avoid engaging with the work itself, but he took the pragmatic view that textual documentation of practice-research as standard is ‘probably inevitable’.
Post-Panel:
In the
subsequent panel responses (to each other) and audience questions, there was
some useful clarification of points and positions, but often it was difficult
to maintain a thread or argument, with many points going by unexamined and
unconnected. While this forum was very useful in demonstrating some consensus on points in response to Croft (mostly), the format was problematic: there were simply too many people on the panel for the time allowed. I assume it was constructed this way to ensure a wide breadth of perspectives was included, and in that it was successful, but two hours wasn’t enough to even begin to unravel all the points or to provide critical perspective. If we are to continue having these discussions then I firmly believe we need to break the problem down into topics and tackle them one at a time, as far as that is possible. Of course all of these topics interrelate and will influence each other, so separating them will always be artificial, but it seems to me the only way through a discourse dominated by uncertainty over definitions and anxiety about change is to try and create SOME anchors of consensus along the way. As you can see from above, certain topics came up again and again in this discussion; I address a couple of these below.
‘Quality':
The conflation of musical quality and research quality, the idea that good music is the same as good research, seems to be at the root of many issues composers have with considering their work as ‘research’. The conflation is revealed in
comments that I’ve heard at this event and others like it. As an example, at
the 'RMA Practice as Research Symposium' in Manchester in June I heard some
colleagues express disbelief along the lines that composer X, 'who is an
excellent composer', did not get a research grant to write a certain piece. The
disbelief appeared to rest on the assumption that a good composer must
automatically be a good researcher, which to me is very problematic. This issue
did not escape the Main Panel D report, which noted that 'the sector still has
difficulty distinguishing excellent professional practice from practice with a
clear research dimension’ (REF2014, p.100 [update: see here for the REF report]). In the same vein, towards the end
of this research forum at City I asked Camden Reeves to expand upon something
he’d said about music being judged on its own merits (presumably, as opposed to
being judged on what’s written about it). Unfortunately, I didn’t frame the
question with much care, and Reeves' somewhat indignant response shot the point
down. In hindsight, what I wanted him to unpack was to what extent it is possible to judge music on its own merits in the context of research. Reeves claimed ‘on its own merit’ was self-explanatory and didn’t need expansion; I didn’t think this was such a ‘given’, because any piece of music is too open to different readings to be judged so simply and holistically. A piece can be simultaneously innovative on one level and derivative on another; it can be highly original in its development of one technique while using another without any critical reflection or apparent knowledge of others’ advances. In artistic terms this is all fine, that’s just how composition works, but as research we need to be able to point to where and how the originality and rigour are happening.
This point did come back around in another guise 10 minutes later, when Reeves argued that we should change the conversation away from 'what is research' to assessing 'who is producing quality', by which he meant quality ‘work’. Another audience member and I queried how this would be assessed, but the conversation had moved on and the point withered. To me, this is another example of the conflation of artistic and research worthiness via the universal descriptor of ‘quality’. If we’re simply judging what music is ‘best’ then the question of research is meaningless (as I think Reeves believes it is), but I don't think research is commensurate with artistic quality; there is a definite difference in what the two measures are trying to gauge. The REF2014 Panel Criteria and Working Methods points to ‘originality’, ’significance’ and ‘rigour’ as its criteria for assessing research. While these can be applied to artistic quality, I struggle to imagine artistic quality being measured solely on these: that said, I struggle to imagine any sort of even partially-objective measure of artistic quality (answers on a postcard please…). Ian Pace subsequently pointed out that funding based on artistic merit is what we have the Arts Council for, and I worry that defining composition and performance as ‘research equivalent’ will put us on the short path to being ’not research’. This is especially pertinent in a context where every other performance discipline appears to have vigorously embraced the idea of Practice as Research: where does this leave a musical practice without research?
This issue
brings us right back to Christopher Fox’s first question: what do we as practitioners mean by 'research’? I sincerely believe that every composer and
performer is automatically doing research in what they do, but that within the
academic sphere we have (as Mera says above) a duty to explain and communicate.
This is also strongly tied to the difference between professional and academic
contexts. I don’t expect a composer who writes a piece for a concert to explain
what they’re doing unless they want to, but in an academic context I think it’s
the only way to separate the research from the piece. Because I don’t think the
research is the piece. My answer to Fox’s question is that the research takes
place in the process of writing the piece. It cannot happen apart from the
piece and is wrapped up completely in the act of composing, but the piece is
not the research. Research is (as Croft would agree I think) the thinking and
doing of creating the piece. It is the response of praxis to issues raised by
the unfolding of that praxis. But for this process to be meaningful to others outside the artist’s head, it also needs to be 'effectively shared’ (see ‘Exegesis’ below).
To echo a point
of Pace’s above, when I compose I make choices, and those choices embody the
research. Sometimes those choices require investigation before they can be
made, and this can take a dizzying array of forms, all equally valid (this might be ‘book’ research, but more likely it will be material and performative research that may be mediated through other persons — e.g. consulting with musicians — and may be difficult to document and/or unpack). Part of our problem in this debate is the legitimisation-angst created by calling for these forms of research to be considered valid in the face of poorly-considered comparisons with research models, such as those of STEM and musicology, which are not appropriate in most cases.
So what is good research in composing and performing? This is something that we as a discipline need to work out.
Exegesis:
A general
sticking-point in this debate is whether, or to what degree, practitioners
should use text in support of their practice submissions. Generally, the panel
seemed to agree that 300-word statements are the worst of all possibilities, as they (a) don’t allow enough depth of engagement with the research, and (b) possibly increase the attractiveness of ‘gimmicky’ projects (an anxiety clearly present in Croft’s article). Mera also noted that these 300-word statements were not a REF requirement, though I get the sense that many universities insisted on them. The Main Panel D report from REF2014 noted positives and negatives in this respect:
‘[often] presentation of practice
needed no more than a well-turned 300 word statement to point up the research
inquiry and its findings, since the concerns outlined were then amply apparent
within the practice itself’ (REF2014, p.99)
'300 word statements too often
displayed a misunderstanding of what was being asked for and provided evidence
of impact from the research, or a descriptive account akin to a programme note,
rather than making the case for practice as research’ (REF2014, p.100)
Some panelists
were explicitly against text support, preferring the work be judged on its own
terms, while some panelists explicitly called for some level of
exegesis. Reeves argued against exegesis because he felt it would unfairly advantage composers who were good at writing. I’ve heard others put this argument more cynically, that it would advantage those more able to write in whatever academic-speak is fashionable, but this rapidly becomes more conspiracy theory than argument. I don’t find Reeves' point to be persuasive; it seems a particularly hollow form of special pleading to argue that academics (of all people) don’t need to explain and contextualise their thoughts on a topic. Surely objective distance and the ability to analyse and explain complex ideas are exactly what academics are for.
It is clear that
the REF2014 guidelines already assume that artefacts alone cannot
always speak to their research concerns. Personally, I have problems
with the idea that the research value of the work is accessible in the
artefact itself without at least ’some’ level of help. I think words
are the most effective tool to point to the research, but equally I accept that there may be useful non-textual approaches too. I would dearly love to see good examples of this; I’m sure they’re out there, so please send them my way if you’re aware of any.
Conclusion:
From the discussions that I’ve observed, we appear to be reaching a consensus that composition neither is nor isn’t research, as both of these positions involve throwing a lot of babies out with the bathwater. The fruitful ground appears to be in the middle of the spectrum, where we should identify how composition can be good research, and when it is not. I think the next question to discuss is really what we think good research is. Only then can we answer the question of whether and how this is evidenced.