(Part 1 of 2)
For the last year or so I have participated in the RSA Cultural Evidence Champions Network. I was particularly interested in joining off the back of work I had done evaluating a theatre-in-libraries project, which you can read more about here, if you like (or here). If you aren’t particularly interested in education, I think the broader arguments about evaluation in the cultural sector will still be of interest. What? You don’t like evaluation either? I am shocked! Well, not that shocked.
Having never really worked around cultural education/arts education before, I dove into the literature quite enthusiastically, only to find that the evidence base for arts education is pretty limited: see the Education Endowment Foundation review.
Naturally, the classic “art for art’s sake” argument and any kind of “evidence-based” argument find themselves at odds here, as usual. Are we rushing to “evidence” primarily as “defensive instrumentalism”? A grudgingly tolerated, necessary evil for the modern practitioner? Or a genuinely insightful and useful process? Probably some murky combination of all the above.
“If you see a show in the forest but nobody hands you a questionnaire afterwards, did it really happen?”
The RSA and the EEF teamed up to select five arts education projects that would be expected to have some kind of beneficial educational impact on the kids involved. These would be evaluated by a crack team of academics and policy wonks, to help address some of the gaps in the literature and presumably inform future funding decisions for other education-oriented or education-adjacent work. At the same time, an Evidence Champions Network would be established, reaching some 100+ arts managers, practitioners and academics to further spread the gospel of using evidence, share good evaluation practices and exchange ideas among a bunch of like-minded people.
This blog post from Mark Londesborough (of the RSA project) really helps put the divide in arts education into perspective, and many of the tensions here will feel quite familiar if you’ve followed any other “evidence-based vs otherwise” debates, in the arts or elsewhere. (Check out the comments as well.)
The format “Schrödinger’s *BLANK*” is sometimes invoked to label something that supposedly has two contradictory characteristics at once. For example, “Schrödinger’s Immigrant”: too lazy, lounging around on welfare, while simultaneously, rather industriously, managing to steal your job.
In the same spirit, the common gripes I hear around evaluation are that it is somehow BOTH too expensive, too intrusive and too instrumental, yet ALSO done on the cheap, only surface-level and of limited influence.
Hence “Schrödinger’s evaluation”: you spend money you don’t have, to demonstrate what a great job you’re doing, with measures you don’t believe in; and at the end of it, you get less funding anyway.
Who evaluates the evaluators?
It’s not a stretch to say that evaluation in service of ‘evidence-based policy’ has increased over recent decades, across all areas of public policy. Add the more recent flavour of austerity and you are left facing demands for greater accountability whilst simultaneously competing for less public funding. The sheer availability of data alone seems, somewhat inevitably, to attract more analysis. (For instance, see the ‘Quantified Self’.)
I am just about the right age to have launched out into the professional world as austerity became “just the way things are now”. (2008! A bad year to graduate!) Worldwide recession, Tories in, austerity in, wages down, stagnant or disappearing. Philanthropy gets squeezed: less funding, higher demand, and what remains is likely to be redirected away from perceived luxuries like cultural organisations.
I’m not trying to get into a debate about whose fault this is, or whether younger generations are any more or less politically active than older ones. For every Greta Thunberg, Alexandria Ocasio-Cortez and Malala Yousafzai, there is a pretty large chunk of 18-24 year olds who don’t vote. The children of Thatcher and Reagan have clearly not all been working to overturn neoliberalism (usual sympathies with the ‘precariat’ aside).
Bottom line: I have only ever known a professional environment where you will be increasingly expected to do more with less, or to face greater competition for the same pots of money: see also ‘austerian realism’. Alternatively, just say bollocks to it all, try to make it work in the private sector, set up a Kickstarter, move to LA, sell a kidney, become an estate agent…
…but I’d like to believe it’s still possible to do some good in a bad system, right?

Naturally, I don’t think a lot of the evaluation-oriented work I’ve done over my career so far has been a complete waste of time! And yes, before you ask, my personal bias-o-meter has confirmed that I am currently radiating zero partiality-particles, so that, my friends, is a 100% iron-clad UltraFact© – guaranteed for 10,000 miles or 5 years, whichever comes sooner.
TLDR: I don’t think we need to entirely throw out the empirical-evaluation baby with the positivist-instrumentalist bathwater.
The rest of this piece is my thinking on some of the things we should try to keep, some of the things we should chuck and whether or why any of it “has to be done” in the first place.
How to avoid “Schrödinger’s evaluation”
Part 1: “You spend money you don’t have…”
If the issue is purely about waste, let me just wade through these mountains of unused print marketing materials, check on how our mobile app with virtual reality integration is doing (five downloads now, great) and go to another training day or conference that could probably have just been some online discussion and a video; then I’ll get back to you. There are easier targets.
The actual nuts and bolts of doing evaluation are not particularly expensive or complicated. Data, as we know, is more plentiful than ever. The much more likely problem is that, because it can be such an open-ended and wide-ranging task, people either don’t get off the starting line at all, or they end up getting lost in the woods. Sometimes this is the fault of the individual, but it is not helped by unclear direction from funding bodies and the sector. There’s also no amount of evaluation that will save a truly sinking ship (though the line between actual hands-on consultancy and more supposedly neutral evaluation can be pretty thin).
I don’t think budgeting something like the 3-5% recommended by the RSA for evaluation, especially for work with a more educational flavour, would really break the bank for most projects. I wouldn’t be surprised if this is in line with what many are doing already, even if it isn’t necessarily identified as such. Having it clearly staked out would probably be better value than cobbling it all together with whatever people and resources happen to be available at the time. Saying “it’s everyone’s responsibility” is often a great way of saying “it’s no-one’s responsibility”. Some issues are rooted more in management effectiveness and having clear direction in the first place.
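To put some rough, purely hypothetical numbers on that 3-5% (these figures are mine, for illustration only): on a £60,000 project it works out at £1,800-£3,000, which might buy a couple of weeks of a freelancer’s time plus some printing and incentives; on a £500,000 programme it is £15,000-£25,000, enough for a modest external evaluation. Hardly ruinous either way.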
Some people DO actually need to put some funds behind the tedious legwork of it all (fieldwork, data entry) rather than just expecting things to run themselves (whether or not volunteers are involved). Be generous with your incentives and prize draws. Let people (including participants) know about the results. They can be your biggest advocates, and really, they don’t have to be as impartial as the rest of us pretend to be.
The sentiment “it’s only done because funding bodies require it” is a tautology, and not a particularly insightful one at that. You could interpret it as: “I know it’s a botched system, but don’t worry, I am smart enough to know which levers to pull… despite my alleged distaste.”
Artists and organisations, funded and unfunded, have had ulterior social or economic motives, to varying degrees, for as long as there have been arts professionals of any description. The complaint supposes that artists are beholden only to some divine muse, rather than (at least in equal part) to their landlords: “The myth of the suffering artist is part of the wider myth that sinking into abjection will somehow cleanse and elevate the poor and/or unconventional, eventually leading them on to glory.”
It arguably furthers the perception that publicly funded arts (those facing more of this kind of external scrutiny) are the only types of art that matter. Of course, it would be great to see funding bodies more proactively seeking out and developing artists (and I’m sure this does happen, to a certain degree) rather than responding only to those wily enough to brave the application forms and lingo in the first place. But (and this is probably not the only time I’ll say it) the majority of people’s cultural diets sit outside the funded sector. The Taking Part survey is often interpreted as showing that 70-80% of people ‘regularly engage in the arts’, where ‘regularly’ means just ONCE in the last YEAR and ‘the arts’ could mean a whole range of things (but NOT karaoke, as I am always fond of pointing out). Is once a year really that regular?
The underlying theme, technocratic managerialism as one of the creeping tendrils of neoliberal wealth accumulation… yes, this is of interest! But I would not expect some motley bunch of underpaid freelance workers and volunteers, putting on a little one-day town hall arts and crafts festival, to be the ones to solve this single-handedly: “Once you’ve put up that stage, do you mind popping out to overthrow neoliberal capitalism? Cheers, mate… Well, alright then, have your tea break first.” Excuse the metaphor, but set against departmental budgets and the like, the funded cultural sector at a national scale might as well be that little arts festival. Sure, we’ve got a part to play, but we do seem fonder of beating each other up than our supposed opposition. (The narcissism of small differences.)
We can certainly all point to examples here or there where evaluation is being done in confusing, wasteful and probably even dishonest ways. Many organisations definitely have greater reporting requirements, to a wider range of stakeholders and funders, than ever before. There is definitely scope to reduce the repetition and redundancy of a lot of this while still leaving room and ambition for more individual, contextual approaches. (There is also scope to just throw some more money behind all of it too, but we’ll get onto that.) Funders could also reassure us that this hard-won data is actually informing their decision-making and that we’re not just recycling the same old stats year after year. We especially don’t want to see newer, smaller organisations slaving away on evaluations that are basically ignored, while big, established organisations turn out any old guff that gets wildly celebrated.
So why not, in the meantime, actually spend some time taking down or improving on ‘bad evaluation’ on its own turf? Especially if it is all as poorly thought out, barely funded and shoddy as it supposedly is? (For example.) I think we could stand to be a bit more tactical about things. I would say that some of our ‘defensive instrumentalism’ comes not just from a general lack of understanding of social research methods and evaluative practice in the first place, but also from our own limited examination of what we might actually AGREE is worth measuring and being judged against. Of course, you’re never going to resolve some of this (at least without significantly reforming funding), but pragmatically, I am of the view that if you don’t come up with measures that suit you, someone else will probably just impose theirs on you anyway. Has that not been the case for at least the last 40 years or so?
Part 2: “To demonstrate what a great job you’re doing…”
Evaluation is supposed to be a critical and reflective process, yet we only ever hear about how bloody amazing everything supposedly is.
First off: I don’t think this is necessarily unique to arts and culture, though given the overlapping charitable, leisure and entertainment relevance, advocacy here probably has a certain shade of boosterism that you just won’t see in other sectors, or in the pages of such established trade publications as Potato Storage International, Cranes Today or Fishkeeping Answers.

You could easily be tricked into thinking that any one of the noisy, self-aggrandising, glossy press releases you’ve encountered recently actually IS the point of doing evaluation, for a lot of people. Again, probably not unique to the cultural sector; I mean, have you ever written a CV before? There’s self-reflection and there’s self-sabotage, right? We could all be a bit more comfortable talking about the unknown and about failure (and not just to demonstrate how earnest and self-critical we are). Part of setting good targets is knowing what failure looks like. For instance, see Failspace:
“Significantly, it seems most professionals define success in terms of the number of people who take part in their activities and on their ability to gain further funding to continue their work. Even among those who said their aim was to bring about social change, there was limited consideration of whether this had occurred, let alone how it had been achieved.” (link)
Again, funding bodies could lead the way here in terms of their own transparency. However, it is worth bearing in mind the mixed funding model of most organisations: if you are keeping several funders happy at once, it is likely they have different measures of what counts as success, and probably that some of those measures actually contradict each other. I did feel a pang of recognition with a recent comment on this Arts Professional article, “Fears within ACE over dumbing down art”: “Why is the discourse here still on the same stale dichotomy of aesthetic vs. participatory? Surely Art is more complicated than that? Come on!”
Or, on another fairly recent article discussing “hyperinstrumentalism” in cultural policy, it was surprising to see so many readers in the comments coming out to defend instrumentalism. Some examples:
“Individuals don’t have to justify their spending policies [but] when it comes to government money, then policy is crucial.” … “The general rush to measurement is deeply problematic but it is a justified reaction to the old method [which is] profoundly undemocratic, since nobody can really define art or quality they inevitably give money to the things they find worthwhile” … “It is a completely legitimate question to ask: How are citizens experiencing cultural activities funded by citizens? And what tools are we using to ensure this?” … And finally, here’s a cracker: “I feel I have read this article a thousand times over the past 15 years and it has been pretty irrelevant for at least the past five.”
Instrumentalism is seen by some as a bare minimum, some kind of “least worst option”, and even, in other ways, admirably democratic. Success as an individual artist can mean whatever you want it to, but if you want public funding, there are going to be some strings attached.
Probably the least controversial measures would be around monitoring basic numbers and the broad demographic profile of people engaged, or the amount of money spent on X, Y or Z. But if all of this is necessary, it seems only fair that funding bodies could reciprocate a little more too: Who are they? What expertise do they have? How do they make certain decisions? Although, past a certain point, I think you do have to accept some degree of obscurity, or else people will just hunt down the individual panel members they disagree with.
As much as co-creation is in vogue at the moment, I would love to see a detailed study of how much authority and expertise is really wielded or exchanged by the average person on the street in these situations. Everyone obviously has the capacity to make aesthetic or cultural distinctions, but we can also appreciate the expertise and experience of others. I guess there is some ideal balance between hierarchy and authorship, representation and participation, that this is all aiming towards, but god knows who is qualified to pin it down. A worthy target anyway, I suppose… unless the target is so broadly defined as to be meaningless.
So, having committed to spending some resources on evaluation and having grasped the basic tools, there is still the issue of where it sits in the overall organisation or project. We are all familiar with the end-of-project/end-of-cycle rush to get this sort of thing squared away, which often throws up problems. I have often said evaluation is shaped like a U: lots of work at the start, not much in the middle, and a lot of work again at the end. In this respect, it can fall into a lot of the same pitfalls people raise around short-term, project-based working in the first place: there’s not enough time to ask serious questions; we start to get somewhere interesting and then it’s time to get the next round of funding sorted out.
Try to bear in mind how things will actually be used in decision-making, though a little exploration isn’t entirely off the cards either (might I suggest the DIKAR model of information management as a good starting point: Data → Information → Knowledge → Action → Result). Try to keep some continuity across the gaps from project to project or year to year, where relevant, for documentary reasons if nothing else; you may find those seeds bear some juicy longitudinal fruit in the years to come. (Or, even better, that you did a sufficiently good job of ‘proving the case’ last time that you can do something different next time.)
If there is something we can all do, it is to place less emphasis on the ‘nice glossy report and press release event’ aspect, and more on all the work leading up to that point. There will never be a world without advocacy of one sort or another, or the politics underlying it all, but we can at least ask better questions of it. It seems a noble cause to just ignore this stuff, or write it off as pure instrumental utilitarianism, but doing so, at least on its own, doesn’t seem to have persuaded anyone of an alternative (at least yet).
(For more on the ethics of measurement, see ‘Getting to the Bottom of “Triple Bottom Line”’ in Business Ethics Quarterly, and possibly even this response to it as well.)
Part 3: “With measures you don’t believe in…”
Despite my earlier point that arts professionals have been ‘doing evaluation’ of one form or another since the dawn of history, there are more modern trends in evidence-based policy and instrumentalism that deserve a look.
I definitely remember evaluation being referred to (half-jokingly) as the ‘new health and safety’ at one conference, many moons ago. That is: it was something patronising and pointless that we all had to ‘play along with’, without seeing any benefits or taking it seriously. But, by and large, here we are today: risk assessments, health and safety, no big deal; what was all the fuss about in the first place?
Do you think, if we wound the clock back another decade, there might have been someone at the same conference half-jokingly referring to health and safety as the ‘new Equalities Act’? What do you mean I’ve got to put a ramp in?
There are many other “annoying” things “forced” on us: minimum wage laws, health and safety, safeguarding, fire capacity regulations, GDPR and so on. And we seem to love the heroic figures who take on ‘the machine’ and win: don’t you realise, it’s not about who will let me, it’s about who will stop me!
We do make some progress sometimes, right? See the various ‘event myths busted’ lists from the Health and Safety Executive for examples. Or watch a fun video about the famous “McDonald’s hot coffee lawsuit”. Go on. You’ve made it this far, have a little break.
The wider point is that it is easy to blame “the grand bureaucracy” for just about anything, rightly or wrongly. I have seen Brazil, by the way.
It seems appropriate, or inevitable, that funding bodies will effectively use their grantees as some kind of information-gathering apparatus, but there is obviously a tension between the two sides’ priorities. I think grantees have some influence on the wider direction, though perhaps this is more like trying to steer an oil tanker than a sports car. The same can be said for cultural policy within the wider direction of government. (Hey, let’s swap the DCMS (£1bn) and MOD (£52bn) budgets and just see what happens!)
On the ground, are more people doing this sort of pencil-pushing because they feel they have to than because they really, honestly, want to? Probably! – at least to begin with, but that’s how we learn most things, right? The immediate desirability of a thing is probably not a good measure of its longer-term value.
I feel that many of the issues stem from the fact that people don’t feel they really know what they’re being measured against in the first place (or why). Much like the majority of people who still don’t get Health and Safety, Equality and Diversity, Environmental or other such policies, a fair bunch tend either to be misinformed, to be deliberately looking for an axe to grind, or to actually have a bigger grievance about something else entirely.
They see confusing, moving targets from a variety of funders. People get their rejection letters back, or their end-of-project debrief, and find it hard to parse (short of any obvious calamities). It feels like a game of reading between the lines. Should I have emphasised this or that theme or outcome? Should we have emphasised this community or that community? Am I an early-stage or mid-career artist? What if the evaluation tells me I am crap?
A lot of time and energy is spent worrying about applications: see the positive response to Jerwood Arts encouraging artists to cover their own time spent applying for grants. Nevertheless, some degree of instrumentalism and varying levels of coercion are the price many pay in exchange for funding.
Obviously we can go much, much further down the instrumental line of thinking, even to the point of (seemingly) putting quantified values on artistic excellence. Nicholas Serota, when pressed about the increasing march of metrics into the arts world, said of the notion of using tick boxes: “I think the boxes will have to get larger.” I read this as a welcome little bit of push-back at the time, but we still seem to have ended up with the Quality Metrics system regardless, so who knows. (Sorry, it’s now called the Impact and Insight Toolkit.)
I’ve criticised the Quality Metrics situation (as has seemingly everyone who has written for Arts Professional at any point in the last few years); however, I don’t dispute that it, or any other system like it, probably COULD produce SOME useful insight for SOME people.
My problem is more with the way it has been funded (via the largest single tender in ACE history, originally in breach of state aid rules), the way it has been constructed (largely behind closed doors, by a private company in which ACE has no long-term stake) and the way it has been enforced (on organisations obliged to use it, with little to no input into its design now or seemingly at any point in the future). In criticising all this, I don’t think we should be satisfied with just standing on the sidelines and scoffing haughtily: “Of course you can’t measure the arts! You can’t tick a box called ‘transcendental wonder’!”
Personally, I’ve never met anyone claiming that this or that research instrument ACTUALLY WOULD measure transcendental wonder, and I doubt those making such wry observations have either. Anyway, I’m glad we’re having these useful, constructive discussions about totally fictional scenarios. Hence some of the frustration expressed by the practitioners I’ve encountered, as exemplified by the Arts Professional commenters earlier.
Whatever system of measures or decision making you have (or pretend you don’t), someone, somewhere has their hand on the chequebook:
“If you exist to help people create the work they most believe in, do not be surprised when the number of applicants far exceeds your resources. You need a process for determining which are the most valuable proposals – and it will be contested. How do you determine value? In reality, like all political decisions, the answer is a more or less acknowledged compromise of idealism and power. However it is resolved, the people you fund will feel vindicated and see your decision as self-evidently just, while the much greater number you refuse will feel slighted and misunderstood.”
(link)
I suppose, in summary, we are fully aware of the various instrumental agendas we have been required to work under for the past decades (centuries? millennia?). This, in itself, is not news. We are not necessarily thrilled by the situation, and we may even support alternatives, but people are often more immediately concerned with what they are supposed to do at the end of this funding round. I always remember an arts agent I interviewed once saying they felt they had “done their time” in the public sector. At some ends of the scale, at least, many people probably do aim to use it as a jumping-off point to other things, rather than as an end in itself or as a kind of eternal life-support system.
Don’t participate, then? Take the money and refuse to do the evaluation? I think it is easy to conflate refusing to participate with some kind of instant status as an edgy, culture-jamming renegade. I mean, I get it. Spoiling a ballot is better than nothing, absolutely. Some of this is probably necessary, but an equal amount is probably just self-serving. (See also: Time Management for Anarchists.)
I can’t help but feel (idealistically) that if the sector did a better job of establishing its own shared knowledge and own ideas about success in the first place, there would be more impactful pushback and ultimately more control of the current direction. Trying to get artists to agree on anything is a bit of an ask, certainly.
There is clearly a diversity of individuals and organisations, so there needs to be a diversity of methods. At the very least, people could know what degree of instrumentalism they are getting themselves into and what methods they may need to adopt. Even ACE has arguably been paying at least a little attention to this; see, for example, the smaller ‘Developing your Creative Practice’ fund and even a ‘Grassroots music fund’, both emerging relatively recently.
And finally, very importantly, yes, we can further imagine a wider diversity of funding streams in the first place, all with varying levels of conditionality attached.
Let’s just have some funds that have no strings attached whatsoever!
Let’s have one that’s decided entirely by children!
Let’s have one that’s picked by a lucky Octopus!
Part 4: “At the end of it, you get less funding anyway.”
Standing still and keeping the lights on is arguably seen as a considerable mark of success in the current environment. This is not an unfamiliar ‘success story’ in the arts: “the reduced cuts suggested its lobbying had worked” – as organisations in Birmingham only face a 23% cut, rather than a 30% cut – hooray? (Not even accounting for previous years’ cuts, and with a heavy emphasis on the added pressure of hosting the upcoming Commonwealth Games.)
We’ve got no local authority arts officers. Some authorities have no cultural budgets at all. Some libraries don’t have librarians and some museums don’t have curators. Major outdoor and public events rely on the diminishing resources of the police and local authorities. That is, if the public parks being used aren’t already hosting commercial events primarily to make up for their own diminishing budgets. Wait, one more, hold the presses: it’s not just the cultural budget; the whole authority has gone bankrupt. Could it get worse? Well, yes! Across the pond, they’ve been kicking around the idea of cutting arts funding in its entirety for a number of years now.
It’s no surprise that competition heats up for shrinking funds and that organisations are encouraged to look outside the usual boxes. It’s worth bearing in mind that the mixed funding model most organisations have is not an overnight invention. Some recent stats give the average funding mix as: 52% earned income, 24% ACE subsidy, 13% contributed income, 7% local authority and 4% ‘other public subsidy’. Private investment in the arts has increased in recent years, though it is biased towards larger organisations in London and the South East. (Incidentally, the private investment stats here include ‘Trusts and foundations’ as a considerable chunk, which most people would interpret as quite different to ‘Individuals’ and ‘Business investment’.)
It is hard to quantify exactly how important ACE funding is, beyond its immediate cash value, in terms of securing these other types of funds, but it is certainly referenced by many as a mark of quality that helps draw in others. The mixed funding model also means that accountability is dispersed, so any kind of centralised reform would only go so far in the first place. It’s inevitable that ACE is the primary target of cultural campaigners, but we need to think beyond them as well. By the same measure, you can’t blame everything on ACE either.
At least on the economic side of things, I would imagine much of the electorate (if they give any thought towards cultural policy at all) are still relatively happy with a primarily Keynesian justification for ACE: “Wealthy nations can afford to provide nice things for the public, and really, we don’t need much more justification beyond that. It might even turn out to do some good for the economy, or education, or health as well.”

State funding is important (whether central or local government), but the sector is more complicated than just monitoring one QUANGO at a time. ACE gets just shy of £400m in grant-in-aid from central government (plus another £70m or so from the National Lottery). Cuts in other areas utterly dwarf this. Philip Hammond finds a ‘windfall’ £400m for education down the back of the sofa to ‘help schools buy the little extras they need’ – and is rightly laughed at. Meanwhile, someone forgets to round a few numbers and carry a few decimal points, and HS2 is set to overrun by £30bn or so.
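For scale, using the figures above (so treat this as back-of-envelope arithmetic): the £30bn HS2 overrun alone comes to roughly 75 years of ACE’s £400m annual grant-in-aid (£30bn ÷ £400m = 75).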
£400m kind of IS peanuts, and while that doesn’t excuse bad policy, we should keep it firmly in mind. It is about the same as the cultural budget of Berlin.
Therefore it seems logical that people are wondering whether ACE is the right vehicle for any kind of large-scale change. Not least the Movement for Cultural Democracy, with its proposed separate £1bn National Arts Fund. The Labour manifesto also pledges a similar, additional £1bn, though this would be divvied out over 5 years and (probably) administered by ACE – so effectively a 50% boost per year. I think we know what the main alternative to this is, having lived under a Conservative government for as long as we have now: austerity, a business-first mindset and so on (if they don’t self-destruct before then). You’ve got to give some credit to ACE for hanging in there at all, I suppose.
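(The arithmetic behind that 50%, for anyone checking: £1bn spread over 5 years is £200m per year, set against the existing £400m or so of grant-in-aid, and £200m ÷ £400m = 50%. Counting the Lottery money in the denominator would shave the figure down somewhat, so take it as a rough order of magnitude.)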
Whatever the ambition, a wide base of support is essential. Limiting the discussion to the usual suspects (sector leaders, policy makers, academics) will only take us so far. Blue-sky imaginings of a world of perfect cultural policy are well meaning but, pragmatically speaking, a bit navel-gazing. To this end, a bit of self-examination or evaluation is worthwhile if we really want to claim to be acting for the benefit of a truly representative slice of the population, and to build a mass base of support. What do your neighbours think the arts are “for” or “about”? What does the chamber of commerce think? Schools? Supermarkets? Utility companies? Emergency services?
This probably means we won’t always get to shout about how transcendental and irreducible the arts are as much as we’d like. And in the interests of building a broad church, I would say we also need to beware of “some people [who] are more addicted to fighting than winning.”
(And why not also see this use of The Simpsons in a metaphor about effective political campaigning).
In semi-conclusion…
If one of the main problems people have with cultural policy is that it supports an obscure, bourgeois circle-jerk, how do we make sure the alternatives don’t become something similarly obscure and self-fulfilling?
Whatever you think of the specifics, these questions will eventually go beyond the evaluation of this or that project. BUT: hopefully we have traced that continuum, however faint, that links one survey, interview, report or focus group up to the wider political reality. And in the words of the White Pube writers:
It all feeds into our perceptions of what we ‘should’ be doing and ‘how well’ we’re doing it. What is evidence? Who decides? Should we just accept that capitalism is the end point of all civilisation and hand the steering wheel over entirely to the technocrats? (I mean, probably not, but here we are.)
Can research instruments ever be anything but instrumental?
Can these market-research-esque methods really be empowering rather than dismissive? I would suggest listening to this podcast or reading this article for a surprisingly socialist take on that most consumerist of signifiers: the history of the focus group. Why is it that people prefer participating in focus groups over participating in local democracy? Why do you give your life history away to social media for free, but think that filling in a survey after you’ve enjoyed a free event is some horrendous intrusion? (Okay, that one hits a bit too close to home for me.)
I suppose my closing argument to this section is that considering and engaging with ‘the big issues of the day’ doesn’t excuse you from doing the best you can in the flawed situation we all inevitably exist in, right now. Whether we tip the whole thing over and start again, or reform from within: “What we needed were not words and promises but the steady accumulation of small realities.”
(A quote from Haruki Murakami, which I first came across in this also somewhat relevant piece: Bildung in the 21st century.)
To be continued in part 2: “Zeno’s cultural paradox”