Survey methods in events management research

Survey-based research methods are nothing particularly new to the event and festival sector, and in a forthcoming journal article (Event Management), my co-author James Bostock (Derby University) and I carry out an in-depth analysis of past, current and future trends.

Yeah, we may use survey methods a lot in this field, but that doesn’t mean we’re somehow exceptionally good at it, or that all the different variations of surveys can just be lumped together and considered ‘basically the same thing’. As a proportion, we’re doing ‘fewer’ surveys, but as the literature expands, there is still more and more research using surveys overall.

There’s always room for improvement, and hey, if you hate surveys for some reason, then wouldn’t doing them better/more efficiently give you more time and resources for interviews, focus groups, digital ethnography, participant observation and whatever other experimental methods you’ve got up your sleeve?

The original version of this paper was largely based around my own experiences of using various survey methods in various contexts, what could be practically learned from these, and where methodological grey areas (or even black holes) still exist; especially with respect to the endlessly diversifying uses of technology.

Of course I would recommend you read the full paper (when it comes out), but until then, this blog post contains a lot of material that did not make the final cut. Consider this the ‘DVD bonus features’ section, or perhaps something more relevant to a general audience than to academics specifically. What follows refers to various surveys that I have helped carry out in different festival environments, though the festivals themselves will remain anonymous.

Paper vs Digital / Electronic surveys

Paper-based forms and manual data entry are two defining features of most audience surveys. Despite the increasing availability of the technology required to both administer and receive digital surveys via mobile devices or home computers, the author has encountered three major problems that arise when attempting to remove paper forms from the process.

Firstly, the overall response rate and sample size. A survey of a multi-venue, multi-day festival (~20,000 attendance), where a team of fieldworkers visited venues to distribute and collect paper forms directly, achieved a sample of approximately 1,000. A very similar survey with the same festival in later years, distributed only by email and social media, has achieved samples an order of magnitude smaller, ranging from 150 to 200.

Secondly, the demographics of respondents. One survey was made available both on paper at a weekend-long festival (~15,000 attendance) and on the web for a week afterwards. 250 paper responses and 350 web-based responses were collected. The audience’s gender split was approximately 50% male / 50% female in the paper survey and 25% male / 75% female in the web survey, while the age groups were approximately 40% under 30 in the paper survey and 20% under 30 in the web survey.

Finally, the time of sampling, or the ‘immediacy’ factor, will introduce numerous factual and perceptual biases. Paper surveys are (mostly) done during events; electronic surveys are (mostly) done after events. Market research firm Ipsos, referring to other emerging trends in ‘in-the-moment’ research, suggests that feedback captured immediately after an interaction is considerably more accurate than feedback captured as little as 24 hours or more after the event. All of the above problems may be more or less critical for different festivals, though we argue that there are both practical and fundamental reasons why audience research should ideally be conducted at a point when the audience are more aware of their own experience and role as audience members. There are also conflicting sources around the effects of the time of sampling on things like audience expenditure: do people estimate they spend more if you ask them before, during or after the event?

There are further implications of an increasingly nit-picky nature… but one last one I’d like to mention is when a numerical 0 answer can be conflated with blank/missing or null responses. This has implications for how average values are calculated, for one, and depending on the software used, it may be that it “helpfully” auto-fills 0s for the user (or not), whereas on a paper form the researcher will end up with some 0s and some blanks, and has to figure it out for themselves. This can have a surprisingly large impact on things like total expenditure figures, yet I rarely see any comment or discussion on how other researchers deal with this issue (a quick sketch of the effect follows below). Equally, given a specific text box to write in, it may be that a paper respondent writes (gasp) outside of the box to give all of their views… whereas an electronic respondent could write forever into a box, only for the researcher to realize that their software cut off the text after a given number of characters anyway. No more nit-picks! (he proclaimed, in a blog post composed 95% of nit-picks)
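Well, one more, in code form. Here is a minimal sketch in Python, using made-up expenditure figures rather than data from any of the surveys above, showing how the 0-versus-blank decision shifts the headline numbers.

```python
import numpy as np
import pandas as pd

# Hypothetical 'spend on food' answers: two respondents left the box blank.
spend = pd.Series([25, 0, np.nan, 40, np.nan, 10])

mean_blanks_excluded = spend.mean()           # (25 + 0 + 40 + 10) / 4 = 18.75
mean_blanks_as_zero = spend.fillna(0).mean()  # 75 / 6 = 12.50

print(f"Mean spend, blanks treated as missing: £{mean_blanks_excluded:.2f}")
print(f"Mean spend, blanks treated as zero:    £{mean_blanks_as_zero:.2f}")

# Scaled up to a 15,000-person audience, the two readings imply very
# different total expenditure figures, so the choice needs stating explicitly.
print(f"Implied totals: £{mean_blanks_excluded * 15000:,.0f} vs £{mean_blanks_as_zero * 15000:,.0f}")
```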

Returning to technical issues, paper forms are not specifically required to encourage participation, but they are of course cheap enough to distribute broadly and quickly, and are a relatively familiar sight to audiences. Increased smartphone ownership is a potentially valuable avenue for survey distribution, with hyperlinks, QR codes or other prompts potentially being used in brochures or posters to encourage audiences to participate via their own smartphones. Smartphones overtook laptops in 2015 as the most important device for connecting to the internet in the UK. Despite these high ownership levels, we could further question: how many have a data plan, a reliable signal or the desire to use a provided Wi-Fi network? If Bluetooth, NFC or QR codes are used to direct people to a survey, how many are familiar enough with these technologies to participate? Bluetooth was also used at Roskilde festival to track individuals around the site, provided their phone was on and ‘discoverable’ (Larsen et al., 2013).

There are mobile apps too, of course, or the potential to use free Wi-Fi as a way of collecting some data. There are hundreds of ‘event management’ software options if you search on Capterra, or look to the Event App Bible or Event Tech Live to find more; usually these are based on a Software-as-a-Service (SaaS) type approach. Specific features vary greatly, and customisation or support comes at an additional cost. Regardless, while most festivals could potentially find some value in any or all of these approaches, the costs and complexities still seem to be beyond some festivals, especially given a potentially uncertain return on what is likely to be an ongoing investment. Even those who can afford custom-built apps might still question the value they provide or whether they replace any particular need for more traditional audience research.

Electronic surveys at events and festivals

If you want to read more about the software (Open Data Kit), I’ve written various other bits – or you can skip to the end section, or see appendix A for technical and hardware notes.

The following section covers brief examples of deployment in various festival and event contexts.

  1. A public engagement project that exhibited at three greenfield music festivals between 2014 and 2015. ODK Collect and three e-readers were used at the exhibit tent to capture audience response to the content of the exhibit. Power and internet access were generally limited throughout. A team of twenty staff and volunteers manned the exhibit, with one to three being available at the ‘front desk’ to administer surveys. At the same time, iPads were used by volunteers with Enketo to conduct a travel survey at a different area of the festival. Forms were uploaded at the end of each day by mobile data. Paper copies of the survey were also available in case three devices proved insufficient or users did not want to use them. Team members were able to copy paper responses over to the devices at quiet times in the exhibit. The iPads used for transport surveys collected several hundred very simple responses per day (type of vehicle and number of passengers). Over the three festivals, 180 responses were collected from an estimated total exhibit audience of 3,000; a 6% sample.
  2. A sonic art exhibition that ran for approximately three days in 2015 in a retail unit in a shopping centre. Three e-readers running ODK Collect were used. The general design was discussed via email in advance of the exhibition but was only finalised at the opening, where a mobile data hotspot was used to access, edit and load the relevant files on site. As in scenario 1, a front of house desk was manned throughout the exhibit by two to four volunteers. No paper forms were available. Over approximately three days, 66 responses were collected from an estimated audience of 300; a 22% sample.
  3. ‘Pop up’ activities at two different one-day outdoor festivals taking place on consecutive days. Both activities were managed by the same event manager, however each ‘pop up’ was interested in carrying out a different evaluation of its work. The use of colour images to assess marketing materials was required as part of the evaluation, so four colour tablets were provided. Two different forms were loaded onto the devices and the event manager was able to select the appropriate form for each festival. No paper forms were available. The first festival was included in the work as a trial and a technical error prevented all but one of the devices from being available; this was resolved by the time of the second festival. 20 responses were collected at the first festival and 74 at the second. The second festival had an estimated audience of 17,500, though we assumed only a fifth were likely to have visited this specific exhibit. Estimating a population of 3,500 would mean the sample of 74 was about 2% of the audience at the second festival.
  4. A weekend-long festival, taking place alongside a major sporting event, with performances at a number of outdoor areas throughout the city. The performances included street arts and music, with the audience mostly free-flowing. 8 devices were provided: 4 e-readers and 4 colour tablets. The exact number of volunteers and workers carrying out the surveys, or how long they were active for, is unrecorded. No paper forms were available. 108 responses were collected from an estimated audience of 50,000, a sample of 0.2%.
  5. A series of five small-scale indoor performance events over the course of a year. 4 e-readers were provided at all but one of the events, where 3 colour tablets were provided instead. The event manager also intermittently used an additional laptop they had to hand. No paper forms were available. 115 responses were collected from an estimated audience of 300, a sample of 38%.

In many respects, the basic challenges of on-the-ground audience research remain the same whether additional technology is being used or not. Aside from the general format of the event, the availability of researchers, staff or volunteers to administer the survey is usually the key factor. The exact number of fieldworkers and how long each was active in surveying was not closely monitored. It was often uncertain whether fieldworkers would be undertaking other duties or when the peak surveying time for each festival might be. Making feedback or survey completion feel at least unintrusive, if not relevant and even rewarding to do, continues to be a key challenge.

The main faults with the technology were as follows. Even with the falling prices of mobile technology, these devices obviously cannot be distributed as freely and with the same ease as sheets of paper and pens. It’s worth noting that this is only relevant when forms are expected to be (at least mostly) self-completed, rather than with the fieldworker acting more as an interviewer, in which case only one response is being collected per fieldworker at any given moment anyway. Security of the devices and of the fieldworkers was noted as a concern, though the well-lit, well-staffed and busy environments in which surveying took place meant that no issues were reported. Cases and lanyards were provided to give a visual signifier and, overall, the devices were significantly less valuable than the average smartphone. Visibility of the screens was occasionally noted, with the e-readers being better during the day but poor in the evening, whereas the colour tablets were poorer in the day and better in the evenings. Some feedback suggested that the use of devices was seen as ‘smart’ or ‘professional’ by both respondents and those administering the survey, whereas other feedback suggested that older audiences were less familiar with the devices and perhaps less likely to participate as a result. A key issue was also that potential respondents could not see at a glance what or how many questions the survey involved.

I have not yet really tested the capability of ‘ODK Scan’ which, on paper, looks like a great middle ground option between digital and paper, whereby paper forms can be scanned (by taking a photo with a mobile device) and responses automatically processed (an OCR-type of thing). You could therefore, in theory, use both paper and digital surveys pretty seamlessly, although there would still be methodological considerations and overall I imagine the survey itself would have to remain pretty small. This is an aspect of ODK that has seen some development, however I would guess that in most cases people would rather find a way around the limitations of digital-only devices (e.g. just buy more so you can have more available) than try a kind of ‘hybrid’ approach… therefore I don’t necessarily see this element being at the top of the list when it comes to future developments. But who knows.

Areas where the technology has made largely unmitigated improvements are thought to be as follows. General form design and management allows for quick revisions to be made when errors, changes or simple spelling mistakes are found. Templates and blocks of standard questions can be copied simply and accurately for reuse. The user also requires no particular skill in layout or graphical design.

Although digital forms are unconstrained in their length, one key benefit has been in the expansion of response options, rather than expanding the number of questions asked. For example, the age and ethnicity questions used in the UK to parallel national census data have 16 and 19 options available respectively. In previous print surveys, these questions have often been truncated to 8 and 6 option categories to save space. The same general concept can be applied to all questions, whatever the area of investigation. All questions can include an ‘other, please specify’ open text box with no additional clutter for those users who do not need it. In terms of accessibility, things like font size and even extra language translations can be handled seamlessly.

Digital forms are potentially more secure, easier to back up or archive, and less likely to be damaged, lost or thrown out by mistake. From a GDPR point of view, some of the benefits of paper forms (cheap and easy to distribute) are potentially liabilities if not considered properly. Of the few GDPR horror stories I’ve heard, a surprising number have involved paper records being physically stolen/lost/duplicated – audience surveys maybe aren’t as much of a liability as medical records of course, but the principle remains.

Paper forms can also simply be stuck in a box and forgotten about, as people can put off data entry until doomsday and beyond. As time drags on, the potential value of that data is only seen to keep diminishing. With digital forms, data quality is improved because invalid answers can be screened at the time of collection. The page-by-page approach might encourage less skipping of individual questions. The removal of the data entry step saves a considerable amount of time and likely improves data quality further, especially in a context where we have probably also increased the volume of questions and options. I have no specific data on this, but in my personal experience digital surveys seem to have fewer ‘skipped’ questions, and this is generally a positive thing; although I do wonder if some of this is down to the respondents of digital forms being more self-selecting than paper.

Finally, no printing costs are incurred and it is impossible to run out of blank forms in the field. There are too many variations in analysis to go into here, though if more standardized questions get used, it becomes more efficient for even a relative novice to look into creating reusable ‘dashboards’ through common spreadsheet formulas and functions.
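As a rough sketch of what that kind of reusable summary could look like (done here in Python rather than spreadsheet formulas; the column names are invented for illustration and would need to match your own standardized questions):

```python
import pandas as pd

def audience_dashboard(responses: pd.DataFrame) -> dict:
    """Recompute the same headline figures for any survey export that uses
    the same (hypothetical) standardized column names."""
    return {
        "responses": len(responses),
        "gender_split": responses["gender"].value_counts(normalize=True).round(2).to_dict(),
        "share_under_30": responses["age_group"].isin(["16-19", "20-24", "25-29"]).mean(),
        "mean_spend": responses["spend_total"].mean(),  # blanks excluded by default
    }

# Usage: point the same function at this year's and last year's exports
# and compare like for like.
# print(audience_dashboard(pd.read_csv("festival_responses_2019.csv")))
```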

Conclusions

There are other potential problems emerging for survey methods. We might argue that growing audience engagement via transactional data (box office), the web, segmentation, social media and so on may become increasingly sufficient for organisers to get a good sense of a target audience’s characteristics and opinions without having to engage in traditional research (e.g. ‘The coming crisis of empirical sociology’, Savage & Burrows, 2007).

We can understand that it is tempting to view free data as better than no data at all, but which of the ‘free’ options should an event manager or researcher rely on? Commercial platforms will always want to reify their own insights as the most valuable or accurate, particularly with the objective of selling targeted advertising and further services to help reach ‘your’ audience. (For a comparison of open segmentation and survey methods, see this blog post.) To a lesser degree you could make the same criticism of commercial survey platforms; the kinds of questions and use-cases they think are most important to support may well become the ways of working you end up adopting. I mean, it may not always be a problem, but it’s worth thinking about.

I would encourage you to think about, pilot and even cost/time-out various options before deciding on a particular method (or even both/multiple, of course!). For example: we have 2 fieldworkers and want to collect 100 responses, which means 50 each over the course of a 6 hour event. Call it 5 hours each with a 1 hour break = 10 per hour = 1 every 6 minutes (including time to approach and recruit people). Starts to add up, right? Then you’re going to spend 3-5 minutes per completed response inputting that data: another 5-8 hours of time for 100 forms.

You could at least get an estimate to compare the pros and cons against other costs (buying devices, software subscriptions) and figure out whether one approach is more economical than another (or is preferable in other ways besides).
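As a quick sketch of that kind of estimate, here is the example above turned into a tiny reusable calculation (the function name and default figures are just for illustration; swap in your own assumptions):

```python
def paper_survey_effort(target_responses=100, fieldworkers=2, event_hours=6,
                        break_hours=1, entry_minutes_per_form=4):
    """Rough fieldwork rate and data-entry time for a paper survey."""
    active_hours = event_hours - break_hours
    per_worker = target_responses / fieldworkers      # 50 forms each
    per_hour = per_worker / active_hours               # 10 forms per hour
    minutes_between = 60 / per_hour                    # one every 6 minutes
    entry_hours = target_responses * entry_minutes_per_form / 60
    return per_hour, minutes_between, entry_hours

rate, gap, entry = paper_survey_effort()
print(f"Each fieldworker: ~{rate:.0f} responses/hour (one every ~{gap:.0f} minutes)")
print(f"Data entry afterwards: ~{entry:.1f} hours for all forms")
```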

But, costs aside, something we get into more in the journal article is the bigger differences between different types of surveys (before/during/after, digital/paper, with interviewers/without, multi-day vs single-day events, etc.).

Use Paper when:
You have relatively few, relatively simple questions. For example, if it can fit on a side of A4 without shrinking the font size or cluttering up the layout too much.
You are confident people will be able to self-complete or if self-completion is critical to your methodology (eg: a small window of opportunity to distribute loads and hope some come back).
You are not expecting many responses and therefore not anticipating loads of data entry (or you have a good plan for dealing with this separately and don’t just think the data entry fairies will take care of it).
You have a plan for physically securing completed responses (and probably destroying them afterwards)
The kinds of questions you are asking are relevant to responses ‘during’ the event (and within that, at the start of the event? in the middle? when people are leaving?).

Use Digital when:
You need more complex types of questions and possibly more questions overall
You can’t reach people ‘there and then’ at an event, but you do have their contact details (or use tablets/mobile devices accordingly)
The amount of time saved by avoiding the data entry and analysis of paper forms outweighs the possible downsides/expense of digital
You have a decent reach on the web/social media to ensure that people will find out about the survey.
You are happy to choose from and use one of the very many free / paid-for services out there (and accept any limitations or extra costs that may ensue)
The kinds of questions you are asking may be more relevant to responses that are ‘before’ or ‘after’ the event. (or both)

Any other suggestions? Let me know!

There are some things technology can’t help you with I’m afraid #surveyshaming

Tell me more about ODK

EDIT: Actually I thought it might be more useful to FIRST talk about why you might want ‘open’ survey software in the first place – why not just use a free (Google Forms etc.) or a commercial (Survey Monkey) option? They’re pretty easy to use and not very expensive? Well, here are some reasons, in some vague descending order of importance:

– The features are hands down better. I have never come up against anything ODK can’t do. Even in those rare fringe cases, you find out someone else has already been working on that feature and it’s just come out. Barcodes, GPS, offline capability, so many randomization options, on any kind of device or browser you like, appearance options, metadata collection, encryption… Everything is available to everyone; there’s no ‘GOLD’ plan.
– You can edit surveys in basic spreadsheet format rather than an online interface. Yes, the latter is easier for beginners but: how do you save stuff offline? how do you copy questions or answers between surveys? if you let your account expire do you lose all your stuff? isn’t it easier to just type questions than drag and drop things, especially as things get larger and more complex?
– There is a legitimate community out there to engage with and ask questions of! Sure, ringing or emailing a commercial provider might get you faster support – and if you do want this, there are commercial ODK-based providers too, so you can have the best of both worlds.
– You have complete control over where the data is stored. Locally, in the cloud, on your own server. Very secure before you even start to think about encryption.
– The costs are lower for most people, possibly even entirely free. There’s no recurring cost and no price changes in the future. Depending on how you use it, you may end up paying some kind of server fees, and of course there is the cost of your time learning the ropes. It definitely pays off, especially if you find any of the other benefits appealing.

Open Data Kit (ODK) was initially developed by researchers at the University of Washington and subsequently adopted and advanced with contributions from numerous individuals and organisations (Hartung et al., 2010). Around 150 deployments are listed on their website, showing usage ranging from academic research to humanitarian groups and commercial applications (Open Data Kit: Deployments). Many of these deployments of ODK have been in challenging environments, where the upfront cost and ongoing maintenance of custom-developed applications and expensive hardware can be just as much of a barrier to effective data collection as limited mobile, internet and power infrastructure. Even in cases where labour is cheap, the time required and the potential for human error in data entry present a problem. The basic function of these tools will be outlined in this section, with some comments regarding the overall research process of a typical audience survey.

ODK Collect is often the most visible part of ODK, as an app which runs on a suitable Android device, usually a phone or tablet. The end user, or respondent, navigates through multiple pages of a form, each page typically showing a single question or response matrix. Questions can be in any common format, with the user able to make use of the touchscreen, on screen or physical keyboards, camera, microphone and GPS to complete each question. The device can be used by either an interviewer or the respondent themselves, with a little supervision.

Forms are designed in any spreadsheet software according to the XLSForm standard, essentially a very basic programming language. The design stage can also be performed via a web browser (ODK Build) with more of a GUI, visual, drag-and-drop type of interface. The cells, rows and columns of a standard spreadsheet are used to describe the order, layout and type of questions to be presented. The simplest question types are ‘text’ or ‘integer’, which provide an open-ended text or numerical box. Single or multiple choice questions use ‘choices’ listed separately, which can therefore be reused as required. Other features worth noting are constraints (only accept a number between X and Y), relevant (only show this question if a specific previous answer was given) and required (this question must be answered to continue). If additional languages are required, translated text can be integrated directly into the design without having to set up multiple new forms. The basic spreadsheet format makes it easy to copy between different files and manage different versions. Importantly, the created logic for user interaction also works as the basis for the data store that receives completed responses.
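To give a flavour of that, here is a minimal, hypothetical form written out with pandas. The question names and labels are invented, but the column headings (type, name, label, constraint, relevant, required) and the separate ‘choices’ sheet follow the XLSForm standard described above.

```python
import pandas as pd

# 'survey' sheet: one row per question.
survey = pd.DataFrame([
    {"type": "select_one rating", "name": "overall_rating",
     "label": "How would you rate today overall?", "required": "yes"},
    {"type": "text", "name": "rating_detail",
     "label": "What could we have done better?",
     "relevant": "${overall_rating} = 'poor'"},      # only shown after a 'poor' rating
    {"type": "integer", "name": "spend_food",
     "label": "Roughly how much did you spend on food (£)?",
     "constraint": ". >= 0 and . <= 500"},           # screens out impossible values
])

# 'choices' sheet: the options for any select_one / select_multiple questions.
choices = pd.DataFrame([
    {"list_name": "rating", "name": "great", "label": "Great"},
    {"list_name": "rating", "name": "ok",    "label": "OK"},
    {"list_name": "rating", "name": "poor",  "label": "Poor"},
])

with pd.ExcelWriter("audience_survey.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    choices.to_excel(writer, sheet_name="choices", index=False)
# The resulting .xlsx is then converted to an XForm and loaded into
# ODK Collect or Enketo, as described below.
```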

This human-readable XLSForm spreadsheet (an .xls file) is converted to a machine-readable XForm (an .xml file) which the application can then present as a form. Responses are saved locally to the device. This can all be accomplished offline; however, ODK Aggregate is a ‘ready to deploy server and data repository’ which allows a user to remotely receive responses sent by ODK Collect and to provide form designs to be downloaded. The community documentation and installer provide a relatively simple ‘click to deploy’ experience. By default, Google App Engine is the platform as a service (PaaS) used for hosting, which allows for ‘pay as you go’ hosting. A free daily quota means small projects, collecting fewer than 2,000 responses, will incur negligible hosting costs, if any (Open Data Kit: Deployment planning). Additional built-in integration with Google map services and Fusion Tables may be of further use but we will not detail these here. Finally, Enketo allows the same XForms to be used as browser-based surveys without the specific Android app (ODK Collect); in the author’s experience this has included iPads and various Windows laptops. Such a survey can be distributed via email or social media much the same as any other browser-based survey, though the offline capability remains if needed. Responses through either route are (usually) collected together in ODK Aggregate (or Google Drive, or ODK 2.0, or ODK Briefcase, or ODK Central…)

All of the above steps utilise software that is entirely free to use, share and adapt in perpetuity, thanks to the open-source license (in this case: Apache v2.0). Of course, in the longer term, management and maintenance costs are not to be ignored, and other suppliers have emerged to offer more ‘turnkey’ solutions built on ODK: Ona.io, SurveyCTO and KoBo Toolbox, to name a few.

Appendix A: Technical and hardware notes

ODK Collect states the minimum specification for the software is Android 1.6 ‘Donut’, released in 2009, which covers virtually all devices in this author’s experience. The author has directly used a number of different devices in the following live ‘production’ contexts. These include modified e-readers and 7 and 8 inch Android tablets. The e-readers are Nook SimpleTouch devices, which have black and white 6 inch touchscreens and have been rooted to run Android 2.3. Because of the e-ink screen and the very low technical specification, the battery life of the e-readers is much greater than that of the colour tablets, measured in days rather than hours. The e-ink screen also has the benefit of being easy to read in direct sunlight. Three used e-readers were purchased for around £30 each. The colour tablets used have been a first generation Google Nexus 7 (running Android 5.1.1 ‘Lollipop’) and a first generation Lenovo Yoga 8 (running Android 4.4.2 ‘Jelly Bean’). Compared to the e-readers, these devices were able to show colour images and video. They are harder to see in direct sunlight than the e-readers, but not to the degree where work appeared to be significantly hampered. Conversely, the backlit screens are more visible in indoor and darker environments. The battery life is likely to be enough to cover one day of fieldwork, typically six to eight hours, though the Lenovo tablet features an extended battery allowing for double this amount. These were purchased for £60-£90 each, manufacturer refurbished. Various iPads have been used with Enketo, though these were not under the author’s direct control; however no particular problems were reported.

Regarding all devices, it can be noted that additional batteries that charge via USB are similarly affordable, and including a backup may be good practice. In most cases, researchers will be able to access charging in between uses or simply include more devices. Managing power and screen brightness settings can be worth investigating, as can ensuring that Wi-Fi is turned off when not uploading/downloading files. Finally, using mobile data, whether through a device’s own SIM card or through a Wi-Fi hotspot, is quite cost-effective and convenient. Each response uses a fairly negligible amount of data (kilobytes), although if the survey includes audio, image or video responses, this will be much higher. It may not be advisable to use public Wi-Fi when transferring sensitive data – and there are further end-to-end encryption features that can be put in place as well.
