A survey of emerging technologies and their implications for the performing arts
Prepared for ArtsEmerson by Exponential Creativity Ventures | March 2019
Researched and written by Adam Huttler and Rutger Rosenborg
The year is 2019, and the performing arts have yet to fully enter the digital age. But this need not be cause for hand-wringing or self-flagellation. Indeed, it creates compelling opportunities for disruptive innovation at organizational, artistic, and field levels.
Structured Curation Algorithms and Recommender Systems
Applications of Artificial Intelligence for Curation and Creation
Immersive Environments and Extended Realities
Brain-Computer Interfaces
eSports: Reimagining and Reapplying the Performing Arts
Ushering Business Infrastructure Into the 21st Century
Engaging New and Existing Audiences
Where to Begin
The performing arts industry faces myriad challenges associated with initiating and sustaining digitization. At the same time, there is no shortage of possible strategies with which to approach it. Some follow successful examples from adjacent industries, while others are entirely novel and/or hypothetical. Structured algorithms for recommendation engines, artificial intelligence for curation and creation, immersive environments for enhancing performances, the new frontiers of brain-computer interfaces and eSports, and improvements in industry management and social engagement can all play a part in bringing the performing arts fully into the 21st century. But to fulfill that potential, the field — and especially its nonprofit institutions — will have to confront its own cultural resistance to change and adopt a much more ambitious approach than it traditionally has.
Anyone who has shopped on Amazon has encountered the numerous recommendations the multinational e-commerce superstore offers with each item: “Customers who viewed this item also bought” and “Frequently bought together” are the most common headers for these lists. What Amazon is doing here, in a broad sense, is curating the user’s experience by offering recommendations culled from aggregated purchase and ratings data. Fortunately, individual employees aren’t doing this data analysis — highly optimized structured algorithms are.
A structured curation algorithm is a finite set of computational rules that processes user data for preference discovery and recommendation engines. Big tech companies, especially those with tastemaking reputations, have used these tools for recommender systems that directly and indirectly analyze user behavior and provide item and/or content recommendations tailored to each user’s taste. There are two primary ways to process this behavioral data: collaborative filtering and content-based filtering. With collaborative filtering, the algorithm collects users’ ratings and purchase histories and recommends items based on other users with similar histories. A company using this direct approach will either analyze an individual user’s preference profile against its entire user base or cluster the user base into segments and analyze each user’s preference profile against its respective segment. With content-based filtering, on the other hand, the algorithm approaches recommendations by analyzing the items themselves, grouping them with items that have similar characteristics, and curating a user’s experience accordingly.
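To make the distinction concrete, here is a minimal sketch of user-based collaborative filtering over a toy ratings matrix. The users, shows, and ratings are invented for illustration; a production system would operate on millions of rows and use more robust similarity and weighting schemes.

```python
# A minimal sketch of user-based collaborative filtering.
# All users, shows, and ratings below are hypothetical.
from math import sqrt

ratings = {
    "alice": {"hamlet": 5, "swan_lake": 3, "nutcracker": 4},
    "bob":   {"hamlet": 5, "swan_lake": 2, "godot": 4},
    "carol": {"swan_lake": 5, "nutcracker": 5},
}

def cosine_similarity(a, b):
    """Cosine similarity over the shows two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[s] * b[s] for s in shared)
    norm_a = sqrt(sum(a[s] ** 2 for s in shared))
    norm_b = sqrt(sum(b[s] ** 2 for s in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score shows the user hasn't seen, weighted by neighbor similarity."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        for show, rating in their_ratings.items():
            if show not in ratings[user]:
                scores[show] = scores.get(show, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # shows alice hasn't rated, best match first
```

A content-based filter would instead compare attribute vectors of the shows themselves (genre, style, cast size, and so on) against a profile of what the user has already enjoyed.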
Most of today’s major tech platforms use structured curation algorithms that fall somewhere along the spectrum of collaborative and content-based filtering. For example, Amazon’s “item-to-item” collaborative filtering “rather than matching the user to similar customers … matches each of the user’s purchased and rated items to similar items, then combines those similar items into a recommendation list.” Spotify uses a more traditional version of collaborative filtering to create a personalized “Discover Weekly” playlist for each user every week. One of the earliest and most basic recommendation engines, Pandora, is largely item- or content-based, operating on the premise that songs are composed of some 450 musical attributes that the algorithm, in conjunction with Pandora employees, can analyze to construct a taxonomy of music and offer recommendations via categorization. This is what Pandora calls the Music Genome Project. It’s clever and clean, but not especially flexible or adaptive. By contrast, Netflix uses a hybrid recommender system that draws upon both collaborative and content-based filtering to not only recommend movies and series, but to influence content creation as well.
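Amazon’s item-to-item variant quoted above can be sketched just as simply: similarity is computed between items, based on who bought them together, rather than between users. The purchase data and show names below are invented for illustration.

```python
# A toy sketch of "item-to-item" collaborative filtering:
# similarity lives between items, derived from co-purchases.
# All purchase data below is hypothetical.
from collections import defaultdict
from itertools import combinations

purchases = {
    "u1": {"hamlet", "macbeth"},
    "u2": {"hamlet", "macbeth", "swan_lake"},
    "u3": {"swan_lake", "nutcracker"},
    "u4": {"hamlet", "nutcracker"},
}

# Count how often each ordered pair of items appears in the same basket.
co_counts = defaultdict(int)
for basket in purchases.values():
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def similar_items(item, k=2):
    """Return the k items most often co-purchased with `item`."""
    candidates = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(similar_items("hamlet"))
```

Combining each of a user’s purchased items’ neighbor lists into one ranked list yields the familiar “Customers who bought this also bought” output.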
There are legitimate reasons for the performing arts industry’s failure to adopt such strategies. The foundation for any functioning recommendation engine is a large set of structured data. With the digital and physical commodity-based exchanges offered by Amazon, Spotify, Pandora, and Netflix, such “big data” is readily available; not so with the largely place- and experience-based exchanges that define the performing arts. There are two reasons for this discrepancy, one technical/fundamental and one organizational/cultural. The technical challenge is that the data necessary for content-based filtering in the live performing arts is inherently less structured, more subjective, and harder to extract than it is for recorded music, film, or television. This is a difficult but not insurmountable problem to solve. The organizational challenge is that no single organization has enough data on ticket sales or audience ratings to develop a collaborative filtering approach, and to date, no one has been able to aggregate that data at the field level. The technical hurdles to such aggregation are well understood and entirely surmountable, but the cultural obstacles are significant. Nonetheless, with sufficient resources and clarity in end-user value, field-wide data aggregation projects should be able to overcome the prevailing cultural resistance.
Successful precedents for data aggregation in the performing arts are, admittedly, few and far between. Southern Methodist University’s DataArts is the best known and most successful effort to date, having at least introduced the idea of aggregating data (albeit financial data) to assess the economic health of arts organizations. DataArts hints at the potential for aggregation projects to change the way the performing arts think about data and about their own business infrastructures. However, the project has been crippled from the start by a few related factors.
If DataArts is guilty of an “original sin,” it is the effort’s reliance on laborious manual processes for data collection. This element fostered widespread resentment throughout the field, imposed an expensive and bloated infrastructure on DataArts itself, and severely slowed its expansion throughout the country. Instead of requiring overworked arts administrators to spend multiple days each year on tedious data entry, DataArts should have relied on application programming interfaces (APIs) to facilitate largely automated cross-server communication between organizations’ existing accounting systems and its own database.
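Under a hypothetical API-based design like the one described, an organization’s accounting export could be rolled up and serialized for automated submission with almost no human effort. The schema, field names, and categories below are invented purely for illustration.

```python
# Hypothetical sketch: transform a raw ledger export into the JSON payload
# an aggregator's API might accept. All field names here are invented.
import json

def build_submission(org_id, ledger_rows):
    """Roll a ledger export up into per-category totals for submission."""
    totals = {}
    for row in ledger_rows:
        totals[row["category"]] = totals.get(row["category"], 0) + row["amount"]
    return json.dumps({"org_id": org_id, "totals": totals})

# A fabricated export from an organization's accounting system.
ledger = [
    {"category": "ticket_revenue", "amount": 120_000},
    {"category": "ticket_revenue", "amount": 45_000},
    {"category": "grants", "amount": 80_000},
]
payload = build_submission("org-123", ledger)
print(payload)
```

The point is the workflow, not the code: once the accounting system exposes its data programmatically, submission becomes a scheduled job rather than days of manual data entry.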
A second lesson from the DataArts example relates to the importance of commercial incentives to participation. Despite its offer of ostensibly valuable financial reporting tools, the reality of DataArts participation has always been more stick than carrot. Organizations participate because their funders threaten to withhold grant support if they don’t. Participation is therefore reluctant and at the minimal level to ensure compliance.
The good news is that the field can learn from the DataArts experience and avoid the same pitfalls with future data aggregation projects, such as those required to facilitate an effective recommendation engine. For example, it could leverage APIs built into its event ticketing software to effortlessly aggregate show listing information and (safely anonymized but individually trackable) audience purchase histories. Meanwhile, a well-designed engine should increase ticket sales and deepen audience engagement in a measurable way, creating clear economic incentives for organizations to opt-in.
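The “safely anonymized but individually trackable” requirement can be met with a salted hash of a patron identifier: the same patron maps to the same pseudonymous key at every participating organization, while the underlying email is never stored. This is a simplified sketch under invented data; a real deployment would need careful key management (a keyed HMAC rather than simple concatenation, among other things).

```python
# Sketch of pseudonymous cross-organization tracking via a salted hash.
# The salt, emails, and shows are invented; real systems should use an
# HMAC with a properly managed secret, not bare concatenation.
import hashlib

SALT = b"field-wide-secret"  # hypothetical shared secret

def anonymize(email):
    """Stable pseudonymous ID: same patron -> same key, email never stored."""
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()[:16]

# Two organizations report purchases keyed by the same pseudonymous ID.
org_a = [{"patron": anonymize("pat@example.com"), "show": "hamlet"}]
org_b = [{"patron": anonymize("PAT@example.com"), "show": "swan_lake"}]

history = {}
for record in org_a + org_b:
    history.setdefault(record["patron"], []).append(record["show"])

print(history)  # one patron, two shows, no email stored anywhere
```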
For now, let us imagine a comprehensive repository of upcoming productions with geo-specific targeting, much like Bandsintown does for concerts. By analyzing an audience member’s behavior across all participating arts organizations, such a service could provide individually-tailored, informed recommendations as an “honest broker” without any bias toward a particular organization’s shows. Not only would such an approach foster trust among audiences, but it would level the playing field among organizations by reducing the distorting effect of unequal marketing budgets. Such a system could additionally cross-reference data for theater and dance with data sets from other media. So, for example: “If you like Band X and Movie Y, you might like Play Z, which is coming to a theater near you in the coming weeks…. Buy tickets here.” Bandsintown already does this as well, pre-populating a list of “tracked artists” by pulling from a user’s Apple Music and Spotify libraries, Facebook likes, and Twitter accounts to provide local concert recommendations.
Recommender systems provide a curated experience personalized for each individual user — a largely consumer-facing task. But curation also happens on a more back-end and internal basis. This is especially true when it comes to Artists and Repertoire (A&R) in the music industry. Traditionally, the role of A&R — especially as far as scouting new talent is concerned — has fallen on label representatives with good ears and an intuitive sense of what will be a good economic and artistic investment. Before streaming became the norm, the task was at least moderately manageable, but since the floodgates opened, A&R reps have been inundated with new artists. Fortunately, that’s provided technology with the perfect opportunity to help stem the tide. Singapore-based startup Musiio is trying to help A&R departments with the initial filtering process, but instead of using trailing indicators like streaming analytics and social media activity, Musiio is using artificial intelligence (AI) to analyze the music itself. It’s similar to the Pandora model in terms of its content-based, taxonomic approach, but it features a critical boost from AI and machine learning, which allows the software to analyze the 30,000-odd new songs released every single day and report back to labels’ A&R departments which of those actually warrant a listen.
Compared to human-defined algorithms like Pandora’s, artificial intelligence is characterized by flexibility. An intelligent algorithm will learn, adapt, improve, and eventually optimize its own accuracy, making it much more effective for curation, especially with respect to “subjective” content that is resistant to strict categorization (or where the essential taxonomy is non-obvious to human analysts). In the visual arts realm, the Russian startup Connoisseur is using AI to analyze artwork with an approach similar to Musiio’s. The company’s founders describe the platform as a “visual search and recommendation engine for online art marketplaces,” which allows for four curation solutions: a database search of visual artwork to enhance a customer’s experience, customized recommendations to “offer customers art they would buy,” trend and demand analysis to “help customers make purchasing decisions,” and attribution and valuation to “make manual operations more efficient.” It’s like Spotify, Amazon, and your own personal appraiser for fine art.
AI-powered curation in the performing arts faces similar obstacles as those discussed in the section on recommendation systems. The content of a dance or theater performance is arguably more subjective — or at least more complicated from a data output perspective — than, say, a recorded piece of music or a movie that is frozen as a permanent, static artifact. Even if an actress is working with the same exact script or a dancer is working with the same exact choreography, each performance is unique in subtle but essential ways. As such, applying the power of AI to curation for the performing arts will require considerable imagination to translate highly complex, quasi-subjective data into a machine-digestible format. This is true whether the field adopts a Musiio-like strategy on top of new field-wide taxonomies and data aggregation or a Connoisseur-style approach that leverages natural language processing and other techniques to interpret critical reviews, audience feedback, and the content itself.
Assume for the moment that the field is able to agree on the essential taxonomy and commit to the requisite technical infrastructure for an AI filtering approach. In other words, imagine that (1) the performing arts industry is cataloguing its work according to detailed metadata on subject matter, writing style, production aesthetic, performer identities, length of show, and any other taxonomic variable you can imagine, and (2) a comprehensive set of such data exists and is being updated in real-time from across the sector. With such a foundation, implementing a part-Musiio, part-Pandora approach would be similar to what Netflix did when it decided to produce House of Cards. This was the first widely publicized example of AI curation in action. The company used data from its recommender system not only to inform its content acquisition but also to direct its original content creation. Similarly, AI filtering coupled with taxonomic analysis could be useful to a theater festival assessing a big stack of submissions, a performing arts center putting together a season, or even an organization seeking a deliberate strategy for expanding or changing its audience demographics. There’s also the possibility of utilizing the AI subfield of natural language processing — essentially, the analysis of human (natural) languages by computers — which might also work best in conjunction with a structured algorithm for a recommender system. This is essentially how Spotify operates, and with the performing arts, it could be used to generate averaged scores of performances or show recommendations based on aggregated language data compiled from multiple show reviews.
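The review-aggregation idea at the end of this paragraph can be illustrated with a toy lexicon-based sentiment scorer. The word lists and reviews below are invented, and a real system would use a trained language model rather than word counting, but the pipeline shape (score each review, then aggregate per show) is the same.

```python
# Toy illustration of NLP-style review aggregation: score each review
# against a tiny sentiment lexicon, then average across reviews.
# The lexicon and the reviews are invented for illustration.
POSITIVE = {"riveting", "luminous", "masterful", "moving"}
NEGATIVE = {"tedious", "flat", "overlong", "muddled"}

def review_score(text):
    """Score one review in [-1, 1] by counting sentiment-bearing words."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

def aggregate(reviews):
    """Average the per-review scores into one score for the show."""
    return sum(review_score(r) for r in reviews) / len(reviews)

reviews = [
    "a riveting and moving evening of theater",
    "masterful staging but overlong in the second act",
]
print(round(aggregate(reviews), 2))
```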
The nearly limitless potential of AI even extends to content creation — a notion that tends to provoke skepticism, terror, or both from the arts community. Deep-pocketed tech behemoths are already experimenting with AI-generated visual art, as evidenced by Google’s psychedelic Inceptionism project and Microsoft’s drawing bot. Even Facebook is throwing its hat into the ring by teaching AI neural networks to make images of vehicles and animals. The ripples of AI’s artmaking potential are emanating out from mass media into the fine art world as well, with an AI-generated painting selling for $432,500 at Christie’s Auction House in October 2018. Even the music industry is experimenting with applications of AI, as evidenced by Popgun using it to generate pop songs and Muzeek offering videographers a more efficient option for generating soundtracks for their videos. More immediately germane to the performing arts are AI-generated screenplays, including Sunspring, starring Silicon Valley’s Thomas Middleditch, and It’s No Game, starring none other than David Hasselhoff. Make no mistake: the scripts are weird, lying somewhere between Lynchian surrealism and Beckett-like absurdism, but that makes it all the more interesting — especially if we look at the various possibilities for adaptation to stage. The big cautionary consideration here, however, is how we use AI in the performing arts to empower playwrights, directors, choreographers, and others — not to muscle them out.
Reality in 2019 is challenging enough without having to worry about three additional forms of it. This may be why the distinctions between virtual reality (VR), augmented reality (AR), and mixed reality (MR) — all grouped under the catch-all of extended reality (XR) — can be lost in the digital noise. It’s easiest to think about VR, AR, and MR as different points along the spectrum of XR, with VR being an immersive digital environment entirely separate from reality; AR a digitally-enhanced environment created in real time by overlaying virtual objects on reality; and MR existing somewhere in between, with virtual objects being incorporated — not just overlaid — into reality (often by being superimposed onto real objects that a user can touch and manipulate). As such, VR has the benefit of removing place- and reality-based restrictions on experience, while AR and MR largely serve the function of enhancing the experience of what’s already there. Visual arts, the film industry, and gaming are all developing ways to capitalize on XR’s potential to enhance those forms of media, and there are especially exciting opportunities for the performing arts to adapt their own uses of VR, AR, and MR as a result.
While VR is perhaps the most well-known of the three, in the long-run, it’s also probably the least adaptable to mainstream creative use — precisely because it’s both completely immersive and also an individualized experience. It can be effective as a narrative medium, especially in gaming, where users can put on their headsets and completely lose themselves in a fictional world, but without a significant anchor in reality and communal experience (the latter of which is traditionally fundamental to an audience’s experience of a live performance), it is most likely destined to occupy a niche — more applicable to performance art than the broader performing arts. In the past couple of years, the internet has been abuzz with articles touting front-of-house VR use cases in theater and ballet productions, but this is misleading. What these articles are really describing — with a few exceptions — is 360-degree video (360), which is more or less just video filmed with a camera (or array of cameras) that captures images from all around it simultaneously. In other words, no new “reality” is created, just a more immersive video experience. To this end, 360 applications may be limited in the long run to promotional purposes — i.e., driving social engagement for a play with a promotional 360 video that “takes you on stage” with the actors and actresses (a use case for which it can be quite effective). As for actual VR in the performing arts, the Shubert Organization is using the technology to map interiors of Broadway theaters to improve the process of moving new shows in and figuring out set schematics. While this offers an interesting example of a back-of-house use case, the Shubert Organization finds itself largely on its own in this regard.
AR will likely offer the most promising opportunities for the performing arts, because it extends and enhances a real physical environment with virtual accoutrements without replacing it completely. Importantly, it also doesn’t isolate the user as much as VR does. This is partly because AR hardware (which could amount to as little as a smartphone, tablet, or AR glasses) is considerably less restrictive than VR, which, by contrast, requires a specialized headset that inevitably removes the user’s sense of presence from any communal audience space. In 2016, AR was ubiquitous and a lot of people didn’t even realize it. We can credit the Pokemon Go app, which requires nothing more than a smartphone, for bringing AR technology to the global mainstream. In the same way that Niantic overlaid virtual Pokemon onto the real world, the performing arts can make use of similar overlays to enhance set design, sprinkle in virtual effects, and even make performances more accessible.
In June, Fast Company profiled the startup ARShow, which “equip[s] each audience member with an AR headset and integrat[es] the operating system into the theater’s sound system, lighting, and projector. The live actors on stage use monitors to help them seamlessly interact with the show’s AR components.” While there is a specialized AR headset in this example, the communal and real-time performative aspects remain. Plus, the added ability for actors to interact with virtual objects introduces MR into the equation as well. One can imagine how dramatic incorporating this technology into a production of Alice in Wonderland or The Wizard of Oz would be. In 2016, The Builders Association did just that, inviting audience members to “hold up their phones to the stage and see special effects — like the tornado that hits Dorothy’s house — superimposed over the live action.” Meanwhile, consider the potential for entirely new works crafted by artists with this resource at their disposal.
As exciting as all of this may sound, there are reasons for caution when considering the Pokemon Go approach. One is distraction: How do you ensure that audience members are actually paying attention and not distracted by their phone notifications? Another is purpose: Don’t we go to the theater in part to get away from our phones, and does introducing a technological intermediary compromise the immediacy of the performance? Finally, there’s the issue of practicality: Won’t audience members quickly grow tired of holding up their phones for the duration of the show? As AR technology advances, a solution to all of these questions might come in the form of AR glasses, which would be less cumbersome and isolating than headsets and also more manageable than trying to navigate the smartphone dilemma.
At Cornell, two Connective Media Master’s candidates are using AR glasses to caption conversation in real-time, with the text of whatever is being said running along the bottom of each lens. Their objective is to make social situations more accessible for deaf and hard of hearing individuals, and one of those social situations could very well be a theater performance. That accessibility could also extend to foreign languages, whether that means presenting a foreign play in the language in which it was written, or offering any number of subtitle options for international attendees — all by simply wearing AR glasses. Enabling and activating this use of AR for the performing arts will take some financial resources, but the technology already exists. Cultural impediments and traditionalist friction may be less of a concern here than with some of the other opportunities explored in this paper, thanks to the obvious values-based inclusivity argument. AR for accessibility might therefore be a stepping stone to introducing immersive environments to the mainstream in performing arts, opening the door to more applications of XR as a whole.
The term “brain-computer interface” (BCI) sounds alarmingly like the birth of the cyborg. For the time being, the reality is more modest in scope. BCIs offer hardware-based methods of collecting much of the same — if more cognitively enriched — behavioral data that algorithms and artificial intelligence do. The BCI hardware that extracts this information from an individual ranges from invasive to non-invasive, but the electroencephalogram (EEG), which uses electrodes to record brainwaves, is often the measurement instrument of choice. In terms of their applications to the performing arts, true BCIs are likely to occupy a niche space comparable to that of VR, because they are so hardware dependent. The company Oscillations, for example, has developed a VR headset with an EEG integrated into it and an API programmed to respond to the subject’s brainwaves. Here, we have XR dovetailing with BCI technology and computer programming to adjust a user’s experience in real-time according to the cognitive data inputs of her brainwaves. In other words, the content of the VR experience can change in response to the viewer’s cognitive or emotional reaction. The potential for creative experimentation with such a tool is boundless.
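The feedback loop described here, in which cognitive input adjusts the content in real time, can be caricatured in a few lines. The engagement readings and the adjustment rule below are entirely invented; they stand in for whatever processed signal an EEG pipeline like the one Oscillations describes might actually produce.

```python
# Speculative sketch of an EEG-driven content loop: an engagement score
# (assumed to arrive in [0, 1] from some brainwave pipeline) nudges the
# intensity of a VR scene. Thresholds and readings are invented.
def adjust_scene(engagement, current_intensity):
    """Nudge scene intensity toward the viewer's measured engagement."""
    if engagement < 0.3:          # viewer disengaging: raise the stakes
        return min(current_intensity + 0.1, 1.0)
    if engagement > 0.8:          # viewer overwhelmed: ease off
        return max(current_intensity - 0.1, 0.0)
    return current_intensity      # comfortable range: hold steady

# Simulated stream of engagement readings.
intensity = 0.5
for reading in [0.2, 0.25, 0.5, 0.9]:
    intensity = adjust_scene(reading, intensity)
print(round(intensity, 1))
```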
BCIs also have promising implications for accessibility and inclusion. A BCI could offer someone with physical limitations the opportunity to perform vicariously through other performers, whether human or non-human. As doctoral student Andrés Aparicio envisions it, the active, but physically limited, user could communicate via BCI with human or robotic “enactors” that would serve as performance avatars for that user, offering her the ability to perform what she otherwise could not.
At the non-invasive end of the spectrum is a sort of pseudo-BCI that records unconscious cognitive activity without the need for direct brain-computer contact. Our portfolio company Robbie AI, for instance, uses AI to record and analyze facial expressions as a means of interpreting emotional experience. This sort of approach could also be useful for interactive content based on a user’s emotional experience, but again, how that might translate to a traditional stage is difficult to say.
eSports might be a controversial category to associate with the performing arts, but it would be a mistake to ignore the exploding industry and its performative potential. When we think of sports, heavy physical exertion and competition usually come to mind. Gamers, on the other hand, are burdened by stereotypes of laziness and lack of ambition. eSports upends both of those preconceptions by bringing video games into the realm of public competition and, yes, athleticism (if by a different measure). The question is, how exactly do we categorize this huge entertainment medium? Sure, it’s gaming, but ESPN’s extensive coverage demonstrates that it’s already being co-opted by the sports world. At the same time, aspects of the visual and performing arts are vital to the success it has already had — from players having to put on a show in order to attract a following to the game designs and narratives themselves. As a result, there are really two (not mutually exclusive) ways to approach the greenfield industry of eSports. One is from an application point of view that preserves the boundary between gaming and traditional notions of the performing arts. The other is a bit more liberal and involves extending the boundary of the performing arts to include eSports.
First: application. In December, The Verge reported on Epic Games’ copyright battle with three separate performers: rapper 2 Milly, Fresh Prince of Bel-Air actor Alfonso Ribeiro, and viral internet star Russell Horning. The company’s wildly popular game Fortnite was using their signature dance moves (the Milly Rock, the Carlton Dance, and the Floss, respectively) as uncredited and unlicensed “emotes,” or movements players can purchase or be rewarded with for their video game avatars to “express themselves” on the battlefield. There is currently a legal gray area regarding whether or not a specific dance move is eligible for copyright protection since it doesn’t necessarily rise to the level of choreography. Enter a huge opportunity for the performing arts to establish a licensing infrastructure and marketplace for eSports to incorporate more performance and expressiveness into their game designs with appropriate recognition and compensation for the performers being borrowed from. Another exciting opportunity in the way of application involves “live” concerts. In early February, EDM producer Marshmello made history by performing a live concert within Fortnite, offering a glimpse into the future of interactive media and virtual performance venues. Proving this wasn’t just a gimmick, Marshmello followed up his virtual performance by debuting at No. 1 on Billboard’s Top Dance/Electronic Albums two weeks later with Marshmello: Fortnite Extended Set. Whether it’s licensing, performance royalties, or even ticket and merchandise sales, the digital box office for which Fortnite just laid the groundwork is waiting to be claimed.
If we zoom out from the video games themselves and focus on the competitors and the sociocultural phenomenon of eSports, there’s a case to be made for it being a kind of performing art. And it’s a popular one: “The League [of Legends] World Championship, or Worlds, clocked in at 99.6 million viewers in 2018 for the final series, according to stats provided directly by Riot Games, League’s developer. That means the difference between American football’s biggest event and League’s biggest event in 2018 was a mere 3.4 million people — 103 million for the Super Bowl and 99.6 million for League’s Worlds.” Twitch, Amazon’s video game live streaming service, and Let’s Play videos, user-uploaded YouTube playthroughs of video games, drive the point home a bit further: Viewers are logging billions of hours watching other people play video games, and it’s not just because they have nothing better to do. As eSports develops and the industry enters the Western mainstream (as it stands, it’s far more a part of Asia’s consciousness), the spectacle and the performance of eSports competition will only become more important for its continued success. The question for the performing arts is whether they’ll be open enough to subsume some aspects of eSports competition, influence its trajectory, and, in turn, capitalize on its growth as an entirely new medium for communal entertainment.
When streaming disrupted the music industry, it did so by changing the prevailing consumption and value creation model. However, it did little to fix organizational and field-level problems in royalty accounting and professional hiring, or to loosen the major labels’ grip on the recording industry. Efforts to chip away at those persistent challenges are just beginning to break through. Companies like Kobalt, AWAL, and Bquate are threatening Universal, Sony, and Warner’s market share and offering artists and songwriters more control over their own work, while our portfolio company Jammcard has introduced a professional networking platform for musicians (think music’s exclusive LinkedIn) to book gigs with A-list artists. It has been a lot harder, or at least taken a bit more time, for organizational disruptors like this to gain traction, even though the need has arguably been there for much longer than the need for a Spotify or an Apple Music.
The music industry is fortunate in the sense that it has always been bound to technological innovations. Theater and dance, on the other hand, have less codependent relationships with the digital world. While this is part of what gives the performing arts their immediate, human-centric appeal, it also means that performing arts industries still rely on largely analog processes and haven’t fully embraced certain basic capabilities of technology that have been widely available for decades. The field could still stand to learn a thing or two from the music industry’s organizational disruptors.
In 2019, there is still no widely-used professional networking platform for theater or dance. Attempts have been made. The earliest was likely in 1999, when Fractured Atlas launched TheatreWarehouse.com. The site aimed to provide a one-stop shop for industry resources and connections, including artist résumés, audition notices, rehearsal schedules, and Internet 1.0 era networking tools. The database was never embraced by the field, however, and it was shuttered in 2003. Two years later, the Chicago Department of Cultural Affairs launched Chicago Artists Resource, offering similar services for the Chicago arts community. The site remains live but has never managed to expand its geographical footprint, despite several efforts over the past decade. Such failures should not be taken as an indictment of the concept itself; if anything, the repeated attempts are evidence that the value proposition is clear and persuasive. Whatever cultural resistance or other obstacles the concept might face, the Jammcard example demonstrates that success should be possible with the right implementation. One clue may lie in the laser-like focus of Jammcard’s model. Its scope is narrow and its user base deliberately limited, in contrast to the sprawling, all-inclusive efforts described above. Jammcard is also a VC-funded commercial startup, which creates different incentives and provides a different magnitude of resources for development and evangelism.
Better integration of existing technology could streamline the production process as well. Our portfolio company ProductionPro, for example, has developed a smartphone and tablet app that allows directors and stage managers to manage, archive, and track scripts, scores, designs, props, blocking, choreography, and more, affording them unprecedented organization and efficiency, scene by scene, version by version. Integrating automated and remote-controlled prop and light movement into a production management tool like this could further alleviate production and rehearsal stresses.
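The core idea of a scene-indexed, versioned asset store can be sketched in a few lines. This is a hypothetical illustration of the general approach, not ProductionPro’s actual data model; all class and field names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    kind: str      # e.g. "script", "score", "design", "prop", "blocking"
    name: str
    version: int
    payload: str   # file path or content reference

@dataclass
class Scene:
    number: int
    # Maps asset name -> full version history (old versions stay archived)
    assets: dict = field(default_factory=dict)

    def add_version(self, asset: Asset) -> None:
        """Append a new version of an asset without discarding old ones."""
        self.assets.setdefault(asset.name, []).append(asset)

    def latest(self, name: str) -> Asset:
        """Return the most recent version of a named asset."""
        return max(self.assets[name], key=lambda a: a.version)

# Usage: track two versions of the blocking notes for Scene 3.
scene = Scene(number=3)
scene.add_version(Asset("blocking", "act1-sc3-blocking", 1, "v1.pdf"))
scene.add_version(Asset("blocking", "act1-sc3-blocking", 2, "v2.pdf"))
print(scene.latest("act1-sc3-blocking").version)  # 2
```

Keeping every version addressable per scene is what makes “scene by scene, version by version” organization possible; a real tool would add sync, permissions, and media handling on top of a structure like this.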
Another aspect of 21st-century technology that the music industry has used to its advantage is the power of online social networks. Music news, music PR, integration with digital service providers (DSPs), and Spotify’s positioning of music as digital social glue have made listening an everyday, ever-present part of people’s lives. According to a 2017 Nielsen report, Americans — especially the millennial demographic — spent over 32 hours a week listening to music, up from 26.6 hours in 2016 and 23.5 in 2015. In other words, thanks to a successful integration strategy, not only are people listening to music more than ever before, but the audience pool is that much larger as well. The same is true of photography, with Instagram making everyday digital gallerists out of amateur picture takers. It will not be easy to replicate that kind of success in the performing arts, given the nature of the beast, but the principle still applies: better engage existing audiences and expand the audience pool overall.
Imbuing theater and dance with the same sort of online social presence that music and photography enjoy is a difficult wall to scale, but it’s not insurmountable. 360 videos, for example, are already trickling onto YouTube, Instagram, and Facebook from Broadway and other major productions. The more prominent these immersive videos — and even 360 photos — become, the more they’ll be used, which is fortunate, because the performing arts can capitalize on this sort of immersive media through existing social media platforms. Looking further into the future, Redwood City-based AltspaceVR has developed a sort of virtual theater that preserves the communal experience in the digital world: “Now you can hang out with friends and the AltspaceVR community inside a 360 video with our newest activity: 360 Theater. Up to 10 people can watch 360 videos together and choose from a curated list of the best videos on YouTube. These videos serve as a fun backdrop for social gatherings and meeting new friends.” If, instead of a theater for YouTube videos, something like AltspaceVR’s platform could be used for live theater and dance productions, the appeal for younger, more digitally dependent audiences could explode — especially if it were positioned as an online social network in its own right.
The answer doesn’t lie in technology alone, however. Sometimes attracting new audiences means changing what’s on the stage and not just how it’s marketed. Here Broadway offers a compelling example. Ticket sales for the 2017-2018 season hit an all-time record of $1.7 billion, up 17% over the prior year. Early indications suggest that the 2018-2019 season will be similarly impressive. Much of this growth can be attributed to a slew of new shows created by and for young people in recent years. The Book of Mormon arguably launched the trend, but it only accelerated from there, with blockbusters like Hamilton, Dear Evan Hansen, Mean Girls, and now Be More Chill attracting hordes of newcomers to Broadway’s 100-year-old theaters. With a few exceptions, shows like these have tended to encourage the social sharing practices of their young audiences, rather than cracking down in the name of decorum or copyright enforcement. As a result, huge online communities of teen and even preteen Broadway fans have taken root. They follow shows like rock band groupies and track the careers of ordinary working actors from role to role. Taken as a whole, this trend may well ensure the continued vitality of the industry for generations to come. Importantly, none of this constitutes pandering; these productions have generally been met with critical acclaim.
For those intrepid experimenters who would seek to bring the performing arts — dragging them kicking and screaming if necessary — into the modern era, the question remains: where to begin?
There aren’t always bright lines between the areas of opportunity discussed in this report. Some overlap in places. Some could only be addressed sequentially. With that in mind, we will offer three suggested “next steps” which might open doors for further innovation in the future. One of these requires field-wide collaboration, one aims to catalyze the imagination of artists, and the third offers a challenge to individual institutions. Implementation details are beyond the scope of this report, so we will not offer detailed plans or prescriptions. Rather, these are high-level suggestions informed by our intuition for what is possible technically, what is practical culturally, and where the greatest potential for impact lies.
For several of the areas discussed in this paper, meaningful progress depends on a solid foundation of cross-organization data on audience behavior. The failure of the field to develop this shared infrastructure is a critical bottleneck to innovation. We recommend that a consortium of organizations come together to develop a proof-of-concept implementation. At a minimum, this would require agreeing on the scope of data to be collected, creating a taxonomy for the data, developing basic automated systems for capturing the data, and designing provisions for protecting audience members’ privacy and organizations’ confidential information.
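As a concrete starting point, the taxonomy and privacy provisions described above could be prototyped with a minimal shared record format. The sketch below is purely illustrative and assumes a consortium-agreed vocabulary; every field name is an invention for this example, and a salted one-way hash stands in for what would need to be a much more rigorous anonymization scheme in practice.

```python
import hashlib
from dataclasses import dataclass, asdict

def anonymize(email: str, salt: str) -> str:
    """One-way hash of a patron identifier, so organizations can link
    repeat attendance across venues without exchanging personal data.
    (A real system would need stronger privacy protections than this.)"""
    return hashlib.sha256((salt + email.lower()).encode()).hexdigest()

@dataclass
class AttendanceEvent:
    patron_id: str    # salted hash, never raw personally identifying info
    org_id: str       # the reporting organization
    event_date: str   # ISO 8601 date
    discipline: str   # controlled vocabulary: "theater", "dance", ...
    ticket_type: str  # controlled vocabulary: "single", "subscription", ...

# One record as it might be contributed to the shared dataset.
record = AttendanceEvent(
    patron_id=anonymize("patron@example.com", salt="consortium-2019"),
    org_id="org-042",
    event_date="2019-03-15",
    discipline="theater",
    ticket_type="single",
)
print(asdict(record)["discipline"])  # theater
```

Even a toy schema like this forces the consortium to answer the hard questions early: which fields are mandatory, which vocabularies are controlled, and how patron identity is protected before any data leaves an organization.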
Augmented reality offers extraordinary opportunities for innovation at the artistic level. We recommend that a laboratory be created to give theater and other performing artists the technical resources and artistic freedom to map the boundaries of the possible in this new medium. One model to consider for inspiration is the Sundance Institute’s New Frontier program, which has nurtured hundreds of emerging experimental filmmakers and arguably deserves credit for birthing the VR renaissance.
Despite many years of hand-wringing, and with only a handful of noteworthy exceptions, the noncommercial performing arts have made little headway in attracting and retaining younger audiences. Yet, there is ample evidence to suggest that millennials and younger generations (i) want to spend their money on experiences more than products, (ii) respond deeply to authenticity but resent advertising, and (iii) are eager and willing to turn the things they love into overnight viral sensations. The opportunity is plain; it takes only the will to embrace it. The first step is to follow Broadway’s lead in providing platforms for young voices to create work that speaks to their problems, interests, and tastes. The next step is to adopt a “shareable by default” posture toward the work on stage (and in rehearsals, and backstage, and in the administrative offices, etc.). Videos, photos, music, and other shareable media should be posted early and often — not to market, but to invite audiences and fans to meaningfully participate in the creation and production process.
 It must be acknowledged that this element could create obstacles to participation by larger institutions.
 For further reading on BCIs, accessibility, and the arts: https://www.tandfonline.com/doi/abs/10.1080/2326263X.2015.1100366?journalCode=tbci20; https://www.tandfonline.com/doi/pdf/10.1080/2326263X.2015.1100514; https://www.researchgate.net/publication/260583301_Games_gameplay_and_BCI_The_state_of_the_art
 This could begin as a time- and scope-limited project rather than a permanent program.