1 of 18

Open and Reproducible Research with community-driven Research Software metrics

Yo Yehudi

@yoyehudi

Mateusz Kuzak

@matkuzak

Emmy Tsang

@emmy_ft

2 of 18

How is open source software used in research and science?

3 of 18

Open source software in research

The first image of a black hole!

The Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., Alef, W., Asada, K., Azulay, R., ...Yamaguchi, P. (2019). First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. Astrophys. J. Lett., 875(1), L4. doi: 10.3847/2041-8213/ab0e85 (Licence: CC BY 3.0)

4 of 18

Open source software in research

https://setiathome.berkeley.edu/

5 of 18

✋✋✋

How many of you have worked in science or research before?

6 of 18

Metrics in research

7 of 18

Metrics in research

In addition to all of the metrics you might see in other open source organisations or companies, research comes with its own pressures.

8 of 18

A traditional output: The research paper

  • Researchers typically write up their results in peer-reviewed papers, published in research journals.
  • This is a format well suited to writing up lab experiments (and terrible for summing up or describing software)!

9 of 18

A traditional output: The research paper

⛔️ Problem ⛔️

      • Software doesn’t look much like a paper, but we end up shoehorning it into papers anyway.
      • What happens after we’ve updated our software? More papers about software updates?

10 of 18

A traditional output: The research paper

⛔️ Problem ⛔️

      • It’s possible to publish a paper about software without ever showing the software itself:

“For this review, unfortunately I did not have access to a computer with the correct operating system, so I assume that the application functions as described”

11 of 18

A traditional metric: Citations

Citation count: the number of times a given paper has been cited by other papers.

⛔️ Problem ⛔️

People often don’t think to cite software, don’t remember to, or don’t know how. This leaves research software engineers worse off, because their outputs are non-traditional.
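One concrete way to make software easier to cite is the Citation File Format (CITATION.cff): plain metadata that lives next to the code. The sketch below is not from the original slides; it writes a minimal CITATION.cff with Python and PyYAML, and the project name, version, DOI, and author are all placeholder values.

    # A minimal CITATION.cff sketch, generated with PyYAML so the example stays in Python.
    # Every value here (title, version, DOI, author) is a placeholder, not a real record.
    import yaml  # pip install pyyaml

    citation = {
        "cff-version": "1.2.0",
        "message": "If you use this software, please cite it using this metadata.",
        "title": "Example Research Tool",    # hypothetical project name
        "version": "1.4.0",                  # placeholder version
        "doi": "10.5281/zenodo.0000000",     # placeholder DOI
        "date-released": "2020-01-31",
        "authors": [
            {"family-names": "Researcher", "given-names": "Ada"},  # placeholder author
        ],
    }

    with open("CITATION.cff", "w") as fh:
        yaml.safe_dump(citation, fh, sort_keys=False)

Tools such as cffconvert (from the Netherlands eScience Center) can turn that metadata into BibTeX, lowering the barrier to citing the software rather than only a paper about it.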

12 of 18

Other traditional metrics

Journal / conference prestige

# of papers

⛔️ Problem ⛔️

Can these research paper-based metrics accurately reflect the quality of the research software?

13 of 18

Another metric: Research Excellence!

In the UK, the Research Excellence Framework asserts:

“In assessing outputs, the sub-panels will look for evidence of originality, significance and rigour”

So, what does “originality, significance and rigour” mean in software?

14 of 18

CHAOSS + research metrics: how do we measure?

Many, if not most, of the CHAOSS metrics clearly apply to research, but some are company- or organisation-specific and don’t account for grant funding models (as opposed to corporate sponsorship) or the pressures of the research environment.

[Diagram: overlapping circles labelled CHAOSS and Research, with the “sweet spot?” in the overlap]
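As a rough illustration of what “measuring” could look like in practice (this sketch is not from the original slides), the snippet below pulls a first-page contributor count, one ingredient of a CHAOSS-style contributors metric, from the GitHub REST API. The owner and repository names are made up, and real use would need authentication and pagination.

    # Rough sketch: count contributors to a repository via the GitHub REST API,
    # as one ingredient of a CHAOSS-style "contributors" measurement.
    # OWNER/REPO are placeholders; unauthenticated requests are rate-limited,
    # and repositories with more than 100 contributors need pagination.
    import requests

    OWNER, REPO = "example-org", "example-research-tool"  # hypothetical repository

    url = f"https://api.github.com/repos/{OWNER}/{REPO}/contributors"
    resp = requests.get(url, params={"per_page": 100, "anon": "true"})
    resp.raise_for_status()

    contributors = resp.json()
    print(f"{REPO}: {len(contributors)} contributors (first page only)")

A raw count like this says nothing about grant cycles, paper deadlines, or credit for research software engineers, which is exactly the gap between the two circles above.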

15 of 18

Discussion

In groups of 2-4, discuss one or both of:

    • Which CHAOSS metrics could be useful for research software?
    • Which other metrics might be useful, even if not CHAOSS-ey?

Go to http://sli.do (enter code chaosscon2020) to add your ideas and thoughts in real time!

16 of 18

Results

  • To follow up or view results, visit http://bit.ly/CHAOSScon-2020-research

Go to http://sli.do (enter code chaosscon2020) to add your ideas and thoughts in real time!

17 of 18

Thank You

View sli.do results & get involved by going to http://bit.ly/CHAOSScon-2020-research

Go to http://sli.do (enter code chaosscon2020) to add your ideas and thoughts in real time!

18 of 18

Yo Yehudi, Software Developer, University of Cambridge, & EngD student, University of Manchester

Twitter: @yoyehudi | GitHub: @yochannah | yy406@cam.ac.uk

Mateusz Kuzak, Research Software Community Manager, the Netherlands eScience Center

Twitter: @matkuzak | GitHub: @mkuzak | m.kuzak@esciencecenter.nl

Emmy Tsang, Innovation Community Manager @ eLife

Twitter: @emmy_ft / @eLifeInnovation | GitHub: @emmyft | e.tsang@elifesciences.org