1 of 43

Hacking groups

BioHackathon 2024

2 of 43

Group topic

Participants

  • Toshiaki, …

Description

Use this slide as a template

← Put your names here to participate in the group

Use bold font for the group lead

← Describe objectives, requirements etc.

(Group leader is responsible for this and presentation)

We will start presentations and reviews of the hacking projects at 11:00

3 of 43

Suggested target domains

  • [R1] Multi-omics analysis on human genotype to phenotype that includes genomic, transcriptomic, epigenomic, proteomic, protein structures, and biochemical data.
  • [R2] Automated data analysis of other organisms including phylogenetic compositions, gene annotations, pathways, and growth conditions.
  • [R3] Data-driven interdisciplinary studies in public health, environment, agriculture, food, energy, and other fields utilizing knowledge graphs.
  • [R4] Facilitating knowledge discovery and biological analysis from databases and literature, especially utilizing large language models.

4 of 43

R1

Human genotype to phenotype

5 of 43

Genome variation

Participants

  • Yosuke, Toshiaki, Maxat, Toyoyuki, Yuki, Nobutaka, Hirokazu, Tsuyoshi, Dorothy, Pitiporn (Sam), Mayumi, Núria, Shuichi, Kentaro (Yamaken), David, Takatomo, Hiroyuki Mishima, Tazro

Motivation

The analysis of the human genome has been flooded with data due to the widespread use of sequencers. Simple variations such as SNVs and indels are being integrated into TogoVar, but the representation of structural variation has not yet been standardized. Meanwhile, pangenome graphs have emerged as a powerful tool for integrating multiple haplotypes. During this hackathon, we would like to discuss how to handle these heterogeneous data in a unified manner.

Description (Write down your proposal here.)

Join the #genome-variation channel on Slack.

6 of 43

Pangenome Graphs Database (PGD)

Participants

  • Toshiaki, Yosuke, Maxat, Robert (remotely), Toyoyuki

Description

  • Collect existing pangenomes into one repository
    • Do a survey on the papers published so far
      • Human: HPRC, Chinese, Arab, JaSaPaGe,
      • Primate:
      • Plants:
    • Grep existing BioProjects with the term /pangenome/
      • => 609 entries
    • GitHub repository for the draft version:
  • Define metadata in JSON (and turn it into JSON-LD by adding @context later)
    • Target (population) - do we also include non-human pangraphs?
    • (Number of) samples (haplotypes?)
      • Links to raw data and assembled haplotype sequences (e.g., SRA)
    • Availability
      • Download link - do we copy the graph data into our database? Convert to GFA?
    • License
      • Need to contact the authors for data retrieval?
      • Requirement of IRB (institutional review board) approval
    • Method
      • Workflow and tools used to create the graph - link to the repository?
    • Authors
      • Contact information
    • Reference
      • Published paper on the graph
    • Version
      • Published date, Updated date and revisions
  • A Website with a SPARQL endpoint
    • pgd
  • Analysis environment
    • Should be replicated in cloud environments and on-premise systems
  • Submit the database to a journal (at least to BioHackrXiv)
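The metadata fields listed above could be drafted as a single JSON record per pangenome. This is a hypothetical sketch (every field name and value is illustrative, not an agreed schema), shown in Python so that an @context can be attached later to produce JSON-LD:

```python
import json

# Hypothetical draft of one PGD metadata record; all field names and
# values are illustrative placeholders, not the agreed schema.
record = {
    "id": "pgd:example-pangenome",
    "target": {"population": "human", "non_human": False},
    "samples": {"count": 47, "haplotypes": 94, "raw_data": ["SRA accession(s)"]},
    "availability": {"download": "https://example.org/graph.gfa", "format": "GFA"},
    "license": "CC0",
    "method": {"workflow": "https://example.org/workflow-repo"},
    "authors": [{"name": "Jane Doe", "contact": "jane@example.org"}],
    "reference": "https://doi.org/10.xxxx/example",
    "version": {"published": "2024-08-25", "revision": 1},
}

# Turning it into JSON-LD later is then just a matter of adding an @context.
jsonld = {"@context": {"@vocab": "https://example.org/pgd/"}, **record}
print(json.dumps(jsonld, indent=2).splitlines()[0])
```

Keeping the record plain JSON first, and deferring the @context, matches the plan above of defining metadata in JSON and upgrading it to JSON-LD later.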

7 of 43

Integrating facial analysis into PubCaseFinder

Slack Channel: #pubcasefinder_gestaltmatcher

Participants

  • Tzung-Chen Hsieh
  • Hiroyuki Mishima
  • Toyofumi Fujiwara (remote)
  • Marlon Aldair Arciniega Sanchez
  • Atsuko Yamaguchi (interested)
  • Eisuke Dohi (interested)

Description

  • PubCaseFinder (https://pubcasefinder.dbcls.jp/): a framework for searching for disorders/genes/patients by Human Phenotype Ontology (HPO) analysis.
  • GestaltMatcher Database (GMDB, https://db.gestaltmatcher.org/): a database containing ~10,000 facial images of patients with rare disorders.
  • Implement functionality in PubCaseFinder to link to GMDB and provide diagnosis assistance using facial photos.
  • Input: a facial image and HPO terms
  • Output: a list of suggested disorders/genes/patients, additionally showing links to the photos in GMDB.
  • Test data: the GMDB test set and published Japanese patients from the internet.

8 of 43

HPO suggest

Participants

  • Marlon Aldair Arciniega Sanchez
  • Toyofumi Fujiwara (remote)
  • Atsuko Yamaguchi
  • Orion Buske (remote)
  • Andrea (maybe)
  • Yosuke Kawai [interested]
  • Toyoyuki Takada [interested]
  • Eisuke Dohi [interested]
  • Maxat Kulmanov [interested]
  • Surasak Sangkhathat [interested]

Description

Objectives

- Given (one or more) HPO terms, suggest one or more further HPO terms based on the log of PubCaseFinder queries

- Analyze PubCaseFinder queries for any biases or usage patterns to better understand users

- biases in which branches of HPO are searched for, JP vs. EN, IP/geography, terms entered in a particular order

Methods

- Data cleaning (deduplicate sequential queries from same user?)

- Use the date, time, or IPs related to each query

- Create a matrix of the co-occurrences between HPO terms

- Calculate conditional probabilities given the frequency of each HPO and its combinations.

- Measure performance against searches from other time periods
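The co-occurrence and conditional-probability steps above can be sketched in a few lines; the query log and HPO IDs below are made up for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy query log: each query is a set of HPO terms (IDs are illustrative).
queries = [
    {"HP:0001250", "HP:0001263"},
    {"HP:0001250", "HP:0001263", "HP:0000252"},
    {"HP:0001250", "HP:0000252"},
]

term_freq = Counter(t for q in queries for t in q)
pair_freq = Counter(frozenset(p) for q in queries for p in combinations(sorted(q), 2))

def p_cond(b, a):
    """P(b | a): fraction of queries containing a that also contain b."""
    return pair_freq[frozenset((a, b))] / term_freq[a]

def suggest(term, k=3):
    """Rank co-occurring terms for one input term by P(other | term)."""
    cands = {t for pair in pair_freq if term in pair for t in pair if t != term}
    return sorted(cands, key=lambda t: p_cond(t, term), reverse=True)[:k]

print(suggest("HP:0001250"))
```

A real version would first apply the data-cleaning step (deduplicating sequential queries from the same user) before counting, and hold out queries from another time period for evaluation.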

Note: We are tracking our activities and progress in this Google doc

9 of 43

Hidden-Rad ontology

Participants

  • Key-Sun Choi (if others join!), Andrea (interested, but not sure if able to participate), Hikaru (interested)

Description

  • The task is to build an ontology for the radiology disease-decision process, covering
    • findings, anatomical location, impression, and the checklist used to confirm the impression (disease)
    • in the setting of radiology reports based on MIMIC chest X-ray data
  • The checklist is a clue for explaining why a given disease impression was made, but it is usually not written in the radiology report.
  • A collection of data for such checklist confirmation has now been made from the patient data in MIMIC by experts.
  • https://sites.google.com/view/ntcir-18-hidden-rad/hidden-rad
    • The goal is to generate a report that includes an explanation of why a given impression was made,
      • for input from MIMIC.
    • Training data is generated by an LLM based on experts’ checklist confirmation via crowdsourcing, then corrected by experts.
  • The ontology schema consists of
    • base ontologies: FMA, RadLex, DOID, MONDO
    • properties from RadGraph
    • a specially required schema for checklists

10 of 43

Connecting healthcare data

Participants

  • Andrea (if others are interested!), Kiyoko (interested), Pitiporn (interested)
  • Evan (interested), Key-Sun (interested), Hikaru (interested), Núria (interested), Chihiro (interested), Toshiaki, Chang (interested), Surasak (interested)

→ can we connect this to clinical trials, having CT as a starting point and expanding from it?

Description

  • Make a map of what connections exist between data/ontologies for patient data and molecular (or environmental) data/ontologies
  • e.g.: From symptoms, to diagnosis, genetic basis, pathways, to chemicals and pollution and environment.
  • Can we make a chart?
  • Looks like Med2RDF is very related

11 of 43

Annotations of clinical trials

Participants

  • Thomas Liener, Jerven Bolleman, Claude Nanjo, Núria (Andrea possibly interested), … ?, Dani F(maybe can help with the model), Yuka (interested), Tore (interested)
  • Evan (interested - we worked with community to get CTO updated/modernized - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389640/ .. PubChem has linking to clinical trials and we did in PubChemRDF but we did not release it for some reason .. I am asking why), Chunlei (interested)

→ can we connect this to “connecting healthcare data”, looking at how we can expand from basic elements in a CT? Drug, indication, symptoms, phenotypes to, for instance, what pathways are clinical trials about? What environmental factors link to a disease?

Description

  • Annotating clinical trial data from https://clinicaltrials.gov/ with ontology terms
    • How deep? You have MeSH keywords, but also I/E criteria (quite convoluted), sites, protocol aspects… Is there even an ontology to annotate protocols (maybe clinops? / USDM?)
    • Is entity recognition necessary? An LLM? Using existing resources (FHIR, existing MeSH annotations for indications)?
  • Building a (simple?) semantic model for clinical trials
  • Linking/connecting clinical trials to other resources (UniProt, PubChem?)

Brainstorming Google doc here and brainstorming Slack channel #clinical_trials

12 of 43

Visualization for cohort data

Participants

  • Akio Nagano, Yosuke Kawai [interested], Chihiro (interested), Michel (interested), Chang

Description

  • A tentative plan for visualizing cohort data
  • In which direction?
    • I’m going to try a (maybe) slightly new way of visualizing information.
    • Cohort participants are represented as dots.
    • The dots will change shape depending on the visualization you're trying to achieve.
    • For example, if you're representing a histogram, the dots representing cohort participants will move to the bin they belong to and become part of the bars.

13 of 43

Visualization for HPO and MP

Participants

  • Eisuke Dohi, Kushida Tatsuya, Kozo Nishida, Yasunori Yamamoto, Yuka Tateisi, Terue Takatsuki

Description

Objectives:

- To visualize HPO and MP in a much easier way

Methods

- Data extraction from the RIKEN MetaDatabase SPARQL endpoint

https://knowledge.brc.riken.jp/bioresource/sparql

- Create Visualization and WebApp

https://drive.google.com/drive/folders/1XmcCRT1iwGOfRL9QZOmY6F4UknK3uGEf

Achievement (Both HPO and MP)

- Data extraction

For each subclass (first-tier category), for each node:

① leaf or not? ② number of layers ③ the path to the node ④ overlapping nodes

- Visualization in a tree structure

Future Direction

- Develop WebApp for Ontology curation

(Selecting or Adding term)

→ 1: For mapping between HPO and MP

→ 2: For curation of HPO with clinicians

14 of 43

R2

Other organisms

15 of 43

Viral phylogenomics

Participants

  • Russell, Yosuke, David (interested)

Problem : Species trees are usually built using sets of universal marker genes. Viruses don’t have universal genes!

Proposal : Cluster gene trees by topology, build species trees for taxonomic groups with compatible gene trees.

Motivation : Species trees are the starting point for studying recombination among viruses and their hosts, testing models of species concepts in viruses, illuminating the origin of cellular and viral life, and many other things.

Dataset : IMG/VR (https://img.jgi.doe.gov/cgi-bin/vr/main.cgi) is the largest collection of viral genomes, with 5,576,197 genomes and MAGs in 2,917,521 vOTUs spanning all clades of the viral world.

Workflow : 木槌 (kizuchi) (https://github.com/ryneches/kizuchi/) uses prodigal-gv, hmmer, mafft, trimal, and fasttree to generate gene trees.

e.g. : https://ggdc.dsmz.de/victor.php

16 of 43

Cultivation media & phenotypic traits

Participants

  • Julia, Shuichi, Natsuko, Risa, Kohei, Yoko, Tatsuya, (Erick: curious), Susumu (interested)

Description

  • Sharing media information between MediaDive and TogoMedium
    • Understand the structure of each terminology
    • Align terminologies manually and automatically
    • Expanding and applying the cultivation media ontology
    • Developing an exchange format for media between the two platforms
  • Calculating similarity between media
  • Integrating more information on media design
  • Cleansing phenotypic trait data and test data

  • If time allows: strategies and prototypes for AI prediction of cultivation media
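One simple way to start on "calculating similarity between media" is Jaccard overlap of ingredient sets; the media and ingredient names below are invented for illustration:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two ingredient sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy ingredient lists (illustrative, not taken from MediaDive/TogoMedium).
medium_a = {"tryptone", "yeast extract", "NaCl"}
medium_b = {"tryptone", "soytone", "NaCl", "glucose", "K2HPO4"}

print(round(jaccard(medium_a, medium_b), 3))  # → 0.333
```

A real comparison would first map ingredient names through the aligned cultivation-media ontology so that synonymous ingredients count as the same set element.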

17 of 43

R3

Broader life sciences

18 of 43

GLYCO and all things sweet!

Participants

  • Kiyoko, Evan, Issaku, Nathan, Akihiro, Masaaki, Masae, (Núria interested), Dani (could help here).

Description

  • The structure of glycans lacks clear, distinguishable definitions, making it problematic to determine which structures should be considered as glycan data.
  • Therefore, we will analyze the data from GlyTouCan, a glycan structure repository, to understand what structures are considered glycans and what structures are considered monosaccharides.
  • We aim to discuss the results and establish rules for defining which structures should be classified as glycans.
  • GlyCosmos development:
    • Archetypes and subsumption (Akihiro with help from Masaaki)
    • Motifs (Masae)
    • GlyCosmos RDF for RDF Portal (Masae)
  • Development of tools
    • Update GlycanBuilder2, GlycanFormatConverter, wurcs2pic, etc.

19 of 43

Human Glycome Atlas (HGA)

Participants

  • Kiyoko, Achille, Ruwan, Hannah

Description

  • Evaluate various infrastructure components
    • QLever
    • GRASP
    • UniProt
    • Others?
  • Try to load GlyCosmos RDF into QLever to assess its performance

20 of 43

PubChem ⇔ Nikkaji Alignment

Participants

  • Yuka, Evan, Tatsuya, Issaku

Description

  • Update data in PubChem originating from Nikkaji
    • Remove duplicate entries (same CID, different SIDs) in PubChem
    • Remove inconsistencies between Nikkaji/PubChem (Nikkaji ver. 2018) and Nikkaji RDF (Nikkaji ver. 2022)
    • Set up a procedure for finding inconsistencies and uploading new Nikkaji entries to PubChem

21 of 43

Plant Breeding Ontology (PBO)

Participants

  • Erick Antezana, Hiromi Kajiya-Kanegae, Shuichi, Wasin Poncheewin, Núria, Akio Nagano

Description

  • Update the current version of PBO (OBO and RDF)
    • add new terms
    • refine some definitions
    • add Japanese translations
    • add new « categories » = hierarchy
  • Review & update the support scripts (Python)
  • Load PBO (in RDF) into a triple store
  • Generate a few sample queries
    • on PBO
    • combining other resources (federation?)
  • Explore new opportunities
  • Publication
    • update the draft
    • Japanese characters as images (fix)
  • We need a nice ontology image/logo <— CALL FOR ARTISTS! FOUND!

22 of 43

Japanese Food Ontology

Participants

  • Chihiro, Tatsuya, Kiyoko (interested), Erick A. (interested), Shuichi (interested), Risa (interested), Núria (interested), Julia (interested), Susumu (interested)

Description

  • Integrate the Japanese food ontology with other Japanese food resources
    • based on FGNHNS (https://bioportal.bioontology.org/ontologies/FGNHNS)
    • addition of food composition data (MEXT)
    • addition of standard food name data (MIC)
  • Consideration
    • addition of allergen information
    • relation of crop information
    • relation of FoodOn

23 of 43

BH24 Wikiblitz (fun and sidetopic)

Participants

  • Andra, Yasunori, Shuya, Russell, Michael, Tore

What is a Wikiblitz?

A Wikiblitz combines Wikidata/Commons with a Bioblitz:

• A Bioblitz is a communal effort to record as many species as possible within a specific location and time.

Why Participate?

• Your observations, under an open license, can be reused.

• Using Wikidata, we link these observations to the semantic web.

• You might even discover a species not yet observed!

Join Us!

Slack: #wikiblitz

iNaturalist: Biohackathon 2024 Project

Let’s explore and contribute together! (Maybe interesting? https://www.earthmetabolome.org/)

24 of 43

R4

Data analysis and methods

25 of 43

Hindsight/best practices

Participants

  • Jerven, Evan, Yoko, Erick, Andrea, Andra…, Yasunori, Michel, Julia, Thomas
  • Jose (interested), Arto (interested), Takatomo (interested), Shuichi (interested), Chunlei (interested), Risa (interested)

Description

  • UniProt, PubChem, Rhea
    • We did some stuff, what do we regret, what do we want to improve
    • Can we “fix” it in spec-compatible ways (e.g., owl:equivalentClass)?
  • Advice for the next gen
    • Query optimizer friendly
    • Human friendly SPARQL, RDF and identifiers
    • Long term data preservation
    • Multi-Language support
  • Input from data integrators
    • What do they love/hate?
    • What is best way to improve interoperability of RDF data sets?

26 of 43

Getting shapes from large RDF inputs

Participants

  • Dani, Yasunori Yamamoto, Jose Labra, Andra Waagmeester, Jerven
  • Evan (interested)
  • Gos Micklem
  • Maxat (interested)

Description:

We aim to automatically extract RDF shapes (ShEx, SHACL) from large data sources. To address scalability challenges, we've developed a solution that involves splitting the input source into manageable slices and then merging the resulting schemas. However, 1) this is just one approach, and 2) we need to enhance the subsetting process to ensure the subgraphs are as complete as possible while remaining manageable by commodity hardware.
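The slice-and-merge idea can be sketched as follows, with the "shape" reduced to just the set of predicates seen per class (all triples and names are hypothetical, and real ShEx/SHACL inference is far richer):

```python
from collections import defaultdict
from itertools import islice

# Toy triples: (subject, predicate, object); rdf:type marks the class.
triples = [
    ("ex:p1", "rdf:type", "ex:Protein"),
    ("ex:p1", "ex:name", "BRCA1"),
    ("ex:p2", "rdf:type", "ex:Protein"),
    ("ex:p2", "ex:organism", "ex:human"),
]

def slices(iterable, size):
    """Split the input source into manageable slices."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def infer_shapes(chunk):
    """Crude per-slice 'shape': the set of predicates seen per class."""
    cls = {s: o for s, p, o in chunk if p == "rdf:type"}
    shapes = defaultdict(set)
    for s, p, o in chunk:
        if s in cls and p != "rdf:type":
            shapes[cls[s]].add(p)
    return shapes

def merge(all_shapes):
    """Merge the per-slice schemas by union."""
    merged = defaultdict(set)
    for sh in all_shapes:
        for c, preds in sh.items():
            merged[c] |= preds
    return merged

merged = merge(infer_shapes(c) for c in slices(triples, 2))
print(dict(merged))
```

Note how this naive slicing already exhibits the completeness problem mentioned above: a subject's rdf:type triple can land in a different slice than its other triples, so smarter subsetting strategies are needed to keep each subgraph as self-contained as possible.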

We would appreciate assistance with:

  • Developing subsetting strategies
  • Suggesting parallelization techniques
  • Hands-on support for implementation

docker run -p 5000:5000 gm-api

27 of 43

Visualize sheXer results

Participants

  • Dani
  • Kozo
  • Andra
  • Gos
  • Jose (interested)
  • Toshiaki

Introducing sheXer: Automate Your RDF Schema Inference!

What is sheXer?

A Python library that automatically infers RDF schemas.

Current Features:

• Outputs ShEx, SHACL, and PlantUML visualizations.

Our Goal:

• Enhance schema visualizations with new and diverse visualization backends.

Slack: #shexer

28 of 43

Using (discovered) schema

Participants

  • Dani, Jerven, Jose Labra, Núria, Yasunori, Andra
  • Evan (interested), Chunlei (interested)
  • Gos, Toshiaki

Description

  • We can use sheXer, void-generator, or rdfdoc to discover the schema of data.
  • With the schema we can
    • generate code
      • generate an RDF-config model stub file (hopefully automatically naming variables based on class/property labels (rdfs:label / rdfs:comment))
    • validate/generate sparql using rudof
    • link examples to the schema
    • improve query auto complete

29 of 43

LLM-SPARQL

We propose to develop an LLM-assisted SPARQL query answering system

  • schema-informed in-context learning by the LLM
  • corrective SPARQL query generation
  • evaluation over human and AI generated benchmarks

Tasks:

  1. SPARQL query benchmark (human and AI generated)
  2. LLM-framework (llama-index)
    1. identify schema-relevant info from user query
      1. NLP, schema extraction & store, schema mapping, ontology reasoning, graph analysis
    2. generate SPARQL query
      • baseline LLMs: local (llama3.1), cloud (GPT-4o)
      • constrained sampling (syntax-directed token selection)
      • fine-tune LLM
      • (train a new generator via stable diffusion)
    3. validate and iteratively generate SPARQL query
      • analyze syntax and semantics
      • suggest improvements
  3. Evaluation
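The generate-validate-retry loop in step 2.3 might look like the following skeleton; both the LLM call and the syntax check are stubs (a real system would call an actual model and a real SPARQL parser, e.g. rdflib's):

```python
def llm_generate(question, feedback=None):
    # Stub standing in for a real LLM call; a real system would pass the
    # question, schema context, and any validator feedback in the prompt.
    if feedback:
        return "SELECT ?s WHERE { ?s a ?type }"
    return "SELECT ?s WHERE { ?s a ?type "  # deliberately unbalanced

def validate(query):
    """Very crude syntax check standing in for a real SPARQL parser."""
    if query.count("{") != query.count("}"):
        return "unbalanced braces"
    if "SELECT" not in query.upper():
        return "missing SELECT"
    return None  # no problems found

def answer(question, max_rounds=3):
    """Generate a query, validate it, and retry with feedback until valid."""
    feedback = None
    for _ in range(max_rounds):
        query = llm_generate(question, feedback)
        feedback = validate(query)
        if feedback is None:
            return query
    raise RuntimeError("no valid query after retries")

print(answer("Which subjects have a type?"))
```

The key design point is that the validator's error message is fed back into the next generation round, which is what "corrective SPARQL query generation" amounts to.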

Project Slidedeck

Participants

  • Michel Dumontier
  • Jerven Bolleman
  • Andra Waagmeester
  • Hikaru Nagazumi
  • David Steinberg
  • Chang Sun
  • Arto Bendiken
  • Yasunori Yamamoto [interested]
  • Claude Nanjo
  • Eric Prud’hommeaux
  • Jose Labra
  • Gos Micklem
  • Dani Fernández
  • Julia (interested)
  • Chihiro (interested)
  • Chunlei
  • Shuichi [interested]
  • Toshiaki

30 of 43

SPARQL - Schema conversions

Building blocks identified as part of the LLM SPARQL project

Challenge 1: ShEx → NL Question + SPARQL :

To compare with the other direction: NL Question → SPARQL

SPARQL query benchmark

Challenge 2: SPARQL → ShEx:

Goal: Schema extraction

Challenge 3: Compare between ShEx schemas

Goal: Schema mapping

Challenge 4: Visualize ShEx schemas

Goal: Help humans understand the schemas

Participants:

  • Jose Labra, Hikaru Nagazumi, Eric Prud’hommeaux, Claude Nanjo, Andra, Yasunori Yamamoto, Dani, Gos, …???

[Diagram: a schema (ShEx) plus an RDF data endpoint yield SPARQL queries and NL queries, from which schemas (ShEx) are generated; are the generated schemas equal to the original?]

Slack channel: #sparql_schema_conversions

31 of 43

LLM-assisted BioSample curation

Participants

  • Shuya, Tazro, Shinya, Zhaonan, Yuki, Takatomo, Shuichi, Susumu

Description

  • Improve the quality of metadata registered in the BioSample database using LLMs
    • Metadata in BioSample is very heterogeneous and hard to interpret algorithmically
    • Extract phrases to be mapped to ontology terms
    • For evaluation of the task, create a testset manually
      • Complete the testset

{
  "accession": "SAMN15915146",
  "Matrigel_Passages": "0",
  "isolate": "SW480",
  "organism": "Homo sapiens",
  "replicate": "1",
  "tissue": "cell line",
  "title": "Human sample from Homo sapiens"
}

{
  "cell line": "SW480"
}
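For the example above, even a rule-based baseline can recover the cleaned record; the rule itself is an assumption for illustration only (the group's actual plan is to use LLMs for the heterogeneous general case):

```python
def extract_cell_line(sample: dict) -> dict:
    """Illustrative heuristic: if the 'tissue' field says the sample is a
    cell line, take the cell-line name from 'isolate'."""
    out = {}
    if sample.get("tissue", "").lower() == "cell line" and "isolate" in sample:
        out["cell line"] = sample["isolate"]
    return out

sample = {
    "accession": "SAMN15915146",
    "isolate": "SW480",
    "organism": "Homo sapiens",
    "tissue": "cell line",
}
print(extract_cell_line(sample))  # → {'cell line': 'SW480'}
```

Hand-written rules like this could also serve as a cheap baseline against which to evaluate the LLM extractions on the manually created test set.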

32 of 43

Characterize Biology use of LLMs

The interaction with LLMs presents a new way of using computers and should be studied directly.

The WildChat dataset includes 1M real-world usages of ChatGPT, including many biological questions. Let’s find out what people used it for (and whether it was any good!)

  • Find subsets of WildChat dataset relevant to bioinformatics/biology
  • Benchmark usage from other LLMs
  • Summarize the usage (using LLM and hand curation)
  • Review the quality of responses
  • Generate synthetic ChatGPT conversations using WildChat model

Participants:

David

Hirokazu

Tazro [interested]

Toshiaki [interested]

Susumu [interested]

Pitiporn (Sam) [interested]

33 of 43

Ruby coding with help of LLMs

Participants

  • Naohisa, Hiroyuki, Arto,

Description

  • Trying to write Ruby code for bioinformatics tasks with the help of ChatGPT and other LLMs
  • Developing Ruby libraries/applications with LLM
    • BioRuby: Bioinformatics library for Ruby

Slack channel: #ruby

34 of 43

UMAP all the APIs

Make a visual representation of metadata about DBCLS services

  • Create a sparse vectorized representation of API/services
  • Generate a dimensionally reduced visualization of the services
  • Provide an interactive interface for accessing underlying services
  • Characterize clusters

Participants:

  • David

35 of 43

Data quality

Participants

  • Andrea (if others!), Kiyoko (interested), Yasunori
  • Achille (Interested)
  • Yuka (interested but not sure if I can participate)
  • Erick (Interested)
  • Evan (interested) , Jerven (Interested), Chunlei (interested)

Description

  • How do you annotate dataset quality so that a consumer can evaluate whether it is viable for a given purpose?
  • Objectives of this project are to develop:
    • A set of properties for data quality labeling (e.g.: dataset coherence)
    • A set of metrics for such properties
    • Implementations of such properties
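One concrete metric in this spirit is field completeness, i.e. how often a property is actually filled in; the records below are invented for illustration:

```python
def completeness(records, field):
    """Fraction of records with a non-empty value for `field`."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records) if records else 0.0

# Toy dataset (illustrative records, not from any real resource).
records = [
    {"gene": "TP53", "organism": "Homo sapiens"},
    {"gene": "BRCA1", "organism": ""},
    {"gene": "XYZ"},
]

print(completeness(records, "organism"))  # → 0.3333333333333333
```

A quality label for a dataset could then bundle several such metrics (completeness, coherence, freshness) so consumers can judge fitness for their purpose.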

36 of 43

Enhancement of the Bioresource Retrieval System using ChatGPT

Participants

  • Tatsuya, Chihiro, Terue, Hiromi (interested)

Objectives:

  • Implement the ChatGPT API in the bioresource retrieval system to enable fuzzy search capabilities.

Tasks:

  • Integrate the ChatGPT API with the FileMaker system.
  • Develop and refine effective prompts for the ChatGPT API.
  • Test the fuzzy search results to ensure accuracy.
  • Compare FileMaker+ChatGPT with other services, such as Dify and NotebookLM.

Expected Results:

  • Enable users to perform more intuitive and flexible queries.
  • Enhance the overall user experience.
  • Prevent failures in information retrieval due to user errors, such as typos or misspellings.
  • Improve search accuracy, with a particular focus on enhancing recall.

37 of 43

Mass Spectrum Viewer

Participants

Description

  • Developing some kind of viewer for mass spectrum data.
    • Heatmap
    • 3D
    • Mirror Plot

https://github.com/masspp

38 of 43

Workflow and Container helpdesk

Participants

  • Tomoya, Kentaro (Yamaken), Pitiporn (Sam), Manabu, David, Naohisa, Michael (online), Arto, Chihiro (interested)

Description

  • Help others to develop their workflows
    • e.g., CWL, snakemake, nextflow, …
  • Help others to use workflow-related technologies
    • Containers such as Docker, Singularity, Podman, …
    • Job Schedulers such as Slurm, GridEngine,…
  • Develop and improve workflow ecosystems
    • e.g., executors, specifications, related tools, and workflows!

Help !!

  • We want a better name for our group!!

Slack channel: #workflows

39 of 43

Revisiting SRAmetadb.sqlite

Participants: Nishad

Description

SRAmetadb.sqlite assembles Sequence Read Archive (SRA) metadata into an offline SQLite database. This database is used by the SRAdb R package and the pysradb Python CLI tool to query SRA metadata. However, it has not been updated frequently, with the last update in late 2023, and no public tools are available for rebuilding or updating it. This project aims to create an open-source pipeline for generating and updating a similar SRAmetadb.sqlite database from SRA metadata.

Rationale:

  • SRAmetadb.sqlite is valuable beyond the R package and can be leveraged by other tools and languages, like DuckDB.
  • Introduce features such as generating subsets of SRA metadata, e.g., limiting to specific species.
  • Offline access to SRA metadata enhances speed for querying and analysis.
  • Adapt to emerging use cases, including LLMs and RAG applications.

Directions:

  • This project prioritizes low resource usage and ease of maintenance/update rather than optimizing the speed of database generation.
  • This pipeline is not intended to replicate the original SRAmetadb.sqlite or create a drop-in replacement. Still, it will attempt to be as compatible with the original as possible and explore the potential of some of the modern SQLite features.
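The core idea, an offline SQLite table of SRA metadata that can be subset (e.g., by species), can be sketched with the standard library alone; the column names and rows below are invented, not the real SRAmetadb schema:

```python
import sqlite3

# Toy rows standing in for parsed SRA metadata (accessions are invented).
rows = [
    ("SRR000001", "Homo sapiens", "RNA-Seq"),
    ("SRR000002", "Mus musculus", "WGS"),
    ("SRR000003", "Homo sapiens", "WGS"),
]

con = sqlite3.connect(":memory:")  # a real pipeline would write a file
con.execute(
    "CREATE TABLE run (accession TEXT PRIMARY KEY, organism TEXT, strategy TEXT)"
)
con.executemany("INSERT INTO run VALUES (?, ?, ?)", rows)

# Species-limited subset, one of the feature ideas above.
human = con.execute(
    "SELECT accession FROM run WHERE organism = ? ORDER BY accession",
    ("Homo sapiens",),
).fetchall()
print(human)  # → [('SRR000001',), ('SRR000003',)]
```

Because the file is plain SQLite, the same database could also be opened by other tools such as DuckDB, in line with the rationale above.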

40 of 43

Additional topics

41 of 43

42 of 43

Request for tutorials

  • QLever: Arto, Gos, Toshiaki, Evan, Julia, Ruwan
  • Neptune: Arto, Evan, Jerven, Hannah
  • RDF Portal: Andrea, Gos, Núria,
  • RDF-config: Núria, Gos, Dani, Evan
  • TogoVar:
  • SPARQList
  • TogoID: Evan, Chunlei
  • PubCaseFinder: Andrea
  • TogoDX
  • TogoDB: Hiromi
  • RDF-doctor
  • SPARQL-proxy
  • TogoStanza
  • MetaStanza
  • UmakaYummy
  • PubAnnotation
  • Grasp
  • JSON2LD Mapper: Chunlei
  • TogoWS
  • TogoGenome: Sam
  • Endpoint browser: Jerven, Chunlei
  • TogoMedium
  • D2RQ Mapper: Julia, Jerven
  • PubDictionaries: Andrea
  • Allie
  • SPANG
  • Colil
  • inMeXes
  • Med2RDF: Andrea

43 of 43

Group photo

Let’s take a group photo in YUMORI before lunch!