1 of 97

DataFusion and Apache Arrow

Supercharge Your Data Analytical Tool with a Rusty Query Engine

Andrew Lamb

Staff Engineer, InfluxData

Apache Arrow PMC

Daniël Heres

Data Engineer, GoDataDriven

Apache Arrow PMC

2 of 97

Introduction

Your Speakers

Andrew: Staff Engineer @ InfluxData

Previously:

  • Query Optimizer @ Vertica, Oracle Database Server, embedded compilers
  • Chief Architect + VP Engineering roles at ML startups

Daniël: Data / ML Engineer @ GoDataDriven

Previously:

  • Data / ML Engineer @ bol.com
  • Startups

3 of 97

Why should you care?


Andrew

4 of 97

Recent Proliferation of Big Data systems


5 of 97

Recent Proliferation of Databases


6 of 97


7 of 97

What is going on?

COTS → Totally Custom

“Buy and Operate” (traditional IT)

  • Buy software from vendors
  • Operate on your own hardware, with sysadmins

“Build and Operate” (FANG)

  • Write software for, and operate, all components
  • Optimized for exact needs

“Assemble and Operate” (the current trend)

  • Assemble from open source technologies
  • Operate on resources in a public cloud

8 of 97

Apache Arrow

Multi-language toolkit for Processing and Interchange

Founded in 2016

Apache Software Foundation

Low level / foundational technology to build fast and interoperable analytic systems

Open standard, implementations in 12+ languages

Adopted widely in industry products and open source projects


9 of 97

DataFusion: A Query Engine

“DataFusion is an extensible query execution framework, written in Rust, that uses Apache Arrow as its in-memory format.”

  • Source: DataFusion website


10 of 97

DataFusion: A Query Engine

SQL Query:

SELECT status, COUNT(1)
FROM http_api_requests_total
WHERE path = '/api/v2/write'
GROUP BY status;

DataFrame:

ctx.read_table("http")?
    .filter(...)?
    .aggregate(..)?;

Either interface produces streams of Arrow data batches, drawing on catalog information (tables, schemas, etc).
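Fleshed out, the DataFrame snippet above looks roughly like the following in Rust. This is a minimal sketch: the CSV file and table registration are invented for illustration, and exact imports and signatures vary by DataFusion version.

use datafusion::error::Result;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // hypothetical file backing the http_api_requests_total table
    ctx.register_csv("http_api_requests_total", "requests.csv", CsvReadOptions::new())
        .await?;

    // the same query as the SQL version, via the DataFrame API
    let df = ctx
        .table("http_api_requests_total")?
        .filter(col("path").eq(lit("/api/v2/write")))?
        .aggregate(vec![col("status")], vec![count(lit(1))])?;

    df.show().await?; // run the plan and print the resulting Arrow batches
    Ok(())
}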

11 of 97

Implementation timeline for a new Database system

[Timeline graphic: “Let's build a database” 🤔 → “Look mom! I have a database!” 😃 → “Ok now this is pretty good” 😐]

Capabilities accumulate over years: client API; in-memory storage; in-memory filter + aggregation; durability / persistence; metadata catalog + management; query language parser; data model / type system; arithmetic expressions; date / time expressions; heuristic query planner; optimized / compressed storage; execution on compressed data; joins (then outer joins, window functions); additional client languages; subquery support; more advanced analytics; cost based optimizer; out of core algorithms; storage rearrangement; concurrency control; online recovery; distributed query execution; resource management.

12 of 97


But for Databases

🤔

13 of 97

LLVM-like Infrastructure for Databases

[Diagram]

Inputs to DataFusion:

  • Query (SQL, code, DataFrame, …)
  • Data (Parquet, CSV, statistics, …)
  • Code (UDF, UDA, etc)
  • Resources (cores, memory, etc)

Plan representations (dataflow graphs): a Logical Plan is optimized / transformed, lowered to an Execution Plan, optimized / transformed again, and run on optimized, Arrow-based execution operators (HashAggregate, Sort, Join, expression evaluation).

14 of 97

DataFusion: Totally Customizable

[Same diagram, with every component marked “Extend ✅”]

The inputs (query, data, code, resources), both plan representations, the optimization / transformation passes, and the execution operators are all extension points.

15 of 97

DataFusion Project Growth

[Chart: number of unique contributors over time, annotated with releases 5.0.0, 6.0.0, 7.0.0, and 8.0.0]

16 of 97

DataFusion Project Growth


17 of 97

DataFusion Milestones: Time to Mature

5+ year labor of love

  • Dec 2016: Initial DataFusion commit by Andy Grove
  • Mar 2018: Arrow in Rust started; DataFusion switches to Arrow
  • Feb 2019: Donation to Apache Arrow
  • 2020-2021: (hash) joins, window functions, performance & parallelization, etc.
  • Apr 2021: Ballista is donated to Apache Arrow
  • Nov 2021: DataFusion Contrib
  • Apr/May 2022: Subqueries

18 of 97

Overview of Apache Arrow DataFusion


Daniël

19 of 97

From Query to Results

Query / DataFrame → LogicalPlan → Optimize → ExecutionPlan → Optimize → Execute!

20 of 97

From Query to Results

an example

select
  count(*) num_visitors,
  job_title
from
  visitors
where
  city = 'San Francisco'
group by
  job_title

21 of 97

From Query to Results

datafusion package available via PyPI

visitors = ctx.table("visitors")
df = (
    visitors.filter(col("city") == literal("San Francisco"))
    .aggregate([col("job_title")], [f.count(literal(1))])
)
batches = df.collect()  # collect results into memory (Arrow batches)

22 of 97

From Query to Results

Query / DataFrame → LogicalPlan → Optimize → ExecutionPlan → Optimize → Execute!

The Logical Plan represents the what.

23 of 97

Initial Logical Plan

SQL is parsed, then translated into an initial Logical Plan.

(Read the plan from bottom to top.)

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=None

From SQL:

select count(*) num_visitors, job_title
from visitors
where city = 'San Francisco'
group by job_title

or from the DataFrame API:

visitors = ctx.table("visitors")
df = (
    visitors.filter(col("city") == literal("San Francisco"))
    .aggregate([col("job_title")], [f.count(literal(1))])
)
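To inspect these plans yourself, the Rust API can print them. A sketch, assuming a `ctx` with the `visitors` table registered; `explain(verbose, analyze)` is per recent DataFusion releases:

use datafusion::error::Result;
use datafusion::prelude::*;

async fn show_plans(ctx: &SessionContext) -> Result<()> {
    let df = ctx
        .sql(
            "SELECT count(*) num_visitors, job_title \
             FROM visitors \
             WHERE city = 'San Francisco' \
             GROUP BY job_title",
        )
        .await?;

    // explain(verbose, analyze) wraps the query in an EXPLAIN node;
    // the result is a printable table of plan stages
    df.explain(false, false)?.show().await?;
    Ok(())
}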

24 of 97

Let's Optimize!

  • Rewriting a query to an equivalent, optimized form can massively speed up execution (10x, 100x, 1000x)
  • 14 built-in optimization passes in DataFusion, with more added each release
  • Custom optimization passes can be plugged in (a sketch of the rule trait follows below)
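Custom passes implement the OptimizerRule trait. The do-nothing rule below shows the general shape only; the second argument of `optimize` (execution props vs. optimizer config) and the registration hook have changed across DataFusion releases, so treat this as an outline rather than a drop-in implementation.

use datafusion::error::Result;
use datafusion::logical_plan::LogicalPlan;
use datafusion::optimizer::optimizer::{OptimizerConfig, OptimizerRule}; // paths vary by release

/// A do-nothing pass that shows the shape of a custom optimizer rule.
struct NoopRule;

impl OptimizerRule for NoopRule {
    fn optimize(&self, plan: &LogicalPlan, _config: &OptimizerConfig) -> Result<LogicalPlan> {
        // A real rule pattern-matches on `plan` and returns a rewritten copy;
        // returning a clone leaves the plan unchanged.
        Ok(plan.clone())
    }

    fn name(&self) -> &str {
        "noop_rule"
    }
}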

Query / DataFrame → LogicalPlan → Optimize → ExecutionPlan → Optimize → Execute!

25 of 97

Let's Optimize!

Projection Pushdown

Minimizes IO (especially useful for columnar formats like Parquet) and processing.

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=None

26 of 97

Let's Optimize!

Projection Pushdown

Minimizes IO (especially useful for columnar formats like Parquet) and processing.

Before:

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=None

After projection_push_down:

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=Some([0, 1])

27 of 97

Let's Optimize!

Filter Pushdown

Minimizes IO (especially useful for columnar formats like Parquet) and processing.

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=Some([0, 1])

28 of 97

Let's Optimize!

Filter Pushdown

Minimizes IO (especially useful for columnar formats like Parquet) and processing.

Before:

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=Some([0, 1])

After filter_push_down:

Projection: #COUNT(UInt8(1)) AS num_visitors, #visitors.job_title
  Aggregate: groupBy=[[#visitors.job_title]], aggr=[[COUNT(UInt8(1))]]
    Filter: #visitors.city = Utf8("San Francisco")
      TableScan: visitors projection=Some([0, 1]), partial_filters=[#visitors.city = Utf8("San Francisco")]

29 of 97

Let's Create...

The ExecutionPlan

The Execution Plan represents the where and how

Query / DataFrame → LogicalPlan → Optimize → ExecutionPlan → Optimize → Execute!

30 of 97

The Initial Execution Plan

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
      HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
        FilterExec: city@1 = San Francisco
          CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]

31 of 97

And... Optimize!

Query / DataFrame → LogicalPlan → Optimize → ExecutionPlan → Optimize → Execute!

32 of 97

Optimize

CoalesceBatches: Avoiding small batch size

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
      HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
        FilterExec: city@1 = San Francisco
          CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]

33 of 97

Optimize

CoalesceBatches: Avoiding small batch size

Before:

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
      HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
        FilterExec: city@1 = San Francisco
          CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]

After coalesce_batches:

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    CoalesceBatchesExec: target_batch_size=4096
      RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
        HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
          CoalesceBatchesExec: target_batch_size=4096
            FilterExec: city@1 = San Francisco
              CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]

34 of 97

Optimize

Repartition: Introducing parallelism

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    CoalesceBatchesExec: target_batch_size=4096
      RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
        HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
          CoalesceBatchesExec: target_batch_size=4096
            FilterExec: city@1 = San Francisco
              CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]

35 of 97

Optimize

Repartition: Introducing parallelism

Before:

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    CoalesceBatchesExec: target_batch_size=4096
      RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
        HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
          CoalesceBatchesExec: target_batch_size=4096
            FilterExec: city@1 = San Francisco
              CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]

After repartition:

ProjectionExec: expr=[COUNT(UInt8(1))@1 as number_visitors, job_title@0 as job_title]
  HashAggregateExec: mode=FinalPartitioned, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
    CoalesceBatchesExec: target_batch_size=4096
      RepartitionExec: partitioning=Hash([Column { name: "job_title", index: 0 }], 16)
        HashAggregateExec: mode=Partial, gby=[job_title@0 as job_title], aggr=[COUNT(UInt8(1))]
          CoalesceBatchesExec: target_batch_size=4096
            FilterExec: city@1 = San Francisco
              RepartitionExec: partitioning=RoundRobinBatch(16)
                CsvExec: files=[./data/visitors.csv], has_header=true, limit=None, projection=[job_title, city]
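Both physical rewrites are driven by session settings. A sketch of the relevant knobs; the method names match DataFusion releases around the v8 era, so check your version:

use datafusion::prelude::*;

fn context_with_tuning() -> SessionContext {
    // target_partitions controls the fan-out RepartitionExec introduces;
    // batch_size is the row count CoalesceBatchesExec coalesces toward
    let config = SessionConfig::new()
        .with_target_partitions(16)
        .with_batch_size(4096);
    SessionContext::with_config(config)
}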

36 of 97

Getting results

Return record batches (or write results)

Query / DataFrame → LogicalPlan → Optimize → ExecutionPlan → Optimize → Execute! → Arrow Batches
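In the Rust API this final step is a collect or a stream. A sketch, assuming a registered `visitors` table and `futures` as a dependency:

use datafusion::arrow::record_batch::RecordBatch;
use datafusion::error::Result;
use datafusion::prelude::*;
use futures::StreamExt;

async fn fetch_results(ctx: &SessionContext) -> Result<()> {
    // materialize every result batch in memory ...
    let batches: Vec<RecordBatch> = ctx.table("visitors")?.collect().await?;
    println!("got {} batches", batches.len());

    // ... or pull batches incrementally without buffering them all
    let mut stream = ctx.table("visitors")?.execute_stream().await?;
    while let Some(batch) = stream.next().await {
        println!("{} rows", batch?.num_rows());
    }
    Ok(())
}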

37 of 97

DataFusion Features

  • Mostly complete SQL implementation (aggregates, joins, window functions, etc)
  • DataFrame API (Python, Rust)
  • High performance vectorized, native, safe, multi-threaded execution
  • Common file formats: Parquet, CSV, JSON, Avro
  • Highly extensible / customizable
  • Large, growing community driving project forward


38 of 97

SQL Support

Projection (SELECT), Filtering (WHERE), Ordering (ORDER BY), Aggregation (GROUP BY)

Aggregation functions (COUNT, SUM, MIN, MAX, AVG, APPROX_PERCENTILE, etc)

Window functions (OVER ([PARTITION BY ...] [ORDER BY ...]))

Set functions: UNION (ALL), INTERSECT (ALL), EXCEPT

Scalar functions: string, Date/time,... (basic)

Joins (INNER, LEFT, RIGHT, FULL OUTER, SEMI, ANTI)

Subqueries, Grouping Sets
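As one concrete example, a hedged sketch running one of the window functions above through the Rust API; the `visitors` table and its columns are the running example from earlier:

use datafusion::error::Result;
use datafusion::prelude::*;

async fn rank_titles(ctx: &SessionContext) -> Result<()> {
    // a window function, run as ordinary SQL
    let df = ctx
        .sql(
            "SELECT job_title, \
                    ROW_NUMBER() OVER (PARTITION BY city ORDER BY job_title) AS rn \
             FROM visitors",
        )
        .await?;
    df.show().await?;
    Ok(())
}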


39 of 97

Extensibility

Customize DataFusion to your needs

User Defined Functions

User Defined Aggregates

User Defined Optimizer passes

User Defined LogicalPlan nodes

User Defined ExecutionPlan nodes

User Defined TableProvider

User Defined FileFormat

User Defined ObjectStore
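For a taste of the first of these, a minimal scalar UDF sketch in Rust. Import paths (`create_udf`, `Volatility`, `make_scalar_function`) have moved between DataFusion releases, and the `pow2` function is invented for illustration:

use std::sync::Arc;

use datafusion::arrow::array::{ArrayRef, Float64Array};
use datafusion::arrow::datatypes::DataType;
use datafusion::logical_plan::{create_udf, Volatility}; // paths vary by release
use datafusion::physical_plan::functions::make_scalar_function;
use datafusion::prelude::*;

fn register_pow2(ctx: &mut SessionContext) {
    // Vectorized body: receives whole Arrow arrays, returns an Arrow array.
    let pow2 = make_scalar_function(|args: &[ArrayRef]| {
        let input = args[0]
            .as_any()
            .downcast_ref::<Float64Array>()
            .expect("pow2 expects a Float64 argument");
        let squared: Float64Array = input.iter().map(|v| v.map(|x| x * x)).collect();
        Ok(Arc::new(squared) as ArrayRef)
    });

    ctx.register_udf(create_udf(
        "pow2",                       // name visible to SQL / DataFrame code
        vec![DataType::Float64],      // argument types
        Arc::new(DataType::Float64),  // return type
        Volatility::Immutable,        // same input always gives same output
        pow2,
    ));
}

Once registered, the function is usable from SQL (SELECT pow2(x) FROM t) and the DataFrame API alike; user defined aggregates follow the same pattern via create_udaf.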


40 of 97

Systems Powered by DataFusion


Andrew

41 of 97

FLOCK

  • Overview:
    • Low-Cost Streaming Query Engine on FaaS Platforms
    • Project from UMD Database Group, runs streaming queries on AWS Lambda (x86 and arm64/graviton2).
  • Use of DataFusion:
    • SQL API
    • DataFrame API: to build plans
    • Optimized native plan execution

42 of 97

ROAPI

  • Overview:
    • Read-only APIs for static datasets, without writing code
    • columnq-cli: run SQL queries against CSV files
  • Use of DataFusion:
    • SQL API
    • DataFrame API (to build plans for GraphQL)
    • File formats: CSV, JSON, Parquet, Avro
    • Optimized native plan execution

43 of 97

VegaFusion

  • Overview:
    • Accelerates execution of (interactive) data visualizations
    • Compiles Vega data transforms into DataFusion query plans.
  • Use of DataFusion:
    • DataFrame API: To build plans
    • UDFs: to implement some Vega expressions
    • Optimized native plan execution


44 of 97

Cube.js / Cube Store

  • Overview:
    • Headless Business Intelligence
    • cubestore pre-aggregation storage layer
  • Use of DataFusion (fork)
    • SQL API (with custom extensions)
    • Custom Logical and Physical Operators
    • UDFs: custom functions
    • Optimized native plan execution


45 of 97

InfluxDB IOx

  • Overview:
    • In-memory columnar store using object storage, the future core of InfluxDB; supports SQL, InfluxQL, and Flux
    • Query and data reorganization built with DataFusion
  • Use of DataFusion:
    • Table Provider: Custom data sources
    • SQL API
    • PlanBuilder API: Plans for custom query language
    • User Defined Logical and Execution Plans
    • UDFs: to implement the precise semantics of influxRPC
    • Optimized native plan execution


46 of 97

Coralogix

  • Overview:
    • Stateful streaming analytics with machine learning, letting teams monitor and visualize observability data in real time, before indexing
  • Use of DataFusion:
    • Table Provider: custom data source
    • User Defined Logical and Execution Plans: to implement a custom query language
    • User Defined ObjectStore: for queries over data in object storage
    • UDFs: for working with semi-structured data
    • Optimized native plan execution


47 of 97

blaze-rs

  • Overview:
    • High performance, low-cost native execution layer for Spark: executes the physical operators in Rust
    • Translates Spark Exec nodes into DataFusion Execution Plans
  • Use of DataFusion
    • Optimized native plan execution
    • HDFS Object Store Extension


48 of 97

Ballista Distributed Compute

  • Overview:
    • Spark-like distributed Query Engine (part of Arrow Project)
    • Adds distributed execution to DataFusion plans
  • Use of DataFusion:
    • SQL API
    • DataFrame API
    • Optimized native plan execution
    • File formats: CSV, JSON, Parquet, Avro


49 of 97

What’s Next?


Daniël

50 of 97

Future Directions

  • Embeddability
    • More regular releases to crates.io, more modularity
  • Broader SQL features
    • Subqueries, more date/time functions, struct / array types
  • Improved Performance
    • Query directly from Object Storage
    • More state of the art tech: JIT, NUMA aware scheduling, hybrid row/columnar exec
  • Ecosystem integration
    • FlightSQL, Substrait.io
    • Databases
  • GPU support


51 of 97

Come Join Us

We ❤️ Our Contributors

  • Contributions at all levels are encouraged and welcomed.
  • Learn Rust!
  • Learn Database Internals!
  • Have a great time with a welcoming community!

More details:

https://arrow.apache.org/datafusion/community/communication.html


52 of 97

Andrew Lamb

Staff Engineer, InfluxData

Apache Arrow PMC

Daniël Heres

Data Engineer, GoDataDriven

Apache Arrow PMC


53 of 97


54 of 97

Backup Slides


55 of 97

Thank You!

Andrew Lamb

Staff Engineer, InfluxData

Apache Arrow PMC

Daniël Heres

Data Engineer, GoDataDriven

Apache Arrow PMC

56 of 97

DataFusion / Arrow / Parquet

[Diagram: DataFusion builds on the Parquet, Arrow, and sqlparser-rs crates]

57 of 97

A Virtuous Cycle

Increased Use Drives Increased Contribution

Increased use of open source systems

Increased capacity for maintenance and contribution

DataFusion and Apache Arrow are key open source technologies for building interoperable open source systems

58 of 97

delta-rs

  • Overview:
    • Native Delta Lake implementation in Rust
  • Use of DataFusion
    • Table Provider API: allows other DataFusion users to read from Delta tables


DISCLAIMER: Not yet cleared / verified with project team

59 of 97

Cloudfuse Buzz

  • Serverless cloud-based query engine
    • map using cloud functions (AWS Lambda)
    • aggregate using containers (AWS Fargate)
  • Project (expected to be) continued from June


DISCLAIMER: Not yet cleared / verified with project team

60 of 97

dask-sql

  • Overview:
    • TBD
  • Use of DataFusion:


DISCLAIMER: Not yet cleared / verified with project team

61 of 97

Apache Arrow Analytics Toolkit

Where does DataFusion fit?

[Diagram: where DataFusion fits in the Apache Arrow analytics toolkit]

  • Data formats: Parquet (“disk”), Arrow (“memory”)
  • Low level calculations + interchange: compute kernels, IPC, C ABI, Arrow Flight, Arrow FlightSQL
  • Runtime subsystems: DataFusion, C++ Query Engine
  • Analytics / database systems: built using some of this stack, via native implementations and language bindings

62 of 97

Query Engines

What is it and why do you need one?

  1. Add SQL or DataFrame interface to your application’s data
  2. Implement a custom query language / DSL
  3. Implement a new data analytic system
  4. Implement a new database system (natch)

A query engine maps desired computations (SQL, DataFrames ala Pandas) to efficient calculation: projection pushdown, filter pushdown, joins, expression simplification, parallelization, etc.

63 of 97

datafusion-python

  • Overview:
    • Python dataframe library (modeled after pyspark)
  • Use of DataFusion
    • SQL API
    • DataFrame API
    • File formats: CSV, JSON, Parquet, Avro
    • Optimized native plan execution


DISCLAIMER: Not yet cleared / verified with project team

64 of 97

Common Themes

Come for the performance, stay for the features (?)

  • Native execution: native (non-JVM) implementations of Spark / Spark-like behavior
  • SQL interface / DataFrame API
  • Projects are leveraging properties of Rust

65 of 97

Better, Faster, Cheaper

The DataFusion Query Engine is part of the commoditization of advanced analytic database technologies, which will transform analytic systems over the next decade: better, faster, cheaper.

66 of 97

Andrew’s Notes

Proposal: Data + AI Summit talk

Desired Takeaways:

  1. If you need a query engine (in Rust?), you should use DataFusion

Thesis: DataFusion is part of a larger trend (spearheaded by Apache Arrow) in the commoditization of analytic database technologies, which will lead to many faster / cheaper / better analytic systems over the next decade

Other decks for inspiration:

DataFusion: An Embeddable Query Engine Written in Rust

A Rusty Introduction to Apache Arrow and how it Applies to a Time Series Database

2021-04-20: Apache Arrow and its Impact on the Database industry.pptx

