State of Web3 Grants Report

Summary

From late June to early September 2023, 20 interviews were conducted with individuals across 13 organizations. The results of these interviews have been organized in this document.

Three types of grant programs were covered: active programs, sunset programs, and quadratic funding grant operators. The specific programs covered are as follows:[1]

The 11 active and sunset grant programs have issued upwards of half a billion dollars' worth of grants. Four programs have issued or committed at least $50m in grants - Algorand (~$100m), Ethereum (~$110m), NEAR (~$540m), and Solana (~$50m).[2] If we include grant round operators, Gitcoin would also make the list, having facilitated over $50m worth of grants. Across all of the covered programs, over 5,900 grants were issued.

This report by no means covers all there is to cover about granting in web3. It is a working doc that will evolve as we get more feedback and input from the community. However, we do hope it serves as an additional stepping stone in the journey of better understanding grant programs in web3. The ‘Takeaways’ section delves into a variety of things that we hope can contribute to grant programs improving. Feel free to reach out with any feedback or to collaborate.

Thanks,

Eugene Leventhal and Mashal Waqar


Table of Contents

Summary

Table of Contents

Introduction

Public Goods and web3

Team and Motivations

Team Member Bios

Methodology

Grant Program Explorations

Active Programs

Sunset Grant Programs

Quadratic Funding Grant Operators

Active Web3 Grant Programs

Aave

Compound (Administered by Questbook)

Ethereum Foundation

Mantle

Solana Foundation

TON

Uniswap Foundation

Sunset Grant Programs

Algorand Foundation

NEAR Foundation

Polygon

Protocol Labs

Quadratic Funding Grant Operators

Clr.fund

Gitcoin

Discussion and Takeaways

Conclusion & Future Areas of Exploration

Acknowledgements


Introduction

From a mainstream perspective, web3 - a term used to encapsulate blockchain, cryptocurrencies, and the totality of communities and possibilities that emerge as a result of their creation[3] - is known for multimillion-dollar JPEGs,[4] record-breaking auctions,[5] and a veritable slew of scams and questionable activity.[6] Over the past 3 years, we have seen this space grow in terms of number of developers,[7] communities,[8] and tangible use cases,[9] along with the tens of billions of dollars in funding that have poured into it.[10] Technical advancements aside, one area remains less examined: grants. Despite hundreds of millions of dollars given out in grant funding and over a billion dollars in existing and forward-looking commitments, there's still a significant lack of understanding of web3 grants programs. This report aims to bridge that gap.

The purpose of this study is to demystify web3 grants programs and raise broad awareness of them. From conversations with operators of some of the longest-standing grants programs, who have witnessed the inception of protocols and web3, to nascent programs launched this year, we present a full spectrum of the grants landscape.

The evolution of this industry has led to the creation of subcultures, language,[11] and an idealism that has spun off DAOs (Decentralized Autonomous Organizations), communities, and a transition toward more promising uses of blockchain technology with positive potential.[12] While some of the grant programs in web3 were birthed as bounties to attract developers before they grew into sophisticated grants programs, others had simpler beginnings aimed at raising awareness about their protocols.

As you delve deeper, you'll read about grants programs actively issuing grants, then those that have sunset and/or transitioned to different formats, followed by grant round operators using quadratic funding. Throughout, we've strived for unbiased analysis, drawing from research that utilized both qualitative and quantitative data.
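For readers unfamiliar with quadratic funding, the core idea used by the grant round operators covered later can be sketched in a few lines: a project's match is proportional to the square of the sum of the square roots of its individual contributions. The sketch below is a toy illustration, not any operator's actual implementation; in practice, matches are scaled down to fit a finite matching pool.

```python
import math

def qf_match(projects):
    """Toy quadratic-funding (CLR) match: a project's matched amount is
    (sum of sqrt of each contribution)^2 minus the raw total raised.
    Illustrative only; real rounds scale matches to a finite pool."""
    matches = {}
    for name, contributions in projects.items():
        raw_total = sum(contributions)
        matched = sum(math.sqrt(c) for c in contributions) ** 2
        matches[name] = matched - raw_total
    return matches

# 100 donors giving $1 each vs. one donor giving $100:
crowd = qf_match({"crowd": [1.0] * 100})["crowd"]  # (100 * 1)^2 - 100 = 9900
whale = qf_match({"whale": [100.0]})["whale"]      # 10^2 - 100 = 0
```

The crowd of small donors attracts a far larger match than a single large donor of the same raw total, which is why the mechanism is pitched as rewarding breadth of community support rather than depth of any one contribution.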

Our hope is that this report sparks curiosity and encourages more people to learn about grants programs, the ecosystem at large, and takeaways that better guide approaches to building, growing, and operating in this space. Before we get into the grant programs themselves, we wanted to take a quick detour into public goods.

Public Goods and web3

Public goods have traditionally been understood as resources that are universally accessible, such as clean air or public parks. Economically speaking, they’re characterized as non-excludable and nondepletable (or “non-rivalrous”). A good is non-excludable if one cannot exclude individuals from enjoying its benefits when the good is provided. A good is nondepletable if one individual’s enjoyment of the good does not diminish the amount of the good available to others.[13]

As more activity shifts from the physical to the digital, the idea of digital public goods has emerged in recent decades. Groups like the UN talk about the importance of having digital public goods in the form of open source and accessible software tools that can help democratize access to information and algorithmically powered tools.[14] The UN supported the creation of a Digital Public Goods Alliance that is meant to facilitate “the discovery and deployment of open-source technologies, bringing together countries and organizations to create a thriving global ecosystem for digital public goods and helping to achieve the Sustainable Development Goals”.[15]

These digital public goods, often birthed and nurtured by dedicated communities, usually operate outside conventional for-profit frameworks. Their lack of commercial viability underscores the pressing need for alternative funding mechanisms, an area of concern for open source projects broadly.[16] These public goods are sometimes referred to as digital infrastructure, as discussed in detail in Nadia Eghbal’s seminal report titled “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure”.[17]

As web3 is a digitally native domain, the idea of public goods has its own incarnation in this space. Here they manifest as open-source software, decentralized applications (DApps), and protocols or tools that serve as basic infrastructure. Given the general funding challenges of public goods, grant programs in the web3 space have stepped in to fill the role that governments, foundations, and other philanthropic funders play elsewhere. The origins of web3 grant programs can be argued to have been centered around public goods, focused on building the infrastructure needed to help bring more developers to build and more users to interact with open projects and protocols.

Web3 grants, often disbursed by foundations or DAOs, aim to support projects that promise to significantly advance web3 broadly, with groups like the Ethereum Foundation[18], Solana Foundation[19], and Uniswap[20], to name a few, having publicly stated mandates supporting public goods. The impact of some of these grants is already evident, with numerous grant-funded initiatives now serving as cornerstones of the web3 infrastructure. There are also success stories of grantees who have grown to return as funders, giving back to continue the funding and development of public goods. Uniswap is a prime example of this, having received grant support from the EF and now issuing their own grants. These grants not only provide financial support but also foster a spirit of collaboration and innovation, ensuring the ecosystem remains vibrant and responsive to emerging challenges and user needs.

In this study, the authors did not review the nature of the projects that have been funded or whether they are public goods. An attempt at doing so would be complicated by the fact that some projects that started as public goods have evolved, or may evolve, into funded projects with actual business models, arguably precluding a subset of them from still being considered public goods. While the goal of this report is not to promote or shame certain projects for their commitment (or lack thereof) to public goods, we believe that providing some of this context is important to get a flavor for what inspired some of the earlier programs, at least in part.

To sum up, web3 grants can be more than just financial tools; they symbolize a potential commitment to a vision of an internet that’s more open, decentralized and centered around users. The continued evolution of these grant mechanisms is crucial in realizing the full potential of this industry and its long-term impact. Only time will tell if these intentions will materialize.

Team and Motivations

The team that put together this report included Eugene Leventhal and Mashal Waqar. They each brought a variety of interests and relevant experiences to this work. Both of them were interested in more deeply exploring the nature of grant programs, the types of projects they funded, and how they defined and measured impact.

Team Member Bios

Eugene Leventhal is currently the Head of Operations and Partnerships at Metagov, a governance research nonprofit, and will be stepping in as Interim Executive Director as of October 1. Prior to joining Metagov, Eugene was the Executive Director at the Smart Contract Research Forum, a nonprofit project focused on spurring more conversation around web3 research.

Eugene also supported what later became the Secure Blockchain Initiative at CMU, where he worked as a project manager for 2 years after finishing his policy masters there, all of which came after spending 7 years in professional services in the finance industry. He first got into the space in 2016 when he worked on eduDAO, a DAO meant to help schools and nonprofits crowdfund more transparently. Eugene is passionate about DAOs and governance as a means of pushing towards a more cooperatively rooted future.


Contact
Email: eugene@metagov.org
Twitter: @bbeats1

Mashal Waqar is the Head of Ecosystem at Octant, and Managing Director at Milestone Ventures. Her recent projects in web3 include heading partnerships at Bankless Publishing, operations at a web3 venture studio, researching token models for Seed Club, QF and UNICEF research for Gitcoin, and being an affiliate researcher in Ethereum Foundation’s Summer of Protocols. She initially entered the space by starting the Security Practices and Research Student Association (SPARSA)’s RIT Dubai chapter in 2015, and writing security newsletters while pen-testing fintech products at TPS (Transaction Processing Systems).

After that, Mashal co-founded a global media company (The Tempest) and an accelerator program for early-stage female founders, and has co-authored a paper on the challenges they face. Since 2021, she’s been DAOing with Gitcoin, Shefi, Protein, RADAR, and BanklessDAO. Mashal holds a B.S. in Computing Security with a minor in International Business from Rochester Institute of Technology (RIT). She’s a Forbes Middle East 30 Under 30 honoree and winner of the 19th WIL Economic Forum Young Leader of the Year award.

Contact

Email: mashal@milestoneventures.co
Twitter: @arlery

Methodology

We conducted 20 interviews with individuals from 13 different organizations. Interviews lasted 30 to 60 minutes. Research on the grant programs was conducted ahead of time, and the general history of each program was outlined during the interviews. The majority of the conversations focused on:

  • The current operations of the grant program at the time of the interview
  • The review process and size of the review team
  • How impact is defined and what impact measurements are used by the programs

Our interviews, and subsequently the grant program breakdowns that follow, were centered around the below points. Indented sections are subsections (e.g. org mission is a subset of the history portion).

  • History: covers some of the overall background information on the organization and is meant to help contextualize their grants program
  • Mission: overall mission of the organization
  • Grants Program Mission: mission of the grant program specifically
  • Grants Process & Operations: includes 5 subsections to help better understand the process and operations pertaining to the grant program
  • Grant Categories: did the program utilize prospective vs retrospective vs RFP-based vs research-focused grants
  • Grant Process: what was the specific process for submitting and reviewing grants
  • Team Size: what was the size and composition of the grants team
  • Response Timelines: how quickly would grants be reviewed and processed, at least in terms of publicly available information or stated goals
  • Grant Types: what were the specific topics that the grant program focused on
  • Reviewer Structure: more information on the reviewers and any information on the review rubrics
  • Impact Measurements: what metrics were tracked by the grants team
  • Community: what role did the community play in the context of the grants program
  • Stats: what overall stats were available or provided to us regarding the grant programs
  • Applications Received: the number of unique applications the grant program received
  • Projects Funded: the number of unique projects funded
  • Amount Issued: the amount of capital deployed in the form of grants
  • In some instances Amount Committed or Stated will be used to indicate that the amount issued was unavailable or had limited clarity around it and either the amount of funds committed to grants or stated in a single source were used instead
  • It is important to note that the term ‘committed’ varied across programs. Solana Foundation used the term in the context of grants they have already committed to that are awaiting grantees to hit their milestones. Others use it more vaguely to imply the total sum of grants to be issued in the future, frequently without providing concrete timelines
  • Challenges faced by the grants program: overview of some of the challenges faced by the program
  • What do they do very well: overview of some of the elements that this specific grant program executed well

Grant Program Explorations

As part of this report, we have interviewed program operators from a range of programs. We decided to break these programs down across Active Programs, Sunset Programs, and Quadratic Funding Grant Operators. The programs covered in this report, broken down by these categories, are as follows:

Active Programs

Sunset Grant Programs 

Quadratic Funding Grant Operators

Active Web3 Grant Programs

We begin the breakdown of programs by focusing on those still in operation as of August 2023.

Aave

History 

Aave, originating as ETHLend in Switzerland back in 2017,[21] has matured into a prominent decentralized finance (DeFi) protocol that enables users to seamlessly lend and borrow cryptocurrencies without intermediaries. Initially built on Ethereum, Aave expanded its reach to counteract that network's limitations, releasing versions on Polygon and Avalanche in 2021. The transition from ETHLend to Aave in 2018 signified more than just a name change; it was a shift towards pool-based lending and borrowing, greatly enhancing user experience.

By January 2020, Aave had surged in adoption, managing over $1 billion within six months. Along with the tech advancements, the essence of Aave's community-driven approach began to shine. Initially, the Aave ecosystem supported projects through individual grants. These gradually morphed into two significant rounds[22] of Aave Ecosystem Grants[23] led by Aave Companies in 2020. However, with the protocol's decentralization, grant initiatives soon transitioned to community control. A pivotal moment came in 2021, with the inception of the Aave Grants DAO (AGD), a community-driven grant program. This shift from company-led to community-driven grant initiatives reflects Aave's commitment to decentralization, transparent operations, and a deeply engaged community.

Aave Grants DAO (AGD) funds ideas submitted by the Aave protocol’s community, with a focus on empowering a wider network of developers, DeFi projects, community builders, and anything that helps the broader Aave Ecosystem.

Aave Protocol’s Mission

The Aave Protocol aims to enable global permissionless tokenized assets money markets.[24]

Aave Grants DAO’s Mission[25]

AGD is a community-led grants program that supports the growth of the Aave ecosystem by:

  1. Funding innovative projects backed by strong teams with creative ideas that benefit and strengthen the Aave ecosystem through activity.
  2. Fostering a strong and inclusive culture around Aave. A strong culture helps to attract, retain and incentivize the best contributors to the Aave ecosystem, and encourages the development of new and creative solutions to key areas in the Aave protocol and Aave DAO.[26]

Grants Process & Operations:

Grant Categories  

Aave’s grant program is focused on prospective grants that it breaks down into:

  • Rapid Grants
  • Community Grants

The prospective grants that are eligible for funding fall into the following categories:

  • Protocol development
  • Applications and integrations
  • Developer tooling
  • Code audits
  • Committees, sub-committees, and DAOs that serve the Aave ecosystem
  • Community (marketing and educational)
  • Events & hackathons

Grant Process

The grant application process involves a written application, followed by a review that can vary depending on the grant size. The reviewers quickly assess the quality and legitimacy of the application, filtering out spam or low-quality applications.

Application Process

  • Rapid Grants (less than $20k, fast process, smaller grants) - applicants fill out an application that’s shared with members of the grants committee. If it passes a round of review, the committee approves it and sends confirmation within a few days.
  • Grants ($20k – $80k, fast process, bigger grants) - applicants fill out an application that’s shared with members of the grants committee. The application then goes through an initial review, an interview with the applicant, and internal deliberation. Positive feedback leads to the grant being approved.
  • Grants above $80k - the team recommends filling out the same application as the previous grant types, with additional reasoning. They then assess the proposal and guide applicants on how to share it directly on Aave's governance forum. Grants beyond $80k fall outside the mandate of Aave Grants DAO; however, applicants can still receive feedback from the grants team on their project.

The grant process involves multiple payments and milestones, with grantees being encouraged to reach out for support if needed.

Team Size: 10 People

Response Timelines: 9.6 Days

Grant Types

  • Rapid Grants (<$20k, faster process, smaller grants)
  • Grants ($20K – $80k, fast process, bigger grants)
  • Grants Above $80k - snapshot vote by the community

All grants are either Rapid (<$80k), covering the two categories above, or Community (>$80k), which get the DAO’s input and vote. This breakdown comes from the original AGD proposal.[27]

The AGD program’s recent focuses have been on community engagement, grantee outreach, and treasury prudence. Treasury prudence has become a focus based on the sentiment in the Aave community.

Reviewer Structure:

A committee of 10 members: 1 lead, 1 analyst, 1 operations lead, and 7 reviewers. The lead is the organizational backbone of the program and ensures that things move smoothly and efficiently. The lead dedicates a significant amount of time to the program. They compensate reviewers on a per-review basis rather than an hourly basis, in order to incentivize thorough and thoughtful reviews.

Both the lead and the reviewers serve for a period of two quarters. After two quarters, the grants program and the committee member positions are up for renewal. This is put up on the governance forum for a discussion and subsequent on-chain vote.

The compensation for everyone in the team is shared in the governance forums.[28]

Impact Measurements: 

Impact is measured in a variety of ways including assessing whether the grantee completed what they set out to do, with monthly and semiannual reports provided.

Metrics they look at include:

  • Increasing TVL or other protocol metrics
  • Expanding the capabilities of the platform and utility of Aave
  • Providing accessible and novel insights to the community
  • Engaging new users and enhancing retention of current users
  • Growing GHO by helping to stimulate demand, expand its utility, and accelerate its transactional velocity
  • Growing the number of service providers (teams who receive funding directly from the main Aave DAO) who originally received funding from AGD to help validate and support their work

While these are some of the metrics they use, they don’t limit impact or protocol ROI to any single list of KPIs or metrics as they believe any project can benefit the Aave ecosystem or Aave protocol.

The AGD governance forum posts also mention measurable criteria for the grants program itself, that were tracked and reviewed in the form of:

  • Growth in the number of grants applications received quarter-over-quarter
  • Growth in the number of projects, ideas, and events funded
  • Growth in community engagement (i.e. increased activity on forums, Discord, etc.)
  • Growth in Aave pools driven by applications funded via grants (e.g. increased TVL, borrow activity, and unique addresses due to apps funded by grants)

And subjective criteria in the form of:

  • Improved sentiment and goodwill within the community
  • Improvement to the Aave protocol’s brand and positioning in the market

Community: 

AGD has fostered a strong, inclusive culture, successfully incentivizing the best contributors to stick around and support the ecosystem.

The AGD team posts monthly updates on the forum, breaking down details around grant spend including the approved number of grants, grant funding allocated, as well as grantee lists, along with any other relevant information.

AGD also highly emphasizes and encourages grantees to engage with other grantees in the community. There's a strong focus on a collaborative culture within the community.

AGD frequently iterates on its focus and on how to improve and move forward, based on the needs of the Aave ecosystem.

They prioritize high-quality grantees and make regular updates to the system. They also actively seek feedback from the community and are considering ways to deploy more funds effectively. The grants program has focused on creating long-term contributors to the Aave protocol. They incentivize grantees to become service providers and delegates, and to contribute to the ecosystem. They recognize that the strength of the protocol lies in their community.

Stats:

The chart below indicates the AGD budget for each proposal from 2021 through September 2023, and it includes event grants,[29] whereas the table covers project grants excluding event grants.

Year | Applications Received | Projects Funded[30] | Amount Issued
2019 | N/A | N/A | N/A
2020 - Aave Ecosystem Grants: Round 1 | 30+ | 23 | 70,000+ aDai
First Proposal (May ‘21 - Nov ‘21) | N/A | 38 | $1.17M[31]
Second Proposal (Dec ‘21 - May ‘22) | N/A | 32 | N/A
First - Third Proposal (cumulative, May ‘21 - Dec ‘22) | 1386[32] | 197 | $4,611,956
First - Fourth Proposal (cumulative, May ‘21 - July ‘23) | 2097[33] | 249 | $4,852,753
Fourth Proposal (Jan 2023 - July 2023)[34] | N/A | 50 | $885,020
Total (May ‘21 - Sept ‘23 update) | 2097 | 249 | $4,852,753[35]

Challenges faced by the grants program: 

Some of the challenges include coordinating within a larger DAO, staying up to date with a large portfolio of grantees, and ensuring a consistent stream of high-quality applications.

They’ve also acknowledged the importance of diversity of thought among reviewers and grantees, but have expressed the challenge of finding and retaining the right people. They’re considering changes to increase community involvement and address potential conflicts of interest.

What do they do very well:

AGD has consistently demonstrated a commitment to robust community engagement, as evidenced by their regular newsletter, which recently celebrated its hundredth edition. They prioritize grantee outreach, ensuring they remain accessible and responsive to their grantees' needs. Feedback from the community, although potentially biased, has been overwhelmingly positive.

AGD’s measurement and tracking of program metrics, shared in their monthly updates, is an important call-out. They’re one of the few programs who are diligent in transparency around program operations from an internal performance standpoint. Their commitment to regularly sharing these updates not only reflects a culture of transparency and accountability, but also sets them apart in the industry. Such practices are crucial for stakeholders and the community to understand the effectiveness of grants programs, and make informed decisions. AGD’s approach can serve as a benchmark for trust building, and operational excellence.

AGD also practices financial prudence, choosing not to spend their budget impulsively but rather focusing on supporting high-quality grantees. They’ve been proactive in responding to the community's desires, such as advocating for Treasury prudence. Additionally, AGD has been a significant supporter of upcoming updates, like the launch of GHO[36]. Their presence at events and sponsorships, like rAave, has solidified their position in the ecosystem. AGD's culture of continuous improvement is evident in their regular system iterations, making two to three changes every month.

Lastly, while they are cautious about their spending, they actively seek ways to deploy their funds more effectively, ensuring they support initiatives that align with their mission and the community's needs.

Additional Materials:

Compound (Administered by Questbook)

History: 

The Compound Grants Program (“CGP”) went live in March of 2021 after Larry Sukernik posted a proposal[37] (boardroom.io view[38]) that ended up passing. CGP 1.0 ran for 6 months, as approved in the initial proposal, and funded over 30 grantees while issuing over $1m in funding. Larry provided a retrospective[39] on the learnings from the first grant program, which concluded with a decision to pause grants.

The focus of this section is specifically on CGP 2.0, which was proposed[40] in December 2022, kicked off in January 2023, and was managed by Questbook.[41] CGP 2.0 started with a budget of $1 million across two quarters, managed by 4 people - the domain allocators, who were selected from, and chosen by, the community. These allocators were meant to have expertise and an established network in a specific domain, and each oversaw $200k in their domain. Questbook provided operational and infrastructure support for the delegated domain allocators, as they were called in that program.

As of July 2023, a CGP 2.0 renewal from Questbook has been proposed with the same budget, though only one domain (dApps) grows in size and the others shrink based on the demand and allocation from the previous 6 months. As of mid-August, at the time of writing, there is an active community discussion on Questbook’s proposal,[42] as well as two other proposals that have come in, though these are more focused on growth and business development than on grant programs per se - AlphaGrowth[43] and Alostar + web3 studios.[44]

Compound Mission

“Compound is an algorithmic, autonomous interest rate protocol built for developers, to unlock a universe of open financial applications.”[45]

Compound + Questbook Grants Program Mission: 

From the CGP 2.0 proposal:

“Specifically, this program aims to

  • Grow Compound’s grant program - measured by the number of builders applying and building on top of Compound
  • Delegate capital allocation - to identify, attract and fund projects/builders that the grant program would otherwise not have funded by delegating capital allocation to members of the community rather than a central disbursing committee.
  • Strengthen the builder community in the bear market - a crucial component of the community is to keep the builder activity alive even during the bear market. The building keeps innovation and optimism in an otherwise grim market condition. It also drives participation and transparency for the broader community.”[46] 

Grants Process & Operations:

Grant categories  

Both CGP 1.0 and 2.0 accepted prospective grants. For 1.0, there were learnings right from the start about the need for RFPs to help focus priorities and to focus the grantees. Here’s a quote from Larry’s retrospective discussing it:

“What happened is applicants would apply for grants for all sorts of projects, putting us in the position of assigning priorities to applications after they were submitted. Instead, the right approach is for grants programs to understand the level of priorities for the protocol and put out RFP’s that community members can begin working on. We didn’t do that at first, but we learned quickly. Now, we have a list of RFP’s that are a priority for the protocol to complete and encourage community members to work on them.”[47]

As one of the domain allocators, allthecolors, put it, “In practice, CGP 2.0 operated on a hybrid model where sample RFPs were provided as a part of a general call for proposals in each domain.”[48] Each domain had a sample RFP, as allthecolors called them - Protocol Ideas and dApps[49], Multi Chain[50], DevTooling[51], and Security[52]. These RFPs were available through the Questbook app, where the proposals were to be submitted, under the ‘Program Details’ section. These docs were generated with the Compound roadmap in mind.

There seems to be more of a push for formalizing RFPs into 2.0 renewal, with harsha (Sriharsha Karamchati, Questbook’s founder), sharing the following on August 16th, “Although all proposers were encouraged to review domain-specific RFPs before submitting their proposals, we will finalize the RFPs for the updated program after actively seeking input from both Compound Labs members and the broader community to ensure alignment with Compound’s focus areas and roadmap.”[53] 

Grant process

The CGP 2.0 review process hinged on the 4 domain allocators and the 1 program lead. Applicants would submit their applications to the domain allocators, who would conduct the review. The Questbook team provided operational support as needed.

Once domain allocators reviewed the projects, they would coordinate with the applicants of interest to clearly define milestones that would trigger payments (no payments were issued upfront).

Team Size: 5 people

The team consisted of 4 domain allocators and a Program Manager (PM). All five were on a parent SAFE (3 of 5 signatures required) and each domain had its own sub-SAFE, which required the signature of the PM and the relevant domain allocator.
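The two-tier signing setup described above can be modeled as a pair of simple threshold checks. This is a toy sketch, not actual Safe contract logic, and the signer names are illustrative placeholders:

```python
def parent_safe_approves(signers, threshold=3):
    """Parent SAFE: any `threshold` of the five owners must sign (3-of-5)."""
    return len(set(signers)) >= threshold

def sub_safe_approves(signers, domain_allocator, pm="program_manager"):
    """Domain sub-SAFE: requires both the PM and that domain's allocator."""
    return pm in signers and domain_allocator in signers

# A domain payout clears only with the PM plus the relevant allocator:
ok = sub_safe_approves({"program_manager", "allthecolors"}, "allthecolors")
blocked = sub_safe_approves({"program_manager", "cylon"}, "allthecolors")
```

The design choice worth noting: the sub-SAFEs let each allocator move quickly within their domain, while the PM co-signature and the 3-of-5 parent SAFE keep any single person from unilaterally moving funds.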

The domain allocators were (these are their forum names on the Compound discourse):

  • @allthecolors
  • @cylon
  • @Bobbay_StableNode
  • @madhavanmalolan

Response Timelines: 2 days (communication) and <2 weeks (funding decisions)

The team did not commit to specific response timelines. There were, however, comments indicating that knowing who the domain allocators were helped applicants know whom to follow up with when questions arose. Per the Questbook team, the domain allocators maintained an average communication turnaround time of under two days and funding decisions within less than two weeks.

Grant Types

The grants were put into four domains for CGP 2.0, as follows:

  • Protocol ideas and dapps by @allthecolors
  • Security tooling by @cylon
  • Multichain Strategy by @Bobbay_StableNode
  • Dev Tooling by @madhavanmalolan

As part of the voting for CGP 2.0, a ‘miscellaneous’ domain was a topic of interest for the community, though it was not included in the final proposal.

If you want to see the specific applications submitted across these four domains, check out Questbook.app[54] to see the applications and the milestones pertaining to them.

Reviewer Structure:

All of the proposals for CGP 2.0 were reviewed on a rolling basis by the domain allocators, who themselves were elected by the community. These domain allocators would use the rubrics they developed to assess the applicants, which would in turn generate a score to help them decide whether or not to fund the projects. Given the overviews of desired applications provided by the team, some of the domain allocators felt that many of the submissions fell outside of the scope and hence did not get funded.

There are also a number of planned improvements in terms of the review process. While there was no official community input into the reviews as part of CGP 2.0, there seems to be discussion of factoring in such feedback in the future. Here is a quote from a recent post at the time of writing:

“I believe CGP can help elevate this involvement:

  • Feedback Incentives: We could stimulate quality feedback from the community by acknowledging their contributions in meaningful ways.
  • Interactive Sessions: We could create more venues for discussions by facilitating structured question periods or topic-specific discussions. This could foster active participation and a more dynamic exchange of ideas.
  • Understanding the community’s needs is crucial in delivering something truly valuable. We need feedback and I believe these improvements could encourage it. Looking forward to seeing the growth of CGP 2.0!.”[55]

Impact Measurements: 

The Questbook team did come up with some benchmarks as part of working with CGP, one of which was the life cycle of impact. This life cycle needs to be viewed on a time horizon of over a year, given that projects take time to build and mature.

These metrics were designed by the domain allocators and the Questbook team, and were shared with the community. The Questbook team supporting CGP and the domain allocators provided information on the metrics during community calls though we were unable to find the rubric itself on the forum.

Some metrics that were used were net promoter score and an experience score. They shared all of their findings with the community and included the learnings from CGP 2.0 in the Grants Program Playbook the Questbook team put together. From our understanding, Questbook as the underlying grant infrastructure provider, is still exploring the set of metrics that can be gleaned from the data they capture. This is an ongoing process and it will be interesting to see what other metrics they come up with for both CGP and any other programs they support.

Community: 

The Compound community has been deeply involved in the process of shaping the proposals. The community also showed its appreciation for the time and expertise of the reviewers and voted strongly to increase the compensation of the allocators.

A demo day, which took place on July 21, 2023, was organized to share information on funded projects.

Feedback from a community member:

“Another thought is it might be better for you to consider doing a survey among grantees at the end of every grant program where grantees anonymously fill up a simple survey form on what their experience was, and where they would like to see improvements. I first filled up such a survey during our work with the Balancer protocol last year and I thought it was a great idea to get feedback.”[56] The Questbook team did indeed put together a feedback form and shared some of the takeaways from that survey with the community as well.[57]

Stats:

Year | Applications Received | Projects Funded | Amount Issued
2021 | Not available | 30+ | $1m+
2022 | N/A | N/A | N/A
2023 | Q1&Q2 - 75 | Q1&Q2 - 46 | Q1: $670k committed, $252k disbursed
Total | 105+ | 76+ | $1.25m

Here is a specific breakdown of funding by domain allocator from the CGP 2.0:

  • Protocol ideas and dapps by @allthecolors : $247K
  • Security tooling by @cylon : $100K
  • Multichain Strategy by @Bobbay_StableNode : $82K
  • Dev Tooling by @madhavanmalolan : $22K

A detailed funding breakdown by project was provided by the Questbook team[58].

Challenges faced by the grants program: 

From the original 1.0 program, Larry wrote of two main challenge areas:

  • “Launch and ‘they will come.’”
  • 2-week turnaround.[59]

It was tough to source quality submissions, which led to the development of RFPs to keep submissions more focused. In terms of turnaround time, even with a dedicated review committee, it was not possible to stick to the desired two week turnaround time.

One challenge area for 2.0 as with 1.0 was getting quality submissions that focused on the problem areas that were aligned with the priorities of the domain allocators. Only the dapps domain received a large number of applications, which resulted in a different proposed budget in CGP 3.0 from Questbook.

A community member raised a fundamental concern around the value that some of the funded projects were providing to the ecosystem. One specific example concerned a project that had already received funding from Optimism, “despite Compound Labs already communicating the work being done there months prior.” Some clarifications were provided on the logic behind the decision; regardless, at least one community member still has questions about the due diligence applied to some of the grants and about the overall impact of some of the funded projects.

What do they do very well:

The CGP 2.0 proposal clearly had a lot of thought and both internal (foundation and community) and external input that went into the proposal. The nature of the discussion that followed showed the interest from the community in discussing the program and the openness in revising and adjusting the program based on the community input. The transparency in the program design seemed to benefit from a) learnings from other community-approved grant programs as well as b) general learnings on effectively running grants programs.

The transparency in terms of the evaluation process and criteria also helped limit duplicate proposals and helped applicants understand what they needed to improve in order to receive a grant in the first place.

Additional Materials:

Ethereum Foundation

History

Ethereum is a decentralized, open-source blockchain system that features smart contract functionality. While the development of Ethereum started in 2013[60], and the first Proof of Concept (PoC) rolled out in 2014, the beta launch of Ethereum, Frontier, was in 2015, and the second major version release of Ethereum, Homestead[61], rolled out in 2016. Ethereum is one of the most established[62] crypto networks today.

The Ethereum Foundation (EF) was created to support the Ethereum ecosystem.[63] They’re part of a larger community of organizations and individuals that fund protocol development, ecosystem growth, and advocacy of Ethereum, ensuring its security, scalability, and sustainability.

While the term “web3” became mainstream in 2021, it was coined in 2015, and was often hashtagged in EF’s blog posts that year. 2015 was also when EF launched their inaugural grants program, ÐΞVgrants. Prior to this, they’d started a bug bounty program in 2014.[64]

The EF, being one of the oldest organizations in crypto, has a rich history of supporting the Ethereum ecosystem. Over the years, they’ve experimented with various methods of support, leading to their current landscape of the Ecosystem Support Program (ESP) along with select programs run by other teams at the EF with support from ESP as needed. For this report, we’ve outlined ESP, which has an outward-facing grants program.

The Ethereum Foundation's Mission:

To do what is best for Ethereum's long-term success. To allocate resources to critical projects, to be a valued voice within the Ethereum ecosystem, and to advocate for Ethereum to the outside world.

Ecosystem Support Program's Role:

As the public facing allocation arm of the Ethereum Foundation, ESP provides funding and other forms of support to eligible projects working to improve Ethereum. ESP’s focus is on work that strengthens Ethereum's foundations and enables future builders, such as developer tools, research, community building, infrastructure and open standards. The work they support is free, open-source, non-commercial, and built for positive sum outcomes.

The original scope of EF grants in 2018[65] was open source, funding for future work, trying not to advantage teams over one another, and avoiding playing favorites or giving EF’s “stamp of approval”. Overall, their approach was and has remained to stay neutral, especially with regard to endorsements of different projects.

ESP grants are not prescriptive, with a few different routes of grant giving. Some examples of these are Proactive Community Grants rounds, such as the Academic Grants round, L2, Account Abstraction, etc. Different teams within the EF award grants based on their priorities. While ESP grants are more open ended, these grant rounds allow teams to bring awareness to particular domains. The ESP provides cohesion across these teams, offering support and infrastructure for grants.

They’ve developed a "layered" model to allocate resources effectively:

  • ESP: They have 3 tiers of support for applicants, with two different grant tiers depending on the size of the project. We’ve outlined more details around this below.
  • EF Teams: Internal groups that contribute directly to the Ethereum ecosystem. Other teams within EF also allocate and manage grants to achieve their ecosystem goals.
  • This includes proactive grants where internal teams identify and present priority projects to the ecosystem hoping to learn of various creative solutions, and initiate financial support discussions.
  • Delegated Domain Allocators: They collaborate with external groups to decide funding within specific domains like zero-knowledge tech, developer experience, and more.
  • Third Party Funding: Direct funding to external groups like 0xPARC, Nomic Foundation, and ETHGlobal, empowering them to decide allocation.

They’ve continuously refined their processes to ensure efficient decision-making while not missing out on quality applications. They’ve split their grant application process to cater to different needs and have adjusted their focus based on feedback.

The ESP grant application process depends on the type of project and funding required. There’s additional support offered in the form of Office Hours, where folks can connect directly with a member of the EF’s Ecosystem Support team for support other than funding, including project feedback, redirection and navigation within the Ethereum ecosystem, and questions around the process of submitting a grant application. This is best for non-financial guidance.

Small grants, capped at $30,000, have a streamlined application and evaluation process to deliver a decision around two weeks after submission.

  1. A potential grantee submits an application via ESP’s small grants form here.
  2. After submitting, they’ll likely receive a confirmation email within two business days. Potential grantees can reply to the confirmation email if they have questions or want to share additional relevant information.
  3. Potential grantees can expect to hear back from the ESP team with a final decision around two weeks after they submit it.
  4. The next steps might include
  • gathering more information
  • getting input from advisors
  • working with potential grantees to refine or rescope the project proposal.
  5. Grantees undergo the grantee onboarding process, including a KYC check, and may receive funds in fiat, ETH or DAI.
  6. Once the grant work is completed, depending on the scope of work required, grantees deliver the completed work to their assigned evaluator. The deliverables can vary in format: reports, blog posts, videos, GitHub repos, presentations, demos, proofs of concept, MVPs, etc.

A subset of small grants are community events and sponsorships capped at $20,000. These can cover events such as meetups, hackathons or conferences. Those who are awarded sponsorships have a much simpler route as they don’t typically need to sign agreements or complete KYC.

Project grants have no specific funding cap and undergo the same process of review and potential rescoping as Small Grants. The biggest difference from Small Grants is the approval timeline, which depends on scope and complexity and may involve several rounds of review and re-scoping, although they target 2-3 months for decision-making.

Decisions are made faster on Small Grants because of the smaller scope compared to a Project Grant.

  1. A potential grantee submits an application where they go into depth about their goals, motivations, plans and intended impact via ESP’s project grants form here.
  2. If the project is within scope for ESP support, they’ll begin a deeper evaluation of the project's technical approach, potential impact, risks, and other factors.
  3. The next steps might include
  1. gathering more information
  2. getting input from advisors
  3. working with potential grantees to refine or rescope the project proposal.
  4. Once the proposal is finalized, they’ll make an allocation decision based on their assessment as well as input received from advisors.
  5. Grantees sign a grant agreement, complete KYC, and receive funds in fiat, ETH or DAI.

Grantees will have a point of contact at the EF who will check in with them regularly as they progress with their work for both Small Grants and Project Grants.

Reviewer Structure:

The ESP team did not want to comment publicly on the size of their review team.

The EF has a thorough review process, and they give importance to diverse thought in it. Despite an overwhelming number of applications, they want to ensure each application is thoroughly reviewed. There was mention of digging deeper into projects that may not seem “typically great” at first, and of extracting and discovering their potential through further questioning.

Impact Measurement: 

Impact is a core focus for the EF. They monitor active grant agreements and milestones, ensuring that the objectives are met.

However, defining impact can be challenging. For the EF, success isn’t just about the amount allocated or the number of grants but the actual impact on the ecosystem, which makes it difficult for them to measure. One perspective they hold is impact through talent building, individual evolution, and collaboration within the ecosystem; however, this is difficult to measure at scale. They don’t have strict metrics for impact, making it a continuous point of discussion and evaluation internally.

They try not to highlight success stories because they believe in letting each project’s successes speak for themselves. However, an exception to this is their Grantee Roundups, where they highlight some recent grantees’ projects. The motivation is not necessarily to highlight success stories, but rather to highlight impactful projects and to shed additional light on what it means to be a grantee.

Community and transparency:

They don’t have a formal process for community feedback that feeds into the grants program, but they do informally keep an ear to the ground, listening to the community's needs and feedback. They sometimes prioritize applications on specific topics based on the current priorities of Ethereum or the industry at large.

They maintain transparency through their quarterly reporting, with updates provided for every quarter from Q1 2021 to Q2 2023.

Stats Overview

Year | Projects Funded | Amount
2018 | 52 projects (Wave III) | $11M
2019 | 68 projects | $7.7 million
2020 | 99 projects | $12.9 million
2021 | 136 projects | $26.9 million
2022 | 232 projects | $30,043,145.70
2023 Q1-Q2 | 113 projects | $22,128,774.87
2018-2023 Total | 700+ projects | $110,671,920

What do they do very well:

ESP has several strong points. A major one is their meticulous approach to reviewing grants. They take the time to closely examine each application, ensuring they don't just spot obvious good projects but also uncover hidden potentials. Their focus isn't just on a single area but on the entire ecosystem. At times, they may direct applicants to other groups better suited to assist them, showing their genuine interest in the growth of the ecosystem over merely dispensing funds. Even after being in operation for some time, the EF remains committed to its original goals. Their decisions are rooted in their values: long-term vision, the principle of subtraction, and the overarching mission of Ethereum. By focusing on clarity in their vision, making intentional choices, and upholding their core tenets, they've fostered one of the most vibrant communities in web3.

Finally, the EF has done a good job with their adaptability. They've transitioned from extended grant durations to shorter, more agile periods between six months to a year, streamlining their review process and resource utilization. What's also impressive is the operational independence within EF. While some teams can directly award grants, the Ecosystem Support Program (ESP) is consistently available for assistance, tailoring its support to the specific needs of each team and grant round.

Challenges faced by the grants program: 

At the start, the program faced challenges due to a one-size-fits-all application process. This shifted dramatically when they split the process into three specific application types. However, the constantly evolving nature of the ecosystem posed another set of challenges. A grant scoped out for a duration of 12 to 18 months might find its relevance questioned in just half that time due to the rapid changes in the ecosystem. This unpredictability necessitates flexibility and adaptability in deciding whether to pursue the original scope, redefine it, or even potentially pull back funds. The sheer volume of applications, especially in favorable market conditions, is yet another hurdle. Despite a clear set of criteria, many just try their luck by applying. Sifting through these to find the gems that can genuinely impact the Ethereum ecosystem demands immense effort. A notable issue is that some applications might inadvertently conceal their true potential, making the review process even more challenging.

The EF's relationship with the community also introduces complexities. While the community's prevailing interests might influence the prioritization of some grant topics, they don't directly dictate how the EF allocates its funds. It's essential to strike a balance to ensure the ecosystem grows in a cohesive manner.

Additional Materials:

Mantle

History: 

Mantle is an EVM-compatible Layer 2 blockchain developed by the former BitDAO ecosystem.[66] The idea came together in 2022;[67] BitDAO rebranded under the Mantle name in May-June 2023 under BIP-21,[68],[69] and the mainnet launched in July 2023.[70] 

The grants program was first proposed on their forum in February 2023, alongside a vote to launch “a capital pool of $200 million to be deployed within the Mantle ecosystem over the next 3 years from the Mantle EcoFund and Strategic Venture Partners.”[71] To be clear, the Grants Program and the EcoFund are two separate pools of funds. The Testnet phase budget was allocated in early 2023, with 10 million BIT and 14 million USDC committed to Ecosystem Incentives, Bug Bounty & Security Audits, and Operations for the program. Here is a breakdown across these funds:

[72] 

With that vote passing, granting at Mantle kicked off during the summer of 2023, supplementing some of the other types of ecosystem support they were providing. This included access to auditing, bug bounties, developer relations and support, integration support, and co-engineering.

From the original forum post, the VC style EcoFund was also described: “The Mantle EcoFund should strive to be the ‘first money’ into teams building quality and innovative projects within the Mantle ecosystem. We would invest alongside the Strategic Venture Partners of BitDAO and Mantle Ecosystem, and start supporting projects at the Pre-seed and Seed stage with the option to double down on potential big winners with promising traction and stronger use cases with $BIT whenever possible.”[73] BIT was going to be the token of BitDAO, but was rebranded under Mantle, as mentioned above.

Mantle Mission:

While there is no exact mission statement, their site does say, “Mass adoption of token-governed technologies.”[74]

Mantle Grants Program Mission:

“Mantle Grants Program provides milestone-based funding to support initiatives aimed at expanding, securing, and decentralizing the Mantle network.”[75]

Grants Process & Operations:

Grant categories  

Aside from doing investments, the Mantle grants program launched focusing on prospective grants and is looking to add retrospective granting. It is already issuing grants to some projects for work that has already been completed or is at least in progress.

They do not currently have an RFP program but work to provide some guidance for potential builders.

Grant process

When the grant program started accepting applications, it accepted them on a rolling basis, reviewed them weekly, and aimed to issue a set number of grants each week.

The program is in the process, as of our interview in early August 2023, of switching to a cohort based model. These cohorts are defined with specific funding targets in mind and come along with a general ecosystem development strategy for that cohort. The purpose of structuring in cohorts is to have more “focused funding and more application-specific standards when evaluating projects.”[76] At the time of our interview, the grants program was in cohort 2 (where cohort 1 was all of the grants that were accepted on a rolling basis prior to that).

Grants seem to be sourced roughly two ways:

  1. From grant applications
  2. From the business development team as they spot potential contributors to Mantle

Once applications are submitted, they go through the review process outlined below and the successful grants proceed to due diligence and fund disbursement. As part of the process, they work with prospective grantees to co-define the milestones to ensure that they are aligned with organizational KPIs and to have alignment on vision of success. Throughout the discussions exploring the grant, the team looks to get a sense of how this project will add value to the Mantle ecosystem.

There are more changes to come on the grants program and announcements will be out later in the Fall. One of the changes they’re exploring is how to make the review process more operationally efficient.

Team Size

The team was initially just the grants lead; an additional full-time employee was brought on to support, growing the team to 2 people. Additionally, the Mantle operations team supports the actual disbursement of funds. Including the operational support, the business developers, and those involved in the review process, roughly 20 or so people support the program all in all (though the majority are not focused on the grants program full-time).

Response Timelines

The specifics of response timelines, both in terms of desired and realized timelines, are unclear at this point.

Grant Types

There are primarily three grant types that Mantle focuses on:

  • Type A
  • Type B
  • Type C

Type A projects are focused on early stage development and deployment and retrospective applications are accepted. The target grant amount is $10,000, preferably in USDC.

Type B projects are for projects that are already live on the Mantle network and are focused on incentivizing expansion or migration to Mantle. Grant sizes can vary from $50,000 to $150,000, preferably in MNT.

Type C refers to projects that are seeking venture funding, which is outside the scope of the grants program. Projects that apply to Type C via the grants program are then directed to the appropriate teams internally.

There currently are no plans for community or education focused grants as those sorts of applications are shared with marketing and business development to assess. If there is appetite to support them, those funds would come from the marketing and communications team.

Reviewer Structure:

Mantle has 4 levels of review, which start with the business development team reviewing the grant applications to understand them and assess whether they are generally worth pursuing.

From there, the grants lead reviews, specifically exploring if the milestones make sense in the scope of the project and whether the project is aligned with organizational priorities. From there, a memo is drafted that gets shared with the Venture Capital team that is also part of the ecosystem team alongside grants. They explore if it’s worth doing an investment versus a grant.

From there, the grants committee reviews the information and makes a final call. The composition of the grants committee is not public.

Impact Measurements: 

As a newer program, Mantle very much views its metrics as ones that are evolving and likely to change over time. One aspect they’re considering internally in the context of apps is to support projects across the major app categories that are already present in other ecosystems.

They also have some domain specific metrics they consider, such as:

  • DeFi - TVL as a driver of liquidity and volume
  • Gaming - engaged and sustained users, number of transactions per users
  • NFTs - presence of infrastructure is the main focus for now

They are also exploring some data driven metrics across their grants program, such as the average revenue per user, while trying to get more nuanced with the definition of a user (trying to prioritize for users who stick around).

Community: 

While the proposal for the grants program was shared on the forum, the internal team really led its development and rollout. They did get some level of input and feedback from core community members and builders, but that was done in a more direct vs forum-based way.

The grants team is currently thinking of how to best share grants results with the community. They believe that the more transparent they are with the community, the more projects will apply to the program. Given it’s still early days, it has been challenging to find time for that, especially while the dedicated grants team was a single person.

Stats:

The Mantle team does not have data to share yet. They are clarifying their numbers and coming up with a release strategy.

Year | Applications Received | Projects Funded | Amount Issued
2023 | TBA | TBA | TBA

Challenges faced by the grants program: 

One challenge has been managing the flow of grant applications with a small team, especially as they try to figure out how to share more with the community. Similarly, getting the grants opportunities to the right folks and finding more qualified builders has been a challenge as the program is ramping up in its first months.

What do they do very well:

The Mantle grants program is still quite new so it is tough to hone in on what has gone well. The fact that they were able to launch with a focus on both prospective and retrospective grants is something that has been appreciated by the community.

The Mantle grants team is interested in thinking about how to help teams self-serve more effectively. Figuring out the process and rollout for that is taking some time and considerable effort.

Additional Materials:

Solana Foundation

History: 

The origins of the Solana blockchain go back to November 2017, when Anatoly Yakovenko first published a paper describing a Proof of History consensus mechanism. In early 2018, the first open source implementation based on Anatoly’s whitepaper was built by Greg Fitzgerald, who would go on to be a co-founder and the CTO. In July of 2019, the team released a permissioned, public testnet.

The Solana Foundation (SF) was founded in 2019 with the goal of supporting the Solana ecosystem. As a subset of that goal, the capital team of the foundation aims to get great builders on Solana the right type of funding at the right time. While the foundation announced its first wave of grants in November 2020, it transitioned to, and has since stuck to, accepting grant applications on a rolling basis.

The team at the foundation responsible for grants is thought of internally more as a capital team, where grants are one of the tools at their disposal to accomplish their goal, which is outlined below. For consistency with the rest of the report, we will refer to this team as the grants team though they do some convertible grants and direct investments as well, even if the latter has been done quite rarely.

Solana Foundation’s Mission

“To help build the Solana protocol into the most censorship resistant network in the world.”[77]

The Solana Foundation Grants Program’s Mission

“The Solana Foundation Grants Program provides milestone-based funding to support initiatives aimed at decentralizing, growing, and securing the Solana network.”[78]

Grants Process & Operations:

Grant process

The overall process is as follows for prospective grants on a rolling basis:

  1. A potential grantee submits an application via SF’s online form
  2. An initial pass is conducted on the applications on a daily basis that is meant to a) highlight priority applications that are most aligned with pressing needs in the Solana ecosystem, with an emphasis on public goods and b) remove any spam applications.
  3. Potential grants get funneled into the appropriate vertical (i.e. DAOs, NFTs, DeFi, etc.)
  4. Weekly meetings take place between the grants team and the vertical teams, which include a mix of technical and business oriented subject matter experts (SMEs) as needed
  5. The SMEs provide inputs on all aspects of the grant with particular attention paid to the milestones and deliverables
  1. If needed, a rescoping of milestones and/or deliverables takes place at this stage
  2. The grants team may also work with the prospective grantee to refine the proposal
  6. Once a project is done with scoping, the grants team focuses on remaining administrative and contractual approvals needed to execute

RFPs follow a similar process, though the application is expected to be clearly aligned with the RFP itself. The RFPs are crafted by the relevant SMEs within the foundation.

Convertible grants follow a similar process with the main difference being that if the project raises a round of funding after any grant payments, then those payments are converted into shares or tokens at an agreed upon rate. Any returns from such a convertible grant or investment are funneled back to the grant program.
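The conversion mechanic described above can be illustrated with a small worked example. The function and all numbers below are invented for illustration; the Solana Foundation’s actual conversion terms are negotiated per grant and are not public.

```python
# Toy illustration of a convertible grant: if the grantee later raises a
# funding round, prior grant payments convert into shares or tokens at an
# agreed-upon rate. All values here are hypothetical.

def convert_grant(payments_usd: list[float], price_per_unit: float) -> float:
    """Return the number of shares/tokens the prior grant payments convert into."""
    return sum(payments_usd) / price_per_unit

# e.g. $30k paid across two milestones, converting at $0.50 per token:
tokens = convert_grant([10_000.0, 20_000.0], 0.50)  # 60,000 tokens
```

Any returns from such a conversion would, per the program’s design, flow back into the grant pool rather than being treated as investment profit.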

Investments have a more ad hoc process that involves a much wider committee and has only been exercised 2 or so times.

Team Size

The grants team has a total of three dedicated individuals, including 2 dedicated internal staff and 1 contractor. Additionally, over 15 subject matter experts support the team with grant review.

Response Timelines

The team has a general timeline for getting back to applicants. For projects that are clearly not a fit for the Solana Foundation Grants Program, the team works to let those individuals or teams know within a week of receiving the application.

For applications that require a conversation (further scoping, more information needed, etc.), the meeting with the relevant team member (grants team and/or vertical subject matter expert) will be scheduled within 10 days.

The team has a goal of finalizing grants within a month. To give a sense of how variable this can be, the fastest the team has processed a grant is 4 days, and the longest took over 2 months.

Grant Types

The grant program's approvals are broken out into three tiers by grant size:

  • For smaller grants, the SMEs are able to move forward with their decision
  • For mid-sized grants, there need to be sign-offs from the grants team/SME and one executive from the capital funding committee
  • For large grants, applications go to review by the executive committee for capital funding, with input from the majority of members

The exact delineations between these grant tiers are not public knowledge as of August 2023.

Reviewer Structure:

The Solana Foundation Grants Program has dedicated subject matter experts who work on grant review every week. These SMEs are team members in specific domains (i.e. business development, technical leads, etc.) within verticals (i.e. DAOs, NFTs, DeFi, etc.) and are tapped for review as applications come in. As the applications get more technical, engineers may be tapped.

Impact Measurements: 

The Solana Foundation grants team does have a rubric that they use to help them craft initial agreements in terms of milestones, deliverables, and general expectations, though that rubric is not public. The rubric below is one that is intended to help them evaluate the success of a grant when looking back on the grant’s progress; it is not meant to be a full encapsulation of potential impact measurement but is viewed as a starting point and a way to ground their thinking.

The five metrics that all grants get assessed by are as follows:

  • Did the team hit the milestones they committed to hitting?
    • This presumes that the milestones were crafted well in the beginning and are realistic.
  • Adoption impact
    • Co-defined with grantees
    • The grants team assesses whether grantees underperformed relative to expectations, performed as expected, or exceeded expectations.
  • Public good
    • How usable are the outputs of this grant for others in the Solana ecosystem?
    • Does it solve problems beyond the specific problems of the grantee?
  • Staying power
    • Is this grantee still in Solana?
    • Has the grantee gone on to raise funds?
    • Is the project still ongoing beyond the initial scope of the grant?
  • Value for $
    • Could someone else have done this, and could they have done it cheaper and/or faster?

Community: 

There is no consistent process of reporting results to the community or more publicly. The team is working on launching a quarterly grant report to the community by early next year.

The foundation has informal processes for community feedback on an ad hoc basis.

The foundation funds multiple community grant programs such as MonkeDAO and Superteam, amongst others. Additionally, the SF contributes to matching pools for community grant programs such as Cubik.

Part of the support that they provide projects includes having ecosystem engineers on staff at SF to ensure that the builders in the community have the technical knowledge and research they need to advance on their work.

The team is exploring the addition of retroactive public goods funding models to better compensate contributions to the ecosystem. They are also looking into various ways to support projects with funding beyond grants, such as creating networks of VCs with vertical or geographic expertise.

The foundation is interested in figuring out better ways to share knowledge with and to directly support the community. This includes exploring how to source potential grant ideas and RFPs from the community.

Stats:

Year    Applications Received    Projects Funded    Amount Committed
2022    5k+                      300+               $50m*
2023    -                        -                  -

* Solana Foundation has committed to issue $50m from 2022 - 2023. We are using the term ‘committed’ for SF because it is predicated on grantees hitting all of their milestones. If all of the grants they have agreed upon fulfill their contractually agreed upon milestones, then the $50m will be fully issued to grantees. This is to differentiate from groups that broadly ‘commit’ a pool of capital over some period of time in the future.

Challenges faced by the grants program: 

Like other programs, especially well-known L1 grant programs, they deal with a high volume of inbound interest, and both sourcing and filtering quality projects can be a challenge.

The SF team also mentioned that a challenge with transparency has been potential applicants reading too much into the numbers. For example, when some form of ‘average size of grant issued’ was shared with the community, there was a significant increase in the number of applications requesting that average amount.

A challenge that was brought up, endemic to web3 grants more broadly, is the desire for free money without any expectations of milestones or deliverables. The presence of such applications makes filtering for quality applications all the more challenging.

What do they do very well:

The SF has found a review structure that works well for them, balancing the workload placed on different individuals and finding the right expertise. They are also a program that has been focused on technical public goods, as indicated by their mission and their operations.

The SF has shown creativity and flexibility in its approach to grants. To our knowledge, they are the only active grant program that utilizes convertible grants. Additionally, with the launch of the AI grants focus, a subset of the overall grants program, the SF showed its ability to quickly deploy a new thread of granting, launching it in under two weeks.

The foundation also works with the ecosystem to ensure that there is sufficient funding coverage both geographically and in each vertical.

Additional Materials:

TON

History: 

TON, or The Open Network, is a layer one chain originally developed by the Telegram team. Though the team had developed an initial chain in 2018, due to legal issues they had to return funds to investors and cease developing TON.[79] The whitepaper is dated June 2021,[80] reflecting the additional elements built by the open source developers who took over TON’s development after the Telegram team had to step back. These include the TON blockchain, TON proxy, TON payments, and TON storage.

In May of 2022, a grants program was launched as part of an announcement that the foundation would allocate a total of $200m to a grant and sponsorship program, split across direct grants and event sponsorships. These two streams are managed by different individuals, and the subsequent information is focused on the grants program itself. The foundation utilizes Questbook to administer the grant program, which features some integrations with Telegram as a result of a grant that the Questbook team received.[81]

TON Mission:

“A decentralized and open internet, created by the community using a technology designed by Telegram.”[82]

TON Grants Program Mission:

“TON Foundation provides grants for projects that contribute to TON core infrastructure and introduce new practical use cases.

Our grants program is designed to support a wide range of initiatives:

  • Open-source technical projects
  • Teams that develop unique commercial use cases with a compelling value proposition
  • Integration with other ecosystem players, promoting collaboration and interoperability

TON supports talent. We are helping hundreds of builders to battle-test their skills and knowledge while contributing to public good.”[83]

Grants Process & Operations:

Grant categories  

The grant program at TON commenced with a focus on prospective grants so that the community would not be limited in its proposals.

While internal teams, such as the business development team, have some sense of priorities, these had not been formalized to the community as RFPs as of July 2023.

When it comes to project grants (described below), grants are only issued to new projects. Funds are not currently being issued for retrospective contributions.

Grant process

The TON grants team utilizes Questbook to manage their inbound grant flow and grant review process. As a result, prospective grantees are able to submit their project via Questbook.[84]

Once applications are submitted, someone from the grants team does an initial review of the proposals. If the proposals are of interest to the foundation, then relevant experts from the wider foundation team will be tapped for review.

From there, a decision is made as to whether or not to support the grant.

Team Size

The grants team itself is made up of three individuals who are focused on reviewing and managing inbound grant proposals. The total number of individuals contributing to the grants program overall is closer to 10 people, including those who have built ecosystem tooling from the foundation side, the DeFi lead, and other domain experts.

Team structure is a bit fluid for the purpose of staying focused on being “servants of the builders” and taking the time to support whichever approach (grants, sponsorship, accelerator, etc.) may be best for a specific builder.

Response Timelines

The TON grants site states that it “takes, on average, 7-10 business days to process a proposal and make a decision.”[85] Based on what is seen on the Questbook app, that claim seems accurate.

Once a month, the grants team announces roughly 5 grants, though these are not considered monthly rounds per se. The grant program is continuous; the frequency of announcements reflects a communications cadence.

Grant Types

The TON grants site[86] currently lists the following areas of interest:

More broadly, the grants team sees two types of grants: contribution grants and project grants. Contribution grants focus on fulfilling certain needs in the tooling ecosystem, addressing onboarding gaps, or covering certain educational efforts. Project grants are meant to support new projects beginning to build in the TON ecosystem.

As noted above, the grants team doesn’t have any RFPs or formalized methods of outbound grants. However, there is an awareness of what types of dapps and projects may be missing in the TON ecosystem relative to other L1 ecosystems and so there may be some amount of encouragement of the community to propose in certain areas. The team also does direct outreach to major contributors in the TON ecosystem as well as to any groups delivering quality work to explore potential future collaboration.

The team grants to both open and closed source projects.

As of September 5th, 2023, the TON team released the following breakdown of the types of grants they’re focused on:[87]

  • Type A – Initial Deployment
  • Type B – Live TON-based Projects with Existing User Base
  • Type C – Live Non-TON-based Projects with Community
  • Type D – Projects Seeking Venture Funding

Reviewer Structure:

Grant reviews are currently done by a mix of the grant team and relevant domain experts from the organization.

Impact Measurements: 

The TON grants program is focused on milestone-based payouts. As a result, one core metric is whether a project hits its first milestone and, ultimately, all of the milestones outlined for that grant.

Beyond that, metrics are currently evolving. The team maintains awareness of on-chain stats such as TVL and active users, though tracking of these is newer and not set in stone. The grants team is exploring what type of data can be captured and what relevant metrics could be.

Community: 

The TON grants program focuses attention on grantee support. One example of this is the blog posts and general grant acknowledgements they publish. The grant lead also hosts occasional Twitter Spaces with grantees to share both their work and the experience of being a grantee of the TON grants program.

Additionally, they provide technical and business expertise to grantees and are working on extending these support efforts to various contributing organizations in their ecosystem, as relevant.

The grants team is also working on increasing general community involvement by enabling more ways to surface other potential grant ideas.

Stats:

Year    Applications Received    Projects Funded    Amount Issued
2022    138 (on QB app)[88]      100+               $1.2m[89]
2023    -                        -                  -

Challenges faced by the grants program: 

As a new grants program with a large, publicly stated budget, they have received interest from individuals who have not completed their milestones on grants in other ecosystems. There are currently no streamlined ways of catching such information, so the grants team has to manually perform due diligence, which can be difficult. This is exacerbated by the quick turnaround times they strive for; this kind of information has surfaced post grant approval in at least one instance.

This also speaks to another challenge that pertains to all grant programs: there is currently little coordination of any kind across grant programs. The grant lead manages a Telegram chat that includes grant operators from various ecosystems, though it has been difficult to turn that connection into deeper coordination.

What do they do very well:

By consistently hosting Twitter spaces and providing announcements of the funded projects, the grants team is pushing for as much awareness of these projects as possible. While TON grants has not started reporting data on their grants program, they are doing a good job maintaining transparency of their program thanks to the aforementioned communications alongside the use of Questbook’s transparent grant tooling.

Additional Materials:

Uniswap Foundation

History: 

The Uniswap Grants Program (UGP) was first proposed on the Uniswap forum in December 2020 by Jesse Walden and was co-authored by Ken Ng, who would become the lead of the program once it was approved. The program was initially capped at $750k per quarter, had 5 dedicated reviewers each serving a term of two quarters, and the committee functioned as a 4-of-5 multisig on the funds. The mission of the UGP was to provide valuable resources to help grow the Uniswap ecosystem.

In retrospect, it’s interesting to see that the post included some information on the state of granting at that point in time. Quoting from Jesse’s post, “For context Gitcoin CLR Round 7 distributed $725k ($450k in matched) across 857 projects and YTD, Moloch has granted just under $200k but in contrast, the EF has committed to somewhere in the 8 figure range.”

If you want a good read on Uni that includes some history and gets into some of the specific grants issued and governance changes, check out Sov’s write up, “Breaking down the Uniswap Grants Program.”

The UGP program ran through the summer of 2022; the creation of the Uniswap Foundation was proposed in August of that year,[90] and the UF Grants program was created shortly after its approval.

Uniswap Foundation Mission:

The Uniswap Foundation supports a community of individuals and organizations dedicated to a more open, fair and decentralized financial system. We achieve this mission through three pillars of work:

1. Growth

2. Innovation

3. Stewardship[91]

Grants Program Mission:

Not explicitly stated on their site, but the program seems to focus on the growth and success of the Uniswap ecosystem.

The grant application process: 

Grant categories  

“If you’re building public goods and interested in additional support, apply for a grant. There are two types of grants: Open applications for contributors who have their own idea (accepted on a rolling basis), or Requests for Proposal (RFPs), which are ideas we’re actively looking to fund.”[92]

Rolling grants are accepted on an ongoing basis and can relate to any of the areas of focus for the UF Grants program, which are outlined below.

RFPs are sourced from pain points from the community and the Uniswap team. As of August 2023, there were four active RFPs:

  • Provide liquidity widgets (under Liquidity provider tools and resources)
  • Open-source design for Uniswap’s LP user experience (under Liquidity provider tools and resources)
  • Research - what bad hooks look like (under Developer tools)
  • Proof of concept - hooks and developer documentation (under Developer tools)

The UF team specifically calls out that some of their RFPs are intended for both academics and builders, bringing in research grants alongside technical and non-technical grants.

Grant process

The UF outlines the process at a high-level on their site, which entails three elements:

  1. Submitting the proposal
  2. Evaluation
  3. Decision / Approval

The submission process varies for rolling grants and RFPs, both of which use Airtable to track the data. For rolling grants, users are directed to the ‘Apply for a Grant’ section of the site, which includes an application checklist with the Airtable form embedded below it. The application there is titled Grant/RFP Application, and the user is able to choose what they’re applying for.

Meanwhile, if a user follows the UF site to the RFP section (titled ‘Opportunities’) and clicks into the details of a specific RFP, they are taken to a Notion page dedicated to that RFP. At the bottom of each Notion page is a link to an Airtable-based application which is nearly identical to the general form, with some minor changes to avoid someone accidentally choosing the wrong ‘Focus Area’, as they’re called on these applications.

Notably, the RFP-specific application does not pre-fill any information about the specific RFP, in contrast to the explicit focus-area choices presented on the general application.

The second step in the UF’s grant process is the evaluation. As of early August 2023, the UF was still in the process of recruiting its dedicated Head of Grants. In lieu of this position being filled, the COO and a contracted grants analyst were mainly responsible for the initial evaluation of grant applications and are the de facto grants committee.

As needed, the grants committee taps an informal, non-compensated network of advisors to provide input from domain experts.

When the grants committee is interested in a specific application, they invite that team for an interview. Additional information may be requested and milestones can be refined / co-defined to try and clarify what success could look like for each grant.

Once a grant is issued, onboarding of the grantee includes KYC and signing a contract to formalize the grant.

Grants Process & Operations:

Team Size

The team stands at 1.5 dedicated individuals, with a Head of Grants being recruited to bring the team to 2, with support from the COO as needed. The UF does not maintain a set number of informal advisors, so it is unclear how many individuals might be brought in to review specific grants or give a second opinion.

Response Timelines

The grant process, as outlined on the UF site, alludes to a 1-6 week turnaround time for grants. This timing is dependent on the size of the grant request and the complexity of the underlying topic and specific proposal.

Grant Types

The types of grants are not broken out by size of grant, at least not publicly. The website alludes to grants falling into a number of categories as follows:

  • Content
  • Events
  • Liquidity provider tools and resources
  • Developer tools
  • Governance stewardship
  • Miscellaneous

Additionally, the general grant application includes the following Focus Areas:

  • Usability
  • Community
  • Tooling
  • RFP/Challenge
  • Other

In speaking to someone from the foundation, we learned that the team also broadly defines three priorities from the categories on the site:

  • Developer tooling
  • Liquidity provider tooling
  • Governance participation

These last three are the priority subsets from the six categories listed on their website. Further clarification is needed as to why the Focus Areas from the application don’t map on to the six categories or the three priorities.

Reviewer Structure and Compensation:

Breaking out reviewers by compensation, the compensated reviewers are UF team members, including the COO and a contracted grants analyst, and will include the Head of Grants once that role is filled.

As far as uncompensated reviewers go, this includes a variety of individuals from the Uniswap ecosystem.

Impact Measurements: 

The UF does not have a formalized rubric they measure all grants against. Instead, success metrics are defined with each grantee depending on the nature of the funded activities. Additionally, they do have some broad goals and resulting metrics for each of their three priority areas:

  • Developer tooling
    • Goal: making it easier for developers to build on Uniswap (anything from releasing a software development kit to conducting a hackathon to other relevant efforts)
    • Example metrics: how many developers are getting involved / building on Uniswap; how long it takes to get set up and start building on Uniswap
  • Liquidity provider tooling
    • Goal: making it easier for liquidity providers to partake in and contribute to Uniswap
    • Example metrics: number of LPs; how much capital is locked up
  • Governance participation
    • Goal: getting the community more engaged in the governance process
    • Example metrics: votes; awareness of what is taking place in Uniswap governance; number of delegates; delegate participation

If looking back to the original community post proposing UGP, some metrics were listed there as well, which included:

  • “Total number of projects funded
  • Quarterly increase in applications
  • Project engagement post-event/funding
  • Overall community engagement/sentiment”

Community: 

Community is an important part of the grants program at Uniswap. Both UGP and the Foundation overall had to pass public proposals in the community to be created in the first place. While there aren’t dedicated channels of feedback on the grant process as a whole, the UF team has a variety of informal mechanisms to get feedback from the Uniswap community at large. This includes, but is not limited to:

  • Encouraging conversations on the Uniswap forum
  • Conducting interviews with stakeholders and builders to help curate ideas for RFPs
  • Dedicating a channel in the Uniswap discord for grants

Stats:

From UGP, 2020-2022: 122 grants for ~$7m

Year       Projects Funded    Amount Issued
2021       122                ~$7m
2022[93]   -                  -

For UF Grants from August 2022 - present (August 2023 for this report):

Year    Projects Funded          Amount Issued
2022    Wave 1 - 14              Wave 1 - $1,800,000
        Wave 2 - 19              Wave 2 - $946,000
        2022 Total - 33          2022 Total - $2,749,000
2023    Wave 3 - 21              Wave 3 - $990,000[94]

Total across UGP and UF grants:

Year       Projects Funded    Amount Issued
‘21-’23    176                ~$9.8m

Challenges faced by the grants program: 

One challenge the UF Grants program faces relates to both the number of overall applications and to finding quality applications. Especially with the downturn of the last year and more VCs and grant programs tightening their purses, more individuals are coming to them seeking funding. As the volume increases, sifting through and finding the quality applications gets more difficult.

In relation to the RFP program specifically, it can be tough to craft the RFPs. Figuring out the best topic to choose and the right framing is not straightforward, especially when the topics are deeply technical. The UF grants team taps their network of advisors, relevant community members, and past grantees, but finding the right expertise to help craft a well-focused RFP can still be elusive at times. The more specific the RFP gets, the harder it gets to have individuals apply as it limits the potential size of the relevant applicant pool.

Related to the sourcing of quality applicants is finding the best marketing approach for disseminating the call for grants, especially technical RFPs.

Another challenge pertains to retroactive funding: how to best support projects that are already greatly contributing to the Uniswap ecosystem. UGP did a Gitcoin round with the intention of supporting some projects that had already been developed to some degree, but how to best support existing contributions remains an open question.

What do they do very well:

Given the connections between the Ethereum grants program and the Uni/UF program, it is not surprising that UGP and UF grants have both been dedicated to quality reporting on the grant program. As part of this, they have a database of funded projects that provides clarity on the projects they supported and links to where one can find the status of the project.

Uniswap was also one of the leaders in the use of RFPs more broadly. Though a program such as the one run by Protocol Labs (see below) started using RFPs earlier, the UGP’s and UF’s RFP programs were among the earliest to extend from pure research to more builder-oriented projects.

The Uniswap program also does a good job of publicly outlining its process on its website, including a checklist for potential applicants as well as tips and considerations to help applicants strengthen their submissions. This work by the grants team is meant to make things easier for grantees and to increase the number of quality submissions, though it’s important to note that assessing whether such information led to higher quality applications was not in the scope of this report.

Additional Materials:

Sunset Grant Programs

As part of our exploration, we wanted to make sure to cover some notable grant programs that have sunset in the last 12 months. These programs include:

  • Algorand
  • NEAR
  • Polygon
  • Protocol Labs

Algorand Foundation 

History: 

Algorand is an open-source, decentralized blockchain network that leverages a two-tiered structure and a unique variation of the Proof-of-Stake (PoS) consensus mechanism. Algorand was founded in 2017 before launching its mainnet and ALGO token in June 2019[95].

The Algorand Foundation was launched in 2019 to oversee the funding and development of Algorand Inc. and the Algorand protocol itself.

The Algorand Foundation (AF) has a structured approach to funding, beginning with ideation and hackathons and culminating in VC investments. The foundation's initial funding was categorized as follows:

  • Hackathons
  • Bounties (primarily through Gitcoin)
  • Grants Program
  • Accelerators
  • VCs

The AF established a 250M ALGO[96] fund that was split across four areas:

  • Research: distributed technologies and advancements
  • Education: developer growth, academic acceleration, and community learning
  • Social: long-term global positive impact
  • Apps: on-chain apps and use cases (including NFTs, FTs, AMMs, and DEXs)

The Algorand grant program transitioned to vertical-focused Ecosystem Funding: referral-based funding programs aligned by vertical, with open calls for project proposals. The Algorand Foundation also transitioned to a much smaller grants program, xGov.

xGov has been AF’s evolution to a more decentralized approach to grants, where xGov members vote on proposals for grants. The purpose of the program is to create an expert layer of governors who use their expertise to decide where funding should be allocated, to grow the Algorand ecosystem.

Each grant allocation cycle has a separate voting session, and all xGov members are required to vote on the grant proposals in a session to maintain their membership status. The first voting session, allocating 2M ALGO in grants, opened in July 2023 and ran for a month.

The foundation gravitated towards a venture-centric funding model to better align with the Algorand ecosystem's unique project needs. This shift emphasized the value a project brought to the ecosystem over merely completing tasks.

Algorand Foundation Mission: To foster an inclusive, decentralized, and borderless global economy using Algorand blockchain technology.

Algorand Foundation Grants Program Mission: To fund projects that develop apps, support infrastructure, and innovate research on the Algorand blockchain.

Grants Process & Operations:

Grant Categories  

  • Focus Areas: Oracles, Bridges, Launchpads, EVM Compatibility, and Developer tooling.
  • Applications: 128 applications were submitted, with 26 being accepted

Grant Process

Proposals were evaluated based on their quality, technical feasibility, potential for ecosystem impact, and the background and commitment of the team.

Process:

  • Applications were submitted via the foundation's website.
  • A dedicated team from the foundation conducted an initial review: complete applications were forwarded to the grant review committee, while incomplete ones were returned for additional information.
  • The committee convened regularly to evaluate the proposals.
  • Accepted projects were then broken down into deliverables or milestones.
  • The foundation also requested formal reviews, including documentation, code repositories, and demo videos, before releasing additional funding tranches.

Projects within the ecosystem can be broadly categorized into two stages:

  • Pre-seed: Projects at this stage are in their infancy, often with a technical focus. They might have preliminary versions like demos or testnet implementations. The assessment is forward-looking, focusing on the potential market need and user base.
  • Seed: Projects at this stage have already launched a product. The assessment is more concrete, relying on metrics such as transaction volume growth, new feature launches, and the number of new user wallets.

Team Size: 5-7 people

  • The team consisted of people with different subject matter expertise, primarily technical

Response Timelines: 4 Weeks

Grant Types

  • Research Grants
  • Open Grants
  • Super Grants
  • RFPs

Reviewer Structure:

Early Structure

Their original grants review committee convened either weekly or biweekly. The team was primarily composed of technical individuals, specifically cryptographers from IBM. Review responsibilities were divided based on expertise, with technical individuals reviewing highly technical grants and business-oriented individuals, or those with a Business Development (BD) skill set, reviewing more generalist applications.

Evolution Over Time

In early 2021, there was a shift in the review process with the introduction of grant program managers. Before this, foundation staff, regardless of their primary roles, would participate in grant reviews and balance the application load among themselves.

After grant program managers were hired, they took on most of the primary review responsibilities. In addition, subject matter experts were called upon for their specific expertise as needed, such as assessing the technical viability of a proposal or evaluating a team's capabilities.

Impact Measurements: 

One of the key criteria in the selection process for the grants program was the positive impact projects would have on the wider Algorand community and the ecosystem in general.

Other criteria included:

  • Quality of submission
  • Technical or academic strength of the proposal
  • Opportunity for growth
  • Commitment of the team submitting the projects.

The primary criteria for funding decisions for the venture program revolve around:

  • The team's demonstrated ability to execute on products.
  • If available, on-chain metrics or other traction metrics related to the products.

For the grants program:

  • Milestone-based Funding: Grants were not given out in lump sums. Instead, they were released based on the completion of specific milestones. Teams would submit materials for review upon completing a milestone, and upon approval, the funding for that milestone would be released.
  • Incomplete Milestones: Of the 302 accepted grant applications, about 100 did not complete all of their milestones. This means that roughly a third of the grantees did not reach their final milestone as initially contracted.
  • Impact Tracking: The grants program seemed to focus on whether the grantees completed the tasks outlined in their statement of work rather than assessing the strategic value or broader impact of the work. This approach was identified as a limitation, as some grants were approved for projects that may not have brought significant value to the ecosystem.
  • The Algorand Foundation worked with a Consulting Group for impact assessment. However, only a small portion of their engagement focused on the grants program. Most of their work centered on assessing the level of decentralization in the Algorand network. The findings from engagements were not made public.
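The milestone-based disbursement flow described above can be sketched as a small illustration. This is not Algorand Foundation code; the class names, milestone names, and amounts are hypothetical, and the example simply models "release each tranche only after its milestone passes review":

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    name: str
    amount: int          # tranche size in USD (hypothetical figures)
    approved: bool = False

@dataclass
class Grant:
    grantee: str
    milestones: list = field(default_factory=list)
    disbursed: int = 0   # total funding released so far

    def submit_for_review(self, index: int, review_passes: bool) -> int:
        """Release a tranche only if the reviewed milestone is approved."""
        m = self.milestones[index]
        if review_passes and not m.approved:
            m.approved = True
            self.disbursed += m.amount
            return m.amount
        return 0

    def completed(self) -> bool:
        return all(m.approved for m in self.milestones)

# Example: a three-milestone grant where the final milestone is never
# approved, mirroring the roughly one third of grantees that did not
# reach their last milestone as contracted.
grant = Grant("example-team", [Milestone("spec", 10_000),
                               Milestone("testnet", 20_000),
                               Milestone("mainnet", 30_000)])
grant.submit_for_review(0, review_passes=True)
grant.submit_for_review(1, review_passes=True)
grant.submit_for_review(2, review_passes=False)
# grant.disbursed is 30_000 and grant.completed() is False
```

The key design point the program relied on is that no lump sum ever leaves the treasury: each tranche is gated on an explicit review decision.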

Ecosystem ROI Assessment:

Assessing the return on investment (ROI) for the ecosystem varies based on the maturity of the product:

  • For live products, the assessment is based on tangible metrics and user engagement.
  • For products in development or beta stages, the focus shifts to their potential value and the anticipated utility they might bring to the ecosystem.

Community: 

The nature of community rewards has changed over time at AF, from participation-based to governance-based, with governors voting on the level of rewards per quarter from 2022.

The Algorand blockchain network has its own native cryptocurrency, ALGO, which is used for community governance on xGov. With the latest transition to a decentralized approach, the community has absolute power over grant allocation.

The way to become an xGov member with voting power is to sign up for governance during the signup phase, in any quarter, for a 12-month term. Each term has a minimum of 4 voting periods, running in parallel to the General Governance quarters. While Governors can take AF's advice on voting, they are under no obligation to do so.

Stats:

2020 - 2022
  • Applications Received: 870
  • Projects Funded: 302, including:
      • 23 research proposals
      • 30 development tools and infrastructure projects
      • 40 education and community projects
      • 209 apps and use cases
  • Amount Issued: $100,344,589 ($48 million in cash or stablecoins and $52 million in ALGO)

2021 (Bounties through Gitcoin)
  • Amount Issued: $500,000

Why did the program stop and transition to a venture model?

The transition of AF from a grants model to a venture model happened for a few different reasons. First was the emergence of grant farming: a trend observed across the industry where projects would apply for grants from multiple foundations for the same product, treating these grants as revenue. This was a growing issue, and it left multiple projects completely reliant on grant funding instead of working toward independence and sustainability.

Changing market dynamics further complicated matters. As conditions shifted, maintaining a grants program became costlier, especially when the results were not delivering the desired value. This inefficiency in resource allocation made it evident that a change was necessary to ensure sustainability and genuine growth.

And lastly, the very nature of the ecosystem itself began to evolve. Initially, the grants program was essential for attracting talent and projects. However, as the ecosystem matured with more on-chain applications and projects, the need for such grants diminished. The ecosystem no longer needed mere quantity but quality. The focus naturally shifted towards supporting applications that have actual usage and developer tools essential for building more applications.

Challenges faced by the grants program: 

The Algorand Foundation grants program, while an essential instrument for fostering growth and innovation, faced several challenges as it expanded. As the program grew, it seemingly became a magnet for numerous applications. With a considerable amount of funds at its disposal, it attracted a wide range of applicants. The challenge, however, lay in accurately assessing which applicants genuinely possessed the capability to execute their proposals effectively. A parallel was drawn to a "golden water cooler" scenario, where the allure of available funds drew many, but not all had the competence or intent to utilize those funds effectively.

Additionally, there was a noticeable dependency on the grants by several projects. Many of these initiatives had grown heavily reliant on the foundation's funding to such an extent that the eventual sunsetting of the grant program raised alarms regarding their long-term viability, leading to voiced concerns and grievances. This newfound scarcity of easy grant money pushed projects to confront and adapt to the market's unpredictable nature, prompting them to seek sustainable alternatives.

Moreover, the landscape of venture funding posed its own set of challenges. Even though the foundation exhibited a proactive approach towards investing, the broader market for fundraising turned more cautious. The foundation also demonstrated a preference for not being the sole investor in projects, emphasizing the importance of sharing the investment risk with other stakeholders.

A key learning from AF's program is how challenging it can be to run a grants program. A well-administered grants program can be a great way to solve the cold start problem of starting or growing an ecosystem. However, to be successful, project milestones need to be clearly laid out, with teams executing against them.

What they did very well:

The Algorand Foundation grants program played a pivotal role in fostering the growth and development of the Algorand ecosystem. It attracted quality developers and teams during the ecosystem's early stages. These grantees focused on constructing the ecosystem's foundational requirements, providing the groundwork for future expansion.

Recognizing the 'cold start' dilemma many nascent ecosystems face, the program served as a beacon, ensuring that teams with clear objectives were aptly supported, allowing for swift execution of their goals. As the program matured, it underwent significant changes, with a distinct shift towards funding projects showcasing tangible traction, linking investments directly to evident returns for the ecosystem. There was also a notable progression towards decentralization and community involvement.

What xGov does well
After sunsetting the grants program, the AF has empowered its community by offering them a smaller grants program, where they have budgetary control.

One thing the current xGov program does well is linking funding to return on investment for the ecosystem, meaning growth in some shape or form. The second is that xGov's budgetary control has led to an interesting approach from the community: for context, only 60,000 ALGO was approved for projects against the 2M ALGO allocated to xGov members. Empowering community members to participate in decision-making has given the community a first-hand, deeper understanding of the intricacies and challenges of running a successful grants program.

They've done a good job of developing rigorous standards for membership based on commitment and activity: lack of participation and commitment leads to revocation of membership roles. This ensures all voting members are equally active in their governance roles.

Additional Material

NEAR Foundation

History

NEAR[97] is a sharded, proof-of-stake Layer 1 blockchain with a unique scaling mechanism, built with usability in mind. While the NEAR protocol was built in 2018, with a vision of giving developers an easy path to building decentralized applications, it launched on mainnet in 2020 and became community-operated later that year.

The NEAR Foundation (NF) was founded in 2019 as a non-profit fostering NEAR's ecosystem growth and protocol development[98].

Before 2021, NEAR didn’t operate a full-fledged grants program. They had service-based agreements which were sometimes referred to as grants, leading to confusion. The decentralized bounty board was difficult to manage, and over time, a pivot to a traditional grant system felt fitting. When NEAR Foundation introduced the grants program in 2021, it quickly became a major part of their capital allocation strategy. It was crucial for grantees to establish clear milestones and goals that were designed to maximize potential value added to the ecosystem.

With a runway extending over five years, they wanted to ensure that they were stewarding NEAR’s Treasury in the most responsible and sustainable way. This led to a complete overhaul of their capital allocation approach, resulting in strong emphasis on decentralization and entrusting power to community DAOs.

It’s important to note NF has had one of the largest capital deployments towards ecosystem and grants funding, having issued $540M in fiat and tokens over the course of 3 quarters (Q4 2021 - Q2 2022).

Exit to community

NEAR is taking a bottom-up, grassroots approach for the upcoming year by launching Dev Hub and the NEAR Digital Collective (NDC), two initiatives aimed at uniting founders, builders, users, and key stakeholders to govern the NEAR ecosystem. This approach aims to decentralize the NEAR Foundation by empowering the community to invest in its own expansion through grants. Meanwhile, the NEAR Foundation will allocate a portion of its capital to strategic projects that can drive the creation of new accounts at mass scale and real transaction volume. Beyond that, it will support the community in making these decisions, further decentralizing key elements of the ecosystem.

NDC is a project[99] that brings together[100] users, projects, stakeholders and partners of the NEAR ecosystem to govern itself. This is a critical move for the NEAR ecosystem because of its sheer size, involving over 1,000 projects, regional hubs, funding nodes, infrastructure providers, and over 20 million wallets. The NDC was established to help all those involved take an active role in how NEAR evolves. The NDC is helping develop a system of governance that will allow people to vote on a wide variety of issues, elect members to different governing councils, and even make amendments to the NEAR constitution itself.

NEAR’s Mission:

To remove every barrier for Web3 creators, by creating an ecosystem that is uniquely simple, safe, scalable and sustainable.

NEAR Foundation's Mission:

To support the ecosystem, drive awareness to NEAR and web3, and continue to educate regulators around the world.

Grants Process & Operations:

Grant Categories  

The NEAR ecosystem has decentralized its funding. There are several options for getting financial support for projects, and they've mapped out a path on their website to help applicants navigate them.

They’ve split funding and support sources into:

  • NEAR Digital Collective
  • Marketing DAO[101]
  • Creatives DAO[102]
  • Whatever “Grassroots DAOs” the newly elected House of Merit decides to fund
  • Resources including Funding Dashboard[103]
  • Protocol Development
  • Documentation Maintenance
  • Contract Standards
  • Zero Knowledge R&D
  • Paid Developer Fellowships
  • University Engagement
  • Hackathon Sponsorships
  • Events[105]

  • Ecosystem Grants: for projects and start-ups building in web3
  • Defi: Proximity Labs
  • EVM: Aurora - had a regional grants program that ran for a year and was sunset in July 2023[106]
  • NFT: Mintbase[107] 

  • Accelerators & Incubators: for projects and start-ups looking to join an incubator or accelerator
  • Accelerator (NF Near Horizon) >
  • VC, Investor & Accelerators
  • Credit model for services
  • Mentorship
  • MetaWeb - Venture capital and Incubator
  • Lyric Ventures - An investment fund focused on B2B
  • Octopus Accelerator - Web3 accelerator for projects building appchains

  • Regional Hubs
  • NEAR Vietnam
  • NEAR Korea
  • NEAR Balkans
  • Banyan Collective: The NEAR US hub dedicated to increasing the number and quality of NEAR developers and founders in the United States by running developer bootcamps, builder groups, open sprints, demo days, hackathons, social meetups, and VIP dinners.

  • NDC community treasury funding requests for projects, workgroups, and communities working on creative, marketing, or development
  • Funding limit: $100K per month; V0 total spend cap: $1.5M USD
  • V1 total spend cap: $2M USD
  • Every Grassroots Constellation has autonomy to set funding limits
  • Grassroots funding ($500 - $15K per month)
  • Incubation (Startup) > 10K per month

Grant Process

The MarketingDAO

  • Gives grants up to $10,000

All applicants are asked questions that fall into one or both categories:

  • General Questions based largely on the strategic goals from the NEAR Foundation.
  • Vertical-specific questions to determine whether the General Questions criteria are being met for this specific type of content

Categories are deliberately broad. It is up to each project to demonstrate they meet the fundamental alignment threshold.

General Questions:

  • Will this project help develop a thriving ecosystem of high quality projects?
  • Does this project create an inspiring, vibrant community that makes people want to join?

  • Assessment by MarketingDAO:
  • Does the project promote core qualities, characteristics, and progress about NEAR that can educate and inspire others to take action?
  • Is this project providing value to other builders and projects in the ecosystem in ways in which it accelerates their growth?

Will this project result in an entry point for Web3 talent?

  • NF KPIs
  • Assessment by DAO:
  1. Distribution channel: is the team well placed to reach new audiences?
  2. Does the team have enough depth of knowledge about NEAR, what's happening in the ecosystem, and other unique insights and valuable knowledge?
  3. Is the project or team able to communicate complex concepts clearly and concisely to new audiences?
  4. What would potential developers/builders think/feel if they come in contact with the content?
    4.1     Acceptable: that NEAR is a leader in blockchain technology, professionalism, inspired to build, curious to learn more, etc.
    4.2     Unacceptable: scammy, pump and dump, vapourware.

Is this project of reasonable quality to receive capital?

  • NF KPIs
  1. Quality work
  2. Fair price for the work
  • Assessment by DAO
  1. Would a third party pay for the work at the quoted price?
  2. Is the price quoted aligned with the industry standards?
  3. Is the work generating more value to the ecosystem than it is extracting?

There are further guidelines for vertical-specific content.

Team Size: depends on the DAO

Response Timelines: up to 4 weeks for MarketingDAO grants

Reviewer Structure and Compensation:

Each DAO and ecosystem partner has a different review process.

Impact Measurements: 

Impact evaluators

  • Users
  • Volume
  • Addressing core infra gap
  • Dependencies that the infra gap unlocks if the thing is built (e.g., should Paxos get a grant to let institutions custody with NEAR? Maybe, to let PayPal list NEAR, in this made-up example)

MarketingDAO tracks the following KPIs to measure impact/progress

  • NF KPIs to track progress:
  1. New Projects
  2. Active Apps
  3. Weekly Active DAOs
  4. Active Wallets
  5. Increase overall education, reach, and drive positive traction in the market

Community: 

In a move toward increased transparency and accountability, they initiated the publication of a regular Transparency Report, committing to a quarterly release in the future. In addition to this, the NDC is working on a governance structure[108] that includes:

  • Active NEAR accounts
  • House of Merit: a group of experienced community members appointed by members of the ecosystem to represent them during votes and key decisions
  • Council of Advisors: advisors appointed to help shape the direction of the House of Merit
  • Transparency Commission: members of the community appointed to ensure checks and balances are in place and upheld
  • Community Treasury: members of the House of Merit, with support from the Council of Advisors, help facilitate voting on how community funds are allocated


The NDC's primary objective is to amplify the decentralization of NEAR’s ecosystem governance. By distributing a larger portion of the Foundation's token holdings to the community and transitioning decision-making on-chain, NEAR aims to foster a community that is more resilient, transparent, and equitable.

Stats:

Q4 2021 - Q2 2022
  • Projects Funded: Not available
  • Amount Issued: $540M[109]

Metrics (as of 2022)
  • # of accounts: 15.6 million
  • # of weekly active developers: 1,500
  • # of DAOs: 700
  • # of Projects: 750

Why Did the Program Stop?

NEAR Foundation’s grants program faced several challenges including issues of clarity, resource constraints, trust concerns, token management, and a shift in strategic focus. While the program wasn't formally shut down, it ceased accepting new applications, with a shift towards a more decentralized approach. The foundation then pivoted to positioning itself as a marketplace of service providers, moving away from its original role of simply disbursing funds.

In line with this, NEAR foundation started placing grants on-chain through their dev hub, with the objective of creating an on-chain community of individuals who could request funds.

This shift was set against the backdrop of a complex web of legal entities, with the end goal of shifting power to a network of DAOs. Because different verticals had varying metrics, there was a stronger push for more decentralization.

Challenges faced by the grants program: 

There were several challenges faced by NEAR Foundation’s grants program. Initially, there was a notable lack of clarity in grant objectives. Without clear guidance on what needed to be developed, it inevitably led to issues over time. Moreover, the absence of subject matter experts (SMEs) for reviewing specific topics became apparent. The individuals they had on board were predominantly devoted to development work, being compensated full-time to achieve certain metrics. Unfortunately, supporting grants did not align with these metrics. Additionally, there was a significant challenge in recruiting technical reviewers for the team who were equipped to evaluate projects and conduct proper due diligence.

The introduction of the fast grants system seemed promising, as it allowed select individuals to swiftly issue grants. However, this method was not without its pitfalls. Trust issues emerged, with some instances pointing to self-dealing, culminating in the entire program's termination. Furthermore, the decision to issue grants in NEAR tokens introduced another layer of complexity. Recipients, hoping for the token's value to appreciate, mismanaged their grants. This interplay of speculation, mismanagement, and an emotional attachment to the ecosystem, primarily due to holding the token, further complicated the process and led to numerous challenges.

Finally, the push towards extreme decentralization, though well-intentioned, inadvertently sowed seeds of confusion and chaos. This decentralization was so pronounced that even internal team members found themselves at crossroads, often uncertain about protocols, responsibilities, and decision-making processes. Such a decentralized environment, while advocating for autonomy and shared responsibility, also underscored the importance of clear communication and structure.

What do they do very well:

One of the standout successes of the grants program was the introduction of verticals, which acted as distinct channels or avenues that enabled individuals to operate with a heightened sense of autonomy. These verticals, strategically designed, allowed members to drive initiatives, innovate, and execute tasks more independently. This structure not only encouraged personal responsibility but also sparked creativity and passion among the participants.

Moreover, this vertical-driven independence led to the emergence of innovative DAOs and vibrant communities. They were organic, community-driven ecosystems that arose out of genuine interest and collaboration.

Rather than being restrictive or overly prescriptive, they were designed to empower individuals, giving them the freedom to shape their paths within the broader framework.

Additional Materials:

NEAR Foundation Transparency Report 

Refining NEAR Foundation’s Grant Approach

NEAR Funding Updates: Funding Beyond Grants

NEAR Strategic Update and Outlook for 2023

https://sovereignsignal.substack.com/p/near

Polygon

History: 

Polygon is a Layer 2 blockchain platform that offers various scaling solutions for Ethereum and a multi-chain system that supports the interconnection of multiple blockchains and networks. Started in 2017, it was originally called the MATIC Network, and was created to tackle the scaling and usability issues of Ethereum[110]. MATIC rebranded to Polygon in 2021, reflecting its broadened mission of expanded scaling and infrastructure solutions.

The first MATIC grants program, the MATIC Developer Support Program (DSP), was launched back in 2019[111]. It focused on helping with technical issues (scaling, UI/UX, developer tools), financial sustainability, talent sourcing, and brokering relationships with investors for projects that were investment-ready. Shortly after the Polygon rebrand in 2021, the #DeFiForAll (DFA) fund was launched. It started with 2% of the total $MATIC supply (roughly $150M based on the value of $MATIC at time of launch), with the mission to bring the next million users to DeFi. The fund included both a grant component and an incentive program.

Initially, Polygon's grant program was scattered, distributed through various streams and partnerships. This fragmentation sometimes made it challenging to monitor some of the program's impact and metrics. Although some program streams were constantly monitored, this wasn’t the case for all of them. Recognizing the need for better organization, Polygon transitioned towards a decentralized system with the inception of the Polygon DAO and the Polygon Village. Some of the funds and resources from the DFA fund were used to launch Polygon DAO.

Polygon initiated a multifaceted grant program aimed at incentivizing and fostering development within its ecosystem. This program spanned several streams, from hackathons to development incentives, and leveraged platforms like Gitcoin to disburse funds. Initially, these grants were small, typically ranging from $5K to $10K for ecosystem grants to projects building on Polygon, with larger amounts approved by key decision-makers. The primary objective was not just financial support but to help projects fund smart contract audits and development, subsidize gas fees, and more broadly to entice developers to build on the platform.

The grants program transitioned to a Polygon voucher credit system, in an effort to see more usability efforts from builders. Vouchers were services and solutions provided for free, or at a discount, on a trial basis by solution providers in the Polygon ecosystem; they were not offered by Polygon itself, although Polygon in some cases subsidized the cost of some vouchers.

The Polygon Ecosystem DAO’s Mission:

The Polygon Ecosystem DAO wants to act as a stimulus for innovation, creation and development of decentralized processes and projects that are easy to use by end users, new lego bricks on which and with which to build.

Polygon Grants Program’s Mission:

To be able to support the growth of the projects on Polygon.

Grants Process & Operations:

They've introduced continuous POL emission to fund the Community Treasury, a self-sustaining ecosystem fund that can support the above activities. The Community Treasury is governed by the Polygon community via an agreed-upon governance process. This governance process, as well as the wider Polygon governance framework, will be established and announced as part of the Polygon 2.0 effort.

Grant Categories  

  • Protocol development
  • Protocol research
  • Ecosystem grants
  • Adoption incentives

Grant Process

The evaluation criteria and granting process below are from their PolygonDAO grants program, which was sunset earlier this year.

The evaluation criteria identified were that the project:

  • Must contribute to the Polygon ecosystem in a clear way, and have milestones attached to the contribution
  • Has a clear application that answers all asked questions (team background, funding, etc.)
  • Projects that had not yet started and were asking for a grant to get going were viewed unfavorably, as they did not demonstrate a sufficient level of "skin in the game" and their impact on the ecosystem was hard to evaluate. This mostly applied to projects without a POC or MVP; the rubric largely favored projects with a clear growth roadmap.

Additionally, Polygon usually offered no more than one grant per project.

Steps of the evaluation process:

  • In the first phase, the process lasted 3-4 months while new processes were developed
  • Reporting on the Polygon EDAO Notion page
  • Publication of the request on the forum, excluding any personal contacts and details not to be disclosed
  • Distribution of grant request to one of the teams
  • Acceptance of the grant request by one of the teams
  • Study of the request, in order to determine its acceptability using their rubric and the possible need to contact the project directly to clarify the milestones or review the economic requests
  • Possible contact with the grantees, acquisition of new elements and new study of the updated request:
  • The team could discard a project that failed the evaluation criteria and was not willing to change its request.
  • Team could also choose to move the project directly to Polygon, this process would be outlined separately but did not need a genesis team vote
  • Or the team would move to a vote by the genesis team
  • In the event of a favorable vote, the team would contact the project and agree on the drafting of a short post about it, then sending the data to the multisig signers
  • In the event of unfavorable vote, the team would contact the project to justify the rejection

Grant evaluation and granting process using Questbook (2021 - 2022)

After a first phase carried out jointly, and slowly, by the entire genesis team, they decided to divide the group into teams to speed up the time from application to vote for all grant applications. This helped them reach out to grantees at an earlier stage to complement or update their application. To ensure an even burden across teams, they appointed a Grant Funnel Coordinator, responsible for onboarding grants and dividing the workload among the teams.

For more complex requests, all teams could reach out to their Advisors, which included, among others, the Polygon DeFi team, the Polygon Studio team, and key individuals within both the Genesis squad and the wider ecosystem.

The aim was for the below process to be carried out in five days for projects whose applications could be reviewed by teams without a grantee meeting; for projects whose applications needed updating, the process would restart from the submission of the updated application.

Team Size: 12 people

Response Timelines: 5 Days

Grant Types

  • Small grants, from $5K to $10K, approved by the BD team
  • Bigger grants, requiring approval from C-level decision makers

Reviewer Structure:

The reviewers were compensated, although there were different methodologies over time.

Community: 

The program was fairly centralized at first, but became more decentralized as the DAO transitioned to include not just Polygon's team but the community at large. Through this approach, they've tried to capture what the community asked for.

Impact Measurements: 

  • # of Apps running
  • Avg. # of monthly transactions
  • Content Creators

The impact measurements by the program have changed over time. Initial metrics as mentioned above were periodically reported, whereas once the grants program was sunset and transitioned to voucher credits, the following metrics were shared as being measured for impact:

  • Monthly outreach
  • # of teams grants distributed to
  • Amount Distributed
  • Job opportunities created
  • Business value created for partner service providers through vouchers

Stats:

2019 (MATIC Developer Support Program (DSP))
  • Projects Funded: Not available
  • Amount Issued: $500,000[112]

2020
  • Projects Funded: Not available
  • Amount Issued: $1 million[113]

2021 (Season 0)
  • Projects Funded: 70+
  • Amount Issued: Not available ($1M was disbursed to PolygonDAO; exact metrics on grant amounts issued are unavailable)[114]

2022
  • Projects Funded: Not available
  • Amount Issued: $500,000+

2022 - 2023 (Gitcoin Rounds)
  • Projects Funded: Not available
  • Amount Issued: $1 million

2022 Metrics
  • Total funds invested in hackathons and bounties + Gitcoin[115]: $2 million
  • # of Apps running: 19,000 (as of April 2022)
  • Avg. # of monthly transactions: 90 million
  • Content Creators: 153,000
  • Monthly outreach: 200+ projects
  • # of teams grants distributed to: 100+ teams
  • Amount Distributed: $500,000+
  • Job opportunities created: 600+
  • Business value created for partner service providers through vouchers: $1M+

Why Did the Program Stop?

Polygon's grants program initially aimed to foster innovation and support worthwhile projects. However, over the course of its operation, several challenges emerged.

One of the most pressing challenges was assessing the genuine impact of these grants. As funds were disbursed to various projects, ensuring accountability and accurately measuring their effect became increasingly difficult. There were instances where the effective utilization of the funds came under scrutiny, with concerns that some projects might have used them for unrelated purposes.

In response, Polygon evolved its system, transitioning to providing service vouchers instead of direct funds. This meant that, for example, projects could receive smart contract audits without incurring direct costs, courtesy of agreements with auditors.

Yet, feedback from the community highlighted areas where the grant program could improve. There were concerns about transparency, objectivity, and the potential for perceived bias. Moreover, broader industry trends, like "grant farming" — where entities leverage multiple grant opportunities without genuinely innovating — raised further questions about the program's effectiveness.

Taking all this feedback into consideration, the primary reason Polygon halted the program was to re-evaluate its processes. The goal was to relaunch with improved, transparent procedures and develop more targeted and accountable methods of community engagement and development support.

Challenges faced by the grants program: 

The grants program encountered several challenges throughout its operation. One of the most significant issues was the prevalence of applicants engaging in grant farming, where they sought funds without the genuine intention of executing meaningful projects. This challenge was compounded by the difficulty in distinguishing between those truly building and those merely grant farming. Additionally, measuring the real-world impact of the funded projects and tracking how small grants were utilized by beneficiaries posed challenges. The program also grappled with maintaining objectivity in the distribution of grants, which is critical for its credibility and fairness.

What they did very well:

Polygon's grants program has done really well in multiple areas. First and foremost, through strategic hackathons and grant programs, they've dramatically increased their visibility, engaging a vast pool of developers and builders. This has solidified their position as one of the most prominent brands in web3. Secondly, by enhancing their community-centric approach, the grants program took a unique turn by emphasizing community-driven decision-making. By prioritizing the delegation of significant decisions to stakeholders and the broader community, Polygon fostered a sense of inclusivity and ownership.

Beyond these achievements, a key strength lies in their adaptability. They've consistently showcased the ability to evolve based on the effectiveness of their initiatives. This keen sense of observation and willingness to pivot has allowed them to better cater to builders, refining their approach and catalyzing growth within their ecosystem.

Transition to Polygon Village

Polygon Village’s new format was launched[116] in April 2023, a few months after the grants program was discontinued. The new format was a way to help projects building on Polygon. Taking into account learnings from previous years, when traction was strong but they were unable to properly support all of the projects, the Polygon team wanted to be better prepared to help teams. Thus, they opted to initially limit support to a few teams to learn faster and build a better support system, with the objective of scaling through supported teams helping other teams.

Their eventual aim is for Polygon to be the chain for community-owned businesses, and Village participants to be the biggest beneficiaries.

The support offered by Polygon Village, which will be progressively decentralized, mainly focuses on a few pillars:

  • Connection with the rest of the ecosystem, through intros to communities and projects
  • Drive to build and grow in a composable and interoperable way
  • A light form of mentoring, which allows younger projects to learn from the experience of others and more consolidated projects to scale thanks to the know-how of others
  • Validation of the idea, speaking with others
  • Visibility
  • Follow a path of internal decentralization, implementing dedicated frameworks and tools
  • Credits and vouchers from partners

Additional Materials:

Protocol Labs

History: 

Protocol Labs (PL) was founded in 2014 to support the development of the InterPlanetary File System (IPFS). At the core of IPFS and Filecoin was the intention to decentralize the web and cloud computing. After the Filecoin token sale in 2017, the organization grew considerably and started informally issuing grants to researchers as demand grew for more research and development on the protocol.

This section is purely focused on the Research Grants efforts from Protocol Labs and does not cover any grant efforts from the Filecoin Foundation or non-research related grants from PL or other programs in that ecosystem.

By April 2018, PL formed PL Research as an official team within the organization with the “goal to direct and support all the research efforts across our projects and communities. We seek to ask and answer important questions, organize our work, share our contributions, provide a direct and explicit conduit for potential collaborators, and even to fund external research endeavors.”[117] 

In that same announcement, they also mentioned the rollout of an RFP-based research grants program, which had five initial RFPs. The program was launched with a budget of $5m and their RFPs were all listed on the PL GitHub.[118] They awarded five grants out of the eleven strong proposals received in the summer of 2018, for a total of over $300,000. The first five grants issued that year[119] were managed with a very manual process, so in 2019 PL dedicated a Program Manager (a role which grew to three with time) to help increase the scope of grants, get more grants issued, and create more methodologies and processes around grant evaluation.

Unlike most other grant programs in the space, PL’s grants were heavily oriented towards research problems, including questions beyond web3 such as meta-research and metascience. In addition to these initial public research grants, they had other research grant-making activity conducted on an ad hoc basis.

As the research program expanded beyond the first five RFPs, they also added RFPx, which was akin to a "make your own research adventure." This was primarily a way to route requests from people the organization was already working with, or interested in working with, whose work didn’t neatly fit into existing RFPs but was still relevant.

In late March 2020, PL also deployed a $200,000 fast grants program aimed at helping with the COVID response. While it may seem out of scope for the organization, there were a few goals, ranging from providing support during a critical situation to conducting a meta-research experiment of sorts to better understand how to support emergent situations and empower relevant grassroots initiatives.[120] These grants were capped at $20,000 each. The program received 55 applications, and 10 grants were ultimately issued for a total of $174,500.[121]

In early 2021, PL introduced a Grants Spectrum which was their final official grants program[122] until it was wound down between late 2022 and early 2023 for a mix of financial and operational reasons.

Protocol Labs Mission: “Protocol Labs is an Open Source R&D company, distributed around Earth. Our mission is to drive breakthroughs in computing to push humanity forward.”[123]

Protocol Labs Grants Mission: “The Protocol Labs Research Grant Program aims to support collaborative work on problems defined by the broader research community.”[124]

Grants Process & Operations:

Grant categories  

Grants at Protocol Labs were predominantly RFP-based research grants, along with some prospective research grants. Given the focus on research, the grants program had to learn to issue grants to universities and handle all of the financial and operational overhead that comes with that.

Given this focus, many issued grants ranged from $50,000 to $200,000 and had a duration of at least one year.

Grant process

Initially, individual researchers or research teams would submit their proposal in response to the RFP. As the grants process evolved to having dedicated staff and sourced RFPs from specific teams, some initial filtering was done by the grants team and then the teams that crafted the RFP were responsible for reviewing. Once they found grants they wanted to support, the program managers and grant administrative team would help finalize the process.

When they launched their RFPx program, the intended process was to have individuals first propose a problem statement and then submit an application based on that problem statement. In actuality, that did not play out as intended. This, along with shifting priorities and budgets, played into the ultimate decision to wind down the program. Grants would still continue to be issued on an ad hoc basis as appropriate.

Team Size: 6+

At its peak, the PL grant program had 3 program managers, who worked across all of PL’s research projects, as well as 3 support staff. This does not count the wide range of other individuals who contributed to the grants process in a more limited capacity.

RFPs were crafted by specific research teams within PL, and those teams ultimately had to dedicate reviewers in order for the grants team to be able to run the RFP on their behalf.

Response Timelines

Because all of the grants issued by PL were research grants, the general timeline was longer than what other programs experienced. We did not obtain specific numbers, as timelines varied over time depending on a number of factors, though some grants could take a few months to finalize.

Grant Types

Below is the outline of the grants that were included as part of the Grant Spectrum the PL team developed.

  • PhD Fellowships intended to support the development of an early-stage researcher conducting research in one of PL’s fields of interest.
  • Postdoctoral Fellowships intended to support a postdoctoral researcher’s contributions to PL-related research.
  • Implementation grants designed to support PL’s external academic collaborators in hiring short-term engineering consultants and contractors to develop and deploy critical software necessary to conduct PL-aligned research.
  • Investigator awards intended to support experienced postdocs wishing to transition to running their own research project with the support of an appropriate host institution.
  • Nucleation grants intended to support the development of a close collaboration between an early-stage researcher, their doctoral advisor, and a PL researcher on a project of relevance to PL’s research interests.
  • Research sabbatical awards intended to support faculty conducting research in PL’s fields of interest during a 6-12 month sabbatical period, with a particular emphasis on supporting collaborative work with a PL researcher.
  • Summer research grants designed to support faculty in conducting summer research in one of PL’s fields of interest, with a particular emphasis on carrying out collaborative work with a PL researcher.[125]

This wide range of grant types led to some complications in operationally managing the program, which was one of the factors that led to the program getting wound down, though the general changing priorities were a greater contributor to the wind down. Prior to fully sunsetting the program, the above grant types were narrowed down mainly to PhD fellowships, Postdoctoral Fellowships, and Sabbatical awards.

After the mainnet launch, when the Filecoin Foundation was endowed, it took over issuing grants focused on the development of FIL[126]. Prior to that point, PL had overseen the initial development of FIL as well as projects such as IPFS[127] and libp2p[128]. As the organization started to get more decentralized, ecosystem staff took over oversight of those projects, and the latter two operated much more independently.

Reviewer Structure:

Initially, the RFPs were crafted by a mix of researchers and did not have clear review structures attached to them, which made reviewing RFP applications challenging. As the program evolved, the research team proposing an RFP had to dedicate reviewers as a review committee. These review committees were usually 2-3 domain experts from that research team and one of the program managers.

There was a desire to pull relevant community members into the review process, though that never got formalized.

Impact Measurements: 

For the targeted RFPs, there were specific desired outcomes, so it was somewhat more straightforward to understand whether those outcomes were achieved or at least whether general progress was made. Given the open-ended nature of research, it was difficult to concretize metrics, and the program sometimes had to rely on reviewers’ intuitions to gauge whether the work was useful, especially when the research didn’t go as planned.

For the more prospective research grants, some of which would not solely benefit PL-related goals, there was an even greater reliance on the intuitions or understandings of the researchers to gauge the impact of the outcomes.

Things were not purely based on intuition. There were some concrete metrics, such as whether papers were published, the number of times a resulting paper was cited, and other traditional academic metrics. The challenges experienced here also fueled the organization’s interest in metascience and in understanding how the impacts of research could be better measured.

Community: 

The program at PL was run by the internal grants team coordinating with the relevant research teams. There were no formal mechanisms for community input into grants, despite the desire to add this with time.

PL did build a robust community of researchers and used a variety of events to convene researchers and to help create more opportunities for general collaboration. This included both technical conferences as well as their Funding the Commons series[129], which is still ongoing.

Stats:

While Protocol Labs used to have a grants portal with a lot of information, this got taken down once the program was deprecated. The numbers below are incomplete and can be taken as a lower bound.

Year | Applications Received | Projects Funded | Amount Issued
2018 | 11 | 5 | $300,000
2019 | Not available | 3 | $170,000
2020 | Not available | 15 | $660,000[130][131]
2021 | Not available | 19 | $1,480,000[132]
2022 | Not available | 24 | $1,970,000[133]
2023 | Not available | 5 | $600,000[134]

Why Did the Program Stop?

One answer is funding, though there was more to it than that. The macroeconomic conditions in the web3 space in the second half of 2022 made the research grant program tougher to justify. This was building on tensions that had arisen internally due to the specifics of how the organization was structured. The specifics of the org design are beyond the scope of this report.

What was PL Grants ultimately transitioned into the Network Goods group, which itself had a very different scope. Whereas PL Grants focused mostly on technical research, the Network Goods group took a much more expansive view, exploring public goods. Hypercerts[135] and Open Source Observer[136] are both projects within the Network Goods group.

A final thread that led to the sunsetting of the program was the desire to decentralize PL more broadly, allowing teams more autonomy. This made a centralized grants program much more challenging.

Challenges faced by the grants program: 

A common challenge in all grant programs is sourcing quality applications and minimizing the overhead needed to surface the quality ideas. This was no different for Protocol Labs.

A challenge faced early on was the review of RFP-based applications. As a result, once the program managers were hired, the overall process of crafting an RFP was revised so that any team proposing an RFP also had to specify who the review committee would be. No review committee, no RFP. This made the review process much smoother.

What did they do very well:

Issuing grants involves a wide-ranging landscape of challenges, especially around tax compliance and legal diligence. These are further complicated when issuing grants to universities, which come with a wide range of additional requirements. PL dedicated the resources to navigate that bureaucracy as well as possible and was able to issue grants to researchers at universities with notoriously difficult compliance offices.

PL was ahead of the web3 grants game in its focus on RFPs. They recognized the importance of issuing RFPs, especially when it comes to advancing deeply technical or complex problems.

The introduction of the COVID grants and the Cryptonet grants[137] both showcased PL’s ability to recognize new opportunities and craft programs around them.

PL’s willingness to issue grants on an ad hoc basis also empowered some of its senior leaders to quickly issue grants as needed.

Additional Materials:

Quadratic Funding Grant Operators

In addition to exploring specific grant programs, we wanted to delve into two quadratic funding (QF) based grant round operators. Gitcoin was the original one and clr.fund is a newer competitor.

Disclaimer 1: this report was funded via a Gitcoin Community Proposal.[138] Our role is not to argue for the use of either platform, nor are we attempting a deep-dive comparison of these two QF-based grant round operators and their pros and cons. Do not take our overviews or any information therein as endorsements of either.

Disclaimer 2: our grant report has a grant up as part of Gitcoin Round (GR) 18 to fund the grant report. Both of the authors were more familiar with Gitcoin (one of us, Eugene, didn’t realize clr.fund had already launched when we started this research). There is a reasonable chance we will use both platforms in the future, and others as they arise and offer opportunities to fund research projects.

Clr.fund

History: 

Clr.fund was created in 2020 as an easily forkable open source quadratic funding stack. Learning from some of the challenges that Gitcoin has been working through pertaining to sybil attacks, clr.fund sought to build trust-minimized infrastructure from the outset.[139] One aspect of this was to utilize Minimal Anti-Collusion Infrastructure (MACI)[140] for collusion resistance, along with BrightID for sybil resistance.

By July 2020, the Ethereum Foundation had committed $21,000 in funding to go towards the first matching pools. The funds were specifically for:

  • “$1,000 for matching in the first mainnet test rounds (rounds 0.0-0.x)
  • $10,000 for matching in Round 1
  • $10,000 for matching in Round 2”[141]

Clr.fund Mission: 

“Clr.fund aims to be a primary protocol by which the Ethereum Protocol and Ecosystem allocate funds towards the development of public goods that benefit the Ethereum Ecosystem.”[142]

Grants Process & Operations:

Grant Rounds

As of late August 2023, clr.fund was running its 9th official round.[143] More information on previous rounds can be found on their blog.[144] 

Review Process Across Clr.fund QF Rounds

In general, the process at clr.fund involves five phases[145]:

  1. Join
  2. Contribution
  3. Reallocation
  4. Tallying
  5. Finalized

The review process (the Join phase) is centered around checking if projects adhere to the round criteria. Clr.fund explains the round criteria whenever rounds are announced. As an example, they announced round 9 in June 2023[146]. They announced that there would be 124 total projects included in the round[147] and listed the round criteria - the project being open source, the project being owned by the submitter, and holding a soulbound token from Humanbound, amongst others.[148]

Once potential projects apply to the round, the clr.fund team and volunteer reviewers look through the applications prior to approving them as recipients in the round.

For each round, the clr.fund team decides whether to run it directly or to collaborate with another organization. In cases when they work with organizations, there is a proposal phase. From there, details are ironed out and a round can be launched with a partner organization. For rounds that are run by a sponsor, the sponsoring organization is the one to clearly define the round’s criteria. That organization is also responsible for identifying reviewers to assess the proposed applications.

Once the Join phase is complete in the window of time articulated by the clr.fund team, the Contribution phase begins. In round 9, this phase lasted only 1 day. After the Contribution phase concludes, the Reallocation phase begins. In this phase, contributors are able to change their project specific allocations but they are not able to change the amount contributed. This phase also lasted 1 day in round 9. From there, the Tallying phase begins and the MACI and smart contracts are triggered to calculate how much each project receives from the matching pool. Once that is complete and the Finalized phase begins, the project owners are able to come in and claim their funds.
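The Tallying phase is where the quadratic funding math happens. As a rough illustration, here is a minimal Python sketch of the standard CLR matching formula. This is not clr.fund’s actual implementation, which performs the computation through MACI and smart contracts with zero-knowledge proofs; the function name and the choice to normalize scores against the whole pool are our own assumptions.

```python
import math

def qf_matches(contributions, matching_pool):
    """Split a matching pool across projects using the standard
    quadratic funding (CLR) formula:
        score = (sum of sqrt(each donation))^2 - (sum of donations)
    Scores are then normalized so the whole pool is distributed.

    contributions: dict mapping project name -> list of donation amounts
    """
    scores = {}
    for project, donations in contributions.items():
        sum_of_sqrts = sum(math.sqrt(d) for d in donations)
        scores[project] = sum_of_sqrts ** 2 - sum(donations)
    total = sum(scores.values())
    if total == 0:
        return {p: 0.0 for p in scores}
    return {p: matching_pool * s / total for p, s in scores.items()}

# Many small donations attract more matching than one large donation
# of the same total amount.
matches = qf_matches(
    {"many_small": [1.0] * 100,   # 100 donors giving $1 each
     "one_large": [100.0]},       # 1 donor giving $100
    matching_pool=10_000,
)
```

Because matching grows with the square of the sum of square roots of contributions, broad participation is rewarded: in the example above, the project with 100 small donors captures essentially the entire pool even though both projects raised the same $100.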

Team Size

The organization has 1 full-time and 2 part-time employees. The full-time employee is a developer and is predominantly focused on the tech. The other two individuals focus on everything else, from finding partners to define rounds with and raising matching funding, to working on all of the other aspects of building a new organization.

Impact Measurements: 

Given the nature of how clr.fund runs QF rounds, their metrics are more focused on the usage of rounds and the amount of matching funds projects received as opposed to assessing grants on an individual basis.

Some high-level metrics are number of rounds run, size of the matching pool, total contributions, and the number of contributors.

Community: 

The community at clr.fund is still growing. With any quadratic funding model, the role of having a wider community is key to help unlock matching funds. Clr.fund has been able to generate activity pertaining to its rounds.

Stats:

Year | Projects Funded | Amount Issued
2019 | Not available | Round 1: $1,231; Round 2: $1,200
2020 | Not available | Round 3: $2,000; Round 4: $7.5k; Round 5: $2.9k; Round 7: $10k; Round 8: $72k
2021 | Not available | Ethstaker: $350k; ETHColombia: $450k
2022 | Not available | Round 9: $87K
2023 | Not available | Round 1: $1,231; Round 2: $1,200

Challenges faced by the grants program: 

Finding the right organizations that both understand quadratic funding and want to run QF based rounds has been a challenge. Overall, quadratic funding is still a new tool that can be hard for projects to understand, particularly in terms of achieving the stated outcomes of improving participation and helping communities fund projects they care for.

Additionally, it has been difficult to define and execute a sustainable model as a new organization, especially one that adheres to the Ethereum ecosystem’s values of open source, decentralization, and a long-term focus, without imposing views and beliefs.

What do they do very well:

Clr.fund came in with a new technical architecture for quadratic funding and has been able to launch and run 9 rounds with a very lean team. They have already partnered with two organizations for funding rounds over $200,000.

Additional Materials:

Gitcoin

History: 

Gitcoin began as a bounty platform built directly off GitHub, aiming to provide more flexible, remote-friendly work and ensure fair compensation for open-source developers. Incubated at ConsenSys, Gitcoin initially had a number of experimental products: Kudos (NFTs), Quests (gamified quizzes), Codefund (OSS advertising), Townsquare (social), and grants. The grants platform saw the most adoption and eventually evolved into regular Gitcoin Grants rounds, which adopted quadratic funding to match donations more impactfully and incentivize more contributions.

Over time, the grant program grew in both funding and features, introducing various categories like infrastructure and dApps. Notably, some early grantees, such as Uniswap and Optimism, later contributed back to future matching pools.

Gitcoin’s Mission:  

To enable communities to build, fund and protect what matters to them.

Gitcoin Grants Program’s Mission:

Gitcoin's Grants Program operates on two different levels. First, it functions as a gateway, introducing people to Gitcoin's comprehensive suite of services and tools. In this capacity, it effectively serves as a sales funnel, channeling users towards Gitcoin's other offerings. At its core, however, the program's primary objective has always centered on championing open-source software. As the QF rounds have grown over time, their mission has evolved in response to global events and emerging needs, leading to the exploration and testing of tools for a variety of use cases.

Grants Process & Operations:

Review Process across Gitcoin QF rounds

  • Curated rounds preselect participants, while open-application rounds require spreadsheet-based reviews.
  • Some automatic ineligibility conditions exist.

Decision-making process for approving a project:

  • Approval is implemented on the Grants Stack tool's backend; any associated wallet can approve.
  • Consensus-based approach is used, typically "best two out of three" or "best three out of five".
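The consensus rule above amounts to majority voting over an odd-sized reviewer panel. A minimal sketch of that logic (our own illustration, assuming boolean approve/reject votes; not Gitcoin's actual backend):

```python
def review_outcome(votes):
    """Majority consensus over an odd reviewer panel, e.g.
    'best two out of three' or 'best three out of five'.
    votes: list of booleans, one per reviewer (True = approve)."""
    if len(votes) % 2 == 0:
        raise ValueError("use an odd panel so there is no tie")
    return sum(votes) > len(votes) // 2

review_outcome([True, True, False])                # -> True (2 of 3 approve)
review_outcome([True, False, False, False, True])  # -> False (2 of 5 approve)
```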

How feedback is communicated to grantees:

Feedback to grantees is systematically communicated to ensure clarity and transparency. If a project doesn't gain approval, the reasons behind the decision are comprehensively explained in dedicated columns. Every grantee, irrespective of their project's status, receives personalized communication through email detailing their approval status. While grantees have the avenue to submit supplementary information if they believe crucial data was overlooked in the evaluation, it's important to note that there isn't a structured appeals process in place.

Criteria used in the review process:

  • Verification via Twitter, webpage analysis, and grant proposal review.
  • Consideration is also given to being a past grantee and connections within the space.
  • Repeat grantees are easier to process, new grantees undergo more rigorous review.
  • During featured rounds, Gitcoin often relies on partners for visibility into their communities.

Participating in future rounds:

For those wishing to participate in future rounds, grantees are required to reapply for funding and present updates on their developmental progress. The current evaluation system places a strong emphasis on tangible evidence of work and advancement, which helps determine a project's suitability for further funding. Open source software projects naturally find it more straightforward to showcase such progress compared to other project categories.

The achievements and strides made in the past three months play a pivotal role in the qualification process. As of now, the assessment approach leans towards a binary "approve/reject" nature rather than an in-depth qualitative analysis.

While Gitcoin initially championed an open review process, they reevaluated this stance, and changed their process. The public nature of reviews unintentionally paved the way for undesirable outcomes like personal attacks on reviewers and potential defamation of projects. As a result, they've transitioned to a more internalized communication methodology, although they continue to share broad statistical data such as the count of endorsed projects and related metrics.

Reviewer Structure:

The reviewer structure was formalized organically as the rounds got bigger and the application pipeline skyrocketed. There wasn’t a formal process initially; however, post-round analysis made clear the need for a more rigorous, structured review and approval process. There are now specific workstreams and teams focused on the review process. For the early rounds originally run on cGrants, any grants accepted in prior rounds were automatically grandfathered into future ones. This is no longer the case with the Allo Protocol and grant allocation in newer Gitcoin rounds.

The PGF (Public Goods Funding) workstream is predominantly responsible for the task at hand, with some assistance from volunteer community participants. It's worth noting that there used to be a distinct Fraud Detection and Defense team. However, it has since been integrated into the main review team. Within this setup, reviewers are strictly prohibited from evaluating their own grants or any grants where potential bias might come into play. In situations where there's a perceived or genuine conflict of interest, the team's approach is to include a greater number of individuals in the decision-making process to ensure fairness and transparency.

Impact Measurements: 

When it comes to impact measurements, the initial approach focused on reporting the funds raised and gauging the perceived value within the web3 space. Due to the absence of concrete metrics, the impact of cause-oriented work was articulated in a narrative form, capturing the essence and influence of the work undertaken.

How the impact of grantees has been measured over time:

When examining how the impact of grantees has evolved over time, there's a noticeable shift in the current focus. Emphasis is now placed on measuring the impact of alumni rather than the current grantees, especially since many of the latter are still in their formative stages. Upcoming rounds of evaluations are set to include application questions that delve into how projects gauge their own impact. There are also plans in the pipeline to integrate a greater number of impact certificates (hypercerts), attestations, and implement standardization procedures into their assessment process.

Community: 

In the initial stages, the community's familiarity with projects held substantial influence over the allocation of grants. However, as the platform has matured and expanded, there's been a move towards adopting more deliberate and well-considered procedures for grant distribution.

Stats: QF Pool Rounds Funding

Year | Projects Funded | Amount Issued
2019 | 148 | Gitcoin Rounds 1-3: $407,521
2020 | 2,445 | Gitcoin Rounds 5-8: $3,034,319
2021 | 3,198 | Gitcoin Rounds 9-12: $12,255,000
2022 | 3,745 | Gitcoin Rounds 13-15: $13,950,000
2023 | 1,107 | Gitcoin Rounds 16-18: $5,958,462
Total | 10,643[149] | $35,605,302

Round | Start Date | Total Matching | Total Donated | Total Paid | Grantees Paid | Contributors | Contributions
GR1 | 1/31/2019 | $25,000 | $13,242 | $38,242 | 26 | 126 | 132
GR2 | 3/25/2019 | $50,000 | $56,000 | $106,000 | 42 | 200 | 214
GR3 | 9/14/2019 | $100,000 | $163,279 | $263,279 | 80 | 477 | 1,982
GR4 | 1/5/2020 | $200,000 | $143,642 | $343,642 | 230 | 1,115 | 5,936
GR5 | 3/22/2020 | $250,000 | $242,000 | $492,000 | 292 | 2,004 | 8,765
GR6 | 6/15/2020 | $175,000 | $227,847 | $402,847 | 695 | 1,526 | 10,077
GR7 | 9/13/2020 | $450,000 | $274,830 | $724,830 | 857 | 1,400 | 13,400
GR8 | 12/1/2020 | $500,000 | $571,000 | $1,071,000 | 371 | 4,788 | 21,521
GR9 | 3/10/2021 | $500,000 | $1,300,000 | $1,800,000 | 654 | 12,000 | 180,600
GR10 | 6/16/2021 | $700,000 | $1,100,000 | $1,800,000 | 723 | 14,500 | 310,200
GR11 | 9/8/2021 | $965,000 | $1,590,000 | $2,555,000 | 877 | 16,000 | 425,000
GR12 | 12/1/2021 | $3,000,000 | $3,100,000 | $6,100,000 | 944 | 27,000 | 473,000
GR13 | 3/9/2022 | $3,200,000 | $1,450,000 | $4,650,000 | 1,000 | 17,000 | 300,000
GR14 | 6/8/2022 | $3,200,000 | $1,700,000 | $4,900,000 | 1,250 | 44,000 | 600,000
GR15 | 9/7/2022 | $3,100,000 | $1,300,000 | $4,400,000 | 1,495 | 39,750 | 465,000
Alpha (16) | 1/17/2023 | $1,000,000 | $667,000 | $1,667,000 | 159 | 30,893 | 200,453
Beta (17) | 4/25/2023 | $1,750,000 | $607,000 | $2,357,000 | 468 | 19,022 | 106,431
GG18 | 8/15/2023 | $1,255,000 | $679,462 | $1,934,462 | 480 | 47,513 | 328,660

Evolution to Grants Stack and Allo Protocol

Gitcoin’s current focus is pushing the adoption of permissionless quadratic funding and the Allo Protocol. They don't want to keep building up a giant team at Gitcoin to run bigger, broader rounds; instead, they want to empower external teams and communities to run their own programs and rounds through the tools they’re building.


For their specific needs, they introduced the Grants Stack and the Allo Protocol. Originally, the smart contracts for Allo served as the backbone for Gitcoin's Quadratic Funding rounds. However, as they continued to run their grants program, it became evident that communities had nuanced preferences for deploying their funds. To address this, Gitcoin developed the Grants Stack, enabling anyone to launch and manage a QF round. This platform is driven by Allo, offering the flexibility for users to create innovative tools for capital allocation.

Challenges faced by the grants program: 

The grants program grapples with persistent challenges, including sybil attacks, fraud detection, and setting the right eligibility criteria. Being an open-source platform, it's inherently vulnerable to scammers, necessitating an ongoing evolution of strategies to combat them. Furthermore, orchestrating grant rounds and enhancing user experiences is an ongoing journey, marked by continuous refinement, bug identification, and subsequent improvements.

What do they do very well:

Gitcoin excels in community building, having cultivated one of the most robust communities through their Quadratic Funding (QF) rounds. Gitcoin rounds have also played a key role in funding important projects covering timely issues such as climate, advocacy[150], and even real-world causes[151] such as their Ukraine-specific pool of GR13 and their UNICEF round. The impact of their funding has crossed over from being solely web3 to supporting projects with meaningful impact globally. The distinctive donations they receive during each grant round serve as a testament to this strength. Moreover, they have successfully established a prominent and resilient brand, rooted deeply in the ethos of public goods funding.

Additional Materials:

Discussion and Takeaways

In this section, we will focus on providing some more color on the following takeaways:

  • Better alignment between intentions and positioning is helpful for grant programs and potential participants
  • Investing in operations is key to a program’s success
  • There is not a single way to run grant programs; explore the landscape of options
  • Using rubrics has its pros and cons
  • Transparency can be helpful though it is quite variable in the space
  • More seriously thinking about the governance of grants programs is a natural part of the maturity of issuing grants
  • There is a lot of room for collaboration across programs, sometimes even within a single organization’s ecosystem
  • The space can benefit from more consistency around data
  • Systems of support for grantees are important
  • Conflicts of interest and accountability need to become more robust
  • Ensuring fairness through checks and balances
  • Planning and systemizing impact reporting
  • Need for continuous evolution and reassessment

Let’s dive into each one of these takeaways.

Better align intentions and positioning

During the bull market that arose after DeFi summer (the summer of 2020) and began cooling down during the summer of 2022, many organizations were flush with cash and rushed to deploy capital to keep up with their competitors. As a result, many grant programs were launched that differed greatly in terms of desired impact, and yet all of them called themselves grant programs.

Grant programs should start by honestly asking what they hope to accomplish. If the goal is to grow the ecosystem, call it an ecosystem support fund. If the goal is to deploy capital as part of marketing, call it a growth fund. By recognizing the desired outcome and naming the program appropriately, other problems are alleviated, ranging from what metrics to use to whom to market the program to. More on the importance of having a clear mission below in the Grant Governance section.

Operations are key - dedicated people, refined process

If the goal of a program is to effectively deploy capital for some intended outcome or results, then there need to be operations in place. More specifically, there need to be individuals dedicated to the grants program, especially if the intention is to run the program over time, to be able to efficiently deploy capital, to update the community with results, and to work towards developing metrics.

It seems that to deploy at least a million dollars in capital a year, a program needs at least one dedicated individual, ideally with some administrative support as well as a set of reviewers. Even a more decentralized program will likely still need the part-time focus of a few individuals to work with the community to get the program off the ground. The grant program that Stacks Foundation is working on will be an interesting experiment to keep an eye on in this regard.

Exploring the grant-type landscape  

As we've alluded to throughout this report, grants are not a monolith. There are different types of grant approaches, and each can be optimized toward slightly different ends, providing grant operators with a spectrum of tools to choose from.

Quadratic funding is very good for signaling community support. It requires finding capital for matching funds as well as a robust marketing and community apparatus.
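The matching mechanism behind QF can be sketched in a few lines. The formula below (a project's match weight is the square of the sum of the square roots of its contributions) is the standard quadratic funding construction; the function name and the pro-rata split of the matching pool are illustrative assumptions, not any specific program's implementation:

```python
import math

def qf_match(contributions_per_project, matching_pool):
    """Allocate a matching pool via quadratic funding.

    Each project's raw match weight is (sum of sqrt(contribution))^2;
    the pool is then split pro rata across those weights.
    """
    weights = {
        project: sum(math.sqrt(c) for c in contributions) ** 2
        for project, contributions in contributions_per_project.items()
    }
    total = sum(weights.values())
    return {p: matching_pool * w / total for p, w in weights.items()}

# Many small donors outweigh one large donor of the same total:
match = qf_match(
    {"broad_support": [1.0] * 100,   # 100 donors x $1 -> weight 10,000
     "single_whale": [100.0]},       # 1 donor x $100  -> weight 100
    matching_pool=10_000,
)
```

This is why QF signals community support: the project with 100 small donors captures nearly the entire matching pool, even though both projects raised the same $100 in direct contributions.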

Prospective grants are good for finding ideas that the ecosystem may not have known it needed. They can, however, be overwhelming as they may open the floodgates to a lot of applications.

RFPs are great for focused grant programs on concrete problem areas. They require more thought and diligence up front but can lead to much higher quality results.

Retrospective grants are great for supporting existing and ongoing contributions. Just focusing on these grants can ignore the support projects need to get to the point of qualifying for such a grant in the first place.

Research grants are good for, as the name implies, research. Like RFPs, they require technical reviewers and a mindset shift to better understand the more open-ended nature of research relative to engineering.

There is no ‘right’ grant structure broadly speaking. Rather, these are tools in the toolbelt of those structuring and managing grant programs and multiple tools can be utilized (though more operational infrastructure will be required to manage it).

Benefits and dangers of rubrics

Unless someone is running a grants program purely for marketing reasons (as in, just to be able to say they issued grants), some degree of metrics is needed.

The benefit of having more robust metrics to the point of having a rubric is that there is a more streamlined approach towards assessing and reviewing grants. The con is that the grant program might start narrowing its scope towards the rubric, which is rarely the intended purpose. The presence of a rubric can make it too easy to focus on the rubric itself and to lose sight of the greater context.

The newer and less tested a rubric is, the more important it is to remind those who are reviewing to keep the mission of the grant program in mind.
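To make the trade-off concrete, here is a minimal sketch of rubric-based scoring. The criteria, weights, and function names are hypothetical; the free-text mission-fit field illustrates one way to keep the broader context attached to the numeric score rather than letting the rubric stand alone:

```python
# Hypothetical rubric: criteria and weights are illustrative only.
RUBRIC = {
    "team_experience": 0.2,
    "ecosystem_impact": 0.4,
    "feasibility": 0.3,
    "budget_clarity": 0.1,
}

def score_application(ratings, mission_fit_note=""):
    """Combine 1-5 criterion ratings into a weighted score.

    A free-text mission_fit_note travels with the score so reviewers
    retain the greater context that the rubric alone can miss.
    """
    missing = set(RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    score = sum(RUBRIC[c] * ratings[c] for c in RUBRIC)
    return {"score": round(score, 2), "mission_fit": mission_fit_note}

result = score_application(
    {"team_experience": 4, "ecosystem_impact": 5,
     "feasibility": 3, "budget_clarity": 4},
    mission_fit_note="Strong fit with protocol security goals.",
)
```

The danger the takeaway describes lives in the weights themselves: whatever is in `RUBRIC` is what reviewers will optimize for, which is why the mission reminder matters most when a rubric is new and untested.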

Role of transparency

The role of transparency is one that is both fluid and unclear at this point in time, so we will focus on two aspects: how transparent grant programs are and why they chose the approach they currently have in place.

The transparency of grant programs is variable. The more transparent programs, such as Aave and Uniswap, provide full accounting of the projects they have funded. Programs that utilize Questbook’s tools, such as Compound (which, as a reminder, was administered by Questbook) and TON, make all of the proposals visible via the Questbook app.

Some programs have yet to provide detailed accounting of the grants they have issued. Communities do seem quite appreciative of knowing how the grants treasury has been spent, though we did not explore whether transparency provides quantifiable direct value given the operational overhead of such reporting.

It is interesting to note that certain larger programs have expressed concerns about the influence their transparency might have on future submissions. Namely, programs such as the Ethereum Foundation’s ESP and the Solana Foundation noted that the community can read too much into the information they make available (e.g. if they mentioned that the average grant size in a certain time frame was $x USD, they would then receive mostly applications requesting $x, even if that was not the intention). On a related note, Gitcoin shifted from an open approach to reviewers’ comments to a more closed one in order to protect reviewers from backlash and personal attacks.

A final point to note in regards to transparency is that it is seen as a very important part of allowing the community to have some form of input into the grant program. It would be interesting to see programs that empower a community committee to audit the program, whose members might have access to more data than the community at large. This is to gesture at the fact that transparency doesn’t have to be binary and that it can be helpful to think about the gradient between not transparent and fully transparent.

Operations + community begets the next frontier of grants: grant governance!

Grants governance, as we are using the term, refers to the systems of feedback loops and decision-making in a grant program.

A few things to consider when thinking through grants governance include:

  • What is your goal?
  • Who is best suited to fulfill that goal?
  • Who makes the decisions - the grants lead, a small team, the community as a whole, or something in between?
  • How do you remain transparent and accountable to the community?
  • How do you resolve issues (i.e. people being unhappy with the process, the team being unclear on impact, etc.)?
  • What tools and/or mechanisms are used to deploy and/or review grants?
  • Which grant categories (from the landscape above) make the most sense for the granting organization’s goals?
  • What systems of feedback loops and accountability does one need to create in order to empower relevant experts while remaining accountable to the community?

To quote from the Kellogg Evaluation Handbook, here are a few recommendations outlined to strike a better balance between wanting to demonstrate effectiveness and still maintaining a focus on how to improve the processes and evaluation program itself:

  • “Learn about and reflect on alternative paradigms and methods that are appropriate to our work.”[152]
  • “Question the questions”[153]
  • “Take action to deal with the effects of paradigms, politics, and values.”[154]

This is much easier said than done; as the space and societies at large constantly show us, good governance is hard.

Inter-program collaborations

Collaboration across programs rarely happens, and when it does, it builds off existing social networks. The lack of such collaboration precludes the space from tackling some of its biggest problems, especially deeply complex and technical ones that could benefit from large-scale efforts.

It's unlikely that a single organization can run a multi-million-dollar program in a single domain area, which may be needed in areas such as zero-knowledge proofs (zkp) or governance. Imagine a zkp challenge, sponsored by the largest relevant players in the space, organized the way DARPA ran its autonomous vehicle challenges in the 2000s.

By splintering grant and research dollars, the potential total impact is constrained. Additionally, not increasing the amount of collaboration limits the ability to build networks of reviewers that can serve multiple ecosystems as opposed to each one competing for the same expertise to help review.


Need for more consistency - applications, data from programs

We saw two clear areas of adjusting for more consistency between grant programs - the grant applications and the data that gets shared from programs.

Each grant program has notable differences in its application, even for the portions that could be standardized (i.e. name of the project, team members, project description, etc.). Such standardization could not only make the lives of applicants easier, but would also open the door to something like a common app for grant applications: an open-source layer that integrates across tooling options. Such a tool would help create a reputation layer at the level of basic grantee information across programs, which in turn could help reduce grant farming.

Another potential area for consistency is grant data sharing and metadata. The main benefit of standardizing this would be twofold: it would create a shared conception of data transparency for grant programs, and it would make it easier to analyze grants and grant programs. Such a metadata standard would make it much easier to maintain a verifiable database of the number of grants funded and the total amount issued, along with any other data most grant programs would feel comfortable sharing.
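As a sketch of what such a standard could look like, here is a hypothetical grant-metadata record; every field name below is an illustrative assumption, not an existing standard:

```python
# Hypothetical grant-metadata record; field names are illustrative,
# not drawn from any existing standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class GrantRecord:
    program: str          # e.g. "ExampleDAO Grants"
    grantee: str
    amount_usd: float
    date_issued: str      # ISO 8601 date
    category: str         # e.g. "developer tooling", "research"
    payout_tx: str = ""   # onchain transaction hash, if paid onchain

record = GrantRecord(
    program="ExampleDAO Grants",
    grantee="example-project",
    amount_usd=25_000,
    date_issued="2023-06-01",
    category="research",
)
serialized = json.dumps(asdict(record))
```

Even a minimal shared schema like this would let anyone aggregate records from many programs into one verifiable database of grants funded and totals issued.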

Building systems of support for grantees

The grant process doesn't end when the grant is issued. Once a grant is underway, the focus of the granting organization shifts from deciding whom to fund to determining what else it can do to help grantees maximize their outcomes. This can include connecting grantees with each other, with other relevant communities, or with other relevant resources (including vetted service providers). The more mature the granting ecosystem, the more support there is for funded projects beyond the grant process itself.

Other examples of grantee support include grantee office hours, social events, or marketing support, to name a few. These support systems will differ depending on the goal of the grant program and the nature of grantees. A good place to start with adding more support can be asking grantees what they could have benefited from.

Conflicts of interest and accountability

Like in all capital-issuance environments, figuring out the right processes for disclosing conflicts of interest is important. This can be challenging in the web3 space, given the many anonymous contributors and the fact that conflict-of-interest disclosure is generally underdeveloped. There have been some attempts at codifying such disclosures, but from what we have seen, they are mostly self-reported with little actual accountability. We are unaware of any violations that led to more than a forum discussion or someone voluntarily resigning from a role.[155] Given the decentralized nature of these programs and their parent organizations, it remains unclear what kind of legal recourse or non-legal processes would be followed in cases of clear disregard and violation.

Picking up the thread from the Accountability and Grants Governance sections above, it is important to clearly state who holds others accountable in the system. Even when relying on non-traditional legal options, having at least some team tasked with auditing, along with clear processes for when issues get managed internally within the organization versus sent to a Kleros-type arbitration process, would go far in providing much more robust accountability.

Ensuring fairness through checks and balances

There is a need for more robust systems of checks and balances among reviewing teams and individuals, particularly when the reviewers and grantees have existing relationships. A potential pitfall is that familiarity bias may lead to the mistaken belief that a known grantee is automatically more legitimate or qualified. This poses a challenge as applicants with established profiles could be unfairly prioritized over newcomers who haven't yet had a chance to build a track record.

To counteract this, a rigorous and objective assessment system is essential, which distinguishes between new projects from first-time grantees and those from recurring ones. Without such checks and balances, both the review process and broader governance and review structures risk becoming biased. Accountability mechanisms must be in place to detect, rectify, and prevent such biases, ensuring fairness and meritocracy in grant allocations.

Planning and systemizing impact reporting

Most programs have treated impact measurement as an afterthought, and some do not report at all. For some, there’s a sense of fear around reporting, which feels counterintuitive to the transparency and growth ethos communicated as values to their communities.

Introducing systems that measure metrics of any sort can be one of the most effective ways to assess impact over the lifetime of a grants program. This can include pulling information from relevant experts and tapping the right people from the community (or from the stakeholders most affected by the program's impact).

By default, status tracking is missing in most programs unless grantees voluntarily report. Most programs also recognize that purely quantitative metrics do not suffice to capture the full scope of impact; figuring out the right systems for capturing qualitative information is therefore also important, if less clear.

Need for continuous evolution and reassessment

A continuous cycle of evolution and reassessment is essential to ensure that grant programs remain aligned and responsive to the ever-evolving landscape of the ecosystem. As the ecosystem matures, its needs and challenges shift. What may have been a priority at the inception of the grants program might not hold the same importance later on. Therefore, by periodically re-evaluating the grant program, it is possible to identify emerging trends, address new challenges, and cater to the changing requirements of developers, projects, and the broader community.

Such evolution ensures that the program remains resilient against external market shocks, tech advancements, and shifts in user behavior. By staying proactive and adaptable, the grant program can ensure that it consistently offers meaningful support, fosters genuine innovation, and paves the way for the sustainable growth of the ecosystem.

Conclusion & Future Areas of Exploration

We see this report as a first step towards creating more of a shared understanding on grant programs in web3. There is still much to explore on these topics, from covering more grant programs, to delving deeper into very specific domains. We are outlining a few threads that we are personally interested in exploring after this research, but this is by no means an exhaustive list of the work to be done. If any of these interest you, do reach out.

Community

  • Bring together grant operators digitally across ecosystems for more collaboration
  • Workshops to bring together grant operators to map problems, coordinate experiments, co-fund as appropriate, and explore other avenues of collaboration

Research

  • Conduct a thorough audit of all stated grants paid against public onchain data and disclosures of fiat-based payments
  • Explore frameworks or matrices (i.e. grant maturity index, operations overhead per dollar spent, operations overhead per grant issued, etc.)
  • Conduct grantee assessments
  • Conduct a literature review of web2 / science funding and impact measurement
  • Review of the state of tooling and proposed tooling projects / experiments
  • Shadow teams to better map the operational overhead of grant programs
  • Perform an in-depth application process review

Resources Sharing and Defining Standards

  • Create an open database of stats from programs, linked to onchain data where possible
  • Create a metadata standard
  • Create shared resources such as a repo of publicly known information about grants
  • Increase coordination amongst resource mapping efforts
  • Experiment with new mechanisms for the governance of grant programs

We hope that this report seeds both virtual and in-person discussions. We will do our best to join various conversations taking place, whether on Twitter Spaces, on forums, or at conferences. We will also continue to publish more work as we are able.

Despite how much is happening and consequently how fast time moves in web3, it is important to recognize that granting in the space is still in its infancy. We are greatly appreciative of the fact that we were able to contribute our views to the growing base of knowledge and eagerly await the coming explorations, experiments, and innovations.

Acknowledgements

We would like to thank all of the individuals who contributed to this report in various ways (in alphabetical order), including:  

  • Andrea Baglioni
  • Azeem Khan
  • Ben West
  • 0xBill
  • Cameron Dennis
  • Carl Cervone
  • Christopher Lema
  • Cooper Midroni
  • Connor O'Day
  • Courtney Jensen
  • Disruption Joe
  • Evan Miyazono
  • Federico Landini
  • Holke Brammer
  • Jorge Soares
  • Julia Barzyk
  • Justice Condor
  • Kenneth Ng
  • Kevin Owocki
  • Laura Banks
  • Luc Lamarche
  • Majed Alnaji
  • Marco Grendel
  • Marko Okhman
  • Meg Listler
  • Monet du Plessis
  • Natalie Crue
  • Rohit Malekar
  • Ruchil Sharma
  • Ryan Terribilini
  • Sov
  • Silvia Bessa
  • Umar Khan
  • QZ

Special thanks goes to the Gitcoin community and the stewards who helped shepherd us through the GCP process, and to Rich Brown for funding this work. We would not have been able to do this work without your support! Special thanks also go to Laura and Sov for being early supporters and encouraging us to submit a GCP in the first place, and to Azeem for connecting the collaborators.


[1] For Protocol Labs, we covered the Research Grants program that has been sunset. This does not relate to any Filecoin Foundation grant programs.

[2] These numbers have not been verified using on-chain data and were gleaned from public writing or our interviews.

[3] https://hbr.org/2022/05/what-is-web3

[4] https://www.businessinsider.com/most-expensive-nft-list-top-selling-nfts-crypto-art-sales-2021-3?op=1

[5] https://decrypt.co/53950/the-10-biggest-icos-heres-where-the-money-went

[6] https://venturebeat.com/security/web3-crypto-fraud/

[7] https://thenewstack.io/web3-developer-ecosystem/

[8] https://deepdao.io/organizations

[9] https://www3.weforum.org/docs/WEF_DAOs_for_Impact_2023.pdf

[10] https://beincrypto.com/what-bear-market-web3-investments-soared-2022/

[11] https://banklesspublishing.com/the-essential-web3-glossary/

[12] https://knowledge.wharton.upenn.edu/article/blockchain-brings-social-benefits-emerging-economies/ 

[13] https://www.britannica.com/money/topic/public-good-economics 

[14] https://www.un.org/techenvoy/content/digital-public-goods

[15] https://digitalpublicgoods.net/governance/

[16] https://stackoverflow.blog/2021/01/07/open-source-has-a-funding-problem/

[17] https://www.fordfoundation.org/work/learning/research-reports/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure/

[18] https://esp.ethereum.foundation/

[19] https://solana.org/grants

[20] https://www.uniswapfoundation.org/grants

[21] https://atomicwallet.io/academy/articles/what-is-aave-complete-guide 

[22] https://sovereignsignal.substack.com/p/aave-grants-retrospective 

[23] https://medium.com/aave/aave-ecosystem-grants-88260ede1485 

[24] https://docs.aave.com/aavenomics/ecosystem-overview 

[25] Received from AGD staff

[26] https://aavegrants.org/

[27] https://governance.aave.com/t/arc-aave-community-grants-program/3642 

[28] https://governance.aave.com/t/temp-check-aave-grants-continuation-proposal/14831 

[29] https://governance.aave.com/t/temp-check-aave-grants-continuation-proposal/14831 

[30] This does not include grants for hackathons, events, and sponsorships

[31] https://governance.aave.com/t/aave-grants-update-and-renewal/6371 

[32] Their monthly updates started from May 2022, and share comprehensive data on these metrics

[33] https://governance.aave.com/t/agd-renewal-4-recent-work-updates/11585/10 

[34] Metrics for past individual years were not shared in forum posts. Monthly updates are comprehensive and started from July 2022. https://governance.aave.com/t/temp-check-aave-grants-continuation-proposal/14831 

[35] Does not include events

[36] https://gho.xyz/ 

[37] https://www.comp.xyz/t/compound-grants-program/1292

[38] https://boardroom.io/compound/proposal/cHJvcG9zYWw6Y29tcG91bmQ6YXJjaGl2ZTo0MA==

[39] https://www.comp.xyz/t/compound-grants-program-lessons-and-next-steps/2264

[40] https://compound.finance/governance/proposals/136

[41] https://www.questbook.xyz/

[42] https://www.comp.xyz/t/cgp-2-0-updates-and-renewal/4518/25

[43] https://www.comp.xyz/t/draft-for-compound-grants-program-3-0/4496

[44] https://www.comp.xyz/t/alastor-w3s-compound-strategy-growth-proposal/4569

[45] https://compound.finance/

[46] https://www.comp.xyz/t/cgp-2-0-delegated-domain-allocation-by-questbook/3352 

[47] https://www.comp.xyz/t/compound-grants-program-lessons-and-next-steps/2264 

[48] https://www.comp.xyz/t/cgp-2-0-updates-and-renewal/4518/29

[49] https://docs.google.com/document/d/1PYYFazs9XxcidJn0Q0zl-XZEHWTaWZh_KX72Ym6qS0Y/edit 

[50] https://docs.google.com/document/d/1d_Fj8gXPqPuPhf6nJEPc-SwTV7EqBzAfXfl4OGFZjjE/edit#heading=h.aze6a5lp1qrf

[51] https://questbook.notion.site/Madhavan-CGP-DevTooling-Onepager-2a680b4aca784147b5f2ff6a9aadba11

[52] https://docs.google.com/document/d/1Dm_x3pThG1ROjjZVDFpS41L5ghtQVy3bcT00SmCujV4/edit#heading=h.aze6a5lp1qrf

[53] https://www.comp.xyz/t/cgp-2-0-updates-and-renewal/4518/25

[54] https://www.questbook.app/

[55] https://www.comp.xyz/t/cgp-2-0-updates-and-renewal/4518/5

[56] https://www.comp.xyz/t/cgp-2-0-updates-and-renewal/4518/20

[57] https://docs.google.com/forms/d/e/1FAIpQLScAZHpkkh12KwBfOw8kZaM7mWBUdj7uSo9XT1TZiDqCykMBsQ/viewform

[58] https://questbook.notion.site/CGP-2-0-Funding-Breakdown-e1c8469c50054d25b8930b71e0666d10

[59] https://www.comp.xyz/t/compound-grants-program-lessons-and-next-steps/2264

[60] https://web.archive.org/web/20140208030136/http://www.ethereum.org/ 

[61] https://blog.ethereum.org/2016/02/29/homestead-release 

[62] Taking into account # of developers, market cap, and maturity of ecosystem

[63] https://ethereum.foundation/ef 

[64] https://blog.ethereum.org/2014/12/18/call-bug-bounty-hunters 

[65] https://blog.ethereum.org/2018/10/24/how-the-ethereum-foundation-grants-program-makes-decisions 

[66] https://snapshot.org/#/bitdao.eth/proposal/0xe81f852d90ba80929b1f19683da14b334d63b31cb94e53249b8caed715475693

[67] https://forum.mantle.xyz/t/archived-bit-network-an-iterative-modular-chain-approach/2988

[68] https://forum.mantle.xyz/t/passed-bip-21-optimization-of-brand-token-and-tokenomics/5327

[69] https://snapshot.org/#/bitdao.eth/proposal/0xe81f852d90ba80929b1f19683da14b334d63b31cb94e53249b8caed715475693

[70] https://www.mantle.xyz/blog/announcements/mantle-network-mainnet-alpha

[71] https://forum.mantle.xyz/t/passed-mip-24-mantle-ecofund/4692

[72] https://forum.mantle.xyz/t/passed-bip-19-securing-the-future-with-mantle-a-comprehensive-plan/4533

[73] https://forum.mantle.xyz/t/passed-mip-24-mantle-ecofund/4692

[74] https://www.mantle.xyz/

[75] https://www.mantle.xyz/grants

[76] Comment from Cooper Midroni, Mantle’s grant lead

[77] https://solana.org/about

[78] https://solana.org/grants

[79] https://ton.org/en/roadmap

[80] https://ton.org/whitepaper.pdf

[81] https://questbook.app/dashboard/?proposalId=0x2f8&isRenderingProposalBody=true&chainId=10&grantId=0xe92b011b2ecb97dbe168c802d582037e28036f9b

[82] https://ton.org/en

[83] https://ton.org/en/grants

[84] https://questbook.app/dashboard/?proposalId=0x2f8&isRenderingProposalBody=true&chainId=10&grantId=0xe92b011b2ecb97dbe168c802d582037e28036f9b 

[85] https://ton.org/en/grants

[86] https://ton.org/en/grants

[87] https://blog.ton.org/ton-foundation-announces-telegram-web3-grants#grants-categories

[88] https://questbook.app/dashboard/?proposalId=0x2f8&isRenderingProposalBody=true&chainId=10&grantId=0xe92b011b2ecb97dbe168c802d582037e28036f9b

[89] https://ton.org/en/grants

[90] https://gov.uniswap.org/t/governance-proposal-create-the-uniswap-foundation/17499

[91] https://www.uniswapfoundation.org/

[92] https://www.uniswapfoundation.org/grants

[93] Through June 2022

[94] https://uniswapfoundation.mirror.xyz/RM7VRw1TLrZPio0G7QgiS2q-kooo8D65zNihK_2NlDc

[95] https://www.algorand.foundation/news/algorand-foundation-announces-first-auction 

[96] https://www.algorand.foundation/news/three-initial-recipients-have-been-awarded-over-6m 

[97] https://near.org/learn 

[98] https://near.org/about 

[99] https://medium.com/nearprotocol/what-is-the-near-digital-collective-d7bb4da9b400 

[100] https://pages.near.org/blog/ndc-v1-governance-elections-faq/ 

[101] https://docs.google.com/document/d/1OcTKyl6j9H5pF7D5b42NbBuSp0bwqZ0TIeYi2RvZWuQ/edit?usp_dm=false#heading=h.bomy2ycwdcf0 

[102] https://docs.google.com/document/d/1Gj43xzngdxvU5U6Nyr14WdhXwX0Jim2s7FsToWAJqR0/edit#heading=h.5o1fnawgsjg3 

[103] https://app.neardc.org/ 

[104] https://near.social/mob.near/widget/MyPage?accountId=devgovgigs.near 

[105] https://docs.google.com/document/d/1-0kw-kV1iDa-3h6bMA9MEdmdJal8w5yiLH56K9hfKBE/edit#heading=h.k37ycskk6vzz 

[106] https://forum.aurora.dev/t/transitioning-aurora-community-to-near-digital-collective-empowering-decentralization-and-the-vitality-of-community-in-the-aurora-ecosystem/2304 

[107] https://github.com/Mintbase/Grants-Program 

[108] https://gov.near.org/t/near-constitution-v1-feedback-requested/30729 

[109] https://pages.near.org/blog/near-foundation-transparency-report/

[110] https://messari.io/project/polygon/profile 

[111] https://sovereignsignal.substack.com/p/polygon 

[112] https://sovereignsignal.substack.com/p/polygon

[113] https://www.coindesk.com/markets/2020/08/03/matic-pledges-5m-in-tokens-to-entice-defi-projects-into-building-on-its-network/

[114] https://forum.polygon.technology/t/building-the-polygon-ecosystem-dao-a-recap-of-season-0/2037 

[115] https://polygon.technology/blog/polygons-journey-in-first-half-of-2022-developer-recap 

[116] https://forum.polygon.technology/t/polygon-village-launch-new-2023-format/11609 

[117] https://protocol.ai/blog/ann-research-rfp/

[118] https://github.com/protocol/research-grants

[119] https://protocol.ai/blog/announcing-grants-april2019/

[120] https://research.protocol.ai/blog/2020/protocol-labs-launches-a-covid-19-open-innovation-grants-program/

[121] https://research.protocol.ai/blog/2020/announcing-our-covid-19-open-innovation-grant-awardees/

[122] https://research.protocol.ai/blog/2021/introducing-our-new-grant-spectrum/

[123] https://protocol.ai/about/

[124] https://research.protocol.ai/outreach/

[125] https://research.protocol.ai/blog/2021/introducing-our-new-grant-spectrum/

[126] https://github.com/filecoin-project/devgrants

[127] https://github.com/ipfs/devgrants 

[128] https://github.com/libp2p/devgrants

[129] https://fundingthecommons.io/

[130] https://research.protocol.ai/blog/2020/meet-the-latest-protocol-labs-research-grant-recipients/; these numbers include 10 COVID related grants and 5 RFP based grants

[131] https://research.protocol.ai/blog/2020/announcing-our-covid-19-open-innovation-grant-awardees/

[132] https://research.protocol.ai/blog/2021/protocol-labs-research-funding-recipients-2021-part-1/

https://research.protocol.ai/blog/2021/protocol-labs-research-funding-recipients-2021-part-2/

[133] https://research.protocol.ai/blog/2022/protocol-labs-research-funding-recipients-2022/

[134] https://research.protocol.ai/blog/2023/private-retrieval-grant-2023-roundup/; one of these grants was accounted for in 2022 and that total was subtracted from the $750k in the blog post

[135] https://hypercerts.org/

[136] https://www.opensource.observer/

[137] https://research.protocol.ai/blog/2022/introducing-cryptonet-network-grants/

[138] https://gov.gitcoin.co/t/gcp-012-state-of-web3-grants-report/14562/

[139] https://blog.clr.fund/clr-fund-explained-pt-1/

[140] https://privacy-scaling-explorations.github.io/maci/

[141] https://blog.clr.fund/progress-update-2/

[142] https://github.com/clrfund/constitution

[143] https://blog.clr.fund/round-9-is-underway/

[144] https://blog.clr.fund/

[145] https://clr.fund/#/about/how-it-works

[146] https://twitter.com/clrfund/status/1668245518152265728?s=20

[147] https://twitter.com/clrfund/status/1668245521201508352

[148] https://twitter.com/clrfund/status/1668245523965558789

[149] Total grantees paid number calculated from summing up the individual Grant Round grantees paid numbers

[150] https://www.gitcoin.co/blog/crypto-advocacy-gitcoin 

[151] https://decrypt.co/95417/gitcoin-grant-gr13-raised-1-million-in-crypto-for-ukraine/#:~:text=GR13%20is%20the%2013th%20quadratic,Ukraine%20as%20of%20Thursday%20evening.

[152] https://wkkf.issuelab.org/resources/9191/9191.pdf Page 10

[153] https://wkkf.issuelab.org/resources/9191/9191.pdf Page 12

[154] Ibid

[155] https://dydx.forum/t/unveiling-alexios-unmasking-fraudulent-activities-with-concrete-evidence/952