|Timestamp||Project Name||Project Description||Code repository (URL)||Website (URL)||Project Roadmap (URL)||Code of conduct (URL)||I understand that if accepted, the project will be required to follow the CNCF IP Policy (https://github.com/cncf/foundation/blob/master/charter.md#11-ip-policy)||I understand that I am donating all project trademarks and accounts to the CNCF||Please explain how your project is aligned with the cloud native computing ecosystem.||Please list similar projects in the CNCF or elsewhere||Guidelines/help for project contributors (URL)||Explanation of alignment/overlap with existing CNCF projects (optional)||Existing project overview presentation (optional)||Email Address||Why do you want to contribute your project to the CNCF? What would you like to get out of being part of the CNCF?||Maintainers file (optional)||Link to project artwork: (optional)|
|5/30/2022 13:11:47||ContainerSSH||ContainerSSH launches a new container for each SSH connection in Kubernetes or Docker. The user is transparently dropped in the container and the container is removed when the user disconnects. Authentication and container configuration are dynamic using webhooks, no system users required.|
The main use cases among our present users are building lab and vendor-access systems, debugging production systems under strict auditing requirements, and running honeypots for security and academic research.
|https://github.com/containerssh||https://containerssh.io/||https://containerssh.io/development/dashboard/||https://github.com/ContainerSSH/community/blob/main/CODE_OF_CONDUCT.md||I accept the CNCF IP Policy||I Accept||ContainerSSH bridges the gap between traditional SSH and container workloads in container environments to ensure compliance and auditability, and improves security and flexibility.||There are currently no similar open source projects.||https://containerssh.io/development/||ContainerSSH is complementary to Kubernetes and exposes Prometheus metrics. It can also use Open Policy Agent for authentication and configuration as shown in the "examples" repository.||https://www.youtube.com/watch?v=Cs9OrnPi2IM||firstname.lastname@example.org||We are seeing an increase in users (pull stats https://docs.google.com/spreadsheets/d/1vOrYWDTfl75c5bmDK52ApV8NyDW43dL0bJLAaV9U6ks/edit#gid=0) and would like to work towards more contributors and faster feature releases.|
As this project is fairly complex and we are working together with large companies and institutions, we believe that the CNCF is the right organization for ContainerSSH.
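The webhook-driven flow described above could be wired up with a configuration along these lines. Every field name here is an illustrative assumption rather than the verbatim ContainerSSH schema, so consult the project's documentation for the real keys.

```yaml
# Illustrative sketch only: field names are assumptions, not the exact
# ContainerSSH configuration schema.
ssh:
  listen: "0.0.0.0:2222"            # SSH endpoint exposed to users
auth:
  url: "http://auth-server:8080"    # webhook deciding who may log in
configserver:
  url: "http://config-server:8080"  # webhook returning per-user container config
backend: kubernetes                 # launch one pod per SSH connection
```

Because authentication and configuration are delegated to webhooks, no system users need to exist on the host.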
|6/13/2022 10:46:34||OpenFGA||OpenFGA is a Fine-Grained Authorization System inspired by Google's Zanzibar paper. |
It’s based on a Relationship-Based Access Control model that is more expressive than alternatives like RBAC/ABAC, while providing high reliability and low latency at scale.
The combination of expressiveness and the ability to scale makes it suitable to be used across multiple domains, enabling standardization on a single authorization implementation.
|https://github.com/openfga | https://github.com/openfga/openfga||https://openfga.dev/||https://github.com/orgs/openfga/projects/1/views/1?layout=board||https://github.com/openfga/.github/blob/main/CODE_OF_CONDUCT.md||I accept the CNCF IP Policy||I Accept||The need for Fine-Grained Authorization (FGA) is growing in a variety of cloud native applications. It’s becoming increasingly common to enable end users to manage permissions for specific resources (e.g. ‘Share’ in Google Drive, or a manager granting a direct report access to a specific virtual machine). This comes with the need for auditing who grants access to what, who has permissions to access specific resources, and who is actually accessing them.|
Many developers have to make trade-offs between the expressiveness of their access policies and the cost of developing and maintaining the policy engine and the data plane enforcing the policies. OpenFGA aims to make it easier to integrate authorization into your applications, by making it easier to model arbitrary access policies and by providing a permission database engine optimized to make access policy decisions with low latency and consistency at large scale.
There are already solutions in the cloud-native space that help address part of these problems (e.g. Open Policy Agent), and OpenFGA can integrate with them to help with these more complex problems.
There’s an opportunity to create a large ecosystem of open source projects around OpenFGA: integrations with policy engines like OPA, proxies like Envoy, portals like Backstage, API gateways like Kong, SCIM-compatible user directories, developer frameworks like React, Express, etc.
Open Policy Agent lets you define authorization policies. Our product can be used in conjunction with OPA. OPA policies will be able to apply Zanzibar-style FGA rules.
Other open source projects:
- Zanzibar-inspired, ReBAC: no significant conceptual differences, different technical choices/trade-offs
- Not based on ReBAC
|https://github.com/openfga/.github/blob/main/CONTRIBUTING.md||Open Policy Agent lets you define authorization policies and evaluate them by loading data into memory and evaluating the policies. With OPA, the data plane providing the permission/policy data is the responsibility of the client.|
OpenFGA is similar to OPA in that you define authorization policies and load data into it to evaluate the policies. However, it acts as a permission database which stores the actual relationships/policy data used to evaluate the policies. These are stored in a scalable way, backed by persistent storage solutions, so the permission data doesn't have to be loaded every time it is needed; it is simply queried on demand. OpenFGA is a database for the relationships that govern access to resources in your system, as opposed to just being a policy engine that you have to load data into at runtime to evaluate a policy.
In scenarios where there’s a large number of permissions or when those change often, OPA users can benefit from integrating it with OpenFGA.
Portals like Backstage usually have a pluggable model for handling authorization. They can use OpenFGA to control access to different sections in the portal.
Proxies like Envoy also provide a way to develop authorization filters that can be implemented using OpenFGA.
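The relationship-tuple model described above can be shown with a tiny self-contained sketch. This is not OpenFGA's API — it is a toy Zanzibar-style check over stored `(object, relation, user)` tuples, where the hypothetical `IMPLIED` map plays the role a real authorization model would:

```python
# Toy Zanzibar-style check: NOT OpenFGA's API, just an illustration of
# relationship tuples (object, relation, user) and relation implication.
tuples = {
    ("doc:readme", "viewer", "user:anne"),
    ("doc:readme", "editor", "user:bob"),
}

# Stand-in for an authorization model: being an editor implies being a viewer.
IMPLIED = {"viewer": ["editor"]}

def check(obj, relation, user):
    """True if `user` has `relation` on `obj`, directly or via implication."""
    if (obj, relation, user) in tuples:
        return True
    return any(check(obj, r, user) for r in IMPLIED.get(relation, []))

print(check("doc:readme", "viewer", "user:bob"))  # True: editor implies viewer
```

In a real deployment these tuples live in OpenFGA's persistent store and are queried at decision time, rather than held in application memory as here.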
|https://www.youtube.com/watch?v=un2KKTE25qg||email@example.com||Google Zanzibar’s approach to authorization is becoming increasingly popular, and it’s an opportunity to solve authorization at scale for the cloud native ecosystem. Most applications require fine-grained authorization, and before Google Zanzibar, nobody had designed a generic way to do it that can scale.|
Being part of the CNCF provides our project with the visibility needed to get feedback from the CNCF community, which will help us enhance the solution to meet the needs of the various interested parties.
A project sponsored by Okta/Auth0 that’s part of CNCF will give confidence to early adopters and foster an ecosystem of integrations that will benefit the cloud-native ecosystem.
|6/22/2022 23:50:33||Kured||Kured (KUbernetes REboot Daemon) is a Kubernetes daemonset that performs safe automatic node reboots when the need to do so is indicated by the package management system of the underlying OS. This is a critical solution for many Kubernetes environments when continual node OS patching is required in order to ensure secure operations.||https://github.com/weaveworks/kured||https://github.com/weaveworks/kured||https://github.com/weaveworks/kured/milestones||https://github.com/weaveworks/kured/blame/main/README.md#L411||I accept the CNCF IP Policy||I Accept||Kured is a community-driven, inclusive open source project with a mission to standardize a common operational requirement according to Kubernetes-native best practices. To date the project has continually adapted to community- and industry-driven requirements and delivers a vendor-agnostic solution.|
As Kured tries to solve the problem for everyone regardless of vendor, operating system, etc., we believe our work is beneficial for the whole Kubernetes community.
|https://github.com/rancher/system-upgrade-controller||https://github.com/weaveworks/kured/blob/main/DEVELOPMENT.md||https://github.com/weaveworks/kured/blob/main/README.firstname.lastname@example.org||The Kured project started out as a Weaveworks-directed solution. It has evolved into a widely adopted tool across many Kubernetes cluster scenarios (1.5k GitHub stars), maintained over time by engineers working for many different companies (in addition to Weaveworks: Microsoft, SUSE, Fastly, Indeed, Fujitsu, among many others). As a community project, the long-term health of the kured project would be better situated under the sponsorship of the CNCF.|
The benefits of CNCF adoption for the kured community would be the long-term project resiliency that vendor-neutral governance allows, in addition to encouraging convergence and standardization for the Kubernetes node reboot problem surface area.
tl;dr We want to keep kured boring and reliable, and to enable easier discovery by Kubernetes users seeking a standard Kubernetes node reboot management solution.
|7/15/2022 13:23:41||Carvel||Carvel provides a set of reliable, single-purpose, composable tools that aid in your application building, configuration, and deployment to Kubernetes.||https://github.com/vmware-tanzu/carvel https://github.com/vmware-tanzu/carvel-kapp-controller https://github.com/vmware-tanzu/carvel-kapp https://github.com/vmware-tanzu/carvel-ytt https://github.com/vmware-tanzu/carvel-imgpkg https://github.com/vmware-tanzu/carvel-kbld https://github.com/vmware-tanzu/carvel-vendir https://github.com/vmware-tanzu/carvel-kwt https://github.com/vmware-tanzu/carvel-secretgen-controller||https://carvel.dev||https://github.com/vmware-tanzu/carvel/blob/develop/ROADMAP.md||https://github.com/vmware-tanzu/carvel/blob/develop/CODE_OF_CONDUCT.md||I accept the CNCF IP Policy||I Accept||Carvel is a toolchain composed of multiple tools; each tool follows the UNIX philosophy and aims to do one thing well. In doing so, each tool can be used in combination with other tools from the Carvel toolchain and with other tools and projects in the CNCF ecosystem. A common pattern for Carvel users is to “pipe” various CLI tools together. For example, `ytt | kbld | kapp` or `helm template | kbld | kapp` or `ytt | kubectl` and so on. Carvel’s kapp-controller facilitates a similar workflow on-cluster. All of the Carvel tools are aligned with the cloud native computing ecosystem by aiding in your application building, configuration, and deployment to Kubernetes.||Within CNCF: Helm, kubectl, kustomize, Flux CD, Argo, Operator Lifecycle Manager|
Elsewhere: cue, kpt, CNAB
|https://github.com/vmware-tanzu/carvel/blob/develop/CONTRIBUTING.md||We share what the core responsibility of each Carvel tool is and how that overlaps with and can be used with other CNCF projects below:|
- ytt: the ytt CLI helps users manage Kubernetes configuration in an easy, flexible, and sustainable way. By blending templating and overlaying features in one language, it is successfully used to manage first-party Kubernetes configuration and to enhance and customize third-party configuration (and sometimes a combination of the two). Since ytt solely focuses on configuration building, it is used as a main or pre/post-processing step for configuration alongside deployment tools such as kubectl, Helm install via Helm's post-renderer feature (or as a step after helm template), kustomize (or even as a kustomize plugin), and as an Argo CD plugin. In all these examples, ytt helps with further customization of Kubernetes configuration that may not be easily done with existing tools. Outside of Kubernetes, we have seen ytt used to shape various other structured configurations such as Envoy configuration, Concourse pipelines, and GitHub Actions workflows.
- kapp: the kapp CLI helps manage groups of Kubernetes resources as a single unit by introducing the concept of an "app": a collection of resources with the same label. kapp includes various resource management features (separation of preview from execution of changes, automatic and custom ordering of changes, fine-grained control of how a resource is created/updated/deleted/replaced, indication of reconciliation progress, etc.) that make it appealing to use instead of kubectl apply or similar deployment mechanisms. Since kapp specifically focuses on deployment, it is easily composed with other tools such as helm template, kustomize, jsonnet, ytt, etc.
- kbld: the kbld CLI allows users to encode as YAML configuration how to build container images via different builder tools. Whether users use docker build, BuildKit, Bazel, Buildpacks (via pack build), or others, digests of built images are inserted into their Kubernetes configuration. Since kbld specifically focuses on building and digest-ifying images, it is used in a variety of existing flows such as Helm (via the post-renderer or after helm template), kubectl, kustomize, and ytt, and as an Argo CD plugin.
- imgpkg: the imgpkg CLI embraces the idea of storing arbitrary content (Kubernetes configuration, ML models, etc.) within OCI registries. It enables users to represent their assets as a graph within an OCI registry, making it possible to move such graphs between registries. imgpkg can be used with Helm to make Helm charts air-gap friendly. The ORAS project explores related ideas, but more from a registry server-side perspective.
- vendir: the vendir CLI manages directory content (on the filesystem) declaratively and commonly acts as a first step in managing content as part of a GitOps flow. It integrates with various sources such as Helm repos, GitHub releases, Mercurial repos, git repos, and OCI images. It's commonly used by GitOps practitioners.
- kapp-controller: kapp-controller blends the worlds of GitOps and local development/packaging for first-party and third-party software by providing lower-level (App CR) and higher-level (PackageInstall/Package CR) primitives for continuous configuration reconciliation. It overlaps with Argo CD and Flux CD in that its goal is to enable users to continuously manage software on the cluster; however, kapp-controller finds a different balance in its provided primitives, which we believe are more universal for software installation and distribution. It can also be made to work with Flux CD and other controllers (with tighter integration possible). It overlaps with Helm in that kapp-controller offers a way to install third-party software on a cluster, but differs in that it offers an API-first experience and delegates to a plethora of other tools (such as cue, ytt, helm template, sops, age, and other Carvel tools) to provide necessary functionality.
- secretgen-controller: secretgen-controller focuses on making it easier to manage secrets within a Kubernetes cluster. It's especially useful to GitOps practitioners who want on-cluster secret generation, secret sharing between Kubernetes namespaces, and secret content shaping (assembling secret contents from other resources). Since its functionality is exposed via CRs, it can be used with any Kubernetes deployment workflow such as kubectl, kapp, Helm, Argo CD, Flux CD, kapp-controller, etc.
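The "pipe" pattern described above can be sketched as a single shell pipeline. This is a usage sketch assuming ytt, kbld, and kapp are installed and a local `config/` directory of YAML templates exists; verify flags against each tool's `--help`:

```shell
# 1. ytt renders the YAML templates in config/,
# 2. kbld resolves image references in the stream to immutable digests,
# 3. kapp applies the result as one "app" with preview and change ordering.
ytt -f config/ | kbld -f - | kapp deploy -a my-app -f - --yes
```

The same stream could instead start from `helm template` or end in `kubectl apply -f -`, which is what makes the tools composable.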
|https://youtu.be/qwKjC4mRdVM||email@example.com||We would like to contribute Carvel to the CNCF in order to provide a vendor-neutral environment for the project, as well as to gain guidance from the CNCF as Carvel continues to grow as an open source project. We hope that this will encourage others outside of VMware to contribute to the project and become maintainers, supporting the project’s growth within the open source ecosystem. Additionally, we hope this donation will open up more opportunities to collaborate with other open source projects within the CNCF.||https://github.com/vmware-tanzu/carvel/blob/develop/MAINTAINERS.md||https://github.com/vmware-tanzu/carvel/blob/develop/logos/CarvelLogo.png|
|7/19/2022 4:48:06||Unikraft||Unikernel Build and Deployment Toolkit||https://github.com/unikraft||https://unikraft.org/||https://unikraft.org/community/roadmap||https://unikraft.org/docs/contributing/code-of-conduct/||I accept the CNCF IP Policy||I Accept||Unikraft would fit at the bottom of the ecosystem, providing highly specialized, cloud-ready stacks that are compatible with standard application and orchestration frameworks.||https://osv.io/, https://mirage.io/||https://unikraft.org/docs/contributing/||There should not be any overlap; Unikraft should be complementary to the ecosystem||https://firstname.lastname@example.org||Unikraft can act as a CNCF project to bring efficiency and savings to cloud deployments, all the while integrating seamlessly with CNCF projects such as Kubernetes. By joining, we aim to grow our OSS community, accelerate bottom-up adoption, and gather feedback about missing features. Note that Unikraft is a 4-year-old Linux Foundation project with a BSD-3 license.||https://github.com/unikraft/governance/tree/main/teams||https://github.com/unikraft/docs/tree/main/static/assets/imgs|
|7/26/2022 7:06:59||Lima||Lima provides Linux virtual machines for running containerd and Kubernetes on macOS, with automatic host file system sharing and port forwarding.|
Aside from macOS, Lima is known to work on Linux and NetBSD hosts as well.
Lima has already received 8.7k stars on GitHub.
|https://github.com/lima-vm/lima||https://github.com/lima-vm/lima||https://github.com/lima-vm/lima/blob/master/ROADMAP.md||https://github.com/lima-vm/.github/blob/main/CODE_OF_CONDUCT.md||I accept the CNCF IP Policy||I Accept||Lima helps in developing cloud native apps by allowing containerd (including nerdctl, a Docker-compatible CLI) and Kubernetes to run on laptops.||Multipass, WSL2, Vagrant, docker-machine||https://github.com/lima-vm/.github/blob/main/CONTRIBUTING.md||The scope of Lima overlaps with minikube (a Kubernetes SIG Cluster Lifecycle project) for running Kubernetes, but Lima also focuses on seamlessly running non-Kubernetes applications, including containerd/nerdctl.||https://email@example.com||We are seeking a vendor-neutral home for the project, to take ownership of the project assets, and also to provide legal protection, if that ever becomes necessary. We are also interested in a refresh of our logo and getting a #lima channel on the CNCF Slack instance.||https://github.com/lima-vm/lima/blob/master/MAINTAINERS.md||https://github.com/lima-vm/lima/blob/master/docs/images/lima-logo-01.svg|
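A typical Lima session on a macOS host might look like the following usage sketch (assumes `limactl` is installed, e.g. via Homebrew; the image tag is illustrative):

```shell
# Start the default VM; the image is downloaded on first run, and host
# file sharing plus port forwarding are set up automatically.
limactl start default

# `lima` is shorthand for `limactl shell default`; run a container
# inside the VM's containerd via nerdctl.
lima nerdctl run --rm alpine:3 echo "hello from containerd"
```

From there, Kubernetes can be run inside the same VM, which is the overlap with minikube noted above.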
Merbridge is a fully open-sourced cloud-native project to accelerate popular service meshes such as Istio, Linkerd, and Kuma by replacing iptables rules with eBPF, which allows transporting data directly from inbound sockets to outbound sockets, effectively shortening the datapath between sidecars and services.
Merbridge uses eBPF to intercept outbound and inbound traffic and provides a feedback mechanism to enable eBPF to get the IP address of the current namespace and redirect the Envoy connection to a pre-defined port. After applying Merbridge, outbound/inbound traffic can skip many filter steps, effectively improving network performance.
With Merbridge, developers can use eBPF to accelerate the Mesh without any additional operations or code changes.
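As a rough illustration of the technique (not Merbridge's actual code), an eBPF program attached at the cgroup connect4 hook can rewrite an outbound connection's destination so traffic reaches the local sidecar directly instead of traversing iptables. The port and all names below are hypothetical:

```c
/* Illustrative sketch only: not Merbridge's implementation.
 * Compiled with clang -target bpf; needs libbpf headers. */
#include <linux/bpf.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define SIDECAR_PORT 15001  /* hypothetical sidecar outbound port */

SEC("cgroup/connect4")
int redirect_to_sidecar(struct bpf_sock_addr *ctx)
{
    if (ctx->protocol != IPPROTO_TCP)
        return 1;  /* allow non-TCP traffic untouched */

    /* Rewrite the destination to the sidecar on localhost, skipping
     * the iptables REDIRECT chain a mesh would normally rely on. */
    ctx->user_ip4  = bpf_htonl(0x7f000001);  /* 127.0.0.1 */
    ctx->user_port = bpf_htons(SIDECAR_PORT);
    return 1;  /* allow the (rewritten) connection */
}

char LICENSE[] SEC("license") = "GPL";
```

A real implementation also has to record the original destination so the sidecar can recover it, which is part of what Merbridge's feedback mechanism handles.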
|I accept the CNCF IP Policy||I Accept|
Merbridge is designed for service mesh products, some of which, such as Linkerd and Istio, belong to the CNCF. Merbridge itself is also built on the Kubernetes system. It uses CNI to implement a variety of capabilities (such as obtaining the current Pod IP via eBPF) to improve service mesh performance without any intrusive modification.
Merbridge is inspired by Cilium and provides further improvements in network acceleration. It combines service mesh features and uses eBPF to replace iptables capabilities, boosting existing service mesh products. For details see [Merbridge and Cilium](https://merbridge.io/blog/2022/04/23/merbridge-and-cilium/).
As for istio-tcpip-bypass, it mainly solves the network acceleration problem for Istio and does not implement iptables-like capabilities.
Presentations: https://merbridge.io/blog/2022/03/29/solo-io-livestream/ and IstioCon 2022: https://events.istio.io/istiocon-2022/sessions/ebpf-iptables/
Merbridge is intended to replace iptables rules with eBPF to help existing service mesh products achieve better network performance. At present, the most representative service mesh products, Istio and Linkerd, are already known as CNCF projects. Merbridge has efficiently supported both Istio and Linkerd from the very beginning, and now also supports Kuma. Merbridge is a good supplement to the CNCF, and the CNCF ecosystem can further expand Merbridge’s profile, helping more users enjoy service meshes accelerated by Merbridge.
DevSpace is an open-source developer tool for Kubernetes that lets you develop and deploy cloud-native software faster.
It simplifies the entire development process for cloud native applications by automating the building, deployment, and running of your applications within a Kubernetes cluster. Its features include a rich UI and the ability to develop locally and see the results immediately in the target Kubernetes cluster, all while providing a consistent environment.
|I accept the CNCF IP Policy||I Accept|
The DevSpace project is a widely popular open source tool (3K+ stars) and a truly cloud native development tool, enabling developers to create and develop cloud native applications on modern platforms such as Kubernetes.
We believe that not only is it completely aligned with the cloud native ecosystem, it can also help enable new and existing projects to be developed within the ecosystem.
The DevSpace project both aligns with and enables all CNCF projects that are being actively developed to run on Kubernetes.
We would like to contribute DevSpace to the CNCF because we believe that this is a project that can benefit the entire cloud native ecosystem. DevSpace has the capability to improve the development experience of all creators within the ecosystem. Being part of the CNCF would validate the project as a tool for all cloud native space developers, whilst allowing the project to align with the values of the CNCF.
Additionally, having DevSpace as a CNCF project will allow further growth of the project through greater visibility, which will in turn enable growth in development and in the DevSpace community.
|8/11/2022 22:42:13||Serverless Devs|
THIS IS A RESUBMISSION AFTER FEEDBACK FROM THE Serverless Working Group:
We received strong support from the Serverless Working Group after Serverless Devs was introduced and demoed. The group is very interested in our tool chain with no vendor lock-in and, as a next step, expects to offer developers from the group to participate in building the tool.
Serverless Devs is a cloud native, open source tool for full-lifecycle management of serverless applications. Through its component model, developers can work with serverless architectures as easily as using a mobile phone. With Serverless Devs, developers can quickly try and use the FaaS services of many cloud vendors, including AWS Lambda, Alibaba Cloud Function Compute, Baidu Smart Cloud Function Compute, Huawei Cloud function compute and workflow, Tencent Cloud Function, etc., as well as open source FaaS platforms such as OpenFunction. Based on the Serverless Devs Model, developers get a consistent experience for initializing, packaging, building, deploying, and operating serverless applications across multiple FaaS platforms. Serverless Devs will continue to focus on the happiness of serverless developers and to provide efficient, concise, and open developer tools for them.
1: 2022/07/21: Community contributor Chelsea introduced Serverless Devs to the Serverless Working Group, which said it would like to see a demo. [https://www.youtube.com/watch?v=LX-w6hwzd2o 20:52-41:35]
2: 2022/08/18: Community contributor Chelsea showed a Serverless Devs demo to the Serverless Working Group, which expressed its willingness to support the project and build Serverless Devs together. [https://www.youtube.com/watch?v=tpwFU700Xko 11:18-35:00]
Serverless Devs introduction slides: https://drive.google.com/file/d/18xi0v638tdD5r6XDtH8229QH2gg4bcB8/view?usp=sharing
|I accept the CNCF IP Policy||I Accept|
Serverless is itself a field, or direction, within cloud native, and the Serverless Devs tool was born for serverless, serves serverless, and grows with serverless. Whether viewed from its community partnerships, its mission, or the form it takes, Serverless Devs can be considered part of cloud native. To go further: Serverless Devs was born from cloud native, grew up in cloud native, and serves cloud native projects. Its current relationships with the cloud native community include, but are not limited to:
Support for components conforming to the CloudEvents specification and the Serverless Workflow specification;
Support for components related to many cloud native projects, including but not limited to OpenFunction, KubeVela, Kubernetes, Buildpacks, etc.;
Serverless Devs is a developer tool for the serverless field. By integrating with more cloud native projects, it can offer serverless developers more flexible and complete application lifecycle management, and give multiple open source FaaS products a standardized, friendly tool-side experience.
Through the Serverless Devs developer tools, many products in the cloud native ecosystem can be linked together. For example, connecting projects such as KubeVela, Kubernetes, and Buildpacks with serverless architectures makes it easier and more convenient for serverless developers to get started and to quickly build serverless applications.
Serverless Devs can serve as a common FaaS tool chain that supports these projects at the tool level (for example, the OpenFunction project), promoting a consistent user experience across serverless architectures and improving the tooling and user experience of more open source FaaS products.
Points of fit with existing open source projects in the cloud native ecosystem:
OpenFunction: OpenFunction is an open source FaaS platform, which has now entered the CNCF Sandbox. As an open source serverless tool chain, Serverless Devs has been recognized by the OpenFunction community, and with the community's support, the integration between OpenFunction and Serverless Devs has been completed. Going forward, Serverless Devs will serve as OpenFunction's tool chain, providing tool-chain support for OpenFunction developers;
KubeVela: KubeVela is an open source, out-of-the-box application delivery and management platform for modern microservice architectures, which has now entered the CNCF Sandbox. Serverless Devs relies on KubeVela to achieve serverless multi-environment management. Through the resource management provided by KubeVela, Serverless Devs can offer developers a richer and more complete serverless experience, compensating for FaaS platforms that lack the ability to manage multiple environments;
Kubernetes: In order to improve integration between Serverless Devs and other cloud native projects, Serverless Devs has launched the Serverless Devs Kubernetes Controller component. This Kubernetes-oriented control-plane component makes it easier for users in the container ecosystem to enjoy serverless services;
Buildpacks: Serverless Devs supports container image packaging and deployment to some mainstream FaaS platforms; the packaging process supports image building via Buildpacks;
CloudEvents: Serverless Devs currently supports, at the component level, some event products that conform to the CloudEvents specification, such as EventBridge from some cloud vendors;
Serverless Workflow: Serverless Devs currently supports, at the component level, some workflow projects that conform to the Serverless Workflow specification, including but not limited to the creation and deployment of workflows.
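The lifecycle described above (initialize, build, deploy) runs through the `s` CLI. The following is a usage sketch; the template name is illustrative, and in practice `s init` can be run interactively to pick one:

```shell
# Install the Serverless Devs CLI from npm
npm install -g @serverless-devs/s

# Scaffold an application from a template (template name is illustrative)
s init start-fc-http-nodejs14 -d my-app
cd my-app

# Deploy to the FaaS platform configured in s.yaml
s deploy
```

Because the platform specifics live in `s.yaml` components, the same commands work across the different FaaS vendors listed above.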
[Necessity of Serverless Tool Construction]: With the continued development of cloud computing and the growing popularity of cloud native, serverless architectures have shown strong momentum in recent years, attracting the attention of more developers on the one hand while market adoption rises steadily on the other. However, the serverless services offered by cloud vendors and open source frameworks differ in capability scope, product form, and user experience, which leads to a serious, non-negligible vendor lock-in problem for serverless architectures. The first "Cloud Native User Survey Report", released by the China Academy of Information and Communications Technology in October 2020, shows that about 24% of users directly consider tool-chain maturity when adopting serverless architectures, and more than 50% care about it indirectly. This demonstrates the demand and desire among serverless developers for a better tool chain. Yet there are still few tool projects for developers, especially in the serverless field, so we hope to contribute to this area through Serverless Devs, a tool for full-lifecycle management of serverless applications without vendor lock-in, so that more developers can not only use serverless architectures but use them faster, better, and more conveniently.
[Make Serverless more accessible to the public]: We hope that through the CNCF we can gain more cooperation and innovation opportunities in the serverless field, so that the Serverless Devs community can continue to provide developers with more convenient and well-designed serverless application lifecycle management capabilities and a delightful, creative serverless development experience, allowing serverless technology to benefit the public better and faster;
[Building a long-term, stable, and creative serverless developer ecosystem]: At present, the developer community in the serverless field is relatively small. We hope that through the CNCF's appeal we can build an active serverless developer community around Serverless Devs and provide a platform where more serverless enthusiasts can discuss, learn, and innovate. We also hope to gain more mature and richer community-operation experience and resources through the CNCF, further improving the serverless developer community ecosystem;
[Leading more Chinese developers to participate in building the serverless community ecosystem]: We hope that through the CNCF we can lead more Chinese developers to work with serverless architectures, and we hope to work together with the CNCF to hold CNCF Serverless offline meetups and events in China.