1
Timestamp | Project Name | Project Description | Code repository (URL) | Website (URL) | Project Roadmap (URL) | Code of conduct (URL) | I understand that if accepted, the project will be required to follow the CNCF IP Policy (https://github.com/cncf/foundation/blob/master/charter.md#11-ip-policy) | I understand that I am donating all project trademarks and accounts to the CNCF | Please explain how your project is aligned with the cloud native computing ecosystem. | Please list similar projects in the CNCF or elsewhere | Guidelines/help for project contributors (URL) | Explanation of alignment/overlap with existing CNCF projects (optional) | Existing project overview presentation (optional) | Email Address | Why do you want to contribute your project to the CNCF? What would you like to get out of being part of the CNCF? | Maintainers file (optional) | Link to project artwork: (optional)
2
1/30/2022 6:19:44 | Clusterpedia | The name Clusterpedia is inspired by Wikipedia. It is an encyclopedia of multi-cluster resources, built to synchronize, search for, and simply control resources across multiple clusters.

Clusterpedia synchronizes resources across multiple clusters and, while staying compatible with the Kubernetes OpenAPI, provides more powerful search features to help you find any multi-cluster resource quickly and easily.

It already supports synchronizing and searching custom resources.
https://github.com/clusterpedia-io/clusterpedia | https://clusterpedia.io/ | https://github.com/clusterpedia-io/clusterpedia#roadmap | https://github.com/clusterpedia-io/clusterpedia/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | With the development of multi-cluster technology, the community now has Cluster API for cluster lifecycle management, and Karmada, Clusternet, and OCM for multi-cloud application management.
Clusterpedia is built on top of these multi-cloud management platforms to provide users with complex retrieval of resources in multi-cluster scenarios.

Clusterpedia is also compatible with the Kubernetes OpenAPI, allowing users to work with it directly through existing tools (kubectl, client-go); see the sketch at the end of this entry.
None | https://github.com/clusterpedia-io/clusterpedia/blob/main/CONTRIBUTING.md | None | caiwei95@hotmail.com | As multi-cluster deployments develop, it becomes very practical to perform complex retrieval of resources across multiple clusters.

I hope more people will find and join this project so that together we can make Clusterpedia more powerful, better solve the problem of collecting and searching resources in multiple clusters, and help the people who need it.

I believe that joining the CNCF, an open, vendor-neutral organization, will give Clusterpedia a better home and a better future.
https://github.com/clusterpedia-io/clusterpedia/blob/main/docs/images/clusterpedia.png
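Because Clusterpedia keeps Kubernetes OpenAPI compatibility, stock clients need no changes. Below is a minimal client-go sketch of the idea, assuming a kubeconfig that points at Clusterpedia's aggregated API; the search.clusterpedia.io label key is taken from the project's documentation and should be treated as an assumption here.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig whose server URL points at the
	// Clusterpedia aggregated API instead of a single member cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/clusterpedia-kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The search label (an assumption based on project docs) restricts
	// the query to specific member clusters; an ordinary List call
	// otherwise works unchanged.
	deployments, err := client.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "search.clusterpedia.io/clusters in (cluster-1,cluster-2)",
	})
	if err != nil {
		panic(err)
	}
	for _, d := range deployments.Items {
		fmt.Println(d.Name)
	}
}
```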
3
2/1/2022 8:14:04 | Turnbuckle | Provides an extensible constraint policy model, including inter-workload constraints and scheduling based on those constraints. A patch to the descheduler project is also provided to enable workload eviction when constraints are violated. | https://github.com/ciena/turnbuckle | https://github.com/ciena/turnbuckle | https://github.com/ciena/turnbuckle/blob/main/ROADMAP.md | https://github.com/ciena/turnbuckle/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | This project brings extensible, constraint-policy-based scheduling to the cloud native ecosystem, including cross-workload constraints, while leveraging existing parts of the cloud native community. | I am not aware of similar projects in the CNCF. | https://github.com/ciena/turnbuckle/blob/main/CONTRIBUTING.md | This project leverages the scheduler projects, including the scheduler-plugins project. Additionally, it provides a patch to the descheduler project to enable pods to be evicted when constraints are violated. Lastly, it plans to integrate with sig-multicluster (KubeFed) to enable constraint processing and scheduling across clusters. | https://github.com/ciena/turnbuckle/blob/main/README.md | dbainbri@ciena.com | We would like to contribute this project to the CNCF to get broader community input on its concepts and direction, including ideas (and implementations) for inter-workload and cross-cluster constraints. Additionally, this project brings the capability to program (configure) underlay networks to provide on-demand capability.

Specifically, what we are looking to gain from contributing to the CNCF is greater visibility and feedback from the cloud native community.
N/A | https://github.com/ciena/turnbuckle/tree/main/assets
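To make the constraint-policy idea in the Turnbuckle entry concrete, here is a hedged sketch of what an inter-workload constraint resource could look like as Go CRD types. Every field name below is hypothetical and illustrative, not Turnbuckle's actual API.

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ConstraintPolicy is a hypothetical shape for an inter-workload
// constraint resource; field names are illustrative only.
type ConstraintPolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ConstraintPolicySpec `json:"spec"`
}

type ConstraintPolicySpec struct {
	// Source and Destination select the two workloads the rule binds.
	Source      WorkloadSelector `json:"source"`
	Destination WorkloadSelector `json:"destination"`
	// Rules such as "latency < 10ms", evaluated between the workloads;
	// a violation could drive eviction via the descheduler patch.
	Rules []ConstraintRule `json:"rules"`
}

type WorkloadSelector struct {
	Labels map[string]string `json:"labels"`
}

type ConstraintRule struct {
	Name    string `json:"name"`    // e.g. "latency"
	Request string `json:"request"` // desired value, e.g. "10ms"
	Limit   string `json:"limit"`   // violation threshold, e.g. "50ms"
}
```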
4
2/1/2022 18:07:16 | OpenCost | OpenCost is an open source cost allocation model that provides visibility into current and historical Kubernetes workload spend and resource consumption. This provides cost transparency in containerized environments with multiple tenants and allows teams to view costs across all aggregation levels, including namespace, label, and controller, all the way down to the individual pod and container. With this visibility, organizations can provide unified cost monitoring, showback, real-time alerting, chargeback, and more across projects and teams. This project is the core engine for Kubecost, which is used by thousands of enterprises to monitor billions in total spend in environments with 10,000+ nodes.

Summary of features enabled by this cost model:
- Real-time cost allocation by Kubernetes service, deployment, namespace, label, statefulset, daemonset, pod, and container
- Dynamic asset pricing enabled by integrations with AWS, Azure, and GCP billing APIs
- Supports on-prem k8s clusters with custom pricing sheets
- Allocation for in-cluster resources like CPU, GPU, memory, load balancers, network costs, and persistent volumes.
- Easily export pricing data to Prometheus and Alertmanager
https://github.com/kubecost/cost-model | https://github.com/kubecost/cost-model | https://github.com/kubecost/cost-model/blob/develop/ROADMAP.md | https://github.com/kubecost/cost-model/blob/develop/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | As Kubernetes becomes the central place for teams to deploy on and share underlying infrastructure, visibility into cost and resource usage is becoming business critical. Our project provides this needed visibility into Kubernetes cost and integrates deeply with other cloud native projects such as Prometheus, Cortex, Thanos, and others.
None that we are aware of. | https://github.com/kubecost/cost-model/blob/develop/CONTRIBUTING.md | This project is purpose-built for measuring cost in Kubernetes clusters. It is also designed to integrate with Prometheus and other PromQL solutions, and it is being used by hundreds of enterprises with Thanos, Cortex, and other durable storage solutions. Additionally, we allow users of Argo Workflows to monitor cost and users of Alertmanager to receive cost alerts. | N/A | oss@kubecost.com | We believe the growing adoption and scale of cloud native and Kubernetes in the enterprise creates a need for more cost visibility across the community. Our hope is that by contributing to the community we can create more awareness, help standardize cost measurement, and create a more collaborative environment around cost and resource optimization.
N/A | N/A
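The core of the allocation model described in the OpenCost entry is simple in miniature: cost is measured usage multiplied by a unit price, rolled up to an aggregation level such as namespace. A toy Go sketch follows, with made-up prices standing in for data the real engine pulls from cloud billing APIs or custom pricing sheets.

```go
package main

import "fmt"

// PodUsage is a simplified view of measured consumption for one pod.
type PodUsage struct {
	Namespace string
	CPUHours  float64 // CPU core-hours used
	RAMGBH    float64 // GB-hours of memory used
}

// Hypothetical unit prices; the real model pulls these from AWS,
// Azure, or GCP billing APIs, or from on-prem pricing sheets.
const (
	pricePerCPUHour = 0.031
	pricePerRAMGBH  = 0.004
)

// allocateByNamespace rolls pod-level cost up to namespaces,
// mirroring the aggregation idea in miniature.
func allocateByNamespace(pods []PodUsage) map[string]float64 {
	totals := map[string]float64{}
	for _, p := range pods {
		totals[p.Namespace] += p.CPUHours*pricePerCPUHour + p.RAMGBH*pricePerRAMGBH
	}
	return totals
}

func main() {
	pods := []PodUsage{
		{Namespace: "team-a", CPUHours: 120, RAMGBH: 480},
		{Namespace: "team-b", CPUHours: 30, RAMGBH: 64},
	}
	for ns, cost := range allocateByNamespace(pods) {
		fmt.Printf("%s: $%.2f\n", ns, cost)
	}
}
```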
5
5/5/2022 19:31:53 | Aeraki Mesh | Aeraki Mesh allows you to manage any layer-7 traffic in a service mesh.

While service meshes have become important infrastructure for microservices, many service mesh implementations mainly focus on HTTP protocols and treat other protocols as plain TCP traffic. This makes it very hard for users to manage the traffic of other widely used layer-7 protocols in microservices. For example, a microservices application usually involves the following protocols:
* RPC: HTTP, gRPC, Thrift, Dubbo, Proprietary RPC Protocol …
* Messaging: Kafka, RabbitMQ …
* Cache: Redis, Memcached …
* Database: MySQL, PostgreSQL, MongoDB …

HTTP is just a part of the problem we need to solve. Aeraki Mesh was created to provide a non-intrusive, highly extendable way to manage any layer-7 traffic in a service mesh.

Aeraki [Air-rah-ki] is the Greek word for 'breeze'. We hope this breeze can help Kubernetes and Istio sail further in the cloud native adventure.

Aeraki Mesh is a vendor-neutral open source project first open-sourced by Tencent Cloud, the second largest cloud service provider in China, and contributed to and used by a dozen companies/organizations, including:
* Baidu: the world's second largest search engine
* Tencent Music: one of the leading online music entertainment platforms in China
* Yangshiping: the online video streaming service of China Central TV, which protected the streaming services for the 2022 Winter Olympic Games
* And others: Alauda Cloud, Tencent iGame, Yonghui supermarket, etc.

Tencent Cloud, Baidu, Tencent Music and a number of other companies have committed to continue working on Aeraki Mesh going forward, since they have used or plan to use Aeraki Mesh in their products.

More about Aeraki Mesh can be found here:
https://docs.google.com/presentation/d/1TLkGIboRFYdbYkvMRFQZG2P8ZBjb_wNwL-1Fh_umoQI/edit?usp=sharing
https://github.com/aeraki-mesh | https://www.aeraki.net/ | https://www.aeraki.net/blog/2022/roadmap/ | https://www.aeraki.net/community/#code-of-conduct | I accept the CNCF IP Policy | I Accept | Aeraki Mesh focuses on supporting non-HTTP open-source and proprietary/private protocols in service meshes. | Most service meshes mainly focus on the HTTP area (Linkerd, Envoy, Istio, etc.). As far as I know, no similar projects work on non-HTTP/private protocols. | https://www.aeraki.net/community/ | Aeraki Mesh is complementary to the CNCF's existing projects and the service mesh ecosystem. It extends current service mesh projects with non-HTTP protocol support. The data plane (MetaProtocol Proxy) is built on top of Envoy (a CNCF project). The control plane (Aeraki) currently works with Istio (which is proposed to be a CNCF project), and we plan to explore the possibility of stretching to other service meshes. | https://docs.google.com/presentation/d/1TLkGIboRFYdbYkvMRFQZG2P8ZBjb_wNwL-1Fh_umoQI/edit?usp=sharing | zhaohuabing@gmail.com | The Aeraki Mesh community wants to donate Aeraki Mesh to the CNCF primarily to ensure a neutral, relevant home for the project. We hope this will further Aeraki Mesh's goal of being an open, welcoming community for everyone, thereby fostering collaboration with a broader community. As Aeraki Mesh continues to mature, we hope to graduate from the Sandbox and benefit from the CNCF's services for projects.

People may ask: why is Aeraki Mesh not just a part of Istio or Envoy? There are a couple of reasons:

1. Donating to the CNCF allows us to get in touch with a broader cloud native ecosystem and reach more potential contributors/users of Aeraki Mesh, which we value the most.

2. Aeraki Mesh is a fully functional service mesh solution for non-HTTP protocols, including a control plane (Aeraki), a data plane (MetaProtocol Proxy), and third-party service registry integration (x2Istio). So it can't be just an SDK/component of a control plane or data plane project.

3. Non-HTTP protocol support is not in the scope of Istio. The Istio project has clearly stated that it will not put effort into supporting Dubbo, Thrift, and other proprietary/private protocols (discussion can be found here). Even though Envoy does support some non-HTTP protocols, they are not its first priority, and progress can't keep up with the urgent needs of Aeraki Mesh's users.

4. Aeraki Mesh currently works with Istio. In theory, it could work with other service meshes that also run Envoy as their data plane proxy, such as OSM. This is not the top priority for Aeraki Mesh right now, but we plan to explore that potential after the urgent/important features are done. As Aeraki Mesh may work with multiple other service mesh implementations, it's better off as a standalone project.

5. Envoy is already a CNCF project, and Istio is currently proposed to be one. As a project so close to these two, it's natural to propose Aeraki Mesh as a CNCF project as well.
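Conceptually, Aeraki's extension point is a codec: teach the proxy to parse a protocol's wire format into generic metadata, and the mesh handles routing, rate limiting, and observability uniformly. The real codec API lives in Envoy's MetaProtocol Proxy (C++); the Go interface below is purely an illustrative rendering with hypothetical names.

```go
package metaprotocol

// Codec is an illustrative Go rendering of the idea behind
// MetaProtocol Proxy's extension point: implement encode/decode for a
// custom layer-7 protocol, and the generic proxy machinery does the
// rest. Names here are hypothetical, not the actual (C++) API.
type Codec interface {
	// Decode parses bytes off the wire into protocol metadata
	// (e.g. service name, method, request ID) the proxy can route on.
	Decode(data []byte) (*Metadata, error)
	// Encode writes a (possibly mutated) message back to the wire.
	Encode(meta *Metadata, payload []byte) ([]byte, error)
}

// Metadata is the generic request description shared by all protocols,
// which is what makes uniform traffic management possible.
type Metadata struct {
	Service   string
	Method    string
	RequestID uint64
	Headers   map[string]string
}
```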
6
2/10/2022 18:30:18 | Curve | Curve is a distributed storage system designed and developed independently by NetEase, featuring high performance, easy operation, and a cloud native design. Curve is composed of CurveBS (Curve Block Storage) and CurveFS (Curve FileSystem). CurveBS supports snapshot, clone, and recovery, and serves virtual machines via QEMU and physical machines via NBD. CurveFS supports POSIX based on FUSE.

Curve is widely used in NetEase and is also being tested by other users. Curve provides storage services for the core business of YouDao, YanXuan, Music, Lofter, and YouXi within NetEase. In the past two years, a single Curve cluster has stored tens of thousands of volumes, with storage capacity at the PB scale.
https://github.com/opencurve/curve | http://www.opencurve.io/ | https://github.com/opencurve/curve/wiki/Roadmap | https://github.com/opencurve/curve/blob/master/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | Cloud native storage is very important to the development of cloud native technology, and the storage projects in the CNCF are not performant and convenient enough to support
middleware, storage-compute separation, and other scenarios. Curve joining will help the CNCF accelerate the evolution of cloud native storage systems.

The Curve project was originally designed to be cloud native, and it is currently available to users through Kubernetes within NetEase. In the future, the Curve team will develop more convenient operation and maintenance tools, allowing users to more easily use and operate Curve clusters through Kubernetes in any cloud environment. Curve will also support more cloud native features, including full lifecycle management (storage backup, failure, recovery), deep insights (metrics, alerts, log processing, and workload analysis), and auto-piloting (horizontal/vertical scaling, automatic configuration tuning, anomaly detection, and schedule tuning).
Ceph | https://github.com/opencurve/curve/blob/master/CONTRIBUTING.md | There is currently no project with the same positioning as Curve. | https://github.com/opencurve/curve-meetup-slides/tree/main/2021 | storage_mgm@163.com | Curve provides high-performance block storage and file storage. It hopes to be used as storage for middleware applications such as MySQL, and to promote the separation of storage and computing for more middleware systems that require performance.

The CNCF is an open source community. We hope to create a neutral, active, standardized cloud native storage project, and the CNCF can accelerate the community development of Curve to support more applications.
7
2/15/2022 13:18:25 | pallet | This project introduces into the Kubernetes environment the ability to identify a set of pods and delay their scheduling until a trigger is fired. Additionally, while the set of pods is delayed, a scheduling plan can be created such that node selection considers all pods in the set when making selections; thus, plans that optimize across the whole set of pods can be leveraged. | https://github.com/ciena/pallet | https://github.com/ciena/pallet | https://github.com/ciena/pallet/blob/main/ROADMAP.md | https://github.com/ciena/pallet/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | This project extends the capability of Kubernetes, using standard mechanisms, to enable the optimized scheduling of a set of pods. We believe that enabling optimized scheduling of a set of pods, as opposed to pod-by-pod scheduling, can provide better utilization of resources both across a single cluster and across multiple clusters (future). | Unknown | https://github.com/ciena/pallet/blob/main/CONTRIBUTING.md | This project aligns with another proposed sandbox project (turnbuckle). The turnbuckle project allows inter-workload constraints to be specified; this project enables planners to be constructed that leverage those cross-workload constraints to optimize the placement (scheduling) of a set of pods as a whole. | dbainbri@ciena.com | Exposure and input from the CNCF community. Additionally, it would be interesting to understand whether the community would like to continue this feature (scheduling a set of pods) as an independent "extension" to Kubernetes or whether some of these concepts should be implemented as features of the default scheduler. Lastly, contributions from the community are welcome. | https://github.com/ciena/pallet/blob/main/assets/pallet_64x64_r_bg.png
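The gist of the pallet entry, reduced to plain Go: hold a set of pods back until a trigger fires, then plan node assignments for the whole group at once rather than pod by pod. This sketch is illustrative control flow only, not pallet's implementation (which plugs into the Kubernetes scheduling machinery).

```go
package main

import "fmt"

// Pod and Node are simplified stand-ins for the Kubernetes objects.
type Pod struct {
	Name   string
	CPUReq int
}
type Node struct {
	Name    string
	CPUFree int
}

// planGroup assigns every pod in the set at once (first-fit here), so
// the planner sees the whole group rather than one pod at a time. A
// real planner could optimize placement across the entire set.
func planGroup(pods []Pod, nodes []Node) (map[string]string, bool) {
	plan := map[string]string{}
	for _, p := range pods {
		placed := false
		for i := range nodes {
			if nodes[i].CPUFree >= p.CPUReq {
				plan[p.Name] = nodes[i].Name
				nodes[i].CPUFree -= p.CPUReq
				placed = true
				break
			}
		}
		if !placed {
			return nil, false // nothing is scheduled piecemeal
		}
	}
	return plan, true
}

func main() {
	group := []Pod{{"web", 2}, {"cache", 1}, {"db", 3}}
	nodes := []Node{{"node-a", 4}, {"node-b", 3}}

	// The trigger (a channel here) models the delayed scheduling:
	// no pod in the group is released until the trigger fires.
	trigger := make(chan struct{}, 1)
	trigger <- struct{}{}
	<-trigger

	if plan, ok := planGroup(group, nodes); ok {
		fmt.Println("release group with plan:", plan)
	} else {
		fmt.Println("insufficient capacity; keep group gated")
	}
}
```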
8
2/18/2022 12:37:06 | OpenFeature | OpenFeature is an open standard for feature flag management, created to support a robust feature flag ecosystem using cloud native technologies. OpenFeature will provide a unified API and SDK, and a developer-first, cloud native implementation, with extensibility for open source and commercial offerings. | https://github.com/open-feature | https://open-feature.github.io/ | https://github.com/orgs/open-feature/projects/1 | https://github.com/open-feature/.github/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | The CNCF's mission to make cloud native computing ubiquitous aligns with our vision for OpenFeature. By standardizing a feature flag SDK and providing a cloud native implementation, OpenFeature will help enable more organizations to release higher-quality software while reducing risk. | There are no similar projects in the CNCF, but there are a number of independent open source projects. The most popular open source feature flag projects are Flagsmith and Unleash. While both are great tools, they don't utilize cloud native technologies such as OpenTelemetry, requiring each vendor to build their own telemetry. This increases the complexity of building and maintaining SDKs in every major language. It also means similar telemetry data may be collected multiple times by different tools, increasing complexity and application overhead. | https://open-feature.github.io/home/participate | OpenFeature would align well with OpenTelemetry, OpenMetrics, and CloudEvents. Leveraging these projects within OpenFeature would provide enhanced observability and interoperability within cloud native environments. | openfeature@dynatrace.com | We would like to contribute OpenFeature to the CNCF because it would fill a gap within the existing ecosystem and would benefit from the support provided by the CNCF community. A number of organizations have expressed interest in collaborating, and it's important that the project lives under an independent foundation. Being a part of the CNCF will provide additional credibility to the project, help attract additional contributors, and expedite adoption. Last but not least, the project heavily relies on CNCF projects (e.g. Kubernetes, Helm, etcd, OpenTelemetry). Being a part of the foundation would help us collaborate with these projects and, possibly, facilitate adoption of feature flags within the CNCF ecosystem. | https://github.com/open-feature/community/blob/main/MAINTAINERS.md
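The unified-API idea in the OpenFeature entry reads clearly in code: application call sites depend on one client interface, and vendor backends plug in behind a provider seam. The names below are illustrative inventions, not the actual OpenFeature SDK surface.

```go
package main

import "fmt"

// Provider is the vendor-facing seam: any flag backend, open source or
// commercial, implements it. Names are illustrative only.
type Provider interface {
	BoolValue(flag string, defaultVal bool, ctx map[string]string) bool
}

// envProvider is a trivial in-memory provider for demonstration.
type envProvider struct{ flags map[string]bool }

func (p envProvider) BoolValue(flag string, def bool, _ map[string]string) bool {
	if v, ok := p.flags[flag]; ok {
		return v
	}
	return def
}

// Client is what application code depends on; swapping vendors means
// swapping the Provider, not rewriting call sites.
type Client struct{ provider Provider }

func (c Client) Bool(flag string, def bool, ctx map[string]string) bool {
	return c.provider.BoolValue(flag, def, ctx)
}

func main() {
	client := Client{provider: envProvider{flags: map[string]bool{"new-checkout": true}}}
	if client.Bool("new-checkout", false, map[string]string{"user": "42"}) {
		fmt.Println("serving new checkout flow")
	}
}
```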
9
2/23/2022 6:41:11 | kubewarden | Kubewarden is a Kubernetes policy engine that leverages WebAssembly to define, distribute, and run its policies. Policy authors are empowered to write Kubernetes policies in their favourite programming languages, as long as those languages can be compiled to WebAssembly. With support for Rego built-ins, they can also leverage and reuse existing Open Policy Agent or Gatekeeper policies for securing their Kubernetes clusters.

Policies are distributed using regular container registries and can be signed and verified using Sigstore.
https://github.com/kubewarden | https://kubewarden.io | https://github.com/orgs/kubewarden/projects/2 | https://github.com/kubewarden/.github/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | Security is a key aspect of cloud native platforms. Kubewarden aims to lower the barrier to writing, distributing, and maintaining Kubernetes policies.

By embracing WebAssembly, Kubewarden policies can be written in traditional programming languages. There's no new DSL to learn; policy authors can leverage their existing knowledge and be immediately productive. They can pick their favorite programming language and reuse the libraries and tools of that ecosystem to create their policies.

Within an organization, each team has the freedom to choose the programming language that fits best. Writing policies becomes like writing microservices: the same coding conventions, toolboxes (linters, testing frameworks, ...), and delivery pipelines adopted to write applications can also be used to write, review, maintain, and distribute policies.

However, in certain scenarios a DSL can have advantages over regular programming languages. Kubewarden allows policy authors to write policies using Rego, an established query language used by other CNCF projects such as Open Policy Agent and Gatekeeper.

Kubewarden policies are distributed using regular container registries which can be signed and verified using sigstore. This offers operations teams the flexibility to integrate Kubewarden into their existing infrastructure and processes for managing Cloud Native artifacts without having to build or learn anything from scratch.

With an approach to writing Kubernetes policies that is diversified compared to other policy frameworks within the CNCF, Kubewarden enables more individuals and organizations to embrace the philosophy of policy as code, ultimately leading to a more secure ecosystem. By keeping things as flexible and as simple as possible, Kubewarden also aims to provide a universal policy platform, so that policies can be distributed and enforced uniformly irrespective of the languages they are written in.
Open Policy Agent

Gatekeeper

Kyverno
https://github.com/kubewarden/.github/blob/main/CONTRIBUTING.md | Right now, there are other Kubernetes policy frameworks inside the CNCF: Open Policy Agent and Kyverno.

Both projects rely on their respective domain-specific languages for writing policies. While this does offer benefits, there is an accompanying steep learning curve that might prove to be a challenge for users adopting these projects.

Kubewarden offers a flexible approach to solving this problem. Policies can be written in Turing-complete languages, which eliminates the steep learning curve imposed by other frameworks. At the same time, it also allows policy authors to leverage Rego to write compact policies. Kubewarden aims to be a "Universal Policy Framework".

Finally, Kubewarden distributes policies using regular container registries. All the infrastructure, tooling, and processes used to distribute container images can also be reused for policies.
https://www.youtube.com/watch?v=4a9aBTKKvzA | kubewarden@suse.de | By donating Kubewarden to the CNCF, we would have a vendor-neutral home in which to collaborate with other individual contributors and companies. This would eliminate possible concerns about vendor lock-in, fostering the adoption of Kubewarden by different organizations.

Being part of the CNCF would also allow us to integrate with other projects that are already part of the foundation. For example, we would love to integrate with Artifact Hub and make Kubewarden policies one of the available artifact types offered by the website. We think security is one of the key aspects of the cloud native ecosystem, hence having ready-to-consume security policies discoverable on Artifact Hub would be highly beneficial for the whole community.

Moreover, Kubewarden is built on top of WebAssembly. This is an emerging technology that, thanks to its advantages, can have a significant impact inside of the Cloud Native ecosystem. By donating Kubewarden to the CNCF, we aim to increase the awareness of WebAssembly’s potential among the Cloud Native community, and inspire others to experiment with it.
All our repositories inherit this file: https://github.com/kubewarden/.github/blob/main/CODEOWNERS. The file is overridden in certain repositories to provide more granular information. | https://raw.githubusercontent.com/kubewarden/kubewarden.io/main/static/images/icon-kubewarden.svg
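A Kubewarden policy is, at its core, a function compiled to WebAssembly that receives an admission request as JSON and answers accept or reject. This Go sketch shows that shape with illustrative types; the project ships real SDKs, and this is not their exact API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ValidationRequest/Response mirror the general shape of an admission
// policy's contract: JSON in, accept/reject out. Field names here are
// illustrative, not the exact Kubewarden SDK types.
type ValidationRequest struct {
	Object struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"object"`
}

type ValidationResponse struct {
	Accepted bool   `json:"accepted"`
	Message  string `json:"message,omitempty"`
}

// validate rejects workloads missing an "owner" label. Compiled to
// WebAssembly, logic like this runs inside the policy server.
func validate(payload []byte) ValidationResponse {
	var req ValidationRequest
	if err := json.Unmarshal(payload, &req); err != nil {
		return ValidationResponse{Accepted: false, Message: err.Error()}
	}
	if req.Object.Metadata.Labels["owner"] == "" {
		return ValidationResponse{Accepted: false, Message: "missing required label: owner"}
	}
	return ValidationResponse{Accepted: true}
}

func main() {
	resp := validate([]byte(`{"object":{"metadata":{"labels":{"owner":"team-a"}}}}`))
	out, _ := json.Marshal(resp)
	fmt.Println(string(out))
}
```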
10
2/27/2022 1:51:52 | Hidra | Hidra is a tool that lets you monitor end-to-end scenarios, perform functional tests, and generate Prometheus metrics, all just by declaring simple scenarios in a sequential way. | https://github.com/hidracloud/hidra | https://hidra.cloud | https://hidra.cloud/roadmap/ | https://github.com/hidracloud/hidra/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | Hidra generates metrics in the Prometheus format and would be a great monitoring tool for any site deployed in the cloud. We are already monitoring a wide range of websites with Hidra, and we have written high-quality Grafana dashboards. | Site24x7, Behat(?), JMeter | https://github.com/hidracloud/hidra/blob/main/CONTRIBUTING.md | hola@josecarlos.me | I believe that Hidra is a project that integrates very well with the rest of the CNCF projects, and I consider that there is nothing similar within the foundation. I would like the project to gain visibility and maturity thanks to the CNCF, and to become a quality tool that can replace services like Site24x7.
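Hidra's model, reduced to a sketch: run scenario steps in order and emit Prometheus-style samples for each. Users declare scenarios in files rather than code, and the metric names below are invented for illustration, not Hidra's actual metric names.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Step is one action in a declarative scenario; this Go form is purely
// illustrative of what a scenario file describes.
type Step struct {
	Name string
	URL  string
}

// runScenario executes steps sequentially and prints Prometheus-style
// samples, the same exposition format the project emits.
func runScenario(name string, steps []Step) {
	for _, s := range steps {
		start := time.Now()
		resp, err := http.Get(s.URL)
		ok := 0
		if err == nil {
			if resp.StatusCode < 400 {
				ok = 1
			}
			resp.Body.Close()
		}
		fmt.Printf("hidra_step_success{scenario=%q,step=%q} %d\n", name, s.Name, ok)
		fmt.Printf("hidra_step_duration_seconds{scenario=%q,step=%q} %f\n",
			name, s.Name, time.Since(start).Seconds())
	}
}

func main() {
	runScenario("homepage", []Step{
		{Name: "load", URL: "https://example.com/"},
		{Name: "health", URL: "https://example.com/healthz"},
	})
}
```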
11
5/22/2022 23:47:35 | DevStream | DevStream is an open-source DevOps tool manager that aims to create a customized DevOps toolchain in 5 minutes with only one command. | https://github.com/devstream-io/devstream | https://www.devstream.io/ | https://github.com/devstream-io/devstream/blob/main/ROADMAP.md | https://github.com/devstream-io/devstream/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | DevStream aims to manage, set up, and integrate DevOps tools. Most of these tools are open source, and some are CNCF projects; most deploy into Kubernetes or a cloud environment. | KubeSphere (not exactly the same, but it's an example) | https://github.com/devstream-io/devstream/blob/main/CONTRIBUTING.md | tiexin.guo@merico.dev | When we started this project, we had the goal of "open source, donating to a foundation" in mind. We believe open-source, community-driven work is the future. Since our tool is related to DevOps and the cloud, we think the CNCF is the perfect choice (compared to Apache, for example).

We'd like to get more interaction with the community and learn its concrete needs and feedback, so that we can make the product better.
12
2/28/2022 14:44:18 | Hexa Policy Orchestration | Multi-cloud and hybrid adoption create new identity fragmentation challenges for enterprises. As organizations move to the cloud, they pick up new identity silos that come with each cloud platform. For technical and organizational reasons, it is impossible to consolidate identity into a single identity system. Consequently, multi-cloud now means multi-identity.

Identity is fragmented across multiple clouds and across the technology stack. This makes it difficult to manage access and to understand who has access to applications and data. Further, this fragmentation introduces security risk from complexity and inconsistency. Organizations need to provide zero trust, identity-based access but lack a way to align identity and access across the stack.

Hexa Policy Orchestration seeks to make managing distributed identity and access across multiple clouds and across the stack consistent, secure and scalable. Hexa Policy Orchestration is open-source software that manages access policy for applications and data running on multiple clouds and across your tech stack. Hexa translates and orchestrates Identity Query Language (IDQL) policies into the native policies for your application, platform, data, and network systems to unify access policy management.

IDQL is a new policy orchestration format that defines access control policies in a declarative way. IDQL policies are distributed and orchestrated, via Hexa, across heterogeneous systems at the application, platform, data, and network planes. Use IDQL to abstract policy from the underlying access control system (e.g. AWS Identity, GCP BeyondCorp Identity, Snowflake, Versa Networks, F5 Networks Volterra, Nginx, and others).
https://github.com/hexa-org/policy-orchestrator | https://hexaorchestration.org/preview/ | https://github.com/hexa-org/policy-orchestrator/blob/main/ROADMAP.md | https://github.com/hexa-org/policy-orchestrator/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | Cloud-first and cloud-native business and infrastructure systems require cloud-native ways to address identity and policy in order to meet the needs of fast-moving, modern enterprises. Hexa addresses this need to orchestrate identity and access policy across a wide range of disparate environments, an area that is not being met by any other project today.

In addition, the Hexa project utilizes software from a number of CNCF projects to support its infrastructure and ecosystem:
> Kubernetes clusters to run various Hexa components: administration UI, orchestrator and demonstration application
> Open Policy Agent is used for runtime access control within the demonstration application (instructions for installing/building demo app here: https://github.com/hexa-org/policy-orchestrator/blob/main/README.md)
> Harbor provides the container registry for all the Hexa components
> Contour is used as the ingress service
> Cert Manager handles setting up TLS certificates for communication security between Hexa components
> The Kyverno project is focused on managing policies for Kubernetes environments, but is exclusive to Kubernetes. Hexa/IDQL takes a much broader perspective to unify policies across multi-cloud environments plus across the stack to include application, platform, data and network policies. It is anticipated that a connector/integration can be built between Hexa/IDQL and Kyverno systems.
> Open Policy Agent (OPA) is a runtime authorization engine for cloud native systems, whereas Hexa/IDQL is an administrative process for orchestrating policy across cloud platforms and applications.
> We are not aware of any similar projects outside of CNCF.
https://github.com/hexa-org/policy-orchestrator/blob/main/CONTRIBUTING.md | Hexa's IDQL is a new specification created for universal access policy orchestration and governance. Rego is the policy syntax used by OPA. Rego and IDQL are complementary because they do different things. IDQL is for policy orchestration by administrators, giving you control, through managing native policies, across distributed cloud systems (including OPA policies), using a simple, declarative, universal policy. OPA policies, expressed in Rego, are focused on authorizing specific transactions (as in an API flow) and on Kubernetes access at runtime.

To further validate the alignment between Hexa and OPA, the demonstration business application included in the Hexa repo implements OPA for its internal access control. Further, the demonstration administration UI in the Hexa repo is able to discover, translate, and orchestrate Rego policies in the demo application.
https://github.com/hexa-org/policy-orchestrator/tree/main/docs/img | gerry@strata.io | Like many other projects, Hexa needs a safe and vendor-neutral forum to grow and thrive in its mission to solve the multi-vendor/multi-cloud identity and policy orchestration gap in the industry. The CNCF is the organization best suited to provide the framework and support for the Hexa project, and to help it grow via exposure to a larger audience.

The association with the CNCF addresses many of the critical concerns that potential contributors may have, namely intellectual property ownership and the open source license model. Hexa will benefit from any additional guidance that the CNCF can provide, even at the sandbox level.
https://github.com/hexa-org/policy-orchestrator/blob/main/MAINTAINERS.md | https://github.com/hexa-org/policy-orchestrator/tree/main/docs/img
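A rough Go rendering of the kind of declarative statement IDQL expresses, who may do what on which resource, which an orchestrator like Hexa would translate into each target system's native policy. The field names below are illustrative, not the published IDQL schema.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Policy sketches the shape of a declarative access statement:
// who (subject) may do what (actions) on which resource (object).
// Field names are illustrative, not the published IDQL schema.
type Policy struct {
	Meta    Meta     `json:"meta"`
	Subject Subject  `json:"subject"`
	Actions []string `json:"actions"`
	Object  Object   `json:"object"`
}

type Meta struct{ Version, PolicyID string }
type Subject struct{ Members []string }
type Object struct{ ResourceID string }

func main() {
	p := Policy{
		Meta:    Meta{Version: "0.1", PolicyID: "canary-profile"},
		Subject: Subject{Members: []string{"group:engineering"}},
		Actions: []string{"http:GET", "http:POST"},
		Object:  Object{ResourceID: "canary-app"},
	}
	// An orchestrator would translate this single document into each
	// target system's native policy (e.g. an OPA Rego rule).
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}
```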
13
3/1/2022 9:36:06 | Konveyor | THIS IS A RESUBMISSION AFTER FEEDBACK FROM TAG-APP-DELIVERY

The Konveyor Project is an open community of tools for modernization to Kubernetes and cloud-native technologies. These tools may be used either together or individually, depending on the needs of the migration team. The tools within the community provide the ability to:
1. Assess and analyze applications to determine which modernization strategy to utilize (rehost, replatform, or refactor).
2. Automate the rehosting, replatforming, and refactoring of applications.
3. Measure the impact of modernization on software delivery performance.

Konveyor plans to solicit existing and new migration tools within the industry to join our community. Current tools include:
Crane - Migrates applications between Kubernetes clusters while also automating previously manually deployed applications into Continuous Delivery.
Forklift - Automates the migration of virtual machines to KubeVirt.
Tackle - Assessment and analysis of applications to understand the risks and changes required to run them in containers on Kubernetes.
Move2Kube - a tool that uses source files such as Docker Compose files or Cloud Foundry manifest files, and even source code, to generate Kubernetes deployment files including object YAML, Helm charts, and operators.
Pelorus - a tool that helps IT organizations measure the impact that modernization of applications has had on their ability to improve their software delivery performance. It does this by gathering metrics about team and organizational behaviors over time in some key areas of IT that have been shown to impact the value they deliver to the organization as a whole including:
Lead Time for Change
Deployment Frequency
Mean Time to Restore
Change Failure Rate
https://github.com/konveyor | www.konveyor.io | https://docs.google.com/presentation/u/2/d/1qldUlHcjAvrWiU3z7FwCUm2CAbvwvIyuBs3UiEhxTIE/edit#slide=id.p | https://github.com/konveyor/community/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | The Konveyor projects align with the application definition and image build area. The projects are focused on assessment and analysis of applications and on recommendations for rehosting, replatforming, or refactoring applications to Kubernetes, Rook, CNI, CSI, and several other CNCF projects.

TAG App Delivery reviewed the Konveyor application and identified a few areas for better alignment, all of which are in progress now in the Konveyor project. First, some non-APL code is being relicensed. Second, some artifacts that are specific to OpenShift are being moved to another GitHub organization. With these changes, we feel ready to resubmit.
There are no current CNCF projects that are similar to Konveyor, and we hope that any new projects in the future will be interested in joining the open Konveyor community.

Outside the CNCF, there are a small number of tools that may overlap with the Konveyor projects. There is ongoing collaboration between the Konveyor community and the Velero and OpenRewrite communities. They would also be welcome to join Konveyor if they wanted to come to the CNCF.

OpenRewrite
Kasten (kanister is open-source)
Velero
Portworx Libopenstorage / Stork
https://github.com/libopenstorage/stork/blob/master/pkg/migration/controllers/migration.go
https://github.com/konveyor/mig-controller/blob/master/HACKING.md | While it is the eventual intention of Konveyor to support migration from older stacks to all cloud native technologies, the ones we already support include:
Kubernetes: Konveyor's core purpose is to enable migrating applications to Kubernetes.
KubeVirt: Konveyor Forklift helps users migrate VM-based software directly to KubeVirt.
Helm: Move2Kube generates Helm charts from other deployment tools
Rook, CSI: Konveyor Crane supports migration from older storage to multiple cloud-native storage providers
https://docs.google.com/presentation/d/1xcwBPHb8MI5m2TU7TFk3f18KlkU_M6XavdTYDlYAPc8/edit#slide=id.gbce498966b_0_219 | tsanders@redhat.com | While the Konveyor project produces migration tools, its primary purpose is to build an inclusive community of practitioners around migrating to cloud native, in order to build tools that support practitioners on their journey. It should be noted that an inclusive community has been the intention of the Konveyor community members for some time: there is a Slack channel (#konveyor) on Kubernetes Slack (https://slack.k8s.io), all meetups are open to public participation, planning sessions are open to all, and sprint demonstrations are published to YouTube for public viewing. The CNCF is the best place for this, both because it provides a vendor-neutral nexus for collaboration and because of the CNCF's strong end user community. In short, joining the CNCF will allow Konveyor to fulfill its vision of becoming the place for people to discuss, share, and contribute to solutions for moving to Kubernetes and cloud native.

It is the belief of the community members of Konveyor that there is a gap in open source tools for rehosting, replatforming, and refactoring applications to utilize CNCF technologies, most notably Kubernetes. We would like to address this gap and provide the necessary tools. The community hopes to garner additional usage, feedback, and ultimately contributions from additional software developers. We believe that contributing the project to the CNCF will be mutually beneficial.
14
3/21/2022 16:02:19 | Trousseau | Trousseau is a Kubernetes Key Management Service (KMS) provider plugin that acts as a broker between the Kubernetes API server and a KMS to perform in-flight encryption of Secrets using a transit key engine or similar.
Trousseau enables separation of duties and a zero-trust security model, letting development, application, and infrastructure teams safely store secrets with an encrypted payload within Kubernetes etcd using the native Secret management API, without the need to leverage specific KMS tooling or KMS API subsets.
https://github.com/ondat/trousseau | https://trousseau.io | https://github.com/ondat/trousseau/blob/main/ROADMAP.md | https://github.com/ondat/trousseau/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | Trousseau is a container-based solution supporting a full shift left for deploying and maintaining cloud native applications that consume Kubernetes Secrets (and more), by leveraging the Kubernetes KMS provider API construct without additional CLI tooling or the need to learn a new set of skills and an off-Kubernetes API subset. | Deprecated: https://github.com/oracle/kubernetes-vault-kms-plugin
Limited to AWS KMS: https://github.com/kubernetes-sigs/aws-encryption-provider
https://github.com/ondat/trousseau/blob/main/CONTRIBUTING.md | Trousseau aligns with the Kubernetes KMS provider definition (https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/) as the best practice for securing secrets (https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/), and with etcd for securing the key-value contents. | https://github.com/ondat/trousseau/wiki | rovandep@trousseau.io | By contributing the project to the CNCF, we believe that the Kubernetes user community can benefit from Trousseau to ease their Secret management the native way while securing those Secrets within etcd. We also hope that CNCF community contributors will come up with new suggestions for improving the community experience. | https://github.com/ondat/trousseau/blob/main/MAINTAINERS.md | https://github.com/ondat/trousseau/tree/main/assets
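The seam Trousseau occupies is the Kubernetes KMS provider contract: the API server hands over data-encryption keys, and the plugin returns ciphertext produced by an external KMS. In this self-contained sketch a local AES-GCM key plays the role of the remote transit engine, and the Encrypt/Decrypt pair only mirrors the general shape of the real gRPC contract, not its exact proto definitions.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// kmsBroker stands in for the broker role described above: plaintext
// data-encryption keys in, ciphertext from an external KMS out. Here a
// local AES-GCM key substitutes for the remote KMS.
type kmsBroker struct{ aead cipher.AEAD }

func newBroker(key []byte) (*kmsBroker, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return &kmsBroker{aead: aead}, nil
}

// Encrypt seals the payload and prepends the nonce to the ciphertext.
func (b *kmsBroker) Encrypt(plain []byte) ([]byte, error) {
	nonce := make([]byte, b.aead.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return b.aead.Seal(nonce, nonce, plain, nil), nil
}

// Decrypt splits the nonce back off and opens the ciphertext.
func (b *kmsBroker) Decrypt(ct []byte) ([]byte, error) {
	n := b.aead.NonceSize()
	return b.aead.Open(nil, ct[:n], ct[n:], nil)
}

func main() {
	key := make([]byte, 32)
	io.ReadFull(rand.Reader, key)
	broker, _ := newBroker(key)
	ct, _ := broker.Encrypt([]byte("a-secret-dek"))
	pt, _ := broker.Decrypt(ct)
	fmt.Printf("round trip ok: %s\n", pt)
}
```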
15
3/25/2022 7:10:23 | YAM | Yet Another Application Model - The Simpler Way of Continuous Delivery.

The YAM project provides a flexible and powerful way of doing CD, separating the concerns of the application owner and the infrastructure owner. The project releases a VS Code extension, a CLI, a container image, and an NPM package, so it can easily be integrated into any existing CD platform such as ArgoCD or Jenkins.

VS Code Extension: https://marketplace.visualstudio.com/items?itemName=code2life.yam-engine
NPM Package: https://www.npmjs.com/package/yam-cli

Moreover, the powerful plugin mechanism leverages both the JavaScript and Golang ecosystems, making it simple and clear to extend the YAM engine.

Documents: https://yam.plus/#/get-started (WIP)
https://github.com/Code2Life/yam | https://yam.plus | https://yam.plus/#/roadmap | https://github.com/Code2Life/yam/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | This project focuses on a better CD experience for Kubernetes, which is a critical pain point for large-scale CD.

When you need to release hundreds of applications into more than 20 multi-cluster Kubernetes environments (the case I encountered at Zoom), existing tools like Helm and Kustomize stop working for the DevOps team, because maintaining Kubernetes manifests and variables takes too much effort.

Helm and Kustomize are great tools for simple cases, while OAM is the right CD methodology for complicated cases. YAM is a brand-new implementation that inherits the good parts of OAM/KubeVela.
KubeVela: https://github.com/oam-dev/kubevela
cdk8s: https://github.com/cdk8s-team/cdk8s
https://yam.plus/#/contribute | The project is inspired by KubeVela, and its functionality is similar to KubeVela, Helm, and cdk8s, but the methodology, architecture, and workflow are totally different from these CNCF projects.

Here is a detailed comparison document:
Refer: https://yam.plus/#/compare-to
https://yam.plus/#/tutorials | code2life@ustc.edu | Why contribute? The background and reasons:
I'm an architect at Zoom. This project originated from an internal CD project I developed at Zoom for delivering hundreds of backend applications in more than a dozen regions globally in a cloud native way.
The internal project was highly inspired by KubeVela. After a year, I thought it was time to give back to the open source community, so I re-designed the plugin mechanism and re-implemented the core features in recent months. The project is now beta-released and has a totally different implementation from KubeVela. The engine's core features are complete. However, as for the ecosystem, namely the various plugins that extend the CD ability, it definitely needs the help of the CNCF and the whole open source community to get better.


By joining the CNCF, the project might be used by more potential users and companies, and the ecosystem could become more thriving. If it is accepted as a Sandbox project, that would lay a better foundation for moving to the Incubating stage in the future.
16
3/25/2022 8:42:19 | Armada | Armada is an application for achieving high throughput of run-to-completion jobs across multiple Kubernetes clusters.

It stores queues of pod specifications for users/projects and creates these pods once resource becomes available in one of the connected Kubernetes clusters.
https://github.com/G-Research/armada | https://armadaproject.io/ | https://github.com/G-Research/armada/pull/830 | https://github.com/G-Research/armada/blob/master/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | Armada is a Kubernetes-based project, first and foremost. As Kubernetes has become the focal point for what is considered 'cloud native' today, Armada is a natural fit for this ecosystem. | Volcano
kube-queue
MCAD
https://github.com/G-Research/armada/blob/master/CONTRIBUTING.md | While Volcano and kube-queue exist and have some traction with organizations that require batch scheduling, they are currently limited to single-cluster setups. Armada, in contrast, is a meta-scheduler that federates multiple Kubernetes clusters, allowing organizations to scale their batch-processing workloads across an even wider estate. | https://youtu.be/9-lEtvmWeEA?t=187 | dave@gr-oss.io | Batch processing is an increasingly important subject for CNCF members. While other batch-processing projects exist, we feel our multi-cluster approach is an important advantage that the community will be excited about. We fully embrace the cloud native community and hope this contribution will make it stronger. | https://github.com/G-Research/armada/tree/gh-pages/assets/img/brand
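Armada's meta-scheduling loop, in miniature: drain per-user queues into whichever connected cluster currently reports spare capacity. This is a toy sketch of the idea described in the entry above, not the project's implementation.

```go
package main

import "fmt"

// Job is a queued unit of work; in Armada this carries a full pod spec.
type Job struct {
	Queue  string
	Name   string
	CPUReq int
}

// Cluster tracks spare capacity reported by a member Kubernetes cluster.
type Cluster struct {
	Name    string
	CPUFree int
}

// dispatch drains the queue into whichever cluster currently has room,
// a miniature of federated batch scheduling across clusters.
func dispatch(queue []Job, clusters []Cluster) []string {
	var placements []string
	for _, job := range queue {
		for i := range clusters {
			if clusters[i].CPUFree >= job.CPUReq {
				clusters[i].CPUFree -= job.CPUReq
				placements = append(placements,
					fmt.Sprintf("%s/%s -> %s", job.Queue, job.Name, clusters[i].Name))
				break
			}
		}
	}
	return placements
}

func main() {
	queue := []Job{{"science", "sim-1", 8}, {"science", "sim-2", 8}, {"etl", "load", 4}}
	clusters := []Cluster{{"cluster-a", 10}, {"cluster-b", 12}}
	for _, p := range dispatch(queue, clusters) {
		fmt.Println(p)
	}
}
```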
17
4/12/2022 17:22:28 | KubeRay | KubeRay is an open source toolkit for running Ray applications on Kubernetes. It provides several tools to improve the experience of running and managing Ray on Kubernetes:

- Ray Operator
- Backend services to create/delete cluster resources
- Kubectl plugin/CLI to operate CRD objects
- Data-scientist-centric workspace for fast prototyping (incubating)
- Native Job and Serving integration with clusters (incubating)
- Kubernetes event dumper for Ray clusters/pods/services (future work)
- Operator integration with the Kubernetes node problem detector (future work)
github.com/ray-project/kuberay | https://docs.ray.io/en/master/cluster/kuberay.html | https://github.com/ray-project/kuberay/issues | https://github.com/ray-project/kuberay/blob/master/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | This project enables Kubernetes support for Ray and attempts to make cloud native computing ubiquitous for machine learning and data-intensive applications. Ray enables its users to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. | There are no similar projects that enable Kubernetes support for Ray; however, similar efforts for data preprocessing include the Dask Kubernetes operator. | https://github.com/ray-project/kuberay/blob/master/CONTRIBUTING.md | paige@anyscale.com | The Ray open source project would be very interested in a closer partnership with the Kubernetes community, to ensure that we are aware of any duplicate or similar projects and to better advertise our project to potential contributors. | N/A
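The operator's central object is a Ray cluster custom resource: one head group plus scalable worker groups, each backed by an ordinary pod template. These Go types approximate that shape; the field names are illustrative and may differ from the published CRD.

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// RayCluster approximates the custom resource the operator reconciles:
// one head group plus scalable worker groups. Field names here are
// illustrative and may differ from the actual CRD.
type RayCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              RayClusterSpec `json:"spec"`
}

type RayClusterSpec struct {
	HeadGroupSpec    GroupSpec   `json:"headGroupSpec"`
	WorkerGroupSpecs []GroupSpec `json:"workerGroupSpecs"`
}

type GroupSpec struct {
	GroupName string `json:"groupName,omitempty"`
	Replicas  int32  `json:"replicas"`
	// Template is a full pod template, so Ray nodes are plain pods the
	// operator creates, scales, and repairs.
	Template corev1.PodTemplateSpec `json:"template"`
}
```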
18
4/14/2022 16:19:37 | Open Zero Trust security platform | Open Zero Trust is a full-lifecycle container security platform, delivering end-to-end security for modern container infrastructures. It offers a cloud native Kubernetes security platform with end-to-end vulnerability management, automated CI/CD pipeline security, and comprehensive run-time security, including a container firewall to block zero-days and other threats. With the Open Zero Trust (OZT) platform, DevOps, DevSecOps, and security teams have the tools they need to secure the entire container pipeline, from build to ship to run, automatically.

The Open Zero Trust (OZT) security platform includes many enterprise-ready security functions:

An enterprise-grade layer 7 firewall that enforces advanced network security for container environments: egress/ingress L7 control, DPI/IDS/IPS, packet capture, WAF, DLP, micro-segmentation, and built-in threat detection.

Container behavior protection based on network, file system, and process profiling, which efficiently protects against zero-day attacks, crypto-mining, and other malicious behavior inside running containers.

A network topology graph that shows live traffic between pods. The graph also indicates namespaces, groups, L7 protocols, network sessions, payloads, security warnings/incidents, and more. It can also be used to capture network forensics or define security policies.

A security dashboard that automatically assesses the cluster, reports security risk scores for both runtime and images, and provides mitigation guidance to improve those scores. Plus, a full set of REST APIs is available to retrieve and manage OZT backend services for security automation or customization needs.

An enhanced Kubernetes admission controller that secures container deployments with security policies: namespace, user role, volume, CVEs, labels, etc.

A compliance framework with built-in templates for PCI-DSS, GDPR, NIST, and HIPAA, which can also be customized for any container compliance need.

A Kubernetes security policy CRD for centrally managing security policies for Kubernetes. This security-policy-as-code capability is used by many users to achieve security automation. It also includes a built-in PSP-alternative policy to check pod security postures.

A multi-cluster security posture management function that is platform-neutral: it can manage security across different Kubernetes distributions, private clouds, or public clouds.

A lightweight vulnerability scanner that can scan for CVEs on worker nodes/hosts, running pods, and images in the container registry as well as in the CI/CD pipeline, with integrations and plugins for popular CI/CD tools like Jenkins, Splunk, and CircleCI.

Support for HA (high availability) setups by default, plus SSO, RBAC integration, configuration backup/export/import, SIEM integration, and user profile management.
https://github.com/openzt | https://github.com/openzt/openzerotrust | https://github.com/openzt/openzerotrust/blob/main/ROADMAP.md | https://github.com/openzt/openzerotrust/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | The OZT platform is fully containerized and can be deployed, managed, and updated through any standard Kubernetes platform or API-driven automation pipeline. OZT has two major security function groups to protect cloud native workloads: 1) continuous vulnerability management and compliance auditing across the container life cycle, and 2) real-time zero-trust enforcement, including layer 7 network security, for running containers. The unique security-policy-as-code capability meets Kubernetes security automation needs. All of this fits naturally into the cloud native computing ecosystem and helps secure it. | - Falco Open Source Security: a different approach, focused only on risk assessment and threat detection.
- Open Policy Agent: for policy integration and management. Can be an integration point for OZT.
- Notary and TUF: focus on cross platform trust and content delivery. Can be a good integration point for OZT.
- Cilium: focuses on secured cloud native network connections. Transparent to OZT's security functions; we have common users running the two solutions together.
https://github.com/openzt/openzerotrust/blob/main/CONTRIBUTING.md | gary.duan@suse.com | NeuVector was acquired by SUSE, one of the leading open source companies. As part of our growth plan and strategy, we would like to open-source NeuVector and contribute its core technology, which we have named the Open Zero Trust (OZT) project.

Our objective in open sourcing our NeuVector technology is to further the creation of an open, interoperable standard for security integration for all container management platforms, including SUSE Rancher, Red Hat OpenShift, VMware Tanzu, EKS, AKS, GKE, etc. Open source enables this through accelerated user adoption and industry-wide collaboration and contributions. This is important for the entire container community. Additionally, our mission, unlike our competitors', is to maximize the adoption of container security without locking you into a particular container management platform. As such, NeuVector will continue to support third-party CMPs in addition to SUSE Rancher.

We think cloud native security is essential, and we want more people to know there is a strong solution for securing their environments. The OZT platform is already being used in production by many enterprise users and has been validated to scale in different environments as well as across multiple clouds. We would like to work with others to extend the platform to handle more situations and provide even better coverage. The CNCF provides a vendor-neutral place to work with others on making the cloud more secure, and a great venue for openly collaborating on the further development of OZT. We would also like to work with a wide array of end users to understand their needs and help improve OZT to meet them. The CNCF end user community looks like an invaluable resource in furthering that mission.
We are cleaning up and moving the source code over to the OpenZeroTrust project step by step. Please feel free to contact us if you want to see the code earlier; it is all open-sourced under NeuVector's repository right now. Thank you! | https://github.com/openzt/openzerotrust/blob/main/OpenZeroTrustLogo_Green2.png
19
4/18/2022 23:12:30 | SREWorks | SREWorks is the Alibaba Cloud Big Data SRE team's cloud native operation and maintenance (O&M) platform, born out of nearly a decade of accumulated business experience. It applies "Big Data and AI" thinking to O&M work (we call this DataOps and AIOps) to help more practitioners use DataOps and AIOps to do O&M work efficiently. | https://github.com/alibaba/sreworks | https://github.com/alibaba/SREWorks/blob/main/ROADMAP.md | https://github.com/alibaba/SREWorks/blob/main/CODE_OF_CONDUCT.md | I accept the CNCF IP Policy | I Accept | 1. Complete cloud native O&M techniques and construct closed-loop, end-to-end development and O&M procedures for cloud native clusters using the Open Application Model.
2. Deploy different big data solutions for the O&M platform using cloud native orchestration capabilities.
Tencent BlueKing | https://github.com/alibaba/SREWorks | kubevela: the Open Application Model definition
helm: Application Definition
dataops@service.alibaba.com | Because the CNCF cloud native computing ecosystem lacks O&M solutions for large-scale applications, SREWorks supplies an open source O&M platform. We also intend to leverage the CNCF to help people learn more about cloud native O&M methods, as well as the concept of "DataOps and AIOps".
20
4/19/2022 18:32:56 | BumbleBee | BumbleBee brings a Docker-like experience to eBPF. In a nutshell, BumbleBee simplifies building eBPF tools and allows you to package, distribute, and run them anywhere. Just focus on the eBPF portion of your code, and BumbleBee automates away the boilerplate, including the userspace code. | https://github.com/solo-io/bumblebee/ | bumblebee.io | https://github.com/solo-io/bumblebee/blob/main/ROADMAP.md | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html | I accept the CNCF IP Policy | I Accept | eBPF has drawn a lot of interest in the cloud native ecosystem in the past few years, as it can provide networking-layer security enforcement and/or metrics/traces/logs from the Linux kernel. This strong interest has raised a few challenges:

- How can users quickly build/run/distribute an eBPF program?
- How can users provide customized eBPF programs on top of what the cloud-native ecosystem provides?
- How does eBPF interact with service mesh and when to choose one or the other or both?

BumbleBee is intended to simplify building eBPF tools and to let you package, distribute, and run them anywhere. With BumbleBee, cloud native users can feel as comfortable with eBPF as they do with containers and Docker today, which also helps them make an informed decision about eBPF and service mesh.
- Docker (not for eBPF)
- https://github.com/cloudflare/ebpf_exporter (focused specifically on a Prometheus exporter for eBPF metrics)
https://github.com/solo-io/bumblebee/blob/main/docs/contributing.md | https://www.youtube.com/watch?v=4js-blTUV1Q | lin.sun@solo.io | The CNCF's mission is to make cloud native computing ubiquitous. BumbleBee is well aligned with this initiative, not only enabling users to easily build eBPF tools but also letting them package, distribute, and run those tools anywhere in their Kubernetes or VM environments.

We have presented BumbleBee to the Observability TAG due to strong interest from the TAG. We believe that the CNCF provides the proper home for this project due to its commitment to the promotion and development of open, vendor-neutral projects. Additionally, the wide breadth of CNCF members will provide the feedback necessary to ensure BumbleBee isn't too limited in its scope and appeals to as many constituents of the cloud native community as possible.
EItanya, lgadban, yuval-k, linsun | https://github.com/solo-io/bumblebee/blob/main/logo.svg
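The userspace boilerplate BumbleBee automates away looks roughly like the following sketch, written against the cilium/ebpf library: load a compiled object, attach a probe, read a map. The program and map names ("count_execs", "exec_count", "probe.o") are hypothetical, and this is an illustration of the chore, not BumbleBee's code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Load a compiled eBPF object file; names below are hypothetical.
	coll, err := ebpf.LoadCollection("probe.o")
	if err != nil {
		log.Fatalf("loading collection: %v", err)
	}
	defer coll.Close()

	// Attach the program to a kprobe on execve.
	prog := coll.Programs["count_execs"]
	kp, err := link.Kprobe("sys_execve", prog, nil)
	if err != nil {
		log.Fatalf("attaching kprobe: %v", err)
	}
	defer kp.Close()

	// Read a counter the kernel side increments; BumbleBee would turn
	// maps like this into metrics or log streams automatically.
	var key uint32
	var count uint64
	if err := coll.Maps["exec_count"].Lookup(&key, &count); err == nil {
		fmt.Printf("execve calls observed: %d\n", count)
	}
}
```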
21
5/10/2022 21:57:52 | CloudNativePG | CloudNativePG enables and integrates PostgreSQL on Kubernetes and other parts of the cloud native ecosystem. PostgreSQL is one of the most popular open source databases, and as such this project helps users move existing applications and deploy popular application frameworks in a Kubernetes environment both quickly and safely. Similar to Vitess, CloudNativePG strives to bring a battle-tested relational database to the modern cloud native world.

The main component of the project is the CloudNativePG operator. The project also delivers the operand images for all the supported versions of PostgreSQL, as well as Helm charts.

CloudNativePG defines a Kubernetes resource representing a PostgreSQL cluster made up of a single primary and an optional number of replicas. CloudNativePG orchestrates the full life cycle of a PostgreSQL cluster, from bootstrapping and configuration, through high availability and connection routing, to backups and disaster recovery.
CloudNativePG relies on PostgreSQL’s native streaming replication to distribute data across pods, nodes, and zones, using standard Kubernetes patterns. Replicas can be scaled up and down in a Kubernetes native manner, and the operator will automatically and safely reconfigure replication as appropriate.

Applications access CNPG-managed PostgreSQL databases using a Kubernetes Service which is managed by the operator. Users do not need to worry about their connections in case of planned or unplanned failovers; the operator automatically routes all connections to the current primary.

Disaster recovery capabilities are implemented by integrating the open source Barman project to orchestrate PostgreSQL’s native backup and continuous transaction log archiving functionality with support for all major object storage implementations. At restore time, CNPG supports declarative access to PostgreSQL’s point-in-time restore functionality.

Designed by a multidisciplinary team with both PostgreSQL and Kubernetes expertise, CloudNativePG targets any community supported version of both Kubernetes and PostgreSQL, and it is suitable for private, public, hybrid, or multi-cloud environments. It was derived from a proprietary project, EDB Cloud Native PostgreSQL, that has been deployed widely over the past eighteen months.

CloudNativePG adheres to DevOps principles and concepts such as declarative configuration and immutable infrastructure. True to its name, CNPG is designed to integrate as seamlessly as possible with Kubernetes and to provide a “native” experience for users who are familiar with modern cloud-native technologies but less adept with PostgreSQL.
https://github.com/cloudnative-pg/cloudnative-pghttps://cloudnative-pg.iohttps://github.com/orgs/cloudnative-pg/projects/1https://github.com/cloudnative-pg/cloudnative-pg/blob/main/CODE_OF_CONDUCT.mdI accept the CNCF IP PolicyI AcceptCloudNativePG falls in the database category, currently overseen by the Storage Technical Advisory Group. The project can be compared to both Vitess (MySQL database) and TiKV (KV database), even though the underlying engine is different (in our case PostgreSQL).

CloudNativePG relies on the Kubernetes API server to make decisions that involve the state of the Postgres cluster (one primary, zero or more replicas), including automated failover, switchover, self-healing, configuration changes, rolling updates, and routing through Kubernetes services. We are also looking forward to enhancements in the cluster federation space, so that failover and switchover operations can span multiple Kubernetes clusters.

CloudNativePG also facilitates infrastructure monitoring, through a native and customizable exporter for Prometheus, and log management, rewriting PostgreSQL’s line-oriented log format to JSON on the fly for easy consumption by cloud native tooling such as Fluentd or OpenTelemetry.

CloudNativePG supports affinity and anti-affinity rules, tolerations, and resource management. It is also storage agnostic, via storage classes and PVC templates.

CloudNativePG provides TLS-encrypted connections by default, with a self-generated CA and certificates, or the option to supply custom ones (including integration with cert-manager).

It can be installed via YAML manifests or a Helm chart, and provides a kubectl plugin.

Finally, based on the benchmarks and tests that have been done, we as maintainers strongly believe that relying on Kubernetes provides the best high availability and self-healing experience for PostgreSQL. We also believe that there’s a lot of room for improvement in this area, and we’d like to be part of this ecosystem.
In the “databases” category of the CNCF, the most similar projects are Vitess and TiKV, since there are no PostgreSQL-based systems in CNCF. More broadly, there are several operators of varying maturity in the PostgreSQL-on-Kubernetes space:

Crunchy Data Operator
Zalando Operator
Stackgres
Kubegres
Stolon

None is a community- or foundation-governed project.
https://github.com/cloudnative-pg/cloudnative-pg/blob/main/CONTRIBUTING.mdCloudNativePG relies heavily on Kubernetes. As a result, our priority is to stay constantly aligned with new Kubernetes releases and capabilities.

Moreover, managing many PostgreSQL databases in large Kubernetes clusters requires integration with existing projects for monitoring. CloudNativePG currently implements a native exporter for Prometheus, with customizable queries via ConfigMaps or Secrets. This can be extended on the alerting side. While we believe that the project has very good foundations, we also understand that there’s room for improvement in this area, especially around UI, and we look forward to working together with the CNCF to improve our posture.

Similarly, on the storage and CSI drivers side, we look forward to deeper cooperation with CNCF projects.

There is no direct overlap with other CNCF projects, as there are currently no PostgreSQL operators in CNCF. From a high level functional point of view there is overlap with Vitess and TiKV, as previously discussed.
https://cloudnative-pg.io/pdf/cloudnative-pg-intro.pdfgabriele.bartolini@enterprisedb.comThe adoption of Kubernetes and modern cloud native technologies is still lagging among database experts, and vice versa. No single vendor can solve this.

We believe that contributing this project to the CNCF is an opportunity to address this problem and attempt to solve it, by gathering multiple PostgreSQL vendors and users that can support each other around a common project, as members of a vendor neutral and openly governed community.

We as maintainers have strong skills in both PostgreSQL and Kubernetes and experience working in Open Source projects. We think that this contribution is beneficial for both PostgreSQL and the Cloud Native ecosystem, given how popular and innovative this database management system is.

Our goal is for this project to be the home for Postgres in Kubernetes within CNCF.

From the CNCF we’d like to get support on the community side and the necessary multi-perspective feedback for Postgres to stretch its capabilities and excel in Cloud Native deployments too. This kind of input from the CNCF community and user base can only improve Postgres.

We also seek further cooperation and enhanced integrations with other CNCF projects like Prometheus, Fluentd, Helm, Open Policy Agent, cert-manager, and so on. Given the criticality of the storage component in a database, we are looking forward to cooperation with projects like Longhorn, OpenEBS, and Rook, to cite a few.
https://github.com/cloudnative-pg/cloudnative-pg/blob/main/CODEOWNERS (this is for the operator project, the main project - each subproject will have its own CODEOWNERS - as defined in the GOVERNANCE)https://github.com/cloudnative-pg/artwork
22
4/26/2022 8:59:37Konveyor project
The Konveyor project is an open community of tools for modernizing workloads to Kubernetes and cloud-native technologies. These tools help enterprises overcome the top barriers to bringing existing workloads into Kubernetes: cost and time constraints. Like the Argo project, these tools may be used either together or individually, depending on the needs of the migration team. The tools within the community provide the ability to:
Assess and analyze applications to determine which modernization strategy to utilize (rehost, replatform, or refactor).
Automate the rehosting, replatforming, and refactoring of applications.
Measure the impact of modernization on software delivery performance.

Current tools in the Konveyor project, all of which work with core Kubernetes, are:
Crane - Migrates applications between Kubernetes clusters while also bringing previously manually deployed applications under continuous delivery.
Forklift - Automates the migration of virtual machines to KubeVirt.
Tackle - Assessment and analysis of applications to understand the risks and changes required to run them in containers on Kubernetes.
Public issue on relicensing Tackle coming soon
Move2Kube - A tool that uses source files such as Docker Compose files or Cloud Foundry manifest files, and even source code, to generate Kubernetes deployment files including object YAML, Helm charts, and operators (see the sketch after this list).
Pelorus - A tool that helps IT organizations measure the impact that application modernization has had on their software delivery performance. It does this by gathering metrics over time about team and organizational behaviors in key areas of IT that have been shown to impact the value delivered to the organization as a whole, including:
Lead Time for Change
Deployment Frequency
Mean Time to Restore
Change Failure Rate
*Mig
This project is in the Konveyor GitHub organization but will be removed if accepted into the CNCF sandbox, as it is dependent on a vendor product.
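
As referenced in the Move2Kube item above, here is a minimal Go sketch that invokes that tool on a source directory. It assumes the move2kube binary is installed and that transform takes -s for the source directory, as in the upstream quick start; the directory name and the --qa-skip flag (to accept default answers non-interactively) are assumptions based on our reading of the upstream docs.

```go
// Minimal sketch: invoke the Move2Kube CLI from Go.
// Assumptions: the `move2kube` binary is on PATH; `transform -s <dir>`
// follows the upstream quick start; `--qa-skip` accepts default answers.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Point Move2Kube at a directory containing, e.g., a Docker Compose
	// or Cloud Foundry manifest file; it generates Kubernetes deployment
	// artifacts (object YAML, Helm charts) from what it finds there.
	cmd := exec.Command("move2kube", "transform", "-s", "./src", "--qa-skip")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("move2kube transform failed: %v", err)
	}
}
```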

Konveyor is actively working with the community to bring in new migration tools. Part of the incubation process would be to standardize how workloads are migrated into the Kubernetes ecosystem, including assessment and analysis of applications and recommendations on whether to rehost, replatform, or refactor them onto Kubernetes, Rook, CNI, CSI, and several other CNCF projects.
https://github.com/konveyorwww.konveyor.io
https://docs.google.com/presentation/u/2/d/1qldUlHcjAvrWiU3z7FwCUm2CAbvwvIyuBs3UiEhxTIE/edit#slide=id.p
https://github.com/konveyor/community/blob/main/CODE_OF_CONDUCT.md
I accept the CNCF IP PolicyI Accept
The primary goal of the Konveyor project is to simplify migration of legacy and cloud native workloads to Kubernetes, thus supporting end users in the cloud native computing ecosystem in their modernization efforts.

Migration tools within the Konveyor project align with the application definition and image build area. The tools are focused on assessment and analysis of applications and recommendations on whether to rehost, replatform, or refactor them onto Kubernetes, Rook, CNI, CSI, and several other CNCF projects.
The Konveyor Project is similar to the Argo Project in that they both contain a collection of common tools. That said, the Konveyor Project is focused on open source migration tools to enable legacy and cloud native workloads to be migrated into Kubernetes.

Outside the CNCF, there are a small number of tools that may overlap with the Konveyor project. The Konveyor community has ongoing collaboration with the Velero and OpenRewrite communities, which would also be welcome to join the Konveyor project if they wanted to move their projects to the CNCF.

Similar migration tools outside the CNCF:
OpenRewrite
Kasten (kanister is open-source)
Velero
Portworx Libopenstorage / Stork
https://github.com/libopenstorage/stork/blob/master/pkg/migration/controllers/migration.go
https://github.com/konveyor/mig-controller/blob/master/HACKING.md
The Konveyor project is an open community of tools with the goal of supporting the migration from older technology stacks to all cloud native technologies. The community is continually growing and already aligns with:
Kubernetes: Konveyor's core purpose is to enable migrating applications to Kubernetes. All Konveyor projects are compatible with mainline Kubernetes.
KubeVirt: Konveyor Forklift helps users migrate VM-based software directly to KubeVirt. This is a Konveyor project instead of a KubeVirt project to leverage the standardized processes and technologies the Konveyor project provides.
Helm: Move2Kube generates Helm charts from other deployment tools.
Rook, CSI: Konveyor Crane supports migration from older storage to multiple cloud-native storage providers.
CSI: Konveyor Crane will leverage CSI snapshot support, in addition to providing filesystem level transfer via rsync for NFS and other storage providers not yet using CSI. We are core contributing members of upstream projects based around state transfer including Velero, and have donated our platform agnostic state transfer logic to Backube/pvc-transfer (a shared library consumed by several projects, including VolSync). In the future, we intend to collaborate on VolumePopulators to help increase our ability for fluid state replication across clusters.
https://docs.google.com/presentation/d/1xcwBPHb8MI5m2TU7TFk3f18KlkU_M6XavdTYDlYAPc8/edit#slide=id.gbce498966b_0_219
kangell@redhat.com
The Konveyor project was created to help lower the barriers to migrating existing workloads to Kubernetes at scale for enterprises as well as smaller end users. While the Konveyor project produces migration tools, the primary purpose of Konveyor is to build an inclusive community of practitioners around migrating to Kubernetes and cloud native applications in order to build tools to support the practitioner's journey.

Creating an inclusive community has been the intention of the Konveyor community members for some time. There is a Slack channel (#konveyor) on the Kubernetes Slack (https://slack.k8s.io). All meetups are open to public participation. Planning sessions are open to all. Sprint demonstrations are published to YouTube for public viewing as well. The CNCF is the best place for this, both because it provides a vendor-neutral nexus for collaboration, and because of the CNCF's strong end user community. In short, joining the CNCF will allow the Konveyor project to fulfill its vision of becoming the place for people to discuss, share, and contribute to solutions in a standard and simplified manner for moving to Kubernetes and cloud native workloads.

It is the belief of the community members of the Konveyor project that there is a gap in open source tools to rehost, replatform, and refactor applications to utilize CNCF technologies, most notably Kubernetes. We would like to address this gap and provide the tools necessary. The Konveyor community hopes to garner additional usage, feedback, and ultimately contribution from additional software developers to the project. We believe that contributing the project to the CNCF will be mutually beneficial.
23
4/28/2022 2:35:21dbpack
DBPack is a database cluster tool pack. It can be deployed as a sidecar in a pod, where it shields the application from complex low-level logic, so business development does not need to rely on a specific SDK, simplifying the development process and improving development efficiency.
https://github.com/CECTC/dbpackhttps://cectc.github.io/dbpack-doc/#/https://github.com/CECTC/dbpack/blob/dev/ROADMAP.md
https://github.com/CECTC/dbpack/blob/dev/CODE_OF_CONDUCT.md
I accept the CNCF IP PolicyI Accept
DBPack provides an operator to manage database topology and database traffic, and allows users to customize database traffic routing through hints. All of this is achieved by defining CRD rules and detecting hints in SQL statements, as sketched below.
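A minimal Go sketch of what this looks like from the application side follows. It assumes a DBPack sidecar listening on 127.0.0.1:13306 that speaks the MySQL protocol; the DSN and the hint comment syntax are illustrative assumptions, not taken verbatim from the DBPack docs.

```go
// Minimal sketch: query through a DBPack-style sidecar from Go.
// Assumptions: the sidecar listens on 127.0.0.1:13306 and speaks the
// MySQL protocol; the hint syntax below is illustrative only.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql" // MySQL driver for database/sql
)

func main() {
	// Connect to the sidecar instead of the database; the sidecar owns
	// routing, so no vendor-specific SDK is needed in the application.
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:13306)/app")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Hypothetical routing hint embedded as a SQL comment; a proxy that
	// detects hints can match it against CRD-defined routing rules.
	var n int
	err = db.QueryRow("/*+ UseDB('read-only') */ SELECT COUNT(*) FROM orders").Scan(&n)
	if err != nil {
		panic(err)
	}
	fmt.Println("orders:", n)
}
```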
We are not aware of any similar project.
https://github.com/CECTC/dbpack/blob/dev/CONTRIBUTING.md
digitalkosmos.sl@gmail.com

We want to attract more developers to participate in the project and to form a standard in the field of database traffic governance, preventing companies from each launching their own incompatible database mesh products.
24
5/2/2022 3:20:31Carina
Carina is a standard Kubernetes CSI plugin which provides raw disk performance for stateful applications and is ops-free to maintain. The core function of Carina is to group the local disks of each node and provide LVM or raw disks via different storage classes. For each PVC, Carina tries to find the most suitable node on which to set up the data volume (see the Go sketch after these lists). Carina also has lots of useful features, for example:

* device registration
* PVC resizing
* IO throttling
* volume topology
* PVC auto-tiering
* pod failover to other nodes
* RAID management
* SMART-aware PVC

Besides its rich functionality, Carina is also an ops-free CSI driver, for example:

* Installation via YAML or Helm, with no per-node dependencies.
* Uses battle-tested kernel modules and sits beside the core IO path.
* If a disk fails, just plug in a new one and that's it.
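
As referenced above, here is a minimal Go sketch of requesting a Carina-backed volume through the standard client-go API. The StorageClass name and size are illustrative assumptions; only the ordinary PVC machinery is real Kubernetes API.

```go
// Minimal sketch: request a Carina-backed volume with an ordinary PVC.
// Assumptions: Carina is installed and exposes a StorageClass named
// "csi-carina-sc"; the class name and size are illustrative.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sc := "csi-carina-sc" // hypothetical Carina StorageClass name
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "db-data"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// In client-go releases contemporary with this application the
			// field below is ResourceRequirements; newer releases rename it
			// to VolumeResourceRequirements.
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("20Gi"),
				},
			},
		},
	}

	// Carina picks the most suitable node and carves the volume out of
	// that node's local disks when the claim is bound.
	if _, err := clientset.CoreV1().PersistentVolumeClaims("default").
		Create(context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```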
https://github.com/carina-io/carinawww.opencarina.io
https://github.com/carina-io/carina/blob/main/docs/roadmap/roadmap.md
https://github.com/carina-io/carina/blob/main/CODE_OF_CONDUCT.md
I accept the CNCF IP PolicyI Accept
Carina is a standard Kubernetes CSI plugin. Users can use standard Kubernetes storage resources like StorageClass/PVC/PV to request storage media. As more and more stateful applications shift into the cloud native world, those apps, databases for example, typically have already solved data HA themselves and prefer extremely low latency. When starting the Carina project, the key considerations also included:

* Different workloads need different storage systems; Carina will focus on cloud native database scenarios only.
* Completely Kubernetes native and easy to install.
* Use local disks and group them as needed; users can provision different types of disks using different storage classes.
* Scan physical disks and build RAID as required. If a disk fails, just plug in a new one and it's done.
* Node capacity and performance awareness, for smarter pod scheduling.
* Extremely low overhead. Carina sits beside the core data path and provides raw disk performance to applications.
* If a node fails, Carina automatically detaches the local volume from pods so they can be rescheduled.
* Middleware has run on bare metal for decades, and many valuable optimizations and enhancements from that era are definitely not outdated in the cloud native world. Let Carina be a DBA expert of the storage domain for cloud native databases!


In short, Carina strives to provide an extremely low latency, NoOps storage system for cloud native databases and to be the DBA expert of the storage domain in the cloud native era!
OpenEBS: OpenEBS is a popular storage system for Kubernetes, with multiple storage engines for different scenarios.

Longhorn and CubeFS are distributed storage systems which replicate the volume across multiple nodes.

Piraeus is a cloud native storage system using DRBD and LVM as core building blocks. Similar to Longhorn and CubeFS, it also supports multiple replicas for each PV.
https://github.com/carina-io/carina/blob/main/CONTRIBUTING.md

OpenEBS/Longhorn/CubeFS/Piraeus are generic storage systems which place data chunks on multiple nodes. Carina tries to eliminate any latency (including network latency) and provide extremely low latency to applications. Besides that, Carina also aims to bring the traditional best experience of running databases into the cloud native era. In short, Carina is a dedicated storage system for low latency scenarios.
https://docs.google.com/presentation/d/1kmK_HhiTWUeQ53IM_oDBvRQO-WIOn0kUsLXtiVwWxkE/edit#slide=id.p1
zhangzhenhua@beyondcent.com
Nowadays, more and more stateful applications are shifting into the cloud native world. But many databases that need really high business continuity and performance are still deployed on traditional infrastructure, especially in production. Carina aims to bring those workloads into Kubernetes with extremely low latency, battle-tested building blocks, well-known DBA experience, and no maintenance cost. The CNCF is the most influential organization in the cloud native world, and we believe Carina will draw more attention and attract more developers by becoming a CNCF project. We also hope to build an active community and accelerate adoption under the CNCF's governance.
https://user-images.githubusercontent.com/88021699/130732359-4e7686a9-3010-4142-971d-b65498d9c911.jpg