| # | ID | Title | Type | Track | Abstract | Project Tag | Comments/Concerns |
|---|----|-------|------|-------|----------|-------------|-------------------|
| 2 | 22794 | Leveraging Airship as an integration and QA enabler | Collab Discussion | Forum | Airship is an OSF incubator project for taking infrastructure from bare metal to container orchestration to cloud, using a uniform set of declarative YAML documents. Airship can be used to enable integration testing in several ways: Airship easily stands up infrastructure for CI/CD itself - e.g. we use it for Jenkins+OpenStack VM worker environments; Airship easily stands up configurable combinations of software that can be integrated; Airship can be used for OpenStack cross-project validation and testing. This forum will flesh out how Airship can be used to enable testing within OpenStack Infra, for operator-specific testing, and other scenarios. The goal is also to generate any needed requirements or new use cases that would allow Airship to be a boon to the QA space. | Airship | This reads a bit like a presentation -- may want to make sure they are prepared for a collab session. (-Jim Blair) |
| 3 | 22787 | Discussion on the current state of volume encryption | Fishbowl | Forum | With the increased use and adoption of Barbican, volume encryption is becoming a more widely used feature by OpenStack users. There are, however, several limitations with the current implementation that are holding back wider adoption. In this session I'd like to gather feedback on the current implementation, detail any limitations (lack of key rotation, use of dm-crypt, use of asymmetric keys as passphrases, etc.) and agree on a way forward in S/T. | Barbican | |
| 4 | 22796 | Defining bare metal provisioning capabilities within Airship | Collab Discussion | Forum | Airship is an OSF incubator project for taking infrastructure from bare metal to container orchestration to cloud, using a uniform set of declarative YAML documents. Airship in turn has a YAML-driven, pluggable bare metal provisioning API called Drydock. The initial driver for Drydock uses MaaS for provisioning, but there was substantial interest at the PTG in adding an Ironic driver as well. In addition, there was a desire to enhance Drydock to natively support "bring your own provisioned OS" type scenarios. The purpose of this forum is to 1. discuss the capabilities/use cases of Drydock using an Ironic plugin, and talk through any anticipated challenges; 2. gauge interest in additional bare metal provisioning technologies, e.g. Luna; 3. understand the range of operator needs around declarative bare metal provisioning, see whether Airship is a good fit, and invite additional folks to collaborate. | Airship / Bare Metal | |
| 5 | 22807 | Extending Blazar reservations to new resource types | Collab Discussion | Forum | Blazar currently provides reservation of compute resources, i.e. whole hosts (hypervisors) and instances. The Blazar project would like to support reservation of other types of resources that users may require, such as: reservation of SR-IOV resources / NUMA-aware resources (Nova / Neutron); reservation of networking resources such as floating IPs, VLANs used for provider networks, and bandwidth (Neutron); reservation of GPUs, FPGAs, etc. (Cyborg); reservation of storage volumes (Cinder). These projects would need to provide mechanisms to enforce resource reservation that Blazar can leverage. An additional topic of discussion is whether a generic approach can be used for projects that leverage the placement API. | Blazar | |
| 6 | 22793 | Cells v2 updates | Fishbowl | Forum | Work is ongoing to enhance support for multiple cells in nova. This session will cover some of the bigger changes being made in Stein, like handling a down cell and cross-cell resize, and will also allow for operator questions about upgrading to cells v2 and/or supporting multiple cells, as well as performance strategies to be aware of when using multiple cells. | Cells | |
| 7 | 22819 | Cinder Usage Outside OpenStack | Collab Discussion | Forum | At the last OpenStack Summit we held a forum session (https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21734/standalone-cinder-introduction) on standalone Cinder where we discussed the current status and future possibilities (https://etherpad.openstack.org/p/YVR18-Standalone-Cinder-Intro). In this session we'll take a look at Cinder in the bigger picture, presenting how Cinder, with its various deployment options, can currently be used for OpenStack and non-OpenStack deployments to provide volumes for bare metal, VMs, and containers. | Cinder | |
| 8 | 22821 | Creating a Cinder Data Service | Fishbowl | Forum | The idea of splitting out I/O-intensive operations to a separate data service has been brought up on numerous occasions in the past. Depending on the storage backend in use, it is possible that some operations, like migration, can consume notable resources on the node where Cinder Volume is running. Given that that node is often the control node, it can impact overall performance of the control plane when there are numerous I/O-intensive operations underway. For customers that are encountering this situation we are considering creating a separate 'Data Service' that would allow I/O-intensive processes to be passed off to a separate node configured to avoid any performance impacts on the control plane. | Cinder | |
| 9 | 22822 | Cinder User Survey Response Time | Fishbowl | Forum | We are working on putting together a blog or something similar to answer questions around general themes from the User Survey. This would serve as an opportunity for the Cinder development team to share thoughts on the feedback we received and hopefully directly answer follow-up questions and concerns with users and operators. | Cinder | |
| 10 | 22823 | Cinder and its Role at the Edge | Collab Discussion | Forum | At the PTG in Denver we started discussing potential impacts of Edge Computing upon Cinder. We would like this session to be a collaborative discussion between the Cinder development team and operators that need Cinder at edge sites. We are hoping to understand what type of backends might be used at the edge and what level of HA might be required there. | Cinder | |
| 11 | 22750 | Expose SIGs and WGs | Fishbowl | Forum | Moderators: Rico Lin, Tobias Rydberg. There have been some initial discussions on the mailing list (http://lists.openstack.org/pipermail/openstack-sigs/2018-August/000453.html) and in PTG sessions (http://lists.openstack.org/pipermail/openstack-dev/2018-September/134689.html). The basic concept is to give users/ops a single window to turn important scenarios, use cases, or issues into traceable tasks in a single story/place, and to ask developers to take responsibility (by changing governance policy) for collaborating on those tasks. SIGs/WGs are eager to get feedback and use cases, as are project teams (we can't speak for all projects/SIGs/WGs, but we would certainly like to collect more ideas), and project teams would gain a central place to develop against specific user requirements, or to provide documentation for more general OpenStack information. We would like to discuss how we can reach this goal through concrete actions: how can we change TC, UC, project, SIG, and WG policies to build a bridge from users/ops to developers? | Community | |
| 12 | 22817 | OpenStack: Better expose what we produce | Fishbowl | Forum | The OpenStack community produces a complex landscape of services and other deliverables. Presenting those to the rest of the world in a way that is comprehensive, makes sense, and is not overwhelming has been a constant challenge. A number of recent efforts (the OpenStack map, the revamped project navigator driven by YAML files in the openstack-map git repository) improved the situation, but there is still a long way to go. In this session we will: review our current web properties solving a part of the puzzle (releases.openstack.org, openstack.org/software, openstack.org/marketplace/drivers/); discuss potential project navigator improvements and getting PTLs and project teams involved in updating the content; specifically discuss how to represent deployment tools; consider the creation of tags to show upgrade support in deployment tools as an additional information point; specifically discuss how to present client/SDK tools; and discuss the visibility of drivers (mainline and 3rd-party). | Community | |
| 13 | 22820 | Community outreach when culture, time zones, and language differ | Collab Discussion | Forum | As the community has grown to cover the globe, we have attempted to maintain the same tools, mechanisms, and ultimately practices for communication. Some of these things seem to work across some cultural, time zone, and language barriers (with translation). Some tools and practices tend to work better for some parts of the community. But no one solution will cover everyone and everything. This is not a discussion of IRC versus various side-channel communication methods, but a discussion of how we can build bridges and better enable asynchronous communication, specifically with contributors in countries like China. If you built a bridge: What helped the most? What hindered your efforts, and how can we establish that context to help others? How do we best memorialize discussions and ensure they can be followed up a week, month, or even year later? How do we make ourselves available for bridge building? | Community | |
| 14 | 22838 | Making the Contributor Portal More Useful | Collab Discussion | Forum | The Contributor Portal[1] is an incredibly important tool for new community members. It offers a wealth of resources and information about how to communicate and engage with the community, whatever way someone wants to contribute. What can we do to make this page even more useful? Is there anything missing? We welcome any feedback you have on how to make this page better! [1] https://www.openstack.org/community | Community | |
| 15 | 22841 | User Community Leadership Planning Session | Collab Discussion | Forum | Join Ambassadors, User Group Leaders, and some UC members and Foundation staff in a discussion around collaboration among community leadership. If there is a specific topic or idea you’d like to discuss, please add it to this etherpad: https://etherpad.openstack.org/p/User_Community_Leadership_Planning_-_Berlin_2018 | Community | |
| 16 | 22840 | Concurrency limits for service instance creation | Fishbowl | Forum | What is the expected SLA for the total number of concurrent instances that OpenStack can spawn? Do we have testing coverage for this use case? What are the main factors impacting this scalability limit? For those of us with growing clouds, what are the current limitations? | Concurrency | |
| 17 | 22832 | Containerized Applications' Requirements on Kubernetes Cluster at the Edge | Collab Discussion | Forum | More and more OpenStack Clouds are running Kubernetes Clusters on OpenStack, either via OpenStack Magnum or via their own Kubernetes deployment tools. This Forum session discusses requirements for these Kubernetes Clusters based on the deployment use case and the requirements from the Containerized Applications being run on these Kubernetes Clusters. Anticipated topics include requirements for Kubernetes Authentication and Authorization, Multi-Tenancy Support, Local Registry, Local Registry Authentication and Authorization, Helm support, Container Runtimes (e.g. docker, kata, etc.), Persistent Storage for containers, and Networking (Container Networking Interfaces (flannel, calico, weave, etc.), Multiple Interface support for containers, DPDK-Accelerated CNIs, Ingress Load Balancing Solutions), etc. | Containers | |
| 18 | 22789 | Developing a Standard Deployment Tools Comparison | Collab Discussion | Forum | There was a good discussion at the Denver PTG Ops Meetup around clarifying the differences between deployment projects to allow deployers to more easily make informed choices among them. This session will continue that discussion and work on action items from the previous session in Denver (https://etherpad.openstack.org/p/ops-denver-2018-deployment-tools-comparison): get answers from deployment tools teams for https://etherpad.openstack.org/p/ops-denver-2018-deployment-tools-comparison; update https://docs.openstack.org/rocky/deploy/; add the above questions to the 2019 user survey and share on Superuser or somewhere else; take the questions for operators (about deployment tools) to the community. | Deployment Tools | |
| 19 | 22808 | Deployment tools feedback | Fishbowl | Forum | Deployment tools are one of the fundamental things that make OpenStack accessible; however, many of them share the same common challenges. This forum session invites all users of OpenStack to come in and share their stories/experiences with deployment tools and their pains, so that we can take that feedback and make the OpenStack deployment story easier. This session is proposed by the OpenStack-Ansible team, looking to share the feedback with everyone else. However, we'll likely have members of other deployment tools present. | Deployment Tools | |
| 20 | 22777 | Designate - Feedback Session | Fishbowl | Forum | Session for Designate operators / users to give feedback on bugs / features needed in the next two cycles. | Designate / Ops | |
| 21 | 22778 | Designate - Shared Zones | Collab Discussion | Forum | Design and spec the implementation for sharing zones between tenants/projects. | Designate | |
| 22 | 22809 | MVP (Minimum Viable Product) architecture for edge | Fishbowl | Forum | In the Edge Workshop at the Denver PTG we drafted a minimum viable solution for edge cloud infrastructure with the aim of creating reference architectures. In this session we would like to get feedback on the designed architecture and give an update on the progress of the derived requirements. As the MVP architecture is intended to serve as a basis for solutions used for edge use cases, we would like to discuss with users and operators how it fits into their current deployments and planned scenarios. As part of this discussion we would like to revisit the user stories we captured in Denver to make sure they fit the current needs, and see what new user stories we should focus on with the developers currently working on these items. This is a collaborative session between the Edge Computing Group, StarlingX, and representatives from adjacent communities on site. | Edge WG / StarlingX | |
| 23 | 22810 | Edge use cases and requirements | Fishbowl | Forum | This is a collaborative session between the Edge Computing Group and StarlingX to revisit the use cases and requirements that we have collected so far. We would like to prioritise and define focus points to ensure we are addressing current pain points with our design and development activities happening in relevant projects. Beyond this, we would also like to ensure we are capturing new use cases, and help newcomers to these groups to join the activities either at the working group and/or project team level. | Edge WG / StarlingX | |
| 24 | 22761 | Fenix - Rolling Maintenance and Upgrade | Fishbowl | Forum | The Fenix project builds a framework for infrastructure admins to perform rolling upgrades and maintenance. On top of this, there can be interaction with the application manager (VNFM), and there is also a notification to tell at the infrastructure level that a host is down for maintenance. This session is for discussing the current approach and the next steps. All input from the community and ops is very welcome. There is also cross-project work to be done, and as the first real use case would be a rolling OpenStack upgrade, it certainly has a connection point to each project on how it will be achieved. Discussion in Stein PTG sessions: https://etherpad.openstack.org/p/upgrade-sig-ptg-stein https://etherpad.openstack.org/p/blazar-ptg-stein Self-healing SIG: https://etherpad.openstack.org/p/self-healing-sig-stein-ptg https://etherpad.openstack.org/p/AirshipPTG4 Etherpad for the session: https://etherpad.openstack.org/p/fenix-forum-stein | Fenix | |
| 25 | 22753 | Autoscaling Integration, improvement, and feedback | Fishbowl | Forum | To discuss integrating autoscaling within both Heat and Senlin, in terms of current status, actions, improvements, design plan, and long-term goals. Also, to collect feedback from users/ops, if any. Etherpad for collecting information: https://etherpad.openstack.org/p/autoscaling-integration-and-feedback Moderators: Rico Lin <rico.lin@easystack.cn> (Heat PTL), Duc Truong <duc.openstack@gmail.com> (Senlin PTL) | Heat / Senlin | |
| 26 | 22833 | Integrating IOT Device Management with the Edge Cloud | Collab Discussion | Forum | As the OpenStack Cloud moves to the edge of the network, besides managing server-based applications on general purpose servers close to the end devices, cloud admins are looking to also leverage this edge cloud to manage the edge devices themselves (and the applications running on those edge devices). Today this is typically done with Edge Cloud Applications for managing IOT Devices. But there are benefits to integrating parts or all of the IOT Device Management into the Edge Cloud infrastructure services itself. We'll discuss the characteristics of these end IOT Devices; e.g. size, compute resources, IPMI capabilities or not, pre-loaded software or not, etc. And then we'll discuss some initial services that the Edge Cloud infrastructure could provide for these IOT Devices; e.g. bare metal booting, application management, remote persistent volumes, load balancing services, DNS services, alarm collection & reporting, log collection, etc. | IOT | |
| 27 | 22781 | Ironic Operator Feedback | Fishbowl | Forum | OpenStack ironic needs feedback from the community to understand current operator needs. This is vital, as the ironic community is operator focused and without feedback we can't work together to fix issues and help solve common problems. This past cycle brought a number of new features, and we're planning even more for the current cycle. BIOS setting interfaces? Ramdisks? Conductor grouping? Management interface enhancements? Power fault recovery? All of these things are new, and if you're seeking something not on this list, we're happy to discuss it! | Ironic / Ops | |
| 28 | 22790 | Hardware Inventory with Ironic | Collab Discussion | Forum | Oath and CERN have identified a need for a hardware inventory system that integrates with Ironic. We've started the sardonic project within OpenStack and are building out something that meets our needs. We'll briefly show what we've done so far, and hope to get feedback on the direction and features from other Ironic operators. | Ironic | |
| 29 | 22816 | Baremetal at the Edge | Collab Discussion | Forum | The Ironic community has been working to better facilitate the management of baremetal hardware resources in an edge deployment. We would like to discuss the current state and future features that we are planning to help enable remote deployment and management. At the same time, we wish to collect feedback and use case information from operators that are present as to their edge baremetal automation needs. Topics up for discussion include, but are not limited to: conductor grouping, DHCP-less deployments, HTTPClient booting, federation of ironic deployments, and securing deployment infrastructure. | Ironic | |
| 30 | 22782 | SmartNICs, Ironic, and Neutron | Collab Discussion | Forum | One of the upcoming topics of discussion is how to enable network management via network cards that are smart. In short, they are running a fully fledged operating system inside the network card... And somehow we need to program it! The conundrum is that leveraging this technology and enabling the out-of-band management of it requires changes to ironic and neutron. We recently discussed this at the Stein PTG, but we determined that this was an even more complex problem than we had originally anticipated. As such, we hope to have a specification in hand to work through the possible issue points, and ensure that we have a solid plan to help enable this technology. | Ironic / Neutron | |
| 31 | 22834 | What's stopping you from starting a Kata proof-of-concept today? | Collab Discussion | Forum | Have you considered evaluating Kata Containers and run into roadblocks? Come share your experience with Kata experts and other community members. Are you a Cloud Service or Container-as-a-Service provider looking to use Kata in a multi-tenant scenario? Or maybe an enterprise devops team looking to deploy pre-production (untested) and production workloads with ease and efficiency? Or maybe a telco wanting to run VNF workloads in containers but nervous about not having VM isolation? Are you running microservices behind a high-traffic load balancer and have concerns about latency? Are you worried about running database workloads in containers due to data security concerns? Come share your challenge with us. Proposed moderators: Anne Bertucio (OSF), Tao Peng (HyperHQ Inc.), Eric Ernst (Intel) | Kata Containers | |
| 32 | 22835 | Exactly how much is more container isolation worth to you? | Collab Discussion | Forum | There are a number of different ways to mitigate container security concerns. In general, the more isolation you want, the more it's going to cost in terms of incremental footprint, boot times, and latency compared to runC containers. Share your own use case requirements for density, boot time, latency, etc., and whether you've thought about how much you'd be willing to trade those off for increased security. What are the current challenges facing your organization as it pertains to container security? Has GDPR or other data security / privacy regulation raised the bar and created barriers to container use? Proposed moderators: Anne Bertucio (OSF), Graham Whaley (Intel), Xu Wang (HyperHQ Inc.) | Kata Containers | |
| 33 | 22836 | Kata Containers Kubernetes integration: CRI, OCI, and containerd shimv2 API | Collab Discussion | Forum | "All problems in computer science can be solved by another level of indirection." Containerization is such an indirection level that helps to build agile applications. However, looking into the details of a Kubernetes node, CRI and OCI set up two levels of indirection, and there is a shim for each container process by design; for runtimes like Kata Containers, we had to add a fourth level of indirection for compatibility with the runC command line. Fortunately, the new containerd shimv2 API returns some power to runtimes, and we could have a much better trade-off between overhead and maintenance. This forum will discuss the different solutions for Kata-Kubernetes integration. Proposed moderators: Xu Wang and Tao Peng from HyperHQ; Eric Ernst and Sam Ortiz from Intel | Kata Containers | |
| 34 | 22812 | Kayobe user feedback & roadmap | Fishbowl | Forum | An opportunity for the Kayobe developers and users to meet and discuss their experience with the project. Expect to cover: feedback from kayobe users on their experience with the project, including usability, configurability, bugs, and feature requests; discussion of the project's short and medium term roadmap; open discussion. Anyone considering using Kayobe is welcome to attend to learn about the project and meet other users and developers. | Kayobe | |
| 35 | 22791 | Keystone as an Identity Provider Proxy | Fishbowl | Forum | The keystone development team went into the PTG in Denver with a goal to discuss improvements to federated identity, especially since it's the most requested enhancement as of the latest user survey results. In this session, we're going to share an approach for making keystone a fully functional identity provider, including the ability to act as a proxy identity provider. We'd like to have operators present, especially if they have experience with deploying federated identity. Developers would like to see if there are areas for improvement with the current approach, or if we should make adjustments to various work items. This session is going to apply to any deployment using, or planning to use, federated identity (e.g., private, public, hybrid, or edge deployments). | Keystone | |
| 36 | 22792 | Keystone Operator Feedback | Fishbowl | Forum | Keystone developers would like to invite operators, users, and deployers to come and share their feedback directly with the team. We will use this opportunity to listen to users about issues they are having and to collect ideas for what we should work on next. | Keystone / Ops | |
| 37 | 22824 | Kolla user feedback | Collab Discussion | Forum | OpenStack operators meet with kolla developers to share their feedback on the project: for example, user requirements, issues, operational uses, scalability improvements, etc. | Kolla / Ops | |
| 38 | 22827 | Kolla-ansible for the edge cloud | Collab Discussion | Forum | This forum topic will discuss how OpenStack kolla-ansible can be used for edge cloud use cases, what the operator requirements are, and what features are missing in kolla-ansible. | Kolla-ansible | |
| 39 | 22828 | Kolla-ansible for NFV | Collab Discussion | Forum | In this forum topic we will discuss with operators how kolla-ansible can be used for NFV environments, what features and services are missing for a fully operational NFV deployment out of the box with kolla-ansible and containers, and what operational limits and issues are found by users. | Kolla-ansible | |
| 40 | 22830 | Setting the compass for Manila RWX cloud storage | Fishbowl | Forum | Let's talk about where the Manila developers think the project is going, and then calibrate that direction in light of cloud operators' and users' actual needs. Manila provides file system shares as a service to end users in the classical OpenStack multi-tenant model. Equally, it provides a stable programming interface for cloud applications. Since the API is presented over the network, and since file shares are provided over the network without hypervisor intervention, Manila is well positioned to serve applications and platforms beyond just OpenStack compute instances. See https://etherpad.openstack.org/p/manila-berlin for a list of current developer work items. Add your thoughts in the etherpad, and better yet come to this Forum session to let us know your priorities and to help influence the direction of this project going forward. | Manila | |
| 41 | 22747 | NFV/HPC Pain Points | Collab Discussion | Forum | As with previous forums, we're going to examine the current state of the art in NFV land and figure out what's working, what needs to improve, and what's completely missing/broken. Discussions on recent changes in the space, such as vGPU support and NUMA-aware vSwitches, should complement discussions on upcoming changes, like FPGA support (possibly with Cyborg) and bandwidth-aware scheduling. | NFV/HPC | |
| 42 | 22786 | Boot from volume (BFV) improvements | Fishbowl | Forum | For several cycles the Nova team has talked about improving our BFV experience to a point where we could remove duplicate ephemeral storage logic from some of our virt drivers. At the recent S PTG a number of tasks were agreed to for the current cycle, but additional feedback from the community is still lacking. In this session we will recap the current set of tasks for S, gather feedback from ops/users on their experiences with the current BFV flows, and hopefully agree on additional steps to improve the experience in both S and T. | Nova | |
| 43 | 22829 | OpenDev (nee Infra team) feedback and missing features | Fishbowl | Forum | The OpenStack Infrastructure team is embarking on a new journey under the moniker "OpenDev" to better serve the broader set of communities managed by the OSF and F/LOSS projects at large. The original announcement (under placeholder codename "Winterscale") at http://lists.openstack.org/pipermail/openstack-infra/2018-May/005957.html has background on the basic plan and proposed governance. In service of this new charge, some priorities need to be reassessed. Let's try to cover at least the following: we have selected a real name and a domain now; community challenges to expect when renaming and rebranding shared services; services we should be looking at white-labeling for communities; project namespacing changes and related renames; any need for further streamlining Git repository creation; polish for our task/defect tracker specific to this new arrangement; community-managed mirroring to proprietary source browsers; anything big we might be missing. | OpenDev / Infra | |
| 44 | 22726 | The Contributor Guide: Ops Feedback Session | Collab Discussion | Forum | The First Contact SIG is looking to improve the experience of integrating operators into the OpenStack community. We had an initial discussion about this at the Denver PTG (https://etherpad.openstack.org/p/FC_SIG_ptg_stein), but would like to get direct feedback from operators. How is the Contributor Guide working from an ops perspective? In what areas can we make improvements to help the ops community? Are there critical pieces missing? https://docs.openstack.org/contributors/operators/index.html | Ops | |
| 45 | 22784 | Deletion of project and project resources | Fishbowl | Forum | As an operator today you have no unified way to delete all resources of a project. Operators basically all write their own tools to manage this, and the end result is often that we end up with a lot of orphans in our clouds. In a public cloud environment a lot of projects are being created and deleted. We feel that being able to delete all resources of a project is a hygiene factor and an operation that should exist in all OpenStack projects, easily accessible to operators and end users. Some tools exist that target this, like ospurge - does it work? Does it meet the requirements? What is the best solution for this problem? How should we treat shared resources? A lot of outstanding questions exist here. Since all operators spend time creating their own solutions, let's bring that effort into a workable solution that makes OpenStack as a product better. We encourage PTLs, cores, and operators of both public and private clouds to join this session and discussion. | Ops | |
| 46 | 22842 | Getting Operators' bugfixes upstreamed | Collab Discussion | Forum | Part of expanding OpenStack adoption *and* participation is the community effort to better integrate the user communities with the developer communities. A long-standing gap in this integration is getting operator bugfixes submitted for review and eventually merged into the code base. With the use of StoryBoard, the First Contact SIG, the Public Cloud SIG, the User Committee, and the project teams, we should be able to figure out a straightforward process whereby bugs submitted with fixes can be highlighted as "low hanging fruit" and used to help new contributors learn the process. There are certain requirements that might be considered: the bug submitter should be available for questions and consultations; the developer should have access to mentoring by the dev community; the fix may be rejected for technical reasons; ...? Can we make this happen? | Ops | |
47 | 22760 | Bug triage: Why not all the community? | Fishbowl | Forum | Yes, we have bugs. When operators or users see them, they file a bug report. When developers want to help the service, they look at bug reports. The problem with that is while operators know a lot about bugs, not a lot of developers are looking at Launchpad. That's a bit sad because we're all the same community, right? So, how could we make sure that we look at the bugs? For example, imagine some working group helping to triage the bugs. Would that help? What do operators need to help them prioritize the bugs, or at least just triage them by setting tags? | Ops / Community / Triage | ||||||||||||||||||||
48 | 22751 | Orchestration Ops/Users feedback session | Fishbowl | Forum | To collect user/operator feedback and requirements for Orchestration services (generally Heat, but we're more than willing to learn how cross-project orchestration works for you). To also share some key information that users/ops might need to be aware of. Anything good or bad that needs to be noticed or shared - we're there for you. | Orchestration / Ops | ||||||||||||||||||||
49 | 22783 | OpenStack Passport Program - feedback and next step | Fishbowl | Forum | The OpenStack Public Cloud Passport Program has now been around for a year; it is a unified way for users to access free trial accounts from OpenStack public cloud providers around the world. The initial goals have been to promote the OpenStack public cloud market and to visualize the global OpenStack footprint that exists. During the last cycle we tried to come up with ways to extend and improve the program, but we think it is time to gather all interested parties, take a step back and together find out what a good next step is. Come and share your ideas on how we can make the OpenStack Passport Program a success, and how we can make it easier for users to try out OpenStack and have a better first experience with it. We encourage all public clouds to have a representative at this session. | PCWG | ||||||||||||||||||||
50 | 22785 | Change of ownership of resources | Fishbowl | Forum | As a public cloud operator we often get the question of changing the ownership of resources, especially VMs. Changing ownership of a VM is of course possible today, via snapshot, but end users do not seem very happy with that answer, and they would like a more "cloudy way" of doing it. So, seamlessly moving resources would be the preferable option. And that doesn't only apply to VMs; full Heat stacks and their resources are on the table. Also preferable is some kind of invite/approve process. What are the hurdles to implementing such a feature? What would be a good enough solution? During the PTG in Denver we had discussions regarding this, mostly with Nova, and we would like to continue those discussions together with more operators and more teams represented. | PCWG | ||||||||||||||||||||
51 | 22798 | Far From Done: Public Clouds Needs You | Fishbowl | Forum | Since the Summit in Boston we have had this appreciated forum session. We are now at a level where we have a list that has been prioritised on Launchpad [1], together with comments from operators and OpenStack project cores. In this session we would like operators to speak their minds and rank the most important issues on the list, as well as bring potential new issues to it. We are looking for end users, public and private cloud operators, and PTLs or active contributors to be part of this discussion. Related links: [1] https://bugs.launchpad.net/openstack-publiccloud-wg | PCWG / Ops | ||||||||||||||||||||
52 | 22780 | Update on placement extraction from nova | Collab Discussion | Forum | If you haven't heard, the placement service code is being extracted out of the nova repo in Stein. This is going to require manual intervention during the upgrade to copy the placement-related data from the nova_api database (if you're not using the placement database configuration which was new in Rocky) and set up a separate placement config file. Some of that work is being done for upstream CI jobs using devstack and grenade. Once that is working, we need deployment projects like TripleO and OpenStack-Ansible to work on the same upgrade tooling. There are two goals for this session: a) Make operators aware of the coming change if they were not at the PTG to hear about it sooner and give them a chance to ask questions. b) Go over the latest status between the nova/placement/deployment teams to see what still needs work. | Placement | ||||||||||||||||||||
53 | 22806 | Python bindings for the placement API | Collab Discussion | Forum | Born out of the Nova project, the placement API service is used to track resource provider inventories and usages, along with different classes of resources. Many OpenStack projects are now pursuing integration with the placement API, including Cinder, Cyborg, Neutron, and Blazar. Each project is implementing its own client code, since the osc-placement project only provides a CLI without Python bindings. Does it make sense to have all these projects rely on a common Python library to reduce duplicated code? Having a common library could also encourage more projects to consume the placement API. | Placement | ||||||||||||||||||||
54 | 22788 | Users / Operators adoption of QA tools / plugins | Fishbowl | Forum | The QA team's mission is to make test frameworks (Tempest, Patrole etc.) as portable as possible. They should be able to run against any OpenStack cloud. Many tools hosted by the QA program are suitable for consumption in downstream CI systems, and possibly other use cases as well. Many QA tools have a plugin mechanism, and it is good to know whether they fit user requirements/expectations or not. Since QA is not included in the user survey, we'd like to use the Forum as an opportunity to reach out to people in the user and operator community who use OpenStack QA tools and learn: how they are being used; what we could do better; what other QA tools or types of tests exist downstream that the whole community could benefit from. | QA / Ops | ||||||||||||||||||||
55 | 22837 | Using Rally/Tempest for change validation (OPS session) | Fishbowl | Forum | Making changes to your OpenStack environment can lead to unpredictable problems. How can you be sure that everything works after a change? Rally is an OpenStack testing tool, and Tempest is a set of integration tests for OpenStack. Using these will get you good test coverage, but they are not trivial to set up. I think this topic needs some shared commiseration. | Rally / Tempest | ||||||||||||||||||||
56 | 22804 | You don't know nothing about Public Cloud SDKs, yet | Fishbowl | Forum | SDKs are very important for OpenStack public clouds to be able to support customers' various PaaS solutions; however, today there is no way to definitively show that a certain SDK is verified to work properly with a certain PaaS platform. This session is proposed to discuss the possibility of a new SDK certification mechanism in the OpenStack community via OpenLab's best practices. In the session we will show the related Project Navigator update, as well as the first milestone: the go-sdk stage 1 report from OpenLab (starting from compute). We also welcome contributors to join the effort to jointly make a great SDK ecosystem for OpenStack public clouds. | SDKs / Public Clouds | ||||||||||||||||||||
57 | 22752 | SIG-K8s Working Session | Fishbowl | Forum | In this forum session, we will have open discussion of the current efforts of SIG-K8s. Topics will include but are not limited to: cloud-provider-openstack features; state of Cinder and Manila CSI drivers; sig-cluster-lifecycle implementation and status; sig-cloud-provider integration; documentation and sig-docs collaborations. Planning etherpad: https://etherpad.openstack.org/p/sig-k8s-berlin | SIG-K8s | ||||||||||||||||||||
58 | 22831 | "Ask Me Anything" about StarlingX | Collab Discussion | Forum | Many OpenStack users and contributors are seeking to better understand new larger-scope projects such as StarlingX: i.e. understand the context, use cases and requirements that the StarlingX project addresses; understand the key features and capabilities provided by StarlingX; understand the operational model for using StarlingX. Without this information it is difficult for users to understand how they can leverage StarlingX in their particular use case. This Forum session will provide a very brief 5-10 minute overview of StarlingX, and then open the floor for a Q&A discussion between StarlingX core team members and potential users of and/or contributors to StarlingX. | StarlingX | ||||||||||||||||||||
59 | 22839 | StoryBoard Migration: The Remaining Blockers | Fishbowl | Forum | At this point, about a third of the OpenStack project teams are using StoryBoard for their task tracking and management of work, and the rest are at least tangentially using it via other teams (e.g. tracking of release goals). This session will focus on action plans for the remaining blocking items for the migration of projects, and any other feedback users and potential users have for us. | Storyboard | ||||||||||||||||||||
60 | 22814 | T series community goal discussion | Fishbowl | Forum | During this session we will discuss proposals for community-wide goals for all project teams to work on during the T development cycle. We will need input from deployers, users, and contributors to make good decisions about the goals we choose. See https://governance.openstack.org/tc/goals/index.html and https://etherpad.openstack.org/p/community-goals for details. | T Release | ||||||||||||||||||||
61 | 22818 | "Vision for OpenStack clouds" discussion | Fishbowl | Forum | Join the OpenStack Technical Committee to discuss the proposed Vision for OpenStack Clouds that will help guide future OpenStack development. The current draft of the vision is available here: https://review.openstack.org/592205 We will be discussing any open issues and seeking feedback from everyone in the OpenStack community. | TC | ||||||||||||||||||||
62 | 22825 | Technical Committee Vision Retrospective | Collab Discussion | Forum | In early 2017, the Technical Committee came together to compose a vision of what 2019 might look like for the OpenStack community. It consisted of four central themes written as if a person were looking back over the prior two years. This had two purposes: first, to help people have a common vision of the future against which individual direction could be gauged; second, so that the community uses a common set of words when discussing, planning, or acting to implement something in line with the overall vision of the future. It has been a year and a half since that vision was written, and the Summit it speaks of is figuratively around the corner in April 2019. As such, it seems appropriate to have a retrospective prior to the next vision being created. Of course, visions are supposed to be scary! But fear not, this will be easy! - What went well? - What needs improvement? - What should the next steps be? | TC | ||||||||||||||||||||
63 | 22802 | TripleO Edge undercloud architecture early input and feedback | Collab Discussion | Forum | The intent of this topic is to discuss the different approaches for deploying the undercloud in edge deployments and to try to arrive at a consensus on the direction we want to take, e.g. shared-nothing vs. distributed conductors vs. some kind of federation. | TripleO | ||||||||||||||||||||
64 | 22813 | Getting OpenStack users involved in the project | Fishbowl | Forum | There are a lot of OpenStack users out there, and only a small share of them are getting involved upstream. It is a traditional issue for open source projects, but on our side, are we really doing all we can to attract those contributions and make candidates successful? A lot of OpenStack processes were built around full-time contributors coming from a development background. What can and should we change? In this session we'll discuss: culture changes to better support part-time contributors - how to make them more successful, provide value and grow in the community despite limited involvement; how to create a more welcoming environment; setting up a more progressive involvement ladder ("join the mailing-list" might be too big of a first step); the success or failure of the SIGs at attracting users into contributing around communities of practice and shared interest. | Upstream Contributors | ||||||||||||||||||||
65 | 22815 | Cross-technical leadership session (OpenStack, Kata, StarlingX, Airship, Zuul) | Fishbowl | Forum | The OpenStack Foundation now supports pilot projects beyond OpenStack, and those projects each have their own technical leadership structure. While those are distinct and independent bodies, it is still very important that those projects work well together and paint a clear overarching Open Infrastructure story. This session will allow this dialog around synergies and complementarity between projects to happen in Berlin. It will be the occasion to compare the initial governance structures for the pilot projects and cover pain points. Members of the OpenStack, Kata Containers, StarlingX, Airship, and Zuul communities are all welcome! | xCommunity | ||||||||||||||||||||
66 | 22797 | Cross-project forum: securing containerized infrastructure | Collab Discussion | Forum | This forum will be a cross-project discussion focused on the security of containerized infrastructure. There are a number of different containerized infra projects in the OpenStack domain that focus on different use cases, and there are opportunities to share experience and best practices to benefit the full ecosystem: Airship, Kata Containers, Kolla, OpenStack-Ansible, OpenStack-Helm, LOCI, Magnum, ... Let's bring our learnings and our challenges, and enable high security across OpenStack. | xProject / Containers | ||||||||||||||||||||
67 | 22805 | Cross-project Open API 3.0 support | Fishbowl | Forum | As we know, Open API 3.0 was released in July 2017, about a year ago. Open API 3.0 supports new features such as anyOf, oneOf and allOf that Open API 2.0 (Swagger 2.0) lacks. Currently, OpenStack projects do not support Open API. Adopting Open API 3.0 can bring lots of benefits to the OpenStack community without impacting the features the community currently has. For example, we can automatically generate API documents, clients (SDKs) in different languages, possibly for different microversions, and cloud tool adapters for OpenStack, like Ansible modules, Terraform providers and so on. We can also build an API UI to provide online, visual API search and API calling for every OpenStack API. Third-party developers can also do some self-defined development. | xProject / Open API 3.0 | ||||||||||||||||||||
68 | 22795 | Reusable Zuul Job Configurations | Fishbowl | Forum | We've had almost a year with Zuul v3. Many of the new features have delivered as promised: multinode testing is now a first-class feature, job configuration changes can be tested before they merge, jobs can use secrets to consume external services, and so on. One of the goals of the v3 rewrite was to make the jobs themselves reusable, so that you and I and everyone else could use the same job to run Python unit tests with tox. Unfortunately, I'm not sure we have a good grasp of how well this feature has turned out. Let's sync up at the Forum and discuss which aspects of job reusability have worked and which we need to improve. We'll use this session to gather feedback from Zuul users outside of the OpenStack Infra team, and use that information to brainstorm solutions to the problems faced with reusing job configurations. | Zuul | ||||||||||||||||||||
69 | 22811 | A marketplace for sharing Zuul jobs and roles | Collab Discussion | Forum | Successful software communities usually have means to share their source code and artifacts with each other. For example, the Python community has pypi.org for finding and sharing software between Python developers worldwide. For Zuul, we lacked a way to do the same for Zuul jobs and Ansible roles. We wanted to enable our Zuul users to find and share those "building blocks" easily inside BMW, so we created "Zubbi", the "Zuul Building Blocks index". Zubbi is designed with a common core and the possibility to extend it for a given context. In our company, for example, we not only want to show the jobs and roles with their descriptions, but also let users see which other projects are already using them and which are not. We believe Zubbi could be of interest to Zuul upstream and to other Zuul users for indexing their jobs and roles. We would be happy to provide the "Zubbi core" to the community for this reason and discuss further development and ideas. | Zuul | ||||||||||||||||||||
70 | 22826 | Discussion of the Current State of Volume Encryption | Fishbowl | Forum | Fishbowl discussion with users around the current implementation of volume encryption, possible improvements and plans in S/T. Ops/user feedback on the current implementation, limitations etc. Feedback on a possible new feature around key/secret rotation (allowing for oschown tenant-to-tenant use cases). Removal of dm-crypt support: are we ready to deprecate in S and remove in T? Migration from asymmetric keys in Barbican to passphrases: do ops/users need the ability to manually unlock volumes? Do *any* volume backends intend to support backend volume encryption? If not, should 'control_location' be removed and/or default to 'front' (AKA n-cpu) vs 'backend' (AKA c-vol)? Removal of the encryption type classpaths from os-brick in S. Multiattach RO support? Missing CI/Tempest coverage (LM etc.). Missing documentation. | Volume Encryption | There were 2 submittals of the same topic which I flagged in the track chair portal. Looks like you caught them here | |||||||||||||||||||