# | Feature Name | Feature Description | Date added | Author / Contact person | Affected OpenStack Components | Comments | Added to Launchpad / link | Reference specs | Interested Parties | Status | Who's working on this? | BOS Forum Priority | PTL Input | PublicCloud WG Pike Priority |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2 | Self-service sign-up | It should be possible for users to sign up to an OpenStack cloud directly through Horizon. This should have some kind of middleware support for payment providers etc. | (kfox1111) I know keystone is trying to get out of the authentication business. Maybe something like Dex https://github.com/coreos/dex might help? I know it used to have a self-service signup function, though I'm not sure if it made the 1.x->2.x transition. (adriant) Catalyst started an open-source project for pluggable business-logic APIs and workflows. One of the features we use this for is signup: https://github.com/openstack/adjutant | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771580 | DataCentred, INAP, SysEleven, Catalyst Cloud | N | Y | ||||||||||||||||||||||||
3 | Domain-level quota management | Quotas should be assignable at the domain level such that all projects in a domain are allocated their quota from the domain level. This allows public cloud providers to allocate quota at the customer level; the customer is then responsible for redistributing those allowances across their projects. Multi-region support as well, if that exists. | Rocky PTG Verified | Howard Huang | Keystone, Nova, and every project that implements quotas | (Henry Nash): In fact, no change is needed in Keystone. Domains are actually represented in keystone as projects with a special attribute (is_domain=True). Hence all that is needed is for services that have quotas to treat the "root" of a project hierarchy (which is a project representing the domain) as being able to hold a quota. Q (tobberydberg): So subprojects of the "domain" project cannot allocate more resources in total than the quota of the "domain" project? Well, that would depend on the quota model chosen (i.e. do you support over-commit or not). There is a cross-project spec in the works which would make keystone the common store of quota limits, but the actual algorithms and calculations will remain in the individual services. (Tim Bell) This is very close to the nested quota model which the scientific working group has been encouraging (https://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html). There is clearly further work to be done after the quotas (such as letting customers upload an image which is available to projects lower in the tree), but quotas are the first step. | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771581 | https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/unified-limits.html https://etherpad.openstack.org/p/BOS-forum-quotas | Catalyst Cloud, OVH (4/10) | Y | Y | ||||||||||||||||||||
4 | Deletion of entire project | Deletion of projects/users - the possibility to block the request if resources exist, AND the possibility to escalate the request to delete all underlying resources | Rocky PTG Verified | Horizon, Keystone | (Tomas Vondra): Is ospurge not enough? (Nick Jones): Last time I checked, ospurge required direct DB access. An API would be preferable IMO. JG: Setting the quota to zero might help (samueldmq): What if each service implemented an admin API call to delete resources for a given project ID? (Nick Salvisberg): @samueldmq: like neutron purge? (samueldmq): @Nick if that is a REST API call, yes, like that. (adriant): We (Catalyst) are working on a user-triggerable (or more destructive admin-triggerable) workflow in Adjutant (https://github.com/openstack/adjutant) that will delete a project and all its resources. (see the referenced gerrit review) (tobberydberg) I like the idea of an API call to something like neutron purge. If that can be fired from a single keystone API call to all underlying components, even better. (mnaser): https://github.com/openstack/ospurge | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771582 | Adjutant project termination: https://review.openstack.org/#/c/477707/ | Catalyst Cloud, OVH (7/10) | N | Y | |||||||||||||||||||||
5 | Extended domain support in Keystone | Better support for domains in general, e.g. domain admin support - a domain admin has admin capabilities inside the domain. Token auth against multiple projects under the same domain would provide more flexibility. | Rocky PTG Verified | Mohammed Naser | Keystone (and All) | Domain admin being admin to everything within the domain: (Henry Nash): this is of course possible today (at least for keystone resources) by assigning an inherited role on the domain (samueldmq): It is not possible to auth against multiple projects at the same time. One project per token. | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771584 | Related nova spec: https://review.openstack.org/#/c/433037/ | OVH(3/10) | N | N/A | ||||||||||||||||||||
6 | Multiple storage backends support | Possibility to use different Ceph clusters for ephemeral storage within the same aggregate, for example | Rocky PTG Verified | Maciej Jozefczyk (OVH) | Multiple Ceph cluster support, but could be different storage types as well (akijak) in our case, support for multiple Ceph clusters means that one compute host could use different Ceph clusters | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1752102 | INAP, OVH(3/10), CityCloud | N | N/A | ||||||||||||||||||||||
7 | Support tool | A service/API that allows customer advocates/support teams to work on services without direct infra (admin) access, giving a restricted view of the infrastructure | Rocky PTG Verified | Mohammed Naser (VEXXHOST) | we implemented a hackish way to have some of that (INAP); the same is the case for OVH (mrhillsman) - could this be summed up/thought of as an impersonation API? I want to impersonate a user, with credential expiration and an audit trail (akijak) - we use read-only access to give our support a way to see our customers' list of VMs, list of actions, etc. (pasuder) - keystone domain and sudo stuff | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771585 | OVH(4/10) | N | N/A | ||||||||||||||||||||||
8 | Multi-tenant message bus for Trove | Information about deploying multi-tenant Trove (RabbitMQ, encrypted messages for now) | Rocky PTG Verified | Maciej Jozefczyk (OVH) | Trove | (mjozefcz): Catalyst Cloud is working on a solution with Zaqar (flwang) - HTTP calls. A similar use case is in Octavia (using HTTP calls to the Amphora instance). A dedicated RabbitMQ secured with Rabbit ACLs is painful - it's really slow. We need to exchange our needs and knowledge with the Octavia team. (pasuder): Mohamed mentioned that the Octavia approach might work here | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1752103 | INAP, OVH (9/10), Homeatcloud, Catalyst Cloud, CityCloud | N | N/A | |||||||||||||||||||||
9 | Function as a Service | Support an AWS Lambda-like service | To be verified | (adriant): A side project by one of the devs at Catalyst to solve this that may prove interesting: https://github.com/openstack/qinling (see the video from Lingxian's talk this morning) (kfox1111) k8s/CNCF has a lot going on in this space; maybe an OpenStack-compatible API could be added on top of something? | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1752103 | Catalyst Cloud, OVH (7/10), CityCloud | N | ||||||||||||||||||||||||
10 | MFA | Multi-factor authentication support | Rocky PTG Verified | Keystone, Horizon | (robcresswell) Horizon is happy to support this but doesn't currently. Seems like there is a push to get the remaining API support in Keystone finished (adriant) keystone does support MFA, but in a way that is very difficult to use. This is going to be worked on, and I'll be posting some alternatives and workarounds we can support before Keystone can support the missing features. | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771586 | Adjutant management of TOTP credentials: https://review.openstack.org/#/c/477680/ (WIP) | Catalyst Cloud, OVH (4/10), Homeatcloud | N | ||||||||||||||||||||||
11 | Spot/preemptible instances | Support a feature similar to preemptible (spot) instances in AWS. The development goal is to have Nova expose an API so that other systems can build on top of it | Rocky PTG Verified | Theodoros Tsioutsias CERN | Nova, Blazar | There was also a BOS forum session discussing this, although hosted by the Scientific WG: https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling (ttsiouts) http://openstack-in-production.blogspot.ie/2018/02/maximizing-resource-utilization-with.html | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771587 | old nova spec at https://review.openstack.org/#/c/104883/; there is a new spec John Garbutt is working on at https://review.openstack.org/#/c/438640/. (mriedem): Queens PTG notes: https://etherpad.openstack.org/p/nova-ptg-queens (L273) | General community wide attention, OVH(3/10), CERN | Y | CERN | N | Y | ||||||||||||||||||
12 | Dedicated Host | Providing a dedicated host to the user for better performance. Domain support. | Rocky PTG Verified | Tobias City Network | Nova, Blazar | There were discussions around this feature, and one approved blueprint, but the code never got merged. It is also unclear whether using host aggregates and the AggregateMultiTenancyIsolation scheduler filter could solve it (mriedem): One issue is that the tenant string for that aggregate metadata is restricted to 255 characters (so you can't put many tenants in the aggregate); could we use a parent project ID for nested projects to solve this? Although that likely depends on nova understanding hierarchical quota/domain stuff from Keystone (unified limits)? | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771523 | old approved nova spec at https://blueprints.launchpad.net/nova/+spec/whole-host-allocation; several attempts can be found at https://blueprints.launchpad.net/nova?searchtext=dedicated+host Blazar provides a 'host reservation' feature which may satisfy this requirement: https://docs.openstack.org/blazar/latest/ | General community wide attention | Y | N | Y | |||||||||||||||||||
13 | Update existing VM keypair | Update keys rather than deleting and re-creating with the same name, which causes a temporary outage. | Rocky PTG Verified | Jean Daniel (OVH) (mentioned by Paweł Suder (OVH)) | Nova | There are some issues with updating existing keypairs, such as leading a user to think that the updated keypair will be injected into the existing guest VM, which it won't be. We also don't have a PUT on flavors for similar reasons. (mriedem): 2017-09-28: see topics discussed for Queens: http://lists.openstack.org/pipermail/openstack-dev/2017-September/122566.html - need end-user/operator feedback on direction here (mriedem): 2017-11-08: Related ML thread: http://lists.openstack.org/pipermail/openstack-dev/2017-October/123071.html | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771589 | Related: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/rebuild-keypair-reset.html | OpenStack-infra team, OVH (5/10) | N/A | N | Y | |||||||||||||||||||
14 | Neutron port consistency on compute host reboot | Bug in Neutron which misleads other OpenStack services. Example: Nova events in the VM boot process. | Rocky PTG Verified | Maciej Jozefczyk (OVH) | Neutron | When a compute host reboots, ports in the Neutron DB and the Nova network info cache stay in state 'ACTIVE', even though the neutron agent hasn't reconfigured all ports yet. This Neutron DB information is the source of data for Nova when handling the VM boot process. When the port state is 'ACTIVE', Nova assumes there is no need to wait for port configuration, so the VM can boot without a properly configured port. (mriedem): The _heal_instance_info_cache task will rebuild the instance network info_cache: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525 - it eventually calls this code to rebuild the info cache based on the latest neutron info for the VM: https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356 As for the hard-reboot VM issue, I wonder if https://review.openstack.org/#/c/400384/ helps? (mjozefcz): I think the problem still exists - when the 'cron' action is executed it uses the local cache while reading ports to build the info cache. Please look at this: _heal_instance_info_cache -> _build_network_info_model (port_ids=None, networks=None) -> _gather_port_ids_and_networks() https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1392 I'll create a bug describing the whole problem with a how-to-reproduce on devstack. (mjozefcz): This bug still exists, description: https://bugs.launchpad.net/nova/+bug/1751923 | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1752104 | I just created a bug describing it: https://bugs.launchpad.net/nova/+bug/1751923 | OVH (8/10) | OVH | |||||||||||||||||||||
15 | Nova VM state inconsistency on compute host reboot | Bug in Nova on compute host reboot | Rocky PTG Verified | Maciej Jozefczyk (OVH) | Nova | Nova has options for how to deal with VMs on host reboot: restore the previous state (which uses the hard-reboot procedure) or shut them off. Neither option is ideal. Shut-off VMs mean that somebody has to bring back the previous state (customers have their services down). Hard-rebooting VMs means there is a window where the VM state in the DB is 'ACTIVE' but, while nova-compute tries to bring the VMs back, the real state of the VM on the host is inconsistent with the DB. The Nova hard-reboot process also expects that the network port is already configured and doesn't go through 'pause VM' -> 'wait for port build success from Neutron' -> 'start VM'. So the VM starts with no network, DHCP doesn't get an IP, and the VM boots without network configured. (mjozefcz): A possible solution (the referenced bugfix) needs to be verified in our case (OVH) | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1752106 | Does this fix part of the problem? https://review.openstack.org/#/c/400384/ | OVH (8/10) | OVH | |||||||||||||||||||||
16 | Scheduler issue and Nova resource claims | Bug in Nova resource claiming | Rocky PTG Verified | Maciej Jozefczyk (OVH) | Nova scheduler | TODO: https://review.openstack.org/#/c/520024/ needs to be updated (remove the WIP flag) | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771590 | https://review.openstack.org/#/c/520024/ https://bugs.launchpad.net/nova/+bug/1729621 | OVH (8/10) | OVH | |||||||||||||||||||||
17 | Nova resize filter | A filter/map that limits or allows resize between specific flavors. It may happen that flavors are specific to some hardware and cannot be resized to others | Needs to be checked, confirmed, clarified, updated. | Paweł Suder (OVH) | Nova | Resize is cold migration under the hood, which means you can limit things using host aggregates, so why not tie the flavors to host aggregates and let that existing framework handle this for you? (mriedem): Unclear on the problem / use case... (pasuder): Confirm, clarify; do not use force during manual migration (resize) | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771591 | OVH (5/10) | |||||||||||||||||||||||
18 | Extended Nova flavor quota | An operator may want to allow a customer to use a specific flavor but not the other flavors, regardless of resources. Example: I want the customer to be able to run a VM with specific hardware and 4 VCPUs, but not a standard instance with 2 VCPUs | Needs to be verified. | Paweł Suder (OVH) | Nova | Maybe using private flavors and sharing them with specific tenants when necessary is one kind of solution. Alternative possible solution: create a role in keystone and a policy rule in nova where public flavors are checked against policy, so you could modify the policy rule to say "all users except those with role X can use public flavors"? | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771592 | OVH (8/10) | |||||||||||||||||||||||
19 | Service resources | Ability to create a service VM with a tenant's credentials but keep it isolated from the tenant. Ability to provision storage in a similar manner. | Rocky PTG Verified | Multiple | (mjozefcz): Similar use case in Trove and Octavia. We want to 'hide' a VM in another tenant and charge for it in a different way. (mnaser): This was discussed on the ML a while back - http://lists.openstack.org/pipermail/openstack-dev/2014-April/031952.html | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771593 | John Garbutt nova spec: https://review.openstack.org/#/c/438134/ | OVH (8/10) | |||||||||||||||||||||||
20 | GraphQL API | Provide a GraphQL HTTP API endpoint for "core" projects. | Rocky PTG Verified | Multiple | http://openstack.10931.n7.nabble.com/Re-publiccloud-wg-Public-Cloud-Feature-List-Hackathon-Day-2-td145256.html Having such a feature would allow useful capabilities such as introspection of the API (an easier way to document a project's API/capabilities) and would give maintainers a way to accelerate development by leveraging the versionless nature of GraphQL. It would also add the ability to request highly specific information quickly without having to make multiple network calls. (mriedem): Sean Dague is familiar with GraphQL, how OpenStack does APIs, and the issues you'd have doing something like this in OpenStack, but Sean isn't really involved in OpenStack anymore. (mnaser): https://github.com/openstack/searchlight could be a base for that? | https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1771594 | General community | ||||||||||||||||||||||||
24 | Fuzzy, not well-specified items | Items below are fuzzy / need more specification |||||||||||||||||||||||||||||
26 | Orphans | Orphans - each component should have a default for how to manage these, and it should preferably be configurable how they are treated. | All | there is an option to delete all orphans. Could mistral listen for user/project delete notifications and then run a workflow to clean up resources in the other projects using some kind of service user token (so it's not an admin)? | Catalyst Cloud, OVH | N | N/A | ||||||||||||||||||||||||
27 | Ceilometer performance | Ceilometer (or its replacement) must have the performance to handle data from thousands of tenants, with data stored for at least a couple of months | To be checked and updated | Paweł Suder (OVH) | Ceilometer | (pasuder): Issue with sending data from ceilometer to Gnocchi. Need to confirm where exactly it gets stuck | Catalyst Cloud, OVH (5/10) | N | N/A | ||||||||||||||||||||||
28 | IO and network quota | Quota support for IO and network usage: both quotas for total usage over time and for maximum throughput. Domain level, project level, and per-VM level. | Keystone, Neutron, Cinder | (mriedem): How is this different from the bandwidth I/O flavor extra specs? https://docs.openstack.org/admin-guide/compute-flavors.html#extra-specs (mnaser): Per-GB quotas - https://review.openstack.org/#/q/project:openstack/cinder+topic:bp/capacity-based-qos | N | N/A | |||||||||||||||||||||||||
29 | Tenant-separated logging | Complete logging of all API requests (from all modules) separated by tenant, publicly queryable via API. | Rocky PTG verified | Tobias City Network | All | (mriedem): The project_id is already logged from the request context; what else is needed? (tobberydberg): "Publicly queryable via API" is the key here, I believe. Needs better specification (mriedem): Hit the ElasticSearch API? mnaser suggested the os-instance-actions API in nova, which allows you to get the list of actions on a server (need to use microversion >= 2.22 to look up actions on deleted servers). | CityCloud, Platform9 | N | N/A | ||||||||||||||||||||||
33 | No longer valid, outdated, or already fixed items below | ||||||||||||||||||||||||||||||
35 | Tenant provider networks | Full support for tenant-specific provider networks. | Neutron | What does "full support" mean? Doesn't RBAC for networks achieve it? | Homeatcloud | N | N/A | ||||||||||||||||||||||||
36 | Volume multi-attach | Allow a cinder volume to be attached to multiple VMs. | Cinder, Nova | (mriedem) 2017-11-08: this is in-plan for nova in Queens. This is now in Queens. Demo: https://www.youtube.com/watch?v=hZg6wqxdEHk | https://review.openstack.org/#/c/373203/ https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis (mriedem): Note that ^ does not get you multi-attach yet; it's foundation work. https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/cinder-volume-multi-attach.html | Y | John Griffith, Ildiko Vancsa, Steve Noyes, Matt Riedemann, John Garbutt | Y | |||||||||||||||||||||||
37 | Neutron VPNaaS integration | Integrate VPNaaS into Neutron core. A very important feature when onboarding customers from legacy systems | Neutron | (Tomas Vondra): I would not necessarily say "Neutron core", but the project should not be left to die. (mattjarvis) I think this now has a maintainer again -- There is a session happening tomorrow regarding VPNaaS, please be there! (tobberydberg) - Maintained again - needs more specific reports on missing features, bugs, etc. | Wed session https://www.openstack.org/summit/boston-2017/summit-schedule/events/18771/the-future-of-vpn-as-a-service-vpnaas | Catalyst Cloud | N | Y | |||||||||||||||||||||||
38 | Keyboard char mapping for the noVNC client when creating a VM | Globalization support | Solved (by updating noVNC to a newer version) | Nova | This came up at the Queens PTG: https://etherpad.openstack.org/p/nova-ptg-queens (Tomas Vondra) L605 (mriedem): Fixed in noVNC version 1.0.0: https://github.com/novnc/noVNC/commit/99feba6ba8fee5b3a2b2dc99dc25e9179c560d31 Nova docs update: https://review.openstack.org/#/c/547985/ | Huawei, INAP, OTC, OVH (5/10) | N/A | N | Y | ||||||||||||||||||||||
39 | Nested projects | Support for multi-level (nested) projects inside a domain. | Keystone | (samueldmq): This is currently supported. This only has Keystone as an affected project. (rockyg) I think with more analysis you will find more. Nested quotas are a subset of and prerequisite for this (ln3); project deletion must also consider nesting (ln4) (adriant): While Keystone does support HMT, it lacks a lot of usability and sensible scoping for how this is managed. There are plans in Keystone to support this better, but there are also issues with doing so in a single domain, since names must be unique. The best solution in the future is to give each customer a domain, since then the name problem goes away. An alternative that works for a single domain is yet another plugin/feature we are working on in Adjutant, which handles management of projects in keystone and enforces certain constraints to avoid the issues that come with allowing full access to keystone's support for this. (tobberydberg): Moved to the "Fuzzy" section - a little bit related to the domain-level quota item - and we should consider recreating this topic as a description of what is missing. | Adjutant HMT support: https://review.openstack.org/#/c/515629/ https://review.openstack.org/#/c/516836/ | Catalyst Cloud, OVH(3/10) | N | N/A | |||||||||||||||||||||||
40 | Common freemium sign-up | A common freemium sign-up process for new customers. | suggest to move below | Addressed by the OpenStack Passport (tobberydberg): Might be out of scope for this document but, as commented above, addressed by the Passport Program and not at all out of interest | https://review.openstack.org/#/c/511252/1/doc/source/passport_program.rst | INAP, OVH (4/10) | N | N/A | |||||||||||||||||||||||
41 | Resource migration | A standardized tool to migrate everything from one region to another, or from one deployment to another | suggest to move below | At first glance, this is a huge problem space that would need to be broken down. This does not seem to be ready for deep-dive discussion at this point. (akijak) blake: what was your approach to this problem? (blake) akijak: Lift & shift. Something similar to https://github.com/eglute/copystack, but we were also attempting to preserve the UUIDs of resources | INAP, OVH (8/10), CityCloud | N | N/A | ||||||||||||||||||||||||
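Several rows above (items 3, 12, and 39) hinge on how limits behave across a project hierarchy. As a rough illustration of the strict (no over-commit) model debated under item 3 - child project limits may never sum past the domain's limit - here is a minimal sketch; the class and method names are invented for illustration and are not part of any OpenStack API:

```python
class DomainQuota:
    """Strict (no over-commit) two-level quota: domain -> projects.

    Hypothetical sketch: in the unified-limits model, keystone would
    store the limits and each service would run a check like this.
    """

    def __init__(self, domain_limit):
        self.domain_limit = domain_limit
        self.project_limits = {}  # project_id -> allocated limit

    def set_project_limit(self, project_id, limit):
        # The sum of all project limits must stay within the domain limit.
        others = sum(v for k, v in self.project_limits.items()
                     if k != project_id)
        if others + limit > self.domain_limit:
            raise ValueError("domain quota exceeded")
        self.project_limits[project_id] = limit
```

An over-commit model would simply drop the sum check and instead enforce limits against actual usage at consumption time, which is the trade-off the unified-limits spec leaves to each service.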
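Item 10 (MFA) mentions TOTP credentials. For readers unfamiliar with the mechanism, here is a self-contained RFC 6238 sketch using the common defaults (HMAC-SHA1, 30-second step, 6 digits); this is purely illustrative and is not Keystone's or Adjutant's implementation:

```python
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of time steps since the epoch."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A verifier typically also accepts the codes for the adjacent time steps to tolerate clock drift, which is a policy choice outside this sketch.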
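Item 4's two behaviours (refuse deletion while resources remain vs. escalate and cascade-delete everything) can be sketched as a thin orchestration layer over per-service purge calls, in the spirit of ospurge or Adjutant's project termination. The service-client interface below is hypothetical, not an existing OpenStack API:

```python
def purge_project(project_id, services, cascade=False):
    """Delete a project's resources across services.

    `services` is a list of hypothetical clients, each exposing
    list_resources(project_id) -> list and purge(project_id).
    Without cascade=True, refuse while any resources still exist
    (the "block the request" behaviour from item 4).
    """
    leftovers = {s.name: len(s.list_resources(project_id)) for s in services}
    leftovers = {name: n for name, n in leftovers.items() if n}
    if leftovers and not cascade:
        raise RuntimeError("project %s still has resources: %s"
                           % (project_id, leftovers))
    for s in services:
        s.purge(project_id)  # escalate: delete all underlying resources
    return leftovers
```

A real implementation would need ordering between services (e.g. detach volumes before deleting servers) and per-service error handling, which this sketch omits.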