Platform CF - Day 1

Sunday, September 08, 2013 3:01 PM

Registration & Lunch •

Marz Garcia - Inside Sales Manager, pistoncloud.com ▪

Focus □

Mentioned they aren't focused on large enterprises □ Looking for companies larger than a startup, but smaller than a large corporation ▪ Provides DevOps capabilities supporting OpenStack •

Welcome - James Watters ○ Why Cloud Foundry? ○ Why Right Now? ○

A lot of wealthy enterprises that have controlled economic systems for a long time, but that is changing due to software

○ Legacy way of development - the organizational maze ○

Small teams are developing daily/weekly ▪ Referenced Paul Graham ○

VM on-demand is not the way of powering the agile team ▪ Needs OS speciality - nothing to do with writing code ○ PaaS - Application centric layer of Cloud stack ○ Cloud Foundry built for application developer agile teams ○ Scalable software is beginning to become the norm ○

New OSS Interfaces for Thousands of Servers ▪ Hadoop is an example of this ○

Stats ▪

Seconds to Create An Application Container - .02 sec ▪ Seconds till a new app is available - .02 sec ▪ Downtime to Add capacity or Update - 0 ○

Scaling the Community ▪ Getting together every 6 months •

Moderator - Andrew Clay Shafer ○

Helped to found PuppetLabs ○ Organized DevOps Days in US ○ #PlatformCF, #CloudFoundry ○ Wifi - SSID: Hyatt, Access Code: cloudfoundry ○ Sessions will be recorded •

From Zero to Factory - Jonathan Murray ○

EVP & CTO at Warner Music Group, privately held ▪ Private owner has helped them to be flexible in pursuing these technologies ○ Made a long term commitment to Cloud Foundry 2 years ago ○

Composable Enterprise ▪

Each technology evolution didn't really change anything - it just made things more complicated/complex

□ Results in software development lifecycles that span long periods of time ▪ Creating organizational models to take advantage of changing requirements, constraints, etc.

Lightbulb moment ▪

If you can transform a business as basic as a copper mining company, what can technology do for other companies?

○ The pace of innovation within a company runs at the speed of IT ○

Pouring cement into the organization - all of IT over the last 20-30 years ▪ Difficult to change ○

Architectural perspective - system on top of system on top of system, etc... ▪

No standards ▪ New layers are focused on systems integration instead of delivering value ○ Architectural perspective - should be more like standardizing an IT service framework ○

Need to deliver IT Services ▪

Faster time to value □

Likely to be the single metric most CIOs will be measured against □

The answer to failing budgets - no one questions a budget that is generating orders-of-magnitude returns

▪ At lower cost ▪

Enables innovation □ Fast fail cycles ○

Revolution is coming ▪

Corporate IT looks the same everywhere - architectural dig, all layers of history still there

At some point, backward compatibility/building on the past, is no longer something you can afford

You'll have to make the jump to a blank sheet of paper - figure out how to re-implement what you have in that greenfield

Expects partners of Pivotal to put this in a box (COTS it) □ Will de-risk the decision for the customer ○

Building an IT Service Factory ▪

Separate silos -> Single Platform ▪ Systems Focus -> Biz. Engagement ▪ Break/Fix -> Continuous Improvement ▪ Maintaining Past -> Enabling Future ▪ Craft has its place, but not in order to facilitate rapid delivery ○

Factory Platform Strategy ▪

Decouple Infrastructure □

Can't care about what infrastructure you're running on - unimportant □ Build to PaaS □ Avoid IaaS Dependencies □ Abstract through tools like BOSH ▪

Make Data a Service □

Plumbed into different databases over the years, spending money to make it available to developers

□ Need a standard service interface for all developers □

Challenge DBMS Assumptions ◆ Not every use case in the enterprise needs a traditional DBMS □

Embed Semantics In Services ◆ Not in the database ◆

Stored procedures need to be strangled at birth ◊ Don't put business logic into the database (see the sketch at the end of this list) □

Avoid 'Classic' MDM ◆

Do not need to do classic master data management ◆ Build lightweight MTL services on the service layer □

Migrate Data on Demand ◆ Don't try to eat the data elephant all at once ▪
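Aside (not from the talk): a minimal Go sketch of "semantics in the service layer, not the database" - a tiny HTTP data service that applies a business rule in code rather than in a stored procedure. The order type, discount rule, and endpoint are all hypothetical.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// order is a hypothetical record that would normally live in a database.
type order struct {
	ID       string  `json:"id"`
	Subtotal float64 `json:"subtotal"`
	Total    float64 `json:"total"`
}

// applyDiscount is business logic kept in the service layer - the kind of
// rule that would otherwise end up in a stored procedure.
func applyDiscount(o order) order {
	if o.Subtotal > 100 {
		o.Total = o.Subtotal * 0.9
	} else {
		o.Total = o.Subtotal
	}
	return o
}

func main() {
	http.HandleFunc("/orders/example", func(w http.ResponseWriter, r *http.Request) {
		// In a real service this row would come from a data store.
		o := applyDiscount(order{ID: "example", Subtotal: 120})
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(o)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}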

Decompose Applications □

No more monolithic blobs of code □ Aim for Velocity - Plan for Re-work ◆ Understand that re-work will be needed and plan for it □

Minimize Code Footprint ◆ Do not want applications to build duplicate services □

Ease Service Discovery ◆ JSON, etc. □

Simplify Developer On-Boarding ◆

Have to treat the Enterprise Platform as any other platform - have a Developer program

◊ Portals ◊ Access to tools ◊ Etc. ▪ Automate Everything □

Build Culture of TDD ◆ Fix it upstream, not downstream □ Implement Continuous Integration ◆

Foreign concept in most IT organizations ◊ Most are still in waterfall design processes □

"You Break It You Fix It" ◆ If you break it, you don't get to check in more code □ Deploy Continuously ○

An Enterprise Journey ▪

Reduce risk through incubation □ Prove technology, de-risk to business ▪

Build a software + services culture □ Built what very much looks like a software company that has clients ▪

Prioritize new capability delivery □

In order to prove out the model □ Selected items with latent demand in the business - not the largest item ▪

Replace, don't migrate □ Figure it out on a priority basis ○

Few Lessons Learned ▪

Building an aircraft in flight is hard □ Let alone the engines ▪ Open Source is your friend □

If some capability is missing, the first place to turn is GitHub ◆ Find which projects have the best velocity, quality, etc. □ Can build the entire platform on OSS ▪

Agile is a mindset not a process □

Get people to think in an Agile kind of way □ Find the problem, highlight it, get everyone together to solve it, then move on ▪

Automation is not automatic □

Natural tendency is to apply human resource to fix a problem when something is broken - instead of trying to automate the process in the future (deeper thinking)

○ Fail-Learn-Adapt-Repeat ○

@adamalthus, adamalthus.com, adamalthus@gmail.com ▪ Looking for good people •

Continuous Delivery with Cloud Foundry - Andrew Crump ○

Applying concepts of Continuous Delivery ○ Anti-patterns to be aware of when working on Continuous Delivery with Cloud Foundry ○

Apply the same principles/practices from application code to infrastructure code



Founder of CloudCredo in London ▪ Also hiring ○

Deployment pipeline ▪

Developers work on small, discrete changes, and those changes are tested at various gates to determine whether the build is broken

▪ Then build up to larger, more complex tests ○

Why are we doing this? ▪

Feedback - quickly go through pipeline and obtain feedback from customer quickly ▪ Alternative - batching changes and increasing cycle time ▪ Shooting to avoid the worry of complexity ○

Cloud Foundry ▪

Very clear relationship between writing code and knowing how to share it ▪

Consistent interface for deployment □

cf push □

Same command line across different stacks (java, ruby, etc.) and different environments

Avoiding making breaking changes that delay feedback ▪ Ensure software is always in a deployable state ○

Feedback ▪

cloudfoundry.com □

Pushed application in afternoon, sent email with URL to client. □

Client came back and said it's rubbish - wrong theme, wrong validation ◆ Point was that feedback was obtained straight-away ▪

Decomposed the application into services □ Loosely coupled, talking to each other ▪ Custom release ▪ Multiple platforms, multi-region ▪ 600K transactions in 7 hours ▪ 500 user journeys every second ▪ 10K call center operators ▪ All enabled with Cloud Foundry ○

Production-like environments ▪

Needing to rebuild the application as it moves through different stages (between environments) is usually an indicator that an environment-specific dependency is embedded in the application

Environments ▪

Build and scale new environments repeatably □

Can be driven through code □ Once it's in code, it can be done repeatably ▪

Perform load testing within your pipeline □

Perform testing after each change - instead of building those changes up □ Scale up briefly to perform testing, then tear it down again ▪

Externalizing Configuration □

Don't embed config in application - need clean separation between code and config

□ Bind to services - not IP addresses/hostnames - e.g. read bound-service credentials from the environment (sketch below) □
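Rough sketch of externalized configuration on Cloud Foundry (not from the talk): read bound-service credentials from the VCAP_SERVICES environment variable instead of hard-coding hosts. The struct below keeps only the fields needed here, and the "host" credential key is illustrative - real services vary.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// boundService mirrors the minimal fields we care about from a bound
// service entry; the exact JSON shape can vary by service.
type boundService struct {
	Name        string                 `json:"name"`
	Credentials map[string]interface{} `json:"credentials"`
}

func main() {
	// Cloud Foundry injects bound-service credentials via the VCAP_SERVICES
	// environment variable, keyed by service label.
	raw := os.Getenv("VCAP_SERVICES")
	if raw == "" {
		log.Fatal("VCAP_SERVICES not set - are we running on the platform?")
	}

	var services map[string][]boundService
	if err := json.Unmarshal([]byte(raw), &services); err != nil {
		log.Fatal(err)
	}

	// Look up credentials by service label instead of hard-coding hosts.
	for label, instances := range services {
		for _, inst := range instances {
			fmt.Printf("%s/%s -> host=%v\n", label, inst.Name, inst.Credentials["host"])
		}
	}
}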

Reproducible Environments ▪

BOSH □

PaaS on Cloud Foundry □

Deployment manifest allows reproducibility in deploying to environments ◆ All in source control ▪

Artifact repositories



Vendor interdependencies - Ruby □ GemProxy - Ruby, BlobStore - BOSH ▪

Blue/Green Deployments □ Pattern to deploy application easily and remap front-end ○

Things to watch out for ▪

Manual application changes □

Need to ensure any changes being made go through the pipeline established earlier

Branching Deployments □

Jez Humble - Continuous Delivery book □

Against long-lived branches ◆ Long-lived branches keep you from picking up changes from the mainline ○

Adding infrastructure code to the pipeline ▪

Your service □ Application Code □ Infrastructure Code □ Base OS ▪

BOSH □

App code □ Cloud Foundry PaaS □ BOSH ▪ If you change code in one part, everything has to be tested again due to dependencies ▪

Validation of BOSH Manifests □ Check for well-formed YAML (see the sketch below) ▪ Automate upload of BOSH releases □ Can be driven through Jenkins ▪

Automate updates from upstream releases, binaries □ Ensure all items that run on top of a changed stack still work ○
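Rough sketch of the "well-formed YAML" gate (my illustration, not from the talk), assuming a third-party Go YAML parser (gopkg.in/yaml.v2); it only checks that the manifest parses, leaving BOSH-specific validation to BOSH itself.

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"

	yaml "gopkg.in/yaml.v2" // third-party YAML parser, used here just to check well-formedness
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: manifest-check <manifest.yml>")
	}
	data, err := ioutil.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}

	// We only care that the manifest parses as YAML; the structure itself
	// is left to BOSH to validate.
	var doc map[string]interface{}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		log.Fatalf("%s is not well-formed YAML: %v", os.Args[1], err)
	}
	fmt.Printf("%s parsed OK (%d top-level keys)\n", os.Args[1], len(doc))
}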

Key Takeaways ▪

Cloud Foundry makes building continuous delivery pipelines easier ▪ Cloud Foundry makes it easy to Do the Right Thing ▪

PaaS abstraction allows you to grow from small-scale to large scale and maintain a steady heartbeat of delivery.

○ andrew@cloudcredo.com •

Future of PaaS and Cloud Services at Swisscom - Torsten Toettjer ○

Cloud Foundry as enabler for new ICT Services ○

Working on the Swisscom cloud ▪ Placing every single service they offer onto it ○

Telco Dilemma ▪ Bandwidth demand and margins move in opposite directions ○

Diversification through ICT Services ▪ Let's buy a system integrator □ IT Services and mass production are not the same thing ▪ We can do web services as well ▪ Hey, we are the cloud ○ Swisscom's cloud strategy ▪ Two-tier "channel" □

IaaS becomes a channel service, so channel partners are needed - not focusing on becoming their own IaaS

Hybrid delivery □

Using service based on shared infrastructure □ Addresses data security/housing issues ▪

Open Source ▪ Open for Partnerships ▪ Focus on End User □

Great analogy - electric plug as a means of interface into commodity resource (electricity)

□ Adding content to "commodity" filesharing capability ○

Service delivery for cloud provider

Swisscom's "Marchitecture" ○

Build on standards ▪

Linux, Openstack, x86, CloudFoundry □ Liked CloudFoundry as it didn't require management of the infrastructure itself ▪ Swisscom Cloud ○

What we have done so far... ▪

Launched iO Communication service □ PSDN service in country, IP outside of country ▪ Built a real-time billing interface for non-telco services ▪ Implemented an API Management Platform ▪

Started to develop a Cloud OS □ Not another CloudStack distribution ▪ Build out a collaboration hub for continuous innovation ○


[Slide: Swisscom cloud architecture - Enterprise Cloud (enterprise virtualization), Service Cloud (service operation), and Application Cloud (Platform as a Service), compared layer by layer: clients/native apps vs. DEA at the front end, enterprise software and platform appliances vs. application containers at the back end, service management vs. a broker, dynamic/elastic computing services, domain management, and a shared/private/partner infrastructure tier; virtual servers vs. APIs at the OS level]

Major shift from working with infrastructure vendors who are focused on protocol standards □ Identifying which business you don't want to develop on your own □ Become comfortable with the idea that you will need to partner with others to build this □



Go Within Cloud Foundry - Mike Gehard •

From Boulder and gave the presentation barefoot ○ Developing software for about 15 years ○

Go ▪ Gopher mascot ○

Why did they choose Go ▪

History □

Discussion about writing software that runs in massive computing arrays □

Started in 2007 within Google ◆ Robert Griesemer, Rob Pike, Ken Thompson □

2008 ◆ Russ Cox joins team □

Nov 10, 2009 ◆ Project is now OSS □

March 2012 ◆ Go 1.0 release □

Early 2013 ◆ GoRouter put into production ▪

Why □

Golang.org/doc/ - GO FAQ □

Frustration with existing languages ◆

The guys writing Go were writing Unix and even they were getting frustrated

□ Previously you had to choose among efficient compilation, efficient execution, or ease of programming □ Many were moving to dynamically typed languages □

Wanted ease of programming, efficiency and safety, networking and multicore support, and fast builds of large executables on a single computer

Go - garbage collector, small spec, fast - compiles and runs quickly ◊

Network (Unix sockets) and multicore support built in ◊ Referenced the CSP paper (Hoare?) - see the sketch below □

Cannot be addressed well by libraries or tools; a new language was called for ◆ Nothing new since 94-95 ▪
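Illustration (mine, not from the talk) of the CSP-style concurrency built into Go - goroutines that communicate only over channels, no shared memory or locks:

package main

import "fmt"

// worker reads jobs from a channel and sends results back - CSP-style
// message passing instead of shared memory and locks.
func worker(id int, jobs <-chan int, results chan<- string) {
	for j := range jobs {
		results <- fmt.Sprintf("worker %d processed job %d", id, j)
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan string, 5)

	// Fan out: a few goroutines communicate only over channels.
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	for i := 0; i < 5; i++ {
		fmt.Println(<-results)
	}
}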

Not a gigantic box of legos □ Get a kit - specific pieces of components that are available in the toolkit ▪

Two big projects being used today □

Loggregator ◆

Warden containers within the DEA write stdout/stderr to Unix sockets - this is how log messages are obtained

◊ Apps run in Warden containers ◆ Same with the router, cloud controller, service gateway, UAA ◆ A generic agent listens for content on stdout/stderr and forwards it (agent sketch below) ◆ In load testing, at full capacity it only consumed half of 1% of CPU ◆ As messages come out of the agents, they get sent to the Loggregator router ◆

Then proxy determines which hash value is in use and forwards to a loggregator server partition ◆

Persistent storage will require something like Splunk - this is not an aggregation and analysis service - it's only a brokering service for logging ◆

3 months to write code that is now in production, by engineers who had never written Go code before
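Sketch of the agent pattern described above - listen on a Unix socket and forward each log line - not the actual Loggregator code; the socket path and router address are made up.

package main

import (
	"bufio"
	"log"
	"net"
)

func main() {
	// Hypothetical paths/addresses for illustration only.
	const socketPath = "/tmp/app-logs.sock" // where the container writes log lines
	const routerAddr = "127.0.0.1:3456"     // stand-in for a Loggregator router

	ln, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		// One goroutine per connection: read lines and forward them.
		go func(c net.Conn) {
			defer c.Close()
			out, err := net.Dial("tcp", routerAddr)
			if err != nil {
				log.Print(err)
				return
			}
			defer out.Close()
			scanner := bufio.NewScanner(c)
			for scanner.Scan() {
				if _, err := out.Write(append(scanner.Bytes(), '\n')); err != nil {
					log.Print(err)
					return
				}
			}
		}(conn)
	}
}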

□ CLI ▪

www.packer.io □ Began writing Go applications before Cloud Foundry ▪ Ref: tour.golang.org ○ @mikegehard ○ @pivotallabs •

Cloud Foundry and NTT Group - Yudai Iwasaki ○

Public Cloud Service "Cloudn" ▪

NTT Group overview □ Japanese telecom company - #1 telecom in world by revenue ▪

Cloudn service overview



Reliable, low-cost cloud services with a rich API □ Including PaaS based on Cloud Foundry □

Datacenters ◆

3 countries and 5 locations by Dec 2013 ◆ Users can choose preferred locations ○

Cloudn PaaS: Why we Chose Cloud Foundry ▪

Overview □

Launched last March □

Based on Cloud Foundry v1 ◆ With some backported v2 components □ Cloud Foundry CORE compatible □ Integrated with other Cloudn services ▪

Why □

Requirements ◆

Portability of user applications ◊

Works on public & private clouds and in standalone environments ◊ Many OSS frameworks ◆

Extensible design ◊

Integration with Cloudn Services ◊ Loosely coupled components & APIs ◆

Scalability for public services ◊ From 1 to 500+ nodes ◆

24/7 reliable system ◊ Minimum SPOF ◆

Working code ◊ Ruby ▪ Development Timeline □ 1.5 years total □ Oct 2011 project launched □ Feb 2012 closed beta service started □ Dec 2012 limited commercial services ▪

Developed Extensions over 2 years □

User-friendly web user interface ◆

Easy application management ◆ Using Cloud Controller REST API internally □

Persistent application Log Management ◆

Users can view, search, and download application logs on the web UI ◆ Logs persist across instance restarts ◆ Logger agent on each DEA □ Cloudn RDB Service Support ◆

Added a new service gateway ◆ Users can provision reliable MySQL clusters from the CLI □

Integrated Authentication System ◆

The wrapper to connect Cloudn IDs and CF internal IDs ◆ Provisions user IDs by calling the Cloud Controller REST API ◆ Cloud Foundry v2 uses UAA instead of the Cloud Controller for provisioning □ ... ○ Conclusion ▪

Succeeded in launching service rapidly with Cloud Foundry ▪

Cloud Foundry is □ Extensible □ Portable □ Scalable □ Reliable □ Mature •

Cloud Foundry 101 - Matthew Kocher ○

Director of engineering for Cloud Foundry ○

Runtime ▪

Responsible for keeping instances running, scaling, and the HTTP component ▪ Core to applications and developers ○

Steps for pushing an app ▪ Devs install CF Gem ▪ Cloud Controller □

Keeps track of the desired state of the world □ Responsible for upgrading apps □ Sinatra app, written in Ruby □ RDBMS back-end ▪

UAA □

Java spring component that handles security □ Dev logs in via this ▪

Files are sent to the Cloud Controller □ Only deltas are uploaded (fingerprint sketch below) □ Stored in the blob store ▪
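Rough idea of how "only deltas" can be decided (my sketch, not the actual cf/Cloud Controller implementation): fingerprint each file and skip uploading anything whose fingerprint the server already has. The directory and output are illustrative.

package main

import (
	"crypto/sha1"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// fingerprint returns the SHA-1 of a file's contents; matching fingerprints
// let a client skip uploading files the server already has.
func fingerprint(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha1.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	// Walk an app directory and print path -> fingerprint pairs.
	root := "." // hypothetical app directory
	filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		sum, err := fingerprint(path)
		if err != nil {
			return err
		}
		fmt.Println(path, sum)
		return nil
	})
}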

DEA □

Responsible for "staging" ◆

Take code and run build pack on it ◊

Package JRE for Java ◊ Package Ruby interpreter ◆

Creates a tarball ◊

This is what gets passed around to execute ◊ Referred to as a "droplet" ◆ Calls a start script that begins running the application □ The DEA is a multipurpose agent that runs code □ Uploads the droplet to the Cloud Controller and starts the app on that DEA's instance □ The Cloud Controller will pick other instances to run your app on ▪

Users then use a browser to hit the Router □

Written in Go □ HTTP proxy layer □ Listens for advertisements of applications and updates its routing table (sketch below) □

Doesn't have any state on disk - doesn't serve up applications until it listens long enough to start routing things

□ Router then routes traffic to a port on a DEA container ▪
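Toy version of the routing-table idea, not the actual gorouter: routes are registered (hard-coded here; in the real system they arrive as advertisements over the message bus) and a reverse proxy forwards requests by Host header. Hostnames and addresses are made up, and unknown hosts simply fail.

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// routeTable maps hostnames to backend app instances, the way the router
// learns routes from advertisements.
type routeTable struct {
	mu     sync.RWMutex
	routes map[string]*url.URL
}

func (t *routeTable) register(host, backend string) {
	u, err := url.Parse(backend)
	if err != nil {
		log.Print(err)
		return
	}
	t.mu.Lock()
	t.routes[host] = u
	t.mu.Unlock()
}

func (t *routeTable) lookup(host string) (*url.URL, bool) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	u, ok := t.routes[host]
	return u, ok
}

func main() {
	table := &routeTable{routes: map[string]*url.URL{}}
	// Hypothetical registration; real ones arrive over the message bus.
	table.register("myapp.example.com", "http://10.0.0.5:61001")

	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			// Unknown hosts are left untouched and will simply fail in this toy.
			if backend, ok := table.lookup(r.Host); ok {
				r.URL.Scheme = backend.Scheme
				r.URL.Host = backend.Host
			}
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}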

Health Manager □

Talks to the Cloud Controller to find out what it wants to happen, then listens to the message bus to determine whether that is actually happening

If it notices a DEA crashed and one more instance is needed, it alerts the Cloud Controller, which then starts up another instance if needed.

Also, if a DEA has been unplugged and the app was upgraded during that time, it will point that out to the Cloud Controller so the old version is killed and the new version started (see the sketch below)
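Sketch of the desired-vs-actual reconciliation loop described above (mine, not the actual Health Manager): compare what the Cloud Controller wants with what the message bus reports and ask for corrections. The app names and data sources are stand-ins.

package main

import (
	"fmt"
	"time"
)

// instance identifies an app at a particular version.
type instance struct {
	app     string
	version string
}

// desiredState stands in for what the Cloud Controller wants;
// actualState stands in for what heartbeat messages on the bus report.
func desiredState() map[instance]int {
	return map[instance]int{{"shop", "v2"}: 3}
}

func actualState() map[instance]int {
	return map[instance]int{{"shop", "v2"}: 2, {"shop", "v1"}: 1}
}

// reconcile asks for starts where instances are missing and stops where
// extra (e.g. old-version) instances are still running.
func reconcile() {
	desired, actual := desiredState(), actualState()
	for inst, want := range desired {
		if have := actual[inst]; have < want {
			fmt.Printf("request start of %d instance(s) of %s %s\n", want-have, inst.app, inst.version)
		}
	}
	for inst, have := range actual {
		if want := desired[inst]; have > want {
			fmt.Printf("request stop of %d instance(s) of %s %s\n", have-want, inst.app, inst.version)
		}
	}
}

func main() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	reconcile() // one pass immediately, then on every tick
	for range ticker.C {
		reconcile()
	}
}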

Cloud Foundry BOSH ▪

Low-level tool that allows you to describe an entire cluster, and then change one thing in the cluster to enact that change.



Infrastructure agnostic ○

Cloud Foundry Services ▪

Allow you to take the stateless runtime and add state to it ▪ Could be something like Redis, Mongo, Cassandra, or an RDBMS ○

Building apps for PaaS ▪

Break large applications down into small applications and have them speak to each other over HTTP across the Router (sketch below)
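Minimal example of one such small service in Go, assuming the platform passes the listen port via the PORT environment variable (VCAP_APP_PORT on older releases):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Foundry tells the app which port to bind via the environment.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // fallback for running locally
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from one small service")
	})
	log.Fatal(http.ListenAndServe(":"+port, nil))
}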

How We Work ▪

Cloud Foundry is developed in tight collaboration with Pivotal Labs ▪ Agile Software Development ▪ 45 devs in San Francisco ▪ Uses TDD, each team has a PM that manages the backlog ▪ All OSS work done on GitHub ▪

Email out to Cloud Foundry mailing list □ Not always on the mailing list all day ○

What's in the Pipeline ▪

cloudfoundry/cli - announced on Friday at 6pm □

Working on a new CLI □ Will be written in Go ◆

Produce a single statically linked binary ◆ Generated from same code base - no dependencies ▪

New Service Broker (not Gateway) API □

Easier way to integrate with legacy systems, persistence layers, higher layer services

□ Previously had to take their Ruby code and write your own from scratch □ Info on writing your own service broker coming soon (hypothetical sketch below) ▪
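Hypothetical sketch only - the broker API details weren't covered in the talk. It assumes a catalog-style REST endpoint; the path, field names, and service entries below are illustrative, not the official schema.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// service and plan are a minimal, hypothetical catalog shape.
type service struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Plans       []plan `json:"plans"`
}

type plan struct {
	Name string `json:"name"`
}

func main() {
	// A catalog endpoint the platform could poll to learn what the broker offers.
	http.HandleFunc("/v2/catalog", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string][]service{
			"services": {{
				Name:        "legacy-db",
				Description: "broker in front of an existing database",
				Plans:       []plan{{Name: "shared"}},
			}},
		})
	})
	log.Fatal(http.ListenAndServe(":9090", nil))
}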

No Single Points of Failure □

Classified failure modes, and making them better □

If Router loses all connectivity to the message bus, it assumes the world is still OK and not gone to poop.

□ Should lose any one node and still continue functioning □

Health Manager 9000 ◆ New Health Manager written in Go ○

Story ▪

Outage three weekends ago with AWS ▪ Took out a large portion of run.pivotal.io ▪ Created a Google hangout used for production outages ▪ Lost their NAT box (single point of failure) ▪ Spent about 30 minutes assessing the situation ▪

Ultimately used BOSH to redeploy another availability zone □ 70 instances, and the apps hosted in the old location □ Just needed access to S3 storage and the SQL blobs ▪

It took only 1.5 hours to re-deploy and cut traffic over before the Health Manager started bringing applications back online

Multi-Site Architecture - Michael Behrendt ○

Michael Behrendt, IBM ○ Senior Technical Staff Member ○

Objectives for multi-site ▪

Improve redundancy of apps and Cloud Foundry itself ▪ Control location of apps and services - for latency, governance, etc. reasons ○

Components in scope ▪

All Cloud Foundry-internal components