1 of 44

Midpoint Deliverable | Fall ‘25

PLG Insights & Strategies for Google Cloud

BSB x Google Cloud

November 11th, 2025


2 of 44

Agenda

Introductions

  • Meet the Team, Our Approach, Companies

Companies

  • CUA AI, PromptLayer, Browserbase, Retool, & Shortwave

PLG Analysis

  • Context, Unique PLG Practice, & Life-Cycle Impact

PLG Results

  • Implementation Details & Risks/Trade Offs

Applying to GCP

  • Transferable Insights (Brief + Prose)


3 of 44

Introductions

Meet the Team

  • Project Managers, Senior Analysts, & Analysts

Our Approach

  • Analyst Education -> Large Players -> Small Players -> PLG Deep Dives

Companies

  • Selected Companies to Present


4 of 44

Meet The Team

Project Managers & Senior Analysts

placeholder image

Justin Choi

Project Manager

Jenny Dunn

Senior Analyst

Joshua Schiller

Project Lead

Alexandra Kowalczyk

Project Manager

Allison Dai

Senior Analyst


5 of 44

Meet The Team

Analysts

placeholder image

Andrew Madrigal

placeholder image

Kaden Huang

placeholder image

Richard Luo

placeholder image

Tanya Zhang

placeholder image

Telmunn Bayarkhuu

placeholder image

Amy Chen

placeholder image

Preiploy Omarrak

placeholder image

SJ Janolkar

placeholder image

Huong Le


6 of 44

Our Approach

Analyst Education -> Large Players -> Small Players -> PLG Deep Dives

1

2

3

4

Analyst Education : Understanding GCP’s Current PLG Playbook

Large Players : Understanding Broad PLG Strategies of Larger Players

Small Players : Deep Dives into PLG Practices of Smaller, Unique Players

PLG Deep Dives : Considering the User Journey (technical vs. non-technical, etc.)

5 Unique PLG Practices from 5 Smaller Players across both Cloud and Non-Cloud sectors

Assignment Phases


7 of 44

Companies

Selected Companies to Present

CUA AI


8 of 44

CUA AI


9 of 44

PLG Analysis - Context

CUA AI

What Capabilities

Who Benefits

Problems Solved

  • Provides an open-source developer platform
  • Allows builders to create, run, and deploy AI agents
  • Capable of operating full computer environments
  • Designed for engineers, developers, and builders
  • Allows audience to prototype and deploy intelligent agents quickly
  • Mitigates infrastructure setup and friction that developers/operators experience
  • Eliminates first-mile friction through plug-and-play workflow
  • Users can create a ready-to-use virtual machine in minutes
  • Experience live agents running immediately
  • Later upgrade options to paid cloud instances


10 of 44

PLG Analysis - Unique PLG Practice


CUA AI

Onboarding Practices

  • Practice: Cua’s local-first to cloud expansion model
    • Developers start for free, using their local SDK, then transition to the cloud via earned runtime credits

  • This practice lets AI developers launch a working agent environment in minutes by using their credits to spin up a pre-configured cloud VM

  • A one-click sandbox
    • Guided trial linking Cua’s open-source, local SDK to its managed cloud service (on GCP)


11 of 44

PLG Analysis - Life-Cycle Impact

Onboarding

Runtime credits

Dashboard Analytics

Github

Blog

Retention

CUA AI


12 of 44

PLG Results - Implementation Details


CUA AI

UI/UX

Interactive User Dashboard Components:

  • Credits
  • Analysis
  • Quick-start Tutorials

Nudges

Automated Re-Engagement System:

  • Automated emails prompting users to “Continue Building” when idle

Milestones

Personalized Achievement Settings:

  • User’s first VM launch triggers an encouraging, celebratory UI
  • Prompting Feature Exploration

Tiers

Multi-Tiered Payment System

  • Personalized to user/subscriber intent
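The automated re-engagement system above reduces to a simple rule: if a user has seen first value (launched a VM) but has since gone idle past a threshold, queue a nudge email. The sketch below is a hypothetical illustration; `should_nudge`, the 7-day window, and the nudge cap are assumptions, not Cua's actual logic.

```python
# Hypothetical sketch of the idle re-engagement rule behind
# "Continue Building" emails. Names and thresholds are illustrative.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(days=7)   # assumed inactivity window
MAX_NUDGES = 3                       # stop emailing after this many


def should_nudge(last_active: datetime, now: datetime,
                 has_launched_vm: bool, nudges_sent: int) -> bool:
    """Nudge only users who saw first value (launched a VM), then went
    idle, and haven't already been emailed too many times."""
    idle = now - last_active >= IDLE_THRESHOLD
    return has_launched_vm and idle and nudges_sent < MAX_NUDGES
```

Gating on the first VM launch keeps nudges targeted at activated users rather than sign-ups who never reached value.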


13 of 44

PLG Results - Risks & Trade Offs

The same frictionless design that drives activation also creates risks in cost control, scalability, and user quality, necessitating strong guardrails and targeting.

Abusing Free Credits

Churn Risk

Free credits can be exploited by bots or high-usage users

  • Creates unexpected cloud costs if not rate-limited

Onboarding can be made more engaging

  • Competing tools like LangFlow and Replit AI typically keep users engaged longer

Competitive Pressure

Larger companies offering built-in agent tools

  • e.g., OpenAI, Anthropic, and Hugging Face

Key Takeaway

CUA AI


14 of 44

Applying to Google Cloud - Transferable Insights - Prose Reference

Status Quo

  • GCP offers $300 in free credits, then pushes the user into a scary, empty console, which causes decision paralysis and cost anxiety.

PLG Practice to Adopt

  • Cua’s local-first SDK acts as a safe, sandboxed emulator, deferring the complexity of cloud configuration until after the user creates a functional build and achieves their first win.

Example Workflow: a “Vertex AI Prototyping SDK”

A local-first developer kit serving as the primary PLG entry point for GCP’s AI and agent-building services.

  • Local-First Emulation: a downloadable SDK emulating the core of Vertex AI and agent-building functionalities, allowing developers to build, train, and test basic agent/model workflows on their local machine, free of charge.
  • Guided, Gated Learning: the SDK is bundled with interactive tutorials in the IDE, turning the SDK itself into the onboarding flow.
  • Earned Cloud Runtime: instead of a flat $300 credit, developers earn non-transferable Vertex AI runtime credits for each completed tutorial, gamifying the learning process and providing tangible rewards for activation.
  • One-Click Cloud Promotion: after earning their first credits, a “Deploy to Vertex AI” button becomes active within the SDK. Clicking it uses the earned credits to automatically provision a pre-configured Vertex AI project, seamlessly migrate the local working project to the new live cloud environment, and provide a next-step tutorial on scaling the cloud-based project.
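The earned-credit mechanic can be sketched as a small ledger: tutorials grant non-transferable runtime credits once each, and the deploy action stays locked until the balance covers provisioning. All names here (`CreditLedger`, `TUTORIAL_REWARDS`, the credit amounts) are hypothetical illustrations, not a Cua or GCP API.

```python
# Hypothetical sketch of an earned-credit gate for a "Deploy to
# Vertex AI" button. Names and credit values are illustrative only.

TUTORIAL_REWARDS = {
    "hello_agent": 25,       # credits granted per completed tutorial
    "first_workflow": 50,
    "local_eval": 75,
}

DEPLOY_COST = 100            # credits required to provision a project


class CreditLedger:
    def __init__(self):
        self.completed = set()
        self.balance = 0

    def complete_tutorial(self, name: str) -> int:
        """Grant credits once per tutorial; repeats earn nothing."""
        if name in TUTORIAL_REWARDS and name not in self.completed:
            self.completed.add(name)
            self.balance += TUTORIAL_REWARDS[name]
        return self.balance

    def can_deploy(self) -> bool:
        """The deploy button unlocks only at this threshold."""
        return self.balance >= DEPLOY_COST

    def deploy(self) -> bool:
        """Spend credits to provision; returns False if still gated."""
        if not self.can_deploy():
            return False
        self.balance -= DEPLOY_COST
        return True
```

Tying the unlock to tutorial completion (rather than a flat grant) is what makes the credits an activation reward instead of a giveaway.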

CUA AI


15 of 44


16 of 44

PLG Analysis - Context

What Capabilities

Who Benefits

Problems Solved

  • Provides prompt management that tracks, versions, and evaluates LLM prompts across AI products

  • Designed for startup developers, AI engineers, and product-level AI builders

  • Increases ease of use through a seamless onboarding process
  • Retains stable customers
  • Accelerates the production timeline for prompt curation (fulfilling the ICP’s needs)


17 of 44

PLG Analysis - Unique PLG Practice


Onboarding Practices

  • Practice: PromptLayer’s Guided Iteration Cycle lets nontechnical users achieve accessible prompt management in minutes through frictionless, hands-on experimentation

  • This creates a workflow for deploying interactions in one interface while running comparison analyses across prompt versions for optimization

  • This cuts down the numerous iterations required during production cycles, promoting efficiency
    • It also lets users understand the process of iterating themselves, which drives retention and expansion


18 of 44

PLG Analysis - Life-Cycle Impact

Demographics

  • Developer stage targeted: the activation/retention stage
  • Primary/secondary life-cycle stage targeted: Stage 3 – Iteration, plus the retention stage
  • Who benefits most: AI engineers, startup POCs, indie developers

Impacts

  • Key outcomes moved: [Time-to-First-Success ↓] [Activation ↑]

  • Strategic fit for GCP: aligns with ease of use and onboarding speed
  • Example: activation/retention via PromptLayer

Iteration

Instant Feedback Loop

Feedback Structure

Viral Confirmation Loop

Prompting Loop

Retention


19 of 44

PLG Results - Implementation Details

Instant Feedback Loop

Structure/Feedback

Viral Confirmation Loop


20 of 44

PLG Results - Implementation Details

Customer Reviews

Case Studies

Quantitative Data


21 of 44

PLG Results - Risks & Trade Offs


Highly sensitive information necessitates bulletproof security.

  • Accessibility and interactivity can lead to less robust security surrounding user information

The flexibility of seat-based pricing comes with the risk of undercharging, while scalability and security challenges arise from the UX and sensitive information.

Pricing Challenges

Security Challenges

Seat-based pricing could result in multiple team members sharing one account, while usage-based pricing can lead to unpredictable revenue.

  • Miscounted charges create revenue uncertainty

Key Takeaway

Scalability Difficulties

UX needs to cater to both technical and nontechnical users.

  • Magnifies delicate balance between accessibility and specialization.


22 of 44

Applying to Google Cloud - Transferable Insights - Prose Reference

Status Quo

  • GCP is heavily developer-focused. With the rise of GenAI, “nontechnicals” are becoming increasingly influential in driving the ideation and specification of AI-powered products.

PLG Practice to Adopt

  • PromptLayer’s Guided Iteration Cycle (GIC) for “nontechnicals”:
    • Deploy and test different prompt versions
    • Conduct comparison analysis across models and versions
    • Analyze results and tweak prompts to improve outcomes

Example Workflow: a “Vertex AI Prompt Hub”

A no-code, web-based tool: a new PLG entry point targeting PMs, business analysts, and other “nontechnicals.”

  • No-Code Interface: a clean, simple, web-based UI like PromptLayer’s, allowing users to write, test, and iterate on prompts without setup.
  • Guided Comparison & Iteration: users can A/B test different prompt versions against Google models; the UI would provide the GIC with clear comparison views, scoring, and version history.
  • Collab-First Expansion: the tool would be built for teams. Core PLG mechanics arise through “Collaboration Invites Sent” and seat-based pricing tiers.
  • Developer Handoff: the critical integration point. Once the user selects a “winning” prompt, a “Get API Snippet” or “Share with Developer” button would be the primary call to action.
    • Generates a production-ready code snippet for a developer to integrate into a Vertex AI-backed application.
    • Serves as the “pull-through” mechanic, driving the engineering team to the full Vertex AI platform for production.
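The “Get API Snippet” handoff amounts to a template renderer: the hub stores the winning prompt and the model it was validated against, and emits a paste-ready call. The sketch below is hypothetical; `render_api_snippet` and its defaults are invented for illustration, though the emitted code follows the shape of the Vertex AI Python SDK.

```python
# Hypothetical sketch of the "Get API Snippet" handoff: render a winning
# prompt into a paste-ready Vertex AI call. The function and template
# are illustrative, not a documented GCP feature.

SNIPPET_TEMPLATE = '''\
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="{project}", location="{location}")
model = GenerativeModel("{model}")
response = model.generate_content({prompt!r})
print(response.text)
'''


def render_api_snippet(prompt: str, model: str,
                       project: str = "my-project",
                       location: str = "us-central1") -> str:
    """Turn the hub's winning prompt + validated model into code a
    developer can drop into a Vertex AI-backed application."""
    return SNIPPET_TEMPLATE.format(
        project=project, location=location, model=model, prompt=prompt,
    )
```

Because the snippet embeds the exact prompt and model the nontechnical user validated, the developer inherits a tested configuration rather than re-deriving it.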


23 of 44


24 of 44

PLG Analysis - Context

What Capabilities

Who Benefits

Problems Solved

  • Serverless platform that runs headless browsers at scale
  • Provides infrastructure for web automation and AI agents using clean APIs and SDKs

  • Designed for teams building AI agents, data automation, and web-integrated products
  • Allows organizations to focus on product innovation instead of infrastructure plumbing

  • Existing options force teams to DIY browser infrastructure or rely on limited headless tools
  • Competing offerings break at scale, lack observability, or require constant maintenance


25 of 44

PLG Analysis - Unique PLG Practice

Production & Scale Practices

  • Practice: Browserbase’s enterprise-ready reliability, scale, and security enables real-world automation, quickly.

  • Teams need automation that survives login walls, CAPTCHAs, changing frontend code, and traffic spikes; at production scale, stability beats hacks every time.

  • Browserbase keeps sessions human-like and durable, handles infrastructure invisibly, and scales to thousands of browsers in parallel so production agents keep working even when the web misbehaves.


26 of 44

Browserbase opens a headless browser

Actions the agent can take inside the browser

Actual browser window the agent opened


27 of 44

PLG Analysis - Life-Cycle Impact

Onboarding

Simplified Onboarding

Retention

User Resources

Word-of-Mouth


28 of 44

PLG Results - Risks & Trade Offs

Browserbase sacrifices some local runtime speed to deliver stronger reliability and to scale securely across large production workloads.

Higher Cost

DIY

Cheaper tools like Puppeteer or Playwright hosted in-house cost less

  • Exchange for convenience, stealth, and reliability

Teams with heavy internal automation might resist giving up infra and control

  • Particularly if they believe they handle stealth well enough already

Latency

Local automation can feel faster because code runs alongside the browser

  • A remote service introduces minor latency while improving reliability and scale

Key Takeaway


29 of 44

Applying to Google Cloud - Transferable Insights - Prose Reference

Status Quo

  • GCP's services are “agent-blind”: built for hosting applications, not for agent-based web interaction. Developers must build and manage their own automation stacks on top of generic compute. This DIY approach is high-friction and does not reliably scale.

PLG Practice to Adopt

  • Browserbase’s “Infrastructure Primitive” utility:
    • A reliable, production-grade utility for the emerging AI agent economy.
    • An SDK (Stagehand) as the main PLG motion.

Example Workflow: “Vertex Browser Service”

Browser-as-a-Service: add managed headless browsers as a GCP compute option, giving AI agents an agent-ready interface to the real web. A new “Vertex Browser Service” would let AI agents launch secure, managed Chrome sessions directly on GCP, running specific tasks under developer preferences.

  • Integrated & Stealthy by Default: the headless browser would natively integrate with Vertex AI and Cloud Run, automatically handling proxies, CAPTCHAs, and session management.
    • This allows AI agents to execute tasks successfully and in a human-like way.
  • Scalable & Priced for Agents: scaling to thousands of simultaneous browsers enables repeatable, automated tasks across sessions; browser-hour or session-based billing gives startups simple, scalable access to web automation priced to their needs.
  • Solving the DIY Problem: the new primitive removes the developer onus of building and maintaining fragile, in-house automation stacks and provides reliable plumbing for AI agents, so developers can focus on agent logic instead of infrastructure.
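Browser-hour billing can be sketched as a metering function: each session records start and end times, and usage rolls up into billable hours. Everything below (`BrowserSession`, `bill`, the rate) is a hypothetical illustration, not a real GCP pricing API.

```python
# Hypothetical sketch of browser-hour metering for a managed browser
# service. Names and the rate are illustrative, not a real GCP API.
from dataclasses import dataclass

RATE_PER_BROWSER_HOUR = 0.10  # assumed flat rate, USD


@dataclass
class BrowserSession:
    session_id: str
    start_s: float   # epoch seconds when the browser launched
    end_s: float     # epoch seconds when the session closed

    @property
    def hours(self) -> float:
        return max(0.0, self.end_s - self.start_s) / 3600.0


def bill(sessions: list[BrowserSession]) -> float:
    """Roll parallel sessions up into a total browser-hour charge.
    Each session is metered independently, so 1,000 browsers running
    six minutes cost the same as one browser running 100 hours."""
    total_hours = sum(s.hours for s in sessions)
    return round(total_hours * RATE_PER_BROWSER_HOUR, 2)
```

Metering by session rather than by instance is what makes the pricing scale-neutral: fan-out across thousands of parallel browsers costs no more than the equivalent serial usage.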


30 of 44


31 of 44

PLG Analysis - Context


What Capabilities

Who Benefits

Problems Solved

  • Provides pre-built UI components (tables, charts, forms) that connect seamlessly to databases and APIs
  • Developers can quickly prototype, test, and deploy fully functional applications (without worrying about boilerplate code or complex frontend setup)
  • Empowers technical teams and developers to focus on business logic rather than front-end scaffolding
  • Expedites a process that typically requires weeks of engineering
  • Helps ship internal tools 10x faster and allows teams to iterate quickly

32 of 44

PLG Analysis - Unique PLG Practice


Onboarding Practices

  • Practice: Instant App Templates & One-Click Deployment
    • Retool lets users instantly launch functional app templates, from CRM dashboards to ticketing systems & AI assistants

  • Automatic Configuration
    • Templates arrive pre-wired: data connections and queries are configured automatically, so users skip manual setup

  • Time-To-First-Value
    • Retool reduces the time from initial onboarding & signup to a working app from days to minutes

Automatically Generated Workflows

Retool Query Editor


33 of 44

PLG Analysis - Life-Cycle Impact

Life-Cycle Stages

Retool mainly targets the Onboarding & Development stages: Onboard → Activate → Retain

Stage 2: Onboarding – Guided First Success

  • Practice: Retool’s pre-wired templates connect live data and visualize results within minutes, reducing setup friction and enabling a guided first success

Stage 3: Development – Continuous Usage

  • Templates are easily remixed and extended, enabling continuous usage across teams


34 of 44

PLG Results - Risks & Trade Offs

New users may experiment by launching many templates or connecting large production data.

  • Without guardrails, experimentation can lead to large, unexpected expenses.

The slick introductory experience that accelerates onboarding & activation also introduces security, retention, and cost trade-offs.

Security & Data Exposure

Unpredictable Costs

Users may unintentionally expose credentials without strong defaults or sandboxing.

  • Security is required when handling sensitive information.

Shallow Engagement

Out-of-the-box dashboards often miss Retool’s broader extensibility.

  • Convenience may limit activation-to-retention conversion

Key Takeaway


35 of 44

Applying to Google Cloud - Transferable Insights - Prose Reference

Status Quo

  • GCP’s “Jump Start Solutions” provided pre-configured bundles, but developers still had to bring their own code and stitch infrastructure and app logic together manually, delaying activation.

PLG Practice to Adopt

  • Retool’s sophisticated “Instant App Templates”:
    • Pre-configured, complete applications; NOT infra-as-code scripts or “Hello World” skeletons
    • Removes boilerplate code and complex frontend setup
    • Delivers a guided first success by connecting live data sources instantly

Example Workflow: “Jump Start Solutions v2”

“Bring the infrastructure to the code” by offering developers a library of pre-built, functional applications.

  • Full-Stack, Not Infra-Only: a new “Solutions” library wouldn’t just be a collection of Terraform scripts or infrastructure blueprints. Some examples:
    • “A Vertex AI Dashboard for Sentiment Analysis,” “A Cloud Run + Firestore + Gemini API,” or “An AI-Powered Customer Support Ticketing System”
  • “Deploy & Fork” UX: after selecting an application, developers use one-click deployment, then fork the source code.
    • One-click deployment: a “Deploy Now” button deploys the entire stack and automatically provisions the necessary GCP services.
    • Source code included: after deployment, the user receives the forkable source code for the full-stack application; this code already has the connection strings, API keys, and the like for the new infrastructure.
  • Solving the “Stitching” Problem: Jump Start Solutions v2 directly solves the mandate of pre-configured, pre-built recipes that “remove the onus” from the developer by stitching everything together for them; the emphasis shifts from building to iterating, customizing, and adopting, which is more engaging and retaining.
  • Impact: a shift from Infra-as-Code → App-as-Infra. By bundling infrastructure, app logic, and UX into one deployable experience, GCP accelerates activation and encourages iterative adoption.
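A “Deploy Now” flow has to provision services in dependency order before wiring connection strings into the forked code. The sketch below pairs a hypothetical solution manifest with a topological-sort planner; the manifest shape and service names are invented for illustration and are not a real Jump Start Solutions format.

```python
# Hypothetical sketch of a full-stack "solution" manifest and a planner
# that orders GCP service provisioning by dependency. The manifest
# shape is illustrative, not a real Jump Start Solutions format.
from graphlib import TopologicalSorter

SENTIMENT_DASHBOARD = {
    # service -> services it must be created after
    "firestore": [],
    "gemini_api": [],
    "cloud_run_backend": ["firestore", "gemini_api"],
    "dashboard_frontend": ["cloud_run_backend"],
}


def provisioning_plan(manifest: dict[str, list[str]]) -> list[str]:
    """Return an order in which 'Deploy Now' can safely create each
    service: every dependency appears before its dependents."""
    return list(TopologicalSorter(manifest).static_order())
```

Encoding the stack as a dependency graph is what lets one button replace the manual “stitching” step: the planner, not the developer, decides what gets created first.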


36 of 44


37 of 44

PLG Analysis - Context

What Capabilities

Who Benefits

Problems Solved

  • Shortwave is an AI-powered email client that instantly organizes Gmail into a fast, focused workspace.
  • Built for collaborative teams and professionals who need to process large volumes of messages quickly.
  • It removes the first-mile friction of inbox overload by automating sorting, follow-ups, and collaboration.

Clients


38 of 44

PLG Analysis - Unique PLG Practice


Instant-Value Onboarding Practices

  • Practice: Shortwave’s Instant AI-Inbox Flywheel is a single standout product-led mechanism: AI auto-bundles instantly remove inbox overload, while built-in collaboration nudges spread usage via shared threads and team invites.
  • It delivers value on Day 0 without users needing to configure labels or habits, and spreads naturally within teams through collaboration features embedded directly in messaging workflows.
  • This solves the problem of first-mile friction by eliminating manual sorting, rules, and tool-switching.
    • Users regain inbox control instantly, without setup or training.


39 of 44

PLG Analysis - Life-Cycle Impact

Onboarding

Guided First Success

AI Inbox Organization

Reduce Load

Shared Threads + Invites

Retention / Expansion


40 of 44

PLG Results - Implementation Details

Immediate Connection

Instant Login

UI Triggers


41 of 44

PLG Results - Implementation Details

Collaboration Triggers

Gating Tiers

Professional Reviews


42 of 44

PLG Results - Risks & Trade Offs

Summaries must be consistently reliable; any errors erode user confidence and trust.

  • Privacy concerns may slow enterprise adoption

Shortwave’s growth model is strong, yet lasting success depends on managing Gmail dependency, converting free users effectively, and maintaining trust through reliable, privacy-safe AI.

Platform Dependency

AI Trust & Accuracy

Platform usage depends on Gmail/Workspace, where API or policy changes could reduce value.

  • Limits expansion to other email ecosystems

Key Takeaway

Viral Loop Saturation

Team adoption can plateau once an org. is fully onboarded, so solo users generate little virality.

  • Requires continuous feature innovation to sustain spread


43 of 44

Applying to Google Cloud - Transferable Insights - Prose Reference

Status Quo

  • In some of GCP’s features and products, there are few strong implementations of viral loops or enablement of collaboration.
  • Invites are a functionality that should drive expansion of GCP usage throughout a team, and then throughout an organization.

PLG Practice to Adopt

  • Context-Aware Nudges: functions like “Summarize Thread” & “Share Thread”
    • “Summarize this thread” appears exactly when needed, guiding the user at the right moment. Progressive disclosure keeps complexity hidden until the user has achieved early success, ensuring the experience never feels overwhelming.
    • Viral calls to action such as “Share Thread” are placed directly within the workflow, so collaboration and virality happen naturally. These patterns combine to create an interface that feels intuitive and alive.
  • Users consistently report reaching inbox zero faster and feeling genuine relief from overload
    • TechCrunch has highlighted these results, noting that Shortwave users achieve inbox zero 45% faster than Gmail’s native workflow.

Example Workflow: “Disclosure Nudges for Cognitive Overload”

Concise popups/nudges engage users and reduce overwhelm while promoting application awareness through a share function.

  • ‘Share Result’ | BigQuery
    • A Share button that generates a public or team-viewable link to a query result, including the SQL, metadata, and a data visualization
  • ‘Share Log View’ | Log Explorer
    • A button that generates a link to a specific, time-boxed, pre-filtered log stream, allowing a developer to share a bug with a teammate instantly
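A “Share Log View” mechanic boils down to serializing a filter plus a time box into one URL. The sketch below is hypothetical: the path and parameter names are invented for illustration and are not Cloud Logging’s real link format.

```python
# Hypothetical sketch of a "Share Log View" link generator. The path
# and query-parameter names are invented for illustration; this is not
# Cloud Logging's real URL format.
from urllib.parse import urlencode

BASE = "https://console.cloud.google.com/logs/shared"  # assumed path


def share_log_view(project: str, filter_expr: str,
                   start_iso: str, end_iso: str) -> str:
    """Serialize a pre-filtered, time-boxed log view into a single
    shareable URL a teammate can open to see exactly the same bug."""
    params = urlencode({
        "project": project,
        "filter": filter_expr,
        "start": start_iso,
        "end": end_iso,
    })
    return f"{BASE}?{params}"
```

Because the link carries the full query state, the recipient lands on the exact view the sender saw, which is what makes the share button a viral loop rather than a bookmark.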


44 of 44

Q&A Session + Feedback

BSB x Google Cloud

November 11th, 2025
