1 of 57

Turning Cost Models On Their Heads

David Seaver

Solutions Architect/Strategic Estimation Lead

General Dynamics Information Technology (GDIT)

david.seaver@gdit.com

First International Boehm Forum on COCOMO®

and Systems and Software Cost Modeling

November 2022

2 of 57

Required Dilberts

3 of 57

Metrics?

towardsdatascience.com

Cassie Kozyrkov from Google

Metric Design for Data Scientists and Business Leaders

In order to make good data-driven decisions you need three things:

  1. Decision criteria based on well designed metrics
  2. The ability to collect the data those metrics are based on
  3. Statistics skills to calculate those metrics and interpret the results under uncertainty

4 of 57

Is it time to evolve our estimation practices to make better data-driven decisions?

Reasons why:

    • I am estimating effort with my models, not cost. B&P demands drive this.
    • Development practices are evolving: Agile… DevOps… DevSecOps
    • Maturation of cloud-based technology and the movement from stand-alone servers to the cloud
    • Software teams no longer provide engineering effort for the IT infrastructure part of a software project; that work has moved to a managed service
    • Software Factories provide common tools for software teams, installed and configured to support development within hours
    • The scope of those tools and the cross-tool integration is maturing rapidly
    • We should take advantage of technology’s evolution to collect data faster, better, cheaper, and more consistently
    • Scheduling and staffing have changed; normal curves don’t fit how projects are staffed
    • We could do a better job communicating the hows and whys of our estimates

5 of 57

6 of 57

7 of 57

The Beatles’ ‘Revolver’ Reissue Is Here—With a Little Help From A.I.

8 of 57

Value Streams: Organizing Around Value

  • One of the core principles in the lean and Agile community is organizing around value.
  • A value stream is the set of actions that take place to add value to a customer from the initial request through realization of value by the customer.
  • The value stream begins with the initial concept, moves through various stages of development and on through delivery and support.
  • By optimizing the flow through the value stream steps and minimizing bottlenecks, teams can focus on delivering value with the shortest lead time.
  • There are two main types of value streams.
    • Operational value streams are the sequence of activities needed to deliver a product or service to a customer.
    • Development value streams are the sequence of activities teams use to develop and support the products used by operational value streams.

9 of 57

Operational Value Streams

  • Examples of operational value streams include
    • building a product
    • fulfilling an order
    • admitting and treating a medical patient
    • providing a loan
    • delivering a professional service
  • Operational value streams can be further divided into four types:
    • Fulfillment value streams: steps necessary to process a customer request, deliver a digitally enabled product or service, and receive remuneration.
    • Manufacturing value streams: convert raw materials into the products customers will purchase.
    • Software product value streams: support for software products.
    • Supporting value streams: end-to-end workflows for various supporting activities, generally around internal policies.

10 of 57

Development Value Stream (Product and Service Delivery)

  • Development value streams contain the people and services that develop the solutions used by the operational value streams.
  • Organizing teams around value streams enables long-lived, stable teams that focus on delivering value rather than on task completion.
  • Teams can identify and visualize all the work necessary to produce solutions and can then identify delays, bottlenecks, and handoffs in the process and work to reduce those inefficiencies.
  • This in turn leads to smaller batch sizes of work with a faster time-to-market. Adopting value streams and a value-driven mindset also allows the team to focus on process improvement and continuous growth.

11 of 57

Software Project Value Streams

  • Features
  • Defects
  • Risks
  • Debt

12 of 57

Epics and Features

  • Epics: An Epic is a container for a significant Solution development initiative that captures the more substantial investments that occur within a portfolio.
  • Due to their considerable scope and impact, epics require the definition of a Minimum Viable Product (MVP) and approval by Lean Portfolio Management (LPM) before implementation.
  • Portfolio epics are typically cross-cutting, spanning multiple value streams and Program Increments (PIs).
  • SAFe recommends applying the Lean Startup build-measure-learn cycle for epics to accelerate the learning and development process, and to reduce risk.
  • Features and Capabilities: A Feature is a service that fulfills a stakeholder need. Each feature includes a benefit hypothesis and acceptance criteria and is sized or split as necessary to be delivered by a single Agile Release Train (ART) in a Program Increment (PI).
  • A Capability is a higher-level solution behavior that typically spans multiple ARTs. Capabilities are sized and split into multiple features to facilitate their implementation in a single PI.

  • https://www.scaledagileframework.com/safe-requirements-model/  © Scaled Agile, Inc. Include this copyright notice with the copied content.

13 of 57

14 of 57

Software Engineering has evolved
Estimation needs to evolve too

Changes to COCOMO III for SW estimation in 2023 because of (and not limited to):

    • Agile
    • DevSecOps
    • Cloud
    • SW Factory
    • Cost Driver Changes
    • Output Changes
    • ML/AI
    • Infrastructure as Software
    • 5G
    • Depth and Breadth of Tool Integration

15 of 57

But the world has been changing

  • Customers are acquiring the resources (labor) to build and maintain systems without any description of scope and capability. For example:
    • The IRS plans to release the request for quotation for its $2.6 billion applications development contract in June 2022 and award the contract in the first quarter of fiscal 2023, according to an agency spokesperson.
    • “Scope enhancements will be defined to modernize applications in parallel with executing annual legislative demands and supporting [tax] filing season,” reads the Professional Services Council event description. “The level of effort for this agreement is anticipated to have a total contract ceiling of [$2.6 billion] over a seven-year period of performance.”
    • Contractors will be expected to provide support across six task areas: project and program management, agile portfolio management, DME, sustainment O&M, upgrades and configuration management, and transition services. AD methods will include Waterfall, Agile and DevOps
  • Develop estimates for the entire portfolio, not just a single application
  • Collaborate and communicate with governance and Senior Management

16 of 57

COCOMO 3

Size inputs are Source Code Based

Working on Function Points and SNAP Points

Cost Drivers have evolved from Waterfall and are written in a private language understood primarily by the estimation community

A translation is required to explain these measures to both the development team and the customer, and to relate them to the value stream

17 of 57

Future of Size Inputs/Measures

18 of 57

Enhanced T-Shirt Sizing using a Standard Feature Size

19 of 57

Source Code

  • IMO, source code has an interesting future for the estimation community
  • New tools track and manage source code and can easily measure it and track it back to functions and features
  • SW Factories can build this analysis in
  • The development of simulations and models of existing systems can be driven by digital models of the system from the source code (GITDEV for example)
  • Shout out to Cheryl Jones
    • Digital Engineering (DE) Measurement Framework https://www.psmsc.com/DEMeasurement.asp

20 of 57

Function Points

  • Great paper by Wilson Rosa and Sarah Jardine at IT CAST; they are presenting next
  • There are a few issues with function points going forward
  • Two principal components make up a function point size:
    • Data entities (external logical files, internal logical files)
    • Transactions (inputs, outputs, inquiries)
  • To relate back to the value stream: I find the number of data entities and the count of transactions for a feature are of more business interest and easier to explain than a Function Point or SFP size
  • It’s also easier to explain what is being counted with transactions and data than to teach leadership what a function point is (it’s like translating your metrics into Latin)
  • Function Points have a lot of overhead and are not “Agile”

21 of 57

Software Size for DevSecOps Teams

  • A simple feature consists of:
    • 1 Logical Data Entity (data that is saved/maintained) (7 Function Points)
    • 5 transactions related to the Data Entity (22 Function Points, total 29 Function Points for a default Feature)
      • Create
      • Update
      • Delete
      • Read
      • Report
  • This model is based on an analysis of the 2022 version of the ISBSG repository by the International Function Point Users Group, as well as work done at Fidelity Investments from 1996 to 2001 for financial transaction systems and at NSA from 2011 through 2021 for a variety of analytic platforms.

  • We can use this as a building block for the BOE when we are bidding on fixed-resource Scrum/Agile/DevSecOps teams, as well as SW-as-infrastructure teams (a minimal sketch of the building block follows below).
  • We should (where feasible) refine this model with data from each specific GDIT customer going forward. This requires access to and analysis of the user story data from ongoing SW-related work. We need to explore options for collecting this data digitally in the future from CI/CD tool suites.
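To make the building block concrete, here is a minimal sketch in Python of the default feature size described above. The 7 FP per data entity, the 5 default transactions, and the 22 FP they contribute come from this slide; the even pro-rating of FP across transactions is an illustrative assumption, not part of the source model.

```python
# Minimal sketch of the "standard feature" size building block, assuming the
# defaults from this slide: 1 logical data entity (7 FP) plus 5 transactions
# (create/update/delete/read/report, 22 FP) = 29 FP per default feature.
# Pro-rating the 22 FP evenly across transactions is an illustrative
# assumption, not part of the source model.

DATA_ENTITY_FP = 7            # FP per logical data entity (from the slide)
DEFAULT_TRANSACTIONS = 5      # create, update, delete, read, report
DEFAULT_TRANSACTIONS_FP = 22  # FP for those 5 transactions (from the slide)


def feature_size_fp(data_entities: int = 1,
                    transactions: int = DEFAULT_TRANSACTIONS) -> float:
    """Rough feature size in Function Points for a given entity/transaction mix."""
    per_transaction_fp = DEFAULT_TRANSACTIONS_FP / DEFAULT_TRANSACTIONS
    return data_entities * DATA_ENTITY_FP + transactions * per_transaction_fp


if __name__ == "__main__":
    print(feature_size_fp())      # default feature: 29.0 FP
    print(feature_size_fp(2, 8))  # hypothetical larger feature: 49.2 FP
```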

22 of 57

Example

  • Single feature
    • One Data object
    • 7 transactions
    • Process Control Application
    • Built by experienced team
    • Use SW Factory
    • All new Development
    • Java
    • Commercial Development Standard High Reliability
    • Total Effort for Single Feature 275 Hours
      • PM 19 Hours
      • Systems Engineering/Architects 54 Hours
      • Agile Team 203.35 hours

23 of 57

  • This table and the graph illustrate how the effort for a single feature increases as the application becomes more complex
  • As systems include more real-time capabilities and complicated algorithms, they require additional effort to develop
  • I used SEER SEM and TruePlanning to model the effort per feature

24 of 57

Basis of Estimate

Metric Table

  • Hours per person month: 160
  • FTE hours per year: 1,920
  • Team size: 8
  • Total hours per year per team: 15,360
  • Time per release (months): 3
  • Releases per year: 4
  • Hours per release: 3,840
  • Features per release: 14
  • Hours per feature: 275

Notes: Process Control Application; Commercial Production Software, High Reliability; includes an Agile team of 8 plus tax for SEPM. A sketch of the arithmetic behind the derived values follows below.
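The derived rows in the metric table follow from the stated inputs; below is a minimal sketch of that arithmetic, using only constants taken from the table (features per release rounds to 14, matching the table).

```python
# Re-derivation of the Basis of Estimate metric table. The four constants
# below come directly from the table; everything else is arithmetic.

HOURS_PER_PERSON_MONTH = 160
TEAM_SIZE = 8
MONTHS_PER_RELEASE = 3
HOURS_PER_FEATURE = 275  # process control application, high reliability

fte_hours_per_year = HOURS_PER_PERSON_MONTH * 12                      # 1,920
team_hours_per_year = fte_hours_per_year * TEAM_SIZE                  # 15,360
releases_per_year = 12 // MONTHS_PER_RELEASE                          # 4
hours_per_release = team_hours_per_year // releases_per_year          # 3,840
features_per_release = round(hours_per_release / HOURS_PER_FEATURE)   # ~14

print(fte_hours_per_year, team_hours_per_year, releases_per_year,
      hours_per_release, features_per_release)
```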

25 of 57

Hours per Feature / Application Complexity & SW Technology (New Dev or Modification)

  • We standardize on a feature size of 29 Simple Function Points
  • This is based on an analysis of the ISBSG repository performed in 2022
  • The table to the left illustrates the effort hours to implement one feature for a range of IT applications across a variety of SW development technologies
    • LowCode/NoCode (Microsoft PowerApp)
    • 4GL
    • Java
    • Python
  • Options cover 100% new development or modification of existing code
  • Please note this is notional data (a structural sketch of such a lookup follows below)
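Since the actual per-technology hours live in the slide's (notional) table rather than in this text, the sketch below only shows how such a lookup could be structured; the one value filled in is the 275-hour Java / process control / new-development figure from the earlier example, and the remaining cells would have to come from the table itself.

```python
# Structural sketch of an hours-per-feature lookup keyed by technology,
# application type, and new-vs-modification work. Only one cell is filled in
# (the 275-hour Java / process control / new-development figure shown earlier
# in this deck); the rest would be populated from the slide's notional table.

HOURS_PER_FEATURE_TABLE = {
    # (technology, application type, work type) -> hours per 29-SFP feature
    ("Java", "process control", "new"): 275.0,
}


def hours_per_feature(technology: str, application: str, work_type: str) -> float:
    """Look up per-feature effort; raises KeyError for combinations not yet populated."""
    return HOURS_PER_FEATURE_TABLE[(technology, application, work_type)]
```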

26 of 57

Feature Estimation Example

  • Draft template to estimate feature delivery capability when the customer is buying fixed staff for a fixed duration (a hedged sketch of this calculation follows below)
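The slide's actual draft template is not reproduced in this text, so the sketch below is only a hedged illustration of the kind of calculation such a template could make: given a fixed team and duration, how many standard features can be delivered. The function name, the overhead parameter, and the 20% overhead value in the example are assumptions; the 160 hours per person month and 275 hours per feature reuse the Basis of Estimate defaults.

```python
# Hedged sketch of a feature-delivery-capability estimate for a fixed-staff,
# fixed-duration buy. The structure, names, and overhead allowance here are
# assumptions, not the slide's actual draft template.

def features_deliverable(team_size: int,
                         duration_months: int,
                         hours_per_feature: float,
                         hours_per_person_month: float = 160.0,
                         overhead_fraction: float = 0.0) -> float:
    """Rough count of standard features a fixed team can deliver in a fixed duration.

    overhead_fraction is an assumed allowance for SEPM and other 'tax' hours
    that do not go directly to feature work.
    """
    total_hours = team_size * duration_months * hours_per_person_month
    productive_hours = total_hours * (1.0 - overhead_fraction)
    return productive_hours / hours_per_feature


if __name__ == "__main__":
    # Example: 8-person team, 12 months, 275 hours/feature, assumed 20% overhead
    print(round(features_deliverable(8, 12, 275, overhead_fraction=0.2), 1))  # 44.7
```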

27 of 57

Impacts on COCOMO 3

Attributes and Drivers

28 of 57

Product and Platform Attributes

Product Attributes

  • Impact of Software Failure (FAIL): Extra High descriptions for Impact on Project Activities come out of the box with a CI/CD SW Factory on cloud; infrastructure as SW/code
  • Product Complexity (CPLX): needs work
  • Developed for Reusability (RUSE): CI/CD changes
  • Required Software Security (SECU): needs work

Platform Attributes

  • Platform Constraints (PLAT): cloud and SW Factory updates needed
  • Platform Volatility (PVOL): cloud and SW Factory updates needed

29 of 57

Personnel Attributes

  • Analyst Capability (ACAP): resource names need to be reworked, as well as the scope of activities
  • Programmer Capability (PCAP): resource names need to be reworked, as well as the scope of activities
  • Personnel Continuity (PCON): these stay static in the context of bidding
  • Application Domain Experience (APEX)
  • Language and Tool Experience (LTEX)
  • Platform Experience (PLEX)

30 of 57

Project Attributes

  • Precedentedness (PREC): PREC is based on the need for innovative data processing architectures, algorithms, and development processes (e.g., testing). Much of this is mitigated by SW Factory and cloud capabilities.
  • Development Flexibility (FLEX): IDK yet
  • Risk/Opportunity Management (RISK, new): IDK yet
  • Software Architecture Understanding (ARCH, new): IDK yet
  • Stakeholder Team Cohesion (TEAM): this would default to a High to Extra High setting
  • Process Capability & Usage (PCUS, replaces PMAT): default to VH or EH
  • Use of Software Tools (TOOL): the scope of activities needs to be reflected in the tool attributes; an Extra High rating needs to be added for tool coverage and tool integration
  • Multisite Development (SITE): COVID and remote work have significantly impacted this
  • Required Development Schedule (SCED): with a regular release cadence, is this still relevant?

31 of 57

Degree to which software development practices have been automated. We need to rethink what nominal is for this.

Automated Tools Use (SEER SEM Inputs)

  • Very High: advanced, fully integrated tool set encompassing all aspects of requirements, design, development, test, and configuration management
  • High+: modern, fully automated application development environment, including requirements, design, and test analyzers
  • High: modern visual programming tools, automated CM, test analyzers, plus requirements or design tools
  • Nominal+: visual programming, CM tools, and simple test tools
  • Nominal: interactive, programmer workbench (Ada minimal APSE)
  • Low: base batch tools (compiler, editor)
  • Very Low: primitive tools (bit switches, dumps)

32 of 57

Quality Attributes

  • Automated Analysis (AUTO): IDK yet
  • Peer Reviews (PREV): IDK yet
  • Execution Testing and Tools (EXTT): IDK yet

33 of 57

Application Super Domains

  • A very good start … but it needs work
  • IMO the complexity inputs in classic COCOMO are too vague and difficult to use properly
  • The super domains lump all AIS systems into one block
  • This leaves a large variation in effort and cost within a single domain

34 of 57

Source Code

  • IMO, source code has an interesting future for the estimation community
  • New tools track and manage source code and can easily measure it and track it back to functions and features
  • SW Factories can build this analysis in
  • The development of simulations and models of existing systems can be driven by digital models of the system from the source code (GITDEV for example)

35 of 57

Function Points

  • Great paper by Wilson Rosa at IT CAST; he is speaking tomorrow, please listen to him
  • There are a few issues with function points going forward
  • Two principal components make up a function point size:
    • Data entities (external logical files, internal logical files)
    • Transactions (inputs, outputs, inquiries)
  • To relate back to the value stream: I find the number of data entities and the count of transactions for a feature are of more business interest and easier to explain than a Function Point or SFP size
  • It’s also easier to explain what is being counted with transactions and data than to teach leadership what a function point is (it’s like translating your metrics into Latin)
  • Function Points have a lot of overhead and are not “Agile”

36 of 57

Software Size for DevSecOps Teams

  • A simple feature consists of:
    • 1 Logical Data Entity (data that is saved/maintained) (7 Function Points)
    • 5 transactions related to the Data Entity (22 Function Points, total 29 Function Points for a default Feature)
      • Create
      • Update
      • Delete
      • Read
      • Report
  • This model is based on an analysis of the 2022 version of the ISBSG repository by the International Function Point Users Group, as well as work done at Fidelity Investments from 1996 to 2001 for financial transaction systems and at NSA from 2011 through 2021 for a variety of analytic platforms.

  • We can use this as a building block for the BOE when we are bidding on fixed-resource Scrum/Agile/DevSecOps teams, as well as SW-as-infrastructure teams.
  • We should (where feasible) refine this model with data from each specific GDIT customer going forward. This requires access to and analysis of the user story data from ongoing SW-related work. We need to explore options for collecting this data digitally in the future from CI/CD tool suites.

37 of 57


38 of 57

39 of 57

Other Stuff you might want to know

Not covered today, but included as backup reference

40 of 57

SeaFeaMod

Software Estimation and Analysis of Features Model

41 of 57

Steps Involved In Defining And Realizing Value Streams

42 of 57

DevOps

  • DevOps is the union of people, processes, and products to enable the continuous delivery of value to the end-user. The following are the core principles of DevOps:
    • Systems thinking: optimize the entire system of value delivery
    • Amplify feedback: Strive to get more rapid feedback throughout the system
    • Continuous improvement: Use outcomes to continuously learn and improve
  • DevOps is neither a tool, a team, nor a role, but is a way of operating where the organization is aligned around more effective value delivery to minimize waste while automating everything possible. This reduces costs and improves delivered quality. The reason DevOps is important is that it enables improved innovation, faster feedback, happier customers, and happier product teams.
  • DevSecOps: There is no real difference between DevOps and DevSecOps.
    • DevOps is focused on improving the flow of value to the end-user. Insecure software is not valuable because it puts the end-user at risk. DevSecOps was coined in an effort to emphasize that security was core to DevOps.

43 of 57

GitOps

  • GitOps automates infrastructure provisioning for DevOps teams by dynamically updating infrastructure as code configuration files based on events in the DevOps workflow (i.e., version control, collaboration, merge requests, etc.). In this way, GitOps changes the declarative state of the infrastructure based on the activity of the real-time system or application and allows elastic infrastructure management based on demand. This level of automation reduces the traditional manual workload of provisioning infrastructure and integrates infrastructure operations into the DevOps cycle.

44 of 57

DoD ….

  • DoD Platform One Big Bang: Big Bang provides DoD software teams or programs with secure DevSecOps environments that can be customized. Big Bang leverages Infrastructure as Code and Configuration as Code to enable deployment of a declarative platform environment. This consolidates and codifies best security practices into a product that is reliably deployable in a variety of environments.
  • Platform One Party Bus: Party Bus is a deployment of core Big Bang for development, testing, and production environments. Party Bus is a declarative state, and Party Bus environments benefit from the Platform One continuous authority to operate.
  • DoD Iron Bank: DoD’s Iron Bank repository contains fully accredited container images that are deployable on any infrastructure. Iron Bank makes industry-leading free and open source software and commercial off-the-shelf software available for DoD customers.

45 of 57

ARTs

  • Agile Release Train (ART)
  • An ART is a team of aligned Agile teams that executes the products and services identified in the value streams.
  • The ideal ART is long-term, focused holistically on a set of related products and services, sized appropriately between 50-125 people, with minimal dependencies with other ARTs.
  • The size of the value stream dictates the ART design.
  • For smaller value streams, a single ART may be able to execute multiple value streams. Large value streams may require multiple ARTs.

46 of 57

Agile Teams

  • The ideal team size is anywhere between 5-11 team members, with a preference for smaller team size whenever possible. These teams are cross-functional, meaning that they have the core competencies required to develop and deliver value in short, two-week time increments.
  • Agile team structures are arranged not by communication or functional silos, but rather to support the value streams of work. These teams are long-lived and stay together with all the necessary skills to define requirements and to design, build, test, and deploy the solution.
  • Agile teams are organized based on developing a particular feature or feature area within a product. These teams would focus on developing the specific customer feature or feature area that would add value to the product.

47 of 57

48 of 57

What is infrastructure as code?

  • Infrastructure as Code (IaC) is the process of provisioning and managing infrastructure defined through code, instead of doing so with a manual process.
  • Because infrastructure is defined as code, users can easily edit and distribute configurations while ensuring the desired state of the infrastructure. This means you can create reproducible infrastructure configurations.
  • Moreover, defining infrastructure as code also:
    • Allows infrastructure to be easily integrated into version control mechanisms to create trackable and auditable infrastructure changes.
    • Provides the ability to introduce extensive automation for infrastructure management. All of this leads to IaC being integrated into CI/CD pipelines as an integral part of the SDLC.
    • Eliminates the need for manual infrastructure provisioning and management, allowing users to easily manage the inevitable config drift of underlying infrastructure and configurations and keep all environments within the defined configuration.

49 of 57

What is infrastructure as code?

  • Infrastructure is one of the core tenets of a software development process—it is directly responsible for the stable operation of a software application. This infrastructure can range from servers, load balancers, firewalls, and databases all the way to complex container clusters.
  • Infrastructure considerations are valid beyond production environments, as they spread across the complete development process. They include tools and platforms such as CI/CD platforms, staging environments, and testing tools. These infrastructure considerations increase as the level of complexity of the software product increases. Very quickly, the traditional approach for manually managing infrastructure becomes an unscalable solution to meet the demands of DevOps-based modern rapid software development cycles.

50 of 57

Nonfunctional Requirements

  • Nonfunctional Requirements (NFRs) define system attributes such as:
    • security
    • reliability
    • performance
    • maintainability
    • scalability
    • usability
  • NFRs serve as constraints or restrictions on the design of the system across the different backlogs. © Scaled Agile, Inc. Include this copyright notice with the copied content.

51 of 57

52 of 57

Containers and Kubernetes

  • Containers are packages of software that wrap all dependencies and settings for a given application into a single, fully executable unit. Containerization allows a single application and all of the components needed to run it (e.g., databases, configurations, default settings, etc.) to be isolated and abstracted from the computing environment. This provides flexibility to leverage and deploy applications enterprise-wide, on any infrastructure (i.e., on-premise, cloud, or hybrid), and reduces the risk of vendor lock-in with a single cloud service provider. Scaling the use of containers across the Department of Defense can increase software reuse across Components and significantly reduce the overhead burden on developers deploying the applications into production.
  • Kubernetes: Containers are deployed using container orchestration tools such as Kubernetes. Kubernetes is an open source container orchestration platform that automates the deployment, scaling, and management of containerized software. Kubernetes is infrastructure agnostic, meaning it can be deployed and executed regardless of hardware specifications (i.e., in the cloud, on your laptop, or on an air-gapped server). IaC and CaC can provide automated provisioning of this infrastructure and the deployment of Kubernetes.

53 of 57

Microservices

  • In a microservice-based approach to software system architecture, the system is decomposed into many individual software applications that each perform discrete tasks or processes. The individual applications communicate via well-defined application programming interfaces (APIs). Known as “microservices,” each application is fully abstractable from the rest of the system. For example, each microservice has its own, isolated business process flow, logic, data access layer, and codebase.
  • Microservices represent an alternative approach to traditional, monolithic architectures for software systems. By decomposing complex software systems into smaller, self-contained functional units, microservices architectures enhance system reliability and limit the scope and scale of impact when an issue is detected. In this same manner, microservices architectures enable wide, cost-effective scalability and limit the amount of overhead or institutional knowledge required by individual developers.
  • Rather than needing to learn the entire system and its dependencies, developers can work more efficiently, on smaller teams, and focus on ensuring the dependability of an individual microservice.
  • Within the Department of Defense, enterprise-wide availability of pre-approved, containerized microservices can significantly accelerate software development efforts by making the building blocks of software systems discoverable, available on-demand, and deployable in any computing environment. Given their decoupled nature (i.e., independent of a single system or workflow), microservices are highly composable. They can be configured with other, independent microservices to support workflows and applications tailored to any unique mission. Moreover, microservices that are containerized, configured and managed as a carefully chosen suite of services and functions can serve as pre-assembled software development pipelines that are infinitely scalable.

54 of 57

Container Orchestration

  • Container orchestrators automate the deployment and management of containers on any infrastructure. This removes the need to redesign or reconfigure an application in order to deploy it to a different environment.
  • Container orchestration also enables virtualization and scaling of containerized microservices such as storage, networking, and security, which are foundational components of cloud-native applications. Orchestration tools configure containers based on declared states, which indicate how the containers should run. Rather than having to manually redesign or reconfigure an application to run in a different environment, container orchestrators standardize deployments.
  • Within the DoD context, containerized microservices and container orchestration can be used to provide a common, but extensible, platform that mission applications can be developed and run on top of. A common, secure platform would allow for better alignment between platform teams and mission application teams. In other words, creating greater standardization among platforms, while preserving the ability to quickly adapt and integrate new products or capabilities, would create a positive feedback loop between mission application teams and platform teams.

55 of 57

@Code

  • Infrastructure as Code (IaC) is an approach to managing the technical infrastructure required to run containerized applications. Instead of relying on traditional methods of documenting procedures, infrastructure as code automates and source-controls the configuration of infrastructure. The end result is a declarative, repeatable configuration that automates infrastructure provisioning, effectively managing it as a single piece of software.

  • Configuration as Code (CaC) is an approach to managing the configuration of applications’ initial state. Like infrastructure as code, this practice results in a declarative state that can be automated, repeated, and managed as a single piece of software.
  • Declarative State: The combination of Infrastructure as Code, Configuration as Code, and containerized software provides a declarative state. Declarative state is the future of continuous delivery. Kubernetes is among the fastest-growing ecosystems, and its services, support, and tools are widely available.

56 of 57

Continuous Integration/Continuous Delivery

    • Continuous delivery (CD) means we have the capability of releasing the latest changes validated by the pipeline to production on demand.
      • Continuous deployment means that every change we make will flow directly to production unless the CD pipeline invalidates the delivery. In either case, a true CD flow will have no human intervention between when code is accepted into the trunk and when it is delivered.
      • Very high performing teams will deliver changes as rapidly as possible. Depending on the delivery context, this could be several times an hour, day, or week.
      • The goal of CD is to minimize risk and cost of change by relentlessly driving down the size of change to expose and correct inefficiencies in the process and improve quality feedback loops.
      • CD is not a technology or tool. CD is how we use the tools to improve quality, organizational efficiency, and customer outcomes.
    • Continuous integration is the practice of a team continuously making small changes and integrating them with the other changes the team is making, to verify that the changes work together and to minimize the risk of conflicting changes between developers. This is a core quality control and a base requirement for CD.

57 of 57

Software FLOW

  • Software FLOW is middleware that connects disparate systems, transforming and restructuring data as it moves between them. FLOW enables complicated data environments, sources, and destinations to interoperate.