1 of 11

Open-Source Chip Prototyping and PPA Analysis on Chameleon

6th Chameleon User Meeting (2026)


Background

This work focuses on building an open-source framework for floating-point hardware PPA evaluation and verification using reproducible ASIC-style OpenROAD characterization workflows on Chameleon. The project supports broader efforts toward AI-assisted hardware design and implementation-aware verification, while also helping identify infrastructure needs and workflow gaps for scalable AI/EDA research.

Speaker

Connor Bohannon (Argonne)

Presented: April 15th, 2026 (Boulder, CO)


2 of 11

Project Overview

  • Project Objective
    • Investigate future specialized computing architectures beyond traditional transistor scaling
    • Explore how emerging workloads map to domain-specific hardware accelerators
  • Research Themes
    • Rapid hardware prototyping with open-source design flows
    • Architecture/performance/resource estimation for specialized accelerators
    • AI-assisted chip design, verification, and physical implementation
    • Evaluation of accelerator architectures for scientific and HPC workloads
  • Infrastructure / Tooling
    • Chisel / Chipyard / FireSim
    • Cocotb / Verilator
    • OpenROAD / Open-Source EDA Stack
    • Cloud-based FPGA / CAD experimentation via Chameleon
  • PI/Contact:
    • Kazutomo Yoshii <kazutomo@anl.gov>

Exploring Future Computing Architecture Designs (CHI-231208)

Goal:

Move from “does it work?” to “can we actually build it?”

3 of 11

Research Vision / Motivation

  • LLM-assisted RTL and testbench generation is rapidly lowering the barrier to hardware design
  • However, generated RTL is often evaluated only functionally, not physically
  • A major gap remains between:
    • Functional correctness and tapeout / deployment feasibility
  • Bridging this gap requires implementation-aware evaluation:
    • Physical feasibility / PPA analysis
    • Formal / pre-silicon verification
    • Implementation-aware design ranking

Toward AI-Assisted Hardware Design & Verification

Diagram: AI-generated RTL → missing evaluation gap (PPA analysis, physical design feasibility, formal verification, ranking / selection) → tapeout-ready design

4 of 11

Current Framework Direction

  • Objective: Evaluate AI-generated and handwritten FP RTL using implementation-aware metrics beyond functional correctness

  • Implemented Today: OpenROAD automation, timing sweeps, and CSV extraction across the OpenFloat / HardFloat / Rial libraries (a driver sketch follows this list)

  • Why it matters: Functional simulation does not capture area, timing, or routability

  • Next: implementation-aware metrics for AI-guided design ranking
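
In practice this step is a scripted loop over designs and clock targets that invokes the flow and records one CSV row per run. Below is a minimal sketch of such a driver; the wrapper script run_flow.sh, the design names, and the CSV columns are illustrative assumptions, not the project's exact interface.

    import csv, pathlib, subprocess

    # Illustrative design names and clock targets (not the project's exact matrix).
    DESIGNS  = ["openfloat_fma32", "hardfloat_fma32", "rial_fma32"]
    CLOCK_NS = [2.0, 1.5, 1.0, 0.8]

    rows = []
    for design in DESIGNS:
        for clk in CLOCK_NS:
            run_dir = pathlib.Path("runs") / f"{design}_{clk}ns"
            run_dir.mkdir(parents=True, exist_ok=True)
            # run_flow.sh is a hypothetical wrapper around the synthesis + P&R flow.
            result = subprocess.run(["./run_flow.sh", design, str(clk), str(run_dir)],
                                    capture_output=True, text=True)
            rows.append({"design": design, "clock_ns": clk,
                         "status": "ok" if result.returncode == 0 else "fail",
                         "run_dir": str(run_dir)})

    with open("sweep_results.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["design", "clock_ns", "status", "run_dir"])
        writer.writeheader()
        writer.writerows(rows)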

Emerging AI-Assisted HW FP Verification / Evaluation Framework

Diagram: user / AI intent → RTL / testbench → functional verification → physical design / PPA → ranking / selection (future work)

5 of 11

Why Chameleon Was Needed

  • Open-source EDA stacks are dependency-sensitive; a controlled OS image and stable toolchain versions matter.
  • OpenROAD workloads are CPU- and memory-intensive; useful comparisons require many runs, not a single demo.
  • Parameter sweeps multiply cost quickly: libraries × operators × widths × pipeline depths × clock targets (see the sketch after this list).
  • Reproducibility requires scripted orchestration and consistent machine configuration—not ad hoc GUI steps.
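
As a rough illustration of how the sweep matrix grows (the counts below are invented for illustration, not the project's actual matrix):

    from itertools import product

    # Invented counts, purely to show how the matrix grows.
    libraries   = ["OpenFloat", "HardFloat", "Rial"]
    operators   = ["add", "mul", "fma"]
    widths      = [16, 32, 64]
    pipe_depths = [1, 2, 3, 4]
    clocks_ns   = [2.0, 1.5, 1.0, 0.8, 0.6]

    runs = list(product(libraries, operators, widths, pipe_depths, clocks_ns))
    print(len(runs), "place-and-route runs")  # 3 * 3 * 3 * 4 * 5 = 540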

Infrastructure Requirements for Implementation-Aware Evaluation

Diagram: compute_skylake node (24 Xeon cores, 187 GB RAM, Ubuntu 22.04) running OpenROAD place-and-route

6 of 11

Workflow on Chameleon

  • RTL input - Generated or handwritten designs (OpenFloat, HardFloat, Rial)
  • Functional verification - Validate correctness before physical evaluation
  • Synthesis & place-and-route - Map designs to hardware and evaluate timing, area, and feasibility
  • Batch experimentation - Run large parameter sweeps across designs, configurations, and targets
  • Metrics extraction - Convert tool outputs into structured data for comparison and analysis
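
The metrics-extraction step amounts to pulling a few numbers (worst slack, area, and similar) out of per-run reports and emitting one row each. A minimal sketch follows; the report path and regular expressions are placeholders, since actual OpenROAD/ORFS report formats vary across designs and tool versions.

    import csv, pathlib, re

    # Placeholder patterns and paths; real report formats differ across flows and versions.
    SLACK_RE = re.compile(r"worst slack\s+(-?\d+\.\d+)")
    AREA_RE  = re.compile(r"Design area\s+(\d+)")

    rows = []
    for report in pathlib.Path("runs").glob("*/final_report.txt"):
        text  = report.read_text()
        slack = SLACK_RE.search(text)
        area  = AREA_RE.search(text)
        rows.append({"run": report.parent.name,
                     "worst_slack_ns": float(slack.group(1)) if slack else None,
                     "area_um2": int(area.group(1)) if area else None})

    with open("metrics.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["run", "worst_slack_ns", "area_um2"])
        writer.writeheader()
        writer.writerows(rows)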

Implemented Open-Source EDA/PPA Pipeline

7 of 11

Example Output / Enabled Research

  • Chameleon-enabled batch characterization supports scalable post-route evaluation of candidate RTL implementations
  • Physical design metrics expose feasibility and efficiency tradeoffs beyond functional correctness
  • Forms the implementation-aware measurement layer for future AI-guided hardware ranking and refinement

Representative PPA Characterization Results

8 of 11

What Worked Well

  • Environment control: Ubuntu 22.04–class setups compatible with modern ORFS/Yosys builds.
  • Compute headroom: multi-core machines made batch sweeps feasible.
  • Long-running jobs: stable enough for overnight sweeps without babysitting a laptop.
  • Iteration velocity: once scripted, we could rerun after fixing integration issues (constraints, includes, parsing).

What Chameleon Enabled Successfully

9 of 11

Challenges / Remaining Gaps

  • RTL integration surprises: generated SystemVerilog with verification scaffolding (include/macro patterns) does not always synthesize cleanly without preprocessing.
  • Constraint ecosystem mismatch: SDC portability issues and subtle command support differences can break flows in non-obvious ways.
  • Units and reporting inconsistency: timing reported in ps vs ns caused misleading constraint interpretation until it was caught (a small example follows this list).
  • Metric extraction brittleness: log/report formats differ across designs (e.g., purely combinational blocks vs pipelined cores), which breaks naive parsing under strict shell error modes.
  • Expertise tax: getting from “tool runs” to “trustworthy PPA comparisons” still requires EDA fluency.
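
To make the units issue concrete, here is the kind of defensive normalization the parsers needed (a hypothetical helper, not the project's exact code): convert every timing value to one unit before comparing against constraints.

    def slack_to_ns(value: float, unit: str) -> float:
        # Normalize timing values to nanoseconds before comparing against constraints,
        # so a -120 ps slack is not misread as a -120 ns violation.
        factors = {"ps": 1e-3, "ns": 1.0, "us": 1e3}
        if unit not in factors:
            raise ValueError(f"unexpected timing unit: {unit}")
        return value * factors[unit]

    assert abs(slack_to_ns(-120.0, "ps") - (-0.12)) < 1e-12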

Infrastructure Gaps for Researcher-Friendly EDA Workflows

Capability ≠ Accessibility: infrastructure capable of running EDA tools is not yet infrastructure that broader AI/HW researchers can use productively.

10 of 11

Recommendations / Future Platform Opportunities

  • Prevalidated ORFS/Yosys images (pinned versions, known-good smoke tests).
  • Containerized, reproducible PDK/tool flows with documented resource requirements.
  • Guided PPA templates for non-EDA users: “single RTL file → metrics table” with explained constraints.
  • Sweep orchestration primitives: matrix runs, retries, caching, provenance metadata.
  • Experiment tracking tailored to hardware artifacts: RTL hash, tool versions, SDC, seed, machine type.
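
As a sketch of what per-run provenance metadata could look like (field names and layout are suggestions, not an existing Chameleon or ORFS feature), each run could emit a small JSON record alongside its CSV row:

    import hashlib, json, platform, subprocess, time

    def run_manifest(rtl_path, sdc_path, seed):
        # Hash the exact inputs so results can be traced back to a specific RTL + SDC pair.
        rtl_hash = hashlib.sha256(open(rtl_path, "rb").read()).hexdigest()
        sdc_hash = hashlib.sha256(open(sdc_path, "rb").read()).hexdigest()
        try:
            # Assumes yosys is on PATH; record whatever version string it reports.
            tool = subprocess.run(["yosys", "-V"], capture_output=True, text=True).stdout.strip()
        except FileNotFoundError:
            tool = "unknown"
        return {"rtl_sha256": rtl_hash, "sdc_sha256": sdc_hash, "seed": seed,
                "yosys_version": tool, "machine": platform.node(),
                "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())}

    # Example usage (paths are placeholders):
    # json.dump(run_manifest("design.v", "constraints.sdc", seed=1),
    #           open("run_manifest.json", "w"), indent=2)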

Potential Chameleon Support for AI/EDA Hardware Research

11 of 11

Future Directions

  • Near-term: harden the automation with robust parsing, clearer failure diagnostics, and standardized metadata alongside CSV outputs.
  • Research frontier: implementation-aware generation and ranking, formal methods at scale, and FPGA/silicon validation loops, all partly blocked by workflow usability.
  • Community takeaway: feasibility is proven; the remaining gap is reliable usability. Open-source implementation-aware evaluation is possible; making it dependable for non-specialists is the next infrastructure problem.


Toward Broader AI-Assisted Hardware Evaluation