Functional Random Stimulus Generators and Their Applications
Vighnesh Iyer
How is it usually done?
random.seed(1)
txns = []
for i in range(100):
    alu_op = random.choice(list(ALUOP))
    # 32-bit operands: randint's bounds are inclusive, so cap at 2**32 - 1
    op1, op2 = (random.randint(0, 2**32 - 1), random.randint(0, 2**32 - 1))
    txns.append((alu_op, op1, op2))
Adding Ad-Hoc Constraints
alu_op = random.choice(list(ALUOP))
if alu_op == DIVIDE:
    # never divide by zero: op2 starts at 1
    op1, op2 = (random.randint(0, 2**32 - 1), random.randint(1, 2**32 - 1))
elif alu_op in (ADD, SUB):
    if random.randint(0, 1) == 0:
        # force a 32-bit overflow: pick op2 so that op1 + op2 >= 2**32
        # (op1 starts at 1 so op2's range is never empty)
        op1 = random.randint(1, 2**32 - 1)
        op2 = random.randint(2**32 - op1, 2**32 - 1)
    else:
        pass  # default distribution
What About Biasing?
if alu_op in (ADD, SUB):
    bias = random.randint(0, 99)
    if bias in range(0, 50):  # overflow
        op1 = random.randint(1, 2**32 - 1)
        op2 = random.randint(2**32 - op1, 2**32 - 1)
    elif bias in range(50, 90):  # limit values
        op1 = random.choice((0, 2**32 - 1))
        op2 = random.choice((0, 2**32 - 1))
    else:
        pass  # default distribution
Transaction to Transaction Relationships
prev_alu_op = txns[-1][0]
alu_op_bias = random.randint(0, 99)
if prev_alu_op == ADD:
    if alu_op_bias in range(0, 10):
        alu_op = XOR
    elif alu_op_bias in range(10, 99):
        alu_op = SUB
    else:
        alu_op = random.choice(list(ALUOP))
Adding Random Decision Instrumentation
from inspect import getframeinfo, stack

def randint(start, stop) -> int:
    value = random.randint(start, stop)
    # stack()[1] is the caller's frame: report where this decision was made
    print("got value {} from here {}".format(
        value, getframeinfo(stack()[1].frame)))
    return value
prev_alu_op == ADD
  op1 == 2**32-1        10
  op1 == 0              20
  op1 in (0, 2**32-1)   70

alu_op == ADD            4
alu_op == SUB            8
alu_op == XOR            2
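Counts like these can be gathered with a small amount of instrumentation. A minimal self-contained sketch (the `decision_counts` tally and wrapper below are hypothetical helpers, not the talk's code):

```python
import random
from collections import Counter
from inspect import getframeinfo, stack

# Hypothetical tally: (call-site line, drawn value) -> number of times seen
decision_counts = Counter()

def randint(start, stop):
    """Instrumented wrapper that records where each random decision was made."""
    value = random.randint(start, stop)
    caller = getframeinfo(stack()[1].frame)  # stack()[1] is the caller's frame
    decision_counts[(caller.lineno, value)] += 1
    return value

random.seed(0)
for _ in range(100):
    randint(0, 3)

# Every one of the 100 draws is attributed to its call site
assert sum(decision_counts.values()) == 100
```

Summing the counter per condition of interest reproduces tables of the shape shown above.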
Adding Coverage
import itertools
from typing import List, Tuple

def cover(txns: List[Tuple[ALUOP, int, int]]):
    op1 = [x[1] for x in txns]
    extreme_values = (0 in op1, 2**32 - 1 in op1)
    combos = itertools.permutations(list(ALUOP), 2)
    combos = {x: False for x in combos}
    # sliding window over adjacent transactions: mark each (prev, curr) op pair seen
    for prev, curr in zip(txns, txns[1:]):
        if (prev[0], curr[0]) in combos:
            combos[(prev[0], curr[0])] = True
    print(extreme_values, combos)
What’s the Problem?
Separation of Description from Interpretation
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import CenterCrop, Rescaling

inputs = keras.Input(shape=(None, None, 3))
x = CenterCrop(height=150, width=150)(inputs)
x = Rescaling(scale=1.0 / 255)(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
The in-memory Model is a pure description that admits multiple interpretations: print a Summary, Compile/Train it, or export it to another format such as ONNX.
Constrained Random for Chisel
case class A(width: Int) extends Bundle {
val x = UInt(width.W)
val y = UInt(width.W)
}
val aProto = A(8)
val randBundles = Seq.tabulate(10){i => aProto.rand {
(b: A) => (b.x +% b.y === (i+1).U) && (b.x =/= 0.U) && (b.y =/= 0.U)
}}
List(
Left(Unsat()),
Right(A(x=UInt<8>(1), y=UInt<8>(1))), // more bundle literals...
Right(A(x=UInt<8>(2), y=UInt<8>(8)))
)
Issues with Chisel Constrained Random
Randomization in SystemVerilog/UVM
class packet {
rand bit[3:0] addr;
rand bit[3:0] addr2;
constraint addr_range {
addr dist {2 :/ 5, [10:12] :/ 8};
}
constraint unq {unique{addr, addr2};}
}
initial begin
packet p;
p = new();
p.randomize();
end
What’s Missing?
A Functional Random API
case class Example(x: UInt, y: UInt) extends Bundle
val rand: Random[Example] = for {
x <- Random[UInt].range(1, 10)
y <- Random[UInt].range(x, x*2)
} yield Example(x, y)
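A rough Python analogue of this API (a toy `Random` class written for illustration, not an existing library) shows how `flat_map` threads the seed so that `y`'s range can depend on the drawn `x`, mirroring the for-comprehension:

```python
import random

class Random:
    """Toy composable generator: wraps a pure function seed -> (value, next_seed)."""
    def __init__(self, run):
        self.run = run

    @staticmethod
    def range(lo, hi):
        # Uniform draw from [lo, hi); the seed is threaded explicitly
        def step(seed):
            rng = random.Random(seed)
            return rng.randrange(lo, hi), rng.getrandbits(64)
        return Random(step)

    def flat_map(self, f):
        # Run self, then feed its value into f to build the next generator
        def step(seed):
            value, seed2 = self.run(seed)
            return f(value).run(seed2)
        return Random(step)

    def map(self, f):
        return self.flat_map(lambda v: Random(lambda s: (f(v), s)))

# x in [1, 10), y in [x, 2*x): y's range depends on the drawn x
pair = Random.range(1, 10).flat_map(
    lambda x: Random.range(x, 2 * x).map(lambda y: (x, y)))

(x, y), _ = pair.run(42)
assert pair.run(42)[0] == (x, y)  # same seed, same result: no hidden state
assert 1 <= x < 10 and x <= y < 2 * x
```

Because `run` is a pure function, generators compose like ordinary values; `flat_map` is exactly the monadic bind discussed later in the deck.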
A Functional Random API (Cont)
Applications
What Do We Need
Prior OSS Work
CRAVE and pyVSC
Randomness in Functional Programming
How is randomness done in purely functional languages?
Purely functional = no side effects, total functions, functions are referentially transparent
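Concretely, a pure generator is just a function from a seed to a (value, next seed) pair; this toy LCG (constants from Numerical Recipes, used purely for illustration) makes the idea explicit:

```python
def next_int(seed):
    """One pure LCG step: seed -> (value, next_seed)."""
    nxt = (1664525 * seed + 1013904223) % 2**32
    return nxt, nxt  # the new state doubles as the drawn value

# Referentially transparent: same input always gives the same output
assert next_int(42) == next_int(42)

# A "stream" of random values is just the state threaded through repeated calls
v1, s1 = next_int(42)
v2, s2 = next_int(s1)
v3, s3 = next_int(s2)
```

No hidden mutable state exists anywhere; reproducibility falls out for free from the explicit state threading.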
Monads and Function Composition
What is a monad?
generalizes function application and composition as a type-specific construct
e.g. Optional, Either
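A quick Python sketch of the idea, using `None` as `Optional`'s empty case (`bind`, `parse_int`, and `half` are illustrative helpers): `bind` applies a function only when a value is present, so a chain short-circuits at the first failure.

```python
def bind(opt, f):
    """Optional's 'and then': apply f if a value is present, else propagate None."""
    return None if opt is None else f(opt)

def parse_int(s):
    return int(s) if s.isdigit() else None

def half(n):
    return n // 2 if n % 2 == 0 else None

assert bind(bind("8", parse_int), half) == 4
assert bind(bind("9", parse_int), half) is None   # odd: half fails
assert bind(bind("x", parse_int), half) is None   # parse fails, half never runs
```

The monad packages this "apply inside the wrapper" step so callers compose steps without manually checking the failure case at each stage.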
The Random Monad (in software)
scalaz example
Haskell example