Adam takes design specs as its input. At a minimum, a design spec can be the document describing the chip's high-level architecture; it can also include more detailed micro-architecture documents.
Within the design spec, Adam processes a wide range of elements, including textual descriptions, truth tables, timing diagrams, and command sequences. The formal model constructs an internal representation of flow and timing from this input.
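To make this concrete, here is a minimal sketch of what an internal flow-and-timing representation could look like: spec elements plus timing edges between them. All names (SpecElement, TimingEdge, FlowGraph, etc.) are hypothetical illustrations, not Adam's actual data model.

```python
# Hypothetical sketch of an internal flow/timing representation; not Adam's real schema.
from dataclasses import dataclass, field
from enum import Enum, auto


class ElementKind(Enum):
    TEXT = auto()             # textual description
    TRUTH_TABLE = auto()      # truth table
    TIMING_DIAGRAM = auto()   # timing diagram
    COMMAND_SEQUENCE = auto() # command sequence


@dataclass
class SpecElement:
    element_id: str
    kind: ElementKind
    source_ref: str           # pointer back into the design spec (section/page)


@dataclass
class TimingEdge:
    src: str                  # element_id of the preceding event
    dst: str                  # element_id of the following event
    min_cycles: int = 0       # earliest allowed separation
    max_cycles: int | None = None  # latest allowed separation; None = unbounded


@dataclass
class FlowGraph:
    elements: dict[str, SpecElement] = field(default_factory=dict)
    edges: list[TimingEdge] = field(default_factory=list)

    def add_element(self, element: SpecElement) -> None:
        self.elements[element.element_id] = element

    def add_edge(self, edge: TimingEdge) -> None:
        self.edges.append(edge)
```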
In the future, as Adam evolves to support more complex workflows from verification engineers, we expect it to handle incomplete specs as well as multiple versions of a spec.
The test plan is presented as a structured document, split into individual test items. For each test item, Adam describes what will be tested, links back to the source in the spec, and enumerates a list of test cases (including the rationale and pseudocode for each). In addition, the test plan can be exported as a .csv file.
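As an illustration, a test plan with this shape could be flattened to .csv roughly as sketched below. The field names and column layout are assumptions for the example, not the exact schema Adam produces.

```python
# Illustrative sketch of a test-plan structure and its .csv export; column names are assumptions.
import csv
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    rationale: str
    pseudocode: str


@dataclass
class TestItem:
    item_id: str
    description: str        # what will be tested
    spec_link: str          # link to the source in the design spec
    test_cases: list[TestCase]


def export_csv(items: list[TestItem], path: str) -> None:
    """Flatten test items into one row per test case."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["item_id", "description", "spec_link",
                         "test_case", "rationale", "pseudocode"])
        for item in items:
            for tc in item.test_cases:
                writer.writerow([item.item_id, item.description, item.spec_link,
                                 tc.name, tc.rationale, tc.pseudocode])
```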
We use a combination of human expertise and AI assessment. Verification engineers initially review a sample of the generated test plan to establish a grading standard. These judgments train an AI-based quality scoring system that we apply to the rest of the outputs. We continuously incorporate feedback from customers—especially those in regulated industries—to refine our quality assurance processes.
We are working towards achieving SOC 2 compliance. If your company requires other specific certifications or compliance guarantees, we are open to accommodating where possible. Security and data protection are top priorities for us, and we will collaborate with you to meet your standards.
At the moment, we are focused on auto-generating the test plan. In the future, we will expand our formal model’s capabilities to generate UVM-compatible testbench code and link it to the test plan.