Adam, a Normal EDA product
Automated test plans for the most complex chip designs.
The chip industry is facing a silicon complexity crisis: growing chip complexity lengthens time to market and limits the design ideas and scope available to us.
These challenges are compounded by verification being a manual and iterative process. Adam is the verification workflow platform here to change that.
The case for AI reasoning in semiconductors
What is Adam?
Adam is a multimodal AI-powered tool that writes complete test plans directly from chip design specifications. Adam is part of the Normal Electronic Design Automation (EDA) workflow platform.
Book a demo
A
Test plan generation
With Adam, you can take any design spec and automatically generate thousands of tests. Adam can be integrated with your existing tooling workflows and iterates with you based on human feedback.
B
Human-in-the-loop
Adam augments verification engineers by offering a human-in-the-loop feedback system, in which engineers review results, identify overlooked edge cases, and guide improvements.
C
Smart organization
Adam cites each test to its source in the spec, letting you filter by spec section and feature. Additional filters by test category, confidence level, and approval status make test plan prioritization and review efficient.
Technology
Powered by autoformalization.
Book a demo
A
Autoformalization
Unlike other AI software, Adam is backed by our foundational “auto-formalizing AI,” which autonomously builds a mathematical model from the device specification, creating a logically rigorous ontology powering industry-leading functional verification.
B
Multi-modal understanding
Adam uses purpose-built multimodal AI to interpret graphical specifications, such as timing diagrams and waveform images. The result is a set of stimuli and checks derived directly from the non-textual sources, ensuring comprehensive coverage.
C
Security and infrastructure
Normal treats your data as confidential. We use Normal-owned models that we host and train on isolated infrastructure. We deploy on private or customer-owned clouds to ensure that your proprietary specs never leave trusted environments, and we implement strict access controls, encryption, and monitoring. If you have specific data residency or compliance requirements, we’ll work with you to configure a secure setup that aligns with your internal policies.
D
Partnership
Schedule a time to chat, so we can understand your particular use case. If we determine there's a fit, we'll kick off a prototyping phase and scale from there.
Book a demo
FAQs
Here are answers to the most common questions we receive. If you need more details, feel free to reach out!

1. What are the requirements for the inputs?

Adam takes design specs as its input. At a minimum, a design spec can be the document describing the chip’s high-level architecture; it can also include more detailed micro-architecture documents.

Within the design spec, Adam processes a wide range of elements, including textual descriptions, truth tables, timing diagrams, and command sequences. The formal model constructs an internal representation of flow and timing from this input.

In the future, as Adam evolves to handle more complex workflows from verification engineers, we expect it to handle incomplete specs, as well as multiple versions of specs.

2. In what format is the test plan delivered?

The test plan is presented as a structured document, split into individual test items. For each test item, Adam describes what will be tested, links to the source in the spec, and enumerates a list of test cases (including the rationale and pseudocode for each). In addition, the test plan can be exported as a .csv.
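As a rough illustration of the structure described above (the field names here are assumptions for the sketch, not Adam’s actual export schema), a single test item exported to .csv might look like:

```python
import csv
import io

# Hypothetical test-plan item; every field name below is illustrative
# only and not Adam's actual schema.
test_items = [
    {
        "test_id": "TP-001",
        "description": "Verify the FIFO asserts its full flag at max depth",
        "spec_section": "4.2 FIFO Control",
        "category": "functional",
        "confidence": "high",
        "approval_status": "pending",
        "rationale": "The spec requires the full flag to assert one cycle "
                     "after the final write.",
    },
]

# Write the items out as CSV, mirroring the .csv export mentioned above.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(test_items[0].keys()))
writer.writeheader()
writer.writerows(test_items)
print(buffer.getvalue())
```

A row per test item with columns for source citation, category, confidence, and approval status is what would make the filtering described earlier straightforward in a spreadsheet.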

3. How do you ensure the quality of the output?

We use a combination of human expertise and AI assessment. Verification engineers initially review a sample of the generated test plan to establish a grading standard. These judgments train an AI-based quality scoring system that we apply to the rest of the outputs. We continuously incorporate feedback from customers—especially those in regulated industries—to refine our quality assurance processes.

4. What security certifications do you have in place?

We are working towards achieving SOC 2 compliance. If your company requires other specific certifications or compliance guarantees, we are open to accommodating where possible. Security and data protection are top priorities for us, and we will collaborate with you to meet your standards.

5. Do you output testbench code?

At the moment, we are focused on auto-generating the test plan. In the future, we will expand our formal model’s capabilities to generate UVM-compatible testbench code and link it to the test plan.