
Overview of Integrated Circuit Test and Testable Design

Keywords: integrated circuit; chip; automatic test equipment; DFT technology

With the increasing integration of chips, today's IC testing is facing unprecedented challenges:

The test time is getting longer and longer, and it may take several months or even longer to test a million-gate SoC;

The number of test vectors keeps growing, yet coverage remains hard to improve; no one knows how many vectors are needed to cover every device;

The cost of using test equipment is getting higher and higher, which directly affects the cost of the chip.

1. The concept and principle of testing

Integrated circuit (IC) testing is an important and indispensable part of the IC industry chain, running through the entire process from product design to completion. The testing discussed here usually refers to testing after the chip has taped out, defined as applying a known test vector to the circuit under test, observing the output, and comparing it with the known correct output to judge whether the chip's function, performance, and structure are good or bad. Figure 1 illustrates this principle. Conceptually, a test involves three elements: a known test vector, a determined circuit structure, and the known correct output.

10090953.jpg" src="/uploads/chip/collection/image2020-03-10/cc534f32-ea18-4fba-a1d2-74edad103655.jpg" />

Figure 1 The principle of integrated circuit testing

2. Classification of tests and test vectors

1. Classification by test purpose

According to the purpose of testing, integrated circuit testing can be divided into 4 types.

(1) Verification Testing (also known as Design Validation)

When a new chip is designed and produced for the first time, it first undergoes verification testing. At this stage, functional tests are carried out, along with comprehensive AC and DC parameter tests. Through verification testing, design errors can be diagnosed and corrected, the chip's electrical parameters can be measured for the final specification (product manual), and a test flow can be developed.

(2) Manufacturing Testing

Once the chip design has passed verification testing and entered mass production, it is tested using the flow debugged in the previous stage. At this stage, the purpose of testing is to reach a clear pass/fail decision for each chip under test. Since every chip undergoes production testing, test cost is the primary concern at this stage; accordingly, the vector set used for production testing usually contains few functional vectors but must achieve sufficiently high coverage of the modeled faults.

(3) Reliability Testing

Chips that pass production testing are still not all identical; the most typical example is that the service lifetimes of units of the same product differ. Reliability testing ensures product reliability: by raising the supply voltage, extending the test time, and increasing the temperature, unqualified products (for example, those that would fail early) are screened out.

(4) Acceptance Testing

When chips are delivered, the user conducts another round of testing. For example, system integrators test the individual components they purchase before assembling them into a system.

2. Classification by test method

According to the different test methods, test vectors can also be divided into three categories.

(1) Exhaustive Vector

Exhaustive test vectors comprise all possible input vectors. They are characterized by high coverage, up to 100%, but their number is astonishing: a chip with n input ports requires 2^n test vectors to cover all of its possible states. For example, testing the 74181 ALU, which has 14 input ports, requires 2^14 = 16384 test vectors. For a 16-bit ALU with 38 input ports, running all 2^38 vectors at 10 MHz would take 7.64 hours; obviously, such a test is not advisable for mass-produced chips.
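A quick arithmetic check (a minimal Python sketch using only the figures quoted above) reproduces these numbers:

```python
# Back-of-the-envelope check of the exhaustive-test numbers quoted above.
n_74181 = 14                    # input ports on the 74181 ALU
print(2 ** n_74181)             # 16384 exhaustive vectors

n_alu16 = 38                    # input ports on the 16-bit ALU example
rate_hz = 10_000_000            # vectors applied at 10 MHz
seconds = 2 ** n_alu16 / rate_hz
print(f"{seconds / 3600:.2f} hours")  # ~7.64 hours
```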

(2) Functional Vector

Functional test vectors are mainly used in verification testing; their purpose is to verify that each device functions correctly. The number of vectors required is far lower than for exhaustive testing: the 74181 ALU, again as an example, needs only 448 test vectors. However, there is no algorithm to determine whether the vectors cover all of the chip's functions.

(3) Structural Vector

These are test vectors based on a fault model. Their greatest benefit is that electronic design automation (EDA) tools can generate them for a circuit automatically and evaluate the test results effectively. The 74181 ALU requires only 47 such vectors. The disadvantage is that the tools sometimes cannot detect every failure type.

3. Automatic test equipment

Another important concept related to IC testing is Automatic Test Equipment (ATE). ATE applies test vectors and checks outputs automatically, which greatly improves test speed, but it still faces many challenges.

The challenges come mainly from two directions. The first is the demand that different chips place on the same test equipment. Under normal circumstances, 4 to 5 chip products must be tested on the same tester, with test time scheduled in batches. Each design has its own test vectors and test environment, so changing the chip under test requires resetting the tester and reloading the vectors. The second is the demand that huge vector sets place on the performance of the tester itself. The test vector set for a million-gate SoC is now very large, possibly reaching tens of thousands of vectors, and reading these vectors into the tester and initializing it takes a long time. One solution is a test vector loader with a large vector memory; for example, Advantest's W4322 high-speed test vector loading server, which provides 72 GB of storage, can reduce vector loading times by 80%.

4. The concept of testability

Testability is a term that is often used but often misunderstood. Its broad definition is this: testability is generating, evaluating, and running tests to meet a set of test objectives (e.g., fault coverage, test time) under given time and cost constraints. For a specific integrated circuit, the interpretation of this definition varies with the tools used and the state of the art. A narrower definition currently used by industry is that testability is the degree to which testing can detect the manufacturing defects present in a designed product.

1. Design for Testability (DFT)

Design for testability refers to a design process in which the designer, while designing the system and circuits, takes test requirements into account and adds a certain amount of hardware overhead to obtain maximum testability. Simply put, DFT is auxiliary design done for the purpose of fault detection; it serves structural testing based on fault models so that manufacturing faults can be detected. The main DFT methods at present are scan path testing, built-in self-test, and boundary scan testing.

Why is DFT required? Consider first the traditional test approach, shown in Figure 2. There, the designer's responsibility ends at the verification phase: once the designer confirms that the design meets its targets, including timing, power, and area, the work is done. Testers then take over the baton and begin developing appropriate test programs and sufficient test patterns to find hidden design and manufacturing errors. However, the designer's intent is rarely communicated to them, so testers must spend much valuable time teasing out design details, and test developers must wait until the test programs and patterns have been validated and debugged before knowing whether their earlier efforts were effective. With the traditional approach, the tester has no choice but to wait for tape-out before the expensive automatic test equipment (ATE) can be used. The result is an extended design-test cycle, full of delays and inefficient communication.

Figure 2 Traditional design and testing process

Since the 1980s, the larger semiconductor manufacturers have used DFT technology to reduce test cost and test complexity. Front-end designers today are well aware that, with the right tools and methodology, a little consideration for testing at the earliest stages of design pays off greatly later, as Figure 3 shows. DFT technology is closely linked with modern EDA/ATE technology: it greatly reduces the demands testing places on ATE resources, facilitates quality control of integrated circuit products, improves product manufacturability, reduces product test cost, and shortens the product manufacturing cycle.

Figure 3 The current design and test flow

2. Controllability and observability

Controllability and Observability are important concepts in design for testability. Controllability refers to how easily the logic state of a node inside the circuit can be set from the circuit's primary inputs; if an internal node can be driven to any value, the node is said to be controllable. Observability expresses how easily a fault at an internal node can be propagated to an output, by controlling the input variables, so that it can be observed; a node is said to be observable if its value can be propagated to a circuit output and determined there.

The controllability of an integrated circuit can thus be understood as the difficulty of setting a signal to 0 or 1. As shown in Figure 4, a fault at input port A of the AND gate G3 stuck at logic value 1 can be detected by applying the vector 0011 to the peripheral ports B, C, D, E, so the node is considered controllable.

Figure 4 Example of controllability

Observability refers to the difficulty of observing a fault on a given signal. As shown in Figure 5, a stuck-at-1 fault at input port A of G3 can be propagated to the peripheral port Y by applying a 0 vector, and the node is therefore considered observable.

Figure 5 Observability example
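To make the two concepts concrete, here is a minimal Python fault-simulation sketch on a hypothetical two-gate netlist chosen to be consistent with the description (an internal node A = B OR C feeding G3, whose output is Y = A AND (D AND E)); it is not necessarily the exact circuit of Figures 4 and 5. A vector detects the stuck-at-1 fault at A exactly when it controls A to 0 and the discrepancy is observable at Y:

```python
from itertools import product

def circuit(b, c, d, e, a_stuck=None):
    """Hypothetical netlist: internal node A = B OR C, output Y = A AND (D AND E).
    a_stuck injects a stuck-at fault on node A."""
    a = b | c
    if a_stuck is not None:
        a = a_stuck
    return a & (d & e)

# A vector detects "A stuck-at-1" iff it controls A to 0 (controllability)
# and the resulting discrepancy reaches output Y (observability).
detecting = [v for v in product((0, 1), repeat=4)
             if circuit(*v) != circuit(*v, a_stuck=1)]
print(detecting)   # [(0, 0, 1, 1)] -> the vector 0011 from Figure 4
```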

5. Advantages and disadvantages of design for testability

People often ask: why add extra test structures to the original circuit? This question is genuinely difficult to answer. The economics of DFT involves many aspects, including design, testing, manufacturing, and marketing, and different people judge it by different criteria. Design engineers usually worry that the additional DFT circuitry will affect chip performance, while test engineers hold that effective design for testability greatly improves fault coverage. Table 1 lists some of the strengths and weaknesses of design for testability.

Table 1 Advantages and disadvantages of DFT

Many examples from industry have shown that adding test structures does help improve chip yield and thereby greatly reduce the cost of chip manufacturing. And to make up for its remaining defects, DFT technology itself is constantly improving and developing.

6. Commonly used testability design methods

1. Internal scan test design

The main task of internal scan design is to increase the controllability and observability of internal state. For integrated circuits, the practice is to connect the internal sequential storage elements in the form of a shift register, so that input signals can be shifted into the internal storage elements to meet the controllability requirement; likewise, internal state is shifted out to meet the observability requirement. When a chip designed with a scan path works in test mode, a long shift register is formed inside it.

As shown in Figure 6, the scan test tool first turns each ordinary flip-flop into a flip-flop with a scan-enable and a scan-input port, and then connects these flip-flops in series. When scan_enable is deasserted, the circuit works normally; when scan_enable is asserted, values can be shifted into the flip-flops serially from off-chip through the scan_in signal. In this way every on-chip register can be assigned a value, and their values can likewise be read out through scan_out. Tools that support scan test design include Synopsys' DFT Compiler and Mentor's DFTAdvisor.

Figure 6 Scanning test circuit
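The shift-register behavior described above can be sketched in a few lines of Python; the 4-bit chain and the class and method names are illustrative assumptions, not any tool's model:

```python
class ScanChain:
    """Behavioral sketch of a scan chain of flip-flops (assumed 4 bits)."""

    def __init__(self, length=4):
        self.flops = [0] * length

    def clock(self, scan_enable, scan_in=0, d_inputs=None):
        """One clock edge: shift in test mode, load functional data otherwise."""
        if scan_enable:
            # test mode: each flop captures its predecessor, flop 0 captures scan_in
            self.flops = [scan_in] + self.flops[:-1]
        elif d_inputs is not None:
            # normal mode: flops capture the circuit's functional D inputs
            self.flops = list(d_inputs)
        return self.flops[-1]  # scan_out is the last flop in the chain

chain = ScanChain()
for bit in (1, 0, 1, 1):          # serially load a chosen test state
    chain.clock(scan_enable=1, scan_in=bit)
print(chain.flops)                # [1, 1, 0, 1] -> internal registers now set
```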

2. Automatic Test Pattern Generation (ATPG)

ATPG analyzes the structure of the chip and uses a fault model to generate test vectors, performing structural testing to screen out defective chips. Usually the ATPG tool and the scan test tool are used together, completing test vector generation and fault simulation at the same time.

The first step is the choice of fault types. ATPG can handle not only stuck-at faults but also transition delay faults and path delay faults. Once all the fault types to be detected are listed, ATPG orders these faults in a reasonable way, possibly alphabetically, by hierarchy, or randomly.

After the fault types are determined, ATPG decides how to detect each fault. It must consider where to apply the stimulus vector and compute all the controllable points that can affect the target node. Algorithms of this kind include the D algorithm and its relatives.

The last step is finding a propagation path, which is arguably the hardest part of vector generation; much time is spent finding paths that propagate each fault to an observation point. Because a fault usually has many observable points, tools generally choose the closest one. The propagation paths of different target nodes may overlap and conflict, although this does not occur in a scan structure. Tools that support ATPG include Mentor's FastScan and Synopsys' TetraMAX.
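As an illustration of the activate-then-propagate idea (not of the D algorithm itself, which avoids brute force), the following sketch finds a detecting vector for each stuck-at fault on a toy, hypothetical netlist by exhaustive search:

```python
from itertools import product

def evaluate(a, b, c, fault=None):
    """Toy netlist with two internal nets and output n1 XOR n2.
    fault = (net_name, stuck_value) injects a stuck-at fault."""
    nets = {"n1": a & b, "n2": b | c}
    if fault:
        nets[fault[0]] = fault[1]
    return nets["n1"] ^ nets["n2"]

# A vector detects a fault iff the faulty output differs from the good one.
faults = [(net, v) for net in ("n1", "n2") for v in (0, 1)]
for fault in faults:
    test = next((v for v in product((0, 1), repeat=3)
                 if evaluate(*v) != evaluate(*v, fault=fault)), None)
    print(fault, "->", test)   # a detecting vector, or None if undetectable
```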

3. Memory Built-In Self-Test (BIST)

Built-in self-test is a widely used memory testability design method. Its basic idea is that the circuit generates test vectors by itself rather than requiring external vectors, and likewise determines by itself whether the obtained test results are correct. The built-in self-test therefore requires additional circuitry, including a vector generator, a BIST controller, and a response analyzer, as shown in Figure 7. BIST can be applied to storage devices such as RAM, ROM, and flash, and is mainly used for RAM. A large number of memory test algorithms are based on fault models; commonly used ones are the checkerboard algorithm and the March algorithms (a sketch of a March pass follows the tool list below).

Figure 7 Basic structure of BIST

Tools that support BIST are Mentor's mBISTArchitect and Synopsys' SoCBIST.
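For reference, the element sequence of the widely used March C- algorithm can be sketched against a simulated fault-free RAM; a hardware BIST controller would realize the same sequence in logic, and the 16-word memory here is an arbitrary assumption:

```python
def march_c_minus(ram):
    """Run the March C- element sequence over a RAM modeled as a list."""
    n = len(ram)
    up, down = range(n), range(n - 1, -1, -1)

    def element(order, expect, write):
        # One March element: visit each cell, read and check, then write.
        for addr in order:
            if expect is not None:
                assert ram[addr] == expect, f"fault at address {addr}"
            if write is not None:
                ram[addr] = write

    element(up,   None, 0)     # (w0)        initialize all cells to 0
    element(up,   0,    1)     # up(r0, w1)
    element(up,   1,    0)     # up(r1, w0)
    element(down, 0,    1)     # down(r0, w1)
    element(down, 1,    0)     # down(r1, w0)
    element(down, 0,    None)  # down(r0)    final read

march_c_minus([0] * 16)        # passes silently on a fault-free memory
print("March C- completed")
```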

4. Boundary Scan

The principle of boundary scan is to add a register cell to each input and output port of the core logic circuit. By connecting these I/O registers together, data can be shifted serially into the unit under test and read back serially from the corresponding port. In this way, chip-level, board-level, and system-level testing can all be achieved. The most important function is board-level interconnect testing between chips, as shown in Figure 8.

Figure 8 Board-level testing with boundary scan

Boundary scan is a solution proposed by the Joint Test Action Group (JTAG), an organization jointly established by several large European and American companies, to solve the problem of testing the interconnect between chips on a printed circuit board (PCB). Because of the soundness of the scheme, it was adopted by the IEEE in 1990 and became a standard, IEEE 1149.1. The standard specifies boundary scan's test ports, test structure, and operating instructions; its structure is shown in Figure 9. The structure mainly comprises the TAP controller and a register group, the latter including the boundary scan register, bypass register, device identification register, and instruction register. The main ports are TCK, TMS, TDI, and TDO, plus an optional port, TRST.

Automatic design tools that support boundary scan include Mentor's BSD Architect and Synopsys' BSD Compiler.

Figure 9 IEEE 1149.1 structure
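As a closing illustration, the serial TDI-to-TDO path through daisy-chained boundary scan registers can be modeled in a few lines of Python; the TAP controller state machine driven by TCK/TMS is deliberately omitted, and the two 4-cell chains are assumptions for illustration only:

```python
class BoundaryScanRegister:
    """Behavioral sketch of one chip's boundary scan register (BSR)."""

    def __init__(self, n_cells):
        self.cells = [0] * n_cells

    def shift(self, tdi):
        # One TCK in the Shift-DR state: TDI enters the first cell,
        # the last cell falls out toward TDO.
        tdo = self.cells[-1]
        self.cells = [tdi] + self.cells[:-1]
        return tdo

# Two chips on a board share one serial path: TDI -> chip1 -> chip2 -> TDO.
chip1, chip2 = BoundaryScanRegister(4), BoundaryScanRegister(4)
pattern = [1, 0, 1, 1, 0, 0, 1, 0]    # 8 bits fill both registers
for bit in pattern:
    chip2.shift(chip1.shift(bit))      # chip1's TDO feeds chip2's TDI
print(chip1.cells, chip2.cells)        # test data now sits at the I/O cells
```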