ADC0801 series are general purpose 8-bit μP compatible ADC converters

  • 2022-09-23 12:35:41

The ADC0801 series are general-purpose 8-bit μP-compatible ADC converters

The ADC0801, ADC0802, ADC0803, ADC0804, and ADC0805 devices are CMOS 8-bit successive-approximation analog-to-digital converters (ADCs) that can operate stand-alone or interface directly to a microprocessor bus, with an access time of approximately 135 ns. Three-state output latches drive the data bus directly, so the ADCs appear to the microprocessor as memory locations or I/O ports and require no additional interface logic. The logic inputs and outputs meet both MOS and TTL voltage-level specifications, and the devices operate from a single 5 V supply with a zero analog input voltage value. In addition, the reference input can be adjusted so that any smaller analog voltage span achieves the full 8-bit resolution. The devices are available in a 0.3-inch standard-width 20-pin DIP package.

Pin Configuration and Functional Diagram

Features

• Easy interface to all microprocessors, or operates stand-alone

• Compatible with NSC800 and INS8080A derivative control buses

• Differential analog voltage inputs with good common-mode rejection

• Works with a 2.5 V (LM336) voltage reference

• On-chip clock generator

• 0 V to 5 V analog input voltage range with a single 5 V supply

• No zero adjustment required

• 0.3-inch standard-width 20-pin DIP package

• 20-pin molded chip carrier or small outline package

• Operates ratiometrically or with a 5 VDC, 2.5 VDC, or analog-span-adjusted voltage reference

Typical Application Schematic

ADC0801 ±¼ LSB accuracy graph

Detailed description

The ADC0801 series are general-purpose 8-bit μP-compatible ADC converters that operate from a single 5 V supply. To a microprocessor system these devices appear as memory locations or I/O ports, and no additional interface logic is required. The outputs are latched and three-state for easy interfacing to the microprocessor control bus. The converter is designed around a differential potentiometric ladder, equivalent to a 256R network, whose analog switches are sequenced by successive-approximation (SAR) logic. A functional diagram of the converter is shown in the functional block diagram; all of the package pinouts are shown, and the major logic control paths are drawn in heavier lines. The differential analog voltage inputs give good common-mode rejection and allow the analog zero input voltage value to be offset. In addition, the voltage reference input can be adjusted to allow a smaller analog voltage span to be encoded to the full 8 bits of resolution.

Note that an external WR pulse is required to start the first conversion after power-up. Using the SAR logic, the most significant bit is tested first and, after 8 comparisons (64 clock cycles), the 8-bit binary code (1111 1111 = full scale) is transferred to the output latch and the interrupt is asserted (INTR makes a high-to-low transition). A conversion in progress can be interrupted by issuing a second start command. The device can be operated in free-running mode by connecting INTR to the WR input with CS = 0. On the high-to-low transition of the WR input, the internal SAR latches and the shift-register stages are reset; the ADC therefore remains in reset as long as both the CS and WR inputs are held low. Conversion starts 1 to 8 clock periods after at least one of these inputs makes a low-to-high transition.

The conversion is started by bringing CS and WR low at the same time. This sets the start flip-flop (F/F), and the resulting "1" level resets the 8-bit shift register, resets the interrupt (INTR) F/F, and inputs a "1" to the D flip-flop, F/F1, at the input of the 8-bit shift register. The internal clock signal then transfers this "1" to the Q output of F/F1. The AND gate, G1, combines this "1" output with a clock signal to provide a reset signal to the start F/F. If the set signal is no longer present (either WR or CS is a "1"), the start F/F is reset and the 8-bit shift register can then clock in the "1", which starts the conversion process. If the set signal were still present, this reset pulse would have no effect (both outputs of the start F/F would momentarily be at a "1" level) and the 8-bit shift register would remain in reset. This logic therefore allows arbitrarily wide CS and WR signals: the converter starts after at least one of these signals returns high and the internal clock again provides a reset signal for the start F/F.

After the "1" is clocked through the 8-bit shift register (completing the SAR search), it appears as the input to the D-type latch, LATCH 1. As soon as this "1" is output from the shift register, the AND gate, G2, causes the new digital word to transfer to the three-state output latches. When LATCH 1 is subsequently enabled, the Q output makes a high-to-low transition, which sets the INTR F/F. An inverting buffer then supplies the INTR signal.
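
To put the timing above in concrete terms, here is a minimal C sketch of the conversion-time arithmetic: 8 comparisons of 8 clock periods each (64 clocks) plus the 1-to-8-clock start latency. The 640 kHz clock frequency is an assumed typical value, not something stated in this text.

    /* Rough conversion-time estimate for the ADC0801 series.
     * Assumption: f_clk = 640 kHz (a typical clock value, not stated above). */
    #include <stdio.h>

    int main(void)
    {
        const double f_clk         = 640e3;  /* external clock frequency, Hz (assumed)  */
        const double sar_clocks    = 64.0;   /* 8 comparisons x 8 clock periods each    */
        const double start_latency = 8.0;    /* worst-case 1-to-8 clock start delay     */

        double t_conv_us = (sar_clocks + start_latency) / f_clk * 1e6;
        printf("worst-case conversion time ~= %.1f us\n", t_conv_us);  /* ~112.5 us */
        return 0;
    }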

Note that the SET control of the INTR F/F remains low for 8 external clock cycles (the internal clock runs at 1/8 of the external clock frequency). If the data output is continuously enabled (both CS and RD held low), the INTR output will still signal the end of conversion (with a high-to-low transition), because the SET input can control the Q output of the INTR F/F even though the RESET input is constantly at a "1" level in this operating mode. The INTR output therefore remains low for the duration of the SET signal, which is 8 cycles of the external clock (assuming the ADC is not started again during this interval). When operating in free-running or continuous-conversion mode (INTR pin tied to WR and CS tied low), the START F/F is set by the high-to-low transition of the INTR signal. This resets the SHIFT REGISTER, which causes the input of the D-type latch, LATCH 1, to go low. Because the latch enable is still present, the Q output goes high, which allows the INTR F/F to be reset. This reduces the width of the resulting INTR output pulse to only a few propagation delays (approximately 300 ns). When data is to be read, the combination of CS and RD both being low resets the INTR F/F and enables the three-state output latches to provide the 8-bit digital output.
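
As a hedged illustration of the start/convert/read handshake described above, the following C sketch assumes the ADC is memory-mapped so that a write to its address pulses CS and WR low together and a read pulses CS and RD low, and that the INTR pin can be polled through a status bit. The addresses and register names are hypothetical, not taken from the text.

    /* Polled conversion using a memory-mapped ADC0801 (addresses are hypothetical). */
    #include <stdint.h>

    #define ADC_PORT   (*(volatile uint8_t *)0x8000u)  /* CS+WR on write, CS+RD on read (assumed map) */
    #define STATUS_REG (*(volatile uint8_t *)0x8001u)  /* bit 0 assumed to mirror the INTR pin        */
    #define INTR_BIT   0x01u

    uint8_t adc0801_read(void)
    {
        ADC_PORT = 0x00;                    /* write: CS and WR low together -> start conversion    */
        while (STATUS_REG & INTR_BIT)       /* wait for INTR high-to-low (end of conversion)        */
            ;
        return ADC_PORT;                    /* read: CS and RD low -> resets INTR, enables outputs  */
    }

For free-running (continuous-conversion) operation as described above, INTR is instead tied to WR with CS grounded, and only the read portion of this routine is needed.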

Functional block diagram

An ideal ADC transfer characteristic (staircase waveform) is shown in Figure 16 and Figure 17. The horizontal scale is the analog input voltage, and particular points are labeled in 1 LSB steps (19.53 mV with 2.5 V tied to the VREF/2 pin). The digital output codes that correspond to these inputs are shown as D−1, D, and D+1. For an ideal ADC, not only will the center-value (A−1, A, A+1, ...) analog inputs produce the correct output codes, but each riser (the transition between adjacent output codes) will be located ±1/2 LSB away from each center value. As shown, the ideal riser has no width, and the ADC provides the correct digital output code for any analog input voltage within ±1/2 LSB of the ideal center value. Each tread (the range of analog input voltage that produces the same digital output code) is therefore 1 LSB wide.

Figure 1 below shows a worst-case error plot for the ADC0801. All center-value inputs are guaranteed to produce the correct output codes, and the adjacent risers are guaranteed to be no closer than ±1/4 LSB to the center-value points. In other words, if we apply an analog input equal to the center value ±1/4 LSB, we guarantee that the ADC will produce the correct digital code. The maximum range of the code-transition position is indicated by the horizontal arrow, and it is guaranteed to be no more than 1/2 LSB. The error curve of Figure 2 below shows a worst-case error plot for the ADC0802. Here we guarantee that if we apply an analog input equal to the LSB analog voltage center value, the ADC will produce the correct digital code.
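
To make the ±1/4 LSB and ±1/2 LSB figures concrete, the short C sketch below computes the ideal center-value voltage for an output code (1 LSB = 5 V / 256 ≈ 19.53 mV with 2.5 V on VREF/2) and expresses a measured voltage deviation in LSBs. The function names are illustrative only.

    #include <stdio.h>

    #define FULL_SCALE_V 5.0
    #define LSB_V        (FULL_SCALE_V / 256.0)   /* 19.53 mV per LSB with VREF/2 = 2.5 V */

    /* Ideal center-value analog input for a given 8-bit output code. */
    static double center_value(unsigned code) { return code * LSB_V; }

    /* Deviation of a measured voltage from the ideal value, in LSBs. */
    static double error_lsb(double measured_v, double ideal_v) { return (measured_v - ideal_v) / LSB_V; }

    int main(void)
    {
        double ideal = center_value(128);                       /* mid-scale: 2.500 V ideally       */
        printf("code 128 center value = %.5f V\n", ideal);
        printf("error = %+.2f LSB\n", error_lsb(2.505, ideal)); /* 5 mV above ideal ~= +0.26 LSB    */
        return 0;
    }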

The digital control inputs (CS, RD, and WR) meet standard TTL logic voltage levels. These signals are renamed relative to the conventional ADC Start and Output Enable labels, and they are active low for easy connection to the microprocessor control bus. For non-microprocessor applications, the CS input (pin 1) can be grounded; the standard ADC Start function is then obtained by applying a low pulse to the WR input (pin 3), and the Output Enable function by pulling the active-low RD input (pin 2) low. Because of the internal switching action, displacement currents flow at the analog inputs. These are due to on-chip stray capacitance to ground, as shown in the figure below.

There are many degrees of complexity associated with testing an ADC. One of the simplest tests is to apply a known analog input voltage to the converter and use LEDs to display the resulting digital output code, as shown in the figure below. For ease of testing, the VREF/2 input (pin 9) should be supplied with 2.560 VDC and a VCC supply voltage of 5.12 VDC should be used. This provides an LSB value of 20 mV. To adjust for full scale, an analog input voltage of 5.090 VDC (5.120 − 1½ LSB) should be applied to the VIN(+) pin, with the VIN(−) pin grounded. The VREF/2 input voltage should then be adjusted until the digital output code is just changing from 1111 1110 to 1111 1111, and this value of VREF/2 should be used for all subsequent tests. The digital-output LED display can be decoded by splitting the 8 bits into the 4 most-significant (MS) bits and the 4 least-significant (LS) bits. Table 1 shows the fractional binary equivalents of these two 4-bit groups. By adding the voltages obtained from the "VMS" and "VLS" columns of Table 1, the nominal displayed value (when VREF/2 = 2.560 V) can be determined. For example, if the output LEDs display 1011 0110 (B6 in hex), the voltage values from the table are 3.520 + 0.120, or 3.640 VDC. These voltage values represent the ideal center values of a perfect ADC. The effect of quantization error must be taken into account when interpreting the test results.
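
The Table 1 decoding can be illustrated with a short C sketch. With VREF/2 = 2.560 V (1 LSB = 20 mV), it splits the 8-bit code into its MS and LS hex characters and sums their nominal voltage contributions, reproducing the B6 → 3.640 VDC example above.

    #include <stdio.h>

    #define LSB_V 0.020   /* 20 mV per LSB when VREF/2 = 2.560 V and VCC = 5.12 V */

    int main(void)
    {
        unsigned code = 0xB6;                  /* LED display 1011 0110                 */
        unsigned ms   = (code >> 4) & 0x0F;    /* most-significant hex character        */
        unsigned ls   = code & 0x0F;           /* least-significant hex character       */

        double v_ms = ms * 16 * LSB_V;         /* MS-nibble contribution: 3.520 V       */
        double v_ls = ls * LSB_V;              /* LS-nibble contribution: 0.120 V       */

        printf("code %X%X -> %.3f + %.3f = %.3f VDC\n", ms, ls, v_ms, v_ls, v_ms + v_ls);
        return 0;
    }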

For higher-speed test systems, or to obtain plotted data, a digital-to-analog converter is required in the test setup. An accurate 10-bit DAC can serve as the precision voltage source for the ADC. The error of the ADC under test can be expressed either as an analog voltage or as the difference between two digital words. To express the error as an analog voltage, two op amps can be used to generate the difference voltage or, if a lab DVM with numerical subtraction is available, the difference can be read out directly. The analog input voltage can be supplied by a low-frequency ramp generator, and an X-Y plotter can be used to plot the analog error (Y axis) against the analog input (X axis). For operation with a microprocessor- or computer-based test system, it is more convenient to present the errors digitally. This can be done with the circuit shown in the figure below, where the output code transitions are detected as the 10-bit DAC is incremented. This provides 1/4 LSB steps for the 8-bit ADC under test. If the results of this test are automatically plotted with the analog input on the X axis and the error (in LSBs) on the Y axis, a useful transfer-function plot of the ADC under test results. For acceptance testing, the plot is not necessary, and test speed can be increased by establishing internal limits on the allowed error for each code.
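
A sketch of the computer-based error test described above is shown below in C: the 10-bit DAC is stepped one count (1/4 LSB of the 8-bit ADC) at a time, each output-code transition is detected, and its error is reported in LSBs. The dac_write()/adc_read() routines here only simulate the hardware (with a deliberate small offset so the error column is non-zero); a real test would replace them with drivers for the actual DAC and the ADC under test.

    #include <stdio.h>

    /* Simulated test hardware (placeholders for the real 10-bit DAC and ADC).
     * The simulated ADC reads one DAC count (1/4 LSB) high so the errors are visible. */
    static unsigned dac_code;
    static void dac_write(unsigned code) { dac_code = code; }
    static unsigned adc_read(void)
    {
        int v    = (int)dac_code + 1;      /* +1 DAC count = +1/4 LSB offset (simulated)  */
        int code = (v + 2) / 4;            /* ideal 8-bit quantizer with risers at 4n - 2 */
        return (code > 255) ? 255u : (unsigned)code;
    }

    int main(void)
    {
        dac_write(0);
        unsigned prev = adc_read();

        /* Each 10-bit DAC count is 1/4 LSB of the 8-bit ADC under test. */
        for (unsigned dac = 1; dac < 1024; dac++) {
            dac_write(dac);
            unsigned code = adc_read();
            if (code != prev) {                       /* output-code transition detected     */
                double ideal = 4.0 * code - 2.0;      /* ideal riser position in DAC counts  */
                double err   = (dac - ideal) / 4.0;   /* transition error in LSBs            */
                printf("riser to %3u at DAC %4u, error %+.2f LSB\n", code, dac, err);
                prev = code;
            }
        }
        return 0;
    }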

To discuss interfacing with the 8080A and 6800 microprocessors, a common sample subroutine structure is used. The microprocessor starts the ADC, reads and stores the results of 16 successive conversions, and then returns to the user's program. The 16 data bytes are stored in 16 consecutive memory locations. All data and addresses are given in hexadecimal. Software and hardware details are provided separately for each type of microprocessor.
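
The 8080A and 6800 listings differ in detail, but the shared subroutine structure can be sketched in C as follows; the adc0801_read() routine (start the converter, wait for INTR, read the byte) is the hypothetical helper from the earlier memory-mapped sketch, not part of the original listings.

    #include <stdint.h>

    extern uint8_t adc0801_read(void);   /* hypothetical helper: start a conversion,
                                            wait for INTR, and return the 8-bit result */

    /* Sample subroutine structure: store the results of 16 consecutive
     * conversions in 16 consecutive memory locations, then return. */
    void sample16(uint8_t buf[16])
    {
        for (int i = 0; i < 16; i++)
            buf[i] = adc0801_read();
    }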