What do electronic engineers need to know about noise?

Noise is unwanted interference that reduces the accuracy of the desired signal. To analyze the effect of noise on a system, we need a basic understanding of its behavior.

In this article, we will try to gain a deeper understanding of some of the most important characteristics of the noise sources that electronic engineers typically have to deal with.

Random noise is, by definition, a random signal: its instantaneous amplitude cannot be predicted from its previous values. Figure 1 shows an example.

Figure 1. An example of a random noise waveform.

If the instantaneous amplitude of the noise is unknown, how can we determine its effect on the system output? Although the instantaneous amplitude is unpredictable, there are other properties of noise waveforms that can be predicted. This is at least true for the noise sources we typically have to deal with in circuit design and analysis.

Let's see which properties are predictable and how analyzing them can help us.

The Noise Amplitude Histogram

The first step in characterizing a noise source can be to estimate how often a given amplitude value is likely to occur. To do this, we take a large number of samples from the noise waveform and create an amplitude histogram.

For example, let's say we took 100,000 samples from a noisy waveform. From the values of these samples, we can determine the range of observed noise amplitudes. We then divide the entire range of possible values into multiple consecutive, non-overlapping amplitude intervals, called bins. The bins of a histogram are usually equal in width. The height of each bin is the number of samples whose amplitude falls within that bin's interval.

Figure 2 shows a histogram for 100,000 samples of a random variable. In this example, the histogram has 100 bins, and the maximum and minimum sample values are 4.34 and -4.43, respectively.
Figure 2. Amplitude histogram of 100,000 noise samples using 100 bins.
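As a concrete illustration, a histogram like the one above can be built with a few lines of Python. This is a minimal sketch, not the article's actual measurement setup: it assumes Gaussian-distributed samples (drawn with NumPy) standing in for the measured noise record.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for 100,000 measured noise samples; drawing them from a
# standard Gaussian distribution is an assumption for illustration.
samples = rng.standard_normal(100_000)

# Divide the full range of observed amplitudes into 100 equal,
# non-overlapping bins and count the samples falling in each bin.
counts, bin_edges = np.histogram(samples, bins=100)

print("min sample:", samples.min())
print("max sample:", samples.max())
print("tallest bin holds", counts.max(), "samples")
```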

The histogram above shows how often the noise amplitude takes on a certain value over a given time interval. For example, the histogram shows that values near zero are more likely to occur.

The histogram above provides an estimate of the amplitude distribution, i.e., the likelihood of observing a particular amplitude value; however, it is based on one particular experiment in which 100,000 samples were taken. We usually need a likelihood curve that is independent of the sample size. Therefore, we have to normalize the information in Figure 2 in some way.

Obviously, all bin heights should be divided by the same value so that the resulting curve still correctly shows the relative likelihood of different amplitude bins. But what is an appropriate normalization factor? We can divide each bin height by the total number of samples (100,000) to obtain the relative, rather than absolute, number of occurrences in each bin. However, a further modification is required before the curve represents a probability density.

As previously mentioned, the height of a bin represents the total number of noise amplitude values that fall within the continuous range covered by that bin. All of these values are represented by a single number that gives the likelihood of the bin as a whole. While the bin heights in Figure 2 represent these bin likelihoods, in probability theory we use density functions to specify the likelihood of continuous variables. Therefore, for the curve to correctly show a probability density, we should also divide each bin height by the bin width. The normalized curve is a rough estimate of the variable's probability density function (PDF), a very important characteristic of the underlying random process.

We can get the same result with a slightly different approach: according to our measurements, the noise amplitude is between -4.43 and 4.34. In reality, the noise amplitude could take a value outside this range; however, we are using the measured data to estimate the amplitude distribution. For the rough model we are developing, an amplitude between -4.43 and 4.34 is certain to occur (it has a probability of 1).

This probability can be found by calculating the total area under the normalized curve (i.e., the estimated PDF). For the normalized curve to have a total area of 1, we should divide the bin heights by the total area of the original histogram. Since the sum of all bin heights equals the total number of samples, this area is equal to the bin width multiplied by the total number of samples, which is therefore our normalization factor. Applying it yields the estimated PDF shown in Figure 3.

Figure 3. The estimated PDF obtained by normalizing the histogram of Figure 2.
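Continuing the earlier sketch, the normalization step can be written out explicitly. Dividing each bin count by (bin width × total number of samples) gives an estimated PDF whose total area is 1; NumPy's density=True option performs the same normalization, which we can use as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
samples = rng.standard_normal(100_000)  # stand-in noise record (assumed Gaussian)

counts, bin_edges = np.histogram(samples, bins=100)
bin_width = bin_edges[1] - bin_edges[0]

# Normalize by the histogram area: bin width times number of samples.
pdf_estimate = counts / (bin_width * samples.size)

# The total area under the estimated PDF should be 1.
print("total area:", np.sum(pdf_estimate * bin_width))

# np.histogram with density=True applies exactly this normalization.
pdf_check, _ = np.histogram(samples, bins=100, density=True)
print("matches density=True:", np.allclose(pdf_estimate, pdf_check))
```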

Stationarity Assumption

The above discussion is based on a fundamental assumption: that long-term observations of a random process can be used to estimate its distribution function. In other words, we assume that the distribution from which the random signal is drawn does not change over time. In practice, this is not always the case, but it is valid for the noise sources we are interested in. A random process whose statistical properties do not change with time is called stationary.

Mean of a Random Variable

Knowing the PDF of a random variable allows us to estimate its mean. Let's consider a simple example. Suppose a hypothetical random signal X has three possible values: 1, -2, and 3, with probabilities 0.3, 0.6, and 0.1, respectively. How do we find the average of this signal? One way is to estimate the mean by taking a large number of samples of the signal and computing the arithmetic mean of the observations:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

where $N$ represents the total number of samples and $x_i$ represents the $i$th sample. Note that what we get is still an estimate of the mean of the random variable, because the signal is random and we cannot predict its future values. A better way to estimate the mean is to use the probabilities of the different outcomes.

From the probability values given in this example, we can conclude that if this random signal is observed for a long time, it will have a value of 1 for about 30% of the observation time. Likewise, the signal will have the values -2 and 3 for about 60% and 10% of the observation time, respectively. Therefore, we can use the probability of each outcome as a weight for that outcome. We get:

$$E(X) = 1 \times 0.3 + (-2) \times 0.6 + 3 \times 0.1 = -0.6$$

where $E(X)$ represents the expectation of the random variable $X$. The expectation of a random variable can be thought of as an estimate of its sample mean. The expectation of a discrete random variable $X$ is:

$$E(X) = \sum_{x} x \, p(x)$$

where $x$ represents the values that the random variable $X$ can take, and $p(x)$ represents the probability that $X$ takes the value $x$. For a continuous random variable with PDF $f_X(x)$, we have the following equation:

$$E(X) = \int_{-\infty}^{+\infty} x \, f_X(x) \, dx$$
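To make the two estimates concrete, here is a short sketch of the three-valued example above: one estimate averages a large number of simulated samples, the other computes the probability-weighted sum. Both should approach -0.6. The random seed and sample count are arbitrary choices.

```python
import numpy as np

values = np.array([1.0, -2.0, 3.0])
probs = np.array([0.3, 0.6, 0.1])

# Expectation as a probability-weighted sum of the outcomes.
expectation = np.sum(values * probs)
print("E(X) =", expectation)  # -0.6

# Sample-mean estimate: draw many samples and average them.
rng = np.random.default_rng(seed=0)
samples = rng.choice(values, size=100_000, p=probs)
print("sample mean:", samples.mean())  # close to -0.6
```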

As you can see, the PDF allows us to predict the mean value of the noise waveform. The expected value of a random variable is sometimes denoted by $\mu$. We could plug the values from Figure 3 into the above equation to find the expected value for this example; however, visual inspection shows that the estimated PDF is symmetric around zero, so we can predict that this random variable has a mean of zero.

Variance of a Random Variable

Similarly, we can use the PDF of a random variable to estimate its variance. If we have $N$ samples of a random variable, the sample variance can be found using the following equation:

$$\sigma^2 \approx \frac{1}{N}\sum_{i=1}^{N} \left(x_i - \bar{x}\right)^2$$

Using the probability of each outcome as a weight for the squared distance between that outcome and the mean, we get:

$$\sigma^2 = \sum_{x} (x-\mu)^2 \, p(x)$$

For continuous random variables, we have the following equation:

$$\sigma^2 = \int_{-\infty}^{+\infty} (x-\mu)^2 \, f_X(x) \, dx$$


Therefore, the PDF allows us to predict the mean and variance of the noise waveform.
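The same discrete example can be used to check the variance relations. This sketch compares the probability-weighted variance with the sample variance of simulated draws; again, the seed and sample count are arbitrary.

```python
import numpy as np

values = np.array([1.0, -2.0, 3.0])
probs = np.array([0.3, 0.6, 0.1])

mu = np.sum(values * probs)  # E(X) = -0.6

# Variance as the probability-weighted squared distance from the mean.
variance = np.sum(probs * (values - mu) ** 2)
print("var(X) =", variance)  # 3.24

# Sample-variance estimate from simulated observations.
rng = np.random.default_rng(seed=0)
samples = rng.choice(values, size=100_000, p=probs)
print("sample variance:", np.mean((samples - samples.mean()) ** 2))
```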

Variance and Mean Power

For $\mu = 0$, the variance of a continuous random variable simplifies to:

$$\sigma^2 = \int_{-\infty}^{+\infty} x^2 \, f_X(x) \, dx = E(X^2)$$

This is the expectation of the squared value of the noise samples. This quantity is conceptually similar to the equation used to determine the average power of a deterministic signal $s(t)$:

$$P_{avg} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} s^2(t) \, dt$$

The reason the average power is expressed in V² rather than W is that, if we know $P_{avg}$, we can easily calculate the actual power delivered to a given load $R_L$ by dividing $P_{avg}$ by $R_L$. For random signals, we don't know the instantaneous sample values; however, we can use the expectation concept to predict the mean of $X^2$. Therefore, for $\mu = 0$, the variance of the noise waveform estimates the mean power of the noise.
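As a numerical check, the sketch below generates a zero-mean noise record and confirms that its variance matches the mean of the squared samples (the mean power in V²). The noise amplitude and the 50-ohm load are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
noise = 0.5 * rng.standard_normal(100_000)  # zero-mean noise, in volts (assumed)

variance = np.var(noise)              # sigma^2
mean_power_v2 = np.mean(noise ** 2)   # mean of the squared samples, in V^2
print("variance:", variance)
print("mean power (V^2):", mean_power_v2)  # essentially equal for mu = 0

# Actual power delivered to a hypothetical load R_L, in watts.
R_L = 50.0  # ohms, an assumed load
print("power into 50-ohm load (W):", mean_power_v2 / R_L)
```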

As you can see, the PDF allows us to extract some valuable information, such as the mean and mean power of the noise components.

Although we have now been able to estimate the average power of noise, a major question remains: how is the noise power distributed in the frequency domain? The next article in this series will explore this question.

Conclusion

Noise is unwanted interference that reduces the accuracy of the desired signal. To analyze the effect of noise on a system, we need a basic understanding of its behavior. The instantaneous amplitude of noise cannot be predicted; however, we can still develop statistical models for the noise sources of interest. For example, we can estimate the mean and mean power of the noise. This information, along with the noise power spectral density (PSD), is usually sufficient to analyze the effect of noise on circuit performance.