The post Fourier Transform and Inverse Fourier Transform with Examples and Solutions appeared first on Electrical Academia.
If a function f (t) is not periodic and is defined on an infinite interval, we cannot represent it by a Fourier series. It may be possible, however, to consider the function to be periodic with an infinite period. In this section we shall consider this case in a non-rigorous way, but the results may be obtained rigorously provided f (t) satisfies certain general conditions (the Dirichlet conditions, together with absolute integrability on the infinite interval).
Let us begin with the exponential series for a function f_{T }(t) defined to be f (t) for
$-T/2<t<T/2$
The result is
${{f}_{T}}(t)=\sum\limits_{-\infty }^{\infty }{{{c}_{n}}{{e}^{{}^{j2\pi nt}/{}_{T}}}}\text{ }\cdots \text{ (1)}$
Where
\[{{c}_{n}}=\frac{1}{T}\int\limits_{-T/2}^{T/2}{{{f}_{T}}(x)}{{e}^{{}^{-j2\pi nx}/{}_{T}}}dx\text{ }\cdots \text{ }(2)\]
We have replaced ω_{o} by 2π/T and are using the dummy variable x instead of t in the coefficient expression. Our intention is to let T→∞, in which case f_{T }(t) →f (t).
Since the limiting process requires that ω_{o}=2π/T→0, for emphasis we replace 2π/T by ∆ω. Therefore, substituting (2) into (1), we have
$\begin{align} & {{f}_{T}}(t)=\sum\limits_{-\infty }^{\infty }{\left[ \frac{\Delta \omega }{2\pi }\int\limits_{-T/2}^{T/2}{{{f}_{T}}(x)}{{e}^{-jxn\Delta \omega }}dx \right]{{e}^{jtn\Delta \omega }}} \\ & \text{=}\sum\limits_{-\infty }^{\infty }{\left[ \frac{1}{2\pi }\int\limits_{-T/2}^{T/2}{{{f}_{T}}(x)}{{e}^{-j(x-t)n\Delta \omega }}dx \right]}\Delta \omega \text{ }\cdots \text{ (3)} \\\end{align}$
If we define the function
\[g(\omega ,t)=\frac{1}{2\pi }\int\limits_{-T/2}^{T/2}{{{f}_{T}}(x)}{{e}^{-j\omega (x-t)}}dx\text{ }\cdots \text{ }(4)\]
Then clearly the limit of (3) is given by
$f(t)=\underset{T\to \infty }{\mathop{\lim }}\,\sum\limits_{n=-\infty }^{\infty }{g(n\Delta \omega ,t)\Delta \omega \text{ }\cdots \text{ (5)}}$
Recognizing (5) as the limit of a Riemann sum, the last result appears to be
$f(t)=\int\limits_{-\infty }^{\infty }{g(\omega ,t)d}\omega \text{ }\cdots \text{ (6)}$
But in the limit, f_{T}→ f and T→∞ in (4) so that what appears to be g (ω, t) in (6) is really its limit, which by (4) is
\[\underset{T\to \infty }{\mathop{\lim }}\,g(\omega ,t)=\frac{1}{2\pi }\int\limits_{-\infty }^{\infty }{f(x)}{{e}^{-j\omega (x-t)}}dx\]
Therefore (6) is actually
$f(t)=\frac{1}{2\pi }\int\limits_{-\infty }^{\infty }{\left[ \int\limits_{-\infty }^{\infty }{f(x)}{{e}^{-j\omega (x-t)}}dx \right]d}\omega \text{ }\cdots \text{ (7)}$
As we said, this is a non-rigorous development, but the results may be obtained rigorously.
Let us rewrite (7) in the form
$f(t)=\frac{1}{2\pi }\int\limits_{-\infty }^{\infty }{\left[ \int\limits_{-\infty }^{\infty }{f(x)}{{e}^{-j\omega x}}dx \right]{{e}^{j\omega t}}d}\omega \text{ }\cdots \text{ (8)}$
Now, let us define the expression in brackets to be the function

$F(j\omega )=\int\limits_{-\infty }^{\infty }{f(t){{e}^{-j\omega t}}dt}\text{ }\cdots \text{ (9)}$

Where we have changed the dummy variable from x to t. Then (8) becomes

$f(t)=\frac{1}{2\pi }\int\limits_{-\infty }^{\infty }{F(j\omega ){{e}^{j\omega t}}d\omega }\text{ }\cdots \text{ (10)}$
The function F (jω) is called the Fourier Transform of f (t), and f (t) is called the inverse Fourier Transform of F (jω). These facts are often stated symbolically as
$\begin{matrix} \begin{align} & F(j\omega )=\Im [f(t)] \\ & f(t)={{\Im }^{-1}}[F(j\omega )] \\\end{align} & \cdots & (11) \\\end{matrix}$
Also, (9) and (10) are collectively called the Fourier Transform Pair, the symbolism for which is
$f(t)\leftrightarrow F(j\omega )\text{ }\cdots \text{ (12)}$
The expression in (7), called the Fourier Integral, is the analogy for a non-periodic f (t) to the Fourier series for a periodic f (t). Equation (10) is, of course, another form of (7). Another description for these analogies is to say that the Fourier Transform is a continuous representation (ω being a continuous variable), whereas the Fourier series is a discrete representation (nω_{o}, for n an integer, being a discrete variable).
As an example, let us find the transform of
$f(t)={{e}^{-at}}u(t)$
where a>0. By definition we have
$\begin{align} & \Im [{{e}^{-at}}u(t)]=\int\limits_{-\infty }^{\infty }{{{e}^{-at}}u(t){{e}^{-j\omega t}}dt} \\ & =\int\limits_{0}^{\infty }{{{e}^{-(a+j\omega )t}}dt} \\\end{align}$
Or
$\Im [{{e}^{-at}}u(t)]=\left. \frac{1}{-(a+j\omega )}{{e}^{-(a+j\omega )t}} \right|_{0}^{\infty }$
The upper limit is given by
$\underset{t\to \infty }{\mathop{\lim }}\,{{e}^{-at}}(\cos \omega t-j\sin \omega t)=0$
This follows because the expression in parentheses is bounded while the exponential goes to zero. Thus we have
$\Im [{{e}^{-at}}u(t)]=\frac{1}{(a+j\omega )}$
Or
${{e}^{-at}}u(t)\leftrightarrow \frac{1}{(a+j\omega )}$
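As a quick numerical sanity check of this transform pair (the code is illustrative and not part of the original article), we can approximate the defining integral $\int_0^\infty e^{-at}e^{-j\omega t}dt$ with a midpoint Riemann sum and compare it with the closed form 1/(a+jω):

```python
import cmath

def fourier_transform_numeric(a, w, t_max=60.0, n=200_000):
    """Midpoint-rule approximation of F(jw) = integral of e^{-a t} e^{-j w t} over [0, inf)."""
    dt = t_max / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * dt                      # midpoint of the k-th subinterval
        total += cmath.exp(-(a + 1j * w) * t) * dt
    return total

a, w = 2.0, 3.0                                 # arbitrary test values with a > 0
numeric = fourier_transform_numeric(a, w)
closed_form = 1 / (a + 1j * w)
print(abs(numeric - closed_form))               # discretization error only
```

The truncation at t_max is harmless because the integrand decays like e^{-at}.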
]]>The post Exponential Fourier Series with Solved Example appeared first on Electrical Academia.
$\cos (n{{\omega }_{o}}t)=\frac{1}{2}({{e}^{jn{{\omega }_{o}}t}}+{{e}^{-jn{{\omega }_{o}}t}})$
And
$\sin (n{{\omega }_{o}}t)=\frac{1}{j2}({{e}^{jn{{\omega }_{o}}t}}-{{e}^{-jn{{\omega }_{o}}t}})$
Now, let us substitute these exponential equivalents into the trigonometric Fourier series to obtain the exponential Fourier series expression.
The trigonometric Fourier series can be represented as:
\[f(t)=\frac{{{a}_{o}}}{2}+\sum\limits_{n=1}^{\infty }{(}{{a}_{n}}\cos (n{{\omega }_{o}}t)+{{b}_{n}}\sin (n{{\omega }_{o}}t))\text{ }\cdots \text{ (1)}\]
Where
$\begin{matrix} {{a}_{n}}=\frac{2}{T}\int\limits_{{{t}_{o}}}^{{{t}_{o}}+T}{f(t)cos(n{{\omega }_{o}}t)}dt, & \text{n=0,1,2,}\cdots & {} \\ {} & {} & (2) \\ {{b}_{n}}=\frac{2}{T}\int\limits_{{{t}_{o}}}^{{{t}_{o}}+T}{f(t)\sin (n{{\omega }_{o}}t)}dt, & \text{n=1,2,3,}\cdots & {} \\\end{matrix}\text{ }$
Let us replace the sinusoidal terms in (1) with their exponential equivalents:
$f(t)=\frac{{{a}_{0}}}{2}+\sum\limits_{n=1}^{\infty }{\frac{{{a}_{n}}}{2}({{e}^{jn{{\omega }_{o}}t}}+{{e}^{-jn{{\omega }_{o}}t}})+}\frac{{{b}_{n}}}{2}({{e}^{jn{{\omega }_{o}}t}}-{{e}^{-jn{{\omega }_{o}}t}})$
$f(t)=\frac{{{a}_{0}}}{2}+\sum\limits_{n=1}^{\infty }{\left[ \left( \frac{{{a}_{n}}}{2}-\frac{j{{b}_{n}}}{2} \right){{e}^{jn{{\omega }_{o}}t}}+\left( \frac{{{a}_{n}}}{2}+\frac{j{{b}_{n}}}{2} \right){{e}^{-jn{{\omega }_{o}}t}} \right]}\text{ }\cdots \text{ (3)}$
If we define a new coefficient c_{n} by
${{c}_{n}}=\frac{{{a}_{n}}-j{{b}_{n}}}{2}$
And then substitute for a_{n} and b_{n} from (2), with t_{o}=-T/2, we have
${{c}_{n}}=\frac{1}{2}\left[ \frac{2}{T}\int\limits_{-T/2}^{T/2}{f(t)\cos (n{{\omega }_{o}}t)dt}-\frac{j2}{T}\int\limits_{-T/2}^{T/2}{f(t)\sin (n{{\omega }_{o}}t)dt} \right]$
${{c}_{n}}=\frac{1}{T}\int\limits_{-T/2}^{T/2}{f(t)(cos(n{{\omega }_{o}}t)-j\sin (n{{\omega }_{o}}t))}dt$
So, by Euler's formula
${{e}^{-j\theta }}=\cos \theta -j\sin \theta $
We can simply write,
${{c}_{n}}=\frac{1}{T}\int\limits_{-T/2}^{T/2}{f(t){{e}^{-jn{{\omega }_{o}}t}}}dt\text{ }\cdots \text{ (4)}$
We also observe that the conjugate of c_{n} is given by
$c_{n}^{*}=\frac{{{a}_{n}}+j{{b}_{n}}}{2}=\frac{1}{T}\int\limits_{-T/2}^{T/2}{f(t)(cos(n{{\omega }_{o}}t)+j\sin (n{{\omega }_{o}}t))}dt$
Which is evidently c_{-n }(c_{n} with n replaced by -n). That is,
${{c}_{-n}}=\frac{{{a}_{n}}+j{{b}_{n}}}{2}\text{ }\cdots \text{ (5)}$
Finally, let us observe that
$\frac{{{a}_{0}}}{2}=\frac{1}{T}\int\limits_{-T/2}^{T/2}{f(t)}dt\text{ }$
Which by (4) is
$\frac{{{a}_{0}}}{2}={{c}_{0}}\text{ }\cdots \text{ (6)}$
Summing up, (4), (5) and (6) enable us to write (3) in the form
$\begin{align} & \text{f(t)=}{{c}_{0}}+\sum\limits_{n=1}^{\infty }{{{c}_{n}}}{{e}^{jn{{\omega }_{o}}t}}+\sum\limits_{n=1}^{\infty }{{{c}_{-n}}}{{e}^{-jn{{\omega }_{o}}t}} \\ & =\sum\limits_{n=0}^{\infty }{{{c}_{n}}}{{e}^{jn{{\omega }_{o}}t}}+\sum\limits_{n=-1}^{-\infty }{{{c}_{n}}}{{e}^{jn{{\omega }_{o}}t}} \\\end{align}$
We have combined c_{o} with the first summation and replaced the dummy summation index n by –n in the second summation. The result is more compactly written as

$f(t)=\sum\limits_{n=-\infty }^{\infty }{{{c}_{n}}{{e}^{jn{{\omega }_{o}}t}}}$

Where c_{n} is given by (4). This version of the Fourier series is called the exponential Fourier series and is generally easier to obtain because only one set of coefficients needs to be evaluated.
As an example, let us find the exponential series for the following rectangular wave, given by
$\begin{matrix} \begin{matrix} f(t)=4, \\ =-4, \\ f(t+2)=f(t) \\\end{matrix} & \begin{matrix} 0<t<1 \\ 1<t<2 \\ {} \\\end{matrix} \\\end{matrix}$
With T=2. We have ω_{o}=2π/T= π, and thus by (4)
${{c}_{n}}=\frac{1}{2}\int\limits_{-1}^{1}{f(t){{e}^{-jn\pi t}}}dt\text{ }$
For n≠0 this is
${{c}_{n}}=\frac{1}{2}\int\limits_{-1}^{0}{(-4){{e}^{-jn\pi t}}}dt+\frac{1}{2}\int\limits_{0}^{1}{4{{e}^{-jn\pi t}}}dt\text{ =}\frac{4}{jn\pi }\left[ 1-{{(-1)}^{n}} \right]\text{ }$
For n=0, we have
\[\begin{align} & {{c}_{0}}=\frac{1}{2}\int\limits_{-1}^{1}{f(t)}dt \\ & =\frac{1}{2}\int\limits_{-1}^{0}{(-4)}dt+\frac{1}{2}\int\limits_{0}^{1}{4}dt=0\text{ } \\\end{align}\]
Since c_{n}=0 for n even and c_{n}=8/jnπ for n odd, we may write the exponential series in the form

\[f(t)=\sum\limits_{n\text{ odd}}{\frac{8}{jn\pi }{{e}^{jn\pi t}}}\]

Where the summation is taken over all odd integers n, both positive and negative.
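We can check this series numerically (an illustrative sketch, not from the original article) by summing the exponential terms for odd |n| up to some cutoff and evaluating the partial sum at an interior point, say t = 0.5, where the square wave equals 4:

```python
import cmath, math

def square_wave_partial_sum(t, n_max):
    """Partial exponential Fourier series with c_n = 8/(j n pi) for odd n, 0 otherwise."""
    s = 0j
    for n in range(-n_max, n_max + 1):
        if n % 2 != 0:                           # only odd harmonics contribute
            c_n = 8 / (1j * n * math.pi)
            s += c_n * cmath.exp(1j * n * math.pi * t)
    return s.real                                # imaginary parts cancel in conjugate pairs

approx = square_wave_partial_sum(0.5, 401)
print(approx)                                    # close to 4, the value of f(t) on 0 < t < 1
```

The terms for n and −n combine into real sinusoids, so the partial sum is real up to rounding.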
]]>The post Symmetry Properties of the Fourier series appeared first on Electrical Academia.
A function f (t) which is symmetrical about the vertical axis is said to be an even function and has the property
\[f(t)=f(-t)\]
For all t. That is, we may replace t by –t without changing the function. Examples of even functions are t^{2} and cos t, and a typical even function is shown in figure 1 (a). Evidently, we could fold the figure along the vertical axis, and the two portions of the graph would coincide.
Fig.1 (a): Even Function
A function f (t) which is symmetrical about the origin is said to be an odd function and has the property
$f(t)=-f(-t)$
In other words, replacing t by –t changes only the sign of the function. A typical odd function is shown in figure 1 (b), and other examples are t and sin t. Evidently, we can fold the right half of the figure of an odd function and then rotate it about the t-axis (normally the x-axis) so that it will coincide with the left half.
Fig.1 (b): Odd Function
Now let us see how symmetry properties can help us in determining the Fourier coefficients. Evidently, in figure 1 (a) we may see by inspection that for f (t) an even function we have
$\int\limits_{-a}^{a}{f(t)dt=2}\int\limits_{0}^{a}{f(t)dt}\text{ }\cdots \text{ (3)}$
This is true because the area from –a to 0 is identical to that from 0 to a. This result may also be established analytically by writing
$\begin{align} & \int\limits_{-a}^{a}{f(t)dt=}\int\limits_{-a}^{0}{f(t)dt+\int\limits_{0}^{a}{f(t)dt}} \\ & =-\int\limits_{a}^{0}{f(-\tau )d\tau +\int\limits_{0}^{a}{f(t)dt}} \\ & =\int\limits_{0}^{a}{f(\tau )d\tau +\int\limits_{0}^{a}{f(t)dt}} \\ & =2\int\limits_{0}^{a}{f(t)dt} \\\end{align}$
In the case of f (t) an odd function, it is clear from figure 1(b) that
$\int\limits_{-a}^{a}{f(t)dt=0}\text{ }\cdots \text{ (4)}$
This is true because the area from –a to 0 is precisely the negative of that from 0 to a. This result may also be obtained analytically, as was done for even functions.
In the case of the Fourier coefficients we need to integrate the functions
$g(t)=f(t)\cos (n{{\omega }_{o}}t)\text{ }\cdots \text{ (5)}$
And
$h(t)=f(t)\sin (n{{\omega }_{o}}t)\text{ }\cdots \text{ }(6)$
If f (t) is even, then
$\begin{align} & g(-t)=f(-t)\cos (-n{{\omega }_{o}}t) \\ & =f(t)\cos (n{{\omega }_{o}}t) \\ & =g(t) \\\end{align}$
And
$\begin{align} & h(-t)=f(-t)sin(-n{{\omega }_{o}}t) \\ & =-f(t)sin(n{{\omega }_{o}}t) \\ & =-h(t) \\\end{align}$
Thus g (t) is even and h (t) is odd. Therefore, taking t_{o}=-T/2 in Fourier coefficients which are given below
$\begin{matrix} {{a}_{n}}=\frac{2}{T}\int\limits_{{{t}_{o}}}^{{{t}_{o}}+T}{f(t)cos(n{{\omega }_{o}}t)}dt, & \text{n=0,1,2,}\cdots & {} \\ {} & {} & {} \\ {{b}_{n}}=\frac{2}{T}\int\limits_{{{t}_{o}}}^{{{t}_{o}}+T}{f(t)\sin (n{{\omega }_{o}}t)}dt, & \text{n=1,2,3,}\cdots & {} \\\end{matrix}\text{ }$
We have, for f (t) even,
$\begin{matrix} {{a}_{n}}=\frac{4}{T}\int\limits_{0}^{T/2}{f(t)cos(n{{\omega }_{o}}t)}dt, & \text{n=0,1,2,}\cdots & {} \\ {} & {} & (7) \\ {{b}_{n}}=0, & \text{n=1,2,3,}\cdots & {} \\\end{matrix}\text{ }$
By a similar procedure we may show that if f (t) is odd, then
$\begin{matrix} {{a}_{n}}=0, & \text{n=0,1,2,}\cdots & {} \\ {} & {} & (8) \\ {{b}_{n}}=\frac{4}{T}\int\limits_{0}^{T/2}{f(t)\sin (n{{\omega }_{o}}t)}dt, & \text{n=1,2,3,}\cdots & {} \\\end{matrix}\text{ }$
In either case one set of coefficients is zero, and the other set is obtained by taking twice the integral over half the period, as described by (7) and (8).
In summary, an even function has no sine terms, and an odd function has no constant or cosine terms in its Fourier series. To illustrate further, let us find the Fourier coefficients of
$\begin{matrix} \begin{matrix} f(t)=4, \\ =-4, \\ f(t+2)=f(t) \\\end{matrix} & \begin{matrix} 0<t<1 \\ 1<t<2 \\ {} \\\end{matrix} \\\end{matrix}$
Evidently T=2, and thus ω_{o}=π. The function is odd, and therefore by (8) we have
$\begin{align} & \begin{matrix} {{a}_{n}}=0, & \text{n=0,1,2,}\cdots & {} \\ {} & {} & {} \\ {{b}_{n}}=\frac{4}{2}\int\limits_{0}^{1}{4\sin (n\pi t)}dt, & {} & {} \\\end{matrix} \\ & =\frac{8}{n\pi }[1-{{(-1)}^{n}}] \\ & \text{ } \\\end{align}$
Therefore b_{n}=0 for n even and b_{n}=16/nπ for n odd.
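As a numerical check of the symmetry shortcut (illustrative code, not from the original article), we can compute b_3 for this odd square wave both from the full-period formula and from twice the half-period integral, and confirm that a_3 vanishes:

```python
import math

def integrate(g, lo, hi, n=20_000):
    """Midpoint-rule numerical integration of g over [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

f = lambda t: 4.0 if t > 0 else -4.0     # odd square wave on one period (-1, 1), T = 2
n, w0 = 3, math.pi                       # third harmonic, omega_o = pi
b_full = integrate(lambda t: f(t) * math.sin(n * w0 * t), -1, 1)      # (2/T) = 1
b_half = 2 * integrate(lambda t: f(t) * math.sin(n * w0 * t), 0, 1)   # (4/T) = 2
a_full = integrate(lambda t: f(t) * math.cos(n * w0 * t), -1, 1)
print(b_full, b_half, a_full)            # both b's near 16/(3*pi); a_full near 0
```

Both routes give b_3 = 16/(3π) ≈ 1.6977, and the cosine coefficient integrates to zero, as the symmetry argument predicts.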
It often happens that the function f (t) is not periodic, so we shall consider an alternate method for this case. As is often the case, we may be interested only in f (t) on some finite interval (0, T), in which case we can consider it to be periodic of period T and find its Fourier series. Of course, what we have is not the Fourier series of f (t) but of its periodic extension. We may, however, use the result on the interval of interest and not be concerned with its periodic behavior elsewhere. Alternatively, we may consider (0, T) to be half the period and expand f (t) as either an even or an odd function using (7) or (8). The result, called a half-range sine or cosine series, is easier to obtain and is equally valid in the interval of interest.
]]>The post Trigonometric Fourier Series Solved Examples appeared first on Electrical Academia.
There are many functions that are important in engineering which are not sinusoids or exponentials. A few examples are square waves, saw-tooth waves, and triangular pulses. Indeed, a function may be represented by a set of data points and have no analytical representation given at all. In this tutorial, we shall consider these additional functions and show how we may represent them in terms of our familiar sinusoidal functions. The techniques we shall use were first given in 1822 by the French mathematician and physicist Joseph Fourier.
Let us begin by considering a function f (t) which is periodic of period T; that is,
$f(t)=f(t+T)$
As Fourier showed, if f (t) satisfies a set of rather general conditions, it may be represented by the infinite series of sinusoids
$f(t)=\frac{{{a}_{o}}}{2}+{{a}_{1}}\cos {{\omega }_{o}}t+{{a}_{2}}\cos 2{{\omega }_{o}}t+\cdots +{{b}_{1}}\sin {{\omega }_{o}}t+{{b}_{2}}\sin 2{{\omega }_{o}}t+\cdots $
Or more compactly,
\[f(t)=\frac{{{a}_{o}}}{2}+\sum\limits_{n=1}^{\infty }{(}{{a}_{n}}\cos (n{{\omega }_{o}}t)+{{b}_{n}}\sin (n{{\omega }_{o}}t))\text{ (1)}\]
Where ${{\omega }_{o}}={}^{2\pi }/{}_{T}$ . This series is called the trigonometric Fourier series, or simply the Fourier series, of f (t). The a’s and b’s are called the Fourier coefficients and depend, of course, on f (t).
The coefficients may be determined rather easily by the use of Table 1.
$f(t)$ $\int\limits_{0}^{{}^{2\pi }/{}_{\omega }}{f(t)dt,\text{ }\omega \ne \text{0}}$
$1.\text{ sin(}\omega \text{t+}\alpha \text{),cos(}\omega \text{t+}\alpha \text{)}$ $0$
$2.\text{ sin(n}\omega \text{t+}\alpha \text{),cos(n}\omega \text{t+}\alpha \text{)*}$ $0$
$3.\text{ si}{{\text{n}}^{\text{2}}}\text{(}\omega \text{t+}\alpha \text{),co}{{\text{s}}^{\text{2}}}\text{(}\omega \text{t+}\alpha \text{)}$ ${}^{\pi }/{}_{\omega }$
$4.\text{ sin(m}\omega \text{t+}\alpha \text{)cos(n}\omega \text{t+}\alpha \text{)*}$ $0$
$5.\text{ cos(m}\omega \text{t+}\alpha \text{)cos(n}\omega \text{t+}\beta \text{)*}$ $\left\{ \begin{matrix} 0, & m\ne n \\ \frac{\pi \cos (\alpha -\beta )}{\omega }, & m=n \\\end{matrix} \right.$
Table.1: Integrals of Sinusoidal Functions and Their Products
* m and n are integers
Let us begin by obtaining ${{a}_{o}}$ , which may be done by integrating both sides of equation (1) over a full period; that is,
$\int\limits_{0}^{T}{f(t)dt=}\int\limits_{0}^{T}{\frac{a{}_{o}}{2}dt+}\sum\limits_{n=1}^{\infty }{\int\limits_{0}^{T}{({{a}_{n}}\cos (n{{\omega }_{o}}t)+{{b}_{n}}\sin (n{{\omega }_{o}}t))dt}}$
Since $T={}^{2\pi }/{}_{{{\omega }_{o}}}$ , every term in the summation is zero by entry 2 in Table 1 ($\alpha =0$ ), and therefore we have
${{a}_{o}}=\frac{2}{T}\int\limits_{0}^{T}{f(t)dt\text{ (2)}}$
Next, let us multiply equation (1) through by $\cos (m{{\omega }_{o}}t)$ , where m is an integer, and integrate. This yields
\[\int\limits_{0}^{T}{f(t)\text{cos(m}{{\omega }_{\text{o}}}\text{t)}dt=}\int\limits_{0}^{T}{\frac{a{}_{o}}{2}\text{cos(m}{{\omega }_{\text{o}}}\text{t)}dt+}\sum\limits_{n=1}^{\infty }{{{a}_{n}}\int\limits_{0}^{T}{\text{cos(m}{{\omega }_{\text{o}}}\text{t)}(\cos (n{{\omega }_{o}}t)dt+\sum\limits_{n=1}^{\infty }{{{b}_{n}}}\int\limits_{0}^{T}{\text{cos(m}{{\omega }_{\text{o}}}\text{t)}}\sin (n{{\omega }_{o}}t))dt}}\]
By entries 2, 4 and 5 of Table 1 ($for\text{ }\alpha =\beta =0$ ), every term in the right member is zero except the term where n=m in the first summation. This term is given by
\[{{a}_{m}}\int\limits_{0}^{T}{{{\cos }^{2}}(m{{\omega }_{o}}t)}dt=\frac{\pi }{{{\omega }_{o}}}{{a}_{m}}=\frac{T}{2}{{a}_{m}}\]
So that
${{a}_{m}}=\frac{2}{T}\int\limits_{0}^{T}{f(t)cos(m{{\omega }_{o}}t)}dt,\text{ m=1,2,3,}\cdots \text{ (3)}$
Finally, multiplying by $\sin (m{{\omega }_{o}}t)$ , integrating, and applying Table 1, we have
${{b}_{m}}=\frac{2}{T}\int\limits_{0}^{T}{f(t)\sin (m{{\omega }_{o}}t)}dt,\text{ m=1,2,3,}\cdots \text{ (4)}$
We note that equation (2) is the special case, m=0, of equation (3) (which is why we used ${}^{{{a}_{o}}}/{}_{2}$ instead of ${{a}_{o}}$ for the constant term). Also, as the reader may easily show, we may integrate over any interval of length T, such as ${{t}_{o}}\text{ to }{{t}_{o}}\text{+T}$ , for arbitrary ${{t}_{o}}$, and the results will be the same. Therefore we may summarize by giving the Fourier coefficients in the form
$\begin{matrix} {{a}_{n}}=\frac{2}{T}\int\limits_{{{t}_{o}}}^{{{t}_{o}}+T}{f(t)cos(n{{\omega }_{o}}t)}dt, & \text{n=0,1,2,}\cdots & {} \\ {} & {} & (5) \\ {{b}_{n}}=\frac{2}{T}\int\limits_{{{t}_{o}}}^{{{t}_{o}}+T}{f(t)\sin (n{{\omega }_{o}}t)}dt, &\text{n=1,2,3,}\cdots & {} \\\end{matrix}\text{ }$
We have replaced the dummy subscript m by n to correspond to the notation of equation (1). The term \[({{a}_{n}}\cos (n{{\omega }_{o}}t)+{{b}_{n}}\sin (n{{\omega }_{o}}t))\] in equation (1) is sometimes called the nth harmonic. The case n=1 is the first harmonic, or fundamental, with fundamental frequency ${{\omega }_{o}}$ . The case n=2 is the second harmonic with frequency $2{{\omega }_{o}}$, and so forth. The term ${}^{{{a}_{o}}}/{}_{2}$ is the constant, or dc, component.
The conditions under which equation (1) is the Fourier series representing f(t), where the Fourier coefficients are given by equation (5), are, as we have said, quite general and hold for almost any function we are likely to encounter in engineering. For the reader’s information, they are the Dirichlet conditions: f(t) must be single-valued, have at most a finite number of maxima, minima, and discontinuities in one period, and be absolutely integrable over a period.
As an example, consider f(t) is the saw-tooth wave as shown in figure 1,
Fig.1: Saw-Tooth Wave
Given by
\[\begin{matrix} f(t)=t, & -\pi <t<\pi \\\end{matrix}\text{ (6)}\]
$f(t+2\pi )=f(t)$
Since $T=2\pi $ , we have ${{\omega }_{o}}={}^{2\pi }/{}_{T}=1$ . If we choose ${{t}_{o}}=-\pi $ , then the first equation of (5) for n=0 yields,
${{a}_{o}}=\frac{1}{\pi }\int\limits_{-\pi }^{\pi }{t\text{ }dt=0}$
For n=1, 2, 3, ⋯, we have
\[{{a}_{n}}=\frac{1}{\pi }\int\limits_{-\pi }^{\pi }{t\cos (nt)dt}\]
\[=\frac{1}{{{n}^{2}}\pi }\left. (cos(nt)+ntsin(nt)) \right|_{-\pi }^{\pi }=0\]
And
${{b}_{n}}=\frac{1}{\pi }\int\limits_{-\pi }^{\pi }{t\sin (nt)dt}$
\[=\frac{1}{{{n}^{2}}\pi }\left. (sin(nt)-nt\cos (nt)) \right|_{-\pi }^{\pi }\]
\[=-\frac{2\cos (n\pi )}{n}=\frac{2{{(-1)}^{n+1}}}{n}\]
The case n=0 had to be considered separately because of the appearance of n^{2} in the denominator in the general case.
From our results, the Fourier series for equation (6) can be obtained by putting all the coefficients in equation (1):
\[f(t)=2(\frac{\sin t}{1}-\frac{\sin 2t}{2}+\frac{\sin 3t}{3}-\cdots )\text{ (7)}\]
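We can verify this series numerically (an illustrative check, not part of the original article) by summing many terms of (7) at an interior point, say t = 1, where the sawtooth equals 1:

```python
import math

def sawtooth_series(t, n_max):
    """Partial sum of f(t) = 2(sin t - sin 2t/2 + sin 3t/3 - ...)."""
    return 2 * sum((-1) ** (n + 1) * math.sin(n * t) / n for n in range(1, n_max + 1))

approx = sawtooth_series(1.0, 2000)
print(approx)                      # close to 1.0 = f(1) for the sawtooth wave
```

Convergence is slow (coefficients decay like 1/n), which is why a few thousand terms are used; near the discontinuities at t = ±π the partial sums also exhibit the Gibbs overshoot.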
As another example, suppose we have
$\begin{matrix} f(t) & =0, & -2<t<-1 \\ {} & =6, & -1<t<1 \\ {} & =0, & 1<t<2 \\\end{matrix}$
$f(t+4)=f(t)$
Evidently, T=4, and ${{\omega }_{o}}={}^{2\pi }/{}_{T}=\frac{\pi }{2}$ . If we take ${{t}_{o}}=0$ in equation (5), we must break each integral into three parts, since on the interval from 0 to 4, f (t) takes the values 0, 6, and 0. If ${{t}_{o}}=-1$ , we only have to divide the integral into two parts, since f (t)=6 on -1 to 1 and f (t)=0 on 1 to 3. Therefore let us choose this value of ${{t}_{o}}$ and obtain
${{a}_{o}}=\frac{2}{4}\int\limits_{-1}^{1}{6\text{ }dt}+\frac{2}{4}\int\limits_{1}^{3}{0\text{ }dt}=6$
Also we have
\[{{a}_{n}}=\frac{2}{4}\int\limits_{-1}^{1}{6cos(\frac{n\pi t}{2})dt}+\frac{2}{4}\int\limits_{1}^{3}{0cos(\frac{n\pi t}{2})dt}\]
\[=\frac{12}{n\pi }\sin (\frac{n\pi }{2})\]
And, finally,
\[{{b}_{n}}=\frac{2}{4}\int\limits_{-1}^{1}{6\sin (\frac{n\pi t}{2})dt}+\frac{2}{4}\int\limits_{1}^{3}{0\sin (\frac{n\pi t}{2})dt}\]
$=0$
Thus the Fourier series is
\[f(t)=3+\frac{12}{\pi }\left( cos(\frac{\pi t}{2})-\frac{1}{3}cos(\frac{3\pi t}{2})+\frac{1}{5}cos(\frac{5\pi t}{2})-\cdots \right)\]
Since there are no even harmonics, we may put the results in the more compact form
\[f(t)=3+\frac{12}{\pi }\sum\limits_{n=1}^{\infty }{\frac{{{(-1)}^{n+1}}\cos \{[(2n-1)\pi t]/2\}}{2n-1}}\]
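As a numerical spot check (illustrative code, not from the original article), the compact form can be evaluated at t = 0, where the pulse train equals 6. The series reduces there to the alternating sum 1 − 1/3 + 1/5 − ⋯ = π/4, so the partial sums should approach 3 + 3 = 6:

```python
import math

def pulse_series(t, n_max):
    """Partial sum of f(t) = 3 + (12/pi) * sum of (-1)^{n+1} cos((2n-1) pi t / 2)/(2n-1)."""
    s = sum((-1) ** (n + 1) * math.cos((2 * n - 1) * math.pi * t / 2) / (2 * n - 1)
            for n in range(1, n_max + 1))
    return 3 + 12 / math.pi * s

approx = pulse_series(0.0, 500)
print(approx)                      # close to 6, the value of f(t) on -1 < t < 1
```

Because the series at t = 0 is alternating, the error is bounded by the first omitted term, about 12/(π·999) here.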
There are a number of reasons why Fourier series analysis is important in the study of signals and systems.
]]>The post Classification of Systems in Signals and Systems appeared first on Electrical Academia.
Before proceeding to more detailed consideration of methods for solving the equations corresponding to the mathematical models of systems, it is essential to define more precisely some of the terms used in describing and specifying the systems. To this end it is useful to consider a very general mathematical model that encompasses a wide range of systems. Such a model for a system having an input x (t) and output y (t) is
\[\begin{matrix} {{a}_{n}}(t)\frac{{{d}^{n}}y}{d{{t}^{n}}}+{{a}_{n-1}}(t)\frac{{{d}^{n-1}}y}{d{{t}^{n-1}}}+\cdots +{{a}_{1}}(t)\frac{dy}{dt}+{{a}_{o}}(t)y= & {} & {} \\ {} & {} & \text{(1)} \\ {{b}_{m}}(t)\frac{{{d}^{m}}x}{d{{t}^{m}}}+{{b}_{m-1}}(t)\frac{{{d}^{m-1}}x}{d{{t}^{m-1}}}+\cdots +{{b}_{1}}(t)\frac{dx}{dt}+{{b}_{o}}(t)x & {} & {} \\\end{matrix}\]
The mathematical model given above expresses the behavior of the system in terms of a single nth-order differential equation.
The differential equation (1) is of the nth order since this is the highest-order derivative of the response to appear. The corresponding system is said to be nth order also.
A causal system is one whose present response does not depend on future values of the input.
A non-causal system is one for which this condition is not assumed. Non-causal systems do not exist in the real world but can be approximated by the use of time delay, and they frequently occur in system analysis problems.
The system equation (1) represents a linear system, since all derivatives of the excitation (input) and response are raised to the first power only, and since there are no products of derivatives. One of the most important consequences of linearity is that superposition applies. In fact, this may be used as a definition of linearity. Specifically, if
${{y}_{1}}(t)=system\text{ }response\text{ }to\text{ }{{x}_{1}}(t)$
${{y}_{2}}(t)=system\text{ }response\text{ }to\text{ }{{x}_{2}}(t)$
And if
$a{{y}_{1}}(t)+b{{y}_{2}}(t)=system\text{ }response\text{ }to\text{ a}{{\text{x}}_{\text{1}}}\text{(t)+b}{{\text{x}}_{\text{2}}}\text{(t)}$
For all a, b, x_{1}(t), and x_{2}(t), then the system is linear. If this is not true, then the system is not linear.
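The superposition test can be tried numerically on a simple linear system. The sketch below (an illustrative example, not from the original article) simulates the first-order system dy/dt + y = x(t) by forward Euler, a discretization that preserves linearity exactly, and checks that the response to a·x1 + b·x2 equals a·y1 + b·y2:

```python
import math

def simulate(x, t_end=5.0, dt=1e-3):
    """Forward-Euler solution of the linear system dy/dt + y = x(t), with y(0) = 0."""
    y, out = 0.0, []
    for k in range(int(t_end / dt)):
        y += dt * (x(k * dt) - y)    # Euler step: y' = x - y
        out.append(y)
    return out

x1 = lambda t: 1.0                   # a step input
x2 = lambda t: math.sin(t)           # a sinusoidal input
a, b = 2.0, -3.0
y1, y2 = simulate(x1), simulate(x2)
y_combo = simulate(lambda t: a * x1(t) + b * x2(t))
err = max(abs(y_combo[k] - (a * y1[k] + b * y2[k])) for k in range(len(y1)))
print(err)                           # essentially zero: superposition holds
```

Repeating the experiment with a nonlinear term (say y' = x − y³) would make err large, which is the practical content of the definition.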
In the case of nonlinear systems, it is not possible to write a general differential equation of finite order that can be used as the mathematical model for all systems. This is because there are many different ways in which nonlinearities can arise, and they cannot be all described mathematically in the same form. It is also important to remember that superposition does not apply in nonlinear systems.
A linear system usually results if none of the components in the system changes its characteristics as a function of the magnitude of the excitation (input) applied to it. In the case of an electrical system, this means that resistors, inductors, and capacitors do not change their values as the voltages across them or the currents through them change.
Equation (1), as written, represents a time-varying system since the coefficients a_{i}(t) and b_{j}(t) are indicated as being functions of time. The analysis of time-varying devices is difficult since differential equations with non-constant coefficients cannot be solved except in special cases. The systems of greatest concern for the present discussion are characterized by a differential equation having constant coefficients. Such a system is known as fixed, time-invariant or stationary.
Fixed systems usually result when the physical components in the system, and the configuration in which they are connected, do not change with time. Most systems that are not exposed to a changing environment can be considered fixed unless they have been deliberately designed to be time-varying.
Time-varying system results when any of its components, or their manner of connection, do change with time. In many cases, this change is a result of environmental conditions.
Equation (1) represents a lumped-parameter system by virtue of being an ordinary differential equation. The implication of this designation is that the physical size of the system is of no concern since excitations (inputs) propagate through the system instantaneously. The assumption is usually valid if the largest physical dimension of the system is small compared with the wavelength of the highest significant frequency considered.
A distributed-parameter system is represented by a partial differential equation and generally has dimensions that are not small compared with the shortest wavelength of interest. Transmission lines, waveguides, antennas, and microwave tubes are typical examples of distributed-parameter electrical systems.
Equation (1) represents a continuous-time system by virtue of being a differential equation rather than a difference equation. That is, the inputs and outputs are defined for all values of time rather than just for discrete values of time. Since time itself is inherently continuous, all physical systems are actually continuous-time systems.
However, there are situations in which one is interested solely in what happens at certain discrete instants of time. In many of these cases, the system contains a digital computer, which is performing certain specified computations and producing its answers at discrete time instants. If no change (in input or output) takes place between instants, then system analysis is simplified by considering the system to be discrete-time and having a mathematical model that is a difference equation.
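A minimal discrete-time model of this kind is a first-order difference equation; the coefficients below are illustrative choices, not taken from the article:

```python
def step_response(n_samples):
    """Simulate y[k] = 0.5*y[k-1] + x[k] for a unit-step input x[k] = 1, y[-1] = 0."""
    y, out = 0.0, []
    for _ in range(n_samples):
        y = 0.5 * y + 1.0            # one recursion of the difference equation
        out.append(y)
    return out

resp = step_response(40)
print(resp[-1])                      # approaches the steady state y = 0.5*y + 1, i.e. y = 2
```

The recursion replaces the derivative of the continuous-time model, and solving it means iterating rather than integrating.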
An instantaneous system is one in which the response at time t_{1} depends only upon the excitation (input) at time t_{1} and not upon any future or past values of the excitation. This may also be called a zero-memory or memoryless system. A typical example is a resistance network or a nonlinear device without energy storage.
If the response does depend on past values of the excitation (input), then the system is said to be dynamic and to have memory. Any system that contains at least two different types of elements, one of which can store energy, is dynamic.
]]>The post Laplace Transform Properties in Signal and Systems appeared first on Electrical Academia.
If
$x(t)\leftrightarrow X(s)\text{ and v}(t)\leftrightarrow V(s)$
Then for any real or complex numbers a, b,
$ax(t)+bv(t)\leftrightarrow aX(s)+bV(s)$
This property says that the Laplace transform is a linear operation.
If
$x(t)\leftrightarrow X(s)$
Then for any positive real number c,
$x(t-c)\leftrightarrow {{e}^{-cs}}X(s)$
The function x(t-c) is a c-second right shift of x(t). We can see that a c-second right shift in the time domain corresponds to multiplication by e^{-cs }in the Laplace transform domain.
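The shift property can be checked numerically (an illustrative sketch, not from the original article) for x(t) = e^{-t}u(t), whose transform is X(s) = 1/(s+1), by approximating both sides of the pair at a real value of s:

```python
import math

def laplace_numeric(x, s, t_max=50.0, n=200_000):
    """Midpoint-rule approximation of the one-sided Laplace transform at real s."""
    h = t_max / n
    return sum(x((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n)) * h

c = 2.0
x = lambda t: math.exp(-t)                                  # X(s) = 1/(s+1)
x_shift = lambda t: math.exp(-(t - c)) if t >= c else 0.0   # c-second right shift
s = 1.5
lhs = laplace_numeric(x_shift, s)              # transform of the shifted signal
rhs = math.exp(-c * s) * laplace_numeric(x, s) # e^{-cs} times transform of x
print(lhs, rhs)                                # the two agree up to quadrature error
```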
If
$x(t)\leftrightarrow X(s)$
For any positive real number a,
$x(at)\leftrightarrow \frac{1}{a}X(\frac{s}{a})$
The function x(at) is a time-scaled version of the given function x(t). We can see that time scaling corresponds to scaling by the factor 1/a in the Laplace transform domain (plus multiplication of the transform by 1/a).
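For a concrete instance of the scaling rule (an illustrative example, not from the original article), take x(t) = e^{-t}u(t) with X(s) = 1/(s+1). Then x(at) = e^{-at}u(t) has transform 1/(s+a), which the property reproduces exactly:

```python
# Check x(at) <-> (1/a) X(s/a) for x(t) = e^{-t}u(t), so X(s) = 1/(s+1).
a = 3.0
X = lambda s: 1 / (s + 1)
s = 2.0
lhs = 1 / (s + a)                  # exact transform of x(at) = e^{-at}u(t)
rhs = (1 / a) * X(s / a)           # right-hand side of the scaling property
print(lhs, rhs)                    # identical: 1/(s+a) = (1/a)/(s/a + 1)
```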
If
$x(t)\leftrightarrow X(s)$
Then for any positive integer n,
${{t}^{n}}x(t)\leftrightarrow {{(-1)}^{n}}\frac{{{d}^{n}}X(s)}{d{{s}^{n}}}$
If
$x(t)\leftrightarrow X(s)$
Then for any real or complex number a,
${{e}^{at}}x(t)\leftrightarrow X(s-a)$
By this property, multiplication by an exponential function in the time domain corresponds to a shift of the s variable in the Laplace transform domain.
If
$x(t)\leftrightarrow X(s)$
Then for any real number ω,
$x(t)sin\omega t\leftrightarrow \frac{j}{2}[X(s+j\omega )-X(s-j\omega )]$
$x(t)cos\omega t\leftrightarrow \frac{1}{2}[X(s+j\omega )+X(s-j\omega )]$
Given two functions x(t) and v(t) with x(t) and v(t) equal to zero for t<0, we define the convolution by
$x(t)*v(t)=\int\limits_{0}^{t}{x(\tau )}v(t-\tau )d\tau ,\text{ t}\ge \text{0}$
Now, letting X(s) denote the Laplace transform of x(t) and V(s) denote the Laplace transform of v(t), we have the transform pair
$x(t)*v(t)\leftrightarrow X(s)V(s)$
By this property, convolution in the time domain corresponds to a product in the Laplace transform domain.
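To see this concretely (an illustrative check, not from the original article), convolve x(t) = e^{-t} with v(t) = e^{-2t}. The transform-domain product 1/((s+1)(s+2)) inverts, by partial fractions, to e^{-t} − e^{-2t}, and a numerical convolution reproduces that function:

```python
import math

def convolve_at(t, n=20_000):
    """Midpoint-rule evaluation of (x*v)(t) for x(tau)=e^{-tau}, v(tau)=e^{-2 tau}."""
    h = t / n
    return sum(math.exp(-tau) * math.exp(-2 * (t - tau)) * h
               for tau in ((k + 0.5) * h for k in range(n)))

t = 1.5
numeric = convolve_at(t)
closed_form = math.exp(-t) - math.exp(-2 * t)   # inverse transform of 1/((s+1)(s+2))
print(numeric, closed_form)                      # the two agree closely
```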
Given a function x(t) with x(t)=0 for all t<0, we define the integral of x(t) to be the function $\int\limits_{0}^{t}{x(\tau )d\tau }$. Then we have the transform pair
$\int\limits_{0}^{t}{x(\tau )d\tau }\leftrightarrow \frac{1}{s}X(s)$
By this property, the Laplace transform of the integral of x(t) is equal to X(s) divided by s.
If
$x(t)\leftrightarrow X(s)$
Then
$\overset{.}{\mathop{x}}\,(t)\leftrightarrow sX(s)-x(0)$
Given a signal x(t) with transform X(s), we have
\[x(0)=\underset{s\to \infty }{\mathop{\lim }}\,sX(s)\]
\[\overset{.}{\mathop{x}}\,(0)=\underset{s\to \infty }{\mathop{\lim }}\,[{{s}^{2}}X(s)-sx(0)]\]
The initial-value theorem is useful since it allows for computation of the initial values of a function x(t) and its derivatives directly from the Laplace transform X(s) of x(t). Hence, if we know X(s) but not x(t), it is possible to compute these initial values without having to compute the inverse Laplace transform of X(s).
Given the signal x(t) with transform X(s), suppose that x(t) has a limit as t→∞. Then the final-value theorem states that
\[\underset{t\to \infty }{\mathop{\lim }}\,x(t)=\underset{s\to 0}{\mathop{\lim }}\,sX(s)\]
]]>The post Laplace Transform:Introduction and Example appeared first on Electrical Academia.
The Laplace transform X(s) is a complex-valued function of the complex variable s. In other words, given a complex number s, the value X(s) of the transform at the point s is, in general, a complex number.
Given a function x(t) of the continuous-time variable t, the two-sided Laplace transform of x(t), denoted by X(s), is a function of the complex variable s=σ+jω defined by
\[X(s)=\int_{-\infty }^{\infty }{x(t){{e}^{-st}}dt}\text{ }(1)\]
The one-sided Laplace transform of x(t), also denoted by X(s), is defined by
\[X(s)=\int_{0}^{\infty }{x(t){{e}^{-st}}dt}\text{ (2)}\]
By (2), we see that the one-sided transform depends only on the values of the signal x(t) for t≥0; this is why definition (2) is called the one-sided Laplace transform. We can apply the one-sided Laplace transform to signals x(t) that are nonzero for t<0; however, any nonzero values of x(t) for t<0 will not be recoverable from the one-sided transform.
If x(t) is zero for all t<0, the expression (1) reduces to (2), and thus, in this case, the one-sided and two-sided transforms are the same. Let Λ denote the set of all real numbers σ (positive or negative) such that
\[\int_{0}^{\infty }{\left| x(t) \right|{{e}^{-\sigma t}}dt}<\infty \text{ (3)}\]
If the set Λ is empty, that is, if there is no real number σ such that (3) is satisfied, the function x(t) does not have a Laplace transform (one that converges absolutely). Most functions arising in engineering do have a Laplace transform, and thus the set Λ is nonempty in most cases of interest.
If Λ is not empty, let σ_{min} denote the minimal element of the set Λ; that is, σ_{min} is the smallest number such that
σ ∈ Λ for all σ > σ_{min}
The set of all complex numbers s such that
\[Real\text{ }s>{{\sigma }_{min}}\text{ (4)}\]
where Real s denotes the real part of s, is called the region of absolute convergence of the Laplace transform of x(t). For any complex number s such that (4) is satisfied, the integral in (2) exists, and thus the Laplace transform X(s) exists for these values of s. Hence the Laplace transform X(s) of x(t) is well defined for all values of s belonging to the region of absolute convergence. It should be stressed that the region of absolute convergence depends on the given function x(t).
Suppose that x(t) is the unit step function u(t). Then the Laplace transform U(s) of u(t) is given by
\[U(s)=\int\limits_{0}^{\infty }{u(t){{e}^{-st}}dt}\]
\[=\int\limits_{0}^{\infty }{{{e}^{-st}}dt}\]
\[=-\frac{1}{s}\left. {{e}^{-st}} \right]_{t=0}^{t=\infty }\text{ (5)}\]
Now exp (-st) evaluated at t=∞ is defined by
\[{{e}^{-s\infty }}=\underset{T\to \infty }{\mathop{\lim }}\,{{e}^{-sT}}\text{ (6)}\]
Setting s=σ+jω in the right side of (6), we have
\[{{e}^{-s\infty }}=\underset{T\to \infty }{\mathop{\lim }}\,{{e}^{-(\sigma +j\omega )T}}\]
\[=\underset{T\to \infty }{\mathop{\lim }}\,{{e}^{-\sigma T}}{{e}^{-j\omega T}}\text{ (7)}\]
The limit in (7) exists if and only if σ>0, which is equivalent to Real s >0. If Real s>0, the limit in (7) is zero, and the expression (5) for U (s) reduces to
$U(s)=-(-\frac{1}{s}){{e}^{0}}=\frac{1}{s}$
We also have
\[\int\limits_{0}^{\infty }{u(t){{e}^{-\sigma t}}dt<\infty }\]
For all real numbers σ such that σ>0. Thus the region of absolute convergence of U (s) is the set of all complex numbers s such that Real s >0.
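As a numerical check (a sketch of mine; the truncation T, the step count n, and the test point s = 3 are arbitrary assumptions), a trapezoidal approximation of the truncated integral comes out very close to 1/s for a point inside the region of convergence:

```python
import math

s = 3.0                      # test point with Real s > 0, inside the ROC
T, n = 20.0, 20000           # truncate the integral at T, using n trapezoid panels
dt = T / n
total = 0.5 * (1.0 + math.exp(-s * T))  # endpoint terms u(t) e^{-st} at t = 0 and t = T
for k in range(1, n):
    total += math.exp(-s * k * dt)
U = total * dt               # approximates U(s) = 1/s
```

The truncation error e^(−sT) is negligible here precisely because Real s > 0, which is the convergence condition derived above.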
Little Explanation
The limit in (7) exists if and only if σ>0. WHY?
Let’s take a look at this issue by plotting the given function
\[{{e}^{-s\infty }}=\underset{T\to \infty }{\mathop{\lim }}\,{{e}^{-\sigma T}}\]
Since convergence depends only on the real part σ of s, we have intentionally ignored the factor e^{-jωT}, whose magnitude is always 1. In order to plot the above function, we assume σ = 1 (it could be any number > 0).
Matlab Code:
T = 1:0.1:20;    % range of T values
y = exp(-T);     % e^(-sigma*T) with sigma = 1
plot(T, y)       % the curve decays toward zero as T grows
Graph:
So, if we look at the graph, we can see that the function clearly approaches 0 as T→∞. Hence, if Real s > 0, the limit in (7) is zero.
The post Z Transform Introduction | Z Transform Properties appeared first on Electrical Academia.
To analyze continuous-time linear time-invariant (LTI) systems, the Laplace transform is used; for the analysis of discrete-time LTI systems, the z-transform is applied. The z-transform is fundamentally a mathematical tool for carrying a signal from the time domain into the frequency domain, and it is a function of the complex-valued variable z. The z-transform of a discrete-time signal x[n], denoted by X(z), is defined as
$X\left( z \right)=\sum\limits_{n=-\infty }^{\infty }{x\left[ n \right]{{z}^{-n}}}$
The z-transform is an infinite power series, since the summation index n runs from −∞ to ∞. It exists only for those values of z for which the sum is finite (bounded). The set of values of z for which X(z) is finite is called the region of convergence (ROC).
The figure above displays the z-plane with the region of convergence (ROC). The z-transform possesses both real and imaginary components, so a plot of the imaginary component against the real component is called the complex z-plane. The circle shown has radius 1 and is therefore called the unit circle. The complex z-plane is used to display the ROC, poles, and zeros of a function. The complex variable z is written in polar form as
\[z=r{{e}^{j\omega }}\]
where r is the radius (the magnitude of z) and ω is the angular frequency (the angle of z).
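For instance (an illustrative sketch; the sequence a^n u[n] and the test point z are assumed choices of mine), a truncated version of the defining sum can be compared with the known closed form X(z) = 1/(1 − a z⁻¹), valid in the ROC |z| > |a|:

```python
import cmath

def ztransform(x, z, N=200):
    # truncated z-transform of a causal sequence: sum of x(n) z^{-n} for n = 0..N-1
    return sum(x(n) * z ** (-n) for n in range(N))

a = 0.5
x = lambda n: a ** n              # x[n] = a^n u[n]
z = 1.2 * cmath.exp(1j * 0.3)     # |z| = 1.2 > |a| = 0.5, so z lies in the ROC

approx = ztransform(x, z)
exact = 1.0 / (1.0 - a / z)       # closed form 1/(1 - a z^{-1})
```

Inside the ROC the truncated sum converges geometrically, so 200 terms already match the closed form to machine precision.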
The linearity property describes that if
${{x}_{1}}[n]\overset{z}\leftrightarrows{{X}_{1}}(z)$ and
${{x}_{2}}[n]\overset{z}\leftrightarrows{{X}_{2}}(z)$
then
\[{{a}_{1}}{{x}_{1}}\left[ n \right]\text{ }+\text{ }{{a}_{2}}{{x}_{2}}\left[ n \right]~~\overset{z}\leftrightarrows~~{{a}_{1}}{{X}_{1}}\left( z \right)\text{ }+\text{ }{{a}_{2}}{{X}_{2}}\left( z \right)\]
From the preceding relation, we can infer that the z-transform of a linear combination of two signals equals the same linear combination of the z-transforms of the two separate signals.
The Time shifting property describes that if
\[x\left[ n \right]~~~\overset{z}\leftrightarrows~~~X\text{ }\left( z \right)\] then
\[x\text{ }\left[ n-k \right]~~~~~~~\overset{z}\leftrightarrows~~~~~~X\text{ }\left( z \right)\text{ }{{z}^{-k}}\]
From the above, it is evident that shifting a sequence in time by k samples is equivalent to multiplying its z-transform by the factor z^{-k}.
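A short finite sequence makes this easy to verify numerically (the test sequence, delay k, and evaluation point below are arbitrary assumptions of mine):

```python
def ztrans(seq, z):
    # z-transform of a finite causal sequence stored as a list
    return sum(c * z ** (-n) for n, c in enumerate(seq))

x = [1.0, 2.0, 3.0, 4.0]          # an arbitrary causal test sequence
k = 2
shifted = [0.0] * k + x           # x[n - k]: delayed by k samples, zeros shifted in

z = 1.5 + 0.5j
lhs = ztrans(shifted, z)          # Z{x[n - k]}
rhs = z ** (-k) * ztrans(x, z)    # z^{-k} X(z)
```

The two values agree, confirming the z^{-k} factor introduced by the delay.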
The scaling property describes that if
\[x\left[ n \right]~~~\overset{z}\leftrightarrows~~~X\text{ }\left( z \right)\] then\[{{a}^{n}}~~x\left[ n \right]\overset{z}\leftrightarrows~X\text{ }\left( z/a \right)\]
So, scaling in the z-domain by a factor a is equivalent to multiplying the signal by a^{n} in the time domain.
The Time reversal property describes that if
\[x\left[ n \right]~~~\overset{z}\leftrightarrows~~~X\text{ }\left( z \right)\] then
\[~x\text{ }\left[ -n \right]~~~~~~~~~\overset{z}\leftrightarrows~~~~~~~~X\text{ }\left( {{z}^{-1}} \right)\]
This implies that folding (time-reversing) a sequence is equivalent to replacing z by z^{-1} in the z-domain.
The Differentiation property describes that if
\[x\left[ n \right]~~~\overset{z}\leftrightarrows~~~X\text{ }\left( z \right)\] then
\[~n\text{ }x\text{ }\left[ n \right]~\overset{z}\leftrightarrows~-z\text{ }\frac{d\left( X\text{ }\left( z \right) \right)}{dz}\]
The convolution property describes that if
${{x}_{1}}[n]\overset{z}\leftrightarrows{{X}_{1}}(z)$ and
${{x}_{2}}[n]\overset{z}\leftrightarrows{{X}_{2}}(z)$ then
\[{{x}_{1}}\left[ n \right]\text{ }*\text{ }{{x}_{2}}\left[ n \right]~\overset{z}\leftrightarrows~{{X}_{1}}\left( z \right)\text{ }{{X}_{2}}\left( z \right)\]
The convolution of two sequences in the time domain equates to the multiplication of the z-transforms of the two sequences.
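The property can be checked directly on two short sequences (again a sketch with arbitrary assumed test data):

```python
def ztrans(seq, z):
    # z-transform of a finite causal sequence
    return sum(c * z ** (-n) for n, c in enumerate(seq))

def convolve(a, b):
    # linear convolution of two finite sequences
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

x1 = [1.0, 2.0, 3.0]
x2 = [4.0, 5.0]

z = 0.8 + 1.1j
lhs = ztrans(convolve(x1, x2), z)     # Z{x1[n] * x2[n]}
rhs = ztrans(x1, z) * ztrans(x2, z)   # X1(z) X2(z)
```

For finite sequences the identity is exact, so the two sides agree to rounding error.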
The Correlation of two sequences describes that if
${{x}_{1}}[n]\overset{z}\leftrightarrows{{X}_{1}}(z)$ and
${{x}_{2}}[n]\overset{z}\leftrightarrows{{X}_{2}}(z)$ then
\[\sum\limits_{n=-\infty }^{\infty }{{{x}_{1}}\left( n \right)~{{x}_{2}}\left( -n \right)~}~~~~\overset{z}\leftrightarrows~~~~~{{X}_{1}}\left( z \right)\text{ }{{X}_{2}}\left( {{z}^{-1}} \right)\]
Initial value theorem describes that if
\[x\left[ n \right]~\overset{z}\leftrightarrows~X\text{ }\left( z \right)\] then
\[x\text{ }\left[ 0 \right]~~=~{{\lim }_{z\to \infty }}\text{ }X\left( z \right)\]
Final value theorem describes that if
\[x\left[ n \right]~~~\overset{z}\leftrightarrows~~~X\text{ }\left( z \right)\] then
\[{{\lim }_{n\to \infty }}\text{ }x\left[ n \right]~=\text{ }{{\lim }_{z\to 1\text{ }}}\left( z-1 \right)\text{ }X\left( z \right)~\]
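Both theorems can be illustrated with an assumed example of mine, x[n] = 1 − 0.5^n, whose transform is X(z) = z/(z−1) − z/(z−0.5); here x[0] = 0 and x[n] → 1:

```python
# assumed example: x[n] = 1 - 0.5^n  <->  X(z) = z/(z-1) - z/(z-0.5)
X = lambda z: z / (z - 1.0) - z / (z - 0.5)

x0 = X(1e9)               # initial value: X(z) for very large z, should approach x[0] = 0
z = 1.0 + 1e-6
x_inf = (z - 1.0) * X(z)  # final value: (z - 1) X(z) near z = 1, should approach 1
```

Evaluating near the limits recovers x[0] ≈ 0 and the steady-state value 1.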
The post Continuous Time Graphical Convolution Example appeared first on Electrical Academia.
Noteworthy: it will always be the case that τ_{L,t} = t + τ_{L,0}
Noteworthy: it will always be the case that τ_{R,t} = t + τ_{R,0}
This example is provided in collaboration with Prof. Mark L. Fowler, Binghamton University.
The post Continuous Time Convolution Properties | Continuous Time Signal appeared first on Electrical Academia.
For linear time-invariant (LTI) systems, convolution is used to obtain the output response from knowledge of the input and the impulse response.
Given two continuous-time signals x(t) and h(t), the convolution is defined as
$y\left( t \right)=\int\limits_{-\infty }^{\infty }{x\left( \tau \right)h\left( t-\tau \right)d\tau }~~~~~~~~~~~~~~~~~~~~~~~~\left( 1 \right)$
The integral defined in the above equation is an important and fundamental equation in the study of linear systems; it is called the convolution integral. Because it is so often used, it has been given a special shorthand representation: y(t) = x(t) * h(t).
It should be noted that the convolution integral exists when x(t) and h(t) are both zero for all t<0. If x(t) and h(t) are zero for all t<0, then x(𝞃)=0 for all 𝞃<0 and h(t−𝞃)=0 for all t−𝞃<0 (that is, for all 𝞃>t). Thus the integral on 𝞃 in equation (1) may be taken from 𝞃=0 to 𝞃=t, and the convolution operation is given by
$x\left( t \right)*h\left( t \right)=\left\{ \begin{matrix} 0, & t<0 \\ \underset{0}{\overset{t}{\mathop \int }}\,x\left( \tau \right)h\left( t-\tau \right)d\tau , & t\ge 0 \\\end{matrix} \right.~~~~~~~~~~\left( 2 \right)$
Since the integral in equation (2) is over the finite interval from 𝞃=0 to 𝞃=t, the convolution integral exists. Hence, any two signals that are zero for all t<0 can be convolved.
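As a numerical sketch (the signals e^(-t) and e^(-2t) and the evaluation time t = 1.5 are assumed for illustration), equation (2) can be approximated with the trapezoidal rule and compared against the closed form of this particular convolution, e^(-t) − e^(-2t):

```python
import math

def convolve(f, g, t, n=400):
    # (f * g)(t) = integral over [0, t] of f(tau) g(t - tau), trapezoidal rule
    if t <= 0.0:
        return 0.0
    dtau = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for k in range(1, n):
        tau = k * dtau
        total += f(tau) * g(t - tau)
    return total * dtau

x = lambda t: math.exp(-t)
h = lambda t: math.exp(-2.0 * t)

t = 1.5
y = convolve(x, h, t)
exact = math.exp(-t) - math.exp(-2.0 * t)  # closed form of (x * h)(t) for these signals
```

The finite upper limit t in the code is exactly the simplification that causality of x(t) and h(t) buys us.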
The convolution mapping possesses a number of important properties, among those are:
If x(t) is a signal and h(t) an impulse response, then
\[x\left( t \right)*h\left( t \right)=h\left( t \right)*x\left( t \right)\]
An LTI system output with input x(t) and impulse response h(t) is the same as an LTI system output with input h(t) and impulse response x(t).
If x(t) is a signal and h_{1}(t) and h_{2}(t) are impulse responses, then
\[x\left( t \right)*\left[ {{h}_{1}}\left( t \right)*{{h}_{2}}\left( t \right) \right]=\left[ x\left( t \right)*{{h}_{1}}\left( t \right) \right]*{{h}_{2}}\left( t \right)\]
The order of convolution is not important.
If x(t) is a signal and h_{1}(t) and h_{2}(t) are impulse responses, then
\[x\left( t \right)*\left[ {{h}_{1}}\left( t \right)+{{h}_{2}}\left( t \right) \right]=x\left( t \right)*{{h}_{1}}\left( t \right)+x\left( t \right)*{{h}_{2}}\left( t \right)\]
A parallel combination of LTI systems can be substituted with a single LTI system whose unit impulse response is the sum of the separate unit impulse responses in the parallel combination.
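This parallel-combination (distributive) behavior can be checked with the same kind of trapezoidal approximation (the signals and the evaluation time are assumed for illustration):

```python
import math

def convolve(f, g, t, n=400):
    # (f * g)(t) = integral over [0, t] of f(tau) g(t - tau), trapezoidal rule
    if t <= 0.0:
        return 0.0
    dtau = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for k in range(1, n):
        tau = k * dtau
        total += f(tau) * g(t - tau)
    return total * dtau

x = lambda t: math.exp(-t)
h1 = lambda t: math.exp(-2.0 * t)
h2 = lambda t: math.sin(t)

t = 2.0
parallel = convolve(x, lambda u: h1(u) + h2(u), t)  # x * (h1 + h2): one combined system
separate = convolve(x, h1, t) + convolve(x, h2, t)  # x*h1 + x*h2: sum of two outputs
```

The single combined impulse response produces the same output as summing the two branch outputs.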