I need this exercise to be implemented in MATLAB (not Simulink).

EXAMPLE 2.1. This example illustrates the ability of the adaptive linear combiner, trained with the LMS algorithm, to estimate the parameters of a linear model. The input data consist of 1,000 zero-mean Gaussian random vectors with three components, that is, x ∈ R^(3x1), and the bias is set to zero. The variances of the components of x are 5, 1, and 0.5, respectively. The assumed linear model is b = [1, 0.8, -1]^T. To generate the target values (desired outputs), the 1,000 input vectors are used to form a matrix X = [x_1, x_2, ..., x_1000], and the desired outputs are computed according to d = b^T X. The covariance matrix of the input vectors can be estimated as C_x = (1/1,000) * sum_{k=1}^{1,000} x_k x_k^T = X X^T / 1,000 [30]. Using the LMS algorithm in Table 2.1, with mu_0 = 0.9/lambda_max = 0.1936, where lambda_max is the largest eigenvalue of the covariance matrix C_x, and tau = 200 (the search time constant), the input vectors along with the associated desired output values are presented to the linear combiner. The criterion used to terminate the learning process involved monitoring the square root of the MSE at every time step k: learning was terminated when sqrt(J) = |e(k)| < 10^-8, where e(k) = d(k) - w^T(k) x(k). The initial values of the synaptic weight vector were selected as zero-mean Gaussian random numbers with a variance of 0.25, w_initial = [-0.3043, -0.8195, 0.3855]^T. The LMS learning process was terminated after only 204 iterations (training steps); in other words, after the first 204 input vectors along with their associated desired output values were presented, the network converged. The final synaptic weight vector was w_final = [1.000000, 0.800000, -1.000000]^T, which matches the assumed linear model b to six decimal places. In fact, the L2-norm of the difference between the linear model b and the final weight vector is ||b - w_final||_2 = 1.505404 x 10^-7.
Figure 2.17 shows the progress of the learning-rate parameter as it is adjusted according to the search-then-converge schedule. As we can see from the plot, at the beginning of training mu does not change much; then, toward the end of training, it becomes much smaller. Figure 2.18 shows the root-mean-square (RMS) value of the performance measure, that is, sqrt(J), as training of the network progresses. This exercise is similar to the problem in system identification of estimating a parameter vector associated with a dynamic model of a system given only input/output data from that system, that is, parametric system identification [31].

The handwritten notes alongside the figures appear to be MATLAB hints:

    X(1,:) = sqrt(5)*randn(1,1000);
    X(2,:) = randn(1,1000);
    X(3,:) = sqrt(0.5)*randn(1,1000);
    d  = b'*X;          % (1x3)(3x1000) = (1x1000)
    Cx = X*X'/1000;     % estimated covariance matrix
    % randn(N): N-by-N matrix of normally distributed random numbers
    % rand(N):  N-by-N matrix uniformly distributed on the interval (0.0, 1.0)
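A minimal MATLAB sketch of the exercise follows. Two things are assumptions, since the passage does not state them: the search-then-converge schedule is taken in the standard Darken-Moody form mu(k) = mu0/(1 + k/tau) (the text gives only mu0 and tau = 200), and the weights are updated sample by sample in a single pass through the data. All variable names are my own; the rng seed is optional and arbitrary.

```matlab
% Example 2.1 sketch: LMS training of an adaptive linear combiner.
rng(0);                           % fix the seed for reproducibility (optional)

N = 1000;                         % number of training vectors
b = [1; 0.8; -1];                 % assumed linear model

% Zero-mean Gaussian inputs with variances 5, 1, and 0.5
X = [sqrt(5)*randn(1,N); randn(1,N); sqrt(0.5)*randn(1,N)];
d = b' * X;                       % desired outputs, 1-by-N

Cx  = X*X'/N;                     % estimated covariance matrix
mu0 = 0.9 / max(eig(Cx));         % initial learning rate (~0.1936 in the text)
tau = 200;                        % search time constant
tol = 1e-8;                       % termination threshold on |e(k)|

w = sqrt(0.25)*randn(3,1);        % initial weights: zero-mean Gaussian, var 0.25

muHist   = zeros(1,N);
rmseHist = zeros(1,N);
for k = 1:N
    mu = mu0 / (1 + k/tau);       % assumed search-then-converge schedule
    e  = d(k) - w' * X(:,k);      % a priori error e(k)
    w  = w + mu * e * X(:,k);     % LMS weight update
    muHist(k)   = mu;
    rmseHist(k) = abs(e);         % sqrt(J) = |e(k)| for a single sample
    if abs(e) < tol
        fprintf('Converged after %d iterations\n', k);
        break
    end
end

fprintf('w_final = [%f, %f, %f]''\n', w);
fprintf('||b - w_final||_2 = %e\n', norm(b - w));

figure; semilogy(1:k, muHist(1:k));   xlabel('k'); ylabel('\mu(k)');  % cf. Fig. 2.17
figure; semilogy(1:k, rmseHist(1:k)); xlabel('k'); ylabel('|e(k)|');  % cf. Fig. 2.18
```

With a different random seed the exact iteration count and final norm will differ from the 204 iterations and 1.505404e-7 quoted in the text, but the weight vector should still converge to b.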