940 results for linear approximation method
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
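The augmentation idea can be sketched in a few lines: the time-varying constraint D_t x_t = d_t is stacked under the measurement equation as an (almost) noiseless extra observation before the usual Kalman update. This is a minimal numpy illustration of the idea, not the authors' implementation; the function names and the tiny constraint variance are illustrative choices.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, y):
    # Standard predict/update cycle of the Kalman filter.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = P - K @ H @ P
    return x, P

def constrained_step(x, P, F, Q, H, R, y, D, d):
    """Augment the observation equation with the time-varying
    linear constraint D @ x = d before filtering."""
    H_aug = np.vstack([H, D])
    y_aug = np.concatenate([y, d])
    # A tiny variance on the constraint rows keeps S invertible.
    m, c = R.shape[0], D.shape[0]
    R_aug = np.block([[R, np.zeros((m, c))],
                      [np.zeros((c, m)), 1e-10 * np.eye(c)]])
    return kalman_step(x, P, F, Q, H_aug, R_aug, y_aug)
```

Because D and d enter only through the augmented observation at each step, they are free to change over time, which is exactly the time-varying case the paper addresses.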
Abstract:
Numerical methods related to Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated via their representation in orthonormal bases. Nowadays, on serial computers, the method of Arnoldi is considered a reliable technique for constructing such bases. However, although easily parallelizable, this technique does not scale as well as expected because of its communication costs. In this work we examine alternative methods aimed at overcoming this drawback. Since they retrieve upon completion the same information as Arnoldi's algorithm does, they enable us to design a wide family of stable and scalable Krylov approximation methods for various parallel environments. We present timing results obtained from their implementation on two distributed-memory multiprocessor supercomputers: the Intel Paragon and the IBM Scalable POWERparallel SP2. (C) 1997 by John Wiley & Sons, Ltd.
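For reference, the serial Arnoldi process the paper takes as its baseline builds the orthonormal Krylov basis as follows (a standard modified Gram-Schmidt sketch, not the paper's parallel variant):

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1) v} and the
    Hessenberg matrix H satisfying A @ V[:, :m] = V @ H."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):        # orthogonalise against the basis
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

The inner orthogonalisation loop is the communication bottleneck on distributed-memory machines: each of its dot products is a global reduction, which is what the alternative methods in the paper aim to avoid.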
Abstract:
Recent studies have demonstrated that spatial patterns of fMRI BOLD activity distribution over the brain may be used to classify different groups or mental states. These studies are based on the application of advanced pattern recognition approaches and multivariate statistical classifiers. Most published articles in this field are focused on improving the accuracy rates and many approaches have been proposed to accomplish this task. Nevertheless, a point inherent to most machine learning methods (and still relatively unexplored in neuroimaging) is how the discriminative information can be used to characterize groups and their differences. In this work, we introduce the Maximum Uncertainty Linear Discriminant Analysis (MLDA) and show how it can be applied to infer group patterns by discriminant hyperplane navigation. In addition, we show that it naturally defines a behavioral score, i.e., an index quantifying the distance of a subject's state from predefined groups. We validate and illustrate this approach using data from a motor block-design fMRI experiment with 35 subjects. (C) 2008 Elsevier Inc. All rights reserved.
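A minimal sketch of the hyperplane-distance idea: fit a linear discriminant between two groups and read off a signed distance from the separating hyperplane as the score. Here a plain Fisher discriminant with a simple ridge term stands in for MLDA's maximum-uncertainty covariance regularisation; all names are illustrative.

```python
import numpy as np

def fisher_direction(X0, X1, reg=1e-3):
    """Fisher discriminant direction between two groups (rows are
    subjects, columns are features). The ridge term `reg` is a
    stand-in for MLDA's maximum-uncertainty regularisation."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w), (mu0 + mu1) / 2

def behavioral_score(x, w, midpoint):
    # Signed distance from the discriminant hyperplane:
    # negative -> group 0 side, positive -> group 1 side.
    return float(w @ (x - midpoint))
```

"Navigating" the hyperplane then amounts to moving a point along w and inspecting how the corresponding pattern changes.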
Abstract:
To obtain a high quality EMG acquisition, the signal must be recorded as far away as possible from muscle innervation zones (IZ) and tendon regions, which are known to shift during dynamic contractions. This study describes a methodology, using commercial bipolar electrodes, to identify better electrode positions for superficial EMG of lower limb muscles during dynamic contractions. Eight female volunteers participated in this study. Myoelectric signals of the vastus lateralis, gastrocnemius medialis, peroneus longus and tibialis anterior muscles were acquired during maximum isometric contractions using bipolar electrodes. The electrode positions for each muscle were first selected according to SENIAM recommendations, and then other positions were located along the length of the muscle, above and below the SENIAM site. The raw signal (density), the linear envelopes, the RMS value, the motor point site, and the position of the IZ and its shift during dynamic contractions were taken into account to select and compare electrode positions. For vastus lateralis and peroneus longus, the best sites were 66% and 25% of muscle length, respectively (similar to the SENIAM location). The position of the tibialis anterior electrodes presented the best signal at 47.5% of its length (different from the SENIAM location). The position of the gastrocnemius medialis electrodes was at 38% of its length; SENIAM does not specify a precise location for signal acquisition from this muscle. The proposed method should be considered as another methodological step in every EMG study to guarantee the quality of the signal and subsequent human movement interpretations. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Background: We report the validation of a method for the determination of acetaldehyde, acetone, methanol, and ethanol in biological fluids using manual headspace sample introduction and an acetonitrile internal standard. Method: This method uses a capillary column (length = 30 m, I.D. = 0.25 mm, film thickness = 0.25 µm) installed in a gas chromatography-flame ionization detector (GC-FID) apparatus with a run time of 7.5 minutes. Results: Analysis of the retention times and the resolution of the analyte peaks demonstrated excellent separation without widening of the peaks. Precision and accuracy were good (interassay precision < 15% and recovery between 85% and 115%) in both blood and urine. Conclusion: The method was linear (r > 0.99) over the analytical measurement range (AMR) of each analyte.
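Internal-standard quantification of the kind validated here reduces to a linear fit of analyte/internal-standard peak-area ratios against concentration, inverted at measurement time. A generic sketch (the column conditions and acceptance limits come from the abstract; everything in the code, including the numbers, is illustrative):

```python
import numpy as np

def calibrate(conc, area_analyte, area_istd):
    """Least-squares calibration line through the analyte/IS
    peak-area ratios measured at known concentrations."""
    ratio = np.asarray(area_analyte) / np.asarray(area_istd)
    slope, intercept = np.polyfit(conc, ratio, 1)
    return slope, intercept

def quantify(area_analyte, area_istd, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (area_analyte / area_istd - intercept) / slope
```

Ratioing against the acetonitrile internal standard cancels injection-volume and detector-drift effects, which is why linearity is assessed on the ratio rather than on the raw peak area.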
Abstract:
It is difficult to precisely measure articular arc movement in newborns using a goniometer. This article proposes an objective method based on trigonometry for the evaluation of lower limb abduction. With the newborn aligned in the dorsal decubitus position, 2 points are marked at the level of the medial malleolus, one on the sagittal line and the other at the end of the abduction. Using the straight line between these 2 points and a line from the medial malleolus to the reference point at the anterior superior iliac spine or umbilical scar, an isosceles triangle is drawn, and half of the abduction angle is obtained from its sine. Twenty healthy full-term newborns comprise the study cohort. Intersubject and intrasubject variability among the abduction angle values (mean [SD], 37 [4] degrees) is low. This method is advantageous because the measurement is precise and because the sine can be used without approximation.
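The trigonometric step can be written out directly: the two equal sides of the isosceles triangle have the malleolus-to-reference length r, the chord joins the 2 marked points, and the full abduction angle is twice the arcsine of half the chord over r. A sketch under those assumptions (the numbers in the usage are illustrative):

```python
import math

def abduction_angle(chord, radius):
    """Full abduction angle in degrees, from the chord between the
    two malleolus marks and the distance `radius` from the marks to
    the reference point (the equal sides of the isosceles triangle)."""
    half_angle = math.asin(chord / (2.0 * radius))
    return math.degrees(2.0 * half_angle)
```

Only two distances need to be measured with a ruler, which is why no goniometer (and no small-angle approximation) is required.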
Abstract:
Purpose: The purpose of this study was to evaluate the amount of dentifrice applied to the toothbrush by school children using a liquid dentifrice (drop technique), when compared to toothpaste. Materials and Methods: A total of 178 school children (4-8 years old) from two cities in Brazil (Bauru and Bariri) participated in the present two-part crossover study. Children from Bauru received training regarding tooth-brushing techniques and use of dentifrice before data collection. In each phase, the amount of toothpaste or liquid dentifrice applied by the children to the toothbrush was measured, using a portable analytical balance (+/- 0.01 g). Data were tested by analysis of covariance (Ancova) and linear regression (p < 0.05). Results: The mean (+/- standard deviation) amounts of toothpaste and liquid dentifrice applied to the toothbrushes by children from Bauru were 0.41 +/- 0.20 g and 0.15 +/- 0.06 g, respectively. For children from Bariri, the amounts applied were 0.48 +/- 0.24 g and 0.14 +/- 0.05 g, respectively. The amount of toothpaste applied was significantly larger than the amount of liquid dentifrice for both cities. Children from Bariri applied a significantly larger amount of toothpaste, when compared to those from Bauru. However, for the liquid dentifrice, there was no statistically significant difference between the cities. A significant correlation between the amount of toothpaste applied and the age of the children was verified, but the same was not found for the liquid dentifrice. Conclusion: The use of the drop technique reduced and standardised the amount of dentifrice applied to the toothbrush, which could reduce the risk of dental fluorosis for young children.
Abstract:
In this paper we present the composite Euler method for the strong solution of stochastic differential equations driven by d-dimensional Wiener processes. This method is a combination of the semi-implicit Euler method and the implicit Euler method. At each step either the semi-implicit Euler method or the implicit Euler method is used in order to obtain better stability properties. We give criteria for selecting the semi-implicit Euler method or the implicit Euler method. For the linear test equation, the convergence properties of the composite Euler method depend on the criteria for selecting the methods. Numerical results suggest that the convergence properties of the composite Euler method applied to nonlinear SDEs are the same as those for linear equations. The stability properties of the composite Euler method are shown to be far superior to those of the Euler methods, and numerical results show that the composite Euler method is a very promising method. (C) 2001 Elsevier Science B.V. All rights reserved.
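A rough sketch of the switching idea on the linear test equation dX = aX dt + bX dW: take the fully implicit step when its denominator is safely bounded away from zero, and fall back to the semi-implicit (drift-implicit) step otherwise. The threshold below is an illustrative choice, not the paper's selection criterion:

```python
import numpy as np

def composite_euler(a, b, x0, T, n, rng):
    """Composite Euler for dX = a X dt + b X dW: per step, the
    implicit Euler update x/(1 - a h - b dW) when its denominator
    is safely positive, else the semi-implicit (drift-implicit)
    update (x + b x dW)/(1 - a h)."""
    h = T / n
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        denom_full = 1.0 - a * h - b * dW
        if denom_full > 0.5:                 # illustrative criterion
            x = x / denom_full               # implicit Euler step
        else:
            x = (x + b * x * dW) / (1.0 - a * h)  # semi-implicit step
    return x

# Stiff mean-reverting example: the true solution decays to zero,
# and the composite scheme remains stable despite the large |a|.
rng = np.random.default_rng(0)
paths = [composite_euler(-50.0, 0.5, 1.0, 1.0, 200, rng) for _ in range(100)]
```

An explicit Euler step with the same parameters would multiply x by (1 + a h + b dW) and oscillate; the implicit denominators are what buy the extra stability the abstract reports.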
Abstract:
The anisotropic norm of a linear discrete-time-invariant system measures system output sensitivity to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of input disturbances is not exactly known. This paper develops a method for numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
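For orientation, the H-2 endpoint of the interpolation is the easy part: it follows from a discrete Lyapunov equation, which the small numpy sketch below solves by vectorisation (the full anisotropic norm additionally requires the linked Riccati equations described above; the system (A, B, C) here is illustrative):

```python
import numpy as np

def h2_norm(A, B, C):
    """H2 norm of the discrete-time system x+ = A x + B w, y = C x:
    solve the Lyapunov equation P = A P A^T + B B^T via Kronecker
    vectorisation, then take sqrt(trace(C P C^T))."""
    n = A.shape[0]
    Q = B @ B.T
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.flatten())
    P = vecP.reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

The Kronecker solve is O(n^6) and only suitable for small state dimensions; for larger systems a dedicated Lyapunov solver would be used instead.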
Abstract:
This paper presents a new approach for the design of genuinely finite-length shim and gradient coils, intended for use in magnetic resonance imaging equipment. A cylindrical target region is located asymmetrically, at an arbitrary position within a coil of finite length. A desired target field is specified on the surface of that region, and a method is given that enables winding patterns on the surface of the coil to be designed, to produce the desired field at the inner target region. The method uses a minimization technique combined with regularization, to find the current density on the surface of the coil. The method is illustrated for linear, quadratic and cubic magnetic target fields located asymmetrically within a finite-length coil.
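The minimisation-with-regularisation step has the familiar Tikhonov closed form. A sketch assuming a Biot-Savart-type matrix A, mapping surface current modes to the field at the target points, has already been assembled (A, the names, and the regularisation weight are all illustrative):

```python
import numpy as np

def solve_current_density(A, b_target, lam):
    """Regularised least squares for coil winding design:
    minimise ||A j - b_target||^2 + lam * ||j||^2 over the
    current-density coefficients j, via the normal equations
    (A^T A + lam I) j = A^T b_target."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_target)
```

The regularisation weight lam trades field accuracy in the target region against smoothness (and hence manufacturability) of the resulting winding pattern, which is why it must be tuned rather than driven to zero.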
Abstract:
The numerical implementation of the complex image approach for the Green's function of a mixed-potential integral-equation formulation is examined and is found to be limited to low values of k(0)rho (in this context k(0)rho = 2 pi rho/lambda(0), where rho is the distance between the source and the field points of the Green's function and lambda(0) is the free space wavelength). This is a clear limitation for problems of large dimension or high frequency where this limit is easily exceeded. This paper examines the various strategies and proposes a hybrid method whereby most of the above problems can be avoided. An efficient integral method that is valid for large k(0)rho is combined with the complex image method in order to take advantage of the relative merits of both schemes. It is found that a wide overlapping region exists between the two techniques allowing a very efficient and consistent approach for accurately calculating the Green's functions. In this paper, the method developed for the computation of the Green's function is used for planar structures containing both lossless and lossy media.
Abstract:
A finite-element method is used to study the elastic properties of random three-dimensional porous materials with highly interconnected pores. We show that Young's modulus, E, is practically independent of Poisson's ratio of the solid phase, nu(s), over the entire solid fraction range, and Poisson's ratio, nu, becomes independent of nu(s) as the percolation threshold is approached. We represent this behaviour of nu in a flow diagram. This interesting but approximate behaviour is very similar to the exactly known behaviour in two-dimensional porous materials. In addition, the behaviour of nu versus nu(s) appears to imply that information in the dilute porosity limit can affect behaviour in the percolation threshold limit. We summarize the finite-element results in terms of simple structure-property relations, instead of tables of data, to make it easier to apply the computational results. Without using accurate numerical computations, one is limited to various effective medium theories and rigorous approximations like bounds and expansions. The accuracy of these equations is unknown for general porous media. To verify a particular theory it is important to check that it predicts both isotropic elastic moduli, i.e. prediction of Young's modulus alone is necessary but not sufficient. The subtleties of Poisson's ratio behaviour actually provide a very effective method for showing differences between the theories and demonstrating their ranges of validity. We find that for moderate- to high-porosity materials, none of the analytical theories is accurate and, at present, numerical techniques must be relied upon.
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
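The moving least-squares building block can be illustrated in one dimension: at each evaluation point a locally weighted polynomial is fitted to nearby nodal values, and with a quadratic basis the fit reproduces quadratic data to round-off, echoing the exact-reproduction property reported above. The Gaussian weight and all names are illustrative choices, not the paper's formulation:

```python
import numpy as np

def mls_fit(x_nodes, f_nodes, x_eval, h=0.5, degree=2):
    """1-D moving least-squares approximation with a polynomial
    basis of the given degree and a Gaussian weight of width h."""
    out = []
    for x in np.atleast_1d(np.asarray(x_eval, dtype=float)):
        w = np.exp(-((x_nodes - x) / h) ** 2)
        # Weighted Vandermonde least squares, centred at x so that
        # the constant coefficient is the fitted value at x.
        P = np.vander(x_nodes - x, degree + 1, increasing=True)
        W = np.diag(w)
        coef = np.linalg.solve(P.T @ W @ P, P.T @ W @ f_nodes)
        out.append(coef[0])
    return np.array(out)
```

In the contact setting the same construction is applied to signed distance samples at surface nodes, so the gap function inherits the smoothness of the MLS fit rather than the faceting of the mesh.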
Abstract:
In this paper the diffusion and flow of carbon tetrachloride, benzene and n-hexane through a commercial activated carbon is studied by a differential permeation method. The range of pressure is covered from very low pressure to a pressure range where significant capillary condensation occurs. Helium as a non-adsorbing gas is used to determine the characteristics of the porous medium. For adsorbing gases and vapors, the motion of adsorbed molecules in small pores gives rise to a sharp increase in permeability at very low pressures. The interplay between a decreasing behavior in permeability due to the saturation of small pores with adsorbed molecules and an increasing behavior due to viscous flow in larger pores with pressure could lead to a minimum in the plot of total permeability versus pressure. This phenomenon is observed for n-hexane at 30 degrees C. At relative pressures of 0.1-0.8, where the gaseous viscous flow dominates, the permeability is a linear function of pressure. Since activated carbon has a wide pore size distribution, the mobility mechanism of these adsorbed molecules is different from pore to pore. In very small pores where adsorbate molecules fill the pore the permeability decreases with an increase in pressure, while in intermediate pores the permeability of such transport increases with pressure due to the increasing build-up of layers of adsorbed molecules. For even larger pores, the transport is mostly due to diffusion and flow of free molecules, which gives rise to linear permeability with respect to pressure. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
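The expected-rates construction starts from the equilibrium distribution of the original chain. A small numpy sketch for a finite generator, where the "expected rate" of a transition i -> j is taken as pi_i * q_ij (an illustrative reading of the idea, not the paper's two formulations):

```python
import numpy as np

def equilibrium(Q):
    """Stationary distribution of a CTMC with generator Q (rows sum
    to 0): solve pi @ Q = 0 subject to sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalisation row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def expected_rates(Q):
    """Equilibrium expected rate of each transition i -> j: the
    off-diagonal entries of pi_i * q_ij, the quantities from which
    a simpler replacement transition structure could be built."""
    pi = equilibrium(Q)
    return pi[:, None] * Q
```

For a genuinely large chain the equilibrium distribution of the full model is of course unavailable; the point of the method is to compute such expected rates on tractable components and reassemble them into a simpler chain.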