924 results for error model
Abstract:
The three widely used pull-in theoretical models (i.e., the one-dimensional lumped model, the linear superposition model and the planar model) are compared with the nonlinear beam model in this paper, considering both cantilever and fixed-fixed micro- and nano-switches. It is found that the error in the pull-in parameters between the one-dimensional lumped model and the nonlinear beam model is large, because the lumped model evaluates the electrostatic force at the maximum deflection along the beam, where the denominator of the electrostatic force is smallest. Since both the linear superposition model and the slender planar model account for the variation of the electrostatic force with the beam's deflection, these two models are essentially of the same type and show only a small error in the pull-in parameters relative to the nonlinear beam model; the error introduced by these two models is attributable to the boundary conditions not being completely satisfied when the deflection is integrated numerically.
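For reference, a minimal sketch of the one-dimensional lumped model mentioned above, assuming a parallel-plate electrode of area $A$, initial gap $g_0$, effective spring stiffness $k$ and permittivity $\varepsilon_0$ (the notation is illustrative, not the paper's):

$$kx = \frac{\varepsilon_0 A V^2}{2(g_0 - x)^2},$$

which loses stability at $x = g_0/3$, giving the lumped pull-in voltage $V_{PI} = \sqrt{8 k g_0^3 / (27\,\varepsilon_0 A)}$; the $(g_0 - x)^2$ denominator is what makes the force estimate so sensitive to evaluating it at the maximum deflection.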
Abstract:
This paper uses a new method for describing dynamic comovement and persistence in economic time series which builds on the contemporaneous forecast error method developed in den Haan (2000). This data description method is then used to address issues in New Keynesian model performance in two ways. First, well-known data patterns, such as output and inflation leads and lags and inflation persistence, are decomposed into forecast-horizon components to give a more complete description of the data patterns. These results show that the well-known lead and lag patterns between output and inflation arise mostly at the medium-term forecast horizons. Second, the data summary method is used to investigate a rich New Keynesian model with many modeling features to see which of these features can reproduce the lead, lag and persistence patterns seen in the data. Many studies have suggested that a backward-looking component in the Phillips curve is needed to match the data, but our simulations show this is not necessary. We show that a simple general equilibrium model with persistent IS curve shocks and persistent supply shocks can reproduce the lead, lag and persistence patterns seen in the data.
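As a hedged sketch of the forecast-error idea behind this method (in the spirit of den Haan (2000); notation illustrative): for each forecast horizon $k$, compute the $k$-step-ahead forecast errors of output and inflation from an estimated VAR, $u^{y}_{t,k} = y_{t+k} - E_t\,y_{t+k}$ and $u^{\pi}_{t,k} = \pi_{t+k} - E_t\,\pi_{t+k}$, and summarize comovement at that horizon by their correlation, $\rho(k) = \operatorname{corr}\!\left(u^{y}_{t,k}, u^{\pi}_{t,k}\right)$, so that the lead-lag structure can be read off as $\rho(k)$ varies with $k$.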
Abstract:
This paper proposes an extended version of the basic New Keynesian monetary (NKM) model which incorporates revision processes for output and inflation data in order to assess the importance of data revisions for the estimated monetary policy rule parameters and the transmission of policy shocks. Our empirical evidence, based on a structural econometric approach, suggests that although the initial announcements of output and inflation are not rational forecasts of revised output and inflation data, ignoring the presence of non-well-behaved revision processes may not be a serious drawback in the analysis of monetary policy in this framework. However, the transmission of inflation-push shocks is largely affected by considering data revisions. The latter is especially true when the nominal stickiness parameter is estimated taking data revision processes into account.
Abstract:
In this paper, the gamma-gamma probability distribution is used to model turbulent channels. The bit error rate (BER) performance of free-space optical (FSO) communication systems employing on-off keying (OOK) or subcarrier binary phase-shift keying (BPSK) modulation is derived. A tip-tilt adaptive optics system is also incorporated into an FSO system using the above modulation formats. The tip-tilt compensation can alleviate the effects of atmospheric turbulence and thereby improve the BER performance. The improvement differs for different turbulence strengths and modulation formats. In addition, the BER performance of communication systems employing subcarrier BPSK modulation is much better than that of comparable systems employing OOK modulation, with or without tip-tilt compensation.
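For context, the gamma-gamma irradiance density commonly used in such turbulence models is, with $\alpha$ and $\beta$ the effective numbers of large- and small-scale eddies and $K_\nu$ the modified Bessel function of the second kind (notation illustrative):

$$f_I(I) = \frac{2(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\Gamma(\beta)}\, I^{\frac{\alpha+\beta}{2}-1}\, K_{\alpha-\beta}\!\left(2\sqrt{\alpha\beta I}\right), \qquad I>0,$$

and the average BER then follows by averaging the conditional error probability over the fading, $P_b = \int_0^\infty P_b(e \mid I)\, f_I(I)\, dI$.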
Abstract:
Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In this thesis, we propose efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.
For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh grid. Furthermore, we provide an error analysis and show that the solution to the effective equation, plus a correction term, is close to the original multiscale solution.
For the stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
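For reference, the expansion referred to above takes the generic form (notation illustrative): a random field $a(x,\omega)$ with mean $\bar a(x)$ and covariance eigenpairs $(\lambda_i, \phi_i)$ is written as

$$a(x,\omega) = \bar a(x) + \sum_{i \ge 1} \sqrt{\lambda_i}\, \phi_i(x)\, \xi_i(\omega),$$

and truncating the sum after the leading terms extracts the dominant stochastic modes of the quantity of interest.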
For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the reduction in computational cost in the numerical examples.
Abstract:
I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline-protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.
II. Anti-aspiryl antibodies were produced in rabbits using 3 protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.
Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.
III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.
IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide and mixed anhydride coupling. These antigens were injected into rabbits and guinea pigs and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injections or topical application of p-aminosalicylic acid or p-aminosalicylate.
Abstract:
Only the first-order Doppler frequency shift is considered in current laser dual-frequency interferometers; however, the second-order Doppler frequency shift should be considered when the measurement corner cube (MCC) moves at high or variable velocity, because it can cause considerable error. The influence of the second-order Doppler frequency shift on interferometer error is studied in this paper, and a model of the second-order Doppler error is put forward. Moreover, the model has been simulated for both high-velocity and variable-velocity motion. The simulated results show that the second-order Doppler error is proportional to the velocity of the MCC when it moves with uniform motion and the measured displacement is fixed. When the MCC moves with variable motion, the second-order Doppler error depends not only on velocity but also on acceleration. When the initial velocity is zero, the second-order Doppler error caused by an acceleration of 0.6 g can be up to 2.5 nm in 0.4 s, which is not negligible in nanometric measurement. Moreover, when the initial velocity is nonzero, accelerated motion may result in a greater error and decelerated motion may result in a smaller error.
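A hedged sketch of where the second-order term comes from (not necessarily the paper's derivation): for a corner cube approaching the source at speed $v$, the two-way Doppler-shifted frequency expands as

$$f' = f_0\,\frac{1 + v/c}{1 - v/c} \approx f_0\left(1 + \frac{2v}{c} + \frac{2v^2}{c^2} + \cdots\right),$$

so the familiar first-order shift is $2 v f_0 / c$, while the neglected term grows as $(v/c)^2$ and accumulates into a displacement error at high or strongly varying velocity.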
Abstract:
In previous papers (S. Adhikari and J. Woodhouse 2001 Journal of Sound and Vibration 243, 43-61; 63-88; S. Adhikari and J. Woodhouse 2002 Journal of Sound and Vibration 251, 477-490) methods were proposed to obtain the coefficient matrix for a viscous damping model or a non-viscous damping model with an exponential relaxation function, from measured complex natural frequencies and modes. In all these works, it has been assumed that exact complex natural frequencies and complex modes are known. In reality, this will not be the case. The purpose of this paper is to analyze the sensitivity of the identified damping matrices to measurement errors. By using numerical and analytical studies it is shown that the proposed methods can indeed be expected to give useful results from moderately noisy data provided a correct damping model is selected for fitting. Indications are also given of what level of noise in the measured modal properties is needed to mask the true physical behaviour.
Abstract:
We report a Monte Carlo representation of the long-term inter-annual variability of monthly snowfall on a detailed (1 km) grid of points throughout the Southwest. An extension of the local climate model of the southwestern United States (Stamm and Craig 1992) provides spatially based estimates of the mean and variance of monthly temperature and precipitation. The mean is the expected value from a canonical regression using independent variables that represent controls on climate in this area, including orography. Variance is computed as the standard error of the prediction and provides site-specific measures of (1) natural sources of variation and (2) errors due to limitations of the data and poor distribution of climate stations. Simulation of monthly temperature and precipitation over a sequence of years is achieved by drawing from a bivariate normal distribution. The conditional expectation of precipitation, given temperature, in each month is the basis of a numerical integration of the normal probability distribution of log precipitation below a threshold temperature (3 °C) to determine snowfall as a percentage of total precipitation. Snowfall predictions are tested at stations for which long-term records are available. At Donner Memorial State Park (elevation 1811 meters), a 34-year simulation, matching the length of the instrumental record, is within 15 percent of the observed mean annual snowfall. We also compute the resulting snowpack using a variation of the model of Martinec et al. (1983). This allows additional tests by examining the spatial patterns of predicted snowfall and snowpack and their hydrologic implications.
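A minimal sketch of the per-month simulation step described above, with the threshold integration replaced by a Monte Carlo estimate; all parameter names and values are illustrative assumptions, not taken from the paper:

    # Hedged sketch: Monte Carlo estimate of the monthly snowfall fraction from a
    # bivariate normal model of temperature and log precipitation.
    # All parameters are illustrative, not values from the paper.
    import numpy as np

    def snow_fraction(mu_T, mu_logP, sd_T, sd_logP, rho, t_snow=3.0, n=100_000, seed=0):
        rng = np.random.default_rng(seed)
        cov = [[sd_T**2, rho * sd_T * sd_logP],
               [rho * sd_T * sd_logP, sd_logP**2]]
        T, logP = rng.multivariate_normal([mu_T, mu_logP], cov, size=n).T
        P = np.exp(logP)                      # precipitation amounts
        return P[T < t_snow].sum() / P.sum()  # share of precipitation falling as snow

    # Example: a cool, wet month at a hypothetical mountain site.
    print(snow_fraction(mu_T=1.5, mu_logP=3.0, sd_T=2.5, sd_logP=0.6, rho=-0.3))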
Abstract:
We present a method to integrate environmental time series into stock assessment models and to test the significance of correlations between population processes and the environmental time series. Parameters that relate the environmental time series to population processes are included in the stock assessment model, and likelihood ratio tests are used to determine whether the parameters significantly improve the fit to the data. Two approaches are considered to integrate the environmental relationship. In the environmental model, the population dynamics process (e.g. recruitment) is proportional to the environmental variable, whereas in the environmental model with process error it is proportional to the environmental variable but is also allowed an additional temporal variation (process error) constrained by a log-normal distribution. The methods are tested using simulation analysis and compared to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. In the traditional method, the estimates of recruitment were provided by a model that allowed the recruitment only a temporal variation constrained by a log-normal distribution. We illustrate the methods by applying them to test the statistical significance of the correlation between sea-surface temperature (SST) and recruitment to the snapper (Pagrus auratus) stock in the Hauraki Gulf–Bay of Plenty, New Zealand. Simulation analyses indicated that the integrated approach with additional process error is superior to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. The results suggest that, for the snapper stock, recruitment is positively correlated with SST at the time of spawning.
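A hedged sketch of the testing step (notation illustrative, not the paper's exact parameterization): with the environmental link written, for example, as $R_y = q\,E_y\, e^{\varepsilon_y}$, $\varepsilon_y \sim N(0, \sigma^2)$ in the process-error variant, the contribution of the environmental parameters is judged with the likelihood-ratio statistic

$$\Lambda = 2\left(\ln L_{\mathrm{with\ env.}} - \ln L_{\mathrm{without\ env.}}\right),$$

compared against a $\chi^2$ distribution with degrees of freedom equal to the number of additional parameters.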
Abstract:
We have formulated a model for analyzing the measurement error in marine survey abundance estimates by using data from parallel surveys (trawl haul or acoustic measurement). The measurement error is defined as the component of the variability that cannot be explained by covariates such as temperature, depth, bottom type, etc. The method presented is general, but we concentrate on bottom trawl catches of cod (Gadus morhua). Catches of cod from 10 parallel trawling experiments in the Barents Sea with a total of 130 paired hauls were used to estimate the measurement error in trawl hauls. Based on the experimental data, the measurement error is fairly constant in size on the logarithmic scale and is independent of location, time, and fish density. Compared with the total variability of the winter and autumn surveys in the Barents Sea, the measurement error is small (approximately 2–5%, on the log scale, in terms of variance of catch per towed distance). Thus, the cod catch rate is a fairly precise measure of fish density at a given site at a given time.
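One hedged way to see how paired hauls identify the measurement error (illustrative, not necessarily the paper's exact formulation): if the log catches of a pair taken at the same site satisfy $\log C_{i1} = \mu_i + \varepsilon_{i1}$ and $\log C_{i2} = \mu_i + \varepsilon_{i2}$ with independent errors $\varepsilon_{ij} \sim N(0, \sigma_m^2)$, then $\operatorname{Var}(\log C_{i1} - \log C_{i2}) = 2\sigma_m^2$, so the within-pair differences across the 130 paired hauls estimate $\sigma_m^2$ directly, without requiring knowledge of the true local density $\mu_i$.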
Abstract:
The majority of computational studies of confined explosion hazards apply simple and inaccurate combustion models, requiring ad hoc corrections to obtain realistic flame shapes and often predicting overpressures that are in error by an order of magnitude. This work describes the application of a laminar flamelet model to a series of two-dimensional test cases. The model is computationally efficient, applying an algebraic expression to calculate the flame surface area, an empirical correlation for the laminar flame speed, and a novel unstructured, solution-adaptive numerical grid system which allows important features of the solution to be resolved close to the flame. Accurate flame shapes are predicted, the correct burning rate is predicted near the walls, and an improvement in the predicted overpressures is obtained. However, in these fully turbulent calculations the overpressures are still too high and the flame arrival times too short, indicating the need for a model of the early laminar burning phase. Owing to the computational expense, it is unrealistic to model a laminar flame in the complex geometries involved, and therefore a pragmatic approach is employed which constrains the flame to propagate at the laminar flame speed. Transition to turbulent burning occurs at a specified turbulent Reynolds number. With the laminar-phase model included, the predicted flame arrival times increase significantly, but are still too short. However, this has no significant effect on the overpressures, which are predicted accurately for a baffled channel test case where rapid transition occurs once the flame reaches the first pair of baffles. In a channel with obstacles on the centreline, transition is more gradual and the accuracy of the predicted overpressures is reduced. However, although the accuracy is still less than desirable in some cases, it is much better than the order-of-magnitude error previously expected.
Abstract:
A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to be able to train rapidly on connected speech data and to recognize further speech data with a label error rate of 0.68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern discrimination properties. Kanerva presented his theory of a self-propagating search in 1984, and showed theoretically that large-scale versions of his model would have powerful pattern matching properties. This paper describes how the design for the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form may be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. In order to recognize speech features in different contexts it is necessary for a network to be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem, and by the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. A second alternative is used in the modified Kanerva model: a non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable, and a single-layer network may then be used to perform the recognition. The advantage of this solution over the one using multi-layer networks lies in the greater power and speed of the single-layer network training algorithm.
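A minimal sketch of the two-stage idea described above: a fixed non-linear expansion into a higher-dimensional space followed by a trainable single-layer classifier. The specific expansion used here (random binary addresses activated within a Hamming radius, in the spirit of sparse distributed memory) and all sizes are illustrative assumptions, not the paper's exact design:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_expansion(n_in, n_hidden, radius):
        # Fixed random binary "address" vectors; these are never trained.
        addresses = rng.integers(0, 2, size=(n_hidden, n_in))
        def expand(x):
            # A hidden unit fires if its address lies within a Hamming radius of the input.
            dist = np.abs(addresses - x).sum(axis=1)
            return (dist <= radius).astype(float)
        return expand

    def train_single_layer(expand, X, y, n_classes, epochs=10, lr=0.1):
        # Only this single layer of weights is trained (error-driven perceptron rule).
        W = np.zeros((n_classes, expand(X[0]).size))
        for _ in range(epochs):
            for x, label in zip(X, y):
                h = expand(x)
                pred = int(np.argmax(W @ h))
                if pred != label:
                    W[label] += lr * h
                    W[pred] -= lr * h
        return W

    # Illustrative use on random binary "feature frames" with 3 classes.
    X = rng.integers(0, 2, size=(200, 64))
    y = rng.integers(0, 3, size=200)
    expand = make_expansion(n_in=64, n_hidden=1024, radius=28)
    W = train_single_layer(expand, X, y, n_classes=3)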
Abstract:
This study compared adaptation in novel force fields where trajectories were initially either stable or unstable to elucidate the processes of learning novel skills and adapting to new environments. Subjects learned to move in a null force field (NF), which was unexpectedly changed either to a velocity-dependent force field (VF), which resulted in perturbed but stable hand trajectories, or a position-dependent divergent force field (DF), which resulted in unstable trajectories. With practice, subjects learned to compensate for the perturbations produced by both force fields. Adaptation was characterized by an initial increase in the activation of all muscles followed by a gradual reduction. The time course of the increase in activation was correlated with a reduction in hand-path error for the DF but not for the VF. Adaptation to the VF could have been achieved solely by formation of an inverse dynamics model and adaptation to the DF solely by impedance control. However, indices of learning, such as hand-path error, joint torque, and electromyographic activation and deactivation suggest that the CNS combined these processes during adaptation to both force fields. Our results suggest that during the early phase of learning there is an increase in endpoint stiffness that serves to reduce hand-path error and provides additional stability, regardless of whether the dynamics are stable or unstable. We suggest that the motor control system utilizes an inverse dynamics model to learn the mean dynamics and an impedance controller to assist in the formation of the inverse dynamics model and to generate needed stability.
Abstract:
This study investigated the neuromuscular mechanisms underlying the initial stage of adaptation to novel dynamics. A destabilizing velocity-dependent force field (VF) was introduced for sets of three consecutive trials. Between sets, a random number (4-8) of null-field trials was interposed, in which the VF was inactivated. This prevented subjects from learning the novel dynamics, making it possible to repeatedly recreate the initial adaptive response. We were able to investigate detailed changes in neural control between the first, second and third VF trials. We identified two feedforward control mechanisms, which were initiated on the second VF trial and resulted in a 50% reduction in the hand-path error. Responses to disturbances encountered on the first VF trial were feedback in nature, i.e. reflexes and voluntary correction of errors. However, on the second VF trial, muscle activation patterns were modified in anticipation of the effects of the force field. Feedforward cocontraction of all muscles was used to increase the viscoelastic impedance of the arm. While stiffening the arm, subjects also exerted a lateral force to counteract the perturbing effect of the force field. These anticipatory actions indicate that the central nervous system responds rapidly to counteract hitherto unfamiliar disturbances by a combination of increased viscoelastic impedance and the formation of a crude internal dynamics model.