976 results for CORRECTION MODELS
Abstract:
This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. In the paper we investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power. Underspecification of the lag length results in size distortion. The performance of the bootstrap ECM cointegration test deteriorates if the correct lag length is not used in the ECM. The bootstrap ECM cointegration test is therefore not robust to model misspecification.
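The test and its bootstrap critical values can be illustrated with a short simulation. The sketch below is a hedged illustration only: it assumes a single-equation ECM with a known cointegrating vector (y − x), one lag, and a simple residual resampling scheme under the null of no cointegration, which is not the paper's exact Monte Carlo design.

```python
# Hedged sketch: residual-based bootstrap critical values for the ECM t-test.
# Assumes a known cointegrating vector (y - x) and one lag; illustrative only.
import numpy as np

def ols(X, y):
    """OLS coefficients, residuals, and t-statistics."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, resid, beta / se

def ecm_tstat(y, x, lags=1):
    """t-statistic on alpha in
    dy_t = c + alpha*(y_{t-1} - x_{t-1}) + sum_i g_i*dy_{t-i} + sum_i d_i*dx_{t-i} + e_t."""
    dy, dx = np.diff(y), np.diff(x)
    ect = (y - x)[lags:-1]                                    # lagged equilibrium error
    cols = [np.ones_like(ect), ect]
    cols += [dy[lags - i:len(dy) - i] for i in range(1, lags + 1)]
    cols += [dx[lags - i:len(dx) - i] for i in range(1, lags + 1)]
    _, resid, t = ols(np.column_stack(cols), dy[lags:])
    return t[1], resid

def bootstrap_critical_value(y, x, lags=1, B=999, level=0.05, seed=0):
    """Empirical lower-tail critical value of the ECM t-statistic under H0:
    no cointegration, y rebuilt as a random walk from resampled residuals (x held fixed)."""
    rng = np.random.default_rng(seed)
    t_obs, resid = ecm_tstat(y, x, lags)
    stats = []
    for _ in range(B):
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        y_b = np.concatenate(([y[0]], y[0] + np.cumsum(e)))   # random walk under H0
        stats.append(ecm_tstat(y_b, x, lags)[0])
    return t_obs, np.quantile(stats, level)                   # reject H0 if t_obs < critical value
```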
Abstract:
The conformation of (Pro-Gly-Phe)n in trifluoroethanol was investigated using CD, nmr and ir techniques. After making appropriate correction for the contribution of the phenylalanine chromophore to the observed CD spectra of the polytripeptide at several temperatures, it is found that (Pro-Gly-Phe)n can exist in a partially triple-helical conformation in this solvent at low temperatures. The nmr and ir data support this conclusion. In conjunction with recent theoretical studies, our data offer an explanation for the preferential occurrence of the Phe residue in position 2 of the tripeptide sequence Gly-R2-R3 in collagen.
Abstract:
A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to three-dimensional, homogeneous, isotropic, decaying turbulence, is considered. Two kinetic schemes, (1) a one-step and (2) a reduced four-step reaction mechanism, are considered for the chemistry. An alternative formulation is developed for closure of the mean chemical source term.
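For orientation, a minimal sketch of how conditional statistics can be extracted from DNS-like samples is shown below: a generic first-order, binned conditional average of the source term on mixture fraction. The surrogate data and bin count are invented for illustration; this is the standard first-order closure check, not the alternative formulation developed in the paper.

```python
# Hedged sketch: bin the chemical source term on mixture fraction to form the
# conditional mean <w | Z>, then recover the unconditional mean as the
# PDF-weighted sum (first-order, CMC-style closure check). Illustrative only.
import numpy as np

def conditional_mean(Z, w, n_bins=50):
    """Z: mixture-fraction samples in [0, 1]; w: chemical source samples (same shape)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(Z.ravel(), edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    cond = np.bincount(idx, weights=w.ravel(), minlength=n_bins) / np.maximum(counts, 1)
    pdf = counts / counts.sum()
    return cond, pdf

# Closure check with surrogate data: PDF-weighted conditional mean vs direct mean.
Z = np.random.default_rng(1).beta(2.0, 5.0, size=100_000)   # surrogate mixture fraction
w = np.exp(-((Z - 0.3) / 0.05) ** 2)                        # surrogate source term
cond, pdf = conditional_mean(Z, w)
print((cond * pdf).sum(), w.mean())                         # should be close
```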
Abstract:
General circulation models (GCMs) are routinely used to simulate future climatic conditions. However, rainfall outputs from GCMs are highly uncertain in preserving temporal correlations, frequencies, and intensity distributions, which limits their direct application for downscaling and hydrological modeling studies. To address these limitations, raw outputs of GCMs or regional climate models are often bias corrected using past observations. In this paper, a methodology is presented for using a nested bias-correction approach to predict the frequencies and occurrences of severe droughts and wet conditions across India for a 48-year period (2050-2099) centered at 2075. Specifically, monthly time series of rainfall from 17 GCMs are used to draw conclusions for extreme events. An increasing trend in the frequencies of droughts and wet events is observed. The northern part of India and coastal regions show maximum increase in the frequency of wet events. Drought events are expected to increase in the west central, peninsular, and central northeast regions of India. (C) 2013 American Society of Civil Engineers.
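For readers unfamiliar with the approach, a highly simplified two-level (monthly then annual) version of such a bias correction is sketched below. The published nested bias-correction procedure also corrects lag-1 autocorrelation and works on standardised anomalies at several nested time scales, which is omitted here; array shapes and names are illustrative.

```python
# Hedged sketch of a nested (monthly-then-annual) rainfall bias correction.
import numpy as np

def nested_bias_correction(gcm, obs):
    """gcm, obs: monthly rainfall arrays of shape (n_years, 12).
    Returns a bias-corrected GCM series of the same shape."""
    gcm, obs = np.asarray(gcm, float), np.asarray(obs, float)

    # 1) Monthly nesting: match each calendar month's mean and standard
    #    deviation to the observed ones.
    g_std = np.where(gcm.std(axis=0) > 0, gcm.std(axis=0), 1.0)
    monthly = (gcm - gcm.mean(axis=0)) / g_std * obs.std(axis=0) + obs.mean(axis=0)

    # 2) Annual nesting: rescale each year so annual totals reproduce the observed
    #    annual mean and standard deviation, keeping the within-year pattern.
    ann = monthly.sum(axis=1)
    ann_corr = (ann - ann.mean()) / ann.std() * obs.sum(axis=1).std() + obs.sum(axis=1).mean()
    factor = np.where(ann != 0, ann_corr / ann, 1.0)

    return np.clip(monthly * factor[:, None], 0.0, None)     # rainfall cannot be negative
```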
Abstract:
Eleven GCMs (BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCMI, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5 degrees x 2.5 degrees) for the climate variable `precipitation rate' using 5 performance indicators. Performance indicators used were the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias correction methodology to remove the systematic biases in GCM simulations. The Entropy method was employed to obtain weights of these 5 indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method for Enrichment Evaluation). An equal weight scenario (assigning a weight of 0.2 to each indicator) was also used to rank the GCMs. An effort was also made to rank GCMs for 4 river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the Entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCCM2.0, UKMO-HADCM3, MPIECHAM4 and UKMO-HADGEM1 is suitable for India. The methodology proposed can be extended to rank GCMs for any selected region.
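A compact sketch of the two ranking ingredients is given below. The scores are hypothetical, all indicators are assumed "larger is better" (indicators such as NRMSE would first be transformed), and the "usual" preference function stands in for whatever generalised criterion was actually used; this is an illustration of entropy weighting and PROMETHEE-2 net flows, not a reproduction of the study.

```python
# Hedged sketch: entropy weights plus PROMETHEE-2 net-flow ranking.
import numpy as np

def entropy_weights(X):
    """X: (n_alternatives, n_criteria) payoff matrix with non-negative entries."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(np.where(P > 0, P, 1.0))).sum(axis=0)   # entropy per criterion
    d = 1.0 - e                                                  # degree of diversification
    return d / d.sum()

def promethee2_net_flows(X, w):
    """Net outranking flows with the 'usual' preference function (1 if strictly better)."""
    n = X.shape[0]
    pref = (X[:, None, :] > X[None, :, :]).astype(float)   # P_j(a, b)
    pi = (pref * w).sum(axis=2)                             # aggregated preference index
    phi_plus = pi.sum(axis=1) / (n - 1)
    phi_minus = pi.sum(axis=0) / (n - 1)
    return phi_plus - phi_minus

# Hypothetical example: 4 GCMs scored on 3 indicators (higher = better).
scores = np.array([[0.8, 0.6, 0.7],
                   [0.5, 0.9, 0.6],
                   [0.7, 0.7, 0.8],
                   [0.4, 0.5, 0.5]])
w = entropy_weights(scores)
ranking = np.argsort(-promethee2_net_flows(scores, w))      # indices of GCMs, best first
```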
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
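The flavour of such an SNR-dependent correction can be sketched as a Wiener-type gain applied harmonic by harmonic, followed by integration to velocity. The flat noise spectrum and the specific gain form below are assumptions made for illustration, not the thesis's estimated noise model or its spectral substitution method.

```python
# Hedged sketch: attenuate each harmonic of a digitized accelerogram by a
# Wiener-type gain based on an assumed (here flat) noise power level.
import numpy as np

def snr_noise_filter(acc, noise_power):
    """acc: acceleration samples; noise_power: assumed noise power per harmonic."""
    A = np.fft.rfft(acc)
    signal_power = np.abs(A) ** 2 / len(acc)
    snr = np.maximum(signal_power - noise_power, 0.0) / noise_power
    gain = snr / (1.0 + snr)            # ~1 where the signal dominates, ~0 where noise does
    return np.fft.irfft(A * gain, n=len(acc))

def integrate(acc, dt):
    """Cumulative trapezoidal integration (acceleration -> velocity)."""
    v = np.zeros_like(acc)
    v[1:] = np.cumsum(0.5 * (acc[1:] + acc[:-1])) * dt
    return v
```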
Abstract:
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction, without a priori assumptions about the target or the need for a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration of between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and a 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step toward improved clinical images.
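For context, one generic correction step is sketched below: estimate each array element's arrival-time error by cross-correlating its received signal against the array-mean reference, then shift the element to compensate. This illustrates only the correction step in a standard cross-correlation style, not the paper's harmonic-source measurement scheme; the function names and the integer-sample shift are simplifications.

```python
# Hedged sketch of a generic phase/delay correction step across array elements.
import numpy as np

def estimate_delays(rf, fs):
    """rf: (n_elements, n_samples) received signals; fs: sample rate [Hz].
    Returns per-element delay estimates [s] relative to the mean signal."""
    ref = rf.mean(axis=0)
    lags = np.arange(-len(ref) + 1, len(ref))
    delays = np.empty(rf.shape[0])
    for i, sig in enumerate(rf):
        xc = np.correlate(sig, ref, mode="full")
        delays[i] = lags[np.argmax(xc)] / fs
    return delays

def apply_correction(rf, delays, fs):
    """Advance/delay each element by the negative of its estimated delay
    (integer-sample circular shift, for simplicity)."""
    out = np.zeros_like(rf)
    for i, (sig, d) in enumerate(zip(rf, delays)):
        out[i] = np.roll(sig, int(round(-d * fs)))
    return out
```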
Abstract:
In a model commonly used in dynamic traffic assignment the link travel time for a vehicle entering a link at time t is taken as a function of the number of vehicles on the link at time t. In an alternative, recently introduced model, the travel time for a vehicle entering a link at time t is taken as a function of an estimate of the flow in the immediate neighbourhood of the vehicle, averaged over the time the vehicle is traversing the link. Here we compare the solutions obtained from these two models when applied to various inflow profiles. We also divide the link into segments, apply each model sequentially to the segments and again compare the results. As the number of segments is increased and the discretisation is refined towards the continuous limit, the solutions from the two models converge to the same solution, which is the solution of the Lighthill-Whitham-Richards (LWR) model for traffic flow. We illustrate the results for different travel time functions and patterns of inflows to the link. In the numerical examples the solutions from the second of the two models are closer to the limit solutions. We also show that the models converge even when the link segments are not homogeneous, and we introduce a correction scheme in the second model to compensate for an approximation error, hence improving the approximation to the LWR model.
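To make the sequential-segment construction concrete, the sketch below propagates an inflow profile through a chain of segments using the first ("whole-link") model, with travel time f(x(t)) for a vehicle entering at time t. The linear travel-time function, its per-segment scaling, the FIFO exit assumption, and the inflow profile are all illustrative choices, not the paper's numerical setup.

```python
# Hedged sketch: apply the model tau(t) = f(x(t)) sequentially to link segments.
# Assumes FIFO exit order and a linear travel-time function; illustrative only.
import numpy as np

def segment_outflow(times, inflow_rate, f):
    """Propagate an inflow-rate profile through one segment whose travel time for a
    vehicle entering at t is f(x(t)), with x(t) the vehicles currently on the segment."""
    dt = times[1] - times[0]
    U = np.cumsum(inflow_rate) * dt             # cumulative entries
    V = np.zeros_like(U)                        # cumulative exits
    exit_time = np.empty_like(times)
    for i, t in enumerate(times):
        j = np.searchsorted(exit_time[:i], t)   # earlier entrants already gone by time t
        V[i] = U[j - 1] if j > 0 else 0.0
        exit_time[i] = t + f(max(U[i] - V[i], 0.0))
    W = np.interp(times, exit_time, U, left=0.0)  # cumulative exit curve on the grid
    return np.gradient(W, dt)

def chain_segments(times, inflow_rate, f_seg, n_segments):
    """Feed each segment's outflow into the next segment."""
    flow = inflow_rate
    for _ in range(n_segments):
        flow = segment_outflow(times, flow, f_seg)
    return flow

# Example: a peaked inflow; whole-link travel time a + b*x split over n segments
# as f_seg(x) = a/n + b*x (one plausible per-segment scaling).
times = np.linspace(0.0, 60.0, 601)
inflow = 10.0 + 8.0 * np.exp(-((times - 20.0) / 5.0) ** 2)
a, b, n = 5.0, 0.02, 10
outflow = chain_segments(times, inflow, lambda x: a / n + b * x, n)
```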
Abstract:
OBJECTIVE: To compare outcomes between adjustable spectacles and conventional methods for refraction in young people. DESIGN: Cross sectional study. SETTING: Rural southern China. PARTICIPANTS: 648 young people aged 12-18 (mean 14.9 (SD 0.98)), with uncorrected visual acuity ≤ 6/12 in either eye. INTERVENTIONS: All participants underwent self refraction without cycloplegia (paralysis of near focusing ability with topical eye drops), automated refraction without cycloplegia, and subjective refraction by an ophthalmologist with cycloplegia. MAIN OUTCOME MEASURES: Uncorrected and corrected vision, improvement of vision (lines on a chart), and refractive error. RESULTS: Among the participants, 59% (384) were girls, 44% (288) wore spectacles, and 61% (393/648) had 2.00 dioptres or more of myopia in the right eye. All completed self refraction. The proportion with visual acuity ≥ 6/7.5 in the better eye was 5.2% (95% confidence interval 3.6% to 6.9%) for uncorrected vision, 30.2% (25.7% to 34.8%) for currently worn spectacles, 96.9% (95.5% to 98.3%) for self refraction, 98.4% (97.4% to 99.5%) for automated refraction, and 99.1% (98.3% to 99.9%) for subjective refraction (P = 0.033 for self refraction v automated refraction, P = 0.001 for self refraction v subjective refraction). Improvements over uncorrected vision in the better eye with self refraction and subjective refraction were within one line on the eye chart in 98% of participants. In logistic regression models, failure to achieve maximum recorded visual acuity of 6/7.5 in right eyes with self refraction was associated with greater absolute value of myopia/hyperopia (P<0.001), greater astigmatism (P = 0.001), and not having previously worn spectacles (P = 0.002), but not age or sex. Significant inaccuracies in power (≥ 1.00 dioptre) were less common in right eyes with self refraction than with automated refraction (5% v 11%, P<0.001). CONCLUSIONS: Though visual acuity was slightly worse with self refraction than automated or subjective refraction, acuity was excellent in nearly all these young people with inadequately corrected refractive error at baseline. Inaccurate power was less common with self refraction than automated refraction. Self refraction could decrease the requirement for scarce trained personnel, expensive devices, and cycloplegia in children's vision programmes in rural China.
Abstract:
Among the largest resources for biological sequence data is the large amount of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error prone coding regions have shown good performance in detecting and predicting these while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon usage bias based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
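For readers unfamiliar with HMM decoding, a generic Viterbi sketch is shown below. The two-state toy model (non-coding vs coding over a four-symbol alphabet) and its probabilities are invented for illustration; the paper's integrated mRNA model with codon-usage-based error correction is considerably richer (codon states, insertion/deletion transitions, start/stop submodels).

```python
# Generic Viterbi decoding sketch -- the kind of dynamic programme an
# HMM-based EST analysis relies on. Illustrative toy model only.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """obs: sequence of observation indices; start_p (n_states,),
    trans_p (n_states, n_states), emit_p (n_states, n_symbols)."""
    logv = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((len(obs), len(start_p)), dtype=int)
    for t in range(1, len(obs)):
        scores = logv[:, None] + np.log(trans_p)   # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        logv = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(logv.argmax())]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: two hidden states ("non-coding", "coding") over a 4-symbol alphabet.
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.05, 0.95]])
emit = np.array([[0.25, 0.25, 0.25, 0.25], [0.4, 0.1, 0.4, 0.1]])
states = viterbi([0, 2, 0, 1, 3, 0, 2], start, trans, emit)
```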
Abstract:
Our research is set within the context of Senegal's adoption of the competency-based approach, with assessment practices that promote the success of as many students as possible. Among these practices are those related to formative assessment, and particularly to written feedback, which can extend learning activities (Scallon, 2004; OCDE, 2005) and support the mastery of what has been learned. From this perspective, we examined the formative assessment practices of three Senegalese primary-school teachers. The aim is to document their written feedback practices before and after an experiment with annotations based on the models of Rodet (2000) and Lebœuf (1999). To this end, our research is grounded in qualitative research associated with the naturalistic paradigm (Fortin, 2010; Deslauriers et Kérésit, 1997; Savoie-Zajc et Karsenti, 2011). More precisely, we opted for a research-training design, reflected in the twofold approach we followed. On the one hand, the study follows a qualitative research approach as conceived by Fortin (2010), in order to understand the written feedback practices of the targeted teachers. On the other hand, the researcher follows a training approach drawing on the work of Galvani (1999) and Lafortune (2006), with the aim of experimenting with annotation practice. As data-collection instruments, we used semi-structured individual interviews and two interview guides, in addition to two annotation-collection grids. Data analysis showed that, before the experiment, written feedback consisted essentially of assorted marks, numerical grades and comments in the form of overall appraisals. Afterwards, it consisted mostly of annotations with positive, constructive wording and cognitive content, along with injunctive, verdictive and explanatory comments. In the end, the teachers found this new practice beneficial for student success but constraining because of how time-consuming it is. Among other avenues, this research opens onto a study of the links between annotations and student success.
Abstract:
Motivation for speaker recognition work is presented in the first part of the thesis, together with an exhaustive survey of past work in this field. A low-cost system not involving complex computation has been chosen for implementation. Towards achieving this, a PC-based system is designed and developed. A front-end analog-to-digital converter (12 bit) is built and interfaced to a PC. Software to control the ADC and to perform various analytical functions, including feature vector evaluation, is developed. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is aptly suited for the speaker recognition work at hand, and a set of phrases is chosen for recognition. Two new methods are adopted for feature evaluation. Some new measurements, involving a symmetry-check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism and constant correction is applied via a feedback path; it is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented. Two methods are presented. The first is based on the model postulated: the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method uses features found in other works, but evaluated differently; a knock-out scheme is used to provide the weightage values for the selection of features. Results of implementation are presented which show, on average, 80% recognition. It is also shown that if there are long gaps between sessions, the performance deteriorates and is speaker dependent. Cross-recognition percentages are also presented; in the worst case this rises to 30%, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of Vda because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as $\tilde{V}_{da} = (E_2 - E_1)\,\mu_{12}/R_{da} + \tfrac{1}{2}\,(2E_3 - E_1 - E_2)\,\mu_{13}\,\mu_{23}/R_{da}^{2}$, where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and Rda is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results which are in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
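A direct numerical transcription of the corrected coupling expression above is shown below; consistent units (e.g. atomic units) are assumed, and the function and argument names are illustrative.

```python
# Two-state GMH term plus the superexchange (bridge) correction,
# transcribed directly from the formula in the abstract.
def coupling_superexchange(E1, E2, E3, mu12, mu13, mu23, Rda):
    two_state = (E2 - E1) * mu12 / Rda
    bridge = 0.5 * (2.0 * E3 - E1 - E2) * mu13 * mu23 / Rda**2
    return two_state + bridge
```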
Abstract:
Proactive motion in hand tracking and in finger bending, in which the body motion occurs prior to the reference signal, was reported by preceding researchers when the target signals were shown to the subjects at relatively high speeds or high frequencies. These phenomena indicate that the human sensory-motor system tends to choose an anticipatory mode rather than a reactive mode when the target motion is relatively fast. The present research was undertaken to study what kind of mode appears in the sensory-motor system when two persons are asked to track the hand position of their partner at various mean tracking frequencies. The experimental results showed that a transition from a mutual error-correction mode to a synchronization mode occurred in the same region of tracking frequency as the transition from a reactive error-correction mode to a proactive anticipatory mode observed in mechanical target-tracking experiments. The present research indicated that synchronization of body motion occurred only when both subjects of a pair operated in a proactive anticipatory mode. We also present mathematical models to explain the behavior of the error-correction mode and the synchronization mode.
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that identifying and correcting short-term climate model errors has the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the onset of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias onset, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in its development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears within a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows only limited improvement. The strategy proposed in this study is a further step toward moving from the current ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.