963 results for Time-difference-of-arrival estimation
Abstract:
We propose an alternate parameterization of stationary regular finite-state Markov chains, and a decomposition of the parameter into time reversible and time irreversible parts. We demonstrate some useful properties of the decomposition, and propose an index for a certain type of time irreversibility. Two empirical examples illustrate the use of the proposed parameter, decomposition and index. One involves observed states; the other, latent states.
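The abstract does not spell out its parameterization, but the decomposition it refers to has a standard construction: for a stationary chain with transition matrix P and stationary distribution pi, the time-reversed kernel is P*[i, j] = pi[j] P[j, i] / pi[i], and P splits into a reversible part (P + P*)/2 and an irreversible part (P - P*)/2. A minimal sketch, with a hypothetical chain and a hypothetical norm-based index standing in for the paper's:

```python
import numpy as np

# Hypothetical transition matrix of a stationary 3-state chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Time-reversed kernel: P*[i, j] = pi[j] * P[j, i] / pi[i].
P_rev = (P.T * pi[None, :]) / pi[:, None]

S = 0.5 * (P + P_rev)  # reversible part: satisfies detailed balance w.r.t. pi
A = 0.5 * (P - P_rev)  # irreversible part: vanishes iff the chain is reversible

# One simple (hypothetical) irreversibility index: relative magnitude of A.
print(np.linalg.norm(A) / np.linalg.norm(P))
```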
Abstract:
In the theory of the Navier-Stokes equations, the proofs of some basic known results, like for example the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually only use a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove (by means of counterexamples) that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that the proof strategies used for a number of classical results are not sufficient to affirmatively answer these open questions. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be unstable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces, the uniqueness of solutions (under smallness assumptions) to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces, the second-order convergence of a version of the Crank-Nicolson scheme, and a new proof of the first-order convergence of the implicit Euler scheme for the Navier-Stokes equations. For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
Abstract:
We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
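A minimal sketch of the kind of comparison reported: estimate an AR(1) on one version of the data and compute the one-step-ahead root mean square forecast error on an evaluation sample. The simulated series and the noise attached to each 'vintage' are hypothetical stand-ins, not a model of the actual revision process studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar1(y):
    """OLS estimate of y_t = c + phi * y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    return np.linalg.lstsq(X, y[1:], rcond=None)[0]

def rmsfe(y_est, y_eval):
    """Estimate an AR(1) on y_est, then compute the one-step-ahead
    root mean square forecast error on y_eval."""
    c, phi = fit_ar1(y_est)
    forecasts = c + phi * y_eval[:-1]
    return np.sqrt(np.mean((y_eval[1:] - forecasts) ** 2))

# Hypothetical AR(1) 'truth' observed through two stand-in vintages.
truth = np.zeros(300)
for t in range(1, 300):
    truth[t] = 0.5 * truth[t - 1] + rng.normal()
lightly_revised = truth + rng.normal(0.0, 0.05, 300)  # assumed small residual revision noise
latest_vintage = truth + rng.normal(0.0, 0.20, 300)   # assumed larger mixed-vintage noise

# Compare forecast accuracy on a common evaluation sample of 'true' values.
print(rmsfe(lightly_revised[:200], truth[200:]))
print(rmsfe(latest_vintage[:200], truth[200:]))
```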
Abstract:
The Arctic is an important region in the study of climate change, but monitoring surface temperatures in this region is challenging, particularly in areas covered by sea ice. Here in situ, satellite and reanalysis data were utilised to investigate whether global warming over recent decades could be better estimated by changing the way the Arctic is treated in calculating global mean temperature. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic surface air temperature (SAT) anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques. Kriging techniques provided the smallest errors in anomaly estimates. Similar accuracies were found for anomalies estimated from in situ meteorological station SAT records using a kriging technique. Whether additional data sources, which are not currently utilised in temperature anomaly datasets, would improve estimates of Arctic SAT anomalies was investigated within the reanalysis testbed and using in situ data. For the reanalysis study, the additional input anomalies were reanalysis data sampled at certain supplementary data source locations over Arctic land and sea ice areas. For the in situ data study, the additional input anomalies over sea ice were surface temperature anomalies derived from the Advanced Very High Resolution Radiometer satellite instruments. The use of additional data sources, particularly those located in the Arctic Ocean over sea ice or on islands in sparsely observed regions, can lead to substantial improvements in the accuracy of estimated anomalies. Decreases in root mean square error can be up to 0.2 K for Arctic-average anomalies and more than 1 K for spatially resolved anomalies. Further improvements in accuracy may be accomplished through the use of other data sources.
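A minimal sketch of the interpolation step that kriging techniques perform, assuming a hypothetical Gaussian covariance model with hand-picked parameters; the datasets discussed in the abstract use more elaborate kriging variants with fitted variograms:

```python
import numpy as np

def simple_kriging(coords, values, targets, sill=1.0, corr_len=5.0):
    """Simple kriging of zero-mean anomalies with a Gaussian covariance
    model C(h) = sill * exp(-(h / corr_len)^2)."""
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-(h / corr_len) ** 2)

    C = cov(coords, coords) + 1e-8 * np.eye(len(coords))  # tiny nugget for stability
    c0 = cov(coords, targets)
    weights = np.linalg.solve(C, c0)  # one column of weights per target point
    return weights.T @ values

# Hypothetical station anomaly values (K) at 2-D locations, interpolated
# to two unobserved grid points.
coords = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [5.0, 5.0]])
anomalies = np.array([1.2, 0.8, 1.5, 0.3])
grid = np.array([[2.0, 2.0], [4.0, 4.0]])
print(simple_kriging(coords, anomalies, grid))
```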
Abstract:
In numerical weather prediction, parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model, as the parameter values used in them cannot be measured directly and hence are often not well known, and the parameterisations themselves are also approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of missing physics or parameterisations that are based upon prior information. The method is applied to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. Furthermore, it is shown how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool for systematic model improvement.
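The fitting step described (projecting DA-estimated model errors onto pre-determined functional forms) can be illustrated with ordinary least squares. A minimal sketch, with hypothetical error estimates and candidate basis functions standing in for the paper's advection-model setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for model-error estimates eta(x) produced by a sequential DA
# system at the grid points of a 1-D model (hypothetical values here).
x = np.linspace(0.0, 1.0, 50)
eta = 0.8 * np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Candidate functional forms for the missing physics, chosen from prior
# information; their coefficients are fitted by ordinary least squares.
basis = np.column_stack([
    np.sin(2 * np.pi * x),   # hypothesised oscillatory term
    np.cos(2 * np.pi * x),
    x,                       # hypothesised linear trend
    np.ones_like(x),         # constant offset
])
coef = np.linalg.lstsq(basis, eta, rcond=None)[0]
print(coef)  # weight attributed to each candidate form
```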
Abstract:
We consider the critical short-time evolution of magnetic and droplet-percolation order parameters for the Ising model in two and three dimensions, through Monte Carlo simulations with the (local) heat-bath method. We find qualitatively different dynamic behaviors for the two types of order parameters. More precisely, we find that the percolation order parameter does not have a power-law behavior as encountered for the magnetization, but develops a scale (related to the relaxation time to equilibrium) in the Monte Carlo time. We argue that this difference is due to the difficulty in forming large clusters at the early stages of the evolution. Our results show that, although the descriptions in terms of magnetic and percolation order parameters may be equivalent in the equilibrium regime, greater care must be taken to interpret percolation observables at short times. In particular, this concerns the attempts to describe the dynamics of the deconfinement phase transition in QCD using cluster observables.
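A minimal sketch of the (local) heat-bath dynamics referred to: each spin is redrawn from its conditional equilibrium distribution given the local field of its neighbours. The lattice size and measurement loop below are hypothetical, and the droplet-percolation order parameter would additionally require cluster identification, which is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def heat_bath_sweep(spins, beta):
    """One sweep of local heat-bath updates for the 2-D Ising model:
    each spin is redrawn given the local field h of its four neighbours,
    with P(s = +1) = 1 / (1 + exp(-2 * beta * h))."""
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                 spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            spins[i, j] = 1 if rng.random() < p_up else -1
    return spins

# Short-time evolution from a disordered start at the 2-D critical point.
L = 32
beta_c = 0.5 * np.log(1.0 + np.sqrt(2.0))
spins = rng.choice([-1, 1], size=(L, L))
for t in range(10):
    heat_bath_sweep(spins, beta_c)
    print(t, abs(spins.mean()))  # magnetic order parameter |m| per sweep
```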
Abstract:
The iterative quadratic maximum likelihood (IQML) method and the method of direction estimation (MODE) are well-known high-resolution direction-of-arrival (DOA) estimation methods. Their solutions lead to an optimization problem with constraints. The usual linear constraint performs poorly for certain DOA values. This work proposes a new linear constraint applicable to both DOA methods and compares its performance with that of two others: the unit-norm constraint and the usual linear constraint. It is shown that the proposed alternative performs better than the other constraints. The resulting computational complexity is also investigated.
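In methods of this family, each iteration reduces to minimizing a quadratic form b^H Q b in the coefficient vector b of a polynomial whose roots encode the DOAs, with a constraint whose role is to exclude the trivial solution b = 0. A minimal sketch of the two benchmark constraints mentioned (unit norm and the usual linear constraint), assuming Q is given; the constraint proposed in the paper is not reproduced here:

```python
import numpy as np

def min_quadratic_unit_norm(Q):
    """argmin of b^H Q b subject to ||b|| = 1:
    the eigenvector of Q with the smallest eigenvalue."""
    w, v = np.linalg.eigh(Q)
    return v[:, 0]

def min_quadratic_linear(Q, c):
    """argmin of b^H Q b subject to c^H b = 1:
    closed form b = Q^{-1} c / (c^H Q^{-1} c)."""
    t = np.linalg.solve(Q, c)
    return t / (c.conj() @ t)

# Hypothetical Hermitian positive-definite Q (in IQML/MODE it would be
# rebuilt from the array data at each iteration).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = A.conj().T @ A + np.eye(4)

c = np.zeros(4, dtype=complex)
c[0] = 1.0  # the 'usual' linear constraint: fix the first coefficient
print(min_quadratic_unit_norm(Q))
print(min_quadratic_linear(Q, c))
```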
Abstract:
The characterization of soil CO2 emissions (FCO2) is important for the study of the global carbon cycle. This phenomenon presents great variability in space and time, a characteristic that makes attempts at modeling and forecasting FCO2 challenging. Although spatial estimates have been performed in several studies, the association of these estimates with the uncertainties inherent in the estimation procedures is not considered. This study aimed to evaluate the local, spatial, local-temporal and spatial-temporal uncertainties of short-term FCO2 after the harvest period in a sugar cane area. FCO2 was measured in a 60 m × 60 m sampling grid containing 127 points, with minimum separation distances of 0.5 to 10 m between points. FCO2 was evaluated 7 times within a total period of 10 days. The variability of FCO2 was described by descriptive statistics and variogram modeling. To calculate the uncertainties, 300 realizations generated by sequential Gaussian simulation were considered. Local uncertainties were evaluated using the probability of exceeding certain critical thresholds, while spatial uncertainties were evaluated using the probability that regions with high probability values jointly exceed the adopted limits. From the daily uncertainties, the local-temporal and spatial-temporal uncertainties (Ftemp) were obtained. The daily and mean emissions showed a variability structure that was described by spherical and Gaussian models. The differences between the daily maps were related to variations in the magnitude of FCO2, with mean values ranging from 1.28 ± 0.11 μmol m⁻² s⁻¹ (F197) to 1.82 ± 0.07 μmol m⁻² s⁻¹ (F195). The Ftemp showed low spatial uncertainty coupled with high local uncertainty estimates. The average emission showed great spatial uncertainty of the simulated values. The evaluation of uncertainties, together with knowledge of temporal and spatial variability, is an important tool for understanding many phenomena over time, such as the quantification of greenhouse gases or the identification of areas with high crop productivity.
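A minimal sketch of how exceedance probabilities are read off a set of simulated realizations. The values, the threshold, and the joint-exceedance summary below are hypothetical stand-ins for the paper's sequential Gaussian simulation output and its specific spatial-uncertainty criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 300 realizations of FCO2 over 127 grid points; an actual
# study would draw these from sequential Gaussian simulation conditioned
# on the measured data and the fitted variogram model.
realizations = rng.lognormal(mean=0.4, sigma=0.2, size=(300, 127))

threshold = 1.8  # hypothetical critical emission level, umol m-2 s-1

# Local uncertainty: per-point probability of exceeding the threshold,
# estimated as the fraction of realizations above it.
p_local = (realizations > threshold).mean(axis=0)

# One simple joint summary: probability that more than 20% of the points
# exceed the threshold simultaneously within a single realization.
p_joint = ((realizations > threshold).mean(axis=1) > 0.2).mean()
print(p_local[:5], p_joint)
```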
Abstract:
Introduction: The aim of this study was to assess the influence of curing time and power on the degree of conversion and surface microhardness of 3 orthodontic composites. Methods: One hundred eighty discs, 6 mm in diameter, were divided into 3 groups of 60 samples according to the composite used: Transbond XT (3M Unitek, Monrovia, Calif), Opal Bond MV (Ultradent, South Jordan, Utah), and Transbond Plus Color Change (3M Unitek). Each group was further divided into 3 subgroups (n = 20). Five samples were used to measure conversion, and 15 were used to measure microhardness. A light-emitting diode curing unit with multiwavelength emission of broad light was used for curing at 3 power levels (530, 760, and 1520 mW) and 3 times (8.5, 6, and 3 seconds), always totaling 4.56 joules. Five specimens from each subgroup were ground and mixed with potassium bromide to produce 8-mm tablets to be compared with 5 others made similarly with the respective noncured composite. These were placed into a spectrometer, and software was used for analysis. A microhardness tester was used to take Knoop hardness (KHN) measurements in 15 discs of each subgroup. The data were analyzed with 2 analysis of variance tests at 2 levels. Results: Differences were found in the degree of conversion of the composites cured at different times and powers (P < 0.01). The composites showed similar degrees of conversion when light cured for 8.5 seconds (80.7%) and 6 seconds (79.0%), but not for 3 seconds (75.0%). The degrees of conversion of the composites were different, with group 3 (87.2%) higher than group 2 (83.5%), which was higher than group 1 (64.0%). Differences in microhardness were also found (P < 0.01), with lower microhardness at 8.5 seconds (35.2 KHN) but no difference between 6 seconds (41.6 KHN) and 3 seconds (42.8 KHN). Group 3 had the highest surface microhardness (35.9 KHN) compared with group 2 (33.7 KHN) and group 1 (30.0 KHN). Conclusions: Curing time can be reduced to 6 seconds by increasing the power, with a slight decrease in the degree of conversion at 3 seconds; this decrease has a positive effect on the surface microhardness.
Abstract:
Background and Aims: Data on the influence of calibration on the accuracy of continuous glucose monitoring (CGM) are scarce. The aim of the present study was to investigate whether the time point of calibration has an influence on sensor accuracy and whether this effect differs according to glycemic level. Subjects and Methods: Two CGM sensors were inserted simultaneously into the abdomen, one on either side, in 20 individuals with type 1 diabetes. One sensor was calibrated predominantly using preprandial glucose (calibration(PRE)). The other sensor was calibrated predominantly using postprandial glucose (calibration(POST)). A minimum of three additional glucose values per day was obtained for the analysis of accuracy. Sensor readings were divided into four categories according to the glycemic range of the reference values (low, ≤4 mmol/L; euglycemic, 4.1-7 mmol/L; hyperglycemic I, 7.1-14 mmol/L; and hyperglycemic II, >14 mmol/L). Results: The overall mean ± SEM absolute relative difference (MARD) between capillary reference values and sensor readings was 18.3 ± 0.8% for calibration(PRE) and 21.9 ± 1.2% for calibration(POST) (P < 0.001). MARD according to glycemic range was 47.4 ± 6.5% (low), 17.4 ± 1.3% (euglycemic), 15.0 ± 0.8% (hyperglycemic I), and 17.7 ± 1.9% (hyperglycemic II) for calibration(PRE), and 67.5 ± 9.5% (low), 24.2 ± 1.8% (euglycemic), 15.5 ± 0.9% (hyperglycemic I), and 15.3 ± 1.9% (hyperglycemic II) for calibration(POST). In the low and euglycemic ranges MARD was significantly lower for calibration(PRE) than for calibration(POST) (P = 0.007 and P < 0.001, respectively). Conclusions: Sensor calibration based predominantly on preprandial glucose resulted in significantly higher overall sensor accuracy than predominantly postprandial calibration. The difference was most pronounced in the hypo- and euglycemic reference ranges, whereas the two calibration patterns were comparable in the hyperglycemic range.
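The accuracy metric used throughout, the mean absolute relative difference (MARD), is straightforward to compute from paired readings. A minimal sketch with hypothetical values:

```python
import numpy as np

def mard(sensor, reference):
    """Mean absolute relative difference (%) between paired sensor
    readings and reference glucose values."""
    sensor, reference = np.asarray(sensor), np.asarray(reference)
    return 100.0 * np.mean(np.abs(sensor - reference) / reference)

# Hypothetical paired values, mmol/L.
ref = np.array([3.8, 5.6, 9.1, 15.2])
cgm = np.array([4.6, 5.1, 8.0, 14.1])
print(f"MARD = {mard(cgm, ref):.1f}%")
```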
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
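A minimal single-location sketch of a penalized distributed lag regression, with a ridge-type second-difference penalty on the lag coefficients standing in for the hierarchical Bayesian prior (the abstract itself notes the connection to penalized spline smoothing); the series below are simulated:

```python
import numpy as np

def distributed_lag_fit(y, x, max_lag, lam):
    """Fit y_t = sum_l beta_l * x_{t-l} + e_t with a roughness penalty
    lam * ||D2 @ beta||^2 on the lag curve."""
    n = len(y)
    X = np.column_stack([x[max_lag - l: n - l] for l in range(max_lag + 1)])
    yy = y[max_lag:]
    # Second-difference penalty matrix D2, shape (max_lag - 1, max_lag + 1).
    D2 = np.diff(np.eye(max_lag + 1), n=2, axis=0)
    return np.linalg.solve(X.T @ X + lam * D2.T @ D2, X.T @ yy)

rng = np.random.default_rng(0)
x = rng.normal(size=500)                    # hypothetical exposure series
true_beta = np.array([0.5, 0.3, 0.1, 0.0])  # effect spread over lags 0..3
y = np.convolve(x, true_beta)[:500] + rng.normal(0.0, 0.5, 500)
print(distributed_lag_fit(y, x, max_lag=3, lam=10.0))
```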
Abstract:
The purpose of this study was to evaluate the neuroimaging quality and accuracy of prospective real-time navigator-echo acquisition correction versus untriggered intrauterine magnetic resonance imaging (MRI) techniques. Twenty women in whom fetal motion artifacts reduced the neuroimaging quality of fetal MRI (performed at 28.7 ± 4 weeks of pregnancy) below diagnostic levels were additionally investigated using a navigator-triggered half-Fourier acquired single-shot turbo-spin echo (HASTE) sequence. Imaging quality was evaluated by two blinded readers applying a rating scale from 1 (not diagnostic) to 5 (excellent). Diagnostic criteria included depiction of the germinal matrix, grey and white matter, CSF, brain stem and cerebellum. Signal-difference-to-noise ratios (SDNRs) in the white matter and germinal zone were quantitatively evaluated. Imaging quality improved in 18/20 patients using the navigator echo technique (mean ± SD: 2.4 ± 0.58 vs. 3.65 ± 0.73; p < 0.01 for all evaluation criteria). In 2/20 patients fetal movement severely impaired image quality in both conventional and navigated HASTE. Navigator-echo imaging revealed additional structural brain abnormalities and confirmed the diagnosis in 8/20 patients. Accuracy improved from 50% to 90%. The average SDNR increased from 0.7 ± 7.27 to 19.83 ± 15.71 (p < 0.01). Navigator-echo-based real-time triggering of fetal head movement is a reliable technique that can deliver diagnostic fetal MR image quality despite vigorous fetal movement.
Abstract:
OBJECTIVE: Acute mountain sickness is a frequent and debilitating complication of high-altitude exposure, but there is little information on the prevalence and time course of acute mountain sickness in children and adolescents after rapid ascent by mechanical transportation to 3500 m, an altitude at which major tourist destinations are located throughout the world. METHODS: We performed serial assessments of acute mountain sickness (Lake Louise scores) in 48 healthy nonacclimatized children and adolescents (mean ± SD age: 13.7 ± 0.3 years; 20 girls and 28 boys), with no previous high-altitude experience, 6, 18, and 42 hours after arrival at the Jungfraujoch high-altitude research station (3450 m), which was reached through a 2.5-hour train ascent. RESULTS: We found that the overall prevalence of acute mountain sickness during the first 3 days at high altitude was 37.5%. Rates were similar for the 2 genders and decreased progressively during the stay (25% at 6 hours, 21% at 18 hours, and 8% at 42 hours). None of the subjects needed to be evacuated to lower altitude. Five subjects needed symptomatic treatment and responded well. CONCLUSION: After rapid ascent to high altitude, the prevalence of acute mountain sickness in children and adolescents was relatively low; the clinical manifestations were benign and resolved rapidly. These findings suggest that, for the majority of healthy nonacclimatized children and adolescents, travel to 3500 m is safe and pharmacologic prophylaxis for acute mountain sickness is not needed.
Abstract:
BACKGROUND: The estimation of physiologic ability and surgical stress (E-PASS) has been used to produce a numerical estimate of expected mortality and morbidity after elective gastrointestinal surgery. The aim of this study was to validate E-PASS in a selected cohort of patients requiring liver resections (LR). METHODS: In this retrospective study, E-PASS predictor equations for morbidity and mortality were applied to the prospective data from 243 patients requiring LR. The observed rates were compared with predicted rates using Fisher's exact test. The discriminative capability of E-PASS was evaluated using receiver-operating characteristic (ROC) curve analysis. RESULTS: The observed and predicted overall mortality rates were both 3.3% and the morbidity rates were 31.3 and 26.9%, respectively. There was a significant difference in the comprehensive risk scores for deceased and surviving patients (p = 0.043). However, the scores for patients with or without complications were not significantly different (p = 0.120). Subsequent ROC curve analysis revealed a poor predictive accuracy for morbidity. CONCLUSIONS: The E-PASS score seems to effectively predict mortality in this specific group of patients but is a poor predictor of complications. A new modified logistic regression might be required for LR in order to better predict the postoperative outcome.
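A minimal sketch of the ROC-based discrimination check described, using scikit-learn; the risk scores and outcomes below are hypothetical stand-ins for E-PASS comprehensive risk scores and observed complications:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical E-PASS-style comprehensive risk scores and observed
# binary outcomes (1 = postoperative complication).
scores = np.array([0.12, 0.30, 0.05, 0.44, 0.22, 0.09, 0.37, 0.18])
outcome = np.array([0, 1, 0, 1, 0, 0, 1, 0])

auc = roc_auc_score(outcome, scores)       # area under the ROC curve
fpr, tpr, thresholds = roc_curve(outcome, scores)
print(f"AUC = {auc:.2f}")                  # 0.5 = chance, 1.0 = perfect ranking
```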