940 results for Time of Arrival Method
Abstract:
The aim of this work is to present various aspects of numerical simulation of particle and radiation transport for industrial and environmental protection applications, to enable the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with the speed-up of numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing an appreciable reduction in iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the snow or ice layer, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
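To illustrate the direct-quadrature step mentioned above, the following sketch computes the first few Legendre moments of a scattering kernel in the cosine of the scattering angle by Gauss-Legendre quadrature. The forward-peaked toy kernel and the normalization convention are assumptions for illustration only, not the Doppler-broadened kernel treated in the work.

import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_moments(kernel, order, n_quad=64):
    # Moments f_l = integral over [-1, 1] of P_l(mu) * K(mu) dmu, evaluated by
    # direct Gauss-Legendre quadrature (normalization convention omitted).
    nodes, weights = leggauss(n_quad)
    values = kernel(nodes)
    return np.array([np.sum(weights * Legendre.basis(l)(nodes) * values)
                     for l in range(order + 1)])

# Hypothetical forward-peaked kernel in the center-of-mass scattering cosine mu.
toy_kernel = lambda mu: np.exp(3.0 * mu)
print(legendre_moments(toy_kernel, order=5))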
Abstract:
There is no accepted way of measuring prothrombin time without time loss for patients undergoing major surgery who are at risk of intraoperative dilution and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. Decisions to transfuse fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment in these situations. Point-of-care (PoC) devices are considerably faster than the standard laboratory methods. In this study we assessed the accuracy of a PoC device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured by CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve, and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32). The positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. CoaguChek XS Plus is a rapid and highly accurate test compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected. It could lead to a more rational use of expensive and limited blood bank resources.
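For context, the sketch below shows how the diagnostic accuracy measures reported above (sensitivity, specificity, likelihood ratios) follow from a 2x2 table of PoC result versus reference test at the 50% cut-off. The counts are hypothetical placeholders, not the study's data.

# Hypothetical 2x2 counts: true positives, false negatives, false positives, true negatives.
tp, fn, fp, tn = 19, 1, 12, 230

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_positive = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_negative = (1 - sensitivity) / specificity   # negative likelihood ratio

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"LR+={lr_positive:.2f}, LR-={lr_negative:.2f}")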
Abstract:
Spatial tracking is one of the most challenging and important parts of Mixed Reality environments. Many applications, especially in the domain of Augmented Reality, rely on the fusion of several tracking systems in order to optimize the overall performance. While the topic of spatial tracking sensor fusion has already seen considerable interest, most results only deal with the integration of carefully arranged setups as opposed to dynamic sensor fusion setups. A crucial prerequisite for correct sensor fusion is the temporal alignment of the tracking data from several sensors. However, the tracking sensors typically encountered in Mixed Reality applications are generally not synchronized. We present a general method to calibrate the temporal offset between different sensors using the Time Delay Estimation method, which can be used to perform on-line temporal calibration. By applying Time Delay Estimation to the tracking data, we show that the temporal offset between generic Mixed Reality spatial tracking sensors can be calibrated. To show the correctness and the feasibility of this approach, we have examined different variations of our method and evaluated various combinations of tracking sensors. We furthermore integrated this time synchronization method into our UBITRACK Mixed Reality tracking framework to provide facilities for calibration and real-time data alignment.
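As a minimal sketch of Time Delay Estimation applied to tracking data, the code below estimates the offset between two equally resampled one-dimensional signals (for example, a position component reported by two sensors) from the peak of their cross-correlation. The signal choice and sampling rate are assumptions for illustration; the paper's on-line calibration pipeline is not reproduced here.

import numpy as np

def estimate_time_offset(sig_a, sig_b, dt):
    # Estimate by how much sig_b lags sig_a (in seconds). Both signals are
    # assumed to be resampled onto the same regular grid with spacing dt.
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(b, a, mode="full")      # cross-correlation over all lags
    lag = np.argmax(xcorr) - (len(a) - 1)        # lag (in samples) of the peak
    return lag * dt

# Hypothetical example: the second sensor reports the same motion 50 ms later.
t = np.arange(0, 10, 0.01)                       # 100 Hz, 10 s
motion = np.sin(2 * np.pi * 0.5 * t)             # slow oscillatory movement
print(estimate_time_offset(motion, np.roll(motion, 5), dt=0.01))  # ~0.05 s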
Abstract:
Frequency-transformed resting EEG data have been widely used to describe normal and abnormal brain functional states as a function of the spectral power in different frequency bands. This has yielded a series of clinically relevant findings. However, by transforming the EEG into the frequency domain, the initially excellent time resolution of time-domain EEG is lost. The topographic time-frequency decomposition is a novel computerized EEG analysis method that combines previously available techniques from time-domain spatial EEG analysis and time-frequency decomposition of single-channel time series. It yields a new, physiologically and statistically plausible topographic time-frequency representation of human multichannel EEG. The original EEG is accounted for by the coefficients of a large set of user-defined, EEG-like time series, which are optimized for maximal spatial smoothness and minimal norm. These coefficients are then reduced to a small number of model scalp field configurations, which vary in intensity as a function of time and frequency. The result is thus a small number of EEG field configurations, each with a corresponding time-frequency (Wigner) plot. The method has several advantages: it does not assume that the data are composed of orthogonal elements, it does not assume stationarity, it produces topographical maps, and it allows the inclusion of user-defined, specific EEG elements, such as spike-and-wave patterns. After a formal introduction of the method, several examples are given, which include artificial data and multichannel EEG recorded during different physiological and pathological conditions.
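As a building block for the kind of time-frequency representation described above, the sketch below computes a single-channel time-frequency power map with a short-time Fourier transform on a synthetic signal. The topographic (multichannel) fitting with spatial smoothness and minimal-norm constraints, which is the core of the method, is not reproduced here; the sampling rate and signal are assumptions.

import numpy as np
from scipy.signal import stft

# Synthetic single-channel "EEG": a 10 Hz oscillation plus a 3 Hz burst after 2 s.
fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
eeg[t >= 2] += np.sin(2 * np.pi * 3 * t[t >= 2])

# Short-time Fourier transform with 1 s windows gives power as a function of
# time and frequency (one such map per channel or field configuration).
freqs, times, Z = stft(eeg, fs=fs, nperseg=int(fs))
power = np.abs(Z) ** 2
print(power.shape)                           # (frequencies, time windows)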
Abstract:
In this study, the development of a new sensitive method for the analysis of alpha-dicarbonyls glyoxal (G) and methylglyoxal (MG) in environmental ice and snow is presented. Stir bar sorptive extraction with in situ derivatization and liquid desorption (SBSE-LD) was used for sample extraction, enrichment, and derivatization. Measurements were carried out using high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). As part of the method development, SBSE-LD parameters such as extraction time, derivatization reagent, desorption time and solvent, and the effect of NaCl addition on the SBSE efficiency as well as measurement parameters of HPLC-ESI-MS/MS were evaluated. Calibration was performed in the range of 1–60 ng/mL using spiked ultrapure water samples, thus incorporating the complete SBSE and derivatization process. 4-Fluorobenzaldehyde was applied as internal standard. Inter-batch precision was <12 % RSD. Recoveries were determined by means of spiked snow samples and were 78.9 ± 5.6 % for G and 82.7 ± 7.5 % for MG, respectively. Instrumental detection limits of 0.242 and 0.213 ng/mL for G and MG were achieved using the multiple reaction monitoring mode. Relative detection limits referred to a sample volume of 15 mL were 0.016 ng/mL for G and 0.014 ng/mL for MG. The optimized method was applied for the analysis of snow samples from Mount Hohenpeissenberg (close to the Meteorological Observatory Hohenpeissenberg, Germany) and samples from an ice core from Upper Grenzgletscher (Monte Rosa massif, Switzerland). Resulting concentrations were 0.085–16.3 ng/mL for G and 0.126–3.6 ng/mL for MG. Concentrations of G and MG in snow were 1–2 orders of magnitude higher than in ice core samples. The described method represents a simple, green, and sensitive analytical approach to measure G and MG in aqueous environmental samples.
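The quantification step behind the calibration described above can be sketched as a linear fit of the analyte-to-internal-standard response ratio against spiked concentration over the 1-60 ng/mL range. The response values below are hypothetical, for illustration only.

import numpy as np

# Spiked calibration standards (ng/mL) and hypothetical analyte/internal-standard
# peak-area ratios; 4-fluorobenzaldehyde serves as the internal standard (IS).
conc = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 60.0])
area_ratio = np.array([0.021, 0.10, 0.21, 0.40, 0.82, 1.19])   # hypothetical

slope, intercept = np.polyfit(conc, area_ratio, 1)              # linear calibration

def quantify(sample_ratio):
    # Convert a measured analyte/IS area ratio into a concentration in ng/mL.
    return (sample_ratio - intercept) / slope

print(f"slope={slope:.4f}, intercept={intercept:.4f}")
print(f"ratio 0.30 -> {quantify(0.30):.1f} ng/mL")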
Abstract:
OBJECTIVE To assess the reliability of the cervical vertebral maturation (CVM) method. BACKGROUND Skeletal maturity estimation can influence the manner and timing of orthodontic treatment. The CVM method evaluates skeletal maturation on the basis of changes in the morphology of the cervical vertebrae C2, C3, and C4 during growth. These vertebrae are visible on a lateral cephalogram, so the method does not require an additional radiograph. METHODS In this web-based study, 10 orthodontists with long clinical practice (3 routinely using the method, "Routine users - RU", and 7 with less experience in the CVM method, "Non-Routine users - nonRU") twice rated cervical vertebral maturation with the CVM method on 50 cropped scans of lateral cephalograms of children of circumpubertal age (boys: 11.5 to 15.5 years; girls: 10 to 14 years). Kappa statistics (with lower limits of 95% confidence intervals (CI)) and the proportion of complete agreement on staging were used to evaluate intra- and inter-assessor agreement. RESULTS The mean weighted kappa for intra-assessor agreement was 0.44 (range: 0.30 to 0.64; range of lower limits of 95% CI: 0.12 to 0.48) and for inter-assessor agreement was 0.28 (range: -0.01 to 0.58; range of lower limits of 95% CI: -0.14 to 0.42). The mean proportion of identical scores assigned by the same assessor was 55.2% (range: 44-74%), and for different pairs of assessors it was 42% (range: 16-68%). CONCLUSIONS The reliability of the CVM method is questionable, and if orthodontic treatment is to be initiated relative to the maximum growth, the use of additional biologic indicators should be considered (Tab. 4, Fig. 1, Ref. 24).
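For reference, weighted kappa as used in this study can be computed as in the sketch below, here with scikit-learn's cohen_kappa_score on two hypothetical assessors' stage assignments; the linear weighting scheme is an assumption, as the abstract does not state which weighting was applied.

from sklearn.metrics import cohen_kappa_score

# Two hypothetical assessors' CVM stage assignments (stages 1-6) for ten scans.
rater_1 = [2, 3, 3, 4, 5, 2, 3, 4, 4, 5]
rater_2 = [2, 3, 4, 4, 5, 3, 3, 3, 4, 6]

# Linearly weighted kappa penalizes disagreements by how far apart the stages are.
kappa_w = cohen_kappa_score(rater_1, rater_2, weights="linear")
print(f"weighted kappa = {kappa_w:.2f}")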
Abstract:
I have developed a novel approach to test for toxic organic substances adsorbed onto ultrafine particles present in the ambient air in Northeast Houston, Texas. These particles are predominantly carbon soot with an aerodynamic diameter (AD) of <2.5 μm. If present in the ambient air, many of the organic substances will be adsorbed onto the surface of the particles (which act just like a charcoal air filter) and may be inhaled into the respiratory system. Once embedded in the lungs, these particles may release the adsorbed toxic organic substances with serious health consequences. I used an Airmetrics portable MiniVol air sampler to draw the ambient air through collection filters at 6 separate sites in Northeast Houston, an area known for high ambient PM 2.5 released from chemical plants and other sources (e.g. vehicle emissions).(1) In practice, the mass of the collected particles was much less than the mass of the filters. My technique was designed to release the adsorbed organic substances on the fine carbon particles by heating the filter samples that included the PM 2.5 particles prior to identification by gas chromatography/mass spectrometry (GCMS). The results showed negligible amounts of target chemicals from the collection filters. However, the filters alone released organic substances, and GCMS could not distinguish the organic substances released from the soot particles from those released from the heated filter fabric. Nevertheless, an efficacy test of my method using two wax-burning candles that released soot revealed high levels of benzene. This suggests that my method has the potential to reveal the organic substances adsorbed onto the PM 2.5 for analysis. In order to achieve this goal, I must refine the particle collection process so that it is independent of the filters; the filters upon heating also release organic substances, obscuring the contribution from the soot particles. To obtain pure soot particles I will have to filter more air so that the soot particles can be shaken off the filters and then analyzed by my new technique.
Abstract:
Orientation based on visual cues can be extremely difficult in crowded bird colonies due to the presence of many individuals. We studied king penguins (Aptenodytes patagonicus) that live in dense colonies and are constantly faced with such problems. Our aims were to describe adult penguin homing paths on land and to test whether visual cues are important for their orientation in the colony. We also tested the hypothesis that older penguins should be better able to cope with limited visual cues due to their greater experience. We collected and examined GPS paths of homing penguins. In addition, we analyzed 8 months of penguin arrivals to and departures from the colony using data from an automatic identification system. We found that birds rearing chicks did not minimize their traveling time on land and did not proceed to their young (located in creches) along straight paths. Moreover, breeding birds' arrivals and departures were affected by the time of day and luminosity levels. Our data suggest that king penguins prefer to move in and out of the colony when visual cues are available. Still, they are capable of navigating even in complete darkness, and this ability seems to develop over the years, with older breeding birds more likely to move through the colony at nighttime luminosity levels. This study is the first step in unveiling the mysteries of king penguin orientation on land.
Abstract:
A nested ice flow model was developed for eastern Dronning Maud Land to assist with the dating and interpretation of the EDML deep ice core. The model consists of a high-resolution, higher-order ice dynamic flow model that was nested into a comprehensive 3-D thermomechanical model of the whole Antarctic ice sheet. As the drill site is in a flank position, the calculations specifically take into account the effects of horizontal advection, as deeper ice in the core originated from higher elevations farther inland. First, the regional velocity field and ice sheet geometry are obtained from a forward experiment over the last 8 glacial cycles. The result is subsequently employed in a Lagrangian backtracing algorithm to provide particle paths back to their time and place of deposition. The procedure directly yields the depth-age distribution, surface conditions at particle origin, and a suite of relevant parameters such as initial annual layer thickness. This paper discusses the method and the main results of the experiment, including the ice core chronology, the non-climatic corrections needed to extract the climatic part of the signal, and the thinning function. The focus is on the upper 89% of the ice core (approx. 170 kyr) as the dating below that is increasingly less robust owing to the unknown value of the geothermal heat flux. It is found that the temperature biases resulting from variations of surface elevation are up to half of the magnitude of the climatic changes themselves.
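A minimal sketch of the Lagrangian backtracing step is given below: a particle is integrated backwards in time through a prescribed velocity field until it reaches the surface, which approximates its time and place of deposition. The analytic toy velocity field, the coordinate convention (z = 0 at the surface, negative downwards), and the time step are assumptions; the actual study uses the velocity field of the nested higher-order model.

import numpy as np

def backtrace(x0, z0, velocity, dt_yr=10.0, n_steps=20000):
    # Trace an ice particle backwards in time from (x0, z0) through a prescribed
    # velocity field (u, w) = velocity(x, z) with explicit Euler steps, until it
    # reaches the surface (z = 0). Returns the path as (x, z, years before present).
    path = [(x0, z0, 0.0)]
    x, z = x0, z0
    for step in range(1, n_steps + 1):
        u, w = velocity(x, z)
        x -= u * dt_yr                    # step backwards in time
        z -= w * dt_yr
        path.append((x, z, step * dt_yr))
        if z >= 0.0:                      # particle has reached the surface
            break
    return np.array(path)

# Hypothetical velocity field: 1 m/yr horizontal flow, downward advection that
# weakens with depth (surface at z = 0, z negative downwards, in metres).
toy_velocity = lambda x, z: (1.0, -0.05 * (1.0 + z / 2500.0))
path = backtrace(x0=0.0, z0=-1500.0, velocity=toy_velocity)
print(f"deposited ~{path[-1, 2]:.0f} years ago, ~{abs(path[-1, 0]) / 1000:.0f} km upstream")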