925 results for Travel Time Prediction
Abstract:
A lack of quantitative high-resolution paleoclimate data from the Southern Hemisphere limits the ability to examine current trends within the context of long-term natural climate variability. This study presents a temperature reconstruction for southern Tasmania based on analyses of a sediment core from Duckhole Lake (43.365°S, 146.875°E). The relationship between non-destructive whole-core scanning reflectance spectroscopy measurements in the visible spectrum (380–730 nm) and the instrumental temperature record (AD 1911–2000) was used to develop a calibration-in-time reflectance spectroscopy-based temperature model. Results showed that a trough in reflectance from 650 to 700 nm, which represents chlorophyll and its derivatives, was significantly correlated with annual mean temperature. A calibration model was developed (R = 0.56, p < 0.05, root mean squared error of prediction (RMSEP) = 0.21°C, five-year filtered data, calibration period 1911–2000) and applied down-core to reconstruct annual mean temperatures in southern Tasmania over the last c. 950 years. This indicated that temperatures were initially cool c. AD 1050, but steadily increased until the late AD 1100s. After a brief cool period in the AD 1200s, temperatures again increased. Temperatures steadily decreased during the AD 1600s and remained relatively stable until the start of the 20th century, when they rapidly decreased before increasing from the AD 1960s onwards. Comparisons with high-resolution temperature records from western Tasmania, New Zealand and South America revealed some similarities, but also highlighted differences in temperature variability across the mid-latitudes of the Southern Hemisphere. These are likely due to a combination of factors, including spatial variability in climate between and within regions, and differences between records that document seasonal (i.e. warm season/late summer) versus annual temperature variability. This highlights the need for further records from the mid-latitudes of the Southern Hemisphere in order to constrain past natural spatial and seasonal/annual temperature variability in the region, and to accurately identify and attribute changes to natural variability and/or anthropogenic activities.
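The calibration-in-time approach amounts to regressing instrumental temperature on a spectral index over the overlap period and then applying the fitted transfer function down-core. The sketch below illustrates that workflow with ordinary least squares; all variable names and data are invented placeholders, not the study's measurements.

```python
# Hypothetical sketch of a calibration-in-time model: regress instrumental
# temperature on a chlorophyll-related reflectance index (650-700 nm trough),
# then apply the fitted model down-core. All data here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for measured data (per sediment-core depth slice):
# trough_index_cal: depth of the 650-700 nm reflectance trough per slice
# temp_obs: instrumental annual mean temperature for the calibration slices
n_cal = 90                               # e.g. AD 1911-2000, one slice per year
trough_index_cal = rng.normal(0.4, 0.05, n_cal)
temp_obs = 11.0 + 4.0 * (trough_index_cal - 0.4) + rng.normal(0, 0.2, n_cal)

# Ordinary least-squares calibration: T = a * index + b
a, b = np.polyfit(trough_index_cal, temp_obs, 1)

# In-sample fit statistics (the study's RMSEP comes from proper validation)
pred_cal = a * trough_index_cal + b
rmse = np.sqrt(np.mean((pred_cal - temp_obs) ** 2))
r = np.corrcoef(pred_cal, temp_obs)[0, 1]
print(f"calibration r = {r:.2f}, RMSE = {rmse:.2f} degC")

# Apply the transfer function down-core (pre-instrumental slices)
trough_index_downcore = rng.normal(0.38, 0.05, 850)   # c. 950-year record
temp_reconstruction = a * trough_index_downcore + b
```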
Abstract:
BACKGROUND Timing is critical for efficient hepatitis A vaccination in highly endemic areas, as high levels of maternal IgG antibodies against the hepatitis A virus (HAV) present in the first year of life may impede the vaccine response. OBJECTIVES To describe the kinetics of the decline of anti-HAV maternal antibodies, and to estimate the time of complete loss of maternal antibodies in infants in León, Nicaragua, a region in which almost all mothers are anti-HAV seropositive. METHODS We collected cord blood samples from 99 healthy newborns together with 49 corresponding maternal blood samples, as well as further blood samples at 2 and 7 months of age. Anti-HAV IgG antibody levels were measured by enzyme immunoassay (EIA). We predicted the time when antibodies would fall below 10 mIU/ml, the presumed lowest level of seroprotection. RESULTS Seroprevalence was 100% at birth (geometric mean concentration [GMC] 8392 mIU/ml); maternal and cord blood antibody concentrations were similar. The maternal antibody levels of the infants decreased exponentially with age, and the half-life of the maternal antibody was estimated to be 40 days. The relationship between the antibody concentration at birth and the time until full waning was described as: critical age (months) = 3.355 + 1.969 × log₁₀(antibody level at birth). The survival model estimated that loss of passive immunity will have occurred in 95% of infants by the age of 13.2 months. CONCLUSIONS Complete waning of maternal anti-HAV antibodies may take until early in the second year of life. The formula derived here, relating maternal or cord blood antibody concentrations to the age at which passive immunity is lost, may be used to determine the optimal age of childhood HAV vaccination.
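The reported decay formula and half-life translate directly into code. The sketch below transcribes the abstract's waning equation and 40-day half-life; only the example invocation values are chosen for illustration.

```python
# Direct transcription of the abstract's waning formula plus the reported
# 40-day half-life; only the example antibody levels are illustrative.
import math

def critical_age_months(ab_level_birth_miu_ml: float) -> float:
    """Age (months) at which maternal anti-HAV IgG falls below 10 mIU/ml."""
    return 3.355 + 1.969 * math.log10(ab_level_birth_miu_ml)

def antibody_level(ab_level_birth: float, age_days: float,
                   half_life_days: float = 40.0) -> float:
    """Exponential decay of maternal antibody with the reported half-life."""
    return ab_level_birth * 0.5 ** (age_days / half_life_days)

# Example with the reported geometric mean concentration at birth:
gmc = 8392.0  # mIU/ml
print(f"critical age: {critical_age_months(gmc):.1f} months")
print(f"level at 6 months: {antibody_level(gmc, 180):.0f} mIU/ml")
```

For the reported GMC of 8392 mIU/ml, the formula gives a critical age of about 11 months, consistent with the estimate that 95% of infants have lost passive immunity by 13.2 months.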
Abstract:
In the context of expensive numerical experiments, a promising way of alleviating the computational cost is to use partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge is the adequate approximation of the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared with a classical kriging model.
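As a rough illustration of the joint-space idea, the sketch below fits a Gaussian process over (design variable, computational time) pairs and queries it at large time to estimate the converged response. It uses a stationary product kernel from scikit-learn as a stand-in; the paper's nonstationary kernel and its estimation procedure are not reproduced, and all data are synthetic.

```python
# Minimal sketch (not the authors' nonstationary kernel): fit a GP in the
# joint space of a design variable x and computational time t, using a
# stationary anisotropic RBF kernel as a stand-in. Data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Partially converged runs: the response approaches the converged value as t grows
x = rng.uniform(0, 1, 40)                  # design variable
t = rng.uniform(0.1, 1.0, 40)              # normalized computational time
y_true = np.sin(2 * np.pi * x)             # converged response
y = y_true + 0.3 * np.exp(-3 * t) * rng.standard_normal(40)  # convergence error

X = np.column_stack([x, t])
kernel = RBF(length_scale=[0.2, 0.5]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict the fully converged response by querying at large t
X_query = np.column_stack([np.linspace(0, 1, 5), np.ones(5)])
mean, std = gp.predict(X_query, return_std=True)
print(np.round(mean, 2), np.round(std, 2))
```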
Abstract:
OBJECTIVE Cognitive impairments are regarded as a core component of schizophrenia. However, the cognitive dimension of psychosis is hardly considered by ultra-high-risk (UHR) criteria. Therefore, we studied whether the combination of symptomatic UHR criteria and the basic symptom criterion "cognitive disturbances" (COGDIS) is superior in predicting first-episode psychosis. METHOD In a naturalistic 48-month follow-up study, the conversion rate to first-episode psychosis was studied in 246 outpatients of an early detection of psychosis service (FETZ), and the association between conversion and the combined versus singular use of UHR criteria and COGDIS was compared. RESULTS Patients who met both UHR criteria and COGDIS (n=127) at baseline had a significantly higher risk of conversion (hr=0.66 at month 48) and a shorter time to conversion than patients who met only UHR criteria (n=37; hr=0.28) or only COGDIS (n=30; hr=0.23). Furthermore, the risk of conversion was higher for the combined criteria than for UHR criteria (n=164; hr=0.56 at month 48) and COGDIS (n=158; hr=0.56 at month 48) when each was considered irrespective of the other. CONCLUSIONS Our findings support the merits of considering both COGDIS and UHR criteria in the early detection of persons who are at high risk of developing a first psychotic episode within 48 months. Applying both sets of criteria improves sensitivity and individual risk estimation, and may thereby support the development of stage-targeted interventions. Moreover, since the combined approach enables the identification of considerably more homogeneous at-risk samples, it should support both preventive and basic research.
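The group comparison reported above is the kind of analysis a Kaplan-Meier estimator handles directly. The sketch below simulates conversion times matched to the abstract's 48-month risk figures and recovers them with lifelines; the event data are synthetic placeholders, not the study cohort.

```python
# Hedged sketch of the group comparison: cumulative conversion risk over
# 48 months for combined vs. singular criteria via Kaplan-Meier curves.
# All event data below are simulated, not study data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)

def simulate_group(n, risk_48m):
    """Draw exponential conversion times matched to a 48-month risk."""
    rate = -np.log(1 - risk_48m) / 48.0
    times = rng.exponential(1 / rate, n)
    observed = (times <= 48).astype(int)     # censor at end of follow-up
    return np.minimum(times, 48), observed

groups = {"UHR+COGDIS": (127, 0.66), "UHR only": (37, 0.28), "COGDIS only": (30, 0.23)}
for name, (n, risk) in groups.items():
    t, e = simulate_group(n, risk)
    km = KaplanMeierFitter().fit(t, e, label=name)
    print(name, "estimated 48-month risk:",
          round(1 - km.survival_function_at_times(48).iloc[0], 2))
```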
Abstract:
Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of patients surviving more than 5 years after diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. To evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e. short- vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Because of the large heterogeneity observed within the prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death directly to the gene expression profile. We propose a Bayesian ensemble model for survival prediction which is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the CPH, Weibull, and AFT survival models. We overcome the lack of conjugacy by using a latent variable formulation to model the covariate effects, which decreases the computation time for model fitting. Our proposed models also provide a model-free way to select important predictive prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished data set of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares the results obtained by the Random Forest and Bayesian ensemble methods from biological/clinical perspectives and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
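The first modeling step lends itself to a compact illustration: a Random Forest classifier on a gene expression matrix plus a clinical covariate, followed by importance-based gene selection. The expression data below are random placeholders, and the analysis is a sketch of the general approach rather than the thesis's exact pipeline.

```python
# Illustrative sketch of the first modeling step: a Random Forest classifier
# for short- vs. long-term survival from gene expression plus a clinical
# covariate, with importance-based gene selection. Data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

n_patients, n_genes = 100, 2000
X = rng.standard_normal((n_patients, n_genes))      # microarray expression
age = rng.uniform(20, 80, n_patients)               # clinical covariate
y = (X[:, 0] + 0.02 * age + rng.standard_normal(n_patients) > 1.0).astype(int)

features = np.column_stack([X, age])                # genes + clinical info
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, features, y, cv=5).mean().round(2))

# Select a small gene subset by impurity-based importance
rf.fit(features, y)
top_genes = np.argsort(rf.feature_importances_[:n_genes])[::-1][:20]
print("top candidate genes (column indices):", top_genes[:5])
```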
Abstract:
Contaminant metals bound to sediments are subject to considerable solubilization during passage of the sediments through the digestive systems of deposit feeders. We examined the kinetics of this process using digestive fluids extracted from the deposit feeders Arenicola marina and Parastichopus californicus and then incubated with contaminated sediments. The kinetics are complex, with solubilization occasionally followed by readsorption onto the sediment. In general, solubilization kinetics are biphasic, with an initial rapid step followed by a slower reaction. For many sediment-organism combinations, the reaction will not reach a steady state or equilibrium within the gut retention time (GRT) of the organisms, suggesting that metal bioavailability in sediments is a time-dependent parameter. Experiments with commercial protein solutions mimic the kinetic patterns observed with digestive fluids, which corroborates our previous finding that complexation by dissolved amino acids (AA) in digestive fluids leads to metal solubilization (Chen & Mayer 1998b; Environ Sci Technol 32:770-778). The relative importance of the fast and slow reactions appears to depend on the ratio of ligands in gut fluids to the amount of bound metal in sediments. High ligand-to-solid-metal ratios result in more metals being released in fast reactions and thus a higher lability of sedimentary metals. Multiple extractions of a sediment with digestive fluid of A. marina confirm the potential importance of incomplete reactions within a single deposit-feeding event, and make clear that bioavailability to a single animal is likely different from that to a community of organisms. The complex kinetic patterns lead to the counterintuitive prediction that toxification of digestive enzymes by solubilized metals will occur more readily in species that dissolve less metal.
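Biphasic behavior of this kind is commonly modeled as two parallel first-order pools, one fast and one slow. The sketch below fits such a double-exponential model to synthetic incubation data; the rate law and every number are illustrative assumptions, not the paper's measurements, but the fit shows how an incomplete reaction within a gut retention time falls out of the parameters.

```python
# A common biphasic (double first-order) model, sketched as one way to
# describe fast-plus-slow solubilization; the paper's exact rate law and
# the data points below are not from the study.
import numpy as np
from scipy.optimize import curve_fit

def biphasic(t, a_fast, k_fast, a_slow, k_slow):
    """Metal released = fast pool + slow pool, each first-order in time."""
    return a_fast * (1 - np.exp(-k_fast * t)) + a_slow * (1 - np.exp(-k_slow * t))

# Synthetic incubation data: released metal (ug/g) vs. time (h)
t = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 24])
released = biphasic(t, 5.0, 3.0, 8.0, 0.05) + np.random.default_rng(4).normal(0, 0.2, t.size)

params, _ = curve_fit(biphasic, t, released, p0=[4, 1, 10, 0.1], maxfev=10000)
a_f, k_f, a_s, k_s = params
print(f"fast pool {a_f:.1f} (k={k_f:.2f}/h), slow pool {a_s:.1f} (k={k_s:.3f}/h)")

# Fraction released within an assumed 6 h gut retention time (GRT):
# an incomplete reaction within a single deposit-feeding event
print("fraction of total released by GRT:", round(biphasic(6, *params) / (a_f + a_s), 2))
```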
An Early-Warning System for Hypo-/Hyperglycemic Events Based on Fusion of Adaptive Prediction Models
Abstract:
Introduction: Early warning of future hypoglycemic and hyperglycemic events can improve the safety of type 1 diabetes mellitus (T1DM) patients. The aim of this study is to design and evaluate a hypoglycemia/hyperglycemia early warning system (EWS) for T1DM patients under sensor-augmented pump (SAP) therapy. Methods: The EWS is based on the combination of data-driven online adaptive prediction models and a warning algorithm. Three modeling approaches were investigated: (i) autoregressive (ARX) models, (ii) autoregressive models with an output correction module (cARX), and (iii) recurrent neural network (RNN) models. The warning algorithm postprocesses the models' outputs and issues alerts if upcoming hypoglycemic/hyperglycemic events are detected. Fusion of the cARX and RNN models, owing to their complementary prediction performances, resulted in the hybrid autoregressive with an output correction module/recurrent neural network (cARN)-based EWS. Results: The EWS was evaluated on 23 T1DM patients under SAP therapy. The ARX-based system achieved hypoglycemic (hyperglycemic) event prediction with median values of 100.0% (100.0%) accuracy, 10.0 (8.0) min detection time, and 0.7 (0.5) daily false alarms. The respective values for the cARX-based system were 100.0% (100.0%), 17.5 (14.8) min, and 1.5 (1.3), and for the RNN-based system were 100.0% (92.0%), 8.4 (7.0) min, and 0.1 (0.2). The hybrid cARN-based EWS outperformed both, with 100.0% (100.0%) prediction accuracy, event detection 16.7 (14.7) min in advance, and 0.8 (0.8) daily false alarms. Conclusion: Combined use of the cARX and RNN models for the development of an EWS outperformed the single use of each model, achieving accurate and prompt event prediction with few false alarms, thus providing increased safety and comfort.
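To make the ARX component concrete, the sketch below fits an autoregressive predictor to a simulated CGM trace by least squares, forecasts glucose 30 minutes ahead, and raises an alert when the forecast crosses a hypoglycemic threshold. The model order, horizon, threshold, and data are illustrative choices; neither the cARX correction module nor the RNN fusion is reproduced.

```python
# Simplified sketch of the ARX idea only (not the cARX/RNN fusion): fit an
# autoregressive model to CGM samples by least squares, predict 30 min ahead,
# and alert when the forecast crosses a hypoglycemic threshold. CGM simulated.
import numpy as np

rng = np.random.default_rng(5)
cgm = 120 + np.cumsum(rng.normal(-0.3, 1.5, 300))   # 5-min samples, mg/dl

order, horizon = 6, 6                                # 6 lags, 6 steps = 30 min
# Lagged regression: predict cgm[t + horizon] from the previous `order` samples
X = np.asarray([cgm[i:i + order] for i in range(len(cgm) - order - horizon)])
y = cgm[order + horizon - 1 : -1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

latest = cgm[-order:]                                # most recent window
forecast = latest @ coef
if forecast < 70:                                    # assumed hypo threshold
    print(f"hypoglycemia alert: predicted {forecast:.0f} mg/dl in 30 min")
else:
    print(f"no alarm: predicted {forecast:.0f} mg/dl in 30 min")
```

An online adaptive variant would refit (or recursively update) the coefficients as each new CGM sample arrives, which is the spirit of the models the study evaluates.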
Abstract:
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. Here we adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here, too, the nonlinear predictability score appears more sensitive to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) and signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
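A toy version of the amplitude-based measure conveys the underlying idea: delay-embed the series, predict each point's future as the future of its nearest embedded neighbor, and normalize the error by the signal's variance, so that deterministic signals score well below uncorrelated noise. All embedding parameters below are illustrative, and the rank-based variant (which replaces amplitudes by ranks) is not implemented.

```python
# Toy amplitude-based nonlinear prediction error: delay embedding plus
# nearest-neighbor forecasting. Parameters and test signals are illustrative.
import numpy as np

def nonlinear_prediction_error(x, dim=3, lag=5, horizon=5):
    n = len(x) - (dim - 1) * lag - horizon
    emb = np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])
    future = x[(dim - 1) * lag + horizon : (dim - 1) * lag + horizon + n]
    errors = []
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[max(0, i - horizon) : i + horizon + 1] = np.inf   # crude Theiler window
        j = int(np.argmin(d))                               # nearest neighbor
        errors.append((future[j] - future[i]) ** 2)
    return np.sqrt(np.mean(errors)) / np.std(x)             # ~sqrt(2) for white noise

rng = np.random.default_rng(6)
t = np.arange(0, 30, 0.02)
deterministic = np.sin(t) * np.sin(2.1 * t)       # stand-in for a Lorenz signal
print("deterministic:", round(nonlinear_prediction_error(deterministic), 2))
print("white noise:  ", round(nonlinear_prediction_error(rng.standard_normal(t.size)), 2))
```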
Trend analysis of MODIS NDVI time series for detecting land degradation and regeneration in Mongolia
Abstract:
BACKGROUND Heart failure with preserved ejection fraction (HFpEF) represents a growing health burden associated with substantial mortality and morbidity. Consequently, risk prediction is of the highest importance. Endothelial dysfunction has recently been shown to play an important role in the complex pathophysiology of HFpEF. We therefore aimed to assess von Willebrand factor (vWF), a marker of endothelial damage, as a potential biomarker for risk assessment in patients with HFpEF. METHODS AND RESULTS Concentrations of vWF were assessed in 457 patients with HFpEF enrolled as part of the LUdwigshafen Risk and Cardiovascular Health (LURIC) study. All-cause mortality was observed in 40% of patients during a median follow-up time of 9.7 years. vWF significantly predicted mortality, with a hazard ratio (HR) per 1-SD increase of 1.45 (95% confidence interval, 1.26-1.68; P<0.001), and remained a significant predictor after adjustment for age, sex, body mass index, N-terminal pro-B-type natriuretic peptide (NT-proBNP), renal function, and frequent HFpEF-related comorbidities (adjusted HR per 1 SD, 1.22; 95% confidence interval, 1.05-1.42; P=0.001). Most notably, vWF showed additional prognostic value beyond that achievable with NT-proBNP, as indicated by improvements in the C-statistic (vWF×NT-proBNP: 0.65 versus NT-proBNP: 0.63; P for comparison, 0.004) and the category-free net reclassification index (37.6%; P<0.001). CONCLUSIONS vWF is an independent predictor of long-term outcome in patients with HFpEF, which is in line with endothelial dysfunction as a potential mediator in the pathophysiology of HFpEF. In particular, combined assessment of vWF and NT-proBNP improved risk prediction in this vulnerable group of patients.
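Analyses of this type are typically run as Cox proportional hazards models with the biomarker standardized to one SD. The sketch below simulates a data frame of that shape and fits such a model with lifelines; the coefficient on vWF is chosen so that the HR per SD lands near the reported 1.45, but the data and covariate set are placeholders, not the LURIC cohort.

```python
# Sketch of the reported analysis style: a Cox model with vWF standardized to
# one SD, adjusted for NT-proBNP and age. Data simulated, not LURIC estimates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 457
df = pd.DataFrame({
    "vwf_sd": rng.standard_normal(n),            # vWF, already per-SD units
    "log_ntprobnp": rng.standard_normal(n),
    "age": rng.normal(65, 10, n),
})
# log-HR of 0.37 per SD corresponds to HR ~ exp(0.37) ~ 1.45
risk = 0.37 * df["vwf_sd"] + 0.5 * df["log_ntprobnp"] + 0.03 * (df["age"] - 65)
times = rng.exponential(np.exp(-risk) * 10)      # latent survival times, years
df["time"] = np.minimum(times, 9.7)              # administrative censoring
df["death"] = (times <= 9.7).astype(int)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.hazard_ratios_.round(2))               # HR per 1 SD of vWF, etc.
```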
Abstract:
PURPOSE Rapid assessment and intervention are important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate by the first triaging health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors in the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
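The modeling approach named in the abstract is standard multivariable logistic regression with discrimination summarized by the area under the ROC curve. The sketch below reproduces that workflow on synthetic stand-ins for a few of the listed predictors; the coefficients, predictor set, and data are invented for illustration.

```python
# Minimal sketch of the modeling approach: multivariable logistic regression
# for in-hospital death, with discrimination summarized by AUC. Predictors
# and data are synthetic placeholders for the variables named in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 8607
age = rng.normal(55, 20, n)
gcs = rng.integers(3, 16, n)                  # Glasgow coma score, 3-15
ventilated = rng.integers(0, 2, n)            # mechanical ventilation (0/1)
sat_index = rng.normal(0, 1, n)               # oxygen saturation index (scaled)

# Invented coefficients generate the outcome for this illustration only
logit = -6 + 0.04 * age - 0.2 * (gcs - 15) + 1.2 * ventilated + 0.5 * sat_index
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # in-hospital death

X = np.column_stack([age, gcs, ventilated, sat_index])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
```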