856 results for Multivariable predictive model
Abstract:
OBJECTIVE: This study explored gene expression differences predictive of response to chemoradiotherapy in esophageal cancer. BACKGROUND: A major pathological response to neoadjuvant chemoradiation is observed in about 40% of esophageal cancer patients and is associated with favorable outcomes. However, patients with tumors of similar histology, differentiation, and stage can have vastly different responses to the same neoadjuvant therapy. This dichotomy may be due to differences in the molecular genetic environment of the tumor cells. METHODS: Diagnostic biopsies were obtained from a training cohort of esophageal cancer patients (n = 13), and extracted RNA was hybridized to genome expression microarrays. The resulting gene expression data were verified by qRT-PCR. In a larger, independent validation cohort (n = 27), we examined differential gene expression by qRT-PCR. The ability of differentially regulated genes to predict response to therapy was assessed in a multivariate leave-one-out cross-validation model. RESULTS: Although 411 genes were differentially expressed between normal and tumor tissue, only 103 genes were altered between responder and non-responder tumors, of which 67 were differentially expressed more than two-fold. These included genes previously reported in esophageal cancer and a number of novel genes. In the validation cohort, 8 of 12 selected genes were significantly different between the response groups. In the predictive model, 5 of 8 genes could predict response to therapy with 95% accuracy in a subset (74%) of patients. CONCLUSIONS: This study has identified a gene microarray pattern and a set of genes associated with response to neoadjuvant chemoradiation in esophageal cancer. The potential of these genes as biomarkers of response to treatment warrants further investigation. Copyright © 2009 by Lippincott Williams & Wilkins.
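A minimal sketch of the leave-one-out cross-validation scheme described above, assuming a simple logistic-regression classifier over qRT-PCR expression values; the abstract does not name the classifier used, and the data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(27, 5))        # 27 patients x 5 candidate genes (synthetic)
y = rng.integers(0, 2, size=27)     # 1 = responder, 0 = non-responder (synthetic)

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # each patient is predicted by a model trained on all other patients
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])

print(f"LOOCV accuracy: {correct / len(y):.2f}")
```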
Abstract:
BACKGROUND: Mosquito-borne diseases are climate sensitive, and there has been increasing concern over the impact of climate change on future disease risk. This paper projected the potential future risk of Barmah Forest virus (BFV) disease under climate change scenarios in Queensland, Australia. METHODS/PRINCIPAL FINDINGS: We obtained data on notified BFV cases, climate (maximum and minimum temperature and rainfall), socio-economic and tidal conditions for the current period (2000-2008) for coastal regions in Queensland. Gridded data on future climate projections for 2025, 2050 and 2100 were also obtained. Logistic regression models were built to forecast the potential risk of BFV disease distribution under existing climatic, socio-economic and tidal conditions. The model was then applied to estimate the potential geographic distribution of BFV outbreaks under climate change scenarios. The predictive model had good accuracy, sensitivity and specificity. Maps of the potential future risk of BFV disease indicated that risk would vary significantly across coastal regions in Queensland by 2100 due to marked differences in future rainfall and temperature projections. CONCLUSIONS/SIGNIFICANCE: The results of this study demonstrate that the future risk of BFV disease would vary across coastal regions in Queensland. These results may be helpful for public health decision making and for developing effective risk management strategies for BFV disease control and prevention programs in Queensland.
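A minimal sketch of a logistic-regression risk model of the kind described above, with illustrative climate and tidal covariates; the variable set, coefficients and grid-cell framing are assumptions, not the study's fitted model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500                                  # synthetic coastal grid cells
tmax = rng.normal(29, 2, n)              # maximum temperature (deg C)
rain = rng.gamma(2.0, 40.0, n)           # rainfall (mm)
tide = rng.normal(1.2, 0.3, n)           # high-tide level (m)
logit_p = -20 + 0.55 * tmax + 0.01 * rain + 1.0 * tide
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # 1 = outbreak in cell

X = sm.add_constant(np.column_stack([tmax, rain, tide]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)                        # fitted coefficients

# Re-evaluating the fitted model on projected 2100 climate grids would give
# the future risk surface described in the abstract.
```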
Abstract:
A predictive model of terrorist activity is developed by examining the daily number of terrorist attacks in Indonesia from 1994 through 2007. The dynamic model employs a shot noise process to capture the self-exciting nature of terrorist activity, estimating the probability of future attacks as a function of the times since past attacks. In addition, the excess of non-attack days, coupled with the presence of multiple coordinated attacks on the same day, compelled the use of hurdle models to jointly model the probability of an attack day and the corresponding number of attacks. A power law distribution with a shot-noise-driven parameter best modeled the number of attacks on an attack day. Interpretation of the model parameters is discussed and the predictive performance of the models is evaluated.
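A minimal sketch of a self-exciting (shot-noise) daily attack model with a hurdle for attack days. The exponential decay kernel, the zipf draw standing in for the shot-noise-driven power law, and all parameter values are illustrative assumptions rather than the paper's fitted specification.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, alpha, beta = 0.02, 0.3, 0.1   # baseline, excitation jump, decay (per day)
days, s = 5000, 0.0                # s = current shot-noise excitation level
history = []

for t in range(days):
    lam = mu + s                              # daily attack intensity
    if rng.random() < 1 - np.exp(-lam):       # hurdle: is this an attack day?
        # number of attacks on an attack day: heavy-tailed draw (zipf as a
        # stand-in for the paper's shot-noise-driven power law)
        n_attacks = rng.zipf(2.5)
        history.append((t, n_attacks))
        s += alpha * n_attacks                # each attack excites the process
    s *= np.exp(-beta)                        # excitation decays between days

print(f"{len(history)} attack days; max attacks in a day: "
      f"{max(n for _, n in history)}")
```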
Abstract:
Objectives: Given increasing trends of obesity being noted from early in life, and given that active lifestyles track across time, it is important that children are active from a very young age to prevent a foundation of unhealthy behaviours from forming. This study investigated, within a theory of planned behaviour (TPB) framework, factors which influence mothers’ decisions about their child’s (1) adequate physical activity (PA) and (2) limited screen time behaviours. Methods: Mothers (N = 162) completed a main questionnaire, via on-line or paper-based administration, which comprised standard TPB items in addition to measures of planning and background demographic variables. One week later, consenting mothers completed a follow-up telephone questionnaire which assessed the decisions they had made regarding their child’s PA and screen time behaviours during the previous week. Results: Hierarchical multiple regression analyses revealed support for the predictive model, explaining an overall 73% and 78% of the variance in mothers’ intentions and 38% and 53% of the variance in mothers’ decisions to ensure their child engages in adequate PA and limited screen time, respectively. Attitude and subjective norms predicted intention for both target behaviours, and intention predicted behaviour. Contrary to predictions, perceived behavioural control (PBC) for PA and planning for screen time were not significant predictors of intention, nor was PBC a predictor of either behaviour. Conclusions: The findings illustrate the various roles that psycho-social factors play in mothers’ decisions to ensure their child engages in active lifestyle behaviours, which can help to inform future intervention programs aimed at combating inactivity in very young children.
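A minimal sketch of the hierarchical (blockwise) multiple regression used above: TPB predictors are entered in steps and the change in R^2 across blocks is inspected. Variable names and effect sizes are illustrative stand-ins for the questionnaire scales.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 162
attitude = rng.normal(size=n)
subjective_norm = rng.normal(size=n)
pbc = rng.normal(size=n)
planning = rng.normal(size=n)
intention = 0.5 * attitude + 0.4 * subjective_norm + rng.normal(scale=0.5, size=n)

# block 1: attitude + subjective norm; block 2: add PBC and planning
block1 = sm.add_constant(np.column_stack([attitude, subjective_norm]))
block2 = sm.add_constant(np.column_stack([attitude, subjective_norm, pbc, planning]))

r2_1 = sm.OLS(intention, block1).fit().rsquared
r2_2 = sm.OLS(intention, block2).fit().rsquared
print(f"Step 1 R^2 = {r2_1:.2f}; Delta R^2 after adding PBC and planning = "
      f"{r2_2 - r2_1:.2f}")
```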
Abstract:
This research analyses the extent of damage to buildings in Brisbane, Ipswich and Grantham during the recent Eastern Australia flooding and explores the role that planning and design/construction regulations played in these failures. It highlights weaknesses in the current systems and proposes effective solutions to mitigate future damage and financial loss under current or future climates. 2010 and early 2011 saw major flooding throughout much of Eastern Australia. Queensland and Victoria were particularly hard hit, with insured losses in these states reaching $2.5 billion and many thousands of homes inundated. The Queensland cities of Brisbane and Ipswich were the worst affected; around two-thirds of all inundated properties/buildings were in these two areas. Other local government areas to record high levels of inundation were Central Highlands and Rockhampton Regional Councils in Queensland, and Buloke, Campaspe, Central Goldfields and Loddon in Victoria. Flash flooding was a problem in a number of Victorian councils, but the Lockyer Valley west of Ipswich suffered the most extensive damage, with 19 lives lost and more than 100 homes completely destroyed. In all, more than 28,000 properties were inundated in Queensland and around 2,500 buildings were affected in Victoria. Of the residential properties affected in Brisbane, around 90% were in areas developed prior to the introduction of floodplain development controls, with many also having suffered inundation during the 1974 floods. The project developed a predictive model for estimating flood loss and occupant displacement. This model can now be used for flood risk assessments or for rapid assessment of impacts following a flood event.
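A minimal sketch of a stage-damage style flood-loss estimator of the kind the project describes: loss per building is interpolated from a depth-damage curve and summed over an inundated building stock. The curve values and building data are illustrative only, not the project's model.

```python
import numpy as np

# illustrative depth-damage curve: over-floor depth (m) -> fraction of value lost
depths = np.array([0.0, 0.3, 1.0, 2.0, 3.0])
damage_frac = np.array([0.05, 0.25, 0.45, 0.65, 0.80])

def building_loss(over_floor_depth_m: float, building_value: float) -> float:
    """Interpolate the loss for one building from the depth-damage curve."""
    frac = np.interp(over_floor_depth_m, depths, damage_frac)
    return frac * building_value

# rapid assessment: sum losses over a synthetic inundated building stock
rng = np.random.default_rng(4)
flood_depths = rng.uniform(0.0, 2.5, size=1000)      # inundation depths (m)
values = rng.normal(400_000, 80_000, size=1000)      # building values ($)
total_loss = sum(building_loss(d, v) for d, v in zip(flood_depths, values))
print(f"Estimated portfolio loss: ${total_loss:,.0f}")
```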
Abstract:
Outdoor robots such as planetary rovers must be able to navigate safely and reliably in order to successfully perform missions in remote or hostile environments. Mobility prediction is critical to achieving this goal due to the inherent control uncertainty faced by robots traversing natural terrain. We propose a novel algorithm for stochastic mobility prediction based on multi-output Gaussian process regression. Our algorithm considers the correlation between heading and distance uncertainty and provides a predictive model that can easily be exploited by motion planning algorithms. We evaluate our method experimentally and report results from over 30 trials in a Mars-analogue environment that demonstrate the effectiveness of our method and illustrate the importance of mobility prediction in navigating challenging terrain.
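A minimal sketch of multi-output Gaussian process regression for mobility prediction: commanded-motion features are mapped to achieved (heading, distance) outcomes with predictive uncertainty. The inputs, outputs and kernel are synthetic stand-ins; the paper's model additionally exploits the correlation between heading and distance uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(60, 2))       # e.g. terrain slope, commanded turn rate
Y = np.column_stack([                     # achieved heading error, distance error
    0.3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=60),
    0.6 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.05, size=60),
])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, Y)
mean, std = gp.predict(np.array([[0.4, 0.7]]), return_std=True)
print("predicted (heading, distance) error:", mean, "+/-", std)

# A motion planner can query gp.predict to weigh candidate actions by the
# predicted uncertainty of the resulting pose.
```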
Abstract:
Cool roof coatings are characterised by their solar reflectance index. They have been reported to have multiple benefits, the extent of which is strongly dependent on the peculiarities of the local climate, building stock and electricity network. This paper presents measured and simulated data from residential, educational and commercial buildings involved in recent field trials in Australia. The purpose of the field trials was to evaluate the impact of such coatings on electricity demand and load, and to assess their potential to improve comfort whilst avoiding the need for air conditioners. Measured reductions in temperature, power (kW) and energy (kWh) were used to develop a predictive model that correlates ambient temperature distribution profiles, building demand reduction profiles and electricity network peak demand times. Combined with simulated data, the study indicates the types of buildings that could be targeted in Demand Management programs for the mutual benefit of electricity networks and building occupants.
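A minimal sketch of the temperature-to-demand-reduction correlation described above: measured demand reductions are regressed on ambient temperature so the benefit can be projected onto a network's peak-demand hours. Data and coefficients are illustrative, not the trial measurements.

```python
import numpy as np

rng = np.random.default_rng(6)
ambient_t = rng.uniform(24, 40, size=200)          # trial-day temperatures (deg C)
demand_reduction = (0.08 * (ambient_t - 24)
                    + rng.normal(scale=0.1, size=200))  # measured reduction (kW)

# simple linear fit: reduction as a function of ambient temperature
slope, intercept = np.polyfit(ambient_t, demand_reduction, 1)

# project onto a network peak window, e.g. hourly temperatures on a hot afternoon
peak_hours_t = np.array([33.0, 35.5, 37.0, 36.2])
print("predicted kW reduction at peak:", slope * peak_hours_t + intercept)
```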
Abstract:
Introduction: Risk factor analyses for nosocomial infections (NIs) are complex. First, because NI is subject to competing events, the association between a risk factor and NI measured using hazard rates may not coincide with the association measured using cumulative probability (risk). Second, patients from the same intensive care unit (ICU), who share the same environmental exposure, are likely to be more similar with regard to risk factors predisposing to NI than patients from different ICUs. We aimed to develop an analytical approach that accounts for both features, and to use it to evaluate associations of patient- and ICU-level characteristics with rates of NI and competing risks as well as with the cumulative probability of infection. Methods: We considered a multicenter database of 159 intensive care units containing 109,216 admissions (813,739 admission-days) from the Spanish HELICS-ENVIN ICU network. We analyzed the data using two models: an etiologic model (rate based) and a predictive model (risk based). In both models, random effects (shared frailties) were introduced to assess heterogeneity. Death and discharge without NI were treated as competing events for NI. Results: There was large heterogeneity across ICUs in NI hazard rates, which remained after accounting for multilevel risk factors, indicating that remaining unobserved ICU-specific factors influence NI occurrence. Heterogeneity across ICUs in the cumulative probability of NI was even more pronounced. Several risk factors had markedly different associations in the rate-based and risk-based models. For some, the associations differed in magnitude; for example, high Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were associated with modest increases in the rate of nosocomial bacteremia but large increases in the risk. Others differed in sign; for example, a respiratory versus cardiovascular diagnostic category was associated with a reduced rate of nosocomial bacteremia but an increased risk. Conclusions: A combination of competing risks and multilevel models is required to understand direct and indirect risk factors for NI and to distinguish patient-level from ICU-level factors.
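A minimal sketch of the rate-based (etiologic) side of such an analysis, using the lifelines library: a cause-specific Cox model for NI in which death or discharge without NI enters as censoring. The shared ICU frailty is omitted for brevity, and the data are synthetic stand-ins.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "apache2": rng.normal(15, 5, n),      # APACHE II score at admission
    "los_days": rng.exponential(8, n),    # time to first event (days)
    # event indicator: 1 = NI; 0 = competing event (death/discharge) or censoring
    "ni_event": rng.binomial(1, 0.15, n),
})

# cause-specific hazard of NI: competing events are treated as censored
cph = CoxPHFitter()
cph.fit(df, duration_col="los_days", event_col="ni_event")
cph.print_summary()

# The risk-based (cumulative probability) analysis instead requires a
# competing-risks estimator, e.g. an Aalen-Johansen or Fine-Gray approach.
```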
Abstract:
This research established innovative methods and a predictive model to evaluate water quality using the trace element and heavy metal concentrations of drinking water from the greater Brisbane area. Significantly, the combined use of inductively coupled plasma mass spectrometry (ICP-MS) and chemometrics can be applied worldwide to provide comprehensive, rapid and affordable analyses of elements in drinking water that can have a considerable impact on human health.
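A minimal chemometrics-style sketch: principal component analysis of ICP-MS element concentrations to screen water samples. The element list and values are illustrative, not the study's data or model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
# rows = water samples, columns = element concentrations (ug/L)
elements = ["Pb", "Cu", "Zn", "As", "Mn", "Fe"]
X = rng.lognormal(mean=0.0, sigma=1.0, size=(120, len(elements)))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print("first two principal-component scores:\n", scores[:5])

# Outlying scores flag samples whose trace-element profile departs from the
# bulk of the supply, which can then be checked against health guidelines.
```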
Abstract:
Several common genetic variants have recently been discovered that appear to influence white matter microstructure, as measured by diffusion tensor imaging (DTI). Each genetic variant explains only a small proportion of the variance in brain microstructure, so we set out to explore their combined effect on the white matter integrity of the corpus callosum. We measured six common candidate single-nucleotide polymorphisms (SNPs) in the COMT, NTRK1, BDNF, ErbB4, CLU, and HFE genes, and investigated their individual and aggregate effects on white matter structure in 395 healthy adult twins and siblings (age: 20-30 years). All subjects were scanned with 4-tesla 94-direction high angular resolution diffusion imaging. When combined using mixed-effects linear regression, a joint model based on five of the candidate SNPs (COMT, NTRK1, ErbB4, CLU, and HFE) explained ∼ 6% of the variance in the average fractional anisotropy (FA) of the corpus callosum. This predictive model had detectable effects on FA at 82% of the corpus callosum voxels, including the genu, body, and splenium. Predicting the brain's fiber microstructure from genotypes may ultimately help in early risk assessment, and eventually, in personalized treatment for neuropsychiatric disorders in which brain integrity and connectivity are affected.
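A minimal sketch of the aggregate-SNP model described above: mean corpus callosum FA regressed on candidate SNP genotype counts with a random intercept for family, since twins and siblings are not independent. Genotype coding, effect sizes and grouping are synthetic stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 395
df = pd.DataFrame({
    "family": rng.integers(0, 200, n),                 # family grouping factor
    **{g: rng.integers(0, 3, n)                        # minor-allele counts 0-2
       for g in ["COMT", "NTRK1", "ErbB4", "CLU", "HFE"]},
})
df["FA"] = (0.55 + 0.004 * df["COMT"] - 0.003 * df["CLU"]
            + rng.normal(0, 0.02, n))                  # synthetic mean FA

# mixed-effects linear regression: fixed SNP effects, random family intercept
model = smf.mixedlm("FA ~ COMT + NTRK1 + ErbB4 + CLU + HFE", df,
                    groups=df["family"]).fit()
print(model.summary())
```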
Abstract:
We consider estimating the total pollutant load from frequently collected flow data combined with less frequently collected concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes it impossible to assess trends or to determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Adding this information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a dataset from the Burdekin River, consisting of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates, even when there is no relationship between concentration and flow. However, for the Tully dataset, incorporating the additional predictive variables, namely the discounted flow and flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
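A minimal sketch of a rating-curve concentration model with the hydrograph-phase and discounted-flow predictors mentioned above, and the resulting load estimate. The data, the 0.95 discount factor and the omission of a back-transformation bias correction are all simplifications of the paper's approach.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 365
flow = rng.lognormal(3.0, 1.0, n)                            # daily flow (m^3/s)
rising = (np.diff(flow, prepend=flow[0]) > 0).astype(float)  # hydrograph phase
disc_flow = np.zeros(n)                                      # discounted flow
for t in range(1, n):                                        # (exhaustion proxy)
    disc_flow[t] = 0.95 * disc_flow[t - 1] + flow[t - 1]

obs = np.sort(rng.choice(n, size=40, replace=False))         # sparse sampling days
log_c_obs = (1.0 + 0.4 * np.log(flow[obs]) - 0.0005 * disc_flow[obs]
             + 0.2 * rising[obs] + rng.normal(0, 0.2, size=40))

# generalized rating curve: log concentration vs log flow + extra predictors
X = sm.add_constant(np.column_stack([np.log(flow), disc_flow, rising]))
fit = sm.OLS(log_c_obs, X[obs]).fit()

conc = np.exp(fit.predict(X))                  # predicted conc. (mg/L == g/m^3)
load_t = np.sum(conc * flow) * 86_400 / 1e6    # tonnes over the year
print(f"estimated annual load: {load_t:,.0f} t")
```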
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes it impossible to assess trends or to determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps: (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows; (ii) output the predicted flow rates, as in (i), at the concentration sampling times, if the corresponding flow rates were not collected; (iii) establish a predictive model for the concentration data which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals as an estimate of the load. A sketch of this four-step procedure is given below. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors that results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations ranged from 2- to 10-fold, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
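A skeletal sketch of the four-step procedure, with each step reduced to a small function so the data flow is explicit. Linear interpolation stands in for the time-series flow model, and a simple log-log fit stands in for the full rating-curve predictor set; all data are synthetic.

```python
import numpy as np

def step1_regular_flow(times_obs, flow_obs, grid):
    """(i) Flow at regular intervals; np.interp stands in for the time-series
    model that preserves peak flows."""
    return np.interp(grid, times_obs, flow_obs)

def step2_flow_at_sampling_times(times_obs, flow_obs, sample_times):
    """(ii) Predicted flow at the concentration sampling times."""
    return np.interp(sample_times, times_obs, flow_obs)

def step3_concentration_model(flow_at_samples, conc_samples, flow_grid):
    """(iii) Rating-curve concentration model evaluated on the regular grid
    (log-log fit as a stand-in for the full predictor set)."""
    b, a = np.polyfit(np.log(flow_at_samples), np.log(conc_samples), 1)
    return np.exp(a + b * np.log(flow_grid))

def step4_load(flow_grid, conc_grid, dt_seconds):
    """(iv) Load = sum of flow x concentration over the regular intervals."""
    return np.sum(flow_grid * conc_grid) * dt_seconds

# usage on synthetic data: 30 days of flow, 20 sparse concentration samples
rng = np.random.default_rng(11)
t_obs = np.sort(rng.uniform(0, 86_400 * 30, 200))       # flow obs. times (s)
q_obs = rng.lognormal(3, 0.8, 200)                      # flow (m^3/s)
t_samp = np.sort(rng.choice(t_obs, 20, replace=False))  # conc. sampling times
grid = np.arange(0, 86_400 * 30, 600.0)                 # 10-minute grid (step i)

q_grid = step1_regular_flow(t_obs, q_obs, grid)
q_samp = step2_flow_at_sampling_times(t_obs, q_obs, t_samp)
c_samp = np.exp(0.5 + 0.4 * np.log(q_samp)) * rng.lognormal(0, 0.1, 20)
c_grid = step3_concentration_model(q_samp, c_samp, q_grid)
print(f"load: {step4_load(q_grid, c_grid, 600.0) / 1e6:.1f} t")  # mg/L == g/m^3
```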
Abstract:
Spectral data were collected from intact and ground macadamia kernels using three instruments (with Si-PbS, Si, and InGaAs detectors), operating over different areas of the spectrum (between 400 and 2500 nm) and employing transmittance, interactance, and reflectance sample presentation strategies. Kernels were assessed on the basis of oil and water content, and with respect to the defect categories of insect damage, rancidity, discoloration, mould growth, germination, and decomposition. Predictive model performance statistics for oil content were acceptable on all instruments (R2 > 0.98; RMSECV < 2.5%, which is similar to the reference analysis error), although the model for the instrument employing reflectance optics was inferior to those developed for the instruments employing transmission optics. The spectral positions of the calibration coefficients were consistent with absorbance due to the third overtones of CH2 stretching. Calibration models for moisture content in ground samples were acceptable on all instruments (R2 > 0.97; RMSECV < 0.2%), whereas calibration models for intact kernels were relatively poor. Calibration coefficients were most highly weighted around 1360, 740 and 840 nm, consistent with absorbance due to O-H stretching overtones and combination bands. Intact kernels with brown centres or rancidity could be discriminated from each other and from sound kernels using principal component analysis. Part kernels affected by insect damage, discoloration, mould growth, germination, and decomposition could be discriminated from sound kernels. However, discrimination among these defect categories was not distinct and could not be validated on an independent set. It is concluded that there is good potential for a low-cost Si photodiode array instrument to identify some quality defects of intact macadamia kernels and to quantify the oil and moisture content of kernels in the process laboratory, and for oil content in-line. Further work is required to examine the robustness of the predictive models across different populations, including growing districts, cultivars and times of harvest.
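A minimal sketch of the defect-discrimination step described above: principal component scores computed from kernel spectra and inspected for class separation. The spectra are synthetic stand-ins with an artificial class-dependent baseline shift.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(12)
wavelengths = np.linspace(400, 2500, 300)

def spectra(shift, n):
    """Synthetic absorbance spectra with a class-dependent baseline shift."""
    return rng.normal(0, 0.01, (n, 300)) + shift + 0.1 * np.sin(wavelengths / 200)

X = np.vstack([spectra(0.00, 30),    # sound kernels
               spectra(0.05, 30),    # brown centres
               spectra(-0.04, 30)])  # rancid
labels = np.repeat(["sound", "brown_centre", "rancid"], 30)

scores = PCA(n_components=2).fit_transform(X)
for cls in np.unique(labels):
    centroid = scores[labels == cls].mean(axis=0)
    print(f"{cls:>13}: PC1/PC2 centroid = {centroid.round(3)}")
```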
Abstract:
BACKGROUND: The inability to consistently guarantee the internal quality of horticultural produce is of major importance to the primary producer, marketers and ultimately the consumer. Currently, commercial avocado maturity estimation is based on the destructive assessment of percentage dry matter (%DM), and sometimes percentage oil, both of which are highly correlated with maturity. In this study the utility of Fourier transform (FT) near-infrared spectroscopy (NIRS) was investigated for the first time as a non-invasive technique for estimating the %DM of whole intact 'Hass' avocado fruit. Partial least squares regression models were developed from the diffuse reflectance spectra to predict %DM, taking into account the effects of intra-seasonal variation and orchard conditions. RESULTS: It was found that combining three harvests (early, mid and late) from a single farm in the major production district of central Queensland yielded a predictive model for %DM with a coefficient of determination for the validation set of 0.76 and a root mean square error of prediction of 1.53% for DM in the range 19.4-34.2%. CONCLUSION: The results of the study indicate the potential of FT-NIRS in diffuse reflectance mode to non-invasively predict the %DM of whole 'Hass' avocado fruit. When the FT-NIRS system was assessed on whole avocados, the results compared favourably against data from other NIRS systems identified in the literature that have been used in research applications on avocados.
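A minimal sketch of the partial least squares calibration described above: %DM predicted from diffuse reflectance spectra, reporting R^2 and RMSEP on a held-out set. The spectra, %DM values, train/validation split and component count are synthetic stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(13)
n, p = 240, 500
X = rng.normal(size=(n, p))                 # FT-NIR spectra (absorbance, synthetic)
dm = 19.4 + 14.8 * rng.random(n)            # %DM within the reported range
X[:, 100] += 0.05 * dm                      # embed a DM-related spectral band

train, test = np.arange(0, 180), np.arange(180, 240)   # e.g. hold out one harvest
pls = PLSRegression(n_components=8).fit(X[train], dm[train])
pred = pls.predict(X[test]).ravel()

rmsep = np.sqrt(mean_squared_error(dm[test], pred))
print(f"R^2 = {r2_score(dm[test], pred):.2f}, RMSEP = {rmsep:.2f} %DM")
```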
Abstract:
The feasibility of state-wide eradication of 41 invasive plant taxa currently listed as ‘Class 1 declared pests’ under the Queensland Land Protection (Pest and Stock Route Management) Act 2002 was assessed using the predictive model ‘WeedSearch’. Results indicated that all but one species (Alternanthera philoxeroides) could be eradicated, provided sufficient funding and labour were available. Slightly less than one quarter (24.4%; n = 10) of Class 1 weed taxa could be eradicated for less than $100 000 per taxon. An additional 43.9% (n = 18) could be eradicated for between $100 000 and $1M per taxon. Hence, 68.3% of Class 1 weed taxa (n = 28) could be eradicated for less than $1M per taxon. Eradication of 29.3% (n = 12) is predicted to cost more than $1M per taxon. Comparison of these WeedSearch outputs with either empirical analysis or results from a previous application of the model suggests that these costs may, in fact, be underestimates. Considering the likelihood that each weed will cost the state many millions of dollars in long-term losses (e.g. losses to primary production, environmental impacts and control costs), eradication seems a wise investment. Even where predicted costs exceed $1M, eradication can still offer highly favourable benefit-cost ratios. The total (cumulative) cost of eradicating all 41 weed taxa is substantial; across all taxa, the estimated cost of eradication in the first year alone is $8 618 000. This study provides important information for policy makers, who must decide where to invest public funding.