986 results for PREDICTION SERVER
Abstract:
Internship report presented in fulfillment of the requirements for the Master's degree in Gestão do Território (Territorial Management).
Abstract:
Background: Little is known about the risk of progression to hazardous alcohol use in people currently drinking within safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods: A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6,193 European and 2,462 Chilean attendees recorded AUDIT scores below 8 in men and below 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking, defined by an AUDIT score >= 8 in men and >= 5 in women. Results: 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and a Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions: The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
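The abstract names the model's ingredients (logistic regression on six retained predictors, a c-index, and a Hedges' g on the log odds of predicted probability), so a minimal sketch of that pipeline is shown below. The file and column names are hypothetical, and a plain logistic fit stands in for the stepwise selection described in the paper.

```python
# Minimal sketch of a predictAL-style risk model. Stepwise selection is
# approximated by a plain logistic fit on the six retained predictors;
# file and column names are placeholders, not the study's actual data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("cohort.csv")  # hypothetical file
predictors = ["sex", "age", "country", "baseline_audit",
              "panic_syndrome", "lifetime_alcohol_problem"]
X = pd.get_dummies(df[predictors], drop_first=True)
y = df["hazardous_at_6m"]  # AUDIT >= 8 (men) / >= 5 (women) at follow-up

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]

# For a binary outcome the c-index equals the area under the ROC curve.
print("c-index:", roc_auc_score(y, p))

# Hedges' g on the log odds of predicted probability between outcome groups.
logit = np.log(p / (1 - p))
a, b = logit[y == 1], logit[y == 0]
n1, n0 = len(a), len(b)
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n0 - 1) * b.var(ddof=1)) / (n1 + n0 - 2))
g = (a.mean() - b.mean()) / sp * (1 - 3 / (4 * (n1 + n0) - 9))  # small-sample correction
print("Hedges' g:", g)
```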
Abstract:
In recent years we have observed exponential growth in information systems, and parking information is one more example. Reliable, up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already begun to be applied: San Francisco in the USA and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where any user can see the current parking status as well as future status according to the model's predictions. The source of the data is ancillary in this work, but it still needs to be understood in order to understand parking behaviour. Many modelling techniques are used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author describes the techniques best suited to the task, analyzes the results, and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of parking status behaviour, and with this knowledge it can predict future status values for a given date. The data come from the Smart Park Ontinyent and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed before the models could be implemented. The first test used a boosting ensemble classifier over a set of decision trees, created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object; in addition to the predictions, this work reports error measurements that indicate how reliable the predictions are. The second test used the function-fitting seasonal exponential smoothing TBATS model. Finally, the last test tried a model that combines the previous two, to see the result of the combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies for the three models respectively; for a car park of 47 places this means roughly a 10% average error in parking slot predictions. The results could be even better with a longer data history. In order to make this kind of information visible and reachable by anyone with an internet-connected device, a web application was built. Besides displaying the data, the application offers functions that ease the task of searching for parking. Apart from parking prediction, the new functions were:
- Park distances from the user's location: the distances from the user's current location to the different car parks in the city.
- Geocoding: matching a literal description or an address to a concrete location.
- Geolocation: positioning the user.
- Parking list panel: not a service or a function as such, just a clearer visualization and easier handling of the information.
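As a rough illustration of the second experiment described above, the sketch below fits a seasonal exponential smoothing model to a vacancy time series and scores it by mean absolute error. statsmodels' Holt-Winters implementation stands in for the TBATS model named in the thesis, and the file layout and 15-minute sampling period are assumptions.

```python
# Sketch of the seasonal-smoothing experiment: Holt-Winters additive
# seasonality stands in for the TBATS model used in the thesis. The CSV
# layout and 15-minute sampling period (96 samples/day) are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = pd.read_csv("ontinyent_occupancy.csv", parse_dates=["timestamp"],
                 index_col="timestamp")            # hypothetical file
vacancies = df["free_slots"].asfreq("15min").interpolate()

train, test = vacancies.iloc[:-96], vacancies.iloc[-96:]  # hold out one day
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=96).fit()
forecast = fit.forecast(96)

# Average error in predicted vacancies, comparable to the 5.4-6.6 range reported.
print("MAE:", np.mean(np.abs(forecast.values - test.values)))
```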
Abstract:
This project studies different explanatory models for the behavior of CDS securities, such as the fixed-effects model, the GLS random-effects model, pooled OLS, and the quantile regression model. After determining the best-fitting model, trading strategies with long and short positions in CDS were developed. Given certain features of CDS, I conclude that quantile regression is the most efficient model for estimating the data. The P&L and Sharpe ratio of the strategy are analyzed through a backtesting exercise, from which I conclude that, mainly for non-financial companies, the model allows traders to identify and profit from arbitrage opportunities.
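A minimal sketch of the preferred specification follows, using statsmodels' quantile regression. The column names, the explanatory variables, and the residual-based signal rule are placeholders for illustration; the thesis' actual specification is not given in the abstract.

```python
# Sketch of a quantile-regression fit for CDS spreads. Column names and
# regressors are invented placeholders, not the thesis' actual model.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("cds_panel.csv")      # hypothetical file
# Median (q = 0.5) regression of the CDS spread on firm-level drivers.
fit = smf.quantreg("cds_spread ~ equity_vol + leverage", panel).fit(q=0.5)
print(fit.summary())

# Trading-signal sketch: sell protection when the observed spread sits far
# above the fitted conditional median (expected to tighten), buy when below.
panel["residual"] = panel["cds_spread"] - fit.predict(panel)
panel["signal"] = panel["residual"].apply(
    lambda r: "sell protection" if r > 0 else "buy protection")
```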
Abstract:
AIMS/HYPOTHESIS: Several susceptibility genes for type 2 diabetes have been discovered recently. Individually, these genes increase the disease risk only minimally. The goals of the present study were to determine, at the population level, the risk of diabetes in individuals who carry risk alleles within several susceptibility genes for the disease and the added value of this genetic information over the clinical predictors. METHODS: We constructed an additive genetic score using the most replicated single-nucleotide polymorphisms (SNPs) within 15 type 2 diabetes-susceptibility genes, weighting each SNP with its reported effect. We tested this score in the extensively phenotyped population-based cross-sectional CoLaus Study in Lausanne, Switzerland (n = 5,360), involving 356 diabetic individuals. RESULTS: The clinical predictors of prevalent diabetes were age, BMI, family history of diabetes, WHR, and triacylglycerol/HDL-cholesterol ratio. After adjustment for these variables, the risk of diabetes was 2.7 (95% CI 1.8-4.0, p = 0.000006) for individuals with a genetic score within the top quintile, compared with the bottom quintile. Adding the genetic score to the clinical covariates improved the area under the receiver operating characteristic curve slightly (from 0.86 to 0.87), yet significantly (p = 0.002). BMI was similar in these two extreme quintiles. CONCLUSIONS/INTERPRETATION: In this population, a simple weighted 15 SNP-based genetic score provides additional information over clinical predictors of prevalent diabetes. At this stage, however, the clinical benefit of this genetic information is limited.
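The weighted additive score described above is straightforward to compute: each SNP's risk-allele count (0/1/2) is multiplied by its published effect size and the products are summed. The sketch below shows this with invented weights and file names; the study's actual 15-SNP list is not reproduced here.

```python
# Sketch of the weighted additive genetic score: risk-allele counts (0/1/2)
# weighted by published per-allele effects (log odds ratios). SNP IDs and
# weights here are illustrative placeholders, not the study's actual list.
import numpy as np
import pandas as pd

weights = {"rs7903146": np.log(1.37),
           "rs13266634": np.log(1.15),
           "rs10811661": np.log(1.20)}   # ... the actual study used 15 SNPs

geno = pd.read_csv("genotypes.csv")      # one 0/1/2 column per SNP (assumed)
score = sum(w * geno[snp] for snp, w in weights.items())

# Compare diabetes prevalence across score quintiles, as in the paper.
quintile = pd.qcut(score, 5, labels=False)
print(geno.assign(q=quintile).groupby("q")["diabetes"].mean())
```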
Abstract:
OBJECTIVE: To develop predictive models for early triage of burn patients based on hypersusceptibility to repeated infections. BACKGROUND: Infection remains a major cause of mortality and morbidity after severe trauma, demanding new strategies to combat infections. Models for infection prediction are lacking. METHODS: Secondary analysis of 459 burn patients (≥16 years old) with 20% or more total body surface area burns, recruited from 6 US burn centers. We compared the blood transcriptomes, with a 180-hour cutoff on the injury-to-transcriptome interval, of 47 patients (≤1 infection episode) to those of 66 hypersusceptible patients (multiple (≥2) infection episodes; MIE). We used LASSO regression to select biomarkers and multivariate logistic regression to build models, the accuracy of which was assessed by the area under the receiver operating characteristic curve (AUROC) and cross-validation. RESULTS: Three predictive models were developed using covariates of (1) clinical characteristics; (2) expression profiles of 14 genomic probes; (3) the combination of (1) and (2). The genomic and clinical models were highly predictive of MIE status (AUROC_genomic = 0.946, 95% CI: 0.906-0.986; AUROC_clinical = 0.864, CI: 0.794-0.933; genomic vs clinical, P = 0.044). The combined model had an increased AUROC of 0.967 (CI: 0.940-0.993) compared with the individual models (combined vs clinical, P = 0.0069). Hypersusceptible patients showed early alterations in immune-related signaling pathways, epigenetic modulation, and chromatin remodeling. CONCLUSIONS: Early triage of burn patients more susceptible to infections can be made using clinical characteristics and/or genomic signatures. The genomic signature suggests new insights into the pathophysiology of hypersusceptibility to infection that may lead to novel therapeutic or prophylactic targets.
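The two-stage approach named in the methods (LASSO to select probes, then logistic regression scored by cross-validated AUROC) can be sketched as below. The data here are randomly generated stand-ins; the dimensions loosely mirror the 47 + 66 patient comparison, and all names are placeholders.

```python
# Sketch of the two-stage pipeline: an L1-penalized (LASSO) logistic fit
# selects probes, then a plain logistic model on the survivors is scored
# by cross-validated AUROC. Random data stand in for the real cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(113, 500))      # 47 + 66 patients, 500 probes (assumed)
y = rng.integers(0, 2, size=113)     # 1 = multiple infection episodes (MIE)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])   # sparse set of retained probes
print("probes retained:", len(selected))

clf = LogisticRegression(max_iter=1000)
auroc = cross_val_score(clf, X[:, selected], y, cv=5, scoring="roc_auc")
print("cross-validated AUROC:", auroc.mean())
```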
Abstract:
The updated Vienna Prediction Model for estimating recurrence risk after an unprovoked venous thromboembolism (VTE) has been developed to identify individuals at low risk for VTE recurrence in whom anticoagulation (AC) therapy may be stopped after 3 months. We externally validated the accuracy of the model to predict recurrent VTE in a prospective multicenter cohort of 156 patients aged ≥65 years with acute symptomatic unprovoked VTE who had received 3 to 12 months of AC. Patients with a predicted 12-month risk within the lowest quartile based on the updated Vienna Prediction Model were classified as low risk. The risk of recurrent VTE did not differ between low- vs higher-risk patients at 12 months (13% vs 10%; P = .77) and 24 months (15% vs 17%; P = 1.0). The area under the receiver operating characteristic curve for predicting VTE recurrence was 0.39 (95% confidence interval [CI], 0.25-0.52) at 12 months and 0.43 (95% CI, 0.31-0.54) at 24 months. In conclusion, in elderly patients with unprovoked VTE who have stopped AC, the updated Vienna Prediction Model does not discriminate between patients who develop recurrent VTE and those who do not. This study was registered at www.clinicaltrials.gov as #NCT00973596.
Abstract:
Trabecular bone score (TBS) is a gray-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a bone mineral density (BMD)-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in risk variable in direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% confidence interval [CI] 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor for fracture (GR = 1.32, 95% CI 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI 1.65-1.87 versus 1.70, 95% CI 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines. © 2015 American Society for Bone and Mineral Research.
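For context, the gradient of risk reported above is the hazard (or rate) ratio per standard-deviation increase in the predictor; under a log-linear (Poisson or Cox-style) model it follows directly from the regression coefficient. A minimal worked form, with β the per-unit log rate ratio and σ the predictor's standard deviation:

```latex
% Gradient of risk from a log-linear model: beta is the per-unit log
% rate ratio for the risk variable, sigma its standard deviation.
\[
  \mathrm{GR} = \exp(\beta\,\sigma),
  \qquad\text{e.g. } \beta\sigma = \ln 1.44 \approx 0.365
  \;\Rightarrow\; \mathrm{GR} = 1.44 .
\]
```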
Abstract:
The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used ReliaSoft's Weibull++ software, which is based on the Weibull method, but has noticed that the analysis has not given correct results. This study was conducted by running Weibull simulations in different profit centers of the Case Company and then comparing actual costs against forecasted costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective is to create a simple control model for following forecasted costs and actual realized costs. The third sub-objective is to document all QlikView parameters of the profit centers. This study is constructive research, and solutions to the company's problems are worked out in this thesis. The theory part introduces quality issues, for example what quality is, quality costing, and the cost of poor quality. Quality is one of the major concerns in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting are also introduced, as are the Weibull method, its mathematical properties, and reliability engineering. The main result of this thesis is that the Weibull analysis forecasted too-high costs when calculating the provision. Although some forecasted values for profit centers were lower than the actual values, the method works better for planning purposes. One reason is that quality improvement, or alternatively quality deterioration, does not show in the results of the analysis in the short run. The other reason for the too-high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the Case Company's products consist of many complex components. According to the theory, the method was developed for homogeneous data, so the most important finding is that the analysis should be made at the product level, not the profit-center level, where the data are more homogeneous.
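A minimal sketch of the kind of Weibull-based warranty forecast discussed above is shown below: fit shape and scale parameters to observed times-to-failure, then use the CDF to estimate the fraction of fielded units expected to fail within the warranty period. The failure times, volumes, and costs are invented for illustration.

```python
# Sketch of a Weibull-based warranty forecast: fit shape/scale to observed
# times-to-failure, then use the CDF to estimate the expected fraction of
# units failing within the warranty period. All data values are invented.
import numpy as np
from scipy.stats import weibull_min

ttf_months = np.array([3.1, 7.4, 9.8, 14.2, 15.0, 18.9, 22.5])  # assumed sample
shape, loc, scale = weibull_min.fit(ttf_months, floc=0)  # 2-parameter fit

warranty_months = 24
p_fail = weibull_min.cdf(warranty_months, shape, loc=0, scale=scale)

units_in_field = 500          # hypothetical delivery volume
cost_per_claim = 120.0        # hypothetical average repair cost
print(f"expected claims: {units_in_field * p_fail:.0f}")
print(f"warranty provision: {units_in_field * p_fail * cost_per_claim:.0f}")
```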
Abstract:
This study examined the effect of explicitly instructing students to use a repertoire of reading comprehension strategies. Specifically, this study examined whether providing students with a "predictive story-frame", which combined the use of prediction and summarization strategies, improved their reading comprehension relative to providing students with generic instruction on prediction and summarization. Results were examined in terms of instructional condition and reading ability. Students from 2 grade 4 classes participated in this study. The reading component of the Canadian Achievement Tests, Second Edition (CAT/2) was used to identify students as either "average or above average" or "below average" readers. Students received either strategic prediction and summarization instruction (story-frame) or generic prediction and summarization instruction (notepad). Students were provided with new but comparable stories for each session. For both groups, the researcher modelled the strategic tools and provided guided practice, independent practice, and independent reading sessions. Comprehension was measured with an immediate and a 1-week delayed comprehension test for each of the 4 stories. In addition, students participated in a 1-week delayed interview, where they were asked to retell the story and to answer questions about the central elements (character, setting, problem, solution, beginning, middle, and ending events) of each story. There were significant differences, with medium to large effect sizes, in comprehension and recall scores as a function of both instructional condition and reading ability. Students in the story-frame condition outperformed students in the notepad condition, and average to above average readers performed better than below average readers. Students in the story-frame condition outperformed students in the notepad condition on the comprehension tests and on the oral retellings when teacher modelling and guidance were present. In the cued recall sessions, students in the story-frame instructional condition recalled more correct information and generated fewer errors than students in the notepad condition. Average to above average readers performed better than below average readers across comprehension and retelling measures. The majority of students in both instructional conditions reported that they would use their strategic tool again.
Abstract:
Personality traits and personal values are two important domains of individual differences. Traits are enduring and distinguishable patterns of behaviour, whereas values are societally taught, stable, individual preferences that guide behaviour in order to reach a specific end state. The purpose of the present study was to investigate the relations between self and peer report within the domains of personality traits and values, to examine the correlations between values and traits, and to explore the amount of incremental validity of traits and values in predicting behaviour. Two hundred and fifty-two men and women from a university setting completed self and peer reports on three questionnaires. In order to assess personality traits, the HEXACO-PI (Lee & Ashton, 2004) was used to identify levels of 6 major dimensions of personality in participants. To assess values, the Schwartz Value Survey (Schwartz, 1992) was used to identify the importance each participant placed on each of Schwartz's 10 value types. To measure behaviour, a Behavior Scale created by Bardi and Schwartz (2003), consisting of items designed to measure the frequency of value-expressive behaviour, was used. As expected, correlations between self and peer reports for the personality scales were high, indicating that personality traits are easily observable to other people. Correlations between self and peer reports for the values and behaviour scales were only moderate, suggesting that some goals, and behaviours expressive of those goals, may not always be observable to others. Consistent with previous research, there were many strong correlations between traits and values. In addition to the similarities with past research, the present study found that the personality factor Honesty-Humility was correlated strongly with values scales (with five correlations exceeding .25). In the prediction of behaviour, it was found that both personality and values were able to account for significant and similar amounts of variance. Personality outpredicted values for some behaviours, but the opposite was true of other behaviours. Each domain provided incremental validity beyond the other domain. The implications of these findings, along with limitations and possibilities for future research, are also discussed.
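The incremental-validity question above is commonly answered with hierarchical regression: fit a model with one domain, add the other, and compare explained variance. The sketch below shows that comparison; the file, trait, value, and behaviour column names are placeholders, not the study's actual variables.

```python
# Sketch of an incremental-validity check: does adding value scores to
# trait scores increase explained variance in a behaviour frequency?
# All column names are invented placeholders for HEXACO traits, Schwartz
# values, and a Bardi & Schwartz-style behaviour score.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("self_reports.csv")      # hypothetical file
traits = df[["honesty_humility", "extraversion", "conscientiousness"]]
values = df[["tradition", "benevolence", "achievement"]]
y = df["helping_behaviour"]

r2_traits = sm.OLS(y, sm.add_constant(traits)).fit().rsquared
r2_full = sm.OLS(
    y, sm.add_constant(pd.concat([traits, values], axis=1))).fit().rsquared

# Incremental validity of values beyond traits = change in R-squared.
print("delta R^2 for values over traits:", r2_full - r2_traits)
```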
Abstract:
In studies of cognitive processing, the allocation of attention has been consistently linked to subtle, phasic adjustments in autonomic control. Both autonomic control of heart rate and control of the allocation of attention are known to decline with age. It is not known, however, whether characteristic individual differences in autonomic control and the ability to control attention are closely linked. To test this, a measure of parasympathetic function, vagal tone (VT), was computed from cardiac recordings from older and younger adults taken before and during performance of two attention-demanding tasks: the Eriksen visual flanker task and the source memory task. Both tasks elicited event-related potentials (ERPs) that accompany errors, i.e., error-related negativities (ERNs) and error positivities (Pe's). The ERN is a negative deflection in the ERP signal, time-locked to responses made on incorrect trials, likely generated in the anterior cingulate. It is followed immediately by the Pe, a broad, positive deflection which may reflect conscious awareness of having committed an error. Age-attenuation of ERN amplitude has previously been found in paradigms with simple stimulus-response mappings, such as the flanker task, but has rarely been examined in more complex, conceptual tasks. Until now, there have been no reports of its being investigated in a source monitoring task. Age-attenuation of the ERN component was observed in both tasks. Results also indicated that the ERNs generated in these two tasks were generally comparable for young adults. For older adults, however, the ERN from the source monitoring task was not only shallower, but incorporated more frontal processing, apparently reflecting task demands. The error positivities elicited by the two tasks were not comparable, however, and age-attenuation of the Pe was seen only in the more perceptual flanker task. For younger adults, it was Pe scalp topography that seemed to reflect task demands, being maximal over central parietal areas in the flanker task, but over very frontal areas in the source monitoring task. With respect to vagal tone, in the flanker task, neither the number of errors nor ERP amplitudes were predicted by baseline or on-task vagal tone measures. However, in the more difficult source memory task, lower VT was marginally associated with greater numbers of source memory errors in the older group. Thus, for older adults, relatively low levels of parasympathetic control over cardiac response coincided with poorer source memory discrimination. In both groups, lower levels of baseline VT were associated with larger-amplitude ERNs and smaller-amplitude Pe's. Thus, low VT was associated, in a conceptual task, with a greater "emergency response" to errors and, at the same time, reduced awareness of having made them. The efficiency of an individual's complex cognitive processing was therefore associated with the flexibility of parasympathetic control of heart rate in response to a cognitively challenging task.
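Vagal tone is typically indexed by beat-to-beat heart-rate variability. The abstract does not specify its estimator, so the sketch below shows one common proxy, RMSSD (root mean square of successive R-R interval differences), computed from an invented sequence of R-R intervals purely as a generic illustration.

```python
# Sketch of one common vagal-tone proxy: RMSSD, the root mean square of
# successive R-R interval differences from a cardiac recording. The study's
# actual estimator is unspecified; the interval values here are invented.
import numpy as np

rr_ms = np.array([812, 798, 830, 845, 801, 790, 822, 836])  # R-R intervals (ms)
diffs = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diffs ** 2))
print(f"RMSSD: {rmssd:.1f} ms")  # higher values ~ greater parasympathetic influence
```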