862 results for suicide risk prediction model


Relevance: 40.00%

Abstract:

Identifying risks relevant to a software project and planning measures to deal with them are critical to the success of the project. Current practices in risk assessment mostly rely on high-level, generic guidance or the subjective judgements of experts. In this paper, we propose a novel approach to risk assessment using historical data associated with a software project. Specifically, our approach identifies patterns of past events that caused project delays, and uses this knowledge to identify risks in the current state of the project. A set of risk factors characterizing “risky” software tasks (in the form of issues) was extracted from five open source projects: Apache, Duraspace, JBoss, Moodle, and Spring. In addition, we performed feature selection using a sparse logistic regression model to select risk factors with good discriminative power. Based on these risk factors, we built predictive models to predict whether an issue will cause a project delay. Our predictive models are able to predict both the risk impact (i.e., the extent of the delay) and the likelihood of a risk occurring. The evaluation results demonstrate the effectiveness of our predictive models, achieving on average 48%-81% precision, 23%-90% recall, 29%-71% F-measure, and 70%-92% Area Under the ROC Curve. Our predictive models also have low error rates: 0.39-0.75 for Macro-averaged Mean Cost-Error and 0.7-1.2 for Macro-averaged Mean Absolute Error.
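
The sparse logistic regression step can be illustrated with a minimal sketch: an L1 penalty drives uninformative coefficients to exactly zero, so the surviving features act as the selected risk factors. The data and features below are hypothetical placeholders, not the study's issue-tracker dataset.

```python
# Minimal sketch of L1-penalized ("sparse") logistic regression for
# risk-factor selection, in the spirit of the approach described above.
# Features and labels are synthetic placeholders, not the authors' data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical issue-level features (e.g., discussion length, reassignments).
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)  # delayed?

# The L1 penalty zeroes out weak coefficients, acting as feature selection.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print("retained risk factors:", selected)
```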

Relevance: 40.00%

Abstract:

BACKGROUND: The study was undertaken to evaluate the contribution of a process that uses clinical trial data plus linked de-identified administrative health data to forecast the potential risk of adverse events associated with the use of newly released drugs by older Australian patients. METHODS: The study uses publicly available data from the clinical trials of a newly released drug to ascertain which patient age groups, genders, comorbidities and co-medications were excluded from the trials. It then uses linked de-identified hospital morbidity and medications dispensing data to investigate the comorbidities and co-medications of patients who suffer from the target morbidity of the new drug and who are the likely target population for the drug. The clinical trial information and the linked morbidity and medication data are compared to assess which patient groups could potentially be at risk of an adverse event associated with use of the new drug. RESULTS: Applying the model in a retrospective real-world scenario identified that the majority of the sample group of Australian patients aged 65 years and over with the target morbidity of the newly released COX-2-selective NSAID rofecoxib also suffered from a major morbidity excluded in the trials of that drug, indicating a substantial potential risk of adverse events amongst those patients. This risk was borne out in the post-release morbidity and mortality associated with use of that drug. CONCLUSIONS: Clinical trial data and linked administrative health data can together support a prospective assessment of patient groups who could be at risk of an adverse event if they are prescribed a newly released drug, in the context of their age, gender, comorbidities and/or co-medications. Communication of this independent risk information to prescribers has the potential to reduce adverse events in the period after the release of a new drug, which is when the risk is greatest. Note: the terms 'adverse drug reaction' and 'adverse drug event' have come to be used interchangeably in the current literature. For consistency, the authors have chosen to use the wider term 'adverse drug event' (ADE).
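
The core comparison step amounts to intersecting trial exclusion criteria with real-world patient profiles. A schematic sketch follows; the condition names and patients are hypothetical examples, not data from the study.

```python
# Schematic illustration of the comparison step: flag patients whose
# comorbidities or co-medications match groups excluded from the trials.
# Condition names are hypothetical examples, not data from the study.
trial_exclusions = {"heart failure", "renal impairment", "warfarin"}

patients = {
    "patient_a": {"hypertension", "heart failure"},
    "patient_b": {"osteoarthritis"},
    "patient_c": {"renal impairment", "warfarin"},
}

for pid, conditions in patients.items():
    overlap = conditions & trial_exclusions
    if overlap:
        print(f"{pid}: potential ADE risk, excluded-group match: {sorted(overlap)}")
```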

Relevance: 40.00%

Abstract:

We investigate feature stability in the context of clinical prognosis derived from high-dimensional electronic medical records. To reduce variance in the set of selected predictive features, we introduce Laplacian-based regularization into a regression model. The Laplacian is derived from a feature graph that captures both the temporal and hierarchical relations between hospital events, diseases, and interventions. Using a cohort of patients with heart failure, we demonstrate better feature stability and goodness-of-fit through feature-graph stabilization.
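
A minimal sketch of the regularization idea: the penalty w^T L w pushes features that are linked in the graph toward similar weights, which stabilizes which features get selected. The toy chain graph below is an assumption for illustration, not the temporal/hierarchical EMR graph of the paper.

```python
# Minimal sketch of Laplacian-regularized least squares: the penalty
# w^T L w encourages graph-linked features to take similar weights.
# The feature graph here is a toy chain, not the paper's EMR graph.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 1.0, 0.0, 0.0, 0.5, 0.5]) + 0.1 * rng.normal(size=n)

# Adjacency of a toy feature graph: consecutive features are related.
A = np.zeros((p, p))
for i in range(p - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian

# Closed-form solution of min ||y - Xw||^2 + lam * w^T L w.
lam = 5.0
w = np.linalg.solve(X.T @ X + lam * L, X.T @ y)
print(np.round(w, 2))
```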

Relevance: 40.00%

Abstract:

Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and other l1-norm-based feature selection methods have shown promising results. However, in the presence of correlated features, these methods select features that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, and can thus assist clinicians in accurate decision making.
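
The paper learns the grouping and the selection jointly; its exact formulation is not reproduced here. As a rough two-stage proxy (an assumption for illustration, not the authors' method), one can cluster correlated features first and then run Lasso on cluster averages, so that correlated features enter or leave the model together.

```python
# Two-stage proxy for stable selection over correlated features (a sketch,
# not the paper's joint optimization): cluster by correlation, then Lasso
# on cluster averages so correlated features are selected as a group.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
z = rng.normal(size=(300, 3))
# Six features: three correlated pairs (each pair is a noisy copy of one z).
X = np.hstack([z + 0.05 * rng.normal(size=(300, 3)) for _ in range(2)])
y = z[:, 0] + 0.1 * rng.normal(size=300)

corr_dist = np.clip(1 - np.abs(np.corrcoef(X, rowvar=False)), 0.0, None)
Z = linkage(corr_dist[np.triu_indices_from(corr_dist, 1)], "average")
groups = fcluster(Z, t=0.5, criterion="distance")
X_grouped = np.column_stack([X[:, groups == g].mean(axis=1)
                             for g in np.unique(groups)])
model = Lasso(alpha=0.05).fit(X_grouped, y)
print("group coefficients:", np.round(model.coef_, 2))
```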

Relevance: 40.00%

Abstract:

Due to the increase in water demand and hydropower energy, it is becoming more important to operate hydraulic structures efficiently while sustaining multiple demands. In particular, companies, governmental agencies and consulting offices require effective, practical integrated tools and decision support frameworks to operate reservoirs, cascades of run-of-river plants and related elements such as canals, by merging hydrological and reservoir simulation/optimization models with various numerical weather predictions, radar and satellite data. Model performance is strongly related to the streamflow forecast, the associated uncertainty, and how that uncertainty is considered in decision making. While deterministic weather predictions and their corresponding streamflow forecasts restrict the manager to single deterministic trajectories, probabilistic forecasts can be a key solution by including uncertainty in flow forecast scenarios for dam operation. The objective of this study is to compare deterministic and probabilistic streamflow forecasts on a previously developed basin/reservoir model for short-term reservoir management. The study is applied to the Yuvacık Reservoir and its upstream basin, the main water supply of Kocaeli City in northwestern Turkey. The reservoir is a typical example owing to its limited capacity, downstream channel restrictions and high snowmelt potential. Mesoscale Model 5 and Ensemble Prediction System (EPS) data are used as the main inputs, and the flow forecasts are produced for the year 2012 using HEC-HMS. A hydrometeorological rule-based reservoir simulation model is built with HEC-ResSim and integrated with the forecasts. Since the EPS-based hydrological model produces a large number of equally probable scenarios, it indicates how uncertainty spreads into the future and thus provides the operator with risk ranges for spillway discharges and reservoir levels, compared with the deterministic approach. The framework is fully data-driven, practical and useful to the profession, and the knowledge can be transferred to other similar reservoir systems.
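
The "risk range" output can be illustrated by summarizing an ensemble of simulated trajectories as quantile bands. The numbers below are synthetic, not Yuvacık data.

```python
# Minimal sketch of deriving risk ranges from an ensemble: given simulated
# reservoir levels for many equally probable inflow scenarios, summarize
# the spread as quantile bands. The numbers are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_scenarios, horizon_h = 50, 72
# Hypothetical simulated reservoir levels (m), per ensemble member per hour.
levels = 168.0 + np.cumsum(rng.normal(0.0, 0.05, (n_scenarios, horizon_h)), axis=1)

low, median, high = np.percentile(levels, [10, 50, 90], axis=0)
print(f"hour 72 level: {median[-1]:.2f} m (10-90% band: {low[-1]:.2f}-{high[-1]:.2f})")
```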

Relevance: 40.00%

Abstract:

In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delays and uncertainties into account, can be designed for multi-objective management problems and for large-scale systems. Nonetheless, a critical obstacle that needs to be overcome in MPC is the large computational burden when a large-scale system is considered or a long prediction horizon is involved. To solve this problem, we use an adaptive prediction accuracy (APA) approach that can reduce the computational burden almost by half. The proposed MPC-APA scheme is tested on the northern Dutch water system, which comprises Lake IJssel, Lake Marker, the River IJssel and the North Sea Canal. The simulation results show that the MPC-APA scheme reduces computation time to a large extent and can solve a flood protection problem over longer prediction horizons.
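
One common way to realize this kind of saving (a sketch of the general idea; the paper's exact APA scheme is not reproduced here) is to discretize the horizon non-uniformly: fine time steps near the present, where accuracy matters most, and coarser steps further out, which cuts the number of decision variables in the optimization.

```python
# Sketch of a non-uniform MPC horizon grid (an assumption about how
# adaptive prediction accuracy can be realized, not the paper's scheme):
# fine steps early, coarse steps later, roughly halving the stage count.
def horizon_steps(total_hours: int, fine_hours: int, fine_dt: int, coarse_dt: int):
    """Return step sizes (hours) covering the horizon: fine first, then coarse."""
    steps = [fine_dt] * (fine_hours // fine_dt)
    remaining = total_hours - fine_hours
    steps += [coarse_dt] * (remaining // coarse_dt)
    return steps

uniform = horizon_steps(48, 48, 1, 1)   # 48 decision stages
adaptive = horizon_steps(48, 12, 1, 4)  # 12 fine + 9 coarse = 21 stages
print(len(uniform), "->", len(adaptive), "stages")
```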

Relevance: 40.00%

Abstract:

Standard models of moral hazard predict a negative relationship between risk and incentives, but empirical work has not confirmed this prediction. In this paper, we propose a model with adverse selection followed by moral hazard, where effort and the degree of risk aversion are private information of an agent who can control the mean and the variance of profits. For a given contract, more risk-averse agents supply more effort in risk reduction. If the marginal utility of incentives decreases with risk aversion, more risk-averse agents prefer lower-incentive contracts; thus, in the optimal contract, incentives are positively correlated with endogenous risk. In contrast, if risk aversion is high enough, the possibility of reduction in risk makes the marginal utility of incentives increasing in risk aversion and, in this case, risk and incentives are negatively related.
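
The first comparative-statics claim can be seen in a stylized mean-variance setting (an illustration under standard CARA assumptions, not the paper's exact model): an agent with risk aversion r is paid w = α + βπ, where profit π has mean e and a variance σ(a)² that falls with risk-reduction effort a, at cost c(a).

```latex
% Stylized illustration (not the paper's exact model): certainty equivalent
% of a CARA agent choosing risk-reduction effort a, and its first-order condition.
\[
  \mathrm{CE}(a) \;=\; \alpha + \beta e \;-\; \frac{r}{2}\,\beta^{2}\sigma(a)^{2} \;-\; c(a),
  \qquad
  -\, r\,\beta^{2}\,\sigma(a)\,\sigma'(a) \;=\; c'(a).
\]
```

Since effort reduces risk (σ'(a) < 0), the marginal benefit of risk reduction on the left-hand side is positive and increasing in r, so a more risk-averse agent supplies more risk-reduction effort, as the abstract states.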

Relevance: 40.00%

Abstract:

When the joint assumption of optimal risk sharing and coincidence of beliefs is added to the collective model of Browning and Chiappori (1998), income pooling and symmetry of the pseudo-Hicksian matrix are shown to be restored. Because these are also the features of the unitary model usually rejected in empirical studies, one may argue that these assumptions are at odds with the evidence. We argue that this need not be the case. The use of cross-section data to generate price and income variation is based on a definition of income pooling or symmetry suitable for testing the unitary model, but not the collective model with risk sharing. Also, by relaxing assumptions on beliefs, we show that symmetry and income pooling are lost. However, with the usual assumptions on the existence of assignable goods, we show that beliefs are identifiable. More importantly, if differences in beliefs are not too extreme, the risk-sharing hypothesis is still testable.
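
For reference, the two unitary-model restrictions mentioned above can be written in standard textbook notation (a generic statement, not an equation taken from the paper): with demands ξ_i(p, y₁, y₂) and member incomes y₁, y₂,

```latex
% Generic statement of income pooling and Slutsky (pseudo-Hicksian) symmetry,
% in textbook notation; not reproduced from the paper itself.
\[
  \frac{\partial \xi_i}{\partial y_1} = \frac{\partial \xi_i}{\partial y_2},
  \qquad
  \frac{\partial \xi_i}{\partial p_j} + \xi_j \frac{\partial \xi_i}{\partial y}
  \;=\;
  \frac{\partial \xi_j}{\partial p_i} + \xi_i \frac{\partial \xi_j}{\partial y},
  \quad y = y_1 + y_2 .
\]
```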

Relevance: 40.00%

Abstract:

The aim of this study is to propose the implementation of a statistical model for volatility estimation that is not widespread in the Brazilian literature, the local scale model (LSM), presenting its advantages and disadvantages relative to the models commonly used for risk measurement. Daily Ibovespa quotes from January 2009 to December 2014 are used to estimate the parameters, and out-of-sample tests are performed to gauge the empirical accuracy of the models, comparing the VaR figures obtained for the period from January to December 2014. Explanatory variables were introduced in an attempt to improve the models; the American counterpart of the Ibovespa, the Dow Jones index, was chosen because it exhibited properties such as high correlation, Granger causality, and a significant log-likelihood ratio. One of the innovations of the local scale model is that it does not use the variance directly but rather its reciprocal, called the "precision" of the series, which follows a kind of multiplicative random walk. The LSM captured all the stylized facts of financial series, and the results favored its use; the model is therefore an efficient and parsimonious specification for estimating and forecasting volatility, since it has only one parameter to estimate, which represents a paradigm shift relative to conditional heteroskedasticity models.
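
The defining mechanism described above can be simulated in a few lines: the precision (reciprocal variance) follows a multiplicative random walk, and returns are drawn with variance 1/precision. This is a sketch of the data-generating process only, not the paper's estimation or VaR procedure; the step size 0.05 is an arbitrary illustrative choice.

```python
# Minimal simulation of the LSM idea described above: the reciprocal of the
# variance ("precision") follows a multiplicative random walk. A sketch of
# the data-generating process, not the paper's estimation method.
import numpy as np

rng = np.random.default_rng(4)
T = 1000
precision = np.empty(T)
precision[0] = 100.0
for t in range(1, T):
    precision[t] = precision[t - 1] * np.exp(0.05 * rng.normal())  # multiplicative step

returns = rng.normal(0.0, 1.0 / np.sqrt(precision))  # variance = 1 / precision
print(f"sample std of returns: {returns.std():.4f}")
```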

Relevance: 40.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 40.00%

Abstract:

The aim of this article was to evaluate the use of fuzzy logic to estimate the possibility of neonatal death. A computational model was developed based on fuzzy set theory, with birth weight, gestational age, Apgar score, and history of stillbirth as variables. Mamdani's inference method was employed, and the output variable was the risk of neonatal death. Twenty-four rules were created according to the input variables, and the model was validated using a real database from a Brazilian city. Accuracy was estimated by the ROC curve; risks were compared by Student's t-test. MATLAB 6.5 was used to build the model. Mean risks were lower for those who survived (p < 0.001). The model's accuracy was 0.90. The highest accuracy was obtained with a risk possibility of 25% or less (sensitivity = 0.70, specificity = 0.98, negative predictive value = 0.99, and positive predictive value = 0.22). The model showed good accuracy and negative predictive value and could be used in general hospitals.
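
A minimal hand-rolled Mamdani pipeline, using two of the four inputs above, shows the mechanics: fuzzify inputs with membership functions, fire rules with min (AND), aggregate with max, and defuzzify by centroid. The membership shapes and the two rules are illustrative assumptions, not the 24 rules of the published model.

```python
# Minimal Mamdani sketch with two of the inputs described above (birth
# weight, Apgar score). Membership shapes and rules are illustrative
# assumptions, not the published model's 24-rule base.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def neonatal_risk(weight_g: float, apgar: float) -> float:
    risk_axis = np.linspace(0, 1, 101)
    low_weight = tri(weight_g, 0, 1000, 2500)  # membership: low birth weight
    low_apgar = tri(apgar, 0, 2, 6)            # membership: low Apgar score
    # Rule 1: low weight AND low Apgar -> high risk (min as AND, clip output set).
    high_out = np.minimum(min(low_weight, low_apgar), tri(risk_axis, 0.5, 1.0, 1.5))
    # Rule 2: otherwise -> low risk.
    low_out = np.minimum(1 - max(low_weight, low_apgar), tri(risk_axis, -0.5, 0.0, 0.5))
    agg = np.maximum(high_out, low_out)        # max aggregation
    if agg.sum() == 0:                         # toy rule base is not exhaustive
        return 0.5
    return float((risk_axis * agg).sum() / agg.sum())  # centroid defuzzification

print(f"risk: {neonatal_risk(900, 3):.2f}")   # very low weight, low Apgar
print(f"risk: {neonatal_risk(3400, 9):.2f}")  # healthy term newborn
```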

Relevance: 40.00%

Abstract:

Data were collected and analysed from seven field sites in Australia, Brazil and Colombia on weather conditions and the severity of anthracnose disease of the tropical pasture legume Stylosanthes scabra caused by Colletotrichum gloeosporioides. Disease severity and weather data were analysed using artificial neural network (ANN) models developed using data from some or all field sites in Australia and/or South America to predict severity at other sites. Three series of models were developed using different weather summaries. Of these, ANN models with weather for the day of disease assessment and the previous 24 h period had the highest prediction success, and models trained on data from all sites within one continent correctly predicted disease severity in the other continent on more than 75% of days; the overall prediction error was 21.9% for the Australian and 22.1% for the South American model. Of the six cross-continent ANN models trained on pooled data for five sites from two continents to predict severity for the remaining sixth site, the model developed without data from Planaltina in Brazil was the most accurate, with >85% prediction success, and the model without Carimagua in Colombia was the least accurate, with only 54% success. In common with multiple regression models, moisture-related variables such as rain, leaf surface wetness and variables that influence moisture availability such as radiation and wind on the day of disease severity assessment or the day before assessment were the most important weather variables in all ANN models. A set of weights from the ANN models was used to calculate the overall risk of anthracnose for the various sites. Sites with high and low anthracnose risk are present in both continents, and weather conditions at centres of diversity in Brazil and Colombia do not appear to be more conducive than conditions in Australia to serious anthracnose development.
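
A minimal sketch of this kind of weather-driven ANN follows, using a small multilayer perceptron; the feature set and data are synthetic placeholders, not the field-site observations.

```python
# Sketch of a weather-driven ANN for disease severity, in the spirit of the
# study above. Features and data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Hypothetical daily features: rain (mm), leaf wetness (h), radiation, wind.
X = rng.uniform(0, 1, size=(400, 4))
severity = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=400)

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(X[:300], severity[:300])
print(f"held-out R^2: {ann.score(X[300:], severity[300:]):.2f}")
```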

Relevance: 40.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 40.00%

Abstract:

Preservation of rivers and water resources is crucial in most environmental policies, and many efforts are made to assess water quality. Environmental monitoring of large river networks is based on measurement stations. Compared to the total length of the river networks, their number is often limited, and there is a need to extend environmental variables that are measured locally to the whole river network. The objective of this paper is to propose several relevant geostatistical models for river modeling. These models use river distance and are based on two contrasting assumptions about dependency along a river network. Inference using maximum likelihood, model selection criteria and prediction by kriging are then developed. We illustrate our approach on two variables that differ in their distributional and spatial characteristics: summer water temperature and nitrate concentration. The data come from 141 to 187 monitoring stations in a network on a large river located in the northeast of France that is more than 5000 km long and includes the Meuse and Moselle basins. We first evaluated different spatial models and then produced prediction maps and error variance maps for the whole stream network.
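
A minimal ordinary-kriging sketch illustrates the prediction step: an exponential covariance is evaluated on a distance matrix, and solving the kriging system yields both a prediction and its error variance. Here a 1-D along-reach distance stands in for a valid river-network distance; stations and values are synthetic, not the Meuse/Moselle data.

```python
# Ordinary kriging with an exponential covariance on (stand-in) river
# distance: a sketch of the prediction/error-variance maps discussed above.
# Station positions and observations are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(6)
s = rng.uniform(0, 100, size=12)                   # positions along a reach (km)
z = np.sin(s / 15.0) + 0.1 * rng.normal(size=12)   # observed variable

def exp_cov(d, sill=1.0, range_km=20.0):
    return sill * np.exp(-d / range_km)

D = np.abs(s[:, None] - s[None, :])                # stand-in for river distance
n = len(s)
# Ordinary kriging system: covariances plus a Lagrange row for unbiasedness.
K = np.zeros((n + 1, n + 1))
K[:n, :n] = exp_cov(D)
K[n, :n] = K[:n, n] = 1.0

s0 = 42.0                                          # prediction location (km)
k0 = np.append(exp_cov(np.abs(s - s0)), 1.0)
weights = np.linalg.solve(K, k0)
z_hat = weights[:n] @ z                            # kriging prediction
var_hat = exp_cov(0.0) - weights @ k0              # kriging error variance
print(f"prediction at {s0} km: {z_hat:.3f} (kriging variance {var_hat:.3f})")
```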