877 results for real life data


Relevance: 100.00%

Abstract:

This paper proposes a new multi-resource, multi-stage mine production timetabling problem for optimising open-pit drilling, blasting and excavating operations under equipment capacity constraints. The flow process is analysed using real-life data from an Australian iron ore mine site. The objective of the model is to maximise throughput and minimise the total idle time of equipment at each stage. Comprehensive mining attributes and constraints are considered, including equipment types, operating capacities, ready times and speeds, block-sequence-dependent movement times, and equipment-assignment-dependent operational times. The model also tracks the availability and usage of equipment units across the drilling, blasting and excavating stages. The problem is formulated as a mixed integer program and solved with the ILOG CPLEX optimiser. The proposed model is validated through extensive computational experiments aimed at improving mine production efficiency at the operational level.
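
A minimal toy sketch of how an assignment-style mixed integer program of this flavour might look in Python with PuLP (this is not the paper's formulation; the block names, tonnages and processing times below are invented placeholders):

```python
import pulp

# Hedged toy sketch: maximise drilled tonnage while penalising rig idle time.
# Data are invented; the real model adds ready times, block-sequence-dependent
# movement times and the blasting/excavating stages.
blocks = ["b1", "b2", "b3"]
rigs = ["rig1", "rig2"]
tonnes = {"b1": 50, "b2": 80, "b3": 60}
proc_time = {("b1", "rig1"): 3, ("b1", "rig2"): 4,
             ("b2", "rig1"): 5, ("b2", "rig2"): 4,
             ("b3", "rig1"): 4, ("b3", "rig2"): 3}
shift_hours = 8
idle_penalty = 1.0

m = pulp.LpProblem("mine_drilling_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", (blocks, rigs), cat="Binary")
idle = pulp.LpVariable.dicts("idle", rigs, lowBound=0)

# Objective: throughput of drilled blocks minus a penalty on rig idle hours.
m += (pulp.lpSum(tonnes[b] * x[b][r] for b in blocks for r in rigs)
      - idle_penalty * pulp.lpSum(idle[r] for r in rigs))
for b in blocks:                          # each block drilled at most once
    m += pulp.lpSum(x[b][r] for r in rigs) <= 1
for r in rigs:                            # shift capacity and idle-time definition
    used = pulp.lpSum(proc_time[(b, r)] * x[b][r] for b in blocks)
    m += used <= shift_hours
    m += idle[r] >= shift_hours - used

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: r for b in blocks for r in rigs if x[b][r].value() == 1})
```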

Relevance: 100.00%

Abstract:

We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. The size of the chosen subsets is crucial to the algorithm's scalability. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
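
As a rough illustration of the subset-iteration idea only (not the paper's algorithm or its O(log n) random-projection analysis), here is a hedged sketch using scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Hedged sketch: repeatedly fit on a small random sample plus previously
# retained "hard" points (margin violators), rather than on the full data.
X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
y_signed = 2 * y - 1                       # map {0,1} labels to {-1,+1}
rng = np.random.default_rng(0)

subset_size = 500
keep = np.array([], dtype=int)             # indices carried between iterations
clf = LinearSVC()
for _ in range(10):
    sample = rng.choice(len(X), size=subset_size, replace=False)
    idx = np.union1d(keep, sample)
    clf.fit(X[idx], y[idx])
    margins = y_signed[idx] * clf.decision_function(X[idx])
    keep = idx[margins < 1.0]              # retain margin violators / support-like points
print(clf.score(X, y))                     # accuracy of the subset-trained model
```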

Relevance: 100.00%

Abstract:

Microscopic simulation models are often evaluated based on visual inspection of the results. This paper presents formal econometric techniques to compare microscopic simulation (MS) models with real-life data. A related result is a methodology to compare different MS models with each other. For this purpose, possible parameters of interest, such as mean returns or autocorrelation patterns, are classified and characterized. For each class of characteristics, the appropriate techniques are presented. We illustrate the methodology by comparing the MS model developed by He and Li [J. Econ. Dynam. Control, 2007, 31, 3396-3426; Quant. Finance, 2008, 8, 59-79] with actual data.
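
A hedged sketch of the kind of moment comparison such techniques formalize, contrasting the mean return and lag-1 autocorrelation of a simulated series with those of an actual series (both series below are synthetic stand-ins, and the bootstrap standard error is a crude placeholder for the paper's formal tests):

```python
import numpy as np

# Hedged sketch: compare simple return characteristics of a "real" series and
# a microscopic-simulation output. All data here are synthetic placeholders.
rng = np.random.default_rng(1)
actual = rng.standard_t(df=5, size=2000) * 0.01     # placeholder "real" returns
simulated = rng.normal(0, 0.012, size=2000)         # placeholder MS-model output

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

for name, stat in [("mean", np.mean), ("lag-1 autocorr", lag1_autocorr)]:
    a, s = stat(actual), stat(simulated)
    # crude standard error of the difference via independent bootstraps
    boot = [stat(rng.choice(actual, len(actual)))
            - stat(rng.choice(simulated, len(simulated))) for _ in range(500)]
    print(f"{name}: actual={a:.4f} simulated={s:.4f} diff se={np.std(boot):.4f}")
```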

Relevance: 100.00%

Abstract:

The use of joint modelling approaches is becoming increasingly popular when an association exists between survival and longitudinal processes. Widely recognized for their gain in efficiency, joint models also offer a reduction in bias compared with naïve methods. With this increasing popularity comes a constantly expanding literature on joint modelling approaches. The aim of this paper is to give an overview of recent literature relating to joint models, in particular those that focus on the time-to-event survival process. A discussion is provided on the range of survival submodels that have been implemented in a joint modelling framework, with a particular focus on recent advancements in the software used to build these models. The use of the JM and joineR packages within R is demonstrated through two real-life data examples concerning the survival of end-stage renal disease patients. Possible future directions for this field of research are also discussed. © 2013 International Statistical Institute.
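
For reference, a standard shared-parameter joint model of the kind such packages fit can be written as follows (generic notation, not taken from the paper):

```latex
% Generic shared-parameter joint model: a longitudinal submodel with a
% subject-specific trajectory m_i(t), and a proportional-hazards survival
% submodel that shares m_i(t) through the association parameter alpha.
\begin{aligned}
  y_i(t) &= m_i(t) + \varepsilon_i(t),
    \qquad m_i(t) = \mathbf{x}_i^{\top}(t)\boldsymbol{\beta}
      + \mathbf{z}_i^{\top}(t)\mathbf{b}_i,
    \qquad \mathbf{b}_i \sim N(\mathbf{0},\mathbf{D}),\ \varepsilon_i(t) \sim N(0,\sigma^2),\\
  h_i\{t \mid \mathcal{M}_i(t)\} &= h_0(t)\,
    \exp\!\bigl\{\boldsymbol{\gamma}^{\top}\mathbf{w}_i + \alpha\, m_i(t)\bigr\},
\end{aligned}
```

where m_i(t) is the subject-specific longitudinal trajectory, M_i(t) its history up to time t, and the association parameter alpha ties the current value of the trajectory to the hazard.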

Relevance: 100.00%

Abstract:

Background: Pharmacological strategies for treating schizophrenia are receiving growing attention owing to the development of new pharmacotherapies that are more effective and better tolerated, but more costly. Schizophrenia is a chronic illness that progresses through distinct states defined by their severity. Objectives: This research programme aims to: 1) evaluate the factors associated with the risk of being in a specific schizophrenia state, in order to build the risk functions used to model the natural course of schizophrenia; 2) develop and validate a Markov model with Monte Carlo microsimulations to simulate the natural course of newly diagnosed schizophrenia patients as a function of their individual risk-factor profile; 3) estimate the direct cost of schizophrenia (healthcare and non-healthcare) from the government perspective and simulate the clinical and economic impact of a new treatment in a cohort of newly diagnosed schizophrenia patients followed for the first five years after diagnosis. Methods: For the first objective, a total of 14,320 newly diagnosed schizophrenia patients were identified in the RAMQ and Med-Echo databases. Six specific schizophrenia states were defined: first episode (FE), low dependency state (LDS), high dependency state (HDS), stable state (Stable), well state (Well) and death (Death). To evaluate the factors associated with the risk of being in each specific state, we built four risk functions based on Cox proportional hazards analysis for competing risks. For the second objective, we developed and validated a Markov model with Monte Carlo microsimulations integrating the six specific schizophrenia states. In the model, each subject had individual transition probabilities between the specific states, estimated using the cumulative incidence function method. For the third objective, we used the Markov model developed previously. This model includes direct healthcare costs, estimated using the Régie de l'assurance maladie du Québec and Med-Echo databases, and direct non-healthcare costs, estimated from Statistics Canada surveys and publications. Results: A total of 14,320 newly diagnosed individuals were identified in the study cohort. Mean follow-up was 4.4 (±2.6) years. Factors associated with the course of schizophrenia included age, sex, schizophrenia treatment and comorbidities. After five years, our results show that 41% of patients will be considered recovered, 13% will be in a stable state and 3.4% will have died. Over the first 5 years after diagnosis, the mean direct healthcare and non-healthcare cost was estimated at CAN$36,701 (95% CI: 36,264-37,138). Healthcare accounted for 56.2% of the direct cost, social assistance for 34.6% and institutionalization in long-term care facilities for 9.2%. If a new treatment became available and offered a 20% increase in therapeutic efficacy, the direct healthcare and non-healthcare cost could be reduced by up to 14.2%. Conclusion: We identified factors associated with the course of schizophrenia. The Markov model we developed is the first Canadian model integrating transition probabilities adjusted for individual risk-factor profiles using real-life data. The model shows good internal and external validity. Our results indicate that a new treatment could eventually reduce hospitalizations and the cost associated with long-term care facilities, increase patients' chances of returning to the labour market, and thereby help reduce the cost of social assistance.
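
A minimal sketch of a Markov microsimulation of this kind, assuming the six states named in the abstract; the per-subject transition matrix below is an invented placeholder, not an estimate from the RAMQ/Med-Echo data:

```python
import numpy as np

# Hedged microsimulation sketch (not the thesis model): each subject carries
# an individual yearly transition matrix over the six states; the probabilities
# below are illustrative placeholders only.
STATES = ["FE", "LDS", "HDS", "Stable", "Well", "Death"]

def simulate_subject(transition, cycles, rng, start="FE"):
    """Simulate one subject's yearly state path for a number of cycles."""
    state = STATES.index(start)
    path = [STATES[state]]
    for _ in range(cycles):
        state = rng.choice(len(STATES), p=transition[state])
        path.append(STATES[state])
        if STATES[state] == "Death":          # absorbing state
            break
    return path

rng = np.random.default_rng(42)
# One illustrative subject-specific matrix (rows sum to 1).
P = np.array([
    [0.00, 0.20, 0.10, 0.30, 0.39, 0.01],   # from FE
    [0.00, 0.40, 0.15, 0.25, 0.19, 0.01],   # from LDS
    [0.00, 0.20, 0.45, 0.20, 0.13, 0.02],   # from HDS
    [0.00, 0.10, 0.05, 0.60, 0.24, 0.01],   # from Stable
    [0.00, 0.05, 0.02, 0.13, 0.79, 0.01],   # from Well
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],   # Death is absorbing
])
paths = [simulate_subject(P, cycles=5, rng=rng) for _ in range(10_000)]
print(sum(p[-1] == "Well" for p in paths) / len(paths))  # share ending in Well
```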

Relevance: 100.00%

Abstract:

This article presents the results of a study of the current operating condition of the Brazilian interconnected power system with respect to voltage stability. The analyses are based on real planning and operation data. The system is divided into four regions: North, Northeast, South and Southeast/Centre-West. The stability margin is obtained for each of these regions. The results show that the Brazilian interconnected system has a stability margin well below that suggested by existing criteria, and that this margin is limited by the Southeast/Centre-West region. A detailed analysis of this critical area of the system is carried out. In addition to an assessment of how the voltage stability margin behaves over a typical weekday, this work presents a contingency analysis. Using modal analysis, the reach of each contingency is assessed, classifying its impact as local, area-wide or system-wide.
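
A hedged sketch of the modal-analysis step, i.e. eigen-decomposition of the reduced power-flow Jacobian with bus participation factors identifying where a weak mode is concentrated; the 3×3 matrices below are illustrative, not data from the Brazilian system:

```python
import numpy as np

# Hedged sketch of modal (eigenvalue) analysis of the reduced power-flow
# Jacobian; the matrices are toy values, not system data.
def reduced_jacobian(J_Ptheta, J_PV, J_Qtheta, J_QV):
    """J_R = J_QV - J_Qtheta * inv(J_Ptheta) * J_PV."""
    return J_QV - J_Qtheta @ np.linalg.solve(J_Ptheta, J_PV)

def weakest_mode(J_R):
    """Return the smallest eigenvalue and the bus participation factors."""
    w, vr = np.linalg.eig(J_R)
    vl = np.linalg.inv(vr)                 # rows are left eigenvectors
    i = np.argmin(w.real)                  # mode closest to instability
    participation = (vr[:, i] * vl[i, :]).real
    return w[i].real, participation

# Toy 3-bus example
J_Pt = np.eye(3) * 10.0
J_PV = np.full((3, 3), 0.5)
J_Qt = np.full((3, 3), 0.5)
J_QV = np.array([[8.0, -2.0, -1.0], [-2.0, 6.0, -1.5], [-1.0, -1.5, 4.0]])
lam, pf = weakest_mode(reduced_jacobian(J_Pt, J_PV, J_Qt, J_QV))
print(lam, pf)
```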

Relevance: 100.00%

Abstract:

Limited data are available on the clinical impact of varicella in the ambulatory setting. Our goal was to determine real-life data on the clinical outcomes, medical management, and resource utilization in patients with varicella in Switzerland, a country without a universal immunization program against varicella. A total of 236 patients (222 [94%] primarily healthy individuals) with a clinical diagnosis of varicella were recruited by pediatricians and general practitioners. The age range of patients was 0-47 years with a median of 5 years. The great majority of patients (179 [76%]) were

Relevance: 100.00%

Abstract:

BACKGROUND: We investigated the rate of severe hypoglycemic events and confounding factors in patients with type 2 diabetes treated with sulfonylurea (SU) at specialized diabetes centers, documented in the German/Austrian DPV-Wiss database. METHODS: Data from 29,485 SU-treated patients were analyzed (median [IQR] age 70.8 [62.2-77.8] years, diabetes duration 8.2 [4.3-12.8] years). The primary objective was to estimate the event rate of severe hypoglycemia (requiring external help, causing unconsciousness/coma/convulsion, and/or leading to emergency hospitalization). Secondary objectives included exploration of confounding risk factors through group comparison and Poisson regression. RESULTS: Severe hypoglycemic events were reported in 826 (2.8%) of all patients during their most recent year of SU treatment. Of these, n = 531 (1.8%) had coma and n = 501 (1.7%) were hospitalized at least once. The adjusted event rate of severe hypoglycemia [95% CI] was 3.9 [3.7-4.2] events/100 patient-years (coma: 1.9 [1.8-2.1]; hospitalization: 1.6 [1.5-1.8]). Adjusted event rates by diabetes treatment were 6.7 (SU + insulin), 4.9 (SU + insulin + other OAD), 3.1 (SU + other OAD), and 3.8 (SU only). Patients with ≥1 severe event were older (p < 0.001) and had a longer diabetes duration (p = 0.020) than patients without severe events. Participation in educational diabetes programs and indirect measures of insulin resistance (increased BMI, plasma triglycerides) were associated with fewer events (all p < 0.001). Impaired renal function was common (n = 3,113 with eGFR ≤30 mL/min) and associated with an increased rate of severe events (≤30 mL/min: 7.7; 30-60 mL/min: 4.8; >60 mL/min: 3.9). CONCLUSIONS: These real-life data showed a rate of severe hypoglycemia of 3.9/100 patient-years in SU-treated patients from specialized diabetes centers. Higher risk was associated with known risk factors including lack of diabetes education, older age, and decreased eGFR, but also with lower BMI and lower triglyceride levels, suggesting that SU treatment in such patients should be considered with caution.
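
A hedged sketch of the kind of Poisson regression used for such event rates, with person-time entering as an offset; the data and covariate names below are synthetic, not the DPV-Wiss variables:

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch: model severe-event counts with a Poisson GLM and an offset of
# log person-time, so coefficients translate into rate ratios. Data are synthetic.
rng = np.random.default_rng(0)
n = 500
age = rng.normal(70, 8, n)
egfr = rng.normal(60, 20, n)
exposure_years = rng.uniform(0.5, 1.0, n)           # person-time under SU
rate = np.exp(-3.0 + 0.02 * (age - 70) - 0.01 * (egfr - 60))
events = rng.poisson(rate * exposure_years)

X = sm.add_constant(np.column_stack([age, egfr]))
model = sm.GLM(events, X, family=sm.families.Poisson(),
               offset=np.log(exposure_years))        # offset = log person-time
result = model.fit()
print(np.exp(result.params))                         # rate ratios per unit covariate
```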

Relevance: 100.00%

Abstract:

This paper presents a new multi-depot combined vehicle and crew scheduling algorithm and uses it, in conjunction with a heuristic vehicle routing algorithm, to solve the intra-city mail distribution problem faced by Australia Post. First we describe the Australia Post mail distribution problem and outline the heuristic vehicle routing algorithm used to find vehicle routes. We then present the new multi-depot combined vehicle and crew scheduling algorithm, which is based on set covering with column generation. The paper concludes with a computational investigation, using real-life data for Australia Post distribution networks, that examines the effect of different types of vehicle routing solutions on the vehicle and crew scheduling solution, compares the different levels of integration possible with the new vehicle and crew scheduling algorithm, and compares sequential versus simultaneous vehicle and crew scheduling.
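
A minimal set-covering sketch in the spirit of crew scheduling (not the paper's column-generation algorithm), assuming PuLP and invented duties and costs:

```python
import pulp

# Hedged sketch: pick a minimum-cost subset of candidate crew duties so that
# every trip is covered at least once. Duties, costs and trips are made up.
trips = ["t1", "t2", "t3", "t4"]
duties = {                       # duty -> (cost, trips it covers)
    "d1": (8, {"t1", "t2"}),
    "d2": (6, {"t2", "t3"}),
    "d3": (9, {"t3", "t4"}),
    "d4": (5, {"t1", "t4"}),
}

prob = pulp.LpProblem("crew_set_covering", pulp.LpMinimize)
x = {d: pulp.LpVariable(f"x_{d}", cat="Binary") for d in duties}
prob += pulp.lpSum(cost * x[d] for d, (cost, _) in duties.items())
for t in trips:                  # every trip covered by at least one chosen duty
    prob += pulp.lpSum(x[d] for d, (_, cov) in duties.items() if t in cov) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([d for d in duties if x[d].value() == 1])
```

In a column-generation setting such as the one the paper describes, the candidate duties would not be enumerated up front but priced out iteratively from the LP relaxation's dual values.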

Relevance: 100.00%

Abstract:

In multimedia retrieval, a query is typically refined interactively towards the ‘optimal’ answers by exploiting user feedback. However, in existing work the refined query is re-evaluated in each iteration. This is not only inefficient but also fails to exploit the answers that may be common between iterations. In this paper, we introduce a new approach called SaveRF (Save random accesses in Relevance Feedback) for iterative relevance feedback search. SaveRF predicts the potential candidates for the next iteration and maintains this small set for efficient sequential scan. By doing so, repeated candidate accesses can be saved, reducing the number of random accesses. In addition, efficient scanning of the overlap before the search starts tightens the search space with a smaller pruning radius. We implemented SaveRF, and our experimental study on real-life data sets shows that it can reduce the I/O cost significantly.
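
A hedged sketch of the general idea only (not the SaveRF implementation): cache a slightly enlarged candidate set around the current query so that the next, refined query can be answered by a sequential scan of that small set rather than by new random accesses to the whole collection:

```python
import numpy as np

# Hedged sketch: the "cache" is an enlarged neighbourhood of the current query;
# the refined query is answered by scanning only the cached candidates.
rng = np.random.default_rng(0)
db = rng.random((100_000, 32))                 # feature vectors

def knn(points, idx, q, k):
    d = np.linalg.norm(points - q, axis=1)
    order = np.argsort(d)[:k]
    return idx[order], d[order]

k, slack = 10, 3.0
q0 = rng.random(32)                            # initial query
ids, dists = knn(db, np.arange(len(db)), q0, k)
cache_mask = np.linalg.norm(db - q0, axis=1) <= slack * dists[-1]
cache_ids = np.nonzero(cache_mask)[0]          # small candidate set kept in memory

q1 = q0 + 0.05 * (rng.random(32) - 0.5)        # refined query after user feedback
ids1, _ = knn(db[cache_ids], cache_ids, q1, k) # sequential scan of the cache only
print(len(cache_ids), ids1)
```

If the refined query drifts far from the cached neighbourhood, the cache can miss true neighbours, which is why the slack factor (and, in SaveRF, the candidate prediction) matters.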

Relevance: 100.00%

Abstract:

Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian Mixture Model whose parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the mean and variance of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the 'early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.
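
A hedged numpy sketch of the negative log-likelihood for this special case, with fixed kernel centres and width and a simple linear-softmax map standing in for the network (the thesis uses an RBF network and the evidence framework; none of the values below come from it):

```python
import numpy as np

# Hedged sketch: mixing coefficients alpha_j(x) come from a softmax over a
# linear map (a stand-in for the network); kernel centres mu_j and width sigma
# are fixed, as in the special case described in the abstract.
def softmax(a):
    a = a - a.max(axis=1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

def mdn_nll(W, b, X, t, mu, sigma):
    """Mean negative log-likelihood of targets t under fixed Gaussian kernels."""
    alpha = softmax(X @ W + b)                        # (n, m) mixing coefficients
    d2 = ((t[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-0.5 * d2 / sigma**2) / (np.sqrt(2 * np.pi) * sigma) ** t.shape[1]
    return -np.mean(np.log((alpha * phi).sum(axis=1) + 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                         # inputs
t = rng.normal(size=(200, 1))                         # targets
mu = np.linspace(-2, 2, 7)[:, None]                   # fixed kernel centres
W, b = rng.normal(size=(4, 7)) * 0.1, np.zeros(7)
print(mdn_nll(W, b, X, t, mu, sigma=0.5))
```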

Relevance: 100.00%

Abstract:

Often the designer of ROLAP applications follows up with the question “can I create a little joiner table with just the two dimension keys and then connect that table to the fact table?” In a classic dimensional model there are two options: (a) both dimensions are modelled independently, or (b) the two dimensions are combined into a super-dimension with a single key. The second approach is not widely used in ROLAP environments, but it is an important sparsity-handling method in MOLAP systems. In ROLAP this design technique can also bring storage and performance benefits, although the model becomes more complicated. The dependency between dimensions is a key factor that designers have to consider when choosing between the two options. In this paper we present the results of our storage and performance experiments over real-life data cubes with reference to these design approaches. Some conclusions are drawn.

Relevance: 100.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2014

Relevance: 100.00%

Abstract:

This thesis proposes some confidence intervals for the mean of a positively skewed distribution. The following confidence intervals are considered: Student-t, Johnson-t, median-t, mad-t, bootstrap-t, BCA, T1, T3 and six new confidence intervals: the median bootstrap-t, mad bootstrap-t, median T1, mad T1, median T3 and the mad T3. A simulation study was conducted, and average widths, coefficients of variation of widths, and coverage probabilities were recorded and compared across confidence intervals. To compare confidence intervals, widths and coverage probabilities were examined, so that a smaller width indicated a better confidence interval when coverage probabilities were the same. Results showed that the median T1 and median T3 outperformed the other confidence intervals in terms of coverage probability, while the mad bootstrap-t, mad-t, and mad T3 outperformed the others in terms of width. Some real-life data are considered to illustrate the findings of the thesis.
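
A hedged sketch of one of the intervals listed above, the textbook bootstrap-t interval for the mean of a positively skewed sample (a generic implementation, not the thesis code):

```python
import numpy as np

# Hedged sketch of the bootstrap-t confidence interval: studentize each
# bootstrap mean and invert the empirical quantiles of the t statistics.
def bootstrap_t_ci(x, alpha=0.05, B=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    n, xbar, se = len(x), x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    t_stats = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        se_b = xb.std(ddof=1) / np.sqrt(n)
        t_stats[b] = (xb.mean() - xbar) / se_b
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return xbar - hi * se, xbar - lo * se   # note the reversed quantiles

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=50)   # positively skewed data
print(bootstrap_t_ci(sample, rng=rng))
```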