8 results for Prediction algorithms

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

70.00%

Publisher:

Abstract:

Recently, the telecommunication industry has benefited from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The main aims of this approach are the support of on-demand provisioning and elasticity of virtualized mobile network components, based on data traffic load. To realize this, during operation and management procedures, the virtualized services need to be triggered in order to scale up/down or scale out/in an instance. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network link bandwidth availability. It can be implemented in a cloud-based mobile network structure and used as a support service by any other virtualized mobile network service. MOBaaS provides prediction information to generate the triggers required for on-demand deployment, provisioning, and disposal of virtualized network components. This information can also be used for self-adaptation procedures and optimal network function configuration during run-time operation. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
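The trigger generation described above can be pictured as a threshold rule on predicted load. The sketch below is an illustrative assumption (the function name, thresholds, and utilization rule are invented for this example), not the MOBaaS implementation:

```python
# Hypothetical sketch of a prediction-driven scaling trigger: a predicted
# traffic load is compared against capacity thresholds to decide whether a
# virtualized network component should scale out, scale in, or hold.
# All names and threshold values are assumptions, not taken from MOBaaS.

def scaling_trigger(predicted_load, capacity,
                    scale_out_ratio=0.8, scale_in_ratio=0.3):
    """Map a predicted traffic load onto a scaling decision."""
    utilization = predicted_load / capacity
    if utilization > scale_out_ratio:
        return "scale-out"   # provision an additional instance
    if utilization < scale_in_ratio:
        return "scale-in"    # dispose of a surplus instance
    return "hold"            # predicted load fits current capacity

# Example: 90% predicted utilization exceeds the 80% threshold
print(scaling_trigger(predicted_load=90.0, capacity=100.0))  # scale-out
```

In a real CMS, the decision would of course feed an orchestration API rather than return a string; the point here is only the shape of the prediction-to-trigger mapping.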

Relevance:

60.00%

Publisher:

Abstract:

Treatment with growth hormone (GH) has become standard practice for replacement in GH-deficient children and for pharmacotherapy in a variety of disorders with short stature. However, even today, the reported adult heights achieved often remain below the normal range. In addition, the treatment is expensive and may be associated with long-term risks. Thus, a discussion of the factors relevant for achieving an optimal individual outcome in terms of growth, costs, and risks is required. In the present review, the heterogeneous approaches to treatment with GH are discussed, considering the parameters available for an evaluation of the short- and long-term outcomes at different stages of treatment. This discourse introduces the potential of the newly emerging prediction algorithms in comparison to other, more conventional approaches for the planning and evaluation of the response to GH. In rare disorders, such as many of those presenting with short stature, treatment decisions cannot easily be deduced from personal experience. An interactive approach that uses the experience derived from large cohorts for the evaluation of the individual patient and the required decision-making may facilitate the use of GH. Such an approach should also help avoid unnecessary long-term treatment in unresponsive individuals.

Relevance:

60.00%

Publisher:

Abstract:

Electroencephalograms (EEG) are often contaminated with high-amplitude artifacts that limit the usability of the data. Methods that reduce these artifacts are often restricted to certain types of artifacts, or require manual interaction or large training data sets. In this paper we introduce a novel method that is able to eliminate many different types of artifacts without manual intervention. The algorithm first decomposes the signal into different sub-band signals in order to isolate different types of artifacts into specific frequency bands. After decomposition with principal component analysis (PCA), an adaptive threshold is applied to eliminate components with high variance, which correspond to the dominant artifact activity. Our results show that the algorithm is able to significantly reduce artifacts while preserving the EEG activity. The parameters of the algorithm do not have to be identified for each patient individually, making the method a good candidate for preprocessing in automatic seizure detection and prediction algorithms.
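The core step of the described pipeline, PCA on a (sub-band) signal followed by an adaptive variance threshold, can be sketched as below. The SVD-based PCA and the mean-plus-z-score threshold are assumptions chosen for illustration, not the paper's exact procedure:

```python
# Sketch: remove high-variance principal components from a multichannel
# signal using an adaptive threshold on the component variances.
# The z-score-style threshold is an assumed concrete choice of
# "adaptive threshold", not necessarily the one used in the paper.
import numpy as np

def suppress_high_variance_components(subband, z_thresh=2.0):
    """Project a (channels x samples) sub-band signal onto its principal
    components and zero out components whose variance is an outlier
    (threshold: mean + z_thresh * std of the component variances)."""
    mean = subband.mean(axis=1, keepdims=True)
    centered = subband - mean
    # PCA via SVD: columns of U are spatial patterns, rows of Vt time courses
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2                                  # variance per component
    keep = var <= var.mean() + z_thresh * var.std()
    cleaned = (U[:, keep] * s[keep]) @ Vt[keep]   # reconstruct without outliers
    return cleaned + mean
```

Running this per frequency band, as the abstract describes, lets each artifact type dominate (and be removed from) only the band where it lives, while the remaining bands pass through largely untouched.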

Relevance:

30.00%

Publisher:

Abstract:

Prediction of the glycemic profile is an important task both for early recognition of hypoglycemia and for enhancement of the control algorithms that optimize the insulin infusion rate. Adaptive models for glucose prediction and recognition of hypoglycemia, based on statistical and artificial intelligence techniques, are presented.

Relevance:

30.00%

Publisher:

Abstract:

Dynamic systems, especially in real-life applications, are often characterized by inter- and intra-individual variability, uncertainties and time-varying components. Physiological systems are probably the most representative example, in which population variability, measurement noise in vital signals and uncertain dynamics render their explicit representation and optimization a rather difficult task. Systems characterized by such challenges often require adaptive algorithmic solutions able to perform an iterative structural and/or parametrical update process towards optimized behavior. Adaptive optimization offers the advantages of (i) individualization through learning of basic system characteristics, (ii) the ability to follow time-varying dynamics and (iii) low computational cost. In this chapter, the use of online adaptive algorithms is investigated in two basic research areas related to diabetes management: (i) real-time glucose regulation and (ii) real-time prediction of hypo-/hyperglycemia. The applicability of these methods is illustrated through the design and development of an adaptive glucose control algorithm based on reinforcement learning and optimal control, and an adaptive, personalized early-warning system for the recognition of, and alarm generation against, hypo- and hyperglycemic events.
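One generic way to realize the online, low-cost adaptive prediction described above is a recursive least squares (RLS) autoregressive predictor, which updates its parameters with each new measurement. This is an illustrative sketch of the technique class, not the chapter's actual algorithm; the order, forgetting factor and initialization are assumed values:

```python
# Illustrative online adaptive predictor: recursive least squares (RLS)
# fitting an autoregressive model, updated one sample at a time.
# Hyperparameters (order, forgetting factor lam, init scale delta) are
# assumptions for this sketch.
import numpy as np

class RLSPredictor:
    def __init__(self, order=3, lam=0.98, delta=100.0):
        self.w = np.zeros(order)          # AR coefficients, learned online
        self.P = np.eye(order) * delta    # inverse-correlation estimate
        self.lam = lam                    # forgetting factor (<1 tracks drift)

    def _regressor(self, history):
        # Most recent value first: [y[t-1], y[t-2], ...]
        return np.asarray(history[-len(self.w):][::-1], dtype=float)

    def predict(self, history):
        """One-step-ahead prediction from the most recent samples."""
        return float(self.w @ self._regressor(history))

    def update(self, history, target):
        """Standard RLS update after the true next value arrives."""
        x = self._regressor(history)
        k = self.P @ x / (self.lam + x @ self.P @ x)   # gain vector
        err = target - self.w @ x                      # a-priori error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, x) @ self.P) / self.lam
        return err
```

The per-step cost is O(order^2) with no stored training set, which is exactly the "low computational cost, follows time-varying dynamics" profile that makes such methods attractive for real-time glucose prediction.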

Relevance:

30.00%

Publisher:

Abstract:

Prevention of psychoses has been intensively investigated over the past two decades, and prediction in particular has advanced considerably. Depending on the applied risk indicators, current criteria are associated with average, yet significantly heterogeneous, transition rates of ≥30% within 3 years, further increasing with longer follow-up periods. Risk stratification offers a promising approach to advance current prediction, as it can help to reduce the heterogeneity of transition rates and to identify subgroups with specific needs and response patterns, enabling targeted intervention. It may also be suitable for improving risk enrichment. Current results suggest the future implementation of multi-step risk algorithms that combine sensitive risk detection by cognitive basic symptoms (COGDIS) and ultra-high-risk (UHR) criteria with additional individual risk estimation by a prognostic index relying on further predictors such as additional clinical indicators, functional impairment, neurocognitive deficits, and EEG and structural MRI abnormalities, while also considering resilience factors. Simply combining COGDIS and UHR criteria in a second step of risk stratification already produced a 4-year hazard rate of 0.66. With regard to prevention, two recent meta-analyses demonstrated that preventive measures enable a reduction in 12-month transition rates by 54-56%, with most favorable numbers needed to treat of 9-10. Unfortunately, psychosocial functioning, another important target of preventive efforts, did not improve. However, these results are based on a relatively small number of trials; more methodologically sound studies and a stronger consideration of individual profiles of clinical needs by modular intervention programs are required.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored to this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD by creating a new prediction model encompassing various pathophysiological pathways. To this end, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. DESIGN, SETTING AND SUBJECTS We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. RESULTS The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). CONCLUSION The VILCAD biomarker score, based on routine parameters complemented by novel biomarkers, outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.
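Applying a multivariable Cox-type score like this amounts to computing a linear predictor: a weighted sum of marker values. The sketch below shows only that mechanic; the coefficients and patient values are invented for illustration and are not the published BIO-VILCAD weights:

```python
# Sketch of applying a Cox-model risk score: the linear predictor
# (prognostic index) is the sum of coefficient * marker value.
# Coefficients and the example patient are hypothetical, NOT the
# published BIO-VILCAD model.

def linear_predictor(markers, coefficients):
    """Cox linear predictor over named markers."""
    return sum(coefficients[name] * value for name, value in markers.items())

coeffs = {"age": 0.03, "lvef": -0.02, "nt_probnp_log": 0.4}   # hypothetical
patient = {"age": 65, "lvef": 55, "nt_probnp_log": 6.2}        # hypothetical
print(round(linear_predictor(patient, coeffs), 3))  # → 3.33
```

A higher linear predictor maps to a higher estimated hazard (via exp of the index in the proportional-hazards model), which is what the reported C-statistic and net reclassification index evaluate.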

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive, predictive, SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
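The difference between reactive and autoregressive predictive scaling can be sketched with a minimal AR(1) forecast feeding the scaling decision. Everything below (the AR coefficient, the 80%/40% thresholds, the per-VM capacity model) is an assumption for illustration, not the paper's algorithms:

```python
# Illustrative predictive VM-scaling sketch: a one-step AR(1) forecast of
# load drives the decision, so capacity is added BEFORE the SLA boundary is
# reached, instead of reacting only to the current load.
# phi and the scaling thresholds are assumed values.

def ar1_forecast(load_history, phi=0.9):
    """One-step AR(1) forecast around the series mean."""
    mean = sum(load_history) / len(load_history)
    return mean + phi * (load_history[-1] - mean)

def predictive_decision(load_history, vm_capacity, n_vms):
    """Return the target VM count given the forecast load."""
    forecast = ar1_forecast(load_history)
    if forecast > 0.8 * vm_capacity * n_vms:
        return n_vms + 1                 # scale out ahead of the SLA breach
    if forecast < 0.4 * vm_capacity * n_vms:
        return max(1, n_vms - 1)         # release surplus capacity
    return n_vms
```

A purely reactive rule would apply the same thresholds to `load_history[-1]` alone; on a rising trend the forecast crosses the scale-out threshold earlier, which is the intuition behind the reported advantage of the predictive variant.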