961 results for rules application algorithms
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data-analysis mechanisms with application benchmarking over multiple VM configurations. By processing the data sets generated from multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems, and show how dynamically generated SLAs can be successfully used to control the scaling of distributed services.
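To make the idea of inferred SLA scaling rules concrete, the following minimal Python sketch evaluates a threshold-style rule over monitored predictor metrics. The metric names (cpu_util, queue_length) and thresholds are hypothetical stand-ins for benchmark-derived predictors, not the inference mechanism described in the abstract.

```python
# Minimal sketch of an SLA-driven scaling rule over hypothetical predictor
# metrics; thresholds would be learned offline from benchmark runs.

def scaling_decision(metrics, rules):
    """Return 'scale_out', 'scale_in', or 'hold' for one service tier.

    metrics: dict of monitored values, e.g. {'cpu_util': 0.82, 'queue_length': 40}
    rules:   dict mapping metric name -> (scale_out_above, scale_in_below)
    """
    votes_out, votes_in = 0, 0
    for name, (out_above, in_below) in rules.items():
        value = metrics.get(name)
        if value is None:
            continue
        if value > out_above:
            votes_out += 1
        elif value < in_below:
            votes_in += 1
    if votes_out:                   # any violated SLA predictor triggers scale-out
        return 'scale_out'
    if votes_in == len(rules):      # scale in only when all metrics are low
        return 'scale_in'
    return 'hold'


if __name__ == '__main__':
    rules = {'cpu_util': (0.80, 0.30), 'queue_length': (50, 5)}
    print(scaling_decision({'cpu_util': 0.85, 'queue_length': 12}, rules))
```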
Abstract:
Checking the admissibility of quasiequations in a finitely generated (i.e., generated by a finite set of finite algebras) quasivariety Q amounts to checking validity in a suitable finite free algebra of the quasivariety, and is therefore decidable. However, since free algebras may be large even for small sets of small algebras and very few generators, this naive method for checking admissibility in Q is not computationally feasible. In this paper, algorithms are introduced that generate a minimal (with respect to a multiset well-ordering on their cardinalities) finite set of algebras such that the validity of a quasiequation in this set corresponds to admissibility of the quasiequation in Q. In particular, structural completeness (validity and admissibility coincide) and almost structural completeness (validity and admissibility coincide for quasiequations with unifiable premises) can be checked. The algorithms are illustrated with a selection of well-known finitely generated quasivarieties, and adapted to also handle admissibility of rules in finite-valued logics.
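For context, a minimal sketch of the basic primitive both the naive and the improved approaches rely on: checking validity of a quasiequation in a given finite algebra by enumerating all assignments. The algebra, term encoding, and example below are illustrative assumptions and do not reproduce the paper's algorithms.

```python
# Brute-force validity check for a quasiequation s1=t1 & ... & sn=tn => s=t
# in a finite algebra given by its carrier and operation table.
from itertools import product

def evaluate(term, ops, assignment):
    """Evaluate a term: a variable (str), a constant (carrier element),
    or a tuple ('op', subterm, ...)."""
    if isinstance(term, str):
        return assignment[term]
    if not isinstance(term, tuple):
        return term
    op, *args = term
    return ops[op](*(evaluate(a, ops, assignment) for a in args))

def term_vars(term):
    if isinstance(term, str):
        return {term}
    if not isinstance(term, tuple):
        return set()
    return set().union(*(term_vars(a) for a in term[1:]))

def valid(quasieq, carrier, ops):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    premises, conclusion = quasieq
    variables = sorted(set().union(
        *(term_vars(side) for eq in premises + [conclusion] for side in eq)))
    for values in product(carrier, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(evaluate(l, ops, assignment) == evaluate(r, ops, assignment)
               for l, r in premises) \
           and evaluate(conclusion[0], ops, assignment) != evaluate(conclusion[1], ops, assignment):
            return False
    return True

# Example: in the two-element meet-semilattice ({0, 1}, min),
# the quasiequation  x*y = 1  =>  y*x = 1  is valid.
print(valid(([(('*', 'x', 'y'), 1)], (('*', 'y', 'x'), 1)), [0, 1], {'*': min}))
```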
Abstract:
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take into account the postsynaptic neural code. We consider spike/no-spike, spike-count, and spike-latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning for both discrete classification and continuous regression tasks. Unlike standard reinforcement learning rules, the suggested learning rules also become faster with increasing population size. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
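As a loose illustration of population-based reinforcement learning with a spike/no-spike code, the sketch below trains a population of stochastic binary units with a reward-modulated update on a toy binary task. The task, learning rate, and population size are invented, and the rule is a generic REINFORCE-style update rather than the paper's code-specific plasticity rules.

```python
# Toy population reinforcement learning with a spike/no-spike code:
# the population votes for a binary decision and a global reward signal
# modulates a perceptron-like eligibility update.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons, eta = 20, 50, 0.05
W = rng.normal(0.0, 0.1, size=(n_neurons, n_inputs))
w_true = rng.normal(size=n_inputs)              # hidden rule defining the toy task

for trial in range(2000):
    x = rng.normal(size=n_inputs)               # presynaptic activity pattern
    p_spike = 1.0 / (1.0 + np.exp(-(W @ x)))    # per-neuron spike probability
    spikes = rng.random(n_neurons) < p_spike    # stochastic spike/no-spike code
    decision = int(spikes.mean() > 0.5)         # population majority vote
    target = int(w_true @ x > 0)
    reward = 1.0 if decision == target else -1.0
    # Reward-modulated plasticity: eligibility = (spike - expected spike) * input
    W += eta * reward * np.outer(spikes - p_spike, x)

# Report accuracy on fresh samples after training.
correct = 0
for _ in range(500):
    x = rng.normal(size=n_inputs)
    decision = int((rng.random(n_neurons) < 1.0 / (1.0 + np.exp(-(W @ x)))).mean() > 0.5)
    correct += int(decision == int(w_true @ x > 0))
print('test accuracy:', correct / 500)
```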
Abstract:
Voluntary control of information processing is crucial for allocating resources and prioritizing the processes that are most important in a given situation; the algorithms underlying such control, however, are often not clear. We investigated possible algorithms of control for the performance of the majority function, in which participants searched for and identified one of two alternative categories (left- or right-pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set size of 1, 3, and 5) and content (ratio of left- and right-pointing arrows within a set) of the inputs to test competing hypotheses regarding the mental operations used for information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm, as compared to alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.
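A rough sketch of the three candidate search strategies follows, under the assumption that a "grouping search" repeatedly samples small subsets and commits only when a sampled group is unanimous; this is one plausible reading of the abstract, not the authors' exact model.

```python
# Three toy strategies for deciding the majority category in a stimulus set.
import random

def exhaustive(arrows):
    """Inspect every item, then report the majority category."""
    return max(set(arrows), key=arrows.count)

def self_terminating(arrows):
    """Stop as soon as one category has an unbeatable lead."""
    counts = {'L': 0, 'R': 0}
    for i, a in enumerate(arrows):
        counts[a] += 1
        remaining = len(arrows) - i - 1
        leader = max(counts, key=counts.get)
        trailer = min(counts, key=counts.get)
        if counts[leader] > counts[trailer] + remaining:
            return leader
    return max(counts, key=counts.get)

def grouping(arrows, group_size=2, rng=random.Random(0)):
    """Sample and resample small groups until a unanimous group is found."""
    while True:
        group = rng.sample(arrows, min(group_size, len(arrows)))
        if len(set(group)) == 1:
            return group[0]

print(exhaustive(list('LLRLR')), self_terminating(list('LLRLR')),
      grouping(list('LLRLR')))
```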
Abstract:
The variability of results from different automated methods of detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same data set: the 1989-2009 period of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim). This experiment is part of the community project Intercomparison of Mid Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well on the geographical distribution in large oceanic regions, the interannual variability of cyclone numbers, the geographical patterns of strong trends, and the distribution shape of many life-cycle characteristics. In contrast, the largest disparities exist for the total number of cyclones, the detection of weak cyclones, and the distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and dissolution phases.
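As a hint of why detection choices matter, the toy detector below flags local minima of mean sea-level pressure that exceed a depth threshold; the field, grid, and threshold are hypothetical, and none of the fifteen IMILAST algorithms is reproduced here.

```python
# Toy cyclone-centre detector: local MSLP minima deeper than a tunable threshold.
import numpy as np

def detect_cyclone_centres(mslp, depth_threshold_hpa=2.0):
    """Return indices of grid points that are local MSLP minima and at least
    `depth_threshold_hpa` deeper than the mean of their 8-point neighbourhood."""
    centres = []
    for i in range(1, mslp.shape[0] - 1):
        for j in range(1, mslp.shape[1] - 1):
            window = mslp[i-1:i+2, j-1:j+2]
            if mslp[i, j] == window.min():
                neighbour_mean = (window.sum() - mslp[i, j]) / 8.0
                if neighbour_mean - mslp[i, j] >= depth_threshold_hpa:
                    centres.append((i, j))
    return centres

field = np.full((10, 10), 1015.0)
field[4, 5] = 1010.0                    # a single deep low
print(detect_cyclone_centres(field))    # -> [(4, 5)]
```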
Abstract:
The Internet revolution and the digital environment have spurred a significant amount of innovative activity that has had spillover effects on many sectors of the economy. For a growing group of countries, both developed and developing, digital goods and services have become an important engine of economic growth and a clear priority in their future-oriented economic strategies. Neither the rapid technological developments associated with digitization nor their increased societal significance have so far been reflected in international economic law in a comprehensive manner. The law of the World Trade Organization (WTO), in particular, has not reacted in any proactive manner. A pertinent question that arises is whether the WTO rules are still useful and able to accommodate the new digital economy, or whether they have been rendered outdated and incapable of dealing with this important development. The present think-piece seeks answers to these questions and maps the key issues and challenges the WTO faces. Appraising the current state of affairs, developments in venues other than the WTO, and proposals tabled by stakeholders, it makes some recommendations for the way forward.
Abstract:
BACKGROUND: Patients undergoing laparoscopic Roux-en-Y gastric bypass (LRYGB) often have substantial comorbidities, which must be taken into account to appropriately assess expected postoperative outcomes. The Charlson/Deyo and Elixhauser indices are widely used comorbidity measures, both of which also have revised algorithms based on enhanced ICD-9-CM coding. It is currently unclear which of the existing comorbidity measures best predicts early postoperative outcomes following LRYGB. METHODS: Using the Nationwide Inpatient Sample, patients 18 years or older undergoing LRYGB for obesity between 2001 and 2008 were identified. Comorbidities were assessed according to the original and enhanced Charlson/Deyo and Elixhauser indices. Using multivariate logistic regression, the following early postoperative outcomes were assessed: overall postoperative complications, length of hospital stay, and conversion to open surgery. Model performance for the four comorbidity indices was assessed and compared using C-statistics and the Akaike information criterion (AIC). RESULTS: A total of 70,287 patients were included. Mean age was 43.1 years (SD 10.8), 81.6% were female, and 60.3% were White. Both the original and enhanced Elixhauser indices modestly outperformed the Charlson/Deyo indices in predicting the surgical outcomes. All four models had similar C-statistics, but the original Elixhauser index was associated with the smallest AIC for all of the surgical outcomes. CONCLUSIONS: The original Elixhauser index is the best predictor of early postoperative outcomes in our cohort of patients undergoing LRYGB. However, differences between the Charlson/Deyo and Elixhauser indices are modest, and each of these indices provides clinically relevant insight for predicting early postoperative outcomes in this high-risk patient population.
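The comparison step can be sketched as fitting one logistic regression per comorbidity index and contrasting AIC and C-statistic. The synthetic indicator matrices below merely stand in for the Charlson/Deyo and Elixhauser variables, since the NIS data cannot be reproduced here.

```python
# Sketch of comparing comorbidity indices by AIC and C-statistic (AUC)
# using synthetic stand-in data.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
y = rng.binomial(1, 0.1, size=n)                 # any postoperative complication
index_matrices = {
    'Charlson/Deyo': rng.binomial(1, 0.2, size=(n, 17)),
    'Elixhauser':    rng.binomial(1, 0.2, size=(n, 30)),
}

for name, X in index_matrices.items():
    design = sm.add_constant(X)
    model = sm.Logit(y, design).fit(disp=0)
    c_stat = roc_auc_score(y, model.predict(design))
    print(f'{name}: AIC={model.aic:.1f}, C-statistic={c_stat:.3f}')
```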
Abstract:
Background Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we now determined their window periods. Methods We classified Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e. the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined utilizing the relationship 'Prevalence = Incidence x Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR, also derived from Inno-Lia results but utilizing the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. Results Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based IIR and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. Conclusions IIR estimations by window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
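A worked toy version of the two estimation steps, with made-up numbers: the window is read off a linear fit of the proportion ruled incident against time since infection (taken here as the fit's x-intercept, one plausible reading of the abstract), and the window-based IIR then follows from 'Prevalence = Incidence x Duration'.

```python
# Illustrative window and IIR estimation; all values are invented.
import numpy as np

# (1) Proportion of infections ruled "incident" at increasing times since infection.
days_since_infection = np.array([30, 60, 90, 120, 150])
proportion_incident = np.array([0.90, 0.60, 0.35, 0.15, 0.02])
slope, intercept = np.polyfit(days_since_infection, proportion_incident, 1)
window_days = -intercept / slope          # x-intercept of the regression line
print(f'window = {window_days:.1f} days')

# (2) Prevalence = Incidence x Duration  =>  IIR = prevalence / window (in years).
n_notifications = 748
n_ruled_incident = 120                    # hypothetical classification result
prevalence = n_ruled_incident / n_notifications
iir = prevalence / (window_days / 365.25)
print(f'window-based IIR = {iir:.3f}')
```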
Abstract:
Venous thromboembolism (VTE) is a potentially lethal clinical condition that is suspected in patients with common clinical complaints across many and varied clinical care settings. Once VTE is diagnosed, optimal therapeutic management (thrombolysis, IVC filters, type and duration of anticoagulants) and the ideal therapeutic management setting (outpatient, critical care) are also controversial. Clinical prediction tools, including clinical decision rules and D-Dimer, have been developed, and some validated, to assist clinical decision making along the diagnostic and therapeutic management paths for VTE. Despite these developments, practice variation is high and many controversies remain in the use of these clinical prediction tools. In this narrative review, we highlight challenges and controversies in the diagnostic and therapeutic management of VTE, with a focus on clinical decision rules and D-Dimer.
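For orientation only, a generic two-step diagnostic pathway combining a pretest-probability score with a D-Dimer test might look like the sketch below; the score cut-off and D-Dimer threshold are placeholders, not a validated rule.

```python
# Generic sketch of a rule-plus-D-dimer pathway with placeholder thresholds.
def vte_pathway(pretest_score, d_dimer_ng_ml,
                low_risk_cutoff=2.0, d_dimer_cutoff=500.0):
    """Return the next management step suggested by the toy pathway."""
    if pretest_score <= low_risk_cutoff:            # "VTE unlikely"
        if d_dimer_ng_ml < d_dimer_cutoff:
            return 'VTE excluded - no imaging required'
        return 'proceed to imaging (e.g. CT pulmonary angiography or ultrasound)'
    return 'proceed directly to imaging; consider empirical anticoagulation'

print(vte_pathway(1.0, 320.0))
```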
Abstract:
Once more, agriculture threatened to prevent all progress in multilateral trade rule-making at the Ninth WTO Ministerial Conference in December 2013. But this time, the “magic of Bali” worked. After the clock had been stopped, mainly because of the food security file, the ministers adopted a comprehensive package of decisions and declarations, mainly in respect of development issues. Five of them concern agriculture. Decision 38 on Public Stockholding for Food Security Purposes contains a “peace clause” which will now shield certain stockpile programmes from subsidy complaints in formal litigation. This article provides contextual background and analyses this decision from a legal perspective. It finds that, at best, Decision 38 provides a starting point for a WTO Work Programme for food security, for review at the Eleventh Ministerial Conference, which will probably take place in 2017. At worst, it may unduly widen the limited window for government-financed competition existing under the present rules of the WTO Agreement on Agriculture, yet without increasing global food security or even guaranteeing that no subsidy claims will be launched, or entertained, under the WTO dispute settlement mechanism. Hence, the Work Programme should seek greater coherence between farm support and socio-economic and trade objectives when it comes to stockpiles. This also encompasses a review of the present WTO rules applying to other forms of food reserves and to regional or “virtual” stockpiles. Another “low-hanging fruit” would be a decision to exempt food aid purchases from export restrictions.
Abstract:
Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments, while enforcing strict performance and quality of service requirements defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which optimally detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
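A minimal sketch of the contrast between reactive and predictive SLA-driven scaling follows, assuming an invented workload trace, per-VM capacity, and a short least-squares autoregressive forecast; it is not the simulator or the performance models used in the paper.

```python
# Reactive vs. predictive (autoregressive) VM provisioning on a toy workload trace.
import numpy as np

CAPACITY_PER_VM = 100          # requests/s one VM can serve within the SLA

def reactive_vms(current_load):
    """Scale only on the load observed right now."""
    return max(1, int(np.ceil(current_load / CAPACITY_PER_VM)))

def predictive_vms(load_history, order=3):
    """Fit a short autoregressive model and provision for the predicted load."""
    if len(load_history) <= order:
        return reactive_vms(load_history[-1])
    y = np.asarray(load_history, dtype=float)
    X = np.column_stack([y[i:len(y) - order + i] for i in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    predicted = float(np.dot(y[-order:], coeffs))
    return max(1, int(np.ceil(max(predicted, y[-1]) / CAPACITY_PER_VM)))

trace = [120, 150, 190, 240, 300, 380]           # ramping user workload
print(reactive_vms(trace[-1]), predictive_vms(trace))
```

On a ramping workload like the one above, the predictive variant provisions for the forecast load rather than the last observation, which is the intuition behind preferring predictive SLA-driven scaling for performance invariants.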
Abstract:
OBJECTIVES In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation dose. Additionally, patient CT studies were evaluated to confirm our phantom results. METHODS Helical and axial 256 multi-slice computed tomography scans of the phantom (Catphan®) were performed with varying tube voltages (80-140 kV) and currents (30-200 mAs). In total, 198 phantom data sets were reconstructed applying FBP and IR with increasing numbers of iterations, and with soft and sharp kernels. Furthermore, 25 chest and abdomen CT scans, performed with high and low exposure per patient, were reconstructed with IR and FBP. Two independent observers evaluated image quality and radiation doses of both phantom and patient scans. RESULTS In phantom scans, noise reduction was significantly improved using IR with increasing iterations, independent of tissue, scan mode, tube voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, and even improved in scans with doses < 5 mGy, although object detectability generally decreased with lower exposure. At comparable image-quality levels, CTDIvol was reduced by 26-50% using IR. In patients, applying IR instead of FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased. CONCLUSIONS Our phantom experiments demonstrate that the image-quality levels of FBP reconstructions can also be achieved at lower tube voltages and tube currents when applying IR. Our findings were confirmed in patients, revealing the potential of IR to significantly reduce CT radiation doses.
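The two quantities the comparison hinges on can be illustrated with a small sketch: image noise as the standard deviation of HU values in a uniform region of interest, and the relative CTDIvol reduction between protocols. The images and dose values below are synthetic placeholders, not measurements from the study.

```python
# Toy computation of ROI noise and relative CTDIvol reduction.
import numpy as np

def roi_noise(image_hu, roi):
    """Noise = standard deviation of HU inside a homogeneous ROI (slice objects)."""
    return float(np.std(image_hu[roi]))

def dose_reduction(ctdivol_reference, ctdivol_reduced):
    """Relative CTDIvol reduction in percent."""
    return 100.0 * (ctdivol_reference - ctdivol_reduced) / ctdivol_reference

rng = np.random.default_rng(2)
fbp_image = rng.normal(40, 12, size=(256, 256))   # noisier synthetic "FBP" image
ir_image = rng.normal(40, 7, size=(256, 256))     # smoother synthetic "IR" image
roi = (slice(100, 150), slice(100, 150))
print(roi_noise(fbp_image, roi), roi_noise(ir_image, roi),
      dose_reduction(10.0, 6.0))
```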