997 results for Randomized algorithm


Relevance:

100.00%

Abstract:

We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments in the theory of prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
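
The abstract does not spell out the procedure itself; the following is a minimal sketch of randomized prediction with expert advice in the same spirit, where the expert pool (Markov predictors of increasing order), the learning rate, and the exponential weighting scheme are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def randomized_predictor(sequence, num_experts=8, eta=0.5, seed=0):
    """Predict each bit of a binary sequence by randomly following one of a
    pool of simple 'experts' (here: Markov predictors of increasing order),
    chosen with probability proportional to exponentially weighted past
    performance. A hypothetical sketch, not the paper's actual procedure."""
    rng = np.random.default_rng(seed)
    weights = np.ones(num_experts)
    mistakes = 0
    history = []
    for y in sequence:
        # Expert k predicts the majority bit seen after the last k-bit context.
        preds = []
        for k in range(num_experts):
            ctx = tuple(history[-k:]) if k > 0 else ()
            matches = [history[i + k] for i in range(len(history) - k)
                       if tuple(history[i:i + k]) == ctx]
            preds.append(int(round(np.mean(matches))) if matches else 0)
        # Randomly follow one expert, with weight-proportional probability.
        p = weights / weights.sum()
        guess = preds[rng.choice(num_experts, p=p)]
        mistakes += int(guess != y)
        # Exponential weight update: penalize the experts that erred.
        weights *= np.exp(-eta * np.array([pk != y for pk in preds]))
        history.append(y)
    return mistakes / max(len(sequence), 1)
```

On a long sequence from a stationary ergodic source, the reported mistake rate should approach that of the best predictor in the pool, which is the flavor of guarantee the abstract describes.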

Relevance:

60.00%

Abstract:

Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
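
For orientation, here is a didactic sketch of Feldman-style LP decoding: minimize the LLR-weighted objective over the relaxation of the codeword polytope cut out by each check's "forbidden set" inequalities. This is a generic textbook formulation, not the dissertation's own construction, and it enumerates subsets per check, so it is only sensible for low-density codes.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Linear programming decoder sketch.

    H   : parity-check matrix (0/1 numpy array of shape m x n)
    llr : per-bit log-likelihood ratios; positive values favor bit 0.

    For each check j with neighborhood N(j), and each odd-sized subset S of
    N(j), add the inequality  sum_{i in S} x_i - sum_{i in N(j)\\S} x_i <= |S|-1.
    Fractional entries in the output signal a pseudocodeword.
    """
    m, n = H.shape
    A, b = [], []
    for j in range(m):
        nbr = np.flatnonzero(H[j])
        for size in range(1, len(nbr) + 1, 2):        # odd-sized subsets S
            for S in itertools.combinations(nbr, size):
                row = np.zeros(n)
                row[list(S)] = 1.0                    # +x_i for i in S
                row[[i for i in nbr if i not in S]] = -1.0
                A.append(row)
                b.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x
```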

Relevance:

30.00%

Abstract:

The efficacy of the human papillomavirus type 16 (HPV-16)/HPV-18 AS04-adjuvanted vaccine against cervical infections with HPV in the Papilloma Trial against Cancer in Young Adults (PATRICIA) was evaluated using a combination of the broad-spectrum L1-based SPF10 PCR-DNA enzyme immunoassay (DEIA)/line probe assay (LiPA25) system with type-specific PCRs for HPV-16 and -18. Broad-spectrum PCR assays may underestimate the presence of HPV genotypes present at relatively low concentrations in multiple infections, due to competition between genotypes. Therefore, samples were retrospectively reanalyzed using a testing algorithm incorporating the SPF10 PCR-DEIA/LiPA25 plus a novel E6-based multiplex type-specific PCR and reverse hybridization assay (MPTS12 RHA), which permits detection of a panel of nine oncogenic HPV genotypes (types 16, 18, 31, 33, 35, 45, 52, 58, and 59). For the vaccine against HPV types 16 and 18, there was no major impact on estimates of vaccine efficacy (VE) for incident or 6-month or 12-month persistent infections when the MPTS12 RHA was included in the testing algorithm versus estimates with the protocol-specified algorithm. However, the alternative testing algorithm showed greater sensitivity than the protocol-specified algorithm for detection of some nonvaccine oncogenic HPV types. More cases were gained in the control group than in the vaccine group, leading to higher point estimates of VE for 6-month and 12-month persistent infections for the nonvaccine oncogenic types included in the MPTS12 RHA assay (types 31, 33, 35, 45, 52, 58, and 59). This post hoc analysis indicates that the per-protocol testing algorithm used in PATRICIA underestimated the VE against some nonvaccine oncogenic HPV types and that the choice of the HPV DNA testing methodology is important for the evaluation of VE in clinical trials. (This study has been registered at ClinicalTrials.gov under registration no. NCT00122681.).

Relevance:

30.00%

Abstract:

PURPOSE: To compare the Full Threshold (FT) and SITA Standard (SS) strategies in glaucomatous patients undergoing automated perimetry for the first time. METHODS: Thirty-one glaucomatous patients who had never undergone perimetry were examined with automated perimetry (Humphrey, program 30-2) using both FT and SS on the same day, with an interval of at least 15 minutes between tests. The order of the examinations was randomized, and only one eye per patient was analyzed. Three analyses were performed: a) all examinations, regardless of the order of application; b) only the first examinations; c) only the second examinations. To calculate the sensitivity of both strategies, the following criteria were used to define abnormality: glaucoma hemifield test (GHT) outside normal limits, pattern standard deviation (PSD) with p<5%, or a cluster of 3 adjacent points with p<5% on the pattern deviation probability plot. RESULTS: When the results of all examinations were analyzed regardless of the order in which they were performed, the number of depressed points with p<0.5% on the pattern deviation probability map was significantly greater with SS (p=0.037), and the sensitivities were 87.1% for SS and 77.4% for FT (p=0.506). When only the first examinations were compared, there were no statistically significant differences in the number of depressed points, but the sensitivity of SS (100%) was significantly greater than that of FT (70.6%) (p=0.048). When only the second examinations were compared, there were no statistically significant differences in the number of depressed points, and the sensitivities of SS (76.5%) and FT (85.7%) did not differ significantly (p=0.664). CONCLUSION: SS may have a higher sensitivity than FT in glaucomatous patients undergoing automated perimetry for the first time. However, this difference tends to disappear in subsequent examinations.
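
The three abnormality criteria form a simple decision rule; a minimal sketch follows. The array layout and the adjacency check (three consecutive significant points along a row or column) are simplifying assumptions, since the abstract does not define "adjacent".

```python
import numpy as np

def is_abnormal(ght_outside_normal, psd_p_value, pattern_dev_p):
    """Abnormality rule used to compute sensitivity in the study:
    GHT outside normal limits, PSD significant at p < 5%, or a cluster of
    3 adjacent points with p < 5% on the pattern deviation probability plot.
    `pattern_dev_p` is assumed here to be a 2-D array of per-point p-values."""
    if ght_outside_normal or psd_p_value < 0.05:
        return True
    sig = np.asarray(pattern_dev_p) < 0.05
    # Look for 3 consecutive significant points along any row or column.
    for grid in (sig, sig.T):
        for row in grid:
            run = 0
            for v in row:
                run = run + 1 if v else 0
                if run >= 3:
                    return True
    return False
```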

Relevance:

30.00%

Abstract:

Purpose: The objective of this study was to evaluate the blood glucose (BG) control efficacy and safety of 3 insulin protocols in medical intensive care unit (MICU) patients. Methods: This was a multicenter randomized controlled trial involving 167 MICU patients with at least one BG measurement ≥ 150 mg/dL and one or more of the following: mechanical ventilation, systemic inflammatory response syndrome, trauma, or burns. The interventions were a computer-assisted insulin protocol (CAIP), with insulin infusion maintaining BG between 100 and 130 mg/dL; the Leuven protocol, with insulin maintaining BG between 80 and 110 mg/dL; or conventional treatment (subcutaneous insulin if glucose > 150 mg/dL). The main efficacy outcome was the mean of patients' median BG, and the safety outcome was the incidence of hypoglycemia (≤ 40 mg/dL). Results: The mean of patients' median BG was 125.0, 127.1, and 158.5 mg/dL for CAIP, Leuven, and conventional treatment, respectively (P = .34, CAIP vs Leuven; P < .001, CAIP vs conventional). In CAIP, 12 patients (21.4%) had at least one episode of hypoglycemia vs 24 (41.4%) in Leuven and 2 (3.8%) in conventional treatment (P = .02, CAIP vs Leuven; P = .006, CAIP vs conventional). Conclusions: CAIP is safer than and as effective as the standard strict protocol for controlling glucose in MICU patients. Hypoglycemia was rare under conventional treatment; however, BG levels were higher than with IV insulin protocols.

Relevance:

30.00%

Abstract:

From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization of the initial solution to generate different alternative starting points of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic by simply incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
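
To make the ILS skeleton concrete, here is a bare-bones sketch for the PFSP. The makespan recursion is standard; the perturbation, local search, and improve-only acceptance below are generic placeholders rather than the ILS-ESP operators, and the biased-randomized construction is reduced to a simple sorted start.

```python
import random

def makespan(perm, p):
    """Makespan of a permutation flowshop schedule.
    p[j][m] is the processing time of job j on machine m."""
    num_machines = len(p[0])
    c = [0.0] * num_machines
    for j in perm:
        for m in range(num_machines):
            c[m] = max(c[m], c[m - 1] if m > 0 else 0.0) + p[j][m]
    return c[-1]

def ils_pfsp(p, iters=1000, seed=0):
    """Bare-bones Iterated Local Search for the PFSP: insertion-based local
    search, a small random perturbation, and accept-if-better. A generic
    sketch, not the ILS-ESP operators from the article."""
    rng = random.Random(seed)
    n = len(p)
    # Constructive start: jobs by decreasing total processing time.
    best = sorted(range(n), key=lambda j: -sum(p[j]))
    best_cost = makespan(best, p)
    for _ in range(iters):
        cand = best[:]
        # Perturbation: remove a random job and reinsert it elsewhere.
        j = cand.pop(rng.randrange(n))
        cand.insert(rng.randrange(n), j)
        # Local search: best reinsertion position for one random job.
        k = cand.pop(rng.randrange(n))
        cand = min((cand[:i] + [k] + cand[i:] for i in range(n)),
                   key=lambda s: makespan(s, p))
        cost = makespan(cand, p)
        if cost < best_cost:            # acceptance criterion: improve-only
            best, best_cost = cand, cost
    return best, best_cost
```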

Relevance:

30.00%

Abstract:

Introduction: New evidence from randomized controlled and etiology-of-fever studies, the availability of reliable RDTs for malaria, and novel technologies call for a revision of the IMCI strategy. We developed a new algorithm based on (i) a systematic review of published studies assessing the safety and appropriateness of RDT use and antibiotic prescription, (ii) results from a clinical and microbiological investigation of febrile children aged <5 years, and (iii) international expert IMCI opinions. The aim of this study was to assess the safety of the new algorithm among patients in urban and rural areas of Tanzania. Materials and Methods: The design was a controlled noninferiority study. Enrolled children aged 2-59 months with any illness were managed either by a study clinician using the new Almanach algorithm (two intervention health facilities) or by clinicians using standard practice, including RDTs (two control health facilities). At day 7 and day 14, all patients were reassessed. Patients who were ill in between or not cured at day 14 were followed until recovery or death. The primary outcome was the rate of complications; the secondary outcome was the rate of antibiotic prescriptions. Results: 1062 children were recruited. The main diagnoses were URTI (26%), pneumonia (19%), and gastroenteritis (9.4%). 98% (531/541) were cured at day 14 in the Almanach arm and 99.6% (519/521) in controls. The rate of secondary hospitalization was 0.2% in each arm. One death occurred in controls. None of the complications was due to the withdrawal of antibiotics or antimalarials at day 0. The rate of antibiotic use was 19% in the Almanach arm and 84% in controls. Conclusion: Evidence suggests that the new algorithm, primarily aimed at the rational use of drugs, is as safe as standard practice and leads to a drastic reduction in antibiotic use. The Almanach is currently being tested for clinician adherence to the proposed procedures when used on paper or on a mobile phone.

Relevance:

30.00%

Abstract:

Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal: it has an unimprovable rate of convergence. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computation are considered: classes of functions with bounded derivatives and with Hölder-type conditions. The aim of the paper is to analyze the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
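
As a toy illustration of how smoothness can be exploited by a randomized method (a standard textbook device, not the paper's construction): stratifying the unit cube into a regular grid and drawing one uniform point per cell improves on crude Monte Carlo for smooth integrands.

```python
import numpy as np

def crude_mc(f, dim, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dim;
    the error decays like O(n^{-1/2}) regardless of smoothness."""
    return f(rng.random((n, dim))).mean()

def stratified_mc(f, dim, cells_per_axis, rng):
    """Stratified Monte Carlo: one uniform sample per cell of a regular grid.
    For smooth integrands, stratification exploits the regularity and
    converges faster than crude MC at the same sample count."""
    grids = np.stack(np.meshgrid(*[np.arange(cells_per_axis)] * dim,
                                 indexing="ij"), axis=-1).reshape(-1, dim)
    x = (grids + rng.random(grids.shape)) / cells_per_axis
    return f(x).mean()

# Example: a smooth 2-D integrand with known integral (2/pi)^2
f = lambda x: np.prod(np.sin(np.pi * x), axis=-1)
rng = np.random.default_rng(0)
print(crude_mc(f, 2, 32**2, rng), stratified_mc(f, 2, 32, rng))
```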

Relevance:

30.00%

Abstract:

A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data by the use of data analysis techniques. Clustering plays an important role in data analysis by organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature. However, each algorithm has its bias, being more adequate for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm to solve this formulation that uses GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms. The proposed algorithm presented the best clustering results, a finding confirmed statistically.
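
For readers unfamiliar with GRASP, the sketch below shows its two phases (greedy randomized construction with a restricted candidate list, then local search) applied to medoid-based clustering. The cost function and swap neighborhood are generic choices for illustration, not the formulation proposed in the paper.

```python
import random

def grasp_medoid_clustering(dist, k, iters=50, alpha=0.3, seed=0):
    """GRASP sketch for medoid-based clustering.

    dist  : symmetric matrix of pairwise distances
    k     : number of clusters
    alpha : greediness parameter of the restricted candidate list (RCL)
    """
    rng = random.Random(seed)
    n = len(dist)
    cost = lambda med: sum(min(dist[i][m] for m in med) for i in range(n))
    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Phase 1: greedy randomized construction of k medoids.
        medoids = []
        while len(medoids) < k:
            cand = [c for c in range(n) if c not in medoids]
            gains = {c: cost(medoids + [c]) for c in cand}
            lo, hi = min(gains.values()), max(gains.values())
            rcl = [c for c in cand if gains[c] <= lo + alpha * (hi - lo)]
            medoids.append(rng.choice(rcl))
        # Phase 2: local search over improving single-medoid swaps.
        improved = True
        while improved:
            improved = False
            for i in range(k):
                for c in range(n):
                    if c in medoids:
                        continue
                    trial = medoids[:i] + [c] + medoids[i + 1:]
                    if cost(trial) < cost(medoids):
                        medoids, improved = trial, True
        if cost(medoids) < best_cost:
            best, best_cost = medoids[:], cost(medoids)
    return best, best_cost
```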

Relevance:

30.00%

Abstract:

QUESTIONS UNDER STUDY: After years of advocating ABC (Airway-Breathing-Circulation), current guidelines of cardiopulmonary resuscitation (CPR) recommend CAB (Circulation-Airway-Breathing). This trial compared ABC with CAB as the initial approach to CPR from the arrival of rescuers until the completion of the first resuscitation cycle. METHODS: 108 teams, consisting of two physicians each, were randomized to receive a graphical display of either the ABC algorithm or the CAB algorithm. Subsequently, the teams had to treat a simulated cardiac arrest. Data analysis was performed using video recordings obtained during the simulations. The primary endpoint was the time to completion of the first resuscitation cycle of 30 compressions and two ventilations. RESULTS: The time to execution of the first resuscitation measure was 32 ± 12 seconds in ABC teams and 25 ± 10 seconds in CAB teams (P = 0.002). 18/53 ABC teams (34%) and none of the 55 CAB teams (P = 0.006) applied more than the recommended two initial rescue breaths, which caused a longer duration of the first cycle of 30 compressions and two ventilations in ABC teams (31 ± 13 vs. 23 ± 6 sec; P = 0.001). Overall, the time to completion of the first resuscitation cycle was longer in ABC teams (63 ± 17 vs. 48 ± 10 sec; P < 0.0001). CONCLUSIONS: This randomized controlled trial found CAB superior to ABC, with an earlier start of CPR and a shorter time to completion of the first 30:2 resuscitation cycle. These findings endorse the change from ABC to CAB in international resuscitation guidelines.

Relevance:

30.00%

Abstract:

BACKGROUND: E-learning and blended learning approaches are gaining more and more popularity in emergency medicine curricula. So far, little data is available on the impact of such approaches on procedural learning and skill acquisition, or on their comparison with traditional approaches. OBJECTIVE: This study investigated the impact of a blended learning approach, including Web-based virtual patients (VPs) and standard pediatric basic life support (PBLS) training, on procedural knowledge, objective performance, and self-assessment. METHODS: A total of 57 medical students were randomly assigned to an intervention group (n=30) and a control group (n=27). Both groups received paper handouts in preparation for simulation-based PBLS training. The intervention group additionally completed two Web-based VPs with embedded video clips. Measurements were taken at randomization (t0), after the preparation period (t1), and after hands-on training (t2). Clinical decision-making skills and procedural knowledge were assessed at t0 and t1. PBLS performance was scored for adherence to the correct algorithm, conformance to temporal demands, and the quality of procedural steps at t1 and t2. Participants' self-assessments were recorded at all three measurements. RESULTS: Procedural knowledge in the intervention group was significantly superior to that in the control group at t1. At t2, the intervention group showed significantly better adherence to the algorithm and to temporal demands, and better procedural quality of PBLS, in objective measures than did the control group. These aspects differed between the groups even at t1 (after the VPs, prior to practical training). Self-assessments differed significantly only at t1, in favor of the intervention group. CONCLUSIONS: Training with VPs combined with hands-on training improves PBLS performance as judged by objective measures.

Relevance:

30.00%

Abstract:

Nowadays, road safety and traffic congestion are major concerns worldwide, which is why research on vehicular communication is vital. This paper analyses the impact of context information on existing popular rate adaptation algorithms. We simulated these algorithms in MATLAB, based on the IEEE 802.11p wireless standard, for both static and mobile cases. In the static scenarios, vehicles behave much as nodes in an office network: they do not move and have no defined positions. In the mobile case, vehicles move with uniformly selected speeds and randomized positions. Network performance is analysed with and without context information. Our results show that when context information is used under mobility, system performance improves for all three rate adaptation algorithms. This can be explained by range checking: when many vehicles are out of communication range, fewer vehicles contend for network resources, thereby increasing network performance.
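
The paper does not list its three algorithms here; as a hedged sketch of the general idea, the class below combines an ARF (Automatic Rate Fallback)-style adaptation loop with a context-based range check that skips transmissions to peers believed out of range, so they neither waste airtime nor distort the rate statistics. The rate set, thresholds, and range model are illustrative assumptions.

```python
# Hypothetical subset of IEEE 802.11p data rates (Mbit/s)
RATES = [3, 4.5, 6, 9, 12, 18, 24, 27]

class ARFWithRangeCheck:
    """ARF-style rate adaptation plus a context-information range check.
    Thresholds (10 successes up, 2 failures down) and the fixed communication
    range are illustrative, not the exact algorithms evaluated in the paper."""

    def __init__(self, comm_range=300.0):
        self.idx = 0              # index of the current data rate
        self.successes = 0
        self.failures = 0
        self.comm_range = comm_range

    def should_transmit(self, distance):
        # Context information: skip peers estimated to be out of range.
        return distance <= self.comm_range

    def on_result(self, ok):
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= 10 and self.idx < len(RATES) - 1:
                self.idx += 1     # probe a higher rate after 10 successes
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= 2 and self.idx > 0:
                self.idx -= 1     # fall back after 2 consecutive failures
                self.failures = 0

    @property
    def rate(self):
        return RATES[self.idx]
```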

Relevance:

30.00%

Abstract:

The problem of finding the optimal join ordering for executing a query against a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive search unacceptable for queries joining a large number of relations. In this work, an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-deterministic optimization algorithms. Experiments have been performed optimizing several random queries against a randomly generated data dictionary. The proposed adaptive genetic algorithm with a probabilistic selection operator outperforms, in a number of test runs, the canonical genetic algorithm with Elitist selection as well as two common random search strategies, and proves to be a viable alternative to existing non-deterministic optimization approaches.
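
A minimal sketch of the genetic-algorithm approach to join ordering follows: chromosomes are permutations of relation indices, selection is fitness-proportional (probabilistic rather than Elitist), crossover is order-preserving, and mutation swaps two positions. The dynamic population sizing and adaptive operators described in the paper are omitted; `cost_fn` stands for any estimator mapping a join order to execution cost.

```python
import random

def ga_join_order(cost_fn, num_relations, pop_size=40, generations=200,
                  p_mut=0.2, seed=0):
    """Toy genetic algorithm for large join-query optimization."""
    rng = random.Random(seed)
    pop = [rng.sample(range(num_relations), num_relations)
           for _ in range(pop_size)]

    def crossover(a, b):
        # Order crossover (OX): keep a slice of parent a, fill from parent b.
        i, j = sorted(rng.sample(range(num_relations), 2))
        middle = a[i:j]
        rest = [g for g in b if g not in middle]
        return rest[:i] + middle + rest[i:]

    def select(costs):
        # Fitness-proportional (roulette) selection on inverted costs.
        fits = [1.0 / (1.0 + c) for c in costs]
        r = rng.uniform(0, sum(fits))
        acc = 0.0
        for ind, f in zip(pop, fits):
            acc += f
            if acc >= r:
                return ind
        return pop[-1]

    best = min(pop, key=cost_fn)
    for _ in range(generations):
        costs = [cost_fn(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            child = crossover(select(costs), select(costs))
            if rng.random() < p_mut:          # swap mutation
                i, j = rng.sample(range(num_relations), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=cost_fn)
    return best

# Usage with a stand-in cost model (position-weighted relation sizes):
# ga_join_order(lambda order: sum((i + 1) * r for i, r in enumerate(order)), 12)
```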

Relevance:

20.00%

Abstract:

The aim of this clinical study was to determine the efficacy of Uncaria tomentosa (cat's claw) against denture stomatitis (DS). Fifty patients with DS were randomly assigned to 3 groups to receive 2% miconazole, placebo, or 2% U tomentosa gel. DS level was recorded immediately, after 1 week of treatment, and 1 week after the end of treatment. The clinical effectiveness of each treatment was measured using Newton's criteria. Mycologic samples from the palatal mucosa and the prosthesis were obtained to determine colony-forming units per milliliter (CFU/mL) and for fungal identification at each evaluation period. Candida species were identified with HiCrome Candida and the API 20C AUX biochemical test. DS severity decreased in all groups (P < .05). A significant reduction in the number of CFU/mL after 1 week (P < .05) was observed for all groups and remained after 14 days (P > .05). C albicans was the most prevalent microorganism before treatment, followed by C tropicalis, C glabrata, and C krusei, regardless of the group and time evaluated. U tomentosa gel had the same effect as 2% miconazole gel and is an effective topical adjuvant treatment for denture stomatitis.