39 results for Proximal algorithms
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double-regularized proximal kernels, a fairly recent theoretical development that leads to fully smooth subproblems. We first enhance the existing theory to show that our approach is globally convergent in both the primal and dual spaces when applied to convex problems. We then present an extensive computational evaluation using the CUTE test set, showing that some aspects of our approach are promising, but some are not. These conclusions in turn lead to additional computational experiments suggesting where to next focus our theoretical and computational efforts.
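As background, a minimal sketch of the proximal machinery such methods build on (in Python, with illustrative names; this is not the authors' solver): a proximal-gradient iteration using the soft-thresholding prox of the l1 norm.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, iters=100):
    # Minimize f(x) + g(x) via x <- prox_{step*g}(x - step*grad_f(x)).
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy problem: minimize 0.5*||x - b||^2 + ||x||_1,
# whose solution is the soft-thresholding of b.
b = np.array([3.0, -0.5, 1.5])
x = proximal_gradient(lambda x: x - b, prox_l1, np.zeros(3), step=1.0, iters=50)
```

With step size 1 the iteration reaches its fixed point, the soft-thresholded vector, after a single step on this toy problem.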
Abstract:
The objective of this study was to evaluate the effectiveness of a therapeutic sealant in arresting the progression of non-cavitated proximal carious lesions. The study population comprised 44 adolescents who had bitewing radiographs taken for caries diagnosis. Non-cavitated lesions extending up to half of the dentin thickness were included in the sample. In the experimental group (n = 33), the proximal caries-lesion surfaces were sealed with an adhesive (OptiBond Solo, Kerr) after tooth separation. The control group (n = 11) received no treatment except oral hygiene instructions, including use of dental floss. Follow-up radiographs were taken after one year and analyzed in comparison with baseline radiographs. Visual readings were performed by two examiners, blinded both to whether the examined radiograph was baseline or follow-up and to whether it concerned a test or control lesion. The efficacy of the sealing treatment was evaluated by the McNemar test (α = 0.05). About 22% of the sealed lesions showed reduction, 61% showed no change and 16% showed progression; for the control lesions, the corresponding values were 27%, 36% and 36%, respectively. Lesions that showed reduction or no change were merged, so that 83.3% of the sealed lesions and 63.6% of the control lesions were considered clinically successful. No statistically significant difference was detected (p > 0.05). Over the course of 1 year, sealing proximal caries lesions was not shown to be superior to lesion monitoring.
Abstract:
The objectives of the present study were to identify the cis-elements of the promoter absolutely required for efficient rat NHE3 gene transcription, to locate positive and negative regulatory elements in the 5'-flanking sequence (5'FS) that might modulate gene expression in proximal tubules, and to compare these results with those reported for intestinal cell lines. We analyzed the promoter activity of different 5'FS segments of the rat NHE3 gene in the OKP renal proximal tubule cell line by measuring the activity of the reporter gene luciferase. Because the segment spanning the first 157 bp of the 5'FS was the most active, it was studied in more detail by sequential deletions, point mutations, and gel shift assays. The essential elements for gene transcription lie in the region -85 to -33, where we identified consensus binding sites for Sp1 and EGR-1, which are relevant to NHE3 gene basal transcription. Although a low level of transcription is still possible when only the first 25 bp of the 5'FS are used as promoter, efficient transcription occurs only with 44 bp of 5'FS. There are negative regulatory elements in the segments spanning -1196 to -889 and -467 to -152, and positive enhancers between -889 and -479 bp of the 5'FS. Transcription factors in the OKP cell nuclear extract bound efficiently to DNA elements of the rat NHE3 promoter, as demonstrated by gel shift assays, suggesting a high level of similarity between the transcription factors of both species, including Sp1 and EGR-1.
Abstract:
AIM: To evaluate, by scintigraphy, the effects of meal size and of three segmentation schemes on the intragastric distribution of the meal and on gastric motility. METHODS: Twelve healthy volunteers were randomly assessed twice by scintigraphy. The test meal consisted of 60 or 180 mL of yogurt labeled with 64 MBq (99m)Tc-tin colloid. Anterior and posterior dynamic frames were acquired simultaneously for 18 min and all data were analyzed in MatLab. Three proximal-distal segmentations using regions of interest were adopted for both meals. RESULTS: The intragastric distribution of the meal between the proximal and distal compartments was strongly influenced by the way in which the stomach was divided, showing greater proximal retention after the 180 mL meal. An important finding was that both dominant frequencies (1 and 3 cpm) were recorded simultaneously in the proximal and distal stomach; however, the power ratio of those dominant frequencies varied with the segmentation adopted and was independent of meal size. CONCLUSION: It was possible to evaluate simultaneously the static intragastric distribution and the phasic contractility from the same recording using our scintigraphic approach. (C) 2010 Baishideng. All rights reserved.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with that of the well-known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves in simplified situations for these algorithms and compare their performances.
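The Kullback-Leibler divergence used here as the generalization measure has a direct closed form for discrete distributions; a small illustrative sketch (not the authors' code):

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions.
    # Terms with p_i = 0 contribute 0; assumes q_i > 0 wherever p_i > 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Identical distributions have zero divergence.
assert kl_divergence([0.5, 0.5], [0.5, 0.5]) == 0.0
```

In a learning-curve setting one would evaluate this between the true emission/transition distributions and the learner's current estimates at each iteration.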
Abstract:
Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids. There are distortions in these waveforms that can be represented as a combination of the fundamental frequency, harmonics and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, an intelligent algorithm specially designed for optimization problems, was successfully implemented and tested. Two chromosome representations are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier Transform, especially when the real representation of the chromosomes is considered.
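To illustrate the general idea of GA-based harmonic identification (a toy sketch with assumed signal parameters; this is not the GOOAL algorithm), the snippet below estimates the amplitude of a 3rd harmonic in a synthetic waveform using a minimal real-coded GA with tournament selection and Gaussian mutation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200, endpoint=False)
# Synthetic distorted waveform: unit fundamental plus a 3rd harmonic
# of amplitude 0.3 (one normalized cycle).
signal = np.sin(2 * np.pi * t) + 0.3 * np.sin(2 * np.pi * 3 * t)

def fitness(a3):
    # Negative squared reconstruction error for a candidate amplitude a3.
    model = np.sin(2 * np.pi * t) + a3 * np.sin(2 * np.pi * 3 * t)
    return -np.sum((signal - model) ** 2)

pop = rng.uniform(-1, 1, size=30)           # real-coded chromosomes
for _ in range(60):
    fit = np.array([fitness(a) for a in pop])
    # Binary tournament selection.
    i, j = rng.integers(0, 30, 30), rng.integers(0, 30, 30)
    parents = np.where(fit[i] > fit[j], pop[i], pop[j])
    # Gaussian mutation.
    pop = parents + rng.normal(0, 0.05, 30)
best = pop[np.argmax([fitness(a) for a in pop])]
```

The population converges toward the true amplitude 0.3; a full harmonic-identification run would estimate amplitude and phase for every harmonic of interest.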
Abstract:
This paper presents a strategy for solving the WDM optical network planning problem, specifically the Routing and Wavelength Allocation (RWA) problem with the objective of minimizing the number of wavelengths used; in this form, the problem is known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality with high performance. The key point is allowing degradation of the maximum load on the virtual links in favor of minimizing the number of wavelengths used; the objective is to find a good compromise between the metrics of the virtual topology (load in Gb/s) and of the physical topology (number of wavelengths). The simulations yield good results when compared to existing ones in the literature.
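For reference, a generic simulated-annealing skeleton of the kind applied to Min-RWA (the cost function and neighborhood below are placeholders on a toy 1-D problem, not the paper's RWA formulation):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=2000):
    # Classic SA loop: always accept improving moves; accept worsening
    # moves with probability exp(-delta / T); cool T geometrically.
    random.seed(42)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= alpha
    return best

# Toy usage: minimize a 1-D quadratic with a random-step neighborhood.
best = simulated_annealing(lambda x: (x - 3.0) ** 2,
                           lambda x: x + random.uniform(-0.5, 0.5), 0.0)
```

In a Min-RWA setting, a state would be a routing/wavelength assignment, the neighborhood a reroute or wavelength swap, and the cost a weighted mix of wavelength count and virtual-link load.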
Abstract:
This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the usage of P2P network resources are of central importance. One effective approach to this issue is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison of the different solutions. In this paper, we develop a thorough review of popular locality algorithms based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
Abstract:
In this paper a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. For the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed herein. Several selection procedures are examined, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation when exchanging genetic material. The topologies of the initial population are randomly generated such that radial configurations are produced through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
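The random radial topologies mentioned above rely on minimum-spanning-tree construction; a minimal Kruskal sketch with union-find (illustrative edge data, not the paper's network model):

```python
def kruskal_mst(n, edges):
    # Kruskal's algorithm: scan edges in increasing weight order and add an
    # edge whenever its endpoints lie in different components (union-find).
    # The result is a spanning tree, i.e. a loop-free (radial) topology.
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Toy 4-node network: edges given as (weight, u, v).
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (1, 2, 3), (4, 0, 3)]
tree = kruskal_mst(4, edges)
```

Randomizing the edge weights before each call yields varied radial configurations, which is how an MST routine can seed an EA population with feasible individuals.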
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
The flowshop scheduling problem with in-process blocking is addressed in this paper. In this environment, there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising. (c) 2007 Elsevier Ltd. All rights reserved.
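For context, the NEH algorithm used here as a comparison baseline orders jobs by decreasing total processing time and inserts each one at the best position in the partial sequence. A minimal sketch for the classical permutation flowshop without blocking, with makespan rather than total tardiness as the objective (toy data, not the paper's adaptation):

```python
def makespan(seq, p):
    # Completion-time recursion for a permutation flowshop (no blocking):
    # C[j,k] = max(C[j,k-1], C[j-1,k]) + p[j][k].
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[m - 1]

def neh(p):
    # NEH: sort jobs by decreasing total processing time, then insert each
    # job at the position that minimizes the partial-sequence makespan.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

# Toy instance: 3 jobs x 2 machines, p[job][machine].
p = [[3, 4], [2, 2], [4, 1]]
seq = neh(p)
```

A blocking variant changes only the completion-time recursion (a job may not leave a machine until the next one is free); the insertion scheme stays the same.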
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. With a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE under the LHMC criterion would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009.
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
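The self-adaptation scheme described here, encoding a distribution parameter in the chromosome so that it evolves together with the individual, can be illustrated with a (1+1)-ES that self-adapts its Gaussian step size log-normally (a sketch of the general mechanism only, not the authors' q-Gaussian operator):

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Standard sphere test function: f(x) = sum(x_i^2), minimum 0 at origin.
    return float(np.sum(x ** 2))

# (1+1)-ES with self-adapted mutation strength: the step size sigma travels
# with the individual and is itself mutated log-normally, analogous to
# encoding the q parameter of the q-Gaussian in the chromosome.
x, sigma = rng.normal(0, 1, 5), 1.0
for _ in range(500):
    s = sigma * np.exp(0.3 * rng.normal())   # log-normal self-adaptation
    y = x + s * rng.normal(0, 1, 5)          # Gaussian mutation with step s
    if sphere(y) <= sphere(x):               # elitist replacement
        x, sigma = y, s
best = sphere(x)
```

Swapping the Gaussian perturbation for a q-Gaussian (or Cauchy) sample, with q evolved the same way, yields the kind of self-adapted heavy-tailed mutation the paper studies.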