18 results for branch and bound algorithm
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Current SoC design trends are characterized by the integration of an increasingly large number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In this scenario, networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires specifying a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch and bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M²AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch and bound approach and by 29.3% and 36.1% compared to the genetic approach.
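A minimal sketch of the Pareto-dominance test that underlies bi-objective (latency, power) comparisons of candidate mappings; the names and data below are illustrative, not taken from the paper:

```python
# Hedged sketch: Pareto dominance for bi-objective (latency, power)
# evaluation of candidate IP mappings. Illustrative only.

def dominates(a, b):
    """True if solution a Pareto-dominates solution b.

    a and b are (latency, power) tuples; lower is better on both
    objectives, and a must be strictly better on at least one.
    """
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (latency, power) points."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Example: three candidate mappings evaluated on (latency, power).
print(pareto_front([(10.0, 5.0), (8.0, 6.0), (12.0, 7.0)]))
# -> [(10.0, 5.0), (8.0, 6.0)]; (12.0, 7.0) is dominated by (10.0, 5.0)
```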
Abstract:
In this article we propose a branch and cut algorithm with new valid inequalities specific to the transmission network expansion planning problem. All of the inequalities proposed in this work are valid for both the linear and the nonlinear models of the problem. Computational tests have shown the efficiency of the proposed method when applied to real Brazilian subsystems and to the Colombian system.
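As background for the method named above, a generic branch-and-cut loop has roughly the shape below; the relaxation, separation, and branching routines are placeholder callbacks, not the paper's specific inequalities for expansion planning:

```python
# Generic branch-and-cut skeleton (illustrative sketch; the separation
# routine standing in for problem-specific valid inequalities is a
# placeholder supplied by the caller).

def branch_and_cut(root, solve_relaxation, separate_cuts, branch, is_integral):
    best_value, best_solution = float("inf"), None
    stack = [root]                       # open subproblems
    while stack:
        node = stack.pop()
        value, solution = solve_relaxation(node)
        if value >= best_value:          # bound: prune dominated subproblems
            continue
        cuts = separate_cuts(solution)   # look for violated valid inequalities
        if cuts:
            node.add(cuts)               # tighten the relaxation, re-solve later
            stack.append(node)
        elif is_integral(solution):
            best_value, best_solution = value, solution   # new incumbent
        else:
            stack.extend(branch(node, solution))  # split on a fractional variable
    return best_value, best_solution
```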
Abstract:
OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This was a quality-improvement study. We conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. Under the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was performed only in sporadic mid-stage cases. Under the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was performed only in patients with a thick pleural peel (> 2 cm) observed on chest scan. The patients were divided into old algorithm (n = 93) and new algorithm (n = 113) groups and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients in the old algorithm group and in 81 and 32 patients in the new algorithm group, respectively (p < 0.01). The patients in the new algorithm group were older (41 +/- 1 vs. 46.3 +/- 16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0 (0-3) vs. 2 (0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation, even though more severely ill patients were subjected to videothoracoscopic surgery.
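The decision rule of the new algorithm, as described above, reduces to a single branch on pleural peel thickness; a schematic rendering (illustrative only, not clinical guidance):

```python
# Schematic rendering of the new treatment algorithm described above:
# videothoracoscopy first-line, with open decortication reserved for a
# thick pleural peel (> 2 cm) on chest scan. Illustrative only.

def first_line_procedure(pleural_peel_cm: float) -> str:
    if pleural_peel_cm > 2.0:
        return "open decortication"
    return "videothoracoscopy"

assert first_line_procedure(2.5) == "open decortication"
assert first_line_procedure(1.0) == "videothoracoscopy"
```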
Abstract:
Isoprene is emitted from many terrestrial plants at high rates, accounting for an estimated one-third of annual global volatile organic compound emissions from all anthropogenic and biogenic sources combined. Through rapid photooxidation reactions in the atmosphere, isoprene is converted to a variety of oxidized hydrocarbons, providing higher-order reactants for the production of organic nitrates and tropospheric ozone, reducing the availability of oxidants for the breakdown of radiatively active trace gases such as methane, and potentially producing hygroscopic particles that act as effective cloud condensation nuclei. However, the functional basis for plant production of isoprene remains elusive. It has been hypothesized that, within the cell, isoprene mitigates oxidative damage during the stress-induced accumulation of reactive oxygen species (ROS), but the products of isoprene-ROS reactions in plants have not been detected. Using pyruvate-2-13C leaf and branch feeding and individual branch and whole-mesocosm flux studies, we present evidence that isoprene (i) is oxidized to methyl vinyl ketone and methacrolein (iOx) in leaves and that iOx/i emission ratios increase with temperature, possibly due to an increase in ROS production under high temperature and light stress. In a primary rainforest in Amazonia, we inferred significant within-plant isoprene oxidation (despite the strong masking effect of simultaneous atmospheric oxidation) from its influence on the vertical distribution of iOx uptake fluxes, which were shifted toward low-isoprene-emitting regions of the canopy. These observations suggest that carbon investment in isoprene production is larger than that inferred from emissions alone and that models of tropospheric chemistry and biota-chemistry-climate interactions should incorporate isoprene oxidation within both the biosphere and the atmosphere, with potential implications for better understanding both the oxidizing power of the troposphere and forest responses to climate change.
Abstract:
We address the irregular shape packing problem, in which the container has a fixed width and an open dimension to be minimized. The proposed algorithm constructively creates the solution using an ordered list of items and a placement heuristic. Simulated annealing is the adopted metaheuristic to solve the optimization problem. A two-level algorithm is used to minimize the open dimension of the container. To ensure feasible layouts, the concept of the collision free region is used. A collision free region represents all possible translations for an item to be placed and may be degenerate. For a moving item, the proposed placement heuristic detects the presence of exact fits (when the item is fully constrained by its surroundings) and exact slides (when the item position is constrained in all but one direction). The relevance of these positions is analyzed and a new placement heuristic is proposed. Computational comparisons on benchmark problems show that the proposed algorithm generates highly competitive solutions. Moreover, our algorithm improved on some best-known results.
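A minimal sketch of the simulated-annealing outer loop over the ordered item list; the placement heuristic (collision free regions, exact fits and slides) is abstracted into a cost callback, and all names are illustrative:

```python
import math
import random

# Minimal simulated-annealing sketch over an ordered list of items.
# `layout_length` is a user-supplied callback that places the items in
# the given order and returns the resulting open-dimension length.

def anneal(order, layout_length, t0=1.0, cooling=0.95, steps=1000):
    current = list(order)
    best, best_cost = list(current), layout_length(current)
    cost, t = best_cost, t0
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]      # swap two items
        new_cost = layout_length(current)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                                  # accept the move
            if cost < best_cost:
                best, best_cost = list(current), cost
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
        t *= cooling                                         # cool the temperature
    return best, best_cost
```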
Abstract:
In this article, we introduce two new variants of the Assembly Line Worker Assignment and Balancing Problem (ALWABP) that allow parallelization of, and collaboration between, heterogeneous workers. These new approaches introduce an additional level of complexity in the line design and assignment process, but also offer higher flexibility, which may be particularly useful in practical situations where the aim is to progressively integrate slow or limited workers into conventional assembly lines. We present linear models and heuristic procedures for these two new problems. Computational results show the efficiency of the proposed approaches and the efficacy of the studied layouts in different situations.
Abstract:
OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4), as well as the risk of ovarian malignancy algorithm (ROMA) and risk of malignancy index (RMI) values, were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish between malignant and benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy among CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignant potential were considered low risk, which may make this an acceptable assessment method for referring patients to reference centers.
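For reference, RMI in its common form (RMI I) is the product of an ultrasound score, a menopausal score, and serum CA125. The abstract does not specify the scoring used in the study, so the convention below is an assumption:

```python
# Common RMI I convention (assumed; not stated in the abstract).
# U: ultrasound score (0, 1, or 3, by number of suspicious features);
# M: menopausal score (1 premenopausal, 3 postmenopausal).

def rmi(ultrasound_score: int, menopausal_score: int, ca125_u_ml: float) -> float:
    return ultrasound_score * menopausal_score * ca125_u_ml

# A cutoff around 200 is commonly used to flag high risk of malignancy.
print(rmi(3, 3, 85.0))   # -> 765.0, above the usual 200 cutoff
```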
Abstract:
In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD) that we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski that was called RCRCQ. We show that RCPLD is enough to ensure the convergence of an augmented Lagrangian algorithm and that it asserts the validity of an error bound. We also provide proofs and counter-examples that show the relations of RCRCQ and RCPLD with other known constraint qualifications. In particular, RCPLD is strictly weaker than CPLD and RCRCQ, while still stronger than Abadie's constraint qualification. We also verify that the second order necessary optimality condition holds under RCRCQ.
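The relations stated in the abstract can be summarized as an implication chain; each arrow is strict, i.e., cannot be reversed:

```latex
% Implication chain among the constraint qualifications discussed above;
% every implication is strict (none can be reversed).
\[
  \text{CPLD} \;\Longrightarrow\; \text{RCPLD} \;\Longrightarrow\; \text{Abadie CQ},
  \qquad
  \text{RCRCQ} \;\Longrightarrow\; \text{RCPLD}.
\]
```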
Abstract:
T-cell-based vaccine approaches have emerged to counteract HIV-1/AIDS. Broad, polyfunctional and cytotoxic CD4+ T-cell responses have been associated with control of HIV-1 replication, which supports the inclusion of CD4+ T-cell epitopes in vaccines. A successful HIV-1 vaccine should also be designed to overcome viral genetic diversity and be able to confer immunity in a high proportion of immunized individuals from a diverse HLA-bearing population. In this study, we rationally designed a multiepitopic DNA vaccine in order to elicit broad and cross-clade CD4+ T-cell responses against highly conserved and promiscuous peptides from the HIV-1 M-group consensus sequence. We identified 27 conserved, multiple HLA-DR-binding peptides in the HIV-1 M-group consensus sequences of Gag, Pol, Nef, Vif, Vpr, Rev and Vpu using the TEPITOPE algorithm. The peptides bound in vitro to an average of 12 out of the 17 tested HLA-DR molecules and also to several other molecules such as HLA-DP, -DQ and murine I-A(b) and I-A(d). Sixteen out of the 27 peptides were recognized by PBMC from patients infected with different HIV-1 variants, and 72% of such patients recognized at least 1 peptide. Immunization with a DNA vaccine (HIVBr27) encoding the identified peptides elicited IFN-gamma secretion against 11 out of the 27 peptides in BALB/c mice; CD4+ and CD8+ T-cell proliferation was observed against 8 and 6 peptides, respectively. HIVBr27 immunization elicited cross-clade T-cell responses against several HIV-1 peptide variants. Polyfunctional CD4+ and CD8+ T cells, able to simultaneously proliferate and produce IFN-gamma and TNF-alpha, were also observed. This vaccine concept may cope with HIV-1 genetic diversity as well as provide increased population coverage, which are desirable features for an efficacious strategy against HIV-1/AIDS.
Abstract:
Bound-constrained minimization is a subject of active research. To assess the performance of existing solvers, numerical evaluations and comparisons are carried out. Arbitrary decisions that may have a crucial effect on the conclusions of numerical experiments are highlighted in the present work. As a result, a detailed evaluation based on performance profiles is applied to the comparison of bound-constrained minimization solvers. Extensive numerical results are presented and analyzed.
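Performance profiles of this kind typically follow the Dolan-More definition (the abstract does not spell out the exact form used): for each solver s, plot the fraction of problems on which s is within a factor tau of the best solver:

```latex
% Dolan-More performance profile: r_{p,s} is the performance ratio of
% solver s on problem p, and rho_s(tau) is the fraction of the problem
% set P on which s is within a factor tau of the best solver.
\[
  r_{p,s} = \frac{t_{p,s}}{\min_{s'} t_{p,s'}}, \qquad
  \rho_s(\tau) = \frac{1}{|P|}\,\bigl|\{\, p \in P : r_{p,s} \le \tau \,\}\bigr| .
\]
```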
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness (IRFC). However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range Z of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q minimization problems converge to a solution of the ‖F_P‖_∞ minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce this). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
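In the notation reconstructed above, the weight-substitution argument behind the GC(sum)/GC(max) dichotomy is elementary:

```latex
% With F_P(e) = w(e) for every element e on the boundary bd(P) of an
% object P,
\[
  \|F_P\|_q^q \;=\; \sum_{e \in \mathrm{bd}(P)} w(e)^q ,
\]
% so minimizing \|F_P\|_q under weights w is the same problem as
% minimizing \|F_P\|_1 under weights w^q, while
\[
  \|F_P\|_\infty \;=\; \max_{e \in \mathrm{bd}(P)} w(e)
  \;=\; \lim_{q \to \infty} \|F_P\|_q .
\]
```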
Abstract:
This paper presents a structural damage detection methodology based on genetic algorithms and dynamic parameters. Three chromosomes are used to encode an individual in the population. The first and second chromosomes locate and quantify damage, respectively; the third permits the self-adaptation of the genetic parameters. The natural frequencies and mode shapes are used to formulate the objective function. A numerical analysis was performed for several truss structures under different damage scenarios. The results show that the methodology can reliably identify damage scenarios using noisy measurements and that it produces only a few misidentified elements.
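An illustrative encoding of the three-chromosome individual described above; the names and value ranges are assumptions, not taken from the paper:

```python
import random
from dataclasses import dataclass

# Illustrative three-chromosome individual (assumed encoding).

@dataclass
class Individual:
    location: list    # chromosome 1: indices of candidate damaged elements
    severity: list    # chromosome 2: stiffness reduction per located element
    ga_params: dict   # chromosome 3: self-adapted genetic-operator rates

def random_individual(n_elements: int, n_damaged: int) -> Individual:
    loc = sorted(random.sample(range(n_elements), n_damaged))
    return Individual(
        location=loc,
        severity=[random.uniform(0.0, 1.0) for _ in loc],
        ga_params={"crossover_rate": random.uniform(0.5, 1.0),
                   "mutation_rate": random.uniform(0.001, 0.1)},
    )

# The fitness of an individual would compare measured natural frequencies
# and mode shapes against those of the model damaged per (location, severity).
print(random_individual(n_elements=25, n_damaged=2))
```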
Abstract:
Chronic hepatitis C virus (HCV) infection is a worldwide health problem that may evolve to cirrhosis and hepatocellular carcinoma. Incompletely understood immune system mechanisms have been associated with impaired viral clearance. The nonclassical class I human leukocyte antigen G (HLA-G) molecule may downregulate immune system cell functions and exhibits well-recognized tolerogenic properties. HCV genotype was analyzed in chronic HCV-infected patients. Because HLA-G expression may be induced by certain viruses, we evaluated the presence of HLA-G in the liver microenvironment in 89 biopsies of patients harboring chronic HCV infection, stratified according to clinical and histopathological features. Overall, the data indicated that HCV genotype 1 was predominant, especially subgenotype 1a, with a prevalence of 87%. HLA-G expression was observed in 45 (51%) liver specimens, and it was more frequent in milder stages of chronic hepatitis (67.4%) than in moderate (27.8%; p = 0.009) and severe (36.0%; p = 0.021) stages of the disease. Altogether, these results suggest that the expression of HLA-G in the context of HCV is a complex process modulated by many factors, which may contribute to an immunologic environment favoring viral persistence. However, because the milder forms predominantly expressed HLA-G, a protective role for this molecule cannot be excluded.
Abstract:
We consider the influence of breakup channels on the complete fusion of weakly bound systems in terms of dynamic polarization potentials. It is argued that the enhancement of the cross section at sub-barrier energies may be consistent with recent experimental observations that nucleon transfer, often leading to breakup, is dominant compared to direct breakup. The main trends of the experimental complete fusion cross sections are analyzed in the framework of the DPP approach. The qualitative conclusions are supported by CDCC calculations including a sequential breakup channel, the one-neutron stripping of Li-7 followed by the breakup of Li-6.
Abstract:
Background: The controversial effects promoted by cardiac resynchronization therapy (CRT) on ventricular repolarization (VR) have motivated VR evaluation by body surface potential mapping (BSPM) in CRT patients. Methods: Fifty-two CRT patients (mean age 58.8 +/- 12.3 years, 31 male, LVEF 27.5 +/- 9.2%, NYHA III-IV heart failure, QRS duration 181.5 +/- 14.2 ms) underwent 87-lead BSPM in sinus rhythm (BASELINE) and biventricular pacing (BIV). Measurements of mean and corrected QT intervals and their dispersion, mean and corrected T-peak-to-end intervals and their dispersion, and JT intervals characterized the global and regional (RV, Intermediate, and LV regions) ventricular repolarization response. Results: Global QTm (P < 0.001) and QTcm (P < 0.05) were decreased in BIV; QTm was similar across regions in both modes (P = ns); QTcm values were lower in the RV/LV regions than in the Intermediate region in both BASELINE and BIV (P < 0.001); only RV/Septum showed a significant difference (P < 0.01) in the BIV mode. QTD values both at BASELINE (P < 0.01) and in BIV (P < 0.001) were greater in the Intermediate than in the LV region. CRT significantly reduced global and regional QTm and QTcm values. QTD was globally decreased in RV/LV (Intermediate: P = ns). The BIV mode significantly reduced the global mean and corrected T-peak-to-end intervals and their dispersion. Changes in JT intervals were not significant. Conclusions: The ventricular repolarization parameters QTm, QTcm, and QTD (global and regional values), as assessed by BSPM, were reduced in patients with severe HF and LBBB under CRT. Greater recovery impairment in the Intermediate region was detected by the smaller variation of its dispersion.
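The abstract does not state which rate-correction formula was used; Bazett's correction is the usual default, and inter-lead dispersion is the max-min spread, so the conventions below are assumptions:

```latex
% Assumed conventions (not specified in the abstract): Bazett's
% heart-rate correction (RR in seconds) and inter-lead QT dispersion.
\[
  QT_c = \frac{QT}{\sqrt{RR}}, \qquad
  QT_D = \max_{\text{leads}} QT \;-\; \min_{\text{leads}} QT .
\]
```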