984 results for Branch-cut method


Relevance: 30.00%

Publisher: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Abstract:

Relevance: 30.00%

Publisher:

Abstract:

Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as a final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Adopting the Berkeley Segmentation Data Set and Benchmark as a reference, our goal is to compare the results obtained with this method against previous work to validate its performance.
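As a minimal illustration of the NCut criterion used in the final partitioning step, the sketch below evaluates NCut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V) on a toy similarity matrix. The graph, weights and partitions are invented for the example; the paper builds its graph and weights from the hierarchical Watershed and distance learning step.

```python
# Toy evaluation of the Normalized Cut criterion on a hand-made 4-node graph.
# A good partition cuts only weak edges, giving a small NCut value.

def ncut_value(W, A, B):
    """NCut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    cut = sum(W[i][j] for i in A for j in B)
    assoc_A = sum(W[i][j] for i in A for j in range(len(W)))
    assoc_B = sum(W[i][j] for i in B for j in range(len(W)))
    return cut / assoc_A + cut / assoc_B

# Similarity matrix: two tight clusters {0,1} and {2,3} joined by weak edges.
W = [[0.0, 1.0, 0.1, 0.0],
     [1.0, 0.0, 0.0, 0.1],
     [0.1, 0.0, 0.0, 1.0],
     [0.0, 0.1, 1.0, 0.0]]

good = ncut_value(W, {0, 1}, {2, 3})   # severs only the weak edges
bad = ncut_value(W, {0, 2}, {1, 3})    # severs both strong edges
assert good < bad
```

Minimizing this value over all partitions is NP-hard, which is why practical NCut implementations relax it to a generalized eigenvector problem.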

Relevance: 30.00%

Publisher:

Abstract:

This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
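The image foresting transform at the core of the riverbed approach propagates an optimum path cost from seed pixels, in the spirit of Dijkstra's algorithm. The sketch below shows that generic machinery with a simple additive cost on a tiny grid; the riverbed method itself uses a different, previously unexploited connectivity function, which is not reproduced here.

```python
import heapq

# Generic IFT sketch: Dijkstra-style propagation of a path cost from seed
# pixels over a 4-connected grid. The additive connectivity function below
# is only illustrative.

def ift(costs, seeds):
    """Return the minimum path-cost map from `seeds` over a 2-D cost grid."""
    h, w = len(costs), len(costs[0])
    dist = [[float("inf")] * w for _ in range(h)]
    heap = []
    for (r, c) in seeds:
        dist[r][c] = 0
        heapq.heappush(heap, (0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + costs[nr][nc]   # additive connectivity function
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
dist = ift(grid, [(0, 0)])
assert dist[0][2] == 6   # optimum path detours around the expensive column
```

In boundary tracking, the user's anchor points play the role of the seeds, and the optimum-path forest between consecutive anchors traces the object boundary.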

Relevance: 30.00%

Publisher:

Abstract:

Traditional abduction imposes as a precondition the restriction that the background information may not derive the goal data. In first-order logic such a precondition is, in general, undecidable. To avoid this problem, we present a first-order cut-based abduction method which has KE-tableaux as its underlying inference system. This inference system allows for the automation of non-analytic proofs in a tableau setting, permitting a generalization of traditional abduction that avoids the undecidable precondition problem. After demonstrating the correctness of the method, we show how it can be dynamically iterated in a process that leads to the construction of non-analytic first-order proofs and, in some terminating cases, to refutations as well.
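The precondition in question can be made concrete in the decidable propositional case: classical abduction asks for a hypothesis H such that background ∧ H entails the goal, given that the background alone does not. The brute-force sketch below illustrates exactly that setting; it is ordinary truth-table abduction, not the paper's KE-tableaux cut-based method, and the example formulas are invented.

```python
from itertools import product

# Brute-force propositional entailment: premises |= goal iff no assignment
# makes all premises true and the goal false.

def entails(premises, goal, atoms):
    for vals in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(p(env) for p in premises) and not goal(env):
            return False
    return True

atoms = ["rain", "wet"]
background = lambda env: (not env["rain"]) or env["wet"]   # rain -> wet
goal = lambda env: env["wet"]

# The background alone does not derive the goal (the precondition holds)...
assert not entails([background], goal, atoms)
# ...but abducing the hypothesis "rain" closes the gap:
assert entails([background, lambda env: env["rain"]], goal, atoms)
```

In first-order logic this entailment check is undecidable, which is precisely why the paper replaces the precondition with a cut-based construction inside the tableau calculus.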

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: To establish a model for quantitative histological analysis of the mandibular branch of the facial nerve in rats. METHODS: Eleven Wistar rats had their right and left mandibular branches of the facial nerve surgically removed and were sacrificed afterwards. Quantitative histological analysis was performed with: a) partial number of axons; b) partial area of the transverse section of the nerve (9000 µm²); c) partial density. The averages of partial density were obtained. Statistical analysis was performed with the Wilcoxon test (p=0.05). RESULTS: Regarding the density of axons, comparison between sides showed no statistically significant difference (p=0.248; p=0.533). Mean partial density of distal and proximal samples was, respectively, 0.18 ± 0.02 and 0.19 ± 0.02 axons/µm². Comparison between proximal and distal samples showed no statistically significant difference (p=0.859; p=0.182). CONCLUSION: This study successfully established a model for quantitative histological analysis of the mandibular branch of the facial nerve in rats.
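The "partial density" figure is simply the axon count in a fixed 9000 µm² field divided by that area. The sketch below redoes that arithmetic with invented counts chosen to reproduce the reported densities of 0.18 and 0.19 axons/µm²; it is an illustration of the unit conversion, not the study's data.

```python
# Partial density = axons counted in a fixed field / field area (µm²).
AREA_UM2 = 9000
counts = [1620, 1710]                       # hypothetical axon counts
densities = [c / AREA_UM2 for c in counts]  # -> 0.18 and 0.19 axons/µm²
assert round(sum(densities) / len(densities), 3) == 0.185
```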

Relevance: 30.00%

Publisher:

Abstract:

In this paper, a new algebraic-graph method for identifying islanding in power system grids is proposed. The proposed method identifies all possible cases of islanding due to the loss of a piece of equipment by means of a factorization of the bus-branch incidence matrix. The main features of this new method include: (i) simple implementation, (ii) high speed, (iii) real-time adaptability, (iv) identification of all islanding cases and (v) identification of the buses that compose each island in case of island formation. The method was successfully tested on large-scale systems such as the reduced south Brazilian system (45 buses/72 branches) and the south-southeast Brazilian system (810 buses/1340 branches).
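The output the method is after, the set of buses forming each island once a branch is lost, can be illustrated on a toy network. The sketch below recovers that island information with a plain union-find over the surviving branches; this is not the paper's incidence-matrix factorization, only a way to show the expected result.

```python
# Islanding on a toy bus-branch network: remove a branch, then group buses
# into connected components (islands) with union-find.

def islands(n_buses, branches, removed):
    parent = list(range(n_buses))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i, (a, b) in enumerate(branches):
        if i not in removed:
            parent[find(a)] = find(b)
    groups = {}
    for bus in range(n_buses):
        groups.setdefault(find(bus), []).append(bus)
    return sorted(groups.values())

# 5-bus example: branch 2 (between buses 2 and 3) is a bridge, so losing it
# splits the grid into two islands; losing branch 0 isolates bus 0.
branches = [(0, 1), (1, 2), (2, 3), (3, 4)]
assert islands(5, branches, removed={2}) == [[0, 1, 2], [3, 4]]
assert islands(5, branches, removed={0}) == [[0], [1, 2, 3, 4]]
```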

Relevance: 30.00%

Publisher:

Abstract:

Objectives: Our objective was to develop an experimental model for the noninvasive and objective evaluation of facial nerve regeneration in rats using a motor nerve conduction test (electromyography). Methods: Twenty-two rats were submitted to neurophysiological evaluation using motor nerve conduction of the mandibular branch of the facial nerve to obtain the compound muscle action potentials (CMAPs). To record the CMAPs, we used two needle electrodes that were inserted into the lower lip muscle of the rat. A supramaximal electrical stimulus was applied, and the values of CMAP latency, amplitude, duration, area, and stimulus intensity obtained from each side were compared by use of the Wilcoxon test. Results: There was no significant difference (all p > 0.05) in latency, amplitude, duration, area, or intensity of stimuli between the two sides. The amplitudes ranged between 1.61 and 8.30 mV, the latencies between 1.03 and 1.97 ms, and the stimulus intensities between 1.50 and 2.90 mA. Conclusions: This is a noninvasive, easy, and highly reproducible method that improves on the techniques previously described and may contribute to future studies of the degeneration and regeneration of the facial nerve.

Relevance: 30.00%

Publisher:

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, 50 and more years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second one occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above mentioned situations. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers.
Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method that can strengthen those cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, along with some preliminary computational results. The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs).
The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general purpose cutting planes) can improve on branch-and-cut methods proposed in the literature.
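A small concrete example of the general purpose cutting planes discussed in the first part is the classical Gomory fractional cut. From a simplex tableau row x_i + Σ_j a_j x_j = b with b fractional and all variables integer and non-negative, the cut Σ_j frac(a_j) x_j ≥ frac(b) is valid for every integer feasible point yet violated by the current fractional vertex, where the non-basic x_j are zero. The sketch below just performs this textbook derivation; the row data are invented.

```python
import math

# Derive a Gomory fractional cut from one tableau row: keep only the
# fractional parts of the non-basic coefficients and of the right-hand side.

def gomory_cut(row_coeffs, rhs):
    frac = lambda v: v - math.floor(v)
    return {j: frac(a) for j, a in row_coeffs.items()}, frac(rhs)

# Tableau row: x1 + 0.5*x3 + 1.25*x4 = 3.75  (x3, x4 non-basic).
coeffs, cut_rhs = gomory_cut({"x3": 0.5, "x4": 1.25}, 3.75)
assert coeffs == {"x3": 0.5, "x4": 0.25} and cut_rhs == 0.75
# The cut 0.5*x3 + 0.25*x4 >= 0.75 cuts off the vertex with x3 = x4 = 0.
```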

Relevance: 30.00%

Publisher:

Abstract:

Quantitative branch determination in polyolefins by solid- and melt-state 13C NMR has been investigated. Both methods were optimised toward sensitivity per unit time. While solid-state NMR was shown to give quick albeit only qualitative results, melt-state NMR allowed highly time-efficient, accurate branch quantification. Comparison of spectra obtained using spectrometers operating at 300, 500 and 700 MHz 1H Larmor frequency, with 4 and 7 mm MAS probeheads, showed that the best sensitivity was achieved at 500 MHz using a 7 mm 13C-1H optimised high temperature probehead. For materials available in large quantities, static melt-state NMR, using large diameter detection coils and high coil filling at 300 MHz, was shown to produce comparable results to melt-state MAS measurements in less time. While the use of J-coupling mediated polarisation transfer techniques was shown to be possible, direct polarisation via single-pulse excitation proved more suitable for branch quantification in the melt state. Artificial line broadening, introduced by FID truncation, could be reduced by the use of π pulse-train heteronuclear dipolar decoupling. This decoupling method, when combined with an extended duty cycle, allowed a significant improvement in resolution. Standard setup, processing and analysis techniques were developed to minimise systematic errors contributing to the measured branch contents. The final optimised melt-state MAS NMR method was shown to allow time-efficient quantification of comonomer content and distribution in both polyethylene- and polypropylene-co-α-olefins. The sensitivity of the technique was demonstrated by quantifying branch concentrations of 8 branches per 100,000 CH2 for an industrial 'linear' polyethylene in only 13 hours. Even lower degrees of 3–8 long-chain branches per 100,000 carbons could be estimated in just 24 hours for a series of γ-irradiated polypropylene homopolymers.
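The quoted figure of "branches per 100,000 CH2" comes down to a ratio of integrated 13C signal areas scaled to that reference. The sketch below shows only this final arithmetic with invented integrals; real spectra require the corrections for truncation, decoupling and systematic errors that the abstract describes.

```python
# Branch content from integrated 13C signal areas (invented values).
def branches_per_100k(branch_integral, backbone_integral):
    return 100_000 * branch_integral / backbone_integral

# An integral ratio of 1 : 12500 corresponds to 8 branches per 100,000 CH2,
# the level reported for the industrial 'linear' polyethylene.
assert branches_per_100k(1.0, 12500.0) == 8.0
```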

Relevance: 30.00%

Publisher:

Abstract:

In the last decade the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. This technique consists of inserting carbon fibre reinforced polymer laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically epoxy adhesive or cement grout. Up to now, research efforts have mainly focused on several structural aspects, such as bond behaviour, flexural and/or shear strengthening effectiveness, and energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the most widespread adhesives used to bond reinforcements to concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resins, for which proper curing conditions must be assured. Therefore, non-destructive methods that allow monitoring the curing process of epoxy resins in the NSM CFRP system are desirable, in view of obtaining continuous information that can indicate the effectiveness of curing and the expectable bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems.
This objective is pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method), which was originally developed for monitoring the early stiffness evolution of cement-based materials. The experimental program was composed of two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips were conducted to assess the evolution of bond behaviour between CFRP and concrete from early ages; and (ii) EMM-ARM tests were carried out to monitor the progressive stiffness development of the structural adhesive used in CFRP applications. In order to verify the capability of the proposed method for evaluating the elastic modulus of the epoxy, the static E-modulus was determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the feasibility of a new method for the continuous monitoring and quality control of NSM CFRP applications.
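The principle behind EMM-ARM is that the resonance frequency of a beam rises as the curing material inside it stiffens, so the elastic modulus can be back-computed from the measured frequency. The round-trip sketch below uses the textbook clamped-free (cantilever) first-mode formula f1 = (1.875²/2π)·√(EI/(m_l·L⁴)) with invented section properties; the actual EMM-ARM setup and its composite-section mechanics are more involved.

```python
import math

# Invert the cantilever first-mode frequency formula for the modulus E (Pa).
def modulus_from_frequency(f1, I, m_l, L):
    lam = 1.875 ** 2                      # first clamped-free mode constant
    return (2 * math.pi * f1 / lam) ** 2 * m_l * L ** 4 / I

E_true = 3.0e9                 # ~3 GPa, a plausible cured-epoxy modulus
I, m_l, L = 1e-10, 0.1, 0.5    # hypothetical inertia (m^4), mass/length, span
f1 = (1.875 ** 2 / (2 * math.pi)) * math.sqrt(E_true * I / (m_l * L ** 4))
assert abs(modulus_from_frequency(f1, I, m_l, L) - E_true) / E_true < 1e-6
```

Tracking f1 continuously during curing therefore yields a continuous E(t) curve, which is the quantity compared against the static tension tests.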

Relevance: 30.00%

Publisher:

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns "unknown". The advantage of our method is that it is reasonably fast and rarely answers "unknown". It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in running time, especially on the large instances.
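The core certification idea can be illustrated in miniature: a floating-point solver proposes a point, and exact rational arithmetic then verifies feasibility, so rounding errors cannot produce a wrong "feasible" answer. The toy check below covers only this verification step with invented constraints; the thesis additionally constructs safe points in the relative interior and derives certified objective bounds.

```python
from fractions import Fraction

# Exact feasibility check: does A x <= b hold for a rational candidate x?
def certify_feasible(A, b, x):
    for row, bi in zip(A, b):
        lhs = sum(Fraction(a) * xj for a, xj in zip(row, x))
        if lhs > Fraction(bi):
            return False
    return True

# Constraints: x + y <= 1,  -x <= 0,  -y <= 0.
A = [[1, 1], [-1, 0], [0, -1]]
b = [1, 0, 0]
assert certify_feasible(A, b, [Fraction(3, 10), Fraction(7, 10)])
assert not certify_feasible(A, b, [Fraction(6, 10), Fraction(7, 10)])
```

Keeping the exact arithmetic confined to this check, rather than solving the whole LP in rationals, is what keeps the approach computationally efficient.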

Relevance: 30.00%

Publisher:

Abstract:

The base problem of arc-routing problems with multiple vehicles is the Capacitated Arc-Routing Problem (CARP). Practical applications of the CARP can be found, for example, in waste collection and mail delivery. The goal is to compute a cost-minimal set of routes that serves all required edges while respecting the vehicle capacity. In this thesis, a Cut-First Branch-and-Price-Second method is developed. In the first phase, cutting planes are generated and added to the master problem of the second phase. The subproblem is a shortest-path problem with resources and is solved to supply new columns for the master problem. Integer CARP solutions are guaranteed by a new hierarchical branching scheme. Extensive computational studies show the effectiveness of this algorithm. Combined location and arc-routing problems allow a more realistic modeling of delivery variants in mail delivery. In this thesis, two mathematical models each are presented for Park and Loop and Park and Loop with Curbline. The models for each problem differ in how feasible transfer routes are modeled. While the first model type uses subtour elimination constraints, the second model type employs flow variables and flow conservation constraints. The computational study shows that a MIP solver can often solve the second model type in less computing time, or delivers better objective values when the time limit is reached.
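The pricing subproblem named above, a shortest-path problem with resources, can be sketched in its simplest form: label setting over (cost, consumed capacity) pairs, discarding labels that exceed the capacity. The graph, demands and capacities below are invented for illustration; the thesis' subproblem is richer, since it prices out entire CARP routes.

```python
import heapq

# Label-setting shortest path with one capacity resource: each label is
# (cost, load, node); labels whose load would exceed the capacity are pruned.

def sp_with_capacity(adj, demand, src, dst, capacity):
    """adj[u] = [(v, cost)]; demand[v] is consumed on arrival at v."""
    heap = [(0, demand.get(src, 0), src)]
    best = {}
    while heap:
        cost, load, u = heapq.heappop(heap)
        if u == dst:
            return cost
        if best.get((u, load), float("inf")) <= cost:
            continue
        best[(u, load)] = cost
        for v, c in adj.get(u, []):
            nl = load + demand.get(v, 0)
            if nl <= capacity:
                heapq.heappush(heap, (cost + c, nl, v))
    return None   # no resource-feasible path

adj = {"s": [("a", 1), ("b", 4)], "a": [("t", 1)], "b": [("t", 1)]}
demand = {"a": 5, "b": 1}
assert sp_with_capacity(adj, demand, "s", "t", 10) == 2   # cheap path via a
assert sp_with_capacity(adj, demand, "s", "t", 3) == 5    # a too heavy: via b
```

In column generation, negative reduced-cost paths found this way become the new columns added to the master problem.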

Relevance: 30.00%

Publisher:

Abstract:

By measuring the total crack lengths (TCL) along a gunshot wound channel simulated in ordnance gelatine, one can calculate the energy transferred by a projectile to the surrounding tissue along its course. Visual quantitative TCL analysis of cut slices in ordnance gelatine blocks is unreliable due to the poor visibility of cracks and the likely introduction of secondary cracks resulting from slicing. Furthermore, gelatine TCL patterns are difficult to preserve because the internal structures of gelatine deteriorate with age and the gelatine tends to decompose. By contrast, using computed tomography (CT) software for TCL analysis in gelatine, cracks on 1-cm-thick slices can be easily detected, measured and preserved. In this experiment, CT TCL analyses were applied to gunshots fired into gelatine blocks by three different ammunition types (9-mm Luger full metal jacket, .44 Remington Magnum semi-jacketed hollow point and 7.62 × 51 RWS Cone-Point). The resulting TCL curves reflected the three projectiles' capacity to transfer energy to the surrounding tissue very accurately and clearly showed the typical energy transfer differences. We believe that CT is a useful tool for evaluating gunshot wound profiles with the TCL method and is indeed superior to conventional methods that rely on physical slicing of the gelatine.

Relevance: 30.00%

Publisher:

Abstract:

The authors describe a modification of the medial branch kryorhizotomy technique for the treatment of lumbar facet joint syndrome using a fluoroscopy-based, laser-guided method. A total of 32 patients suffering from lumbar facet joint syndrome confirmed by positive medial nerve block underwent conventional or laser-guided kryorhizotomy. The procedural time (20.6 ± 1.0 vs 16.3 ± 0.9 minutes, p < 0.01), fluoroscopy time (54.1 ± 3.5 vs 28.2 ± 2.4 seconds, p < 0.01), radiation dose (407.5 ± 32.0 vs 224.1 ± 20.3 cGy/cm², p < 0.01), and patient discomfort during the procedure (7.1 ± 0.4 vs 5.2 ± 0.4 on the visual analog scale, p < 0.01) were significantly reduced in the laser-guided group. There was a tendency toward better positioning accuracy when the laser guidance method was used (3.0 ± 0.3 vs 2.2 ± 0.3 mm of deviation from the target points, p > 0.05). No difference in outcome was observed between the two groups of patients (visual analog scale score 3.5 ± 0.2 vs 3.3 ± 0.3, p > 0.05). This improved minimally invasive surgical technique offers advantages over conventional fluoroscopy-based kryorhizotomy.