998 results for minimization methods


Relevance: 100.00%

Abstract:

This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
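For reference, a minimal sketch of the energy minimized in such finite Gaussian mixture MRF segmentation schemes (the exact data term and neighborhood system used in the paper are not given in the abstract; the Potts smoothness term below is a common choice):

\[
E(x) \;=\; \sum_{i} -\log \mathcal{N}\!\left(y_i \,\middle|\, \mu_{x_i}, \sigma_{x_i}^2\right) \;+\; \beta \sum_{(i,j)\in\mathcal{N}} \mathbf{1}\,[x_i \neq x_j],
\]

where y_i is the observed intensity at voxel i, x_i its tissue label, and β the smoothing weight. Graph cuts, loopy belief propagation, tree-reweighted message passing and ICM are alternative (approximate) minimizers of this energy.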

Relevance: 60.00%

Abstract:

The classical Łojasiewicz inequality and its extensions to partial differential equation problems (Simon) and to o-minimal structures (Kurdyka) have had a considerable impact on the analysis of gradient-like methods and related problems: minimization methods, complexity theory, asymptotic analysis of dissipative partial differential equations, tame geometry. This paper provides alternative characterizations of this type of inequality for nonsmooth lower semicontinuous functions defined on a metric or a real Hilbert space. In a metric context, we show that a generalized form of the Łojasiewicz inequality (here called the Kurdyka-Łojasiewicz inequality) relates to metric regularity and to the Lipschitz continuity of the sublevel mapping, yielding applications to discrete methods (strong convergence of the proximal algorithm). In a Hilbert setting we further establish that asymptotic properties of the semiflow generated by -∂f are strongly linked to this inequality. This is done by introducing the notion of a piecewise subgradient curve: such curves have uniformly bounded lengths if and only if the Kurdyka-Łojasiewicz inequality is satisfied. Further characterizations are given in terms of talweg lines (a concept linked to the location of the least steep points on the level sets of f) and integrability conditions. In the convex case these results are significantly reinforced, allowing us in particular to establish the asymptotic equivalence of discrete gradient methods and continuous gradient curves. On the other hand, a counterexample of a convex C^2 function in R^2 is constructed to illustrate the fact that, contrary to intuition, and unless a specific growth condition is satisfied, convex functions may fail to satisfy the Kurdyka-Łojasiewicz inequality.
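For orientation, the Kurdyka-Łojasiewicz inequality discussed here can be stated as follows (standard nonsmooth form; the paper's exact normalizations may differ): f satisfies the inequality at x̄ if there exist η > 0, a neighborhood U of x̄, and a continuous concave φ : [0, η) → [0, ∞) with φ(0) = 0 and φ' > 0 on (0, η), such that

\[
\varphi'\big(f(x) - f(\bar{x})\big)\,\operatorname{dist}\big(0, \partial f(x)\big) \;\ge\; 1
\]

for all x ∈ U with f(x̄) < f(x) < f(x̄) + η. The classical Łojasiewicz inequality corresponds to φ(s) = c s^{1-θ} with an exponent θ ∈ [1/2, 1).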

Relevance: 60.00%

Abstract:

This work evaluates the environmental impacts of public use at natural tourist attractions in the city of Altinópolis (SP), using the Visitor Impact Management (VIM) method. For each natural attraction analyzed, a specific questionnaire was prepared based on appropriate indicators, allowing the environmental quality of each site to be determined. The results indicated that only two tourist areas need special attention for their preservation. Minimization methods, monitoring and educational practices are proposed so that tourist activities are carried out with environmental responsibility.

Relevance: 60.00%

Abstract:

In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is nonconvex, so solution strategies that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that solve convex problems at each iteration: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
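As a sketch of the kind of two-term energy described (the exact lower-bounded logarithmic prior of the paper is not reproduced in the abstract; the form below is one plausible instance), the problem reads

\[
\min_{u,\,k}\; \|k \ast u - f\|_2^2 \;+\; \lambda \sum_{i} \log\big(\epsilon + |\nabla u_i|\big),
\]

where f is the blurry image, u the sharp image, k the blur kernel, and ε > 0 the lower bound keeping the logarithm finite. The nonconvexity of the log term is what makes the primal-dual and majorization-minimization schemes necessary.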

Relevance: 60.00%

Abstract:

When a structural problem is posed, the intention is usually to obtain the best solution, understanding as best the solution that, fulfilling the structural, use and other requirements, has the lowest physical cost. In a first approximation, the physical cost can be represented by the self-weight of the structure, which allows the search for the best solution to be framed as the search for the lightest one. From a practical point of view, obtaining good solutions (solutions whose cost is only slightly higher than that of the optimum) is as important a task as finding absolute optima, which is, in general, hardly tractable. In order to have a measure of efficiency that makes comparison between solutions possible, the following definition of structural efficiency is proposed: the ratio between the useful load to be supported and the total load to be accounted for (the sum of the useful load and the self-weight). The structural form can be considered to be composed of four concepts which, together with the material, define a structure: size, schema, proportion (or slenderness), and thickness. Galileo (1638) proposed the existence of an insurmountable size for each structural problem: the size at which the self-weight alone exhausts a structure of given schema and proportion. That size, or structural scope, is different for each material used; the only information about the material needed to determine it is the ratio between its strength and its specific weight, a magnitude we call the scope of the material. For structures that are very small compared with their structural scope, the above definition of efficiency is useless. In this case (structures of "null size", in which the self-weight is negligible compared with the useful load), we propose as a measure of cost the dimensionless magnitude we call the Michell number, derived from the "quantity" introduced by A. G. M. Michell in his seminal 1904 article, developed out of a lemma of J. C. Maxwell from 1870. At the end of the last century, R. Aroca combined the theories of Galileo and of Maxwell and Michell, proposing an easily applied design rule (the GA rule) that allows the scope and the efficiency of a structural form to be estimated. The present work studies the efficiency of truss-like structures in bending problems, taking the influence of size into account. On the one hand, for structures of null size, near-optimal schemas are explored by means of several minimization methods, with the aim of obtaining forms whose cost (measured by their Michell number) is very close to that of the absolute optimum while greatly reducing their complexity. On the other hand, a method is presented for determining the structural scope of truss-like structures (taking into account the local bending effects in the members of such structures); its result is compared with that obtained by applying the GA rule, showing the conditions under which the rule is applicable. Finally, lines of future research are identified: the measurement of complexity, the accounting of the cost of foundations, and the extension of the minimization methods to the case in which self-weight is taken into account.
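A worked rendering of the quantities defined above (the notation is ours, not the thesis's): with useful load Q and self-weight W, the structural efficiency is

\[
\eta \;=\; \frac{Q}{Q + W},
\]

and the scope of the material is the characteristic length σ/γ, allowable stress over specific weight; for instance, a steel with σ = 250 MPa and γ = 78.5 kN/m³ has a scope of about 3.2 km. For structures of null size, a dimensionless cost in the spirit of Michell's quantity may be normalized as

\[
\mu \;=\; \frac{1}{P\,L}\sum_{i} |N_i|\,\ell_i,
\]

the sum of bar force magnitudes times bar lengths over a reference load P and span L; this normalization is one plausible choice, not necessarily the one adopted in the thesis.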

Relevance: 60.00%

Abstract:

This thesis presents an approach for the rapid creation of models in different geometries (complex or highly symmetric) in order to calculate the corresponding scattered intensity, which can then be used in the description of small-angle scattering experiments. Models can be built from more than 100 geometries catalogued in a database, and structures can also be built from random positions distributed on the surface of a sphere. In all cases the models are generated by a finite element method, composing either a single geometry or several different geometries combined with one another through a small number of parameters. To carry out this task, a Fortran program called Polygen was developed, which allows convex geometries to be modeled in different forms (solids, shells, or with spheres or DNA-like structures on the edges) and uses these models to simulate the scattered intensity curve for oriented and randomly oriented systems. The scattering intensity curve is calculated by means of the Debye equation, and the parameters composing each model can be optimized by fitting against experimental data, using minimization methods based on simulated annealing, Levenberg-Marquardt and genetic algorithms. The minimization adjusts the parameters of the model (or of a composition of models), such as size, electron density and subunit radius, among others, providing a new tool for the modeling and analysis of scattering data. In another part of this thesis, the design of atomistic models and their simulation by molecular dynamics are presented. The geometries of two self-assembled DNA systems in the form of a truncated octahedron, one with linkers of 7 adenines and the other with ATATATA linkers, were chosen for atomistic modeling and molecular dynamics simulation. For these systems, results are presented for root mean square deviations (RMSD), root mean square fluctuations (RMSF), radius of gyration and twisting of the DNA double helices, together with an evaluation of the hydrogen bonds, all obtained from the analysis of a 50 ns trajectory.
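For reference, the Debye equation used here for the orientation-averaged scattered intensity is the standard result

\[
I(q) \;=\; \sum_{i=1}^{N} \sum_{j=1}^{N} f_i(q)\, f_j(q)\, \frac{\sin(q\, r_{ij})}{q\, r_{ij}},
\]

for N subunits with form factors f_i, where r_ij is the distance between subunits i and j (the i = j terms reduce to f_i(q)²).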

Relevance: 60.00%

Abstract:

Deformable template models are first applied to track the inner wall of coronary arteries in intravascular ultrasound sequences, mainly to assist angioplasty surgery. A circular template is used to initialize an elliptical deformable model that tracks the wall deformation when a balloon placed at the tip of the catheter is inflated. We define a new energy function to drive the behavior of the template and test its robustness on both real and synthetic images. Finally, we introduce a framework for learning and recognizing spatio-temporal geometric constraints based on Principal Component Analysis (eigenconstraints).
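The paper's specific energy function is not given in the abstract; as a generic sketch, deformable template energies for this kind of tracking typically take the form

\[
E(\Theta) \;=\; E_{\mathrm{int}}(\Theta) + E_{\mathrm{ext}}(\Theta),
\]

where Θ are the parameters of the elliptical template, E_int penalizes deviation from the expected shape, and E_ext attracts the template to image evidence such as the intensity edges of the vessel wall; tracking minimizes E frame by frame, initialized from the previous frame.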

Relevance: 60.00%

Abstract:

In this paper, we propose two Bayesian methods for detecting and grouping junctions. Our junction detection method evolves from the Kona approach and is based on a competitive greedy procedure inspired by the region competition method. Junction grouping is then accomplished by finding connecting paths between pairs of junctions. Path searching is performed by applying a recently proposed Bayesian A* algorithm. Both methods are efficient and robust, and they are tested on synthetic and real images.
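As a sketch of the grouping step (the exact reward of the cited Bayesian A* is not reproduced here), the path search can be cast as maximizing a log-likelihood ratio along a path Γ between two junctions:

\[
\Gamma^{\ast} \;=\; \arg\max_{\Gamma} \; \sum_{p \in \Gamma} \log \frac{P_{\mathrm{on}}\big(I(p)\big)}{P_{\mathrm{off}}\big(I(p)\big)},
\]

where P_on and P_off are the intensity likelihoods on and off a curve, explored with an A*-type heuristic that prunes unpromising partial paths.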

Relevance: 30.00%

Abstract:

Long-term outcomes after kidney transplantation remain suboptimal, despite the great achievements observed in recent years with the use of modern immunosuppressive drugs. Currently, the calcineurin inhibitors (CNI) cyclosporine and tacrolimus remain the cornerstones of immunosuppressive regimens in many centers worldwide, despite their well-described side effects, including nephrotoxicity. In this article, we review recent CNI-minimization strategies in kidney transplantation, emphasizing the importance of long-term follow-up and patient monitoring. Finally, accumulating data indicate that low-dose CNI-based regimens can provide an attractive balance between efficacy and toxicity.

Relevance: 30.00%

Abstract:

PURPOSE: To improve the traditional Nyquist ghost correction approach in echo planar imaging (EPI) at high fields, via schemes based on the reversal of the EPI readout gradient polarity for every other volume throughout a functional magnetic resonance imaging (fMRI) acquisition train. MATERIALS AND METHODS: An EPI sequence in which the readout gradient was inverted every other volume was implemented on two ultrahigh-field systems. Phantom images and fMRI data were acquired to evaluate ghost intensities and the presence of false-positive blood oxygenation level-dependent (BOLD) signal with and without ghost correction. Three different algorithms for ghost correction of alternating readout EPI were compared. RESULTS: Irrespective of the chosen processing approach, ghosting was significantly reduced (up to 70% lower intensity) in both rat brain images acquired on a 9.4T animal scanner and human brain images acquired at 7T, resulting in a reduction of sources of false-positive activation in fMRI data. CONCLUSION: It is concluded that at high B0 fields, substantial gains in Nyquist ghost correction of echo planar time series are possible by alternating the readout gradient every other volume.
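For intuition, a simplified standard model of the ghost (not the paper's derivation): if odd and even k-space lines carry a residual constant phase difference of ±φ, the reconstructed image is approximately

\[
\hat{I}(x,y) \;\approx\; \cos(\varphi)\, I(x,y) \;+\; i\,\sin(\varphi)\, I\!\left(x,\; y - \tfrac{\mathrm{FOV}}{2}\right),
\]

so the N/2 ghost amplitude scales with sin(φ). Reversing the readout polarity flips the sign of φ, which is why volumes acquired with alternating polarity carry ghosts of opposite phase that can be separated or cancelled in processing.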

Relevance: 30.00%

Abstract:

Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined, and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, the determination of the corrosion chemistry and the lifetime estimation are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database of fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system allows the implementation of user-defined algorithms, which enables the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.

Relevance: 30.00%

Abstract:

BACKGROUND: Coronary artery disease (CAD) continues to be one of the top public health burdens. Perfusion cardiovascular magnetic resonance (CMR) is generally accepted to detect CAD, while data on its cost effectiveness are scarce. Therefore, the goal of the study was to compare the costs of a CMR-guided strategy vs two invasive strategies in a large CMR registry. METHODS: In 3'647 patients with suspected CAD from the EuroCMR registry (59 centers, 18 countries), costs were calculated for diagnostic examinations (CMR, X-ray coronary angiography (CXA) with/without FFR), revascularizations, and complications during a 1-year follow-up. Patients with ischemia-positive CMR underwent an invasive CXA and revascularization at the discretion of the treating physician (=CMR + CXA-strategy). In the hypothetical invasive arm, costs were calculated for an initial CXA and a FFR in vessels with ≥50 % stenoses (=CXA + FFR-strategy), and the same proportion of revascularizations and complications was applied as in the CMR + CXA-strategy. In the CXA-only strategy, costs included those for CXA and for revascularizations of all ≥50 % stenoses. To calculate the proportion of patients with ≥50 % stenoses, the stenosis-FFR relationship from the literature was used. Costs of the three strategies were determined based on a third-payer perspective in 4 healthcare systems. RESULTS: Revascularizations were performed in 6.2 %, 4.5 %, and 12.9 % of all patients, patients with atypical chest pain (n = 1'786), and typical angina (n = 582), respectively; whereas complications (=all-cause death and non-fatal infarction) occurred in 1.3 %, 1.1 %, and 1.5 %, respectively. The CMR + CXA-strategy reduced costs by 14 %, 34 %, 27 %, and 24 % in the German, UK, Swiss, and US context, respectively, when compared to the CXA + FFR-strategy; and by 59 %, 52 %, 61 % and 71 %, respectively, versus the CXA-only strategy. In patients with typical angina, cost savings by CMR + CXA vs CXA + FFR were minimal in the German (2.3 %), intermediate in the US and Swiss (11.6 % and 12.8 %, respectively), and substantial in the UK (18.9 %) systems. Sensitivity analyses confirmed the robustness of the results. CONCLUSIONS: A CMR + CXA-strategy for patients with suspected CAD provides substantial cost reduction compared to a hypothetical CXA + FFR-strategy in patients with low to intermediate disease prevalence. However, in the subgroup of patients with typical angina, cost savings were only minimal to moderate.
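A sketch of the cost model implied by the Methods (our notation; the per-item costs differ across the four healthcare systems):

\[
C_{\mathrm{strategy}} \;=\; C_{\mathrm{diagnostics}} \;+\; p_{\mathrm{revasc}}\, C_{\mathrm{revasc}} \;+\; p_{\mathrm{compl}}\, C_{\mathrm{compl}},
\]

where p_revasc and p_compl are the observed 1-year rates of revascularization and complications; the three strategies differ in the diagnostics term (CMR followed by CXA, CXA with FFR, or CXA alone) and in the proportion of patients sent to revascularization.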

Relevance: 30.00%

Abstract:

The present thesis is focused on minimizing the experimental effort needed for the prediction of pollutant propagation in rivers, by means of mathematical modelling and knowledge re-use. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge re-use approach employs the methods of case-based reasoning, graphical analysis and text mining. The contribution of the thesis to the pollutant transport research field consists of: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques which enable the use of parameters that vary along the river in analytical models; (3) models for the estimation of the characteristic transport parameters (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the re-use of information during pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers for both ordinary pollution and accidents. They can also be applied to other, similar rivers when modelling pollutant transport with little available experimental concentration data, because the parameter estimation models developed in this thesis allow the characteristic transport parameters to be calculated as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified by using the software for information re-use. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical operation of processes.
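For reference, the underlying one-dimensional advection-dispersion equation with first-order transformation is the standard form

\[
\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x} \;=\; D\,\frac{\partial^2 C}{\partial x^2} \;-\; kC,
\]

whose solution for an instantaneous mass M released at x = 0 into a channel of cross-section A is

\[
C(x,t) \;=\; \frac{M}{A\sqrt{4\pi D t}}\; \exp\!\left(-\frac{(x - ut)^2}{4 D t}\right) e^{-kt},
\]

with u the flow velocity, D the dispersion coefficient and k the transformation rate; the thesis's novel techniques additionally allow these parameters to vary along the river.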

Relevance: 30.00%

Abstract:

Frequency converters are widely used in industry to enable better controllability and efficiency of variable speed AC motor drives. Despite these advantages, certain challenges concerning the interfacing of the inverter and the motor have been present for decades. As insulated gate bipolar transistors entered the market, the inverter output voltage transition rate increased significantly compared with their predecessors. Inverters operate based on pulse width modulation of the output voltage, and the steep voltage edge fed by the inverter produces a motor terminal overvoltage. The overvoltage causes extra stress on the motor insulation, which may lead to premature motor failure. The overvoltage is not generated by the inverter alone, but by the combined effect of the motor cable length and the impedance mismatch between the cable and the motor. Many solutions have been shown to limit the overvoltage, and the mainstream products focus on passive filters. This doctoral thesis studies an alternative methodology for motor overvoltage reduction. The focus is on minimizing the passive filter dimensions, physical and electrical, or better yet, on operating without any filter. This is achieved by additional inverter control and modulation. The studied methods are implemented on different inverter topologies, varying in nominal voltage and current. For two-level inverters, the studied method is termed active du/dt. It consists of a small output LC filter controlled by an independent modulator; the overvoltage is limited by a reduced voltage transition rate. For multilevel inverters, an overvoltage mitigation method that operates without a passive filter, called edge modulation, is implemented. The method uses the capability of the inverter to produce two switching operations in the same direction to cancel the oscillating voltages of opposite phases. For parallel inverters, two methods are studied. Both are intended for two-level inverters, but the first uses individual motor cables from each inverter while the other topology applies output inductors. The overvoltage is reduced by interleaving the switching operations to produce an oscillation accumulation similar to that of the edge modulation. The implementation of these methods is discussed in detail, and the necessary modifications to the control system of the inverter are presented. Each method is experimentally verified by operating industrial frequency converters with the modified control. All the methods are found feasible, and they provide sufficient overvoltage protection. The limitations and challenges brought about by the methods are discussed.
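For orientation, the overvoltage mechanism is classical transmission-line reflection (standard theory, not specific to this thesis): the reflection coefficient at the motor terminals is

\[
\Gamma \;=\; \frac{Z_{\mathrm{motor}} - Z_{\mathrm{cable}}}{Z_{\mathrm{motor}} + Z_{\mathrm{cable}}},
\]

and since the motor surge impedance is much larger than the cable's, Γ approaches 1; when the voltage rise time is shorter than roughly twice the cable propagation delay, the terminal voltage peaks near (1 + Γ) times the voltage step, i.e. up to about double the fed voltage. Slowing the edge (active du/dt) or cancelling the oscillation with an opposite-phase edge (edge modulation, interleaved parallel inverters) both attack this mechanism.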

Relevance: 30.00%

Abstract:

A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
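For reference, the PHR augmented Lagrangian for minimizing f(x) subject to h(x) = 0 and g(x) ≤ 0 has the standard form

\[
L_{\rho}(x, \lambda, \mu) \;=\; f(x) \;+\; \frac{\rho}{2}\left\{\sum_{j}\left(h_j(x) + \frac{\lambda_j}{\rho}\right)^{2} \;+\; \sum_{i}\max\!\left(0,\; g_i(x) + \frac{\mu_i}{\rho}\right)^{2}\right\},
\]

minimized over the box constraints at each outer iteration, here with the second-order negative-curvature method. Note that the max(0, ·)² terms are continuously differentiable but not twice continuously differentiable, which is precisely the class of functions the box-constrained method is designed for.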