Abstract:
Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for medical imaging of breast cancer. However, it has some well-known limitations: the false-negative rates, up to 66% in symptomatic women, and the false-positive rates, up to 60%, are a continued source of concern and debate. These drawbacks prompt the development of other imaging techniques for breast cancer detection, among them Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and the false-positive rates. The 3D images in DBT are only achieved through image reconstruction methods. These methods play an important role in a clinical setting, since there is a need for a reconstruction process that is both accurate and fast. This dissertation deals with the optimization of iterative algorithms through parallel computing, implementing them on Graphics Processing Units (GPUs) with the Compute Unified Device Architecture (CUDA) to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images, but since they are computationally intensive, their clinical use has so far been impractical. These algorithms have the potential to reduce patient dose in DBT scans. A method of integrating CUDA into Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this method has never been attempted before for DBT. In this work the system matrix calculation, the most computationally expensive part of the iterative algorithms, is accelerated. A speedup of 1.6 is achieved, showing that GPUs can accelerate the IDL implementation.
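The iterative reconstruction loop the abstract refers to can be sketched in a few lines. The SIRT/SART-style update below is a generic illustration (the toy matrix sizes and the use of NumPy instead of IDL/CUDA are assumptions, not the thesis code); the products with the system matrix A are the computationally expensive part that would benefit from GPU acceleration.

```python
import numpy as np

# Minimal SIRT/SART-style iterative reconstruction sketch (illustrative
# only): x is the volume, A the system matrix, b the measured projections.
rng = np.random.default_rng(0)
A = rng.random((60, 40))        # toy system matrix (forward projector)
x_true = rng.random(40)         # "true" volume
b = A @ x_true                  # simulated projection data

x = np.zeros(40)
row_sums = A.sum(axis=1)        # normalization over projection rays
col_sums = A.sum(axis=0)        # normalization over volume elements
for _ in range(200):
    # back-project the normalized residual; the A and A.T products
    # dominate the cost, exactly like the system matrix step in DBT
    x += (A.T @ ((b - A @ x) / row_sums)) / col_sums

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual after 200 iterations: {residual:.3f}")
```

In a real DBT setting A is far too large to store densely, which is why accelerating its computation on the GPU pays off.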
Abstract:
The goal of this thesis is the investigation and optimization of the synthesis of potential fragrances. The work was carried out as a collaboration between the University of Applied Sciences in Merseburg and the company Miltitz Aromatics GmbH in Bitterfeld-Wolfen (Germany). Flavour compounds can be synthesized in different ways and by various methods. In this work, methods such as phase-transfer catalysis and the Cope rearrangement were investigated and applied in order to obtain a high yield and quantity of the desired substances without by-products or side reactions. This involved the study of syntheses with different process parameters such as temperature, solvent, pressure and reaction time. The main focus was on the Cope rearrangement, a common method in the synthesis of new potential fragrance compounds. The substances synthesized in this work have a hepta-1,5-diene structure and can therefore easily undergo this [3,3]-sigmatropic rearrangement. The lead compound of the research was 2,5-dimethyl-2-vinyl-4-hexenenitrile (Neronil). Neronil is synthesized by alkylation of 2-methyl-3-butenenitrile with prenyl chloride under basic conditions in a phase-transfer system. In this work the yield of isolated Neronil was improved from about 35% to 46% by adjusting the reaction conditions; additionally, the amount of side product was decreased. The synthesized hexenenitrile contains not only the aforementioned 1,5-diene structure but also a cyano group, which makes it a suitable basis for the synthesis of new potential fragrance compounds. It was observed that Neronil can be converted into 2,5-dimethyl-2-vinyl-4-hexenoic acid by hydrolysis under basic conditions; after five hours the acid is obtained with a yield of 96%. The subsequent esterification with isobutanol produces 2,5-dimethyl-2-vinyl-4-hexenoic acid isobutyl ester with quantitative conversion.
It was observed that Neronil and the corresponding ester can be converted into the corresponding Cope products, with conversions of 30% and 80%, respectively. When the acid was heated to induce the Cope rearrangement, an unexpected decarboxylated product was formed. Reaction progress and product structures were carefully verified by GC-MS, 1H-NMR and 13C-NMR analyses.
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method when compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
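The attraction-repulsion mechanism can be illustrated compactly. The Python sketch below is a minimal generic EM-style iteration, not the paper's Matlab implementation: the charge formula, step rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal Electromagnetism-like (EM) iteration sketch on the sphere
# function (illustrative assumptions throughout, not the paper's code).
def em_minimize(f, lb, ub, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(pop, dim))      # sample points
    for _ in range(iters):
        fx = np.array([f(x) for x in X])
        best = int(np.argmin(fx))
        # charges: better points carry larger charge
        denom = np.sum(fx - fx[best]) + 1e-12
        q = np.exp(-dim * (fx - fx[best]) / denom)
        F = np.zeros_like(X)
        for i in range(pop):
            for j in range(pop):
                if i == j:
                    continue
                d = X[j] - X[i]
                r2 = d @ d + 1e-12
                if fx[j] < fx[i]:
                    F[i] += d * q[i] * q[j] / r2  # attraction to better
                else:
                    F[i] -= d * q[i] * q[j] / r2  # repulsion from worse
        for i in range(pop):
            if i == best:
                continue                          # keep the best point
            step = F[i] / (np.linalg.norm(F[i]) + 1e-12)
            X[i] = np.clip(X[i] + rng.uniform() * step, lb, ub)
    fx = np.array([f(x) for x in X])
    return X[np.argmin(fx)], float(fx.min())

sphere = lambda x: float(x @ x)
x_best, f_best = em_minimize(sphere, np.full(2, -5.0), np.full(2, 5.0))
```

On this smooth toy problem the population quickly clusters around the minimizer; the method's appeal, as the abstract notes, is that the same mechanism needs no gradients.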
Abstract:
Our objective was to validate a new device dedicated to measuring the light disturbances surrounding bright sources of light under different sources of potential variability. Twenty subjects were involved in the study. Light distortion was measured using an experimental prototype (light distortion analyzer, CEORLab, University of Minho, Portugal) comprising a panel of twenty-four LED arrays at 2 m. Sources of variability included intrasession and intersession repeated measures, pupil size (3 versus 6 mm), defocus (+0.50) correction for the working distance, angular resolution (15 deg versus 30 deg), and temporal stimuli presentation. Size, shape, location, and irregularity parameters were obtained. At a low speed of presentation of the stimuli, changes in angular resolution did not affect the parameters measured. Results did not change with pupil size. The intensity of the central glare source significantly influenced the outcomes. Examination time was reduced by 30% when a 30 deg angular resolution was used instead of 15 deg. Measurements were fast and repeatable under the same experimental conditions. Size and shape parameters showed the highest consistency, whereas location and irregularity parameters showed lower consistency. The system was sensitive to changes in the intensity of the central glare source but not to pupil changes in this sample of healthy subjects.
Abstract:
The main features of most components consist of simple basic functional geometries: planes, cylinders, spheres and cones. Shape and position recognition of these geometries is essential for dimensional characterization of components and represents an important contribution to the life cycle of the product, concerning in particular the manufacturing and inspection processes of the final product. This work aims to establish an algorithm that automatically recognizes such geometries, without operator intervention. Using differential geometry, large volumes of data can be processed and the basic functional geometries recognized. The original data can be obtained by rapid acquisition methods, such as 3D surveying or photography, and then converted into Cartesian coordinates. The satisfaction of intrinsic decision conditions allows the different geometries to be quickly identified, without operator intervention. Since inspection is generally a time-consuming task, this method reduces operator intervention in the process. The algorithm was first tested using geometric data generated in MATLAB and then with a set of data points acquired on real physical surfaces with a coordinate measuring machine and a 3D scanner. A comparison of the time spent in measuring is presented to show the advantage of the method. The results validated the suitability and potential of the proposed algorithm.
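One simple way to recognize such basic geometries, shown below purely as an illustration (the residual-threshold scheme is an assumption, not the paper's differential-geometry conditions), is to fit each candidate primitive by least squares and classify by the fitting residual.

```python
import numpy as np

# Illustrative sketch: classify a point cloud as a plane or a sphere by
# comparing least-squares fit residuals (not the paper's algorithm).
def fit_plane_rms(P):
    c = P.mean(axis=0)
    # smallest singular value of the centered cloud = out-of-plane spread
    s = np.linalg.svd(P - c, compute_uv=False)
    return s[-1] / np.sqrt(len(P))

def fit_sphere_rms(P):
    # linear least squares for ||p||^2 = 2 c.p + d, with d = r^2 - ||c||^2
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    r = np.sqrt(d + center @ center)
    return np.sqrt(np.mean((np.linalg.norm(P - center, axis=1) - r) ** 2))

def classify(P, tol=1e-6):
    if fit_plane_rms(P) < tol:
        return "plane"
    if fit_sphere_rms(P) < tol:
        return "sphere"
    return "other"

rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (100, 2)), np.zeros(100)]
u = rng.normal(size=(100, 3))
sphere_pts = 2.0 * u / np.linalg.norm(u, axis=1, keepdims=True)
print(classify(plane_pts), classify(sphere_pts))
```

Cylinders and cones need nonlinear fits, but the decision structure — test each primitive, accept the first whose residual is below tolerance — stays the same.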
Abstract:
Doctoral thesis in Materials Engineering.
Abstract:
Doctoral thesis in Civil Engineering.
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, the forces of 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, assumed spherical. The dynamical equations were solved with a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
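The two-step redundancy resolution can be sketched numerically. The toy problem below (random matrix sizes and the simple clip-and-project loop are illustrative assumptions, not the shoulder model) first takes the minimum-norm pseudo-inverse solution and then uses null-space moves, which leave the joint moments unchanged, to push the muscle forces toward the non-negativity bound.

```python
import numpy as np

# Toy sketch of pseudo-inverse + null-space redundancy resolution.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 6))        # moment-arm-like matrix: 3 equations, 6 muscles
f_true = rng.uniform(0.5, 2.0, 6)  # a feasible non-negative force set
b = A @ f_true                     # required joint moments

f = np.linalg.pinv(A) @ b          # step 1: minimum-norm ("min squared
                                   # stress") solution; may be negative

# basis of the null space of A: moves in span(N) leave A @ f unchanged
N = np.linalg.svd(A)[2][3:].T      # shape (6, 3)

for _ in range(2000):
    # step 2: project the clipped (non-negative) forces back onto the
    # solution set of A @ f = b, repeating until bounds are nearly met
    f = f + N @ (N.T @ (np.maximum(f, 0.0) - f))

print(np.allclose(A @ f, b), f.min())
```

This is alternating projection between the equality-constraint set and the bound set; a production model would of course use a proper constrained solver.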
Abstract:
A cryo-electron microscopy study of supercoiled DNA molecules freely suspended in cryo-vitrified buffer was combined with Monte Carlo simulations and gel electrophoretic analysis to investigate the role of intersegmental electrostatic repulsion in determining the shape of supercoiled DNA molecules. It is demonstrated here that a decrease of DNA-DNA repulsion by increasing concentrations of counterions causes a higher fraction of the linking number deficit to be partitioned into writhe. When counterions reach concentrations likely to be present under in vivo conditions, naturally supercoiled plasmids adopt a tightly interwound conformation. In these tightly supercoiled DNA molecules the opposing segments of the interwound superhelix seem to directly contact each other. This form of supercoiling, where two DNA helices interact laterally, may represent an important functional state of DNA. In the particular case of supercoiled minicircles (178 bp), the ΔLk = -2 topoisomers undergo a sharp structural transition from almost planar circles in low-salt buffers to strongly writhed "figure-eight" conformations in buffers containing neutralizing concentrations of counterions. Possible implications of this observed structural transition in DNA are discussed.
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding if the pebbling number is at most k is Π₂P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
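The halving cost of a pebbling move is easy to simulate on a path. The snippet below is a toy illustration of the basic mechanics (unrelated to the Weight Function Lemma machinery); it shows why a pile of 2^d pebbles is exactly what is needed to move one pebble across d edges.

```python
# Toy pebbling simulation on a path: each edge crossed costs a second
# pebble, so the supply is halved (rounding down) at every step.
def pebbles_reaching(t, d):
    """Pebbles deliverable to a vertex d edges away from a pile of t."""
    for _ in range(d):
        t //= 2
    return t

# 2**d pebbles suffice to deliver one pebble across d edges; one fewer fails.
print(pebbles_reaching(8, 3), pebbles_reaching(7, 3))  # → 1 0
```

This recovers the classical fact that the pebbling number of the path with d edges is 2^d; the Weight Function Lemma generalizes such edge-by-edge accounting into linear constraints.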
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which the traditional optimization methods, both of exact and approximate nature, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
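The idea of biasing a classical heuristic can be sketched on a nearest-neighbour tour builder. This is a generic illustration, not the paper's code: the geometric-like candidate selection and the beta value are assumptions. Instead of always taking the best candidate, each step samples an index from the sorted candidate list with a distribution skewed toward the greedy choice, so repeated runs yield a diverse set of good tours.

```python
import math
import random

# Biased-randomization sketch: a nearest-neighbour heuristic where the
# candidate is drawn from the distance-sorted list with a geometric-like
# distribution skewed toward index 0 (the greedy choice).
def biased_pick(n_candidates, beta, rng):
    u = 1.0 - rng.random()                       # u in (0, 1]
    idx = int(math.log(u) / math.log(1.0 - beta))
    return min(idx, n_candidates - 1)

def biased_nn_tour(points, beta=0.3, seed=0):
    rng = random.Random(seed)
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        cur = points[tour[-1]]
        unvisited.sort(key=lambda i: math.dist(cur, points[i]))
        tour.append(unvisited.pop(biased_pick(len(unvisited), beta, rng)))
    return tour

pts = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
tours = {tuple(biased_nn_tour(pts, seed=s)) for s in range(20)}
```

Running with 20 seeds produces several distinct valid tours, which is precisely the "large set of alternative good solutions" the abstract describes.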
Abstract:
Quantification is a major problem when using histology to study the influence of ecological factors on tree structure. This paper presents a method to prepare and analyse transverse sections of the cambial zone and of the conductive phloem in bark samples. The following paper (II) presents the automated measurement procedure. Part I here describes and discusses the preparation method and the influence of tree age on the observed structure. Highly contrasted images of samples extracted at breast height during dormancy were analysed with an automatic image analyser. Age-related differences between three young (38-year-old) and three old (147-year-old) trees were identified by size and shape parameters, at both cell and tissue levels. In the cambial zone, older trees had larger and more rectangular fusiform initials. In the phloem, sieve tubes were also larger, but their shape did not change and the area for sap conduction was similar in both categories. Nevertheless, the alterations were limited, and statistical analysis was required to identify and ascertain them. The physiological implications of the structural changes are discussed.
Abstract:
In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object that have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which allows the frame-wise initial solution to be refined and the non-rigid 3D model to be recovered. We show results on synthetic and real data that demonstrate the performance of the proposed method even when there is no rigid motion in the original sequence.
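The RANSAC step for selecting the points that stayed rigid can be illustrated in 2D. The sketch below is a simplified stand-in for the paper's 3D pipeline (point counts, thresholds, and the 2D Kabsch fit are illustrative assumptions): a rigid transform is repeatedly fitted to minimal samples, and the largest consistent subset is kept as the rigid points.

```python
import numpy as np

# Simplified 2D sketch of RANSAC-based rigid-point selection.
def kabsch_2d(P, Q):
    """Least-squares rotation R and translation t with Q ≈ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=100, thresh=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=2, replace=False)  # minimal sample
        R, t = kabsch_2d(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(3)
P = rng.uniform(-1, 1, (30, 2))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([0.3, -0.2])
Q[:8] += rng.uniform(0.2, 0.5, (8, 2))    # first 8 points "deform"
rigid = ransac_rigid(P, Q)
```

The returned mask flags exactly the points consistent with a single rigid motion; in the paper those points anchor the frame-wise registration before bundle adjustment.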
Abstract:
Larger and larger deformable mirrors, with ever more actuators, are currently being used in adaptive optics applications. The control of mirrors with hundreds of actuators is a topic of great interest, since classical control techniques based on the pseudo-inverse of the system control matrix become too slow when dealing with matrices of such large dimensions. This doctoral thesis proposes a method for accelerating and parallelizing the control algorithms of these mirrors, through the application of a control technique based on zeroing the smallest components of the control matrix (sparsification), followed by optimization of the ordering of the actuators according to the shape of the matrix, and finally its division into small tridiagonal blocks. These blocks are much smaller and easier to use in the computations, which allows much higher computation speeds through the elimination of the null components of the control matrix. Moreover, this approach allows the computation to be parallelized, giving the system an additional speed component. Even without parallelization, an increase of almost 40% in the convergence speed of mirrors with only 37 actuators was obtained with the proposed technique. To validate this, a complete new experimental setup was implemented, including a programmable phase modulator for generating turbulence by means of phase screens, and a complete model of the control loop was developed to investigate the performance of the proposed algorithm. The results, both in simulation and experimentally, show full equivalence in the deviation values after compensation of the different types of aberrations for the different algorithms used, although the method proposed here entails a much lower computational load.
The procedure is expected to be very successful when applied to very large mirrors.
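The sparsification step can be illustrated numerically. The exponential-coupling toy matrix below is an assumption, not the thesis's control matrix: it only mimics the typical situation where each actuator mainly couples to its neighbours, so thresholding the small entries leaves a narrow band (close to the tridiagonal blocks the thesis exploits) while barely changing the control output.

```python
import numpy as np

# Illustrative control-matrix sparsification: zero the smallest entries
# and compare the matrix-vector product against the dense original.
n = 200
i = np.arange(n)
# toy coupling matrix: influence decays with actuator distance
M = np.exp(-np.abs(i[:, None] - i[None, :]) / 2.0)

threshold = 1e-3
M_sparse = np.where(M >= threshold, M, 0.0)       # sparsification

rng = np.random.default_rng(4)
x = rng.normal(size=n)                            # toy command vector
rel_err = np.linalg.norm(M @ x - M_sparse @ x) / np.linalg.norm(M @ x)
density = np.count_nonzero(M_sparse) / M.size
print(f"relative error {rel_err:.2e}, density {density:.3f}")
```

The banded result keeps well under 20% of the entries with a negligible effect on the product, which is where the computational and parallelization gains come from.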
Abstract:
In the static field limit, the vibrational hyperpolarizability consists of two contributions due to: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
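The core of any finite field method is recovering properties as numerical field derivatives of the energy. The toy sketch below uses an assumed model energy expansion (not the paper's electronic-structure data) to show how central differences in the field strength recover the dipole, polarizability and first hyperpolarizability.

```python
# Finite-field differentiation sketch on a model energy expansion
# E(F) = -mu*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3 (assumed values).
mu, alpha, beta = 0.8, 12.0, 150.0

def energy(F):
    return -mu * F - alpha * F**2 / 2 - beta * F**3 / 6

h = 1e-3  # finite field strength
# central-difference formulas for the first three field derivatives of E
mu_ff = -(energy(h) - energy(-h)) / (2 * h)
alpha_ff = -(energy(h) - 2 * energy(0.0) + energy(-h)) / h**2
beta_ff = -(energy(2 * h) - 2 * energy(h)
            + 2 * energy(-h) - energy(-2 * h)) / (2 * h**3)

print(mu_ff, alpha_ff, beta_ff)
```

In practice the energies come from electronic-structure calculations at finite field values, and field strengths must balance truncation against numerical-precision error, but the differencing structure is the same.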