915 results for Simulated annealing
Abstract:
In this paper we focus on the selection of safeguards in a fuzzy risk analysis and management methodology for information systems (IS). Assets are connected by dependency relationships, and a failure of one asset may affect other assets. After computing impact and risk indicators associated with previously identified threats, we identify and apply safeguards to reduce risks in the IS by minimizing the transmission probabilities of failures throughout the asset network. However, as safeguards have associated costs, the aim is to select the safeguards that minimize costs while keeping the risk within acceptable levels. To do this, we propose a dynamic programming-based method that incorporates simulated annealing to tackle the resulting optimization problems.
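To picture the annealing component only, here is a minimal, illustrative sketch of a simulated-annealing search over subsets of safeguards; it is not the paper's dynamic-programming method, and the names (candidates, cost, risk, max_risk) are assumptions standing in for the paper's indicators.

```python
import math
import random

def select_safeguards(candidates, cost, risk, max_risk,
                      t0=1.0, cooling=0.995, steps=5000):
    """Simulated-annealing search over subsets of safeguards.

    cost(s) and risk(s) are assumed callables over a frozenset of safeguard
    identifiers; max_risk is the acceptable residual risk level."""
    def objective(s):
        # Heavy penalty when the residual risk exceeds the acceptable level.
        return cost(s) + 1e6 * max(0.0, risk(s) - max_risk)

    pool = list(candidates)
    current = frozenset()
    best = current
    t = t0
    for _ in range(steps):
        # Neighbour: toggle one randomly chosen safeguard in or out.
        neighbour = current ^ {random.choice(pool)}
        delta = objective(neighbour) - objective(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = neighbour
            if objective(current) < objective(best):
                best = current
        t *= cooling
    return best
```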
Abstract:
Ultrasound imaging systems are today an indispensable tool in medical diagnostics and are increasingly used in industrial non-destructive testing. The array is the primary element of these systems, and its design determines the characteristics of the beams that can be formed (shape and size of the main lobe, of the side lobes and grating lobes, etc.), conditioning the quality of the images that can be obtained. In regular arrays the maximum distance between elements is set at half a wavelength to avoid the formation of artefacts. At the same time, the resolution with which objects in the scene are imaged increases with the total aperture size, so even a small improvement in image quality translates into a significant increase in the number of transducer elements. This has, among others, the following consequences: manufacturing problems due to the high connection density (note that in typical medical imaging applications the wavelength is a few tenths of a millimetre); a low signal-to-noise ratio, and consequently a low dynamic range, because of the small element size; and complex equipment that must handle a large number of independent channels. For example, about 10,000 elements spaced λ/2 apart would be needed for a square aperture of 50λ. A simple way to mitigate these problems is offered by alternatives that reduce the number of active elements of a full array, sacrificing to some extent image quality, emitted energy, dynamic range, contrast, etc. We propose a different strategy: to develop an optimization methodology capable of systematically finding ultrasound array configurations adapted to specific applications. To do so, we propose using evolutionary algorithms to search the space of array configurations and select those that best meet the requirements set by each application. The thesis addresses the encoding of array configurations so that they can be used as individuals of the population on which the evolutionary algorithms act, and the definition of fitness functions that allow such configurations to be compared according to the requirements and constraints of each design problem. Finally, we propose using the multi-objective algorithm NSGA-II as the primary optimization tool and then applying single-objective algorithms of the Simulated Annealing type to select and refine the solutions provided by NSGA-II. Many of the fitness functions that define the desired characteristics of the array are computed from one or more radiation patterns generated by each candidate solution. Obtaining these patterns with the usual wideband acoustic-field simulation methods requires very long computation times that can make optimization with evolutionary algorithms impractical. As a solution, a narrow-band calculation method is proposed that reduces the required computation time by at least an order of magnitude. Finally, a series of examples with linear and two-dimensional arrays is presented to validate the proposed design methodology, experimentally comparing the actual characteristics of the manufactured designs with the predictions of the optimization method.
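To make the narrow-band evaluation and the annealing refinement concrete, the following is a minimal sketch under simplifying assumptions: a linear array on a fixed grid of candidate positions, a single-frequency array factor as the radiation pattern, and peak sidelobe level as the only fitness. The function names and the 3-degree main-lobe exclusion are illustrative and do not reproduce the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def array_factor(positions_wl, angles_rad):
    """Narrow-band (single-frequency) array factor of a linear array.
    positions_wl: element positions expressed in wavelengths."""
    phase = 2 * np.pi * np.outer(np.sin(angles_rad), positions_wl)
    return np.abs(np.exp(1j * phase).sum(axis=1))

def peak_sidelobe_db(mask, grid_wl, angles):
    """Fitness: peak sidelobe level (dB) outside an assumed 3-degree main lobe."""
    af = array_factor(grid_wl[mask], angles)
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    return af_db[np.abs(angles) > np.deg2rad(3)].max()

def refine_by_annealing(mask, grid_wl, steps=3000, t0=2.0, cooling=0.999):
    """Refine a candidate (boolean activation mask over grid_wl, assumed to have
    both active and inactive elements) with simulated annealing, keeping the
    number of active elements fixed."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, 721)
    cur = mask.copy()
    best = mask.copy()
    best_f = cur_f = peak_sidelobe_db(cur, grid_wl, angles)
    t = t0
    for _ in range(steps):
        # Neighbour: swap one active element with one inactive one.
        on = rng.choice(np.flatnonzero(cur))
        off = rng.choice(np.flatnonzero(~cur))
        cand = cur.copy()
        cand[on], cand[off] = False, True
        f = peak_sidelobe_db(cand, grid_wl, angles)
        if f < cur_f or rng.random() < np.exp((cur_f - f) / t):
            cur, cur_f = cand, f
            if f < best_f:
                best, best_f = cand.copy(), f
        t *= cooling
    return best, best_f
```

For instance, grid_wl = np.arange(128) * 0.5 defines a 128-position half-wavelength grid, and mask could be a solution returned by the multi-objective stage with the desired number of active elements.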
Abstract:
A 12 bp long GCN4-binding, self-complementary duplex DNA d(CATGACGTCATG)2 has been investigated by NMR spectroscopy to study the structure and dynamics of the molecule in aqueous solution. The NMR structure of the DNA obtained using simulated annealing and iterative relaxation matrix calculations compares quite closely with the X-ray structure of ATF/CREB DNA in complex with GCN4 protein (DNA-binding domain). The DNA is also seen to be curved in the free state and this has a significant bearing on recognition by the protein. The dynamic characteristics of the molecule have been studied by 13C relaxation measurements at natural abundance. A correlation has been observed between sequence-dependent dynamics and recognition by GCN4 protein.
Abstract:
A dumbbell double-stranded DNA decamer tethered with a hexaethylene glycol linker moiety (DDSDPEG), with a nick in the centre of one strand, has been synthesised. The standard NMR methods, E.COSY, TOCSY, NOESY and HMQC, were used to measure 1H, 31P and T1 spectral parameters. Molecular modelling using rMD-simulated annealing was used to compute the structure. Scalar couplings and dipolar contacts show that the molecule adopts a right-handed B-DNA helix in 38 mM phosphate buffer at pH 7. Its high melting temperature confirms the good base stacking and stability of the duplex. This is partly attributed to the presence of the PEG6 linker at both ends of the duplex that restricts the dynamics of the stem pentamers and thus stabilises the oligonucleotide. The inspection of the global parameters shows that the linker does not distort the B-DNA geometry. The computed structure suggests that the presence of the nick is not disturbing the overall tertiary structure, base pair geometry or duplex base pairing to a substantial extent. The nick has, however, a noticeable impact on the local geometry at the nick site, indicated clearly by NMR analysis and reflected in the conformational parameters of the computed structure. The 1H spectra also show much sharper resonances in the presence of K+ indicating that conformational heterogeneity of DDSDPEG is reduced in the presence of potassium as compared to sodium or caesium ions. At the same time the 1H resonances have longer T1 times. This parameter is suggested as a sensitive gauge of stabilisation.
Abstract:
This thesis presents an approach for the rapid creation of models in different geometries (complex or highly symmetric) in order to compute the corresponding scattered intensity, which can be used to describe small-angle scattering experiments. Models can be built from more than 100 geometries catalogued in a database, and structures can also be built from random positions distributed on the surface of a sphere. In all cases the models are generated by a finite-element method composing a single geometry, or different geometries combined with one another from a small number of parameters. For this task a Fortran program called Polygen was developed, which can model convex geometries in different forms (solids, shells, or with spheres or DNA-like structures on the edges) and use these models to simulate the scattered intensity curve for oriented and randomly oriented systems. The scattering intensity curve is computed using the Debye equation, and the parameters of each model can be optimized by fitting against experimental data using minimization methods based on simulated annealing, Levenberg-Marquardt and genetic algorithms. The minimization allows the parameters of the model (or composition of models), such as size, electron density and subunit radius, to be adjusted, providing a new tool for the modelling and analysis of scattering data. Another part of this thesis presents the design of atomistic models and their simulation by molecular dynamics. The geometry of two self-assembled DNA systems in the shape of a truncated octahedron, one with linkers of 7 adenines and the other with ATATATA linkers, was chosen for atomistic modelling and molecular dynamics simulation. For this system, results are presented for root mean square deviations (RMSD), root mean square fluctuations (RMSF), radius of gyration and twist of the DNA double helices, together with an assessment of the hydrogen bonds, all obtained from the analysis of a 50 ns trajectory.
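As an illustration of the intensity calculation described above, here is a minimal sketch of the Debye equation, I(q) = Σ_i Σ_j f_i f_j sin(q·r_ij)/(q·r_ij), for a set of point scatterers; the function name and arguments are assumptions and do not reproduce Polygen's Fortran implementation.

```python
import numpy as np

def debye_intensity(q, coords, f=None):
    """Scattered intensity I(q) for point scatterers via the Debye equation.

    q: 1-D array of momentum-transfer values; coords: (N, 3) positions;
    f: (N,) form factors (defaults to 1 for every scatterer)."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    if f is None:
        f = np.ones(n)
    # Pairwise distances between scattering centres.
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    qr = np.asarray(q, dtype=float)[:, None, None] * r
    # np.sinc(x) = sin(pi x)/(pi x), so this gives sin(qr)/qr and handles r_ij = 0.
    kernel = np.sinc(qr / np.pi)
    return (f[None, :, None] * f[None, None, :] * kernel).sum(axis=(1, 2))
```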
Abstract:
Hardware/software partitioning (HSP) is a key task in embedded system co-design. Its main goal is to decide which components of an application are to be executed on a general-purpose processor (software) and which ones on specific hardware, taking into account a set of restrictions expressed as metrics. In recent years, several approaches driven by metaheuristic algorithms have been proposed for solving the HSP problem. However, due to the diversity of models and metrics used, the choice of the best-suited algorithm is still an open problem. This article presents the results of applying a fuzzy approach to the HSP problem. This approach is more flexible than many others because it can accept reasonably good solutions and reject those that do not seem good enough. In this work we compare six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Hill Climbing, Genetic Algorithm and Evolution Strategy. The presented model aims to simultaneously minimize the hardware area and the execution time. The results show that Restart Hill Climbing is the best-performing algorithm in most cases.
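As a rough illustration of the kind of search these metaheuristics perform, the sketch below anneals a binary HW/SW assignment against a plain weighted-sum objective; the per-block estimates (area_hw, time_sw, time_hw) and the objective are assumptions and do not reproduce the paper's fuzzy formulation.

```python
import math
import random

def anneal_partition(area_hw, time_sw, time_hw, w_area=0.5, w_time=0.5,
                     steps=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over a binary partition: x[i] = 1 puts block i in hardware."""
    n = len(area_hw)

    def cost(x):
        area = sum(area_hw[i] for i in range(n) if x[i])
        time = sum(time_hw[i] if x[i] else time_sw[i] for i in range(n))
        return w_area * area + w_time * time

    x = [random.randint(0, 1) for _ in range(n)]
    best, best_c = x[:], cost(x)
    cur_c, t = best_c, t0
    for _ in range(steps):
        i = random.randrange(n)
        x[i] ^= 1                      # move one block to the other implementation
        c = cost(x)
        if c < cur_c or random.random() < math.exp((cur_c - c) / t):
            cur_c = c
            if c < best_c:
                best, best_c = x[:], c
        else:
            x[i] ^= 1                  # reject: undo the move
        t *= cooling
    return best, best_c
```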
Abstract:
Hardware/software partitioning is a fundamental task in embedded system co-design. It decides, taking the design metrics into account, which components will run on a general-purpose processor (software) and which on specific hardware. In recent years, several solutions to the partitioning problem driven by metaheuristic algorithms have been proposed. However, due to the diversity of models and metrics used, choosing the most appropriate algorithm remains an open problem. This work presents a comparison of six metaheuristic algorithms: Random Search, Tabu Search, Simulated Annealing, Stochastic Hill Climbing, Genetic Algorithm and Evolution Strategy. The model used in the comparison aims to minimize the occupied area and the execution time; the model's constraints are treated as penalties so that other solutions are included in the search space. The results show that Stochastic Hill Climbing and Evolution Strategy obtain the best results overall, followed by the Genetic Algorithm.
Abstract:
Hardware/software partitioning is a key stage in the co-design process of embedded systems. In this stage it is decided which components will be implemented as hardware co-processors and which will be implemented on a general-purpose processor. The decision is made by exploring the design space, evaluating a set of candidate solutions to establish which one achieves the best balance among all the design metrics. To explore the solution space, most proposals use metaheuristic algorithms, most notably Genetic Algorithms and Simulated Annealing. In many cases this choice is not based on comparative analyses involving several algorithms on the same problem. This work presents the application of the Stochastic Hill Climbing and Restart Stochastic Hill Climbing algorithms to the hardware/software partitioning problem. To validate the use of these algorithms, they are applied to a case study, namely the hardware/software partitioning of a JPEG encoder. In all the experiments both algorithms reach solutions comparable to those obtained by the most frequently used algorithms.
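For contrast with the annealing sketch above, this is a minimal sketch of hill climbing with restarts over the same kind of binary assignment; `cost` is an assumed callable mapping an assignment to a scalar to minimize, not the paper's evaluation model.

```python
import random

def restart_hill_climbing(n_blocks, cost, restarts=20, max_no_improve=200):
    """Stochastic hill climbing with restarts over a binary HW/SW assignment."""
    best, best_c = None, float("inf")
    for _ in range(restarts):
        x = [random.randint(0, 1) for _ in range(n_blocks)]
        c, stall = cost(x), 0
        while stall < max_no_improve:
            i = random.randrange(n_blocks)
            x[i] ^= 1                      # flip one block's mapping
            c_new = cost(x)
            if c_new < c:
                c, stall = c_new, 0        # accept the improving move
            else:
                x[i] ^= 1                  # revert and count the stall
                stall += 1
        if c < best_c:
            best, best_c = x[:], c
    return best, best_c
```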
Abstract:
A method is proposed for determining the optimal placement and controller design for multiple distributed actuators to reduce the vibrations of flexible structures. In particular, the application of piezoceramic patches to a horizontally slewing single-link flexible manipulator modeled using the assumed modes method is investigated. The optimization method uses simulated annealing and allows the placement of any number of distributed actuators of unequal length, although piezoceramics of fixed equal lengths are used in the example. It also designs a linear-quadratic-regulator (LQR) controller as part of the optimization procedure. The measures of performance used to determine optimality are the total mass of the system and the time integral of the absolute value of the hub and tip position error. The study also varies the relative weightings of these performance measures to observe their effect on the controller designs and piezoceramic patch positions in the optimized solutions.
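A minimal sketch of the LQR design step that such an optimization would carry out for each candidate patch placement, assuming state-space matrices A and B from the assumed modes model and weighting matrices Q and R; the simulated-annealing placement search and the performance measures themselves are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K (u = -K x) for dx/dt = A x + B u,
    minimizing the integral of x^T Q x + u^T R u."""
    P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B^T P
```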
Abstract:
Alpha helices are key structural components of proteins and important recognition motifs in biology. New techniques for stabilizing short peptide helices could be valuable for studying protein folding, modeling proteins, and creating artificial proteins, and may aid the design of inhibitors or mimics of protein function. We previously reported that 5-15 residue peptides, corresponding to the Zn-binding domain of thermolysin, react with [Pd(en)(ONO2)2] in DMF-d7 and 90% H2O / 10% D2O to form a 22-membered [Pd(en)(H*ELTH*)]2+ macrocycle that is helical in solution and acts as a template in nucleating helicity in both C- and N-terminal directions within the longer sequences in DMF. In water, however, less α-helicity was observed, testifying to the difficulty of fixing intramolecular amide NH···OC hydrogen bonds in competition with the hydrogen-bond donor solvent water. To expand the utility of [Pd(en)(H*XXXH*)]2+ as a helix-promoting module in solution, we now report that Ac-H*ELTH*H*VTDH*-NH2 (1), Ac-H*ELTH*AVTDYH*ELTH*-NH2 (2) and Ac-H*AAAH*H*ELTH*H*VTDH*-NH2 (3) react with multiple equivalents of [Pd(en)(ONO2)2] to produce exclusively 4-6, respectively, in both DMF-d7 and water (90% H2O / 10% D2O). Mass spectrometry, 15N and 2D 1H NMR spectroscopy, and CD spectra were used to characterise structures 4-6, and their three-dimensional structures were calculated from NOE restraints using simulated annealing protocols. The results demonstrate (a) selective coordination of metal ions at (i, i+4) histidine positions in water and DMF, (b) incorporation of 2 and 3 turn-mimicking [Pd(en)(HELTH)]2+ modules in 10-15 residue peptides, and (c) facile conversion of unstructured peptides into 3- and 4-turn helical macrocycles, with well-defined α-helicity throughout and more structure in DMF than in water.
Abstract:
A recent development of the Markov chain Monte Carlo (MCMC) technique is the emergence of MCMC samplers that allow transitions between different models. Such samplers make possible a range of computational tasks involving models, including model selection, model evaluation, model averaging and hypothesis testing. An example of this type of sampler is the reversible jump MCMC sampler, which is a generalization of the Metropolis-Hastings algorithm. Here, we present a new MCMC sampler of this type. The new sampler is a generalization of the Gibbs sampler, but somewhat surprisingly, it also turns out to encompass as particular cases all of the well-known MCMC samplers, including those of Metropolis, Barker, and Hastings. Moreover, the new sampler generalizes the reversible jump MCMC. It therefore appears to be a very general framework for MCMC sampling. This paper describes the new sampler and illustrates its use in three applications in Computational Biology, specifically determination of consensus sequences, phylogenetic inference and delineation of isochores via multiple change-point analysis.
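The generalized sampler itself is not specified in the abstract, but the acceptance rule it encompasses is easy to state. Below is a minimal, generic Metropolis-Hastings sketch, one of the special cases the new sampler contains; all names are illustrative assumptions.

```python
import math
import random

def metropolis_hastings(log_target, propose, log_q, x0, n_samples):
    """Plain Metropolis-Hastings. log_target is an unnormalized log-density,
    propose(x) draws a candidate y, and log_q(y, x) is log q(y | x)."""
    x = x0
    samples = []
    for _ in range(n_samples):
        y = propose(x)
        log_alpha = (log_target(y) - log_target(x)) + (log_q(x, y) - log_q(y, x))
        # Accept with probability min(1, alpha).
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = y
        samples.append(x)
    return samples
```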
Abstract:
The structure of a two-chain peptide formed by the treatment of the potent antimicrobial peptide microcin J25 (MccJ25) with thermolysin has been characterized by NMR spectroscopy and mass spectrometry. The native peptide is 21 amino acids in size and has the remarkable structural feature of a ring, formed by linkage of the side chain of Glu8 to the N-terminus, that is threaded by the C-terminal tail of the peptide. Thermolysin cleaves the peptide at the Phe10-Val11 amide bond, but the threading of the C-terminus through the N-terminal ring is so tight that the resulting two chains remain associated in both the solution and gas phases. The three-dimensional structure of the thermolysin-cleaved peptide, derived using NMR spectroscopy and simulated annealing calculations, has a well-defined core comprising the N-terminal ring and the threading C-terminal tail. In contrast to the well-defined core, the newly formed termini at residues Phe10 and Val11 are disordered in solution. The C-terminal tail is associated with the ring both by hydrogen bonds stabilizing a short beta-sheet and by hydrophobic interactions. Moreover, unthreading of the tail through the ring is prevented by the bulky side chains of Phe19 and Tyr20, which flank the octapeptide ring. This noncovalent two-peptide complex, which has remarkable stability in solution and under highly denaturing conditions and survives in the gas phase, is the first example of such a two-chain peptide lacking disulfide or interchain covalent bonds.
Abstract:
A novel member of the human relaxin subclass of the insulin superfamily was recently discovered during a genomics database search and named relaxin-3. Like human relaxin-1 and relaxin-2, relaxin-3 is predicted to consist of a two-chain structure and three disulfide bonds in a disposition identical to that of insulin. To enable detailed biophysical and biological characterization of the peptide, its chemical synthesis was undertaken. In contrast to human relaxin-1 and relaxin-2, however, relaxin-3 could not be successfully prepared by simple combination of the individual chains, thus necessitating recourse to a regioselective disulfide bond formation strategy. Solid phase synthesis of the separate, selectively S-protected A and B chains, followed by their purification and the subsequent stepwise formation of each of the three disulfides, led to the successful acquisition of human relaxin-3. Comprehensive chemical characterization confirmed both the correct chain orientation and the integrity of the synthetic product. Relaxin-3 was found to bind to and activate native relaxin receptors in vitro and to stimulate water drinking through central relaxin receptors in vivo. Recent studies have demonstrated that relaxin-3 will bind to and activate human LGR7, but not LGR8, in vitro. Secondary structural analysis showed it to adopt a less ordered conformation than either relaxin-1 or relaxin-2, reflecting its greater percentage of non-helix-forming amino acids. NMR spectroscopy and simulated annealing calculations were used to determine the three-dimensional structure of relaxin-3 and to identify key structural differences between the human relaxins.
Abstract:
In this work a superposition technique for designing gradient coils for magnetic resonance imaging is outlined, in which an optimized weight function is superimposed upon an initial winding, similar to that obtained from the target field method, to generate the final wire winding. This work builds on the preliminary work performed in Part I on designing planar insertable gradient coils for high-resolution imaging. The proposed superposition method results in coil patterns with relatively low inductances, and the gradient coils can be used as inserts in existing magnetic resonance imaging hardware. The new scheme has the capacity to obtain images faster and with more detail because it delivers greater magnetic field gradients. The proposed method is compared with a variant of the state-of-the-art target field method for planar gradient coil designs, and it is shown that the weighted superposition approach outperforms the well-known classical method.
Abstract:
Although the aim of conservation planning is the persistence of biodiversity, current methods trade-off ecological realism at a species level in favour of including multiple species and landscape features. For conservation planning to be relevant, the impact of landscape configuration on population processes and the viability of species needs to be considered. We present a novel method for selecting reserve systems that maximize persistence across multiple species, subject to a conservation budget. We use a spatially explicit metapopulation model to estimate extinction risk, a function of the ecology of the species and the amount, quality and configuration of habitat. We compare our new method with more traditional, area-based reserve selection methods, using a ten-species case study, and find that the expected loss of species is reduced 20-fold. Unlike previous methods, we avoid designating arbitrary weightings between reserve size and configuration; rather, our method is based on population processes and is grounded in ecological theory.
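The abstract does not state which optimizer is used to search over reserve systems; purely to make the budget-constrained objective concrete, the sketch below applies a simple cost-effectiveness greedy rule, with `cost` and `expected_loss` as assumed stand-ins for the site costs and the metapopulation extinction-risk model. It is an illustration, not the paper's method.

```python
def select_reserve(sites, cost, expected_loss, budget):
    """Greedy sketch: repeatedly add the affordable site that most reduces the
    expected number of species lost per unit cost."""
    reserve = set()
    spent = 0.0
    improved = True
    while improved:
        improved = False
        current = expected_loss(reserve)
        best_site, best_gain = None, 0.0
        for s in sites:
            if s in reserve or spent + cost[s] > budget:
                continue
            # Reduction in expected species loss per unit of budget spent.
            gain = (current - expected_loss(reserve | {s})) / cost[s]
            if gain > best_gain:
                best_site, best_gain = s, gain
        if best_site is not None:
            reserve.add(best_site)
            spent += cost[best_site]
            improved = True
    return reserve
```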