980 results for Computational techniques
Abstract:
Since the advent of the computer into the engineering field, the application of numerical methods to the solution of engineering problems has grown very rapidly. Among the different computer methods of structural analysis, the Finite Element Method (FEM) has been predominantly used. Shells and space structures are very attractive and have been constructed to solve a large variety of functional problems (roofs, industrial buildings, aqueducts, reservoirs, footings, etc.). In this type of structure, aesthetics, structural efficiency and concept play a very important role. This class of structures can be divided into three main groups, namely continuous (concrete) shells, space frames and tension (fabric, pneumatic, cable, etc.) structures. In the following, only the current applications of the FEM to the analysis of continuous shell structures will be discussed. However, some of the comments on this class of shells can also be applied to some extent to the others, although specific computational problems will be restricted to the continuous shells. Different aspects of the analysis of shells by the FEM, such as the type of elements and input-output computational techniques, will be described below. Clearly, the improvements and developments occurring in general for the FEM since its first appearance in the fifties have had a significant impact on the particular class of structures under discussion.
Abstract:
Site-directed mutagenesis and combinatorial libraries are powerful tools for providing information about the relationship between protein sequence and structure. Here we report two extensions that expand the utility of combinatorial mutagenesis for the quantitative assessment of hypotheses about the determinants of protein structure. First, we show that resin-splitting technology, which allows the construction of arbitrarily complex libraries of degenerate oligonucleotides, can be used to construct more complex protein libraries for hypothesis testing than can be constructed from oligonucleotides limited to degenerate codons. Second, using eglin c as a model protein, we show that regression analysis of activity scores from library data can be used to assess the relative contributions to the specific activity of the amino acids that were varied in the library. The regression parameters derived from the analysis of a 455-member sample from a library, wherein four solvent-exposed sites in an α-helix can contain any of nine different amino acids, are highly correlated (P < 0.0001, R² = 0.97) with the relative helix propensities for those amino acids, as estimated by a variety of biophysical and computational techniques.
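The regression step described here can be illustrated with a minimal sketch: activity scores are regressed on a one-hot encoding of residue identity at the varied positions, and the fitted coefficients estimate each amino acid's relative contribution. The residue set, variable names and data layout below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of regressing library activity scores on residue identity.
# The allowed-residue set and all names are illustrative, not from the study.
import numpy as np

AMINO_ACIDS = list("ADEKLNQRS")   # placeholder for the nine allowed residues
N_POSITIONS = 4                   # four varied solvent-exposed helix sites

def one_hot(variant):
    """Encode a 4-residue variant string as a one-hot design row."""
    row = np.zeros(N_POSITIONS * len(AMINO_ACIDS))
    for pos, aa in enumerate(variant):
        row[pos * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return row

def fit_contributions(variants, scores):
    """Least-squares fit; returns an estimated contribution per amino acid."""
    X = np.array([one_hot(v) for v in variants])
    X = np.hstack([np.ones((len(X), 1)), X])              # intercept column
    coef, *_ = np.linalg.lstsq(X, np.asarray(scores, float), rcond=None)
    per_aa = coef[1:].reshape(N_POSITIONS, len(AMINO_ACIDS)).mean(axis=0)
    return dict(zip(AMINO_ACIDS, per_aa))
```

Per-residue estimates of this kind are what the abstract compares against published helix-propensity scales.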
Abstract:
An image encompasses information that needs to be organized in order to interpret and understand its content. There are several computational techniques for extracting the main information from an image, and they can be divided into three areas: color, texture and shape analysis. One of the most important is shape analysis, since it describes object characteristics based on their boundary points. We propose an image characterization method, by means of shape analysis, based on the spectral properties of the graph Laplacian. The procedure constructs graphs G from the boundary points of the object, whose connections between vertices are determined by thresholds T_l. From the graphs we obtain the adjacency matrix A and the degree matrix D, which define the Laplacian matrix L = D - A. The spectral decomposition of the Laplacian matrix (its eigenvalues) is investigated to describe image characteristics. Two approaches are considered: (a) analysis of the feature vector based on thresholds and histograms, which considers two parameters, the class interval IC_l and the threshold T_l; (b) analysis of the feature vector based on several thresholds for fixed eigenvalues, namely the second and the last eigenvalue of the matrix L. The techniques were tested on three image collections: synthetic images (Generic), intestinal parasites (SADPI) and plant leaves (CNShape), each with its own characteristics and challenges. To evaluate the results, we employed a support vector machine (SVM) classification model, which assesses our approaches by measuring how well the categories are separated. The first approach achieved an accuracy of 90% on the Generic image collection, 88% on the SADPI collection, and 72% on the CNShape collection. The second approach achieved an accuracy of 97% on the Generic image collection, 83% on SADPI, and 86% on CNShape. The results show that image classification based on the Laplacian spectrum categorizes the images satisfactorily.
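The descriptor construction lends itself to a compact sketch: threshold the pairwise distances between boundary points to build a graph, form L = D - A, and collect selected eigenvalues across several thresholds T_l. The code below is an illustrative reading of the procedure, not the thesis implementation.

```python
# Sketch of the Laplacian-spectrum shape descriptor (illustrative only).
import numpy as np

def laplacian_spectrum(boundary_points, threshold):
    """Sorted eigenvalues of L = D - A for a graph linking nearby boundary points."""
    pts = np.asarray(boundary_points, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = (dists <= threshold).astype(float)
    np.fill_diagonal(A, 0.0)                     # no self-loops
    D = np.diag(A.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(D - A))

def feature_vector(boundary_points, thresholds):
    """Approach (b): second and last eigenvalue tracked over several thresholds T_l."""
    feats = []
    for t in thresholds:
        ev = laplacian_spectrum(boundary_points, t)
        feats.extend([ev[1], ev[-1]])
    return np.array(feats)
```

The resulting feature vectors are then fed to an SVM classifier, as in the evaluation reported above.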
Abstract:
Carbon nanotubes exhibit the structure and chemical properties that make them apt substrates for many adsorption applications. Of particular interest are carbon nanotube bundles, whose unique geometry is conducive to the formation of pseudo-one-dimensional phases of matter, and graphite, whose simple planar structure allows ordered phases to form in the absence of surface effects. Although both of these structures have been the focus of many research studies, knowledge gaps still remain. Much of the work with carbon nanotubes has used simple adsorbates [1-43], and there is little kinetic data available. On the other hand, there are many studies of complex molecules adsorbing on graphite; however, there is almost no kinetic data reported for this substrate. We seek to close these knowledge gaps by performing a kinetic study of linear molecules of increasing length adsorbing on carbon nanotube bundles and on graphite. We elucidated the process of adsorption of complex admolecules on carbon nanotube bundles, while at the same time producing some of the first equilibrium results of the films formed by large adsorbates on these structures. We also extended the current knowledge of adsorption on graphite to include the kinetics of adsorption. The kinetic data that we have produced enables a more complete understanding of the process of adsorption of large admolecules on carbon nanotube bundles and graphite. We studied the adsorption of particles on carbon nanotube bundles and graphite using analytical and computational techniques. By employing these methods separately but in parallel, we were able to constantly compare and verify our results. We calculated and simulated the behavior of a given system throughout its evolution and then analyzed our results to determine which system parameters have the greatest effect on the kinetics of adsorption. Our analytical and computational results show good agreement with each other and with the experimental isotherm data provided by our collaborators. As a result of this project, we have gained a better understanding of the kinetics of adsorption. We have learned about the equilibration process of dimers on carbon nanotube bundles, identifying the “filling effect”, which increases the rate of total uptake, and explaining the cause of the transient “overshoot” in the coverage of the surface. We also measured the kinetic effect of particle-particle interactions between neighboring adsorbates on the lattice. For our simulations of monomers adsorbing on graphite, we succeeded in developing an analytical equation to predict the characteristic time as a function of chemical potential and of the adsorption and interaction energies of the system. We were able to further explore the processes of adsorption of dimers and trimers on graphite (again observing the filling effect and the overshoot). Finally, we were able to show that the kinetic behaviors of monomers, dimers, and trimers that have been reported in experimental results also arise organically from our model and simulations.
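As a rough illustration of the kind of lattice simulation involved, the sketch below runs a generic grand-canonical Monte Carlo of monomers adsorbing on a square lattice with an adsorption energy, a nearest-neighbour interaction and a chemical potential. It is a simplified stand-in, not the authors' model, and all parameter values are placeholders (in units of kT).

```python
# Generic grand-canonical lattice-gas Monte Carlo sketch of monomer adsorption.
import numpy as np

rng = np.random.default_rng(0)

def simulate(L=64, mu=-1.0, eps_ads=-2.0, eps_nn=-0.5, steps=200_000):
    """Return the coverage history of an L x L lattice (energies in units of kT)."""
    occ = np.zeros((L, L), dtype=int)
    coverage = []
    for step in range(steps):
        i, j = rng.integers(L, size=2)
        nn = occ[(i + 1) % L, j] + occ[(i - 1) % L, j] \
           + occ[i, (j + 1) % L] + occ[i, (j - 1) % L]
        # Grand-canonical energy change for flipping site (i, j):
        # insertion if the site is empty, removal if it is occupied.
        dE = (eps_ads + eps_nn * nn - mu) * (1 - 2 * occ[i, j])
        if rng.random() < np.exp(-dE):           # Metropolis acceptance
            occ[i, j] ^= 1
        if step % 1000 == 0:
            coverage.append(occ.mean())
    return np.array(coverage)
```

Tracking coverage against Monte Carlo time in this way is one route to the kinetic quantities (uptake rates, overshoots, characteristic times) discussed above.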
Abstract:
Motivation: While processing of MHC class II antigens for presentation to helper T-cells is essential for normal immune response, it is also implicated in the pathogenesis of autoimmune disorders and hypersensitivity reactions. Sequence-based computational techniques for predicting HLA-DQ binding peptides have encountered limited success, with few prediction techniques developed using three-dimensional models. Methods: We describe a structure-based prediction model for modeling peptide-DQ3.2 beta complexes. We have developed a rapid and accurate protocol for docking candidate peptides into the DQ3.2 beta receptor and a scoring function to discriminate binders from the background. The scoring function was rigorously trained, tested and validated using experimentally verified DQ3.2 beta binding and non-binding peptides obtained from biochemical and functional studies. Results: Our model predicts DQ3.2 beta binding peptides with high accuracy [area under the receiver operating characteristic (ROC) curve A(ROC) > 0.90], compared with experimental data. We investigated the binding patterns of DQ3.2 beta peptides and illustrate that several registers exist within a candidate binding peptide. Further analysis reveals that peptides with multiple registers occur predominantly for high-affinity binders.
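A common way to report the discrimination quoted above is the area under the ROC curve, which equals the probability that a randomly chosen binder receives a better score than a randomly chosen non-binder. The snippet below is a generic illustration of that metric, not the authors' evaluation pipeline, and assumes higher scores mean stronger predicted binding.

```python
# Generic ROC-AUC computation for a binder/non-binder scoring function.
import numpy as np

def roc_auc(binder_scores, nonbinder_scores):
    """P(random binder outscores random non-binder), ties counted as 0.5."""
    b = np.asarray(binder_scores, float)[:, None]
    n = np.asarray(nonbinder_scores, float)[None, :]
    return float(np.mean(b > n) + 0.5 * np.mean(b == n))
```

A value above 0.90, as reported here, indicates that the scoring function ranks almost all experimentally verified binders ahead of the background peptides.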
Abstract:
This paper derives the performance union bound of space-time trellis codes in an orthogonal frequency division multiplexing system (STTC-OFDM) over quasi-static frequency selective fading channels based on the distance spectrum technique. The distance spectrum is the enumeration of the codeword difference measures and their multiplicities, obtained by exhaustively searching through all the possible error event paths. The exhaustive search approach can be used for low memory order STTC with small frame sizes. However, with moderate memory order STTC and moderate frame sizes, the computational cost of exhaustive search increases exponentially and may become impractical for high memory order STTCs. This requires advanced computational techniques such as Genetic Algorithms (GAs). In this paper, a GA with a sharing function method is used to locate the multiple solutions of the distance spectrum for high memory order STTCs. Simulation evaluates the performance union bound and compares the complexity of non-GA aided and GA aided distance spectrum techniques. It shows that the union bound gives a close performance measure at high signal-to-noise ratio (SNR). It also shows that the GA sharing function method based distance spectrum technique requires much less computational time than the exhaustive search approach while retaining satisfactory accuracy.
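The sharing-function idea referred to here de-rates the fitness of individuals that crowd the same region of the search space, so the population can maintain several niches (here, several distinct error-event solutions of the distance spectrum) at once. The sketch below shows a standard triangular sharing function; parameter names and values are illustrative, and it is not the paper's implementation.

```python
# Standard GA fitness-sharing step (illustrative sketch).
import numpy as np

def shared_fitness(population, raw_fitness, sigma_share=1.0, alpha=1.0):
    """Divide each individual's raw fitness by its niche count."""
    pop = np.asarray(population, dtype=float)        # one row per individual
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)                     # includes the self term (d = 0)
    return np.asarray(raw_fitness, float) / niche_count
```

Selection then acts on the shared fitness, discouraging convergence to a single error event and letting the GA enumerate multiple terms of the distance spectrum.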
Abstract:
Antigenic peptide is presented to a T-cell receptor (TCR) through the formation of a stable complex with a major histocompatibility complex (MHC) molecule. Various predictive algorithms have been developed to estimate a peptide's capacity to form a stable complex with a given MHC class II allele, a technique integral to the strategy of vaccine design. These have previously incorporated such computational techniques as quantitative matrices and neural networks. A novel predictive technique is described, which uses molecular modeling of predetermined crystal structures to estimate the stability of an MHC class II-peptide complex. The structures are remodeled, energy minimized, and annealed before the energetic interaction is calculated.
Abstract:
G-protein coupled receptors (GPCRs) are a superfamily of integral membrane proteins responsible for a large number of physiological functions. Approximately 50% of marketed drugs are targeted toward a GPCR. Despite showing a high degree of structural homology, there is a large variance in sequence within the GPCR superfamily, which has led to difficulties in identifying and classifying potential new GPCR proteins. Here the various computational techniques that can be used to characterize a novel GPCR protein are discussed, including both alignment-based and alignment-free approaches. In addition, the application of homology modeling to building the three-dimensional structures of GPCRs is described.
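As one concrete example of an alignment-free representation, a sequence can be reduced to its amino-acid and dipeptide composition and handed to any standard classifier. The snippet below illustrates that generic idea; it is not a specific published GPCR tool.

```python
# Alignment-free sequence features: amino-acid and dipeptide composition.
from itertools import product
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]

def composition_features(seq):
    """Return 20 amino-acid frequencies followed by 400 dipeptide frequencies."""
    seq = seq.upper()
    aa_freq = np.array([seq.count(a) for a in AA], float) / max(len(seq), 1)
    di_counts = np.array([sum(seq[i:i + 2] == d for i in range(len(seq) - 1))
                          for d in DIPEPTIDES], float)
    di_freq = di_counts / max(len(seq) - 1, 1)
    return np.concatenate([aa_freq, di_freq])
```

Feature vectors of this kind can be used to flag candidate GPCRs even when sequence identity to known family members is too low for reliable alignment.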
Abstract:
Modern business trends such as agile manufacturing and virtual corporations require high levels of flexibility and responsiveness to consumer demand, as well as the ability to quickly and efficiently select trading partners. Automated computational techniques for supply chain formation have the potential to provide significant advantages in terms of speed and efficiency over the traditional manual approach to partner selection. Automated supply chain formation is the process of determining the participants within a supply chain and the terms of the exchanges made between these participants. In this thesis we present an automated technique for supply chain formation based upon the min-sum loopy belief propagation algorithm (LBP). LBP is a decentralised and distributed message-passing algorithm which allows participants to share their beliefs about the optimal structure of the supply chain based upon their costs, capabilities and requirements. We propose a novel framework for the application of LBP to the existing state-of-the-art case of the decentralised supply chain formation problem, and extend this framework to allow for application to further novel and established problem cases. Specifically, the contributions made by this thesis are:
• A novel framework to allow for the application of LBP to the decentralised supply chain formation scenario investigated using the current state-of-the-art approach. Our experimental analysis indicates that LBP is able to match or outperform this approach for the vast majority of problem instances tested.
• A new solution goal for supply chain formation in which economically motivated producers aim to maximise their profits by intelligently altering their profit margins. We propose a rational pricing strategy that allows producers to earn significantly greater profits than a comparable LBP-based profit-making approach.
• An LBP-based framework which allows the algorithm to be used to solve supply chain formation problems in which goods are exchanged in multiple units, a first for a fully decentralised technique. As well as multiple-unit exchanges, we also model in this scenario realistic constraints such as factory capacities and input-to-output ratios. LBP continues to match or outperform an extended version of the existing state-of-the-art approach in this scenario.
• The introduction of a dynamic supply chain formation scenario in which participants are able to alter their properties or to enter or leave the process at any time. Our results suggest that LBP is able to deal easily with individual occurrences of these alterations and that performance degrades gracefully when they occur in larger numbers.
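For readers unfamiliar with the algorithm, the sketch below shows one synchronous round of generic min-sum message passing on a pairwise graph, where each message summarises a node's view of the costs of its neighbour's choices. It is only a structural illustration under simplified assumptions (pairwise costs, dense state vectors); the thesis framework adapts the message scheme to supply chain participants.

```python
# One synchronous round of generic min-sum loopy belief propagation.
import numpy as np

def min_sum_iteration(messages, unary, pairwise, edges):
    """messages[(i, j)]: vector from i to j; unary[i]: local cost vector of i;
    pairwise[(i, j)]: cost matrix over states of i (rows) and j (cols);
    edges: list of directed edges (i, j)."""
    new = {}
    for (i, j) in edges:
        # Combine node i's local costs with messages from all neighbours except j
        incoming = sum(messages[(k, t)] for (k, t) in edges if t == i and k != j)
        cost = unary[i] + incoming
        new[(i, j)] = np.min(pairwise[(i, j)] + cost[:, None], axis=0)
        new[(i, j)] = new[(i, j)] - new[(i, j)].min()    # normalise for stability
    return new
```

Iterating such updates until the messages settle, then picking the minimum-cost state at each node, yields the decentralised allocation that the thesis evaluates against the state-of-the-art approach.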
Abstract:
Short-term changes in sea surface conditions controlling the thermohaline circulation in the northern North Atlantic are expected to be especially efficient in perturbing global climate stability. Here we assess past variability of sea surface temperature (SST) in the northeast Atlantic and Norwegian Sea during Marine Isotope Stage (MIS) 2 and, in particular, during the Last Glacial Maximum (LGM). Five high-resolution SST records were established on a meridional transect (53°N-72°N) to trace centennial-scale oscillations in SST and sea-ice cover. We used three independent computational techniques (the SIMMAX modern analogue technique, Artificial Neural Networks (ANN), and the Revised Analog Method (RAM)) to reconstruct SST from planktonic foraminifer census counts. SIMMAX and ANN reproduced short-term SST oscillations of similar magnitude and absolute levels, while RAM, owing to a restrictive analog selection, appears less suitable for reconstructing "cold end" SST. The SIMMAX and ANN SST reconstructions support the existence of a weak paleo-Norwegian Current during Dansgaard-Oeschger (DO) interstadials 4, 3, 2, and 1. During the LGM, two warm incursions of 7°C water occurred in the northern North Atlantic but ended north of the Iceland-Faroe Ridge. A rough numerical estimate shows that the near-surface poleward heat transfer from 53°N across the Iceland-Faroe Ridge up to 72°N dropped to less than 60% of the modern value during DO interstadials and to almost zero during DO stadials. Summer sea ice was generally confined to the area north of 70°N and only rarely expanded southward along the margins of continental ice sheets. Internal LGM variability of North Atlantic (>40°N) SST in the GLAMAP 2000 compilation (Sarnthein et al., 2003, doi:10.1029/2002PA000771; Pflaumann et al., 2003, doi:10.1029/2002PA000774) indicates maximum instability in the glacial subpolar gyre and at the Iberian Margin, while in the Nordic Seas, SST was continuously low.
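The modern-analogue idea behind the SIMMAX-style reconstructions can be summarised in a few lines: each fossil assemblage is matched against modern core-top assemblages, and the SST estimate is a similarity-weighted average over the best analogues. The sketch below is a generic illustration of that principle; the published SIMMAX, ANN and RAM implementations differ in their similarity measures, weighting and analogue screening.

```python
# Generic modern-analogue SST estimate (illustrative sketch).
import numpy as np

def analogue_sst(fossil_counts, modern_counts, modern_sst, k=10):
    """fossil_counts: relative abundances of taxa in one fossil sample;
    modern_counts: (n_sites, n_taxa) modern assemblages; modern_sst: (n_sites,)."""
    f = fossil_counts / fossil_counts.sum()
    m = modern_counts / modern_counts.sum(axis=1, keepdims=True)
    # Cosine (scalar-product) similarity between fossil and modern assemblages
    sim = (m @ f) / (np.linalg.norm(m, axis=1) * np.linalg.norm(f) + 1e-12)
    best = np.argsort(sim)[-k:]                  # k most similar modern analogues
    w = sim[best]
    return float(np.sum(w * modern_sst[best]) / w.sum())
```

Applying such an estimator down-core at each site is what turns the foraminifer census counts into the centennial-scale SST records discussed above.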
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail with experimental techniques alone. The efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has been motivated generally by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties, and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and the temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining a proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh. That study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed \cite{Juric1998} for the computation of a range of phase change problems including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge with methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set and volume of fluid method together with a diffused interface approach was used for film boiling with water and R134a at the near-critical pressure condition \cite{Tomar2005}. The effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations.
A similar approach was adopted in \cite{Son2008} to study various boiling problems, including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on the local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although their method is based on the VOF, the large pressure peaks associated with the sharp mass source were observed to be similar to those for the interface tracking method. Such spurious fluctuations in pressure are particularly undesirable because their effect is transmitted globally in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain, suggested by Nguyen et al. in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition for the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet but is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose (iii) a general mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations.
Finally, a study on Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase, phase change method.
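A standard verification case for solvers of this kind, referred to above via \cite{Welch2000}, is the one-dimensional Stefan problem, whose analytical solution places the interface at s(t) = 2λ√(αt) with λ fixed by a transcendental energy balance. The snippet below evaluates that reference solution for a one-phase (superheated-liquid) configuration; property values are placeholders, and the exact form of the balance depends on the chosen configuration.

```python
# Analytical interface position for the classical one-phase Stefan problem,
# often used to verify phase-change solvers.  Property values are placeholders.
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

def stefan_interface(t, alpha, cp, dT, h_lv):
    """s(t) = 2*lambda*sqrt(alpha*t); lambda from lam*exp(lam^2)*erf(lam) = St/sqrt(pi)."""
    St = cp * dT / h_lv                                        # Stefan number
    balance = lambda lam: lam * np.exp(lam**2) * erf(lam) - St / np.sqrt(np.pi)
    lam = brentq(balance, 1e-9, 10.0)                          # root of the energy balance
    return 2.0 * lam * np.sqrt(alpha * np.asarray(t, dtype=float))
```

Comparing a new algorithm's computed interface history against this curve is a common first check before moving on to the two-dimensional film boiling benchmarks.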
Abstract:
Evolutionary robotics is a branch of artificial intelligence concerned with the automatic generation of autonomous robots. Usually the form of the robot is predefined and various computational techniques are used to control the machine's behaviour. One aspect is the spontaneous generation of walking in legged robots, and this can be used to investigate the mechanical requirements for efficient walking in bipeds. This paper demonstrates a bipedal simulator that spontaneously generates walking and running gaits. The model can be customized to represent a range of hominoid morphologies and used to predict performance parameters such as preferred speed and metabolic energy cost. Because it does not require any motion capture data it is particularly suitable for investigating locomotion in fossil animals. The predictions for modern humans are highly accurate in terms of energy cost for a given speed, and thus the values predicted for other bipeds are likely to be good estimates. To illustrate this, the cost of transport is calculated for Australopithecus afarensis. The model allows the degree of maximum extension at the knee to be varied, causing the model to adopt walking gaits varying from chimpanzee-like to human-like. The energy costs associated with these gait choices can thus be calculated, and this information used to evaluate possible locomotor strategies in early hominids.
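One common dimensionless form of the cost of transport mentioned here divides metabolic power by body weight times speed. A minimal illustration with placeholder numbers, not the model's predictions:

```python
# Dimensionless cost of transport: metabolic power / (mass * g * speed).
def cost_of_transport(metabolic_power_w, mass_kg, speed_m_s, g=9.81):
    return metabolic_power_w / (mass_kg * g * speed_m_s)

# Example with placeholder values: 300 W at 65 kg and 1.3 m/s gives ~0.36,
# roughly the right order for human walking.
print(cost_of_transport(300.0, 65.0, 1.3))
```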
Abstract:
For 40 years, at the University of Bologna, a group of researchers coordinated by professor Claudio Zannoni has been studying liquid crystals by employing computational techniques. They have developed effective models of these interesting, and still far from completely understood, systems. They were able to reproduce with simulations important features of some liquid crystal molecules, such as transition temperatures. They then focused their attention on the interactions that these molecules have with different kinds of surfaces, and on how these interactions affect the alignment of liquid crystals. The group studied the behaviour of liquid crystals in contact with different kinds of surfaces, from silica, both amorphous and crystalline, to organic self-assembled monolayers (SAMs) and even some common polymers, such as polymethylmethacrylate (PMMA) and polystyrene (PS). However, a library of typical surfaces is still far from complete, and much work remains to investigate the cases that have not been analyzed yet. One gap that must be filled is polydimethylsiloxane (PDMS), a polymer in which industrial interest has grown enormously in recent years thanks to its peculiar features, which allow it to be employed in many fields of application. It has been observed experimentally that PDMS causes 4-cyano-4’-pentylbiphenyl (well known as 5CB), one of the most common liquid crystal molecules, to align homeotropically (i.e. perpendicular) with respect to a surface made of this polymer. Even though some hypotheses have been put forward to rationalize the effect, a clear explanation of this phenomenon has not yet been given. This dissertation describes the work I did during my internship in the group of professor Zannoni. The challenge I had to tackle was to investigate, via Molecular Dynamics (MD) simulations, the reasons for the homeotropic alignment of 5CB on a PDMS surface, as the group had previously done for other surfaces.
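One standard way to quantify homeotropic alignment in such simulations is the orientational order parameter of the molecular long axes measured against the surface normal, S = ⟨(3cos²θ − 1)/2⟩, which approaches 1 for homeotropic and −0.5 for planar anchoring. The snippet below is an illustrative analysis fragment, not the group's scripts.

```python
# Order parameter of molecular long axes relative to the surface normal.
import numpy as np

def order_parameter(long_axes, normal=(0.0, 0.0, 1.0)):
    """long_axes: (N, 3) vectors along each 5CB long axis in one MD frame."""
    u = np.asarray(long_axes, dtype=float)
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    cos2 = (u @ n) ** 2
    return float(np.mean(1.5 * cos2 - 0.5))   # ~1 homeotropic, ~-0.5 planar
```

Tracking this quantity along the trajectory, and as a function of distance from the PDMS surface, is a natural way to characterise the alignment the thesis sets out to explain.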
Abstract:
Hematological cancers are a heterogeneous family of diseases that can be divided into leukemias, lymphomas, and myelomas, often called “liquid tumors”. Since they cannot be surgically removed, chemotherapy represents the mainstay of their treatment. However, it still faces several challenges, such as drug resistance and low response rates, and the need for new anticancer agents is compelling. The drug discovery process is long, costly, and prone to high failure rates. With the rapid expansion of biological and chemical "big data", computational techniques such as machine learning tools have been increasingly employed to speed up and economize the whole process. Machine learning algorithms can create complex models that aim to determine the biological activity of compounds against several targets, based on their chemical properties. These models are defined as multi-target Quantitative Structure-Activity Relationship (mt-QSAR) models and can be used to virtually screen small and large chemical libraries for the identification of new molecules with anticancer activity. The aim of my Ph.D. project was to employ machine learning techniques to build an mt-QSAR classification model for the prediction of cytotoxic drugs simultaneously active against 43 hematological cancer cell lines. For this purpose, I first constructed a large and diversified dataset of molecules extracted from the ChEMBL database. Then, I compared the performance of different ML classification algorithms, and Random Forest was identified as the one returning the best predictions. Finally, I used different approaches to maximize the performance of the model, which achieved an accuracy of 88% by correctly classifying 93% of inactive molecules and 72% of active molecules in a validation set. This model was further applied to the virtual screening of a small dataset of molecules tested in our laboratory, where it showed 100% accuracy, correctly classifying all molecules. This result is confirmed by our previous in vitro experiments.
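The classification step described here can be sketched with standard tooling: given a descriptor matrix X (one row per molecule) and binary activity labels y, a Random Forest is trained and assessed on a held-out validation split. Hyperparameters, names and the split below are illustrative assumptions, not the thesis model.

```python
# Illustrative Random Forest classifier for an mt-QSAR-style activity model.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

def train_activity_model(X, y, seed=42):
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    model = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                   random_state=seed, n_jobs=-1)
    model.fit(X_tr, y_tr)
    y_hat = model.predict(X_val)
    print("validation accuracy:", accuracy_score(y_val, y_hat))
    print("confusion matrix:\n", confusion_matrix(y_val, y_hat))
    return model
```

The trained model can then be applied to new descriptor rows (a virtual screen) exactly as described for the in-house molecule set above.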