902 results for Computational time


Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)


Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger, lighter, and longer lasting, which can lead to safer protective gear, more fuel-efficient transportation, and more affordable materials, among other applications. This thesis proposes a numerical and analytical verification of an in-house multiscale model for predicting the mechanical behavior of composite materials in various configurations subjected to impact loading. The verification is done by comparing analytical and numerical solutions with the results produced by the model. The model accounts for material heterogeneity that is noticeable only at smaller length scales, and is based on the fundamental structural properties of each of the composite's constituents. Because it relies strictly on these fundamental properties, the model can potentially reduce or eliminate the need for the costly and time-consuming experiments required for material characterization. Results from simulations using the multiscale model were compared against direct simulations on highly refined (overkill) meshes that resolved all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modeling composites under impact loads. Advisor: David H. Allen


Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
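Models of this kind centre on an activation field u(x) evolving under an Amari-style field equation, tau * du/dt = -u + h + S(x) + sum over x' of w(x - x') f(u(x')). As a rough, self-contained sketch (not the authors' implementation: the field size, gains, and kernel widths below are invented for illustration), a 1-D field with local excitation and broader lateral inhibition can be Euler-integrated like this:

```python
import math

def simulate_field(steps=200, n=41, dt=0.05, tau=1.0, h=-5.0,
                   stim_site=20, stim_amp=7.0, stim_width=3.0):
    """Euler-integrate a 1-D Amari-style neural field:
    tau * du/dt = -u + h + S(x) + sum_x' w(x - x') f(u(x'))."""
    f = lambda u: 1.0 / (1.0 + math.exp(-4.0 * u))  # sigmoidal firing rate
    # Mexican-hat interaction: narrow local excitation, broader inhibition
    w = lambda d: (6.0 * math.exp(-d * d / (2 * 3.0 ** 2))
                   - 2.0 * math.exp(-d * d / (2 * 9.0 ** 2)))
    # Gaussian input centred on the stimulated site
    S = [stim_amp * math.exp(-(i - stim_site) ** 2 / (2 * stim_width ** 2))
         for i in range(n)]
    u = [h] * n                       # start at the resting level
    for _ in range(steps):
        fu = [f(ui) for ui in u]
        u = [ui + dt / tau * (-ui + h + S[i]
             + sum(w(i - j) * fu[j] for j in range(n)))
             for i, ui in enumerate(u)]
    return u
```

In the DFT, the developmental change described in the abstract would correspond to narrowing the input and excitatory kernel widths, which sharpens the self-sustained peak without altering the model's structure.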


RATIONALE: Oxazolines have attracted the attention of researchers worldwide due to their versatility as carboxylic acid protecting groups, chiral auxiliaries, and ligands for asymmetric catalysis. Electrospray ionization tandem mass spectrometric (ESI-MS/MS) analysis of five 2-oxazoline derivatives has been conducted in order to understand the influence of the side chain on the gas-phase dissociation of these protonated compounds under collision-induced dissociation (CID) conditions. METHODS: Mass spectrometric analyses were conducted in a quadrupole time-of-flight (Q-TOF) spectrometer fitted with an electrospray ionization source. Protonation sites have been proposed on the basis of the gas-phase basicity, proton affinity, atomic charges, and a molecular electrostatic potential map obtained from quantum chemical calculations at the B3LYP/6-31+G(d,p) and G2(MP2) levels. RESULTS: Analysis of the atomic charges, gas-phase basicities and proton affinity values indicates that the nitrogen atom is a possible proton acceptor site. On the basis of these results, two main fragmentation processes have been suggested: one taking place via neutral elimination of the oxazoline moiety (99 u) and another occurring by sequential elimination of neutral fragments of 72 u and 27 u. Both processes should lead to the formation of R+. CONCLUSIONS: The ESI-MS/MS experiments have shown that the side chain can affect the dissociation mechanism of protonated 2-oxazoline derivatives. For the compound bearing a hydroxyl group on the side chain, water loss has been suggested to occur through an E2-type elimination in an exothermic step. Copyright (C) 2012 John Wiley & Sons, Ltd.


Many recent survival studies propose modeling data with a cure fraction, i.e., data in which part of the population is not susceptible to the event of interest. This event may occur more than once for the same individual (a recurrent event). We then have a scenario of recurrent event data in the presence of a cure fraction, which may appear in areas such as oncology, finance, and industry, among others. This paper proposes a multiple-time-scale survival model to analyze recurrent events with a cure fraction. The objective is to analyze, in terms of covariates and censoring, the efficiency of interventions intended to prevent the studied event from happening again. All estimates were obtained using a sampling-based approach, which allows prior information to be incorporated at lower computational cost. Simulations were performed based on a clinical scenario in order to observe some frequentist properties of the estimation procedure for small and moderate sample sizes. An application to a well-known set of real mammary tumor data is provided.


This work evaluates the spatial distribution of normalised rates of droplet breakage and droplet coalescence in liquid-liquid dispersions maintained in agitated tanks at operating conditions normally used to perform suspension polymerisation reactions. In particular, simulations are performed with multiphase computational fluid dynamics (CFD) models to represent, for the first time, the flow field in liquid-liquid styrene suspension polymerisation reactors. CFD tools are used first to compute the spatial distribution of the turbulent energy dissipation rate (ε) inside the reaction vessel; afterwards, normalised rates of droplet breakage and particle coalescence are computed as functions of ε. Surprisingly, the multiphase simulations showed that the rates of energy dissipation can be very high near the free vortex surfaces, an effect completely neglected in previous works. The obtained results indicate the existence of extremely large energy dissipation gradients inside the vessel, so that particle breakage occurs primarily in very small regions surrounding the impeller and the free vortex surface, while particle coalescence takes place in the liquid bulk. As a consequence, particle breakage should be regarded as an independent source term or a boundary phenomenon. Based on the obtained results, it can be very difficult to justify the use of isotropic assumptions to formulate particle population balances in similar systems, even when multiple-compartment models are used to describe the fluid dynamic behaviour of the agitated vessel. (C) 2011 Canadian Society for Chemical Engineering
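The abstract does not state which breakage model was used; as an illustration of how a breakage rate depends on the local dissipation rate ε, a classical Coulaloglou-Tavlarides-type breakage frequency can be evaluated (the constants and fluid properties below are invented, illustrative values, not those of the paper):

```python
import math

def breakage_frequency(d, eps, sigma=0.03, rho_d=905.0, c1=0.4, c2=0.08):
    """Coulaloglou-Tavlarides-type breakage frequency [1/s] for a droplet
    of diameter d [m] in a field with dissipation rate eps [W/kg].
    sigma: interfacial tension [N/m]; rho_d: dispersed-phase density [kg/m3];
    c1, c2: dimensionless fitting constants (illustrative values only)."""
    return (c1 * eps ** (1 / 3) / d ** (2 / 3)
            * math.exp(-c2 * sigma / (rho_d * eps ** (2 / 3) * d ** (5 / 3))))
```

Evaluating this at the ε levels found near the impeller versus those in the bulk makes the reported spatial segregation plausible: the rate is amplified by orders of magnitude where ε is large, so breakage is confined to the high-dissipation regions.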


In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production feature upstream with a discrete manufacturing feature downstream, which must be synchronized. Different time-scale representations are discussed. The first formulation employs a discrete-time representation; the second is a hybrid continuous-discrete model; and the last is based on a continuous-time representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions over longer computational times and was able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159, published online 7 March 2012
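The paper's three two-stage formulations are not reproduced here, but the discrete-time flavour can be conveyed by the classical single-item uncapacitated lot-sizing dynamic program (Wagner-Whitin), a much simpler relative of these models:

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Single-item uncapacitated lot sizing on a discrete time grid.
    f[t] = min over the last production period j <= t of
           f[j-1] + setup_cost + holding cost for demands j..t produced at j."""
    T = len(demand)
    INF = float('inf')
    f = [0.0] + [INF] * T            # f[t]: optimal cost through period t
    for t in range(1, T + 1):
        for j in range(1, t + 1):
            # demand of period i (1-based) is held for (i - j) periods
            hold = sum(holding_cost * (i - j) * demand[i - 1]
                       for i in range(j, t + 1))
            f[t] = min(f[t], f[j - 1] + setup_cost + hold)
    return f[T]
```

For demands [50, 60, 70], a setup cost of 100 and a unit holding cost of 1, the optimum is 260: produce for periods 1-2 in period 1 and for period 3 in period 3.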


In this paper, we present approximate distributions for the ratio of the cumulative wavelet periodograms of stationary and non-stationary time series generated from independent Gaussian processes. We also adapt an existing procedure to use this statistic and its approximate distribution to test whether two regularly or irregularly spaced time series are realizations of the same generating process. Simulation studies show good size and power properties for the test statistic. An application with financial microdata illustrates the test's usefulness. We conclude by advocating the use of these approximate distributions instead of those obtained through randomizations, mainly in the case of irregular time series. (C) 2012 Elsevier B.V. All rights reserved.
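As an illustrative sketch of the quantities involved (not the authors' procedure: the Haar basis, the coefficient ordering and the normalisation are arbitrary choices made here), a cumulative wavelet periodogram and the ratio between two series can be computed as:

```python
import math

def haar_details(x):
    """Full Haar DWT of a dyadic-length series: detail coefficients, all levels."""
    details = []
    a = list(x)
    while len(a) >= 2:
        d = [(a[2 * k] - a[2 * k + 1]) / math.sqrt(2) for k in range(len(a) // 2)]
        a = [(a[2 * k] + a[2 * k + 1]) / math.sqrt(2) for k in range(len(a) // 2)]
        details.extend(d)
    return details

def cumulative_periodogram(x):
    """Running sum of squared wavelet coefficients, normalised to end at 1."""
    energy = [d * d for d in haar_details(x)]
    total = sum(energy) or 1.0
    out, s = [], 0.0
    for e in energy:
        s += e
        out.append(s / total)
    return out

def periodogram_ratio(x, y):
    """Pointwise ratio of the two cumulative periodograms (the test statistic's raw material)."""
    cx, cy = cumulative_periodogram(x), cumulative_periodogram(y)
    return [a / b if b else float('inf') for a, b in zip(cx, cy)]
```

Two realizations of the same stationary process should give a ratio sequence fluctuating around 1; for identical inputs the ratio equals 1 at every point.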


This work is supported by Brazilian agencies Fapesp, CAPES and CNPq


This thesis presents and uses techniques of computational chemistry to explore two different processes induced in human skin by ultraviolet light. The first is the transformation of urocanic acid into an immunosuppressive agent; the other is the enzymatic action of the 8-oxoguanine glycosylase enzyme. The photochemistry of urocanic acid is investigated by time-dependent density functional theory. Vertical absorption spectra of the molecule in different forms and environments are assigned, and candidate states for the photochemistry at different wavelengths are identified. Molecular dynamics simulations of urocanic acid in the gas phase and in aqueous solution reveal considerable flexibility under experimental conditions, particularly for the cis isomer, where competition between intra- and inter-molecular interactions increases flexibility. A model to explain the observed gas-phase photochemistry of urocanic acid is developed, and it is shown that a reinterpretation in terms of a mixture of isomers significantly improves the agreement between theory and experiment and resolves several peculiarities in the spectrum. A model for the aqueous-phase photochemistry of urocanic acid is then developed, in which two excited states govern the efficiency of photoisomerization. The point of entrance into a conical intersection seam is shown to explain the wavelength dependence of the photoisomerization quantum yield. Finally, some mechanistic aspects of the DNA repair enzyme 8-oxoguanine glycosylase are investigated with density functional theory. It is found that the critical amino acid of the active site can provide catalytic power in several different ways, and that a recent proposal involving an SN1-type mechanism appears to be the most efficient.


The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the inter-residue distances for each type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
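The two network statistics named above can be computed directly from a contact map. A minimal sketch (the 8 Å cutoff on, e.g., C-alpha coordinates is a common convention assumed here, not necessarily the thesis's choice):

```python
from collections import deque
from itertools import combinations

def contact_graph(coords, cutoff=8.0):
    """Residue-contact network: nodes are residue indices, edges join pairs
    whose 3-D coordinates lie within `cutoff` of each other."""
    n = len(coords)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) <= cutoff ** 2:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def characteristic_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs if pairs else 0.0

def clustering_coefficient(adj):
    """Mean local clustering: fraction of each node's neighbour pairs that are linked."""
    cs = []
    for v, nb in adj.items():
        k = len(nb)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)
```

A small-world contact network shows a short characteristic path length combined with a clustering coefficient well above that of a random graph of the same size and density.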
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted secondary-structure motifs. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards understanding the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data form the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences, grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the level of sequence identity. However, additional complexity arises from multi-domain proteins, from proteins that share common domains but do not necessarily share the same function, and from the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can serve as templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in current databases of molecular functions and structures.
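The clustering step described above can be sketched as a stringent filter on identity and on the coverage of both sequences, followed by single-linkage grouping of the accepted pairs (field names and threshold values are illustrative, not the thesis's actual pipeline):

```python
def pass_thresholds(hit, min_identity=90.0, min_coverage=0.9):
    """Keep a BLAST-style hit only if percent identity and the alignment's
    coverage of BOTH query and subject clear the thresholds. The dict keys
    (identity, aln_len, qlen, slen) are illustrative names."""
    cov_q = hit["aln_len"] / hit["qlen"]
    cov_s = hit["aln_len"] / hit["slen"]
    return (hit["identity"] >= min_identity
            and cov_q >= min_coverage and cov_s >= min_coverage)

def cluster(pairs, n):
    """Single-linkage clustering of n sequences via union-find over
    the accepted (i, j) pairs; returns the groups as sorted index lists."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Requiring coverage of both sequences (not just the query) is what keeps a single shared domain from pulling two multi-domain proteins of very different lengths into the same cluster.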


In the post-genomic era, with the massive production of biological data, understanding the factors that affect protein stability is one of the most important and challenging tasks for highlighting the role of mutations in human disease. The problem is at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be detailed at a molecular level. To this purpose, scientific efforts focus on characterising mutations that hamper protein function and thereby affect biological processes at the basis of cell physiology. New techniques have been developed with the aim of cataloguing single nucleotide polymorphisms (SNPs) at large across all human chromosomes, and as a result the information in dedicated databases is increasing exponentially. Mutations found at the DNA level, when they occur in transcribed regions, may lead to mutated proteins; this can be a serious medical problem, largely affecting the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. In principle, several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning. Mutations found to be disease-related during DNA analysis are therefore often assumed to perturb protein stability as well. However, so far no extensive analysis at the proteome level has investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease-related and, independently, whether it affects protein stability. Whether the perturbation of protein stability is related to what is routinely referred to as disease therefore remains an open question.
In this work we have tried, for the first time, to explore the relation between mutations at the protein level and their relevance to disease, with a large-scale computational study of the data from different databases. To this aim, in the first part of the thesis we derived, for each mutation type, two probabilistic indices (for 141 out of 150 possible SNPs): the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, considering all the available in vitro thermodynamic data, and the disease index (Pd), which indicates the probability of a mutation being disease-related, given all the mutations clinically associated so far. We find, with robust statistics, that the two indices correlate, with the exception of the mutations related to somatic cancer. Each of the 150 mutation types can thus be coded by two values that allow a direct comparison with database information. Furthermore, we also implemented a computational method that, starting from the protein structure, predicts the effect of a mutation on protein stability, and we find that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome, leading towards the diseasome.
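Indices of this kind amount to per-mutation-type empirical frequencies: the fraction of recorded instances of a mutation type that were destabilising (for Pp) or disease-associated (for Pd). A toy sketch (the thesis's exact estimator is not given in the abstract; the Laplace smoothing here is my own assumption):

```python
def probabilistic_index(event_counts, total_counts, alpha=1.0):
    """Laplace-smoothed per-mutation-type probability estimate, in the spirit
    of a perturbing (Pp) or disease (Pd) index: (events + alpha) over
    (observations + 2*alpha) for each mutation type.
    event_counts / total_counts map type labels (e.g. 'A->V') to counts."""
    return {m: (event_counts.get(m, 0) + alpha) / (total_counts[m] + 2 * alpha)
            for m in total_counts}
```

With two such dictionaries (one built from thermodynamic data, one from clinical annotations), each mutation type gets a (Pp, Pd) pair, and the correlation between the two coordinates can be examined across all types.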