888 results for Computational experiment
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between an N-terminal transmembrane helix and a signal peptide has serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies.
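As a rough illustration of the kind of hydrophobic-region features the abstract mentions (not the authors' discriminant functions), the sketch below scores the most hydrophobic N-terminal window of a sequence using the Kyte-Doolittle scale; the window length, search range and example sequence are arbitrary choices.

# Illustrative sketch only: score the most hydrophobic N-terminal window of a
# protein sequence, the kind of feature (hydrophobicity, start position) used
# to separate signal peptides from transmembrane helices.
KD = {  # Kyte-Doolittle hydropathy values
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def best_nterm_window(seq, window=19, search_len=70):
    """Return (start, mean hydropathy) of the most hydrophobic window
    within the first `search_len` residues."""
    region = seq[:search_len].upper()
    best = (0, float('-inf'))
    for i in range(len(region) - window + 1):
        score = sum(KD.get(aa, 0.0) for aa in region[i:i + window]) / window
        if score > best[1]:
            best = (i, score)
    return best

# Example: a hydrophobic stretch right after the initiator methionine.
start, score = best_nterm_window("MKKTAIAIAVALAGFATVAQA" + "A" * 60)
print(start, round(score, 2))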
Abstract:
A research program on atmospheric boundary layer processes and local wind regimes in complex terrain was conducted in the vicinity of Lake Tekapo in the southern Alps of New Zealand, during two 1-month field campaigns in 1997 and 1999. The effects of the interaction of thermal and dynamic forcing were of specific interest, with a particular focus on the interaction of thermal forcing of differing scales. The rationale and objectives of the field and modeling program are described, along with the methodology used to achieve them. Specific research aims include improved knowledge of the role of surface forcing associated with varying energy balances across heterogeneous terrain, thermal influences on boundary layer and local wind development, and dynamic influences of the terrain through channeling effects. Data were collected using a network of surface meteorological and energy balance stations, radiosonde and pilot balloon soundings, tethered balloon and kite-based systems, sodar, and an instrumented light aircraft. These data are being used to investigate the energetics of surface heat fluxes, the effects of localized heating/cooling and advective processes on atmospheric boundary layer development, and dynamic channeling. A complementary program of numerical modeling includes application of the Regional Atmospheric Modeling System (RAMS) to case studies characterizing typical boundary layer structures and airflow patterns observed around Lake Tekapo. Some initial results derived from the special observation periods are used to illustrate progress made to date. In spite of the difficulties involved in obtaining good data and undertaking modeling experiments in such complex terrain, initial results show that surface thermal heterogeneity has a significant influence on local atmospheric structure and wind fields in the vicinity of the lake. This influence occurs particularly in the morning. However, dynamic channeling effects and the larger-scale thermal effect of the mountain region frequently override these more local features later in the day.
Abstract:
We generate and characterize continuous variable polarization entanglement between two optical beams. We first produce quadrature entanglement and, by performing local operations, transform it into the polarization basis. We extend two entanglement criteria, the inseparability criterion proposed by Duan et al (2000 Phys. Rev. Lett. 84 2722) and the Einstein–Podolsky–Rosen (EPR) paradox criterion proposed by Reid and Drummond (1988 Phys. Rev. Lett. 60 2731), to Stokes operators, and use them to characterize the entanglement. Our results for the EPR paradox criterion are visualized in terms of uncertainty balls on the Poincaré sphere. We demonstrate theoretically that, using two quadrature entangled pairs, it is possible to entangle three orthogonal Stokes operators between a pair of beams, although with a bound √3 times more stringent than for the quadrature entanglement.
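For orientation, the two criteria cited above take the following compact quadrature-domain forms, assuming the common convention [x̂, p̂] = i, so entanglement is demonstrated when the first inequality holds and the EPR paradox when the second does; the Stokes-operator versions used in the paper replace the constant bounds by terms fixed by the Stokes commutation relations.

\[
\langle \Delta^{2}(\hat{x}_A - \hat{x}_B) \rangle + \langle \Delta^{2}(\hat{p}_A + \hat{p}_B) \rangle < 2
\qquad \text{(Duan et al., symmetric form)}
\]
\[
\Delta^{2}_{\mathrm{inf}}\hat{x}\;\Delta^{2}_{\mathrm{inf}}\hat{p} < \tfrac{1}{4}
\qquad \text{(Reid--Drummond EPR criterion)}
\]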
Abstract:
Coffee cultivation is one of the agribusiness activities of greatest socioeconomic importance among the different activities linked to world agricultural trade. One of the greatest contributions of quantitative genetics to breeding is the possibility of predicting genetic gains. When different selection criteria are considered, the prediction of the gains associated with each criterion is of great importance, since it guides breeders on how to use the available genetic material in order to obtain the maximum possible gains for the traits of interest. The present trial was established in July 2004 at the Fazenda Experimental de Bananal do Norte, run by Incaper, in the district of Pacotuba, municipality of Cachoeiro de Itapemirim, in the southern region of the state, with the objective of selecting the best plants among and within half-sib progenies of Coffea canephora using different selection criteria. Individual and joint analyses of variance were carried out for 26 half-sib progenies of Coffea canephora. The experimental design was randomized blocks with four additional checks, four replicates and plots of five plants, at a spacing of 3.0 m x 1.2 m. The data from the last five harvests were considered. The traits measured were: flowering, maturation, grain size, weight, plant size, vigour, rust, Cercospora leaf spot, dieback, general score, percentage of floating fruits and leaf miner. All statistical analyses were performed with the GENES software for genetics and statistics. Selection gains were estimated for a selection percentage of 20% among and within families, kept the same for all traits. All traits were selected in the positive direction, except flowering, plant size, rust, Cercospora leaf spot, dieback, percentage of floating fruits and leaf miner, for which a decrease from their original means was sought. The selection criteria studied were: conventional selection among and within families, combined selection index, mass selection and stratified mass selection. This dissertation comprises two chapters, in which biometric analyses were carried out, such as obtaining estimates of genetic parameters. For most of the traits studied, significant differences (P<0.05) were found among genotypes which, together with the genotypic coefficients of variation, the genotypic coefficient of determination and the CVg/CVe ratio, indicate the existence of genetic variability in the genetic materials for most traits and favourable conditions for obtaining genetic gains through selection. These traits were also correlated. The data were subjected to analysis of variance and multivariate analysis, applying clustering techniques and UPGMA, mean tests and correlation studies. In the clustering analysis, the generalized Mahalanobis distance was used as the dissimilarity measure, and the Tocher method was used to delimit the groups. Genetic diversity was found for the traits associated with physiological quality, seed reserve mobilization, and seedling dimensions and biomass. Four groups of genotypes could be formed.
Seed dry mass, seed reserve reduction and seedling dry mass are positively correlated with each other, whereas seed reserve reduction and the efficiency of converting those reserves into seedlings are negatively correlated. According to the results, all traits showed different levels of genetic variability and the selection criteria used proved efficient for breeding, with the combined selection index being the criterion that gave the best results in terms of gains and therefore the most appropriate for the genetic improvement of the population studied. In the correlation studies, the phenotypic correlation was higher than the genotypic correlation in 70% of the cases, showing a greater influence of environmental factors relative to genotypic factors, and conditions favourable to the improvement of the different traits. In the genetic divergence study, the clustering of genotypes by the Tocher method distributed the genotypes into three groups.
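For reference, two quantities at the core of the analyses described above, written in generic textbook notation rather than copied from the dissertation: the predicted gain from selection (h² the heritability of the selection unit, X̄_s the mean of the selected group, X̄_0 the original population mean) and the generalized Mahalanobis distance between genotypes i and j (Σ the residual covariance matrix):

\[
GS = h^{2}\,(\bar{X}_{s} - \bar{X}_{0}), \qquad
GS\% = 100\,\frac{GS}{\bar{X}_{0}}, \qquad
D^{2}_{ij} = (\mathbf{x}_{i} - \mathbf{x}_{j})^{\top}\,\Sigma^{-1}\,(\mathbf{x}_{i} - \mathbf{x}_{j}).
\]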
Abstract:
This study develops a theoretical model that explains the effectiveness of the balanced scorecard approach through a system dynamics and feedback learning perspective. Presumably, the balanced scorecard leads to a better understanding of context, allowing managers to externalize and improve their mental models. We present a set of hypotheses about the influence of the balanced scorecard approach on mental models and performance, and test them in a simulation experiment based on a system dynamics model. The experiment included three conditions: financial indicators; balanced scorecard indicators; and balanced scorecard indicators with the aid of a strategy map review. Two of the three hypotheses were confirmed. It was concluded that a strategy map review positively influences mental model similarity, and that mental model similarity positively influences performance.
Computational evaluation of hydraulic system behaviour with entrapped air under rapid pressurization
Abstract:
The pressurization of hydraulic systems containing entrapped air is considered a critical condition for the infrastructure's security due to the transient pressure variations that often occur. The objective of the present study is the computational evaluation of the trends observed in the variation of the maximum surge pressure resulting from rapid pressurizations. The results are also compared with those obtained in previous studies. A brief state of the art in this domain is presented. This research work is applied to an experimental system with entrapped air at the top of a vertical pipe section. The evaluation is developed with the elastic model based on the method of characteristics, considering a moving liquid boundary, and the results are compared with those achieved with the rigid liquid column model.
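For context, the elastic (waterhammer) model referred to above is usually integrated by the method of characteristics along dx/dt = ±a, using compatibility equations of the standard form below; the notation (H piezometric head, Q discharge, a wave speed, g gravity, A cross-sectional area, D diameter, f friction factor) is generic, and the moving air-liquid boundary treated in the study is not shown.

\[
\frac{dH}{dt} \pm \frac{a}{gA}\,\frac{dQ}{dt} \pm \frac{a f\,Q|Q|}{2 g D A^{2}} = 0
\quad \text{along} \quad \frac{dx}{dt} = \pm a .
\]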
Abstract:
Power system planning, control and operation require an adequate use of existing resources so as to increase system efficiency. The use of optimal solutions in power systems allows huge savings, stressing the need for adequate optimization and control methods. These must be able to solve the envisaged optimization problems in time scales compatible with operational requirements. Power systems are complex, uncertain and changing environments that make the use of traditional optimization methodologies impracticable in most real situations. Computational intelligence methods present good characteristics to address this kind of problem and have already proved to be efficient for very diverse power system optimization problems. Evolutionary computation, fuzzy systems, swarm intelligence, artificial immune systems, neural networks, and hybrid approaches are presently seen as the most adequate methodologies to address several planning, control and operation problems in power systems. Future power systems, with intensive use of distributed generation and electricity market liberalization, increase power system complexity and bring huge challenges to the forefront of the power industry. Decentralized intelligence and decision making require more effective optimization and control techniques so that the involved players can make the most adequate use of existing resources in the new context. The application of computational intelligence methods to several problems of future power systems is presented in this chapter. Four different applications are presented to illustrate the promise and potential of computational intelligence.
Abstract:
Quinoxaline and its derivatives are an important class of heterocyclic compounds, in which the elements N, S and O replace carbon atoms in the ring. The molecular formula of quinoxaline is C8H6N2, formed by two aromatic rings, benzene and pyrazine. It is rare in the natural state, but its synthesis is easy to carry out. Modifications to the quinoxaline structure provide a wide variety of compounds and activities, such as antimicrobial, antiparasitic, antidiabetic, antiproliferative, anti-inflammatory, anticancer, antiglaucoma and antidepressant activities, including AMPA receptor antagonism. These compounds are also important in the industrial field owing, for example, to their ability to inhibit metal corrosion. Computational chemistry, a natural branch of theoretical chemistry, is a well-developed method used to represent molecular structures, simulating their behaviour with the equations of quantum and classical physics. A wide variety of software tools used in computational chemistry is available, allowing the calculation of energies, geometries, vibrational frequencies, transition states, reaction pathways, excited states and a variety of properties based on several uncorrelated and correlated wave functions. In this respect, its application to the study of quinoxalines is important for determining their chemical characteristics, allowing a more complete analysis in less time and at lower cost.
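As a hedged illustration of the kind of computational-chemistry tooling the abstract refers to (not the packages used in the work itself), the sketch below builds quinoxaline from a SMILES string and performs a quick force-field geometry optimization with RDKit; quantum-chemical properties such as vibrational frequencies or excited states would require a dedicated electronic-structure code.

# Illustrative only: build quinoxaline (C8H6N2) and optimize its geometry with
# a classical force field using RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors, rdMolDescriptors

mol = Chem.MolFromSmiles("c1ccc2nccnc2c1")  # quinoxaline: benzene fused with pyrazine
mol = Chem.AddHs(mol)                       # explicit hydrogens for 3D embedding
AllChem.EmbedMolecule(mol, randomSeed=42)   # generate an initial 3D geometry
AllChem.MMFFOptimizeMolecule(mol)           # MMFF94 force-field optimization

print("Formula:", rdMolDescriptors.CalcMolFormula(mol))      # expect C8H6N2
print("Molecular weight:", round(Descriptors.MolWt(mol), 2))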
Abstract:
Project work carried out to obtain the degree of Master in Informatics and Computer Engineering.
Abstract:
Conference: 2nd Experiment at International Conference (Exp at) - Univ Coimbra, Coimbra, Portugal - Sep 18-20, 2013
Abstract:
Computational Intelligence (CI) includes four main areas: Evolutionary Computation (genetic algorithms and genetic programming), Swarm Intelligence, Fuzzy Systems and Neural Networks. This article shows how CI techniques go beyond the strict limits of the Artificial Intelligence field and can help solve real problems from distinct engineering areas: Mechanical, Computer Science and Electrical Engineering.
Abstract:
This paper presents a computational tool (PHEx), developed in Excel VBA, for solving sizing and rating design problems involving Chevron-type plate heat exchangers (PHE) with a 1-pass-1-pass configuration. The rating methodology used in the program is outlined, and a case study is presented to show how the program can be used to carry out sensitivity analyses on several dimensional parameters of the PHE and to observe their effect on the transferred heat and pressure drop.
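Not PHEx itself, but a minimal sketch of one rating step of the kind the abstract describes: generic single-phase chevron-plate correlations Nu = C·Re^n·Pr^(1/3) and f = a·Re^(-b) with placeholder constants, an overall coefficient from the two film resistances plus plate conduction, duty from Q = U·A·ΔT_lm, and a Fanning-type channel pressure drop. All geometry, property values and correlation constants below are illustrative assumptions.

def film_coefficient(m_dot, cp, k, mu, d_h, A_flow, C=0.3, n=0.65):
    """Single-phase film coefficient from Nu = C*Re**n*Pr**(1/3) (placeholder constants)."""
    G = m_dot / A_flow          # channel mass velocity [kg/m2 s]
    Re = G * d_h / mu
    Pr = cp * mu / k
    Nu = C * Re**n * Pr**(1 / 3)
    return Nu * k / d_h, Re, G

def pressure_drop(Re, G, rho, d_h, L_p, a=0.8, b=0.25):
    """Channel frictional pressure drop with f = a*Re**(-b) (placeholder constants)."""
    f = a * Re**(-b)
    return 4 * f * (L_p / d_h) * G**2 / (2 * rho)

# Hot and cold side film coefficients (water-like properties, illustrative numbers)
h_hot, Re_h, G_h = film_coefficient(m_dot=1.2, cp=4180, k=0.64, mu=4.0e-4,
                                    d_h=7e-3, A_flow=1.8e-3)
h_cold, Re_c, G_c = film_coefficient(m_dot=1.0, cp=4182, k=0.60, mu=8.0e-4,
                                     d_h=7e-3, A_flow=1.8e-3)

t_plate, k_plate, A_heat, dT_lm = 0.5e-3, 16.0, 4.0, 12.0
U = 1.0 / (1.0 / h_hot + 1.0 / h_cold + t_plate / k_plate)  # overall coefficient [W/m2 K]
Q = U * A_heat * dT_lm                                      # transferred heat [W]
dp_hot = pressure_drop(Re_h, G_h, rho=980.0, d_h=7e-3, L_p=1.0)
print(round(U, 1), round(Q, 1), round(dp_hot, 1))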
Abstract:
Glass fibre-reinforced plastics (GFRP), nowadays commonly used in the construction, transportation and automobile sectors, have been considered inherently difficult to recycle due to both the cross-linked nature of the thermoset resins, which cannot be remolded, and the complex composition of the composite itself, which includes glass fibres, matrix and different types of inorganic fillers. Presently, most of the GFRP waste is landfilled, leading to negative environmental impacts and added costs. With an increasing awareness of environmental matters and the subsequent desire to save resources, recycling would convert an expensive waste disposal into a profitable reusable material. There are several methods to recycle GFR thermostable materials: (a) incineration, with partial energy recovery due to the heat generated during combustion of the organic part; (b) thermal and/or chemical recycling, such as solvolysis, pyrolysis and similar thermal decomposition processes, with glass fibre recovery; and (c) mechanical recycling or size reduction, in which the material is subjected to a milling process in order to obtain a specific grain size that makes the material suitable as reinforcement in new formulations. This last method has important advantages over the previous ones: there is no atmospheric pollution by gas emission, much simpler equipment is required compared with the ovens necessary for thermal recycling processes, and no chemical solvents, with their subsequent environmental impacts, are needed. In this study the effect of the incorporation of recycled GFRP waste materials, obtained by means of milling processes, on the mechanical behavior of polyester polymer mortars was assessed. For this purpose, different contents of recycled GFRP waste materials, with distinct size gradings, were incorporated into polyester polymer mortars as sand aggregate and filler replacements. The effect of GFRP waste treatment with a silane coupling agent was also assessed. Design of experiments and data treatment were accomplished by means of factorial design and analysis of variance (ANOVA). The use of factorial experimental design, instead of the one-factor-at-a-time method, efficiently allows the evaluation of the effects and possible interactions of the different material factors involved. Experimental results were promising toward the recyclability of GFRP waste materials as polymer mortar aggregates, without significant loss of mechanical properties with regard to non-modified polymer mortars.
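As a small, hedged illustration of the factorial-design/ANOVA treatment mentioned above, the sketch below runs a two-way ANOVA with interaction on made-up data; the factor names (waste content, silane treatment), levels and response values are hypothetical, not the study's data.

# Hypothetical data: two factors (GFRP waste content, silane treatment) and a
# measured response (e.g. a strength value); two-way ANOVA with interaction.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "waste_pct": [0, 0, 4, 4, 8, 8, 0, 0, 4, 4, 8, 8],
    "silane":    ["no", "no", "no", "no", "no", "no",
                  "yes", "yes", "yes", "yes", "yes", "yes"],
    "strength":  [30.1, 29.4, 28.2, 27.9, 25.8, 26.3,
                  30.5, 29.9, 29.0, 28.6, 27.1, 27.4],
})

model = ols("strength ~ C(waste_pct) * C(silane)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # effects and interaction table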
Abstract:
The computations performed by the brain ultimately rely on the functional connectivity between neurons embedded in complex networks. It is well known that the neuronal connections, the synapses, are plastic, i.e. the contribution of each presynaptic neuron to the firing of a postsynaptic neuron can be independently adjusted. The modulation of effective synaptic strength can occur on time scales that range from tens or hundreds of milliseconds, to tens of minutes or hours, to days, and may involve pre- and/or post-synaptic modifications. The collection of these mechanisms is generally believed to underlie learning and memory and, hence, it is fundamental to understand their consequences in the behavior of neurons.(...)
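Not taken from the thesis, but as one concrete example of timing-dependent synaptic modification, the sketch below implements a textbook pair-based STDP rule in which each synapse's weight changes with the relative timing of pre- and postsynaptic spikes; all parameter values are illustrative.

# Textbook pair-based STDP rule (illustrative parameters only).
import math

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in milliseconds

def stdp_dw(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:    # pre before post: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post before pre: depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

w = 0.5
for dt in (+5.0, +15.0, -5.0, -30.0):        # a few spike pairings
    w = min(1.0, max(0.0, w + stdp_dw(dt)))  # keep the weight bounded
print(round(w, 4))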