930 results for Genomics -- Mathematical models

Relevance: 80.00%

Publisher:

Abstract:
A central difficulty in modeling epileptogenesis using biologically plausible computational and mathematical models is not producing activity characteristic of a seizure, but rather producing it in response to a specific and quantifiable physiologic change or pathologic abnormality. This is particularly problematic given that the pathophysiological genesis of most epilepsies is largely unknown. However, several volatile general anesthetic agents, whose principal targets of action are quantifiably well characterized, are also known to be proconvulsant. The authors describe recent approaches to theoretically describing the electroencephalographic effects of volatile general anesthetic agents, which may provide important insights into the physiologic mechanisms that underpin seizure initiation.


The inhibitory effects of toxin-producing phytoplankton (TPP) on zooplankton modulate the dynamics of marine plankton. In this article, we employ simple mathematical models to compare theoretically the dynamics of phytoplankton–zooplankton interaction in situations where TPP are present with those where TPP are absent. We consider two sets of three-component interaction models: one that does not include the effect of TPP and one that does. The negative effect of TPP on zooplankton is described by a non-linear interaction term. Extensive theoretical analyses of the models have been performed to understand the qualitative behaviour of the model systems around every possible equilibrium. The results of local-stability analysis and numerical simulations demonstrate that the two model systems differ qualitatively with regard to oscillations and stability. The model system that does not include TPP is asymptotically stable around the coexisting equilibrium, whereas the system that includes TPP oscillates for a range of parametric values associated with the toxin-inhibition rate and competition coefficients. Our analysis suggests that the qualitative dynamics of plankton–zooplankton interactions are very likely to be altered by the presence of TPP species, and the effects of TPP should therefore be considered carefully when modelling plankton dynamics.
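A minimal sketch of the model class this abstract describes: a three-component system (non-toxic phytoplankton, TPP, zooplankton) in which toxin inhibition enters as a non-linear (saturating) interaction term. All parameter values here are hypothetical, chosen only to make the system integrate cleanly; they are not the paper's fitted coefficients.

```python
import numpy as np
from scipy.integrate import odeint

def plankton(y, t, r, K, a, b, c, d, theta, gamma):
    """Non-toxic phytoplankton P, toxic phytoplankton T, zooplankton Z.
    Toxin inhibition of zooplankton is the saturating non-linear term
    theta * T * Z / (gamma + T)."""
    P, T, Z = y
    dP = r * P * (1 - (P + T) / K) - a * P * Z          # logistic growth, grazing
    dT = r * T * (1 - (P + T) / K) - b * T * Z          # TPP grazed at rate b
    dZ = c * (a * P + b * T) * Z - d * Z - theta * T * Z / (gamma + T)
    return [dP, dT, dZ]

# Hypothetical parameters: r, K, a, b, c, d, theta, gamma
params = (1.0, 10.0, 0.3, 0.2, 0.5, 0.1, 0.6, 2.0)
t = np.linspace(0.0, 50.0, 501)
traj = odeint(plankton, [1.0, 1.0, 0.5], t, args=params)
```

Comparing runs with `theta = 0` (no toxin effect) against `theta > 0` is the kind of experiment the paper's stability analysis formalizes.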


Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished by actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has motivated the development of sophisticated compensation and control methods to achieve the desired performance, be that accurate motion tracking, fast movement or contact with the environment. This thesis presents a dual-drive actuator design that is capable of physically linearising friction and hence eliminates the need for complex compensation algorithms. A number of mathematical models are derived that allow the actuator dynamics to be simulated. The actuator may be constructed using geared dc motors, in which case the benefit of torque magnification is retained while the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by differencing the two drive positions. Owing to this and the linearised nature of friction, the actuator is well suited to low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to its design. When idle, the device uses power while many other, single-drive actuators do not. The complexity of the models also means that parameterisation is difficult, and management of start-up conditions still poses a challenge.


The deterpenation of bergamot essential oil can be performed by liquid–liquid extraction using hydrous ethanol as the solvent. A ternary mixture composed of 1-methyl-4-prop-1-en-2-yl-cyclohexene (limonene), 3,7-dimethylocta-1,6-dien-3-yl-acetate (linalyl acetate), and 3,7-dimethylocta-1,6-dien-3-ol (linalool), three major compounds commonly found in bergamot oil, was used to simulate this essential oil. Liquid–liquid equilibrium data were experimentally determined for systems containing essential oil compounds, ethanol, and water at 298.2 K and are reported in this paper. The experimental data were correlated using the NRTL and UNIQUAC models, and the mean deviations between calculated and experimental data were lower than 0.0062 in all systems, indicating the good descriptive quality of the molecular models. To verify the effect of the water mass fraction in the solvent and of the linalool mass fraction in the terpene phase on the distribution coefficients of the essential oil compounds, nonlinear regression analyses were performed, yielding mathematical models with correlation coefficients higher than 0.99. The results show that as the water content in the solvent phase increased, the distribution coefficient (k) decreased, regardless of the type of compound studied. Conversely, as the linalool content increased, the distribution coefficients of the hydrocarbon terpene and the ester also increased. However, the linalool distribution coefficient was negatively affected when the terpene alcohol content increased in the terpene phase.
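The distribution coefficient the abstract refers to is simply the ratio of a compound's mass fraction in the solvent (extract) phase to that in the terpene (raffinate) phase. A small sketch with hypothetical tie-line data (not the paper's measured values):

```python
# Hypothetical tie-line mass fractions: (extract phase, raffinate phase).
# These are illustrative numbers, not the experimental data of the paper.
tie_lines = {
    "limonene":        (0.010, 0.280),
    "linalyl acetate": (0.015, 0.120),
    "linalool":        (0.040, 0.090),
}

def distribution_coefficient(w_extract, w_raffinate):
    """k_i = w_i(extract) / w_i(raffinate)."""
    return w_extract / w_raffinate

k = {name: distribution_coefficient(we, wr)
     for name, (we, wr) in tie_lines.items()}

# Solvent selectivity for the oxygenated compound relative to the terpene:
selectivity = k["linalool"] / k["limonene"]
```

A selectivity above 1 is what makes hydrous ethanol useful for deterpenation: the oxygenated fragrance compounds partition into the solvent phase preferentially over limonene.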


A statistical data analysis methodology was developed to evaluate the field emission properties of many samples of copper oxide nanostructured field emitters. This analysis was largely done in terms of Seppen-Katamuki (SK) charts, field strength and emission current. Physical and mathematical models were derived to describe the effect of small electric field perturbations in the Fowler-Nordheim (F-N) equation, and thereby to explain the trend of the data represented in the SK charts. The field enhancement factor and the emission area parameters proved to be very sensitive to variations in the electric field for most of the samples. We have found that the anode-cathode distance is critical in the field emission characterization of samples having a non-rigid nanostructure.
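The field enhancement factor mentioned above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. A sketch of that extraction on synthetic data (work function and enhancement factor are assumed values, not the paper's):

```python
import numpy as np

# Commonly quoted F-N constants and assumed sample parameters.
a_fn = 1.54e-6      # A eV V^-2
b_fn = 6.83e9       # eV^-3/2 V m^-1
phi = 4.6           # assumed work function, eV
beta_true = 500.0   # hypothetical field enhancement factor

E = np.linspace(2e6, 8e6, 50)   # macroscopic field, V/m
J = a_fn * (beta_true * E) ** 2 / phi * np.exp(
    -b_fn * phi ** 1.5 / (beta_true * E))

# F-N plot: ln(J/E^2) vs 1/E is a line with slope -b*phi^1.5/beta,
# so beta is recovered from the fitted slope.
slope, intercept = np.polyfit(1.0 / E, np.log(J / E ** 2), 1)
beta_est = -b_fn * phi ** 1.5 / slope
```

The paper's point about perturbations can be seen here directly: a small change in E shifts both the slope and intercept of this line, so the extracted enhancement factor and emission area respond sensitively.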


Mathematical models, as instruments for understanding the workings of nature, are a traditional tool of physics, but they also play an ever-increasing role in biology, in the description of fundamental processes as well as of complex systems. In this review, the authors discuss two examples of the application to genetics of group theoretical methods, the mathematical discipline for a quantitative description of the idea of symmetry. The first appears, in the form of a pseudo-orthogonal (Lorentz-like) symmetry, in the stochastic modelling of what may be regarded as the simplest possible example of a genetic network and, hopefully, a building block for more complicated ones: a single self-interacting or externally regulated gene with only two possible states, 'on' and 'off'. The second is the algebraic approach to the evolution of the genetic code, according to which the current code results from a dynamical symmetry-breaking process, starting from an initial state of complete symmetry and ending in the presently observed final state of low symmetry. In both cases, symmetry plays a decisive role: in the first, it is a characteristic feature of the dynamics of the gene switch and its decay to equilibrium, whereas in the second, it provides the guidelines for the evolution of the coding rules.
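The "decay to equilibrium" of the two-state gene switch can be illustrated with the elementary master equation for the probability of being 'on', dp/dt = k_on(1 - p) - k_off p. This is only the deterministic skeleton of the stochastic model the review discusses, with illustrative rate names:

```python
import numpy as np

def gene_switch(p0, k_on, k_off, t):
    """Probability that a two-state ('on'/'off') gene is on at time t,
    solving dp/dt = k_on*(1 - p) - k_off*p.
    The switch relaxes to p* = k_on/(k_on + k_off) at rate k_on + k_off."""
    p_star = k_on / (k_on + k_off)
    return p_star + (p0 - p_star) * np.exp(-(k_on + k_off) * np.asarray(t, float))
```

The relaxation rate k_on + k_off and the equilibrium p* are exactly the quantities whose behaviour the symmetry analysis organizes in the full stochastic treatment.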


Many solutions to AI problems require the task to be represented in one of a multitude of rigorous mathematical formalisms. The construction of such mathematical models forms a difficult problem which is often left to the user of the problem solver. This void between problem solvers and the problems is studied by the eclectic field of automated modelling. Within this field, compositional modelling, a knowledge-based methodology for system modelling, has established itself as a leading approach. In general, a compositional modeller organises knowledge in a structure of composable fragments that relate to particular system components or processes. Its embedded inference mechanism chooses the appropriate fragments with respect to a given problem, instantiates and assembles them into a consistent system model. Many different types of compositional modeller exist, however, with significant differences in their knowledge representation and approach to inference. This paper examines compositional modelling. It presents a general framework for building and analysing compositional modellers. Based on this framework, a number of influential compositional modellers are examined and compared. The paper also identifies the strengths and weaknesses of compositional modelling and discusses some typical applications.


Drinking water utilities in urban areas are focused on finding smart solutions to new challenges in their real-time operation: limited water resources, intensive energy requirements, a growing population, a costly and ageing infrastructure, increasingly stringent regulations, and increased attention to the environmental impact of water use. Such challenges force water managers to monitor and control not only water supply and distribution, but also consumer demand. This paper presents and discusses novel methodologies and procedures towards an integrated water resource management system based on advanced ICT technologies of automation and telecommunications for substantially improving the efficiency of drinking water networks (DWN) in terms of water use, energy consumption, water loss minimization, and water quality guarantees. In particular, the paper addresses the first results of the European project EFFINET (FP7-ICT2011-8-318556), devoted to the monitoring and control of the DWN in Barcelona (Spain). Results are split into two levels according to different management objectives: (i) the monitoring level covers all aspects involved in observing the current state of the system and detecting/diagnosing abnormal situations, and is achieved through sensors and communications technology, together with mathematical models; (ii) the control level is concerned with computing the best admissible control strategies for network actuators so as to optimize a given set of operational goals related to the performance of the overall system. This level covers network control (optimal management of water and energy) and demand management (smart metering, efficient supply).
Taking the Barcelona DWN as the case study will make it possible to demonstrate the general applicability of the proposed integrated ICT solutions and their effectiveness in the management of DWNs, with considerable savings in electricity costs and reduced water loss, while ensuring the high European standards of water quality for citizens.


Quantifying precipitation is made difficult by the extreme randomness of the phenomenon in nature. Conventional methods for measuring precipitation work by spatializing rainfall measured at point locations (rain gauge stations) over the whole area of interest, so a satisfactory result requires a network with a large number of well-distributed gauges across that area. However, rain gauges are notoriously scarce and the few that exist are poorly distributed, not only in Brazil but over vast areas of the globe. In this context, precipitation estimates based on remote sensing and geoprocessing techniques aim to leverage the existing rain gauges through a spatialization based on physical criteria. Moreover, remote sensing is the most capable tool for generating precipitation estimates over the oceans and over the vast continental areas devoid of any rainfall information. This work investigated the use of remote sensing and geoprocessing techniques for precipitation estimation in southern Brazil. Three computational algorithms were tested, using images from channels 1, 3 and 4 (visible, water vapour and infrared) of the GOES 8 satellite (Geostationary Operational Environmental Satellite 8), provided by the Centro de Previsão de Tempo e Estudos Climáticos of the Instituto Nacional de Pesquisas Espaciais. The study area comprised the entire state of Rio Grande do Sul, using daily rainfall data from 142 gauges for the year 1998. The algorithms seek to identify precipitable clouds in order to build statistical models that correlate the daily and ten-day precipitation observed at ground level with certain physical characteristics of the clouds accumulated over the same period and at the same geographic position as each rain gauge considered.
The decision criteria guiding the algorithms were based on cloud-top temperature (via thermal infrared), reflectance in the visible channel, neighbourhood characteristics, and the temperature versus temperature-gradient plane. The results of the statistical models are expressed as precipitation maps per time interval, which can be compared with precipitation maps obtained by conventional means.
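A sketch of the kind of decision rule described here: screen pixels by IR cloud-top temperature and visible reflectance, then assign a rain rate to the pixels that pass. The thresholds are hypothetical and the rate relation borrows the shape of the widely used GOES "auto-estimator" curve as a stand-in for the thesis's own fitted statistical models:

```python
import numpy as np

T_MAX = 235.0   # K: only colder cloud tops are treated as precipitating (assumed)
R_MIN = 0.4     # minimum visible reflectance for a precipitable cloud (assumed)

def rain_rate(t_top_k, reflectance):
    """Rain rate (mm/h) from IR cloud-top temperature and visible
    reflectance; zero wherever the pixel fails the precipitable-cloud
    screening. Coefficients follow the auto-estimator-style relation
    R = 1.1183e11 * exp(-3.6382e-2 * T**1.2), used here illustratively."""
    t = np.asarray(t_top_k, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    mask = (t < T_MAX) & (r > R_MIN)
    rate = 1.1183e11 * np.exp(-3.6382e-2 * t ** 1.2)
    return np.where(mask, rate, 0.0)
```

Colder cloud tops yield higher rates, and warm or dim pixels are screened out entirely, mirroring the thesis's use of temperature and reflectance as decision criteria.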


This thesis presents an analysis of the thermal behaviour of a solar water-heating system operating by thermosiphon. In this type of system the fluid in the solar collector circulates by natural convection, driven by the difference in water density along the circuit. The mass flow rate in these systems varies over the day and over the year, depending, among other factors, on the absorbed solar irradiance, the water temperature profile in the system, the geometry, the storage volume, and the hot-water demand profile. For a detailed evaluation of the thermal behaviour of thermosiphon solar heaters, experimental tests and theoretical calculations were carried out. The experimental results agreed with those reported in the literature, and their analysis underpinned the development of TermoSim, a computer program for simulating the thermal behaviour of solar water-heating systems. The mathematical treatment adopted in TermoSim models the solar collectors according to the Hottel-Bliss-Whillier theory. The storage tank is modelled with thermal stratification, convection, and conduction between layers. The mass flow rate is obtained from the momentum balance around the circuit. The mathematical models used to build TermoSim were validated by comparing simulated results with experimental measurements. It was shown that these models are adequate and reproduce accurately the thermal behaviour of the solar collectors and the storage tank. In addition to TermoSim, the program TermoDim was also developed: a sizing tool for solar heating systems that requires only the geometric parameters of the system, monthly-average meteorological data, and information about the demand volume.
TermoDim is suitable for estimating the performance of thermosiphon solar heaters with vertical and horizontal tanks. Its sizing method is based on a correlation for the monthly-average efficiency obtained in this work from a large number of simulations.
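The Hottel-Bliss-Whillier collector model mentioned above reduces, for the useful energy gain, to Q_u = A F_R [G(τα) - U_L(T_in - T_a)], clipped at zero when losses exceed absorption. A minimal sketch with illustrative parameter values (not TermoSim's):

```python
def collector_gain(G, T_in, T_amb, area=2.0, F_R=0.8, tau_alpha=0.8, U_L=5.0):
    """Hottel-Bliss-Whillier useful gain of a flat-plate collector in W:
    Q_u = A * F_R * [G*(tau*alpha) - U_L*(T_in - T_amb)], never negative.
    G in W/m^2, temperatures in degrees C; all defaults are illustrative."""
    q = area * F_R * (G * tau_alpha - U_L * (T_in - T_amb))
    return max(q, 0.0)
```

In a thermosiphon simulation this gain feeds the buoyancy-driven momentum balance: a larger Q_u raises the collector outlet temperature, which increases the density difference around the loop and hence the natural-convection flow rate.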


In this work we study the theoretical development of some basic mathematical models of infectious diseases caused by macroparasites, as well as the difficulties involved in them. The transmission models described here concern the group of directly transmitted parasites: the helminths. The peculiar reproductive behaviour of the helminth within the definitive host, producing stages that will be infective to other hosts, makes the epidemiology of helminth infections fundamentally different from that of all other infectious agents. An important feature of these models is the assumed distribution of parasites among their hosts. The size of the parasite burden (intensity of infection) in a host is the central determinant of the transmission dynamics of helminths, as well as of the morbidity caused by these parasites. We study the dynamics of direct-life-cycle helminth parasites for monoecious (hermaphroditic) parasites and also for polygamous dioecious (male-female) parasites, taking into account an appropriate mating function, with parasites always assumed to follow a negative binomial distribution among hosts. Using analytical and numerical approaches, we present the stability analysis of the equilibrium points of the system. Calculations of prevalence, as well as of the effects of chemotherapeutic agents and vaccination on the control of transmission and morbidity of direct-life-cycle helminth parasites, are also presented in this work.
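One concrete consequence of the negative binomial assumption mentioned above is the standard relation between mean worm burden and prevalence: with mean burden M and aggregation parameter k, the fraction of hosts carrying at least one parasite is P = 1 - (1 + M/k)^(-k). A minimal sketch:

```python
def prevalence(mean_burden, k):
    """Prevalence of infection when parasites are negative-binomially
    distributed among hosts: P = 1 - (1 + M/k)**(-k).
    Small k means strong aggregation (most worms in few hosts), so
    prevalence is low even when the mean burden is substantial."""
    return 1.0 - (1.0 + mean_burden / k) ** (-k)
```

As k grows large the distribution approaches a Poisson and P tends to 1 - exp(-M); as k shrinks, the same mean burden is packed into ever fewer hosts, which is why burden rather than prevalence is the central quantity in these models.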


Unlike the methodological sciences, such as mathematics and decision theory, which use the hypothetical-deductive method and may be fully expressed in complex mathematical models because their only truth criterion is logical consistency, the substantive sciences have correspondence to reality as their truth criterion, adopt an empirical-deductive method, and are expected to generalize from often unreliable regularities and tendencies. Given this, it is very difficult for economists to predict economic behavior, particularly major financial crises.


Team composition is a recurring theme across different fields of knowledge. The interest in defining the stages and relevant variables of this process, regarded as complex, is shared by researchers, practitioners, and developers of Information Systems (IS). However, while theoretical lines originating in organizational studies seek to consolidate mathematical models that reflect the relation between team-composition variables and team performance, emerging theories, such as Social Combination theory, add new elements to the discussion. Additionally, variables specific to each context, which in the case of this research is Brazilian executive education, are also mentioned as relevant to structuring groups. Given the interest in this phenomenon and the variety of theoretical strands that address it, this research was proposed to describe how teaching teams are built and to identify the variables considered relevant in this process. An initial theoretical model was developed and applied. Given the nature of the research question, an exploratory-descriptive methodological approach was used, based on multiple case studies conducted at four Brazilian higher-education institutions that offer executive-education courses. Data collection and analysis were guided by the methods proposed by Huberman and Miles (1983) and Yin (2010), comprising the use of a case-study protocol as well as tables and charts standardized in light of the initial theoretical model.
The results of this work indicate, for the most part, that: Social Combination theories and Education theories add elements relevant to understanding the team-composition process; there are unstructured variables that go unconsidered in the documents used to evaluate and select professionals for teaching teams; and there are composition variables that are only considered after the end of the teams' first cycle of activities. Based on the empirical findings, the application of the theoretical model was adjusted and presented. Additional contributions, reflections, limitations, and proposals for future studies are presented in the concluding chapter.


Agricultural and agro-industrial residues are often considered both an environmental and an economic problem. A paradigm shift is therefore needed, treating residues as biorefinery feedstocks. In this work cherimoya (Annona cherimola Mill.) seeds, which are lipid-rich (ca. 30%) and have a significant lignocellulosic fraction, were used as an example of a residue without any current valorization. Firstly, the lipid fraction was obtained by solvent extraction. The extraction yield varied from 13% to 28%, according to the extraction method, extraction time, and solvent purity. This oil was converted into biodiesel (by base-catalyzed transesterification), yielding 76 g FAME/100 g oil. The biodiesel obtained is likely to be incorporated into the commercial chain, according to the EN 14214 standard. The remaining lignocellulosic fraction was subjected to two alternative fractionation processes for the selective recovery of hemicellulose, targeting different products. Empirical mathematical models were developed for both processes, with a view to future scale-up. Autohydrolysis rendered essentially oligosaccharides (10 g L-1) with properties indicating potential food/feed/pharmacological applications. The remaining solid was enzymatically saccharified, reaching a saccharification yield of 83%. The hydrolyzate obtained by dilute acid hydrolysis contained mostly monosaccharides, mainly xylose (26 g L-1), glucose (10 g L-1) and arabinose (3 g L-1), and had a low content of microbial growth inhibitors. This hydrolyzate proved appropriate for use as culture medium for exopolysaccharide production, using bacteria or microbial consortia. The maximum conversion of monosaccharides into xanthan gum was 0.87 g/g, and the maximum kefiran productivity was 0.07 g/(L h). This work shows the technical feasibility of using cherimoya seeds, and similar materials, as potential feedstocks, opening new perspectives for upgrading them within the biorefinery framework.


Forecasting is the basis for strategic, tactical, and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time-series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies of more advanced prediction methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving its superiority over statistical ARIMA-GARCH models. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital-market indices than the traditional methods of time-series analysis. For this purpose a quantitative study was developed, based on financial economic indices, with two supervised-learning feedforward ANN models whose structures consisted of 20 inputs, 90 neurons in one hidden layer, and one output (the Ibovespa). These models used backpropagation, a tangent-sigmoid hidden activation function, and a linear output function.
Given the aim of analyzing how well the ANN method predicts the Ibovespa, this analysis was performed by comparing its results with those of a GARCH(1,1) time-series predictive model. Once both methods (ANN and GARCH) were applied, the results were analyzed by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U, and forecast-encompassing tests. The models developed by means of ANNs had lower MSE, RMSE, and MAE than the GARCH(1,1) model, and the Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the ANN based on returns had lower precision-indicator values than the ANN based on prices, the forecast-encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than the traditional time-series models, represented by the GARCH model.
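A minimal sketch of the architecture the study describes: sliding-window inputs, one tanh hidden layer with a linear output, trained by plain batch backpropagation. The series, network size, learning rate, and iteration count here are all hypothetical stand-ins (the study itself used 20 inputs, 90 hidden neurons, and the Ibovespa data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "index" series standing in for the real data.
t = np.arange(300)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(300)

def windows(x, lag):
    """Sliding-window dataset: lag past values predict the next one."""
    X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
    return X, x[lag:]

lag, hidden = 20, 10
X, y = windows(series, lag)
W1 = 0.1 * rng.standard_normal((lag, hidden))
W2 = 0.1 * rng.standard_normal(hidden)

mse_before = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

lr = 0.01
for _ in range(500):          # plain batch backpropagation
    H = np.tanh(X @ W1)       # tangent-sigmoid hidden activation
    err = H @ W2 - y          # linear output minus target
    W2 -= lr * H.T @ err / len(y)
    W1 -= lr * X.T @ ((err[:, None] * W2) * (1.0 - H ** 2)) / len(y)

mse_after = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```

Evaluating the trained network's residuals with MSE, RMSE, and MAE, as the study does against a GARCH(1,1) baseline, is a straightforward extension of the `mse_after` line.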