945 results for Data modeling
Abstract:
The MAP-i Doctoral Programme in Informatics, of the Universities of Minho, Aveiro and Porto
Abstract:
Doctoral Thesis in Civil Engineering
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto
Abstract:
Extreme value models are widely used in different areas. The Birnbaum–Saunders distribution is receiving considerable attention due to its physical arguments and its good properties. We propose a methodology based on extreme value Birnbaum–Saunders regression models, which includes model formulation, estimation, inference and checking. We further conduct a simulation study to evaluate its performance. A statistical analysis of real-world extreme value environmental data using the methodology is provided as an illustration.
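As a concrete illustration of the kind of fitting and checking described above, the following is a minimal Python sketch, not the authors' code, that fits SciPy's Birnbaum–Saunders distribution (exposed as scipy.stats.fatiguelife) to synthetic block maxima and runs a simple goodness-of-fit check; all data and parameter values are invented for illustration.

```python
# Minimal sketch: fit a Birnbaum-Saunders (fatigue-life) distribution to
# synthetic environmental block maxima and check the fit. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical daily environmental series over 30 "years"; keep annual maxima.
series = rng.gamma(shape=2.0, scale=1.5, size=365 * 30)
block_maxima = series.reshape(30, 365).max(axis=1)

# Maximum likelihood fit of the Birnbaum-Saunders distribution (location fixed at 0).
shape, loc, scale = stats.fatiguelife.fit(block_maxima, floc=0.0)
print(f"shape={shape:.3f}, scale={scale:.3f}")

# Basic model checking: Kolmogorov-Smirnov goodness-of-fit test.
ks = stats.kstest(block_maxima, "fatiguelife", args=(shape, loc, scale))
print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
```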
Abstract:
Pirarucu (Arapaima gigas) has been one of the most important natural fishing resources of the Amazon region. Due to its economic importance and the need to preserve the species, field research concerning the habits and behavior of the pirarucu has been increasing over the last 20 years. The aim of this paper is to present a mathematical model for the pirarucu population dynamics that accounts for the species' peculiarities, particularly the male parental care over the offspring. The solution of the dynamical system indicates three possible equilibrium points for the population. The first corresponds to extinction; the third corresponds to a stable population close to the environmental carrying capacity. The second corresponds to an unstable equilibrium located between extinction and full use of the carrying capacity. It is shown that lack of male parental care closes the gap between the unstable equilibrium point and the stable non-trivial equilibrium point. If guarding failure reaches a critical level, the two points coincide and the population tends irreversibly to extinction. If some event tends to destabilize the population equilibrium, such as inadequate parental care, the model responds so as to restore the trajectory towards the stable equilibrium point, avoiding the route to extinction. The parameters introduced to solve the system of equations are partially derived from limited but reliable field data collected at the Mamirauá Sustainable Development Reserve (MSDR) in the Brazilian Amazon region.
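The three equilibria described above (extinction, an unstable threshold, and a stable point near carrying capacity) are the signature of a strong Allee effect; the sketch below is an illustrative stand-in for the paper's equations, with the threshold parameter A acting as a proxy for the strength of male parental care, and all parameter values hypothetical.

```python
# Illustrative Allee-effect growth model: equilibria at N=0 (extinction),
# N=A (unstable threshold) and N=K (stable, near carrying capacity).
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 1000.0          # hypothetical growth rate and carrying capacity
A = 150.0                   # hypothetical Allee threshold set by parental care

def allee(t, n):
    return r * n * (n / A - 1.0) * (1.0 - n / K)

for n0 in (100.0, 200.0):   # start below vs. above the unstable threshold
    sol = solve_ivp(allee, (0.0, 60.0), [n0])
    print(f"N(0)={n0:>6.0f} -> N(60)={sol.y[0, -1]:8.1f}")
# Starting below A the population collapses toward extinction; starting
# above A it converges to the stable equilibrium near K. Weaker parental
# care corresponds to pushing A toward K, shrinking the safe region.
```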
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
The spatial distribution of forest biomass in the Amazon is heterogeneous, varying in both space and time, especially across the different vegetation types of this biome. Biomass estimates for this region vary significantly depending on the approach applied and the data set used for modeling. In this context, this study aimed to evaluate three different geostatistical techniques for estimating the spatial distribution of aboveground biomass (AGB). The selected techniques were: 1) ordinary least-squares regression (OLS), 2) geographically weighted regression (GWR), and 3) geographically weighted regression-kriging (GWR-K). These techniques were applied to the same field dataset, using the same environmental variables derived from cartographic information and high-resolution remote sensing data (RapidEye). The study was carried out in the Amazon rainforest of Sucumbíos, Ecuador. The results showed that GWR-K, a hybrid technique, provided statistically satisfactory estimates with the lowest prediction error of the three techniques. Furthermore, we observed that 75% of the AGB was explained by the combination of remote sensing data and environmental variables, with forest type being the most important variable for estimating AGB. It should be noted that while the use of high-resolution images significantly improves the estimation of the spatial distribution of AGB, processing this information is computationally demanding.
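The contrast between a global OLS fit and a geographically weighted regression can be sketched on synthetic data as follows; this is not the study's code, the covariate and AGB values are invented, and GWR-K would additionally krige the GWR residuals.

```python
# Sketch: global OLS vs. geographically weighted regression (GWR), where GWR
# refits a weighted least-squares model at each location using a Gaussian
# distance kernel. Synthetic data with a spatially drifting elevation effect.
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))     # plot locations (km)
elev = rng.uniform(200, 600, size=n)          # environmental covariate
slope = 0.5 + 0.01 * coords[:, 0]             # sensitivity drifts eastwards
agb = 50 + slope * elev + rng.normal(0, 10, size=n)

X = np.column_stack([np.ones(n), elev])

# Global OLS: one coefficient vector for the whole study area.
beta_ols, *_ = np.linalg.lstsq(X, agb, rcond=None)

def gwr_coef(target, bandwidth=20.0):
    """Weighted least squares centred on one plot (Gaussian kernel weights)."""
    d = np.linalg.norm(coords - coords[target], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ agb)

print("OLS elevation slope:   ", round(beta_ols[1], 3))
print("GWR slope at plot 0:   ", round(gwr_coef(0)[1], 3))
print("GWR slope at plot 150: ", round(gwr_coef(150)[1], 3))
```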
Abstract:
The influence of the hip joint formulation on the kinematic response of a model of human gait is investigated throughout this work. To accomplish this goal, the fundamental issues of modeling a planar hip joint within the framework of multibody systems are revisited. In particular, formulations for ideal, dry, and lubricated revolute joints are described and utilized to model the interaction of the femur head inside the acetabulum of the hip bone. In this process, the main kinematic and dynamic aspects of hip joints are analyzed. In a simple manner, the forces generated during human gait, for both dry and lubricated hip joint models, are computed in terms of the system's state variables and subsequently introduced into the dynamic equations of motion of the multibody system as external generalized forces. Moreover, a human multibody model is considered that incorporates the different approaches for the hip articulation, namely the ideal joint, dry, and lubricated models. Finally, several computational simulations based on the different approaches are performed, and the main results are presented and compared to identify differences among the methodologies and procedures adopted in this work. The input conditions to the models correspond to experimental data captured from an adult male during normal gait. In general, the obtained results in terms of positions do not differ significantly among the hip joint models. In sharp contrast, the plotted velocities and accelerations vary significantly. The effect of the hip joint modeling approach is clearly measurable and visible in terms of peaks and oscillations of the velocities and accelerations. In general, with the dry hip model, intra-joint force peaks can be observed, which can be associated with multiple impacts between the femur head and the cup. In turn, when the lubricant is present, the system's response tends to be smoother due to the damping effects of the synovial fluid.
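The dry-joint behavior described above is commonly modeled with a Hertz-type contact force law augmented with a damping term; the following is a hedged sketch of one such law (the Lankarani–Nikravesh form), not necessarily the formulation used in the paper, with purely illustrative parameter values.

```python
# Sketch of a dry clearance-joint normal contact force: Hertz-type stiffness
# term plus a penetration-velocity dependent hysteresis (damping) term.
def dry_joint_force(delta, delta_dot, k=5.0e9, n=1.5, e=0.9, delta_dot0=0.5):
    """Contact force (N) for penetration delta (m) and penetration rate (m/s)."""
    if delta <= 0.0:                     # no penetration, no contact force
        return 0.0
    hysteresis = 1.0 + 3.0 * (1.0 - e**2) / 4.0 * (delta_dot / delta_dot0)
    return k * delta**n * hysteresis

# Example: 10 micrometres of penetration approaching at 0.1 m/s.
print(f"{dry_joint_force(10e-6, 0.1):,.1f} N")
```

A lubricated joint model would instead (or additionally) include squeeze-film forces from the synovial fluid, which is what smooths the response reported above.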
Abstract:
Mathematical and computational models play an essential role in understanding cellular metabolism. They are used as platforms to integrate current knowledge on a biological system and to systematically test and predict the effect of manipulations of such systems. Recent advances in genome sequencing techniques have facilitated the reconstruction of genome-scale metabolic networks for a wide variety of organisms, from microbes to human cells. These models have been successfully used in multiple biotechnological applications. Despite these advancements, modeling cellular metabolism still presents many challenges. The aim of this Research Topic is not only to expose and consolidate the state of the art in metabolic modeling approaches, but also to push this frontier beyond its current edge through the introduction of innovative solutions. The articles presented in this e-book address some of the main challenges in the field, including the integration of different modeling formalisms, the integration of heterogeneous data sources into metabolic models, the explicit representation of other biological processes during phenotype simulation, and standardization efforts in the representation of metabolic models and simulation results.
Abstract:
Genome-scale metabolic models are valuable tools in the metabolic engineering process, owing to their ability to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic-tree-based curation of genome annotations; (4) assembly and standardization of a biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of a core metabolic model; (7) generation of the biomass composition reaction; (8) completion of the draft metabolic model; (9) curation of the metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
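The ten-step workflow can be pictured as a simple sequential pipeline; the sketch below is schematic only, with placeholder step functions standing in for real assembly, annotation, and curation tools.

```python
# Schematic reconstruction pipeline: each step takes and returns a shared
# "draft model" state that it enriches. Steps here are placeholders only.
from typing import Callable, Dict, List

DraftModel = Dict[str, object]

def run_pipeline(reads: str, steps: List[Callable[[DraftModel], DraftModel]]) -> DraftModel:
    model: DraftModel = {"reads": reads}
    for step in steps:
        model = step(model)          # each step enriches the draft model
    return model

# Placeholder implementations of the first two steps in the chapter.
def assemble_genome(m: DraftModel) -> DraftModel:
    return {**m, "contigs": f"assembly of {m['reads']}"}

def annotate_genome(m: DraftModel) -> DraftModel:
    return {**m, "annotations": "structural + functional annotation"}

draft = run_pipeline("sample_reads.fastq", [assemble_genome, annotate_genome])
print(sorted(draft))
```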
Abstract:
We review recent likelihood-based approaches to modeling demand for medical care. A semi-nonparametric model along the lines of Cameron and Johansson's Poisson polynomial model, but using a negative binomial baseline model, is introduced. We apply these models, as well as a semiparametric Poisson model, a hurdle semiparametric Poisson model, and finite mixtures of negative binomial models, to six measures of health care usage taken from the Medical Expenditure Panel Survey. We conclude that most of the models lead to statistically similar results, both in terms of information criteria and conditional and unconditional prediction. This suggests that applied researchers may not need to be overly concerned with the choice of which of these models to use when analyzing data on health care demand.
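A minimal example of the kind of comparison described above, fitting two of the simpler count models (Poisson and negative binomial) to synthetic data (not MEPS) and comparing information criteria with statsmodels:

```python
# Sketch: compare Poisson and negative binomial count-data models by AIC
# on synthetic, overdispersed "doctor visits" data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
age = rng.uniform(20, 80, size=n)
X = sm.add_constant(age / 10.0)

# Overdispersed counts via a gamma-Poisson mixture (i.e. negative binomial).
mu = np.exp(-1.0 + 0.25 * (age / 10.0))
visits = rng.poisson(mu * rng.gamma(shape=1.5, scale=1.0 / 1.5, size=n))

poisson_fit = sm.Poisson(visits, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(visits, X).fit(disp=False)

print(f"Poisson AIC:           {poisson_fit.aic:10.1f}")
print(f"Negative binomial AIC: {negbin_fit.aic:10.1f}")
```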
Ab initio modeling and molecular dynamics simulation of the alpha 1b-adrenergic receptor activation.
Abstract:
This work describes the ab initio procedure employed to build an activation model for the alpha 1b-adrenergic receptor (alpha 1b-AR). The first version of the model was progressively modified and extended by means of a many-step iterative procedure in which the model was experimentally validated at each upgrading step. A combined simulation (molecular dynamics) and experimental mutagenesis approach was used to determine the structural and dynamic features characterizing the inactive and active states of alpha 1b-AR. The latest version of the model has been successfully challenged with respect to its ability to interpret and predict the functional properties of a large number of mutants. The iterative approach employed to describe alpha 1b-AR activation in terms of molecular structure and dynamics allows the model to be further elaborated so that it can predict and interpret an ever-increasing amount of experimental data.
Abstract:
One of the largest resources for biological sequence data is the vast number of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting them while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains performance in detecting coding sequences.
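The core machinery here, a hidden Markov model decoded with the Viterbi algorithm, can be illustrated with a toy two-state (coding vs. non-coding) model; this is far simpler than the paper's mRNA model, and the emission tables below are invented stand-ins for codon-usage-derived probabilities.

```python
# Toy two-state HMM over nucleotides with Viterbi decoding of coding vs.
# non-coding regions. Probabilities are illustrative only.
import numpy as np

states = ["noncoding", "coding"]
alphabet = {"A": 0, "C": 1, "G": 2, "T": 3}
start = np.log([0.5, 0.5])
trans = np.log([[0.95, 0.05],     # noncoding -> {noncoding, coding}
                [0.10, 0.90]])    # coding    -> {noncoding, coding}
emit = np.log([[0.30, 0.20, 0.20, 0.30],   # noncoding: roughly uniform bases
               [0.20, 0.30, 0.30, 0.20]])  # coding: GC-rich bias

def viterbi(seq):
    obs = [alphabet[b] for b in seq]
    v = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans          # score of each state transition
        back.append(scores.argmax(axis=0))   # best predecessor per state
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for ptr in reversed(back):               # trace back the best path
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi("ATATATGCGCGCGCATAT"))
```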
Abstract:
Research project carried out during a stay at the National Oceanography Centre, Southampton (NOCS), Great Britain, between May and July 2006. The possibility of obtaining an accurate estimate of sea surface salinity (SSS) is important for investigating and predicting the extent of climate change. The Soil Moisture and Ocean Salinity (SMOS) mission was selected by the European Space Agency (ESA) to obtain sea surface salinity maps on a global scale with a short revisit time. Before the launch of SMOS, the horizontal variability of SSS and the potential of the data retrieved from SMOS measurements to reproduce known oceanographic behavior are to be analyzed. The overall objective is to fill the existing gap between reliable input/auxiliary data sources and the tools developed to simulate and process the data acquired under the SMOS configuration. The SMOS End-to-end Performance Simulator (SEPS) is an ad hoc simulator developed by the Universitat Politècnica de Catalunya (UPC) to generate data according to the SMOS configuration. Input data for SEPS were taken from the Ocean Circulation and Climate Advanced Modeling (OCCAM) project, used at NOCS, at different spatial resolutions. By modifying SEPS to accept the OCCAM data as input, simulated brightness temperature data were obtained for one month of ascending passes covering the selected area. The tasks carried out during the stay at NOCS aimed to provide a reliable technique for performing external calibration and thus cancelling the bias, a methodology for temporally averaging the different acquisitions during ascending passes, and to determine the best configuration of the cost function before exploiting and investigating the potential of the SEPS/OCCAM data for deriving retrieved SSS with high-resolution patterns.
Abstract:
The breakdown of the Bretton Woods system and the adoption of generalized floating exchange rates ushered in a new era of exchange rate volatility and uncertainty. This increased volatility led economists to search for economic models able to describe observed exchange rate behavior. In the present paper we propose more general STAR transition functions which encompass both threshold nonlinearity and asymmetric effects. Our framework allows for a gradual adjustment from one regime to another, and considers threshold effects by encompassing other existing models, such as TAR models. We apply our methodology to three different exchange rate data sets: the first for developing countries using official nominal exchange rates, the second for emerging market economies using black market exchange rates, and the third for OECD economies.
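The building block being generalized here, a logistic smooth-transition (STAR) specification, can be sketched as follows; this illustrative simulation is not the authors' generalized specification, and as the smoothness parameter gamma grows the logistic transition approaches a step function, recovering a TAR model.

```python
# Sketch of a two-regime logistic STAR process:
#   y_t = (phi1*(1 - G(y_{t-1})) + phi2*G(y_{t-1})) * y_{t-1} + eps_t,
# where G is the logistic transition function. Parameters are illustrative.
import numpy as np

def transition(s, gamma, c):
    """Logistic transition function G(s; gamma, c) in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

rng = np.random.default_rng(3)
phi1, phi2, gamma, c = 0.9, 0.3, 5.0, 0.0
y = np.zeros(500)
for t in range(1, 500):
    g = transition(y[t - 1], gamma, c)             # regime weight at time t
    y[t] = (phi1 * (1 - g) + phi2 * g) * y[t - 1] + rng.normal(scale=0.1)

print("sample mean:", round(y.mean(), 3),
      "| G at y=0.5:", round(transition(0.5, gamma, c), 3))
```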