982 results for PREDICTIONS
Abstract:
The use of genome-scale metabolic models has been rapidly increasing in fields such as metabolic engineering. An important part of a metabolic model is the biomass equation, since this reaction will ultimately determine the predictive capacity of the model in terms of essentiality and flux distributions. Thus, in order to obtain a reliable metabolic model, the biomass precursors and their coefficients must be as precise as possible. Ideally, determination of the biomass composition would be performed experimentally, but when no experimental data are available it is established by approximation to closely related organisms. Computational methods, however, can extract some information from the genome, such as amino acid and nucleotide compositions. The main objectives of this study were to compare the biomass composition of several organisms and to evaluate how biomass precursor coefficients affect the predictability of several genome-scale metabolic models, by comparing predictions with experimental data from the literature. For that, the biomass macromolecular composition was experimentally determined, and the amino acid composition was both experimentally and computationally estimated, for several organisms. Sensitivity analysis studies were also performed with the Escherichia coli iAF1260 metabolic model concerning specific growth rates and flux distributions. The results obtained suggest that the macromolecular composition is conserved among related organisms. In contrast, experimental amino acid composition data show no similarities among related organisms. It was also observed that the impact of the macromolecular composition on specific growth rates and flux distributions is larger than that of the amino acid composition, even when data from closely related organisms are used.
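The computational estimate mentioned above can be illustrated with a minimal sketch: given annotated protein sequences (the toy sequences below are hypothetical), the genome-derived amino acid composition is simply the relative frequency of each residue.

```python
from collections import Counter

def amino_acid_composition(proteins):
    """Relative amino acid frequencies across a set of protein sequences."""
    counts = Counter()
    for seq in proteins:
        counts.update(seq)
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

# Two short, made-up protein sequences (one-letter amino acid codes).
composition = amino_acid_composition(["MKVLA", "MGGLA"])
# 2 of the 10 residues are methionine, so composition["M"] is 0.2
```

In practice such counts would be weighted by the expression level or copy number of each protein, which is one reason purely genome-derived compositions can differ from experimentally determined ones.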
Abstract:
Doctoral thesis in Basic Psychology
Abstract:
Genome-scale metabolic models are valuable tools in the metabolic engineering process, based on the ability of these models to integrate diverse sources of data to produce global predictions of organism behavior. At the most basic level, these models require only a genome sequence to construct, and once built, they may be used to predict essential genes, culture conditions, pathway utilization, and the modifications required to enhance a desired organism behavior. In this chapter, we address two key challenges associated with the reconstruction of metabolic models: (a) leveraging existing knowledge of microbiology, biochemistry, and available omics data to produce the best possible model; and (b) applying available tools and data to automate the reconstruction process. We consider these challenges as we progress through the model reconstruction process, beginning with genome assembly and culminating in the integration of constraints to capture the impact of transcriptional regulation. We divide the reconstruction process into ten distinct steps: (1) genome assembly from sequenced reads; (2) automated structural and functional annotation; (3) phylogenetic tree-based curation of genome annotations; (4) assembly and standardization of a biochemistry database; (5) genome-scale metabolic reconstruction; (6) generation of a core metabolic model; (7) generation of the biomass composition reaction; (8) completion of the draft metabolic model; (9) curation of the metabolic model; and (10) integration of regulatory constraints. Each of these ten steps is documented in detail.
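The core constraint behind steps (5)-(8) can be sketched in a few lines: a metabolic model is a stoichiometric matrix S, and a feasible flux vector v must satisfy the steady-state mass balance S·v = 0. The three-reaction network below is a hypothetical toy, not a real reconstruction.

```python
def steady_state_residual(S, v):
    """Residual of the mass-balance constraint S*v = 0 for a flux vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in S]

# Toy network: R1 imports metabolite A, R2 converts A to B, R3 drains B
# (standing in for a biomass reaction).
S = [
    [1, -1, 0],   # metabolite A: produced by R1, consumed by R2
    [0, 1, -1],   # metabolite B: produced by R2, consumed by R3
]
v = [10.0, 10.0, 10.0]  # equal fluxes keep both metabolites balanced
# steady_state_residual(S, v) -> [0.0, 0.0]
```

A real genome-scale model adds thousands of reactions, flux bounds, and an objective (typically the biomass reaction), and solves for v by linear programming.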
Abstract:
This research work explores a new way of presenting and representing information about patients in critical care: the use of a timeline to display information. This is accomplished through the development of an interactive Pervasive Patient Timeline that gives intensivists real-time access to an environment containing patients' clinical information from the moment they are admitted to the Intensive Care Unit (ICU) until their discharge. This solution allows intensivists to analyse data regarding vital signs, medication, exams, and data mining predictions, among others. Due to its pervasive features, intensivists can access the timeline anywhere and at any time, allowing them to make decisions when they need to be made. The platform is patient-centred and is prepared to support the decision process, allowing intensivists to provide better care to patients through the inclusion of clinical forecasts.
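A timeline such as the one described can be modelled as a chronologically ordered list of typed events; the class and field names below are hypothetical, not taken from the actual platform.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TimelineEvent:
    timestamp: datetime
    category: str    # e.g. "vital sign", "medication", "exam", "prediction"
    payload: dict

@dataclass
class PatientTimeline:
    patient_id: str
    events: list = field(default_factory=list)

    def add(self, event):
        """Insert an event and keep the timeline in chronological order."""
        self.events.append(event)
        self.events.sort(key=lambda e: e.timestamp)

    def between(self, start, end):
        """Events in a time window, e.g. to render one slice of the timeline."""
        return [e for e in self.events if start <= e.timestamp <= end]
```

Keeping predictions as just another event category is what lets clinical forecasts appear inline with vital signs and medication on the same display.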
Abstract:
Football is nowadays considered one of the most popular sports. In the betting world it has acquired an outstanding position, moving millions of euros during the period of a single football match. The lack of profitability of football betting users has been identified as a problem, and it gave origin to this research proposal, which analyses whether there is a way to support users in increasing the profits from their bets. Data mining models were induced with the purpose of supporting gamblers in increasing their profits in the medium/long term. While conscious that the models can fail, the results achieved by four of the seven targets in the models are encouraging and suggest that the system can help to increase profits. All defined targets have two possible classes to predict, for example, whether there are more or fewer than 7.5 corners in a single game. The data mining models for the targets more or fewer than 7.5 corners, 8.5 corners, 1.5 goals and 3.5 goals achieved the pre-defined thresholds. The models were implemented in a prototype, which is a pervasive decision support system. This system was developed to serve as an interface for any user, from an expert to a user with no knowledge of football games.
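The decision rule behind such a system can be sketched as an expected-value check: bet only when the model's probability, combined with the bookmaker's decimal odds, yields a positive expected profit. The numbers below are illustrative, not taken from the actual models.

```python
def expected_profit(p_model, decimal_odds, stake=1.0):
    """Expected profit of a bet given the model's probability of the outcome."""
    win = (decimal_odds - 1.0) * stake
    return p_model * win - (1.0 - p_model) * stake

def should_bet(p_model, decimal_odds):
    return expected_profit(p_model, decimal_odds) > 0.0

# If the model gives 60% to "more than 7.5 corners" at decimal odds of 1.9:
ev = expected_profit(0.60, 1.9)   # 0.6 * 0.9 - 0.4 = 0.14 per unit staked
```

A real system would also discount the model's error rate and the bookmaker's margin, which is why the pre-defined accuracy thresholds mentioned above matter.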
Abstract:
Doctoral thesis in Civil Engineering.
Abstract:
Particulate fouling tests were carried out using kaolin-water suspensions flowing through an annular heat exchanger with a copper inner tube. The flow rate was changed from test to test, but the fluid temperature and pH, as well as the particle concentration, were kept constant. In the lower range of fluid velocities (<0.5 m/s), the deposition process seemed to be controlled by mass transfer. The corresponding experimental transport fluxes were compared to the predictions obtained with several models, showing that diffusion governed particle transport. The absolute values of the mass transfer fluxes and their dependence on the Reynolds number were satisfactorily predicted by some of the models.
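The comparison with diffusion-controlled transport models can be illustrated with a generic Sherwood-number correlation; the Dittus-Boelter-type form and the numerical constants below are a common textbook choice, not necessarily the specific models used in the study.

```python
def deposition_flux(Re, Sc, D, d, c_bulk):
    """Diffusion-controlled transport flux of particles to the wall.

    Sh = 0.023 * Re**0.8 * Sc**(1/3)   (illustrative turbulent correlation)
    k  = Sh * D / d                    (mass transfer coefficient, m/s)
    N  = k * c_bulk                    (transport flux, kg m^-2 s^-1)
    """
    Sh = 0.023 * Re ** 0.8 * Sc ** (1.0 / 3.0)
    k = Sh * D / d
    return k * c_bulk
```

The Re**0.8 term is what produces the strong dependence of the transport flux on flow velocity that such experiments test against.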
Abstract:
Species introductions have altered host and parasite diversity throughout the world. In the case of introduced hosts, population age appears to be a good predictor of parasite richness. Habitat alteration is another variable that may impact host-parasite interactions by affecting the availability of intermediate hosts. The house sparrow (Passer domesticus (Linnaeus, 1758)) is a good model to test these predictions: it was introduced in several parts of the world and can be found across rural-urban gradients. A total of 160 house sparrows from Porto Alegre, state of Rio Grande do Sul, Brazil, were necropsied. Thirty house sparrows (19%) were parasitized with at least one of five helminth species (Digenea: Tamerlania inopina Freitas, 1951 and Eumegacetes sp.; Eucestoda: Choanotaenia passerina (Fuhrmann, 1907) Fuhrmann, 1932; Nematoda: Dispharynx nasuta (Rudolphi, 1819) Stiles & Hassall, 1920 and Cardiofilaria pavlovskyi Strom, 1937). Overall, there was no difference in the prevalence and intensity of infection of any parasite species, parasite richness, or community diversity between adult males and females, or between adults and juveniles. The number of infected sparrows among seasons, the richness of helminths, and the abundance of species were also similar between rural and urban landscapes. Only the prevalence of C. passerina varied seasonally (p = 0.0007). A decrease in the number of parasite species from the original range of P. domesticus (13) to its port of entrance in Brazil, the city of Rio de Janeiro (nine), and then to Porto Alegre (five) is compatible with the hypothesis that host population age is a good predictor of parasite richness.
Abstract:
One of the most important effects of intensive land use is the increase in nutrient concentrations in aquatic systems due to surface drainage. Moreover, the increase in precipitation in South America associated with global climate change could intensify these anthropic impacts through changes in the runoff pattern and a greater discharge of water into streams and rivers. The pampean streams are singular environments with high natural nutrient concentrations, which could increase even further if the predictions of global climate change for the area are met. In this context, the effect of experimental nutrient addition on macroinvertebrates in a lowland stream is studied. Samplings were carried out from March 2007 to February 2009 in two reaches (fertilized and unfertilized), upstream and downstream from the input of nutrients. The addition of nutrients caused an increase in the phosphorus concentration in the fertilized reach, which was not observed for the nitrogen concentration. Of all the macroinvertebrates studied, only two taxa showed significant differences in abundance after fertilization: Corbicula fluminea and Ostracoda. Our results reveal that the disturbance caused by the increase of nutrients on the benthic community depends on the basal nutrient concentrations. The weak response of macroinvertebrates to fertilization in the pampean streams could be due to their tolerance to high nutrient concentrations, related to their evolutionary history in streams naturally enriched with nutrients. Further research is still needed concerning the nutrient thresholds affecting macroinvertebrates and the adaptive advantages of taxa in naturally eutrophic environments. This information will allow for a better understanding of the processes of nutrient cycling and for the design of restoration measures in naturally eutrophic ecosystems.
Abstract:
This paper investigates the selection of governance forms in interfirm collaborations, taking into account the predictions from transaction costs and property rights theories. Transaction costs arguments are often used to justify the introduction of hierarchical controls in collaborations, but the ownership dimension of going from “contracts” to “hierarchies” has been ignored in the past, and with it the so-called “costs of ownership”. The theoretical results, tested with a sample of collaborations in which Spanish firms participate, indicate that the costs of ownership may offset the benefits of hierarchical controls and therefore limit their diffusion. Evidence is also reported of possible complementarities between reputation effects and forms of ownership that go together with hierarchical controls (i.e. joint ventures), in contrast with the generally assumed substitutability between the two.
Abstract:
We report on a series of experiments that examine bidding behavior in first-price sealed bid auctions with symmetric and asymmetric bidders. To study the extent of strategic behavior, we use an experimental design that elicits bidders' complete bid functions in each round (auction) of the experiment. In the aggregate, behavior is consistent with the basic equilibrium predictions for risk-neutral or homogeneous risk-averse bidders (extent of bid shading, average seller's revenues, and deviations from equilibrium). However, when we look at the extent of best-reply behavior and the shape of bid functions, we find that individual behavior is not in line with the received equilibrium models, although it exhibits strategic sophistication.
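The benchmark against which bid shading is measured can be made concrete for the standard symmetric case. Under the usual textbook assumptions (risk-neutral bidders, independent private values uniform on [0, 1]), the equilibrium bid function is b(v) = v(n-1)/n; the experiment's actual value distributions may differ.

```python
def equilibrium_bid(value, n_bidders):
    """Risk-neutral symmetric equilibrium bid, values uniform on [0, 1]:
    b(v) = v * (n - 1) / n, i.e. bids are shaded below the true value."""
    return value * (n_bidders - 1) / n_bidders

# With 4 bidders, a value of 0.8 is shaded down to a bid of 0.6;
# shading shrinks as competition increases.
```

Eliciting the whole bid function, as the design described above does, lets one compare the observed shape against this linear prediction point by point.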
Abstract:
Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for the prediction of the value of a variable based on the values of others, as in the case of linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules that state that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity. Thus, rule selection can be viewed as a choice problem among pairs of degree of accuracy and degree of complexity. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that finding out whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that obtains a certain value of R2 is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
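The source of the hardness result can be seen in miniature: the number of candidate variable subsets grows combinatorially, while evaluating any one subset (here via R2 for a single predictor, computed from the Pearson correlation) is easy. This is a toy sketch, not the paper's formal construction.

```python
from math import comb

def r_squared_single(x, y):
    """R^2 of a one-variable least-squares fit: the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def n_candidate_subsets(k, max_size):
    """How many variable subsets of size <= max_size must be searched."""
    return sum(comb(k, s) for s in range(1, max_size + 1))

# A perfectly linear predictor attains R^2 = 1.
fit = r_squared_single([1, 2, 3, 4], [2, 4, 6, 8])
# Even modest problems have large search spaces: with 20 variables there are
# 1350 subsets of size at most 3.
space = n_candidate_subsets(20, 3)
```

Checking each subset is cheap; it is the exploding number of subsets that makes "find a small set of variables reaching a given R2" hard.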
Abstract:
We report on a series of experiments that test the effects of an uncertain supply on the formation of bids and prices in sequential first-price auctions with private-independent values and unit demands. Supply is assumed uncertain when buyers do not know the exact number of units to be sold (i.e., the length of the sequence). Although we observe non-monotone behavior when supply is certain, as well as substantial overbidding, the data qualitatively support our price trend predictions and the risk-neutral Nash equilibrium model of bidding for the last stage of a sequence, whether supply is certain or not. Our study shows that behavior in these markets changes significantly in the presence of an uncertain supply, and that this can be explained by assuming that bidders formulate pessimistic beliefs about the occurrence of another stage.
Abstract:
Persistent oscillations in aggregate outcomes and high levels of heterogeneity in individual behavior are common findings in experimental data. Furthermore, it is not unusual to find significant deviations from aggregate Nash equilibrium predictions. In this paper, we employ an evolutionary model with boundedly rational agents to explain these findings. We use data from common property resource experiments (Casari and Plott, 2003). Instead of positing individual-specific utility functions, we model decision makers as selfish and identical. Agent interaction is simulated using an individual-learning genetic algorithm, where agents have constraints on their working memory, a limited ability to maximize, and experiment with new strategies. We show that the model replicates most of the patterns that can be found in common property resource experiments.
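The learning dynamic described can be sketched as a minimal individual-learning genetic algorithm; the payoff function and all parameters below are illustrative stand-ins, not the calibration used in the paper.

```python
import random

def learn_strategy(payoff, generations=200, pop_size=20, memory=5, seed=0):
    """Toy individual-learning GA: the agent keeps a bounded memory of its best
    candidate strategies, copies them, and experiments via random mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=payoff, reverse=True)
        elite = pop[:memory]                       # limited working memory
        pop = [rng.choice(elite) + rng.gauss(0.0, 0.5)  # experimentation
               for _ in range(pop_size)]
        pop = [min(10.0, max(0.0, x)) for x in pop]
    return max(pop, key=payoff)

# Concave toy payoff peaked at x = 4 (a stand-in for extraction effort in a
# common property resource game).
best = learn_strategy(lambda x: x * (8.0 - x))
```

Because selection is myopic and experimentation never stops, simulated play keeps oscillating around the optimum rather than settling on it, which is the kind of persistent aggregate noise the model is meant to reproduce.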
Abstract:
When two candidates of different quality compete in a one-dimensional policy space, the equilibrium outcomes are asymmetric and do not correspond to the median. There are three main effects. First, the better candidate adopts more centrist policies than the worse candidate. Second, the equilibrium is statistical, in the sense that it predicts a probability distribution of outcomes rather than a single degenerate outcome. Third, the equilibrium varies systematically with the level of uncertainty about the location of the median voter. We test these three predictions using laboratory experiments and find strong support for all three. We also observe some biases and show that they can be explained by quantal response equilibrium.
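The quantal response logic used to explain the observed biases can be shown with the standard logit form: choice probabilities are proportional to exponentiated payoffs, with a precision parameter lam (the symbol and the values below follow the usual textbook convention, not the paper's estimates).

```python
from math import exp

def logit_choice(utilities, lam):
    """Logit quantal response: P_i proportional to exp(lam * u_i).
    lam = 0 gives uniform randomization; large lam approaches best response."""
    weights = [exp(lam * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# With lam = 0 both actions are equally likely; with lam = 5 the better
# action is chosen almost surely.
noisy = logit_choice([1.0, 2.0], 0.0)
sharp = logit_choice([1.0, 2.0], 5.0)
```

Deviations from Nash predictions thus appear as systematic, payoff-sensitive noise rather than uniform error, which is what allows QRE to fit the biases reported above.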