Abstract:
Eukaryotic translation initiation factor 5A (eIF5A) is a highly conserved protein that is essential for cell viability. This factor is the only protein known to contain the unique and essential amino acid residue hypusine. This work focused on the structural and functional characterization of Saccharomyces cerevisiae eIF5A. The tertiary structure of yeast eIF5A was modeled based on the structure of its Leishmania mexicana homologue, and this model was used to predict the structural localization of new site-directed and randomly generated mutations. Most of the 40 new mutants exhibited phenotypes that resulted from eIF5A protein-folding defects. Our data provided evidence that the C-terminal alpha-helix present in yeast eIF5A is an essential structural element, whereas the eIF5A N-terminal 10-amino-acid extension, which is not present in archaeal eIF5A homologs, is not. Moreover, the mutants containing substitutions at or in the vicinity of the hypusine modification site displayed nonviable or temperature-sensitive phenotypes and were defective in hypusine modification. Interestingly, two of the temperature-sensitive strains produced stable mutant eIF5A proteins - eIF5A(K56A) and eIF5A(Q22H,L93F) - and showed defects in protein synthesis at the restrictive temperature. Our data revealed important structural features of eIF5A that are required for its vital role in cell viability and underscored an essential function of eIF5A in the translation step of gene expression.
Abstract:
Schistosomiasis is considered the second most important tropical parasitic disease, with severe socioeconomic consequences for millions of people worldwide. Schistosoma mansoni, one of the causative agents of human schistosomiasis, is unable to synthesize purine nucleotides de novo, which makes the enzymes of the purine salvage pathway important targets for antischistosomal drug development. In the present work, we describe the development of a pharmacophore model for ligands of S. mansoni purine nucleoside phosphorylase (SmPNP) as well as a pharmacophore-based virtual screening approach, which resulted in the identification of three thioxothiazolidinones (1-3) with substantial in vitro inhibitory activity against SmPNP. Synthesis, biochemical evaluation, and structure-activity relationship investigations led to the successful development of a small set of thioxothiazolidinone derivatives harboring a novel chemical scaffold as new competitive inhibitors of SmPNP in the low-micromolar range. Seven compounds were identified with IC50 values below 100 μM. The most potent inhibitors, 7, 10, and 17, with IC50 values of 2, 18, and 38 μM, respectively, could represent new lead compounds for the further development of schistosomiasis therapy.
Abstract:
Regression models for the mean quality-adjusted survival (QAS) time are specified from the hazard functions of transitions between two states, so the mean QAS time may be a complex function of covariates. We discuss a regression model for the mean QAS time based on pseudo-observations, which has the advantage of directly modeling the effect of covariates on the QAS time. Both Monte Carlo simulations and a real data set are studied. Copyright (C) 2009 John Wiley & Sons, Ltd.
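The pseudo-observation construction itself is a simple jackknife: for unit i, the pseudo-observation is n·θ̂ − (n−1)·θ̂^(−i), where θ̂^(−i) is the estimate computed with unit i left out; the pseudo-observations are then regressed on covariates, typically via generalized estimating equations. A minimal sketch, with a hypothetical helper name and toy data, not the paper's implementation:

```python
def pseudo_observations(data, estimator):
    """Jackknife pseudo-observations: theta_i = n*theta - (n-1)*theta_(-i)."""
    n = len(data)
    theta_full = estimator(data)
    return [n * theta_full - (n - 1) * estimator(data[:i] + data[i + 1:])
            for i in range(n)]

# Sanity check with the sample mean: its pseudo-observations are the data itself.
mean = lambda xs: sum(xs) / len(xs)
print(pseudo_observations([2.0, 4.0, 6.0, 8.0], mean))
```

For the mean QAS time, `estimator` would be replaced by a censoring-adjusted quality-adjusted survival estimator evaluated on the full and leave-one-out samples.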
Abstract:
Birnbaum and Saunders (1969a) introduced a probability distribution that is commonly used in reliability studies. Based on this distribution, the so-called beta-Birnbaum-Saunders distribution is proposed here for the first time for fatigue life modeling. Various properties of the new model, including expansions for the moments, the moment generating function, mean deviations, and the density function of the order statistics and their moments, are derived. We discuss maximum likelihood estimation of the model's parameters. The superiority of the new model is illustrated by means of three real failure data sets. (C) 2010 Elsevier B.V. All rights reserved.
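For reference, the construction follows the standard beta-G composition; the forms below are the textbook expressions, stated as a sketch rather than quoted from the paper:

```latex
% Baseline Birnbaum-Saunders CDF with shape \alpha > 0 and scale \beta > 0:
G(t;\alpha,\beta) = \Phi\!\left[\frac{1}{\alpha}\left(\sqrt{\frac{t}{\beta}}
    - \sqrt{\frac{\beta}{t}}\right)\right], \qquad t > 0.
% Beta-Birnbaum-Saunders CDF: the incomplete beta function ratio applied to G,
% with two extra shape parameters a > 0 and b > 0:
F(t) = I_{G(t;\alpha,\beta)}(a,b)
     = \frac{1}{B(a,b)} \int_{0}^{G(t;\alpha,\beta)}
       \omega^{a-1} (1-\omega)^{b-1}\, d\omega .
```

Setting a = b = 1 recovers the ordinary Birnbaum-Saunders distribution.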
Abstract:
Mathematical modeling has been extensively applied to the study and development of fuel cells. In this work, the objective is to characterize a mechanistic model for the anode of a direct ethanol fuel cell and perform appropriate simulations. The software Comsol Multiphysics® (with the Chemical Engineering Module) was used in this work. Comsol Multiphysics® is an interactive environment for modeling scientific and engineering applications using partial differential equations (PDEs). Based on the finite element method, it provides speed and accuracy for several applications. The mechanistic model developed here can supply details of the physical system, such as the concentration profiles of the components within the anode and the coverage of the adsorbed species on the electrode surface. The anode overpotential-current relationship can also be obtained. To validate the anode model presented in this paper, experimental data obtained with a single fuel cell operating with an ethanol solution at the anode were used. (C) 2008 Elsevier B.V. All rights reserved.
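A minimal illustration of the kind of PDE such a model solves (all parameter values below are assumed for the sketch, not taken from the paper) is a one-dimensional transient diffusion-reaction balance for the normalized ethanol concentration across the anode catalyst layer, integrated with explicit finite differences:

```python
import numpy as np

# Illustrative 1-D diffusion-reaction balance for ethanol concentration c(x, t)
# across the anode catalyst layer: dc/dt = D * d2c/dx2 - k * c.
D, k = 1e-9, 0.5            # diffusivity (m^2/s), first-order consumption (1/s)
L, n = 1e-4, 51             # layer thickness (m), number of grid points
dx = L / (n - 1)
dt = 0.2 * dx**2 / D        # satisfies the explicit stability limit D*dt/dx^2 < 0.5

c = np.zeros(n)
c[0] = 1.0                  # normalized bulk concentration at the channel side
for _ in range(20000):      # march to (near) steady state
    lap = (c[:-2] - 2.0 * c[1:-1] + c[2:]) / dx**2
    c[1:-1] += dt * (D * lap - k * c[1:-1])
    c[-1] = c[-2]           # zero-flux condition at the current collector
```

The profile decays toward the current collector; a full mechanistic model would add adsorbate coverage balances and electrochemical kinetics at the electrode surface.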
Abstract:
We report in this work a study of the interaction between formic acid and an oxidized platinum surface under open circuit conditions. The investigation was carried out with the aid of in situ infrared spectroscopy, and the results were analyzed in terms of a mathematical model and numerical simulations. We found that during the first seconds of the interaction a small amount of CO2 is produced and absolutely no adsorbed CO is observed. A sudden drop in potential then follows, accompanied by a steep increase first in CO2 production and then in adsorbed CO. The steep transient was rationalized in terms of an autocatalytic production of free platinum sites, which enhances the overall rate of reaction. Modeling and simulation showed nearly quantitative agreement with the experimental observations and provided further insight into some experimentally inaccessible variables, such as the fraction of free surface sites. Finally, based on the understanding provided by the combined experimental and theoretical approach, we discuss the general aspects influencing the open circuit transient.
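The autocatalytic step can be caricatured with logistic kinetics; this is a sketch, not the authors' full mechanism, and the rate constant, time step, and initial free-site fraction are all assumed values:

```python
# Logistic caricature of the autocatalytic step: the fraction s of free
# (reduced) Pt sites grows as ds/dt = k * s * (1 - s), so a long induction
# period is followed by a steep transient.
k, dt = 5.0, 1e-3
s = 0.001                      # tiny initial fraction of free sites
trace = []
for _ in range(4000):          # 4 s of simulated time, forward Euler
    s += dt * k * s * (1 - s)
    trace.append(s)
```

The sigmoid shape of `trace` mirrors the qualitative picture above: almost nothing happens while the free-site fraction is small, then the positive feedback produces a steep rise.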
Abstract:
Using a physically based model, the microstructural evolution of Nb microalloyed steels during rolling in SSAB Tunnplåt's hot strip mill was modeled. The model describes the evolution of dislocation density, the creation and diffusion of vacancies, dynamic and static recovery through climb and glide, subgrain formation and growth, dynamic and static recrystallization, and grain growth. The model also describes the dissolution and precipitation of particles, and the impeding effect of solute drag and particles on grain growth and recrystallization is accounted for. During hot strip rolling of Nb steels, Nb in solid solution retards recrystallization due to solute drag, and at lower temperatures strain-induced precipitation of Nb(C,N) may occur, which effectively retards recrystallization. The flow stress behavior during hot rolling was calculated, with mean flow stress values obtained both from the model and from measured mill data. The model showed that solute drag has an essential effect on recrystallization during hot rolling of Nb steels.
Predictive models for chronic renal disease using decision trees, Naïve Bayes and case-based methods
Abstract:
Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. Hidden patterns and relationships often remain undiscovered, and advanced data mining techniques can help remedy this. This thesis deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms, applied to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output was evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour (KNN) algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Furthermore, sensitivity and specificity are used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall rate in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
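Both measures follow directly from the confusion-matrix counts; the function name and the toy label vectors below are illustrative only, not the thesis data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels: 1 = chronic renal disease present, 0 = absent.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)   # 0.75 and 2/3
```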
Abstract:
Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare, via simulation studies, the performance of three different statistical principles: marginal likelihood, extended likelihood, and Bayesian analysis. Real data on contact wrestling are used for illustration.
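The overdispersion such models accommodate is easy to see by simulation (illustrative parameter values; this is not the study's wrestling data set): conditional on a log-normal random effect with unit mean, counts are Poisson, and the marginal variance exceeds the marginal mean:

```python
import numpy as np

# Conditional on a log-normal random effect u with E[u] = 1, counts are
# Poisson(lam * u); marginally Var(Y) = lam + lam^2 * Var(u), which exceeds
# the plain-Poisson variance lam.
rng = np.random.default_rng(42)
lam, sigma = 4.0, 0.6
u = rng.lognormal(mean=-sigma**2 / 2.0, sigma=sigma, size=200_000)  # E[u] = 1
y = rng.poisson(lam * u)
print(y.mean(), y.var())   # mean near lam = 4; variance near 4 + 16*(e^0.36 - 1)
```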
Abstract:
The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It furthers the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that can not be easily described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed over the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented through the use of symbolic values that differ in orders of magnitude.
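In miniature, the selection step can be viewed as searching over fragment combinations subject to compatibility constraints while maximizing preference. The fragments, constraint, and scores below are invented toy values, and the exhaustive search merely stands in for the paper's constraint-satisfaction machinery:

```python
from itertools import product

# Toy preference-CSP sketch: pick one model fragment per phenomenon, discard
# combinations that violate a compatibility constraint, and keep the
# admissible combination with the highest total preference.
fragments = {
    "growth":    {"logistic": 3, "exponential": 1},
    "predation": {"lotka-volterra": 2, "none": 0},
}

def compatible(choice):
    # hypothetical dynamic constraint between partial models
    return not (choice["growth"] == "exponential"
                and choice["predation"] == "lotka-volterra")

names = list(fragments)
best, best_score = None, -1
for combo in product(*(fragments[n] for n in names)):
    choice = dict(zip(names, combo))
    if not compatible(choice):
        continue
    score = sum(fragments[n][choice[n]] for n in names)
    if score > best_score:
        best, best_score = choice, score
print(best, best_score)
```

A real compositional modeller would replace the brute-force loop with constraint propagation and order-of-magnitude preference reasoning, but the admissibility-plus-preference structure is the same.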
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization in a large class of water distribution networks. In general, this type of technique, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made, considering a certain leak size pattern that may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on the one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. It can be seen as an upgrade of the above-mentioned direct modeling approach, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the sizes of the leaks. This is an inverse/direct modeling method that seeks to benefit from both approaches: on the one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and leak sizes, and on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas close to the estimated hotspots. The application of the resulting method in a district metered area (DMA) of the Barcelona water distribution network is provided and discussed. The obtained results show that leakage detection and localization under multiple-leak scenarios can be performed efficiently following a simple procedure.
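The genetic-algorithm layer can be sketched as follows, with a toy stand-in for the hydraulic model; the influence function, network size, leak-size grid, and GA settings are all assumed illustrative values, not those of the Barcelona case study:

```python
import random

# GA sketch for leak localization: candidates are (node, size) pairs scored
# against synthetic pressure measurements produced by a toy hydraulic model.
random.seed(1)
N_NODES = 8
SIZES = [0.5 * s for s in range(1, 9)]            # candidate leak sizes

def pressure_drop(node, size):
    # hypothetical sensitivity: a sensor responds less the farther it is
    return [size / (1 + abs(i - node)) for i in range(N_NODES)]

observed = pressure_drop(5, 2.0)                  # synthetic "measurements"

def error(ind):
    node, size = ind
    return sum((o - s) ** 2 for o, s in zip(observed, pressure_drop(node, size)))

pop = [(random.randrange(N_NODES), random.choice(SIZES)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=error)
    parents = pop[:10]                            # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a[0], b[1])                      # crossover: node from a, size from b
        if random.random() < 0.3:                 # mutation: fresh random candidate
            child = (random.randrange(N_NODES), random.choice(SIZES))
        children.append(child)
    pop = parents + children
best = min(pop, key=error)
print(best, error(best))
```

In the actual method, `pressure_drop` would be the accurate hydraulic model, candidates would encode multiple simultaneous leaks, and the direct model would then examine the network areas around the estimated hotspots.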
Abstract:
Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year. To clean up urban wastewater, new Federal Wastewater Systems Effluent Regulations (WSER), establishing national baseline effluent quality standards that are achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken due to cost and practicality. Instead, the City has relied on estimation methods and modelling approaches in the past, allowing funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study based on GIS-based hydrologic and hydraulic modelling. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all local and trunk sewers under heavy rain events; 3) analyzing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. This modelling approach has also allowed funds to be applied toward reducing and ultimately eliminating the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.
Abstract:
Existing distributed hydrologic models are complex and computationally demanding to use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, specific input/output data structures, and non-dynamic data interaction with pluggable software components inside the existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the commonly used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing, and it integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by application to two watersheds in Indiana for multiple rainfall events.
Abstract:
The focus of this thesis is the development and modeling of an interface architecture for interfacing analog signals in mixed-signal SoCs. We claim that the approach presented here achieves a wide frequency range and covers a large range of applications with constant performance, combined with digital configuration compatibility. Our primary assumptions are to use a fixed analog block and to provide application configurability in the digital domain, which leads to a mixed-signal interface. The use of a fixed analog block avoids the performance loss common to configurable analog blocks. Configurability in the digital domain makes it possible to use all existing tools for high-level design, simulation, and synthesis to implement the target application, with very good performance prediction. The proposed approach uses the concept of frequency translation (mixing) of the input signal followed by its conversion to the ΣΔ domain, which makes possible the use of a fairly constant analog block as well as uniform treatment of input signals from DC to high frequencies. The programmability is performed in the ΣΔ digital domain, where performance can be closely matched to the application specification. Theoretical and simulation models of the interface performance are developed for design space exploration and for physical design support. Two prototypes were built and characterized to validate the proposed model and to implement some application examples. The use of this interface as a multi-band parametric ADC and as a two-channel analog multiplier and adder is shown, and the multi-channel analog interface architecture is also presented. The characterization measurements support the main advantages of the proposed approach.
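A first-order sigma-delta loop, the basic building block of the ΣΔ-domain conversion mentioned above, can be sketched in a few lines; this is an illustrative 1-bit behavioral model, not the thesis design:

```python
# First-order sigma-delta modulator sketch: integrate the input-minus-feedback
# error and quantize to one bit; for a DC input x in [-1, 1], the ones-density
# of the output bitstream tracks (x + 1) / 2.
def sigma_delta(samples):
    integ, fb, bits = 0.0, 0.0, []
    for x in samples:
        integ += x - fb              # accumulate quantization error
        bit = 1 if integ >= 0.0 else 0
        fb = 1.0 if bit else -1.0    # 1-bit DAC feedback
        bits.append(bit)
    return bits

bits = sigma_delta([0.5] * 1000)
density = sum(bits) / len(bits)      # close to (0.5 + 1) / 2 = 0.75
```

The downstream digital processing (decimation filtering, channel programmability) is where the configurability described in the abstract would live.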
Abstract:
Exchange rate misalignment assessment has become more relevant in the recent period, particularly after the financial crisis of 2008. There are different methodologies to address real exchange rate misalignment, which is defined as the difference between the actual real effective exchange rate and some equilibrium norm. Different norms are available in the literature. Our paper aims to contribute to the literature by showing that the Behavioral Equilibrium Exchange Rate (BEER) approach adopted by Clark & MacDonald (1999), Ubide et al. (1999), Faruqee (1994), Aguirre & Calderón (2005) and Kubota (2009), among others, can be improved in the following two ways. The first consists of jointly modeling the real effective exchange rate, the trade balance, and the net foreign asset position. The second has to do with the possibility of explicitly testing over-identifying restrictions implied by economic theory, allowing the analyst to show that these restrictions are not falsified by the empirical evidence. If the economically based identifying restrictions are not rejected, it is also possible to decompose exchange rate misalignment into two pieces, one related to the long-run fundamentals of the exchange rate and the other related to external account imbalances. We also discuss some necessary conditions that should be satisfied for discarding trade balance information without compromising exchange rate misalignment assessment. A statistical (but not theoretical) identifying strategy for calculating exchange rate misalignment is also discussed. We illustrate the advantages of our approach by analyzing the Brazilian case. We show that the traditional approach disregards important information on external account equilibrium for this economy.