878 results for physically based modeling
Abstract:
Mathematical modeling has been extensively applied to the study and development of fuel cells. In this work, the objective is to characterize a mechanistic model for the anode of a direct ethanol fuel cell and perform appropriate simulations. Comsol Multiphysics® (with the Chemical Engineering Module) was used in this work; it is an interactive environment for modeling scientific and engineering applications using partial differential equations (PDEs) and, being based on the finite element method, provides speed and accuracy for several applications. The mechanistic model developed here can supply details of the physical system, such as the concentration profiles of the components within the anode and the coverage of the adsorbed species on the electrode surface. The anode overpotential-current relationship can also be obtained. To validate the anode model presented in this paper, experimental data obtained with a single fuel cell operating with an ethanol solution at the anode were used. (C) 2008 Elsevier B.V. All rights reserved.
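The kind of concentration profile such a mechanistic model produces can be illustrated with a far simpler toy problem: steady 1D diffusion of a reactant with first-order consumption across the anode layer. All parameter values below are illustrative assumptions, not taken from the paper, which solves the full model in Comsol:

```python
import numpy as np

# Toy 1D diffusion-reaction model: D c'' - k c = 0 on [0, L],
# c(0) = c_bulk (Dirichlet), zero flux at x = L (Neumann).
D, k, L, c_bulk = 1e-9, 0.5, 1e-4, 100.0   # illustrative values only
N = 101
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Assemble the tridiagonal finite-difference system A c = b.
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0; b[0] = c_bulk                 # bulk concentration at the inlet
for i in range(1, N - 1):
    A[i, i - 1] = D / h**2
    A[i, i]     = -2.0 * D / h**2 - k
    A[i, i + 1] = D / h**2
A[-1, -1] = 1.0; A[-1, -2] = -1.0            # zero-flux boundary

c = np.linalg.solve(A, b)
print(c[0], c[-1])   # concentration decays from the bulk value into the layer
```

The profile decays on the length scale sqrt(D/k), which is the qualitative behavior a mechanistic anode model reproduces for each species.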
Abstract:
We report in this work a study of the interaction between formic acid and an oxidized platinum surface under open circuit conditions. The investigation was carried out with the aid of in situ infrared spectroscopy, and the results were analyzed in terms of a mathematical model and numerical simulations. It was found that during the first seconds of the interaction a small amount of CO2 is produced and no adsorbed CO is observed. A sudden drop in potential then follows, accompanied by a steep increase first in CO2 production and then in adsorbed CO. The steep transient was rationalized in terms of an autocatalytic production of free platinum sites, which enhances the overall rate of reaction. Modeling and simulation showed nearly quantitative agreement with the experimental observations and provided further insight into experimentally inaccessible variables such as the surface free sites. Finally, based on the understanding provided by the combined experimental and theoretical approach, we discuss the general aspects influencing the open circuit transient.
Predictive models for chronic renal disease using decision trees, naïve Bayes and case-based methods
Abstract:
Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. The discovery of hidden patterns and relationships often goes untapped, and advanced data mining techniques can remedy this. This thesis deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output was evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Sensitivity and specificity were further used as statistical measures to examine the performance of the binary classification: sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology, consisting of six major phases (business understanding, data understanding, data preparation, modeling, evaluation, and deployment), was applied to build the mining models.
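The pipeline the thesis describes (chi-square feature selection followed by the three classifiers and sensitivity/specificity evaluation) looks roughly like the scikit-learn sketch below. The data set and all sizes are synthetic stand-ins, since the thesis data are not public:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the blood/urine/symptom features.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)
X = X - X.min(axis=0)                        # chi2 needs non-negative features
X = SelectKBest(chi2, k=10).fit_transform(X, y)   # chi-square feature selection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB()),
                  ("KNN", KNeighborsClassifier())]:
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)             # recall on the positive class
    specificity = tn / (tn + fp)
    print(f"{name}: sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```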
Abstract:
Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood, and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
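The estimation difficulty described above comes from the integral over the random effects. For a Poisson model with a normal random intercept, a generic textbook formulation (not necessarily the study's exact model) is:

```latex
y_{ij}\mid u_i \sim \mathrm{Poisson}(\mu_{ij}),\qquad
\log \mu_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_i,\qquad
u_i \sim N(0,\sigma_u^2),
```

with marginal likelihood

```latex
L(\boldsymbol{\beta},\sigma_u^2)
  = \prod_i \int \Big[\prod_j f(y_{ij}\mid u_i;\boldsymbol{\beta})\Big]\,
    \phi(u_i;0,\sigma_u^2)\,du_i ,
```

which has no closed form; this is precisely where marginal-likelihood approximations, extended likelihood, and Bayesian methods diverge in practice.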
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization in a large class of water distribution networks. In general, techniques based on analytical models can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made, considering a certain leak size pattern that may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios in a large class of water distribution networks. The method can be seen as an upgrade of the direct modeling approach mentioned above, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the sizes of the leaks. It is an inverse/direct modeling method that tries to benefit from both approaches: on one hand, the exploration capability of genetic algorithms to estimate the water loss hotspots and leak sizes, and on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas close to the estimated hotspots. The application of the resulting method to a DMA of the Barcelona water distribution network is presented and discussed. The results show that leakage detection and localization under multiple-leak scenarios may be performed efficiently following an easy procedure.
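The inverse/direct idea can be sketched with a toy model: a hand-rolled genetic algorithm searches over candidate (leak node, leak size) pairs, scoring each candidate by how well predicted sensor pressures match the measurements. The linear sensitivity "model" and all sizes below are illustrative stand-ins for a real hydraulic simulator:

```python
import random
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

# Toy stand-in for a hydraulic model: sensor pressure residuals are a
# linear function of leak size via a random sensitivity matrix.
n_nodes, n_sensors = 50, 8
S = rng.random((n_sensors, n_nodes))        # sensor sensitivity to each node

def simulate(leak_node, leak_size):
    return S[:, leak_node] * leak_size      # predicted pressure drops

true_node, true_size = 17, 3.0
measured = simulate(true_node, true_size)   # "field" measurements

def fitness(cand):
    node, size = cand
    return -np.sum((simulate(node, size) - measured) ** 2)

# Minimal genetic algorithm: elitism plus mutation of (node, size).
pop = [(random.randrange(n_nodes), random.uniform(0.1, 5.0)) for _ in range(40)]
initial_best = max(pop, key=fitness)
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                        # keep the fittest candidates
    children = []
    for _ in range(30):
        node, size = random.choice(elite)
        if random.random() < 0.5:
            node = random.randrange(n_nodes)            # mutate leak location
        size = max(0.1, size + random.gauss(0.0, 0.3))  # mutate leak size
        children.append((node, size))
    pop = elite + children

best = max(pop, key=fitness)
print(best)   # estimated (leak node, leak size)
```

In the real method the `simulate` call is the hydraulic network model, and the direct (model-based) step then examines the network areas around the GA's estimated hotspots.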
Abstract:
Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year. To clean up urban wastewater, new federal Wastewater Systems Effluent Regulations (WSER), establishing national baseline effluent quality standards achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km, and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken because of its cost and impracticality. Instead, the City has relied on estimation methods and modelling approaches, allowing funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study in which GIS-based hydrologic and hydraulic modelling is the approach. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all local and trunk sewers under heavy rain events; 3) analyzing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. The modelling approach has also allowed funds to be applied toward reducing, and ultimately eliminating, the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.
Abstract:
Existing distributed hydrologic models are complex and computationally demanding to use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, rigid input/output data structures, and the lack of dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the widely used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. It provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster data obtained through conventional GIS pre-processing, and integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater is demonstrated on two watersheds in Indiana for multiple rainfall events.
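Calibration frameworks like this one optimize a goodness-of-fit objective; the Nash-Sutcliffe efficiency (NSE) is the usual choice in hydrology. The sketch below is generic (the abstract does not state which objective RWater's SCE optimization actually uses):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, 0 = no better than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])     # e.g. observed streamflow
print(nse(obs, obs))                          # perfect fit: NSE = 1
print(nse(obs, np.full(5, obs.mean())))       # mean-only model: NSE = 0
```

An optimizer such as SCE then searches the model's parameter space to maximize this score against observed streamflow.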
Abstract:
The focus of this thesis is the development and modeling of an interface architecture for interfacing analog signals in mixed-signal SoCs. We claim that the approach presented here achieves a wide frequency range and covers a large range of applications with constant performance, allied to digital configuration compatibility. Our primary assumptions are to use a fixed analog block and to place application configurability in the digital domain, which leads to a mixed-signal interface. Using a fixed analog block avoids the performance loss common to configurable analog blocks, while configurability in the digital domain makes it possible to use all existing tools for high-level design, simulation, and synthesis to implement the target application, with very good performance prediction. The proposed approach applies frequency translation (mixing) to the input signal followed by its conversion to the ΣΔ domain, which allows a fairly constant analog block and a uniform treatment of input signals from DC to high frequencies. Programmability is performed in the ΣΔ digital domain, where performance can be closely matched to the application specification. Theoretical and simulation models of the interface performance are developed for design space exploration and physical design support. Two prototypes are built and characterized to validate the proposed model and to implement application examples: the interface is used as a multi-band parametric ADC and as a two-channel analog multiplier and adder, and a multi-channel analog interface architecture is also presented. The characterization measurements support the main advantages of the proposed approach.
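The conversion-to-the-ΣΔ-domain step can be illustrated with a first-order sigma-delta modulator. This is a deliberately minimal sketch; the thesis's actual modulator order and the mixing stage are not reproduced here:

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator producing a 1-bit output stream."""
    v, y_prev = 0.0, 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        v += sample - y_prev                  # integrate the feedback error
        y_prev = 1.0 if v >= 0.0 else -1.0    # 1-bit quantizer
        bits[i] = y_prev
    return bits

n = 1024
t = np.arange(n)
x = 0.5 * np.sin(2.0 * np.pi * 5.0 * t / n)   # slow test tone, |x| < 1
bits = sigma_delta(x)

# A short moving average acts as a crude decimation filter: the local
# mean of the 1-bit stream tracks the analog input.
recovered = np.convolve(bits, np.ones(32) / 32.0, mode="same")
err = np.max(np.abs(recovered[64:-64] - x[64:-64]))
print(err)
```

Because the 1-bit stream carries the signal as a density of +1/-1 pulses, all subsequent programmable processing (filtering, channel selection) can be done with purely digital hardware, which is the configurability argument made above.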
Abstract:
Exchange rate misalignment assessment has become more relevant in recent years, particularly after the financial crisis of 2008. There are different methodologies to address real exchange rate misalignment, defined as the difference between the actual real effective exchange rate and some equilibrium norm; different norms are available in the literature. Our paper contributes to the literature by showing that the Behavioral Equilibrium Exchange Rate (BEER) approach adopted by Clark & MacDonald (1999), Ubide et al. (1999), Faruqee (1994), Aguirre & Calderón (2005) and Kubota (2009), among others, can be improved in two ways. The first consists of jointly modeling the real effective exchange rate, the trade balance, and the net foreign asset position. The second concerns the possibility of explicitly testing the over-identifying restrictions implied by economic theory, allowing the analyst to show that these restrictions are not falsified by the empirical evidence. If the economically based identifying restrictions are not rejected, it is also possible to decompose exchange rate misalignment into two pieces, one related to the long-run fundamentals of the exchange rate and the other related to external account imbalances. We also discuss some necessary conditions that should be satisfied for discarding trade balance information without compromising the misalignment assessment. A statistical (but not theoretical) identifying strategy for calculating exchange rate misalignment is also discussed. We illustrate the advantages of our approach by analyzing the Brazilian case, showing that the traditional approach disregards important information about external account equilibrium for this economy.
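Schematically, the two-piece decomposition the abstract alludes to can be written as follows (the notation is mine, not the paper's):

```latex
m_t \;=\; q_t - \hat{q}_t
  \;=\; \underbrace{\left(q_t - \beta^{\top} F_t\right)}_{\text{gap w.r.t. long-run fundamentals}}
  \;+\; \underbrace{\left(\beta^{\top} F_t - \hat{q}_t\right)}_{\text{external-account imbalance component}}
```

where $q_t$ is the real effective exchange rate, $F_t$ the vector of long-run fundamentals with cointegrating coefficients $\beta$, and $\hat{q}_t$ the equilibrium norm that additionally imposes trade balance and net foreign asset equilibrium.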
Abstract:
Nowadays, more than half of computer development projects fail to meet the final users' expectations. One of the main causes is insufficient knowledge about the organization of the enterprise to be supported by the respective information system. The DEMO methodology (Design and Engineering Methodology for Organizations) has proven to be a well-defined method to specify, through models and diagrams, the essence of any organization at a high level of abstraction. However, the methodology is platform-implementation independent, and lacks the ability to save and propagate changes from the organization models to the implemented software in a runtime environment. The Universal Enterprise Adaptive Object Model (UEAOM) is a conceptual schema used as the basis for a wiki system that allows the modeling of any organization, independent of its implementation, as well as the aforementioned change propagation in a runtime environment. Based on DEMO and UEAOM, this project aims to develop efficient and standardized methods for the automatic conversion of DEMO Ontological Models, based on the UEAOM specification, into BPMN (Business Process Model and Notation) process models with clear, unambiguous semantics, in order to facilitate the creation of processes that are almost ready to be executed on workflow systems supporting BPMN.
Abstract:
The objective of this study was to estimate the spatial distribution of work accident risk in the informal work market in the urban zone of an industrialized city in southeast Brazil and to examine concomitant effects of age, gender, and type of occupation after controlling for spatial risk variation. The basic methodology adopted was that of a population-based case-control study with particular interest focused on the spatial location of work. Cases were all casual workers in the city suffering work accidents during a one-year period; controls were selected from the source population of casual laborers by systematic random sampling of urban homes. The spatial distribution of work accidents was estimated via a semiparametric generalized additive model with a nonparametric bidimensional spline of the geographical coordinates of cases and controls as the nonlinear spatial component, and including age, gender, and occupation as linear predictive variables in the parametric component. We analyzed 1,918 cases and 2,245 controls between 1/11/2003 and 31/10/2004 in Piracicaba, Brazil. Areas of significantly high and low accident risk were identified in relation to mean risk in the study region (p < 0.01). Work accident risk for informal workers varied significantly in the study area. Significant age, gender, and occupational group effects on accident risk were identified after correcting for this spatial variation. A good understanding of high-risk groups and high-risk regions underpins the formulation of hypotheses concerning accident causality and the development of effective public accident prevention policies.
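In the notation of generalized additive models, the fitted risk surface has the form below (a standard case-control GAM formulation consistent with the abstract; symbols are mine):

```latex
\operatorname{logit} P(\text{case} \mid s, \mathbf{x})
  \;=\; \alpha + g(s_1, s_2)
  + \beta_1\,\text{age} + \beta_2\,\text{gender} + \beta_3\,\text{occupation}
```

where $g$ is the nonparametric bidimensional spline over the geographic coordinates $s = (s_1, s_2)$; regions where $g$ differs significantly from its mean identify the high- and low-risk areas relative to the mean risk in the study region.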
Abstract:
The venom of Crotalus durissus terrificus snakes contains various substances, including a serine protease with thrombin-like activity, called gyroxin, that clots plasmatic fibrinogen and promotes fibrin formation. The aim of this study was to purify and structurally characterize the gyroxin enzyme from Crotalus durissus terrificus venom. For isolation and purification, the following methods were employed: gel filtration on a Sephadex G75 column and affinity chromatography on benzamidine Sepharose 6B; 12% SDS-PAGE under reducing conditions; N-terminal sequence analysis; cDNA cloning and expression through RT-PCR; and crystallization tests. Theoretical molecular modeling was performed using bioinformatics tools, based on comparative analysis of other serine proteases deposited in the NCBI (National Center for Biotechnology Information) database. Protein N-terminal sequencing produced a single chain with a molecular mass of approximately 30 kDa, while its full-length cDNA had 714 bp encoding a mature protein of 238 amino acids. Crystals were obtained from solutions 2 and 5 of the Crystal Screen Kit (two and one crystals, respectively), confirming the protein constitution of the sample. Multiple sequence alignment of gyroxin-like B2.1 with six other snake venom serine proteases (SVSPs) indicated the preservation of the cysteine residues and of the main structural elements (alpha-helices, beta-barrels, and loops). The catalytic triad was located at His57, Asp102, and Ser198, and the S1 and S2 specificity sites at Thr193 and Gly215. The fibrinogen recognition and cleavage region in SVSPs, mapped onto the modeled gyroxin B2.1 sequence, was located at residues Arg60, Arg72, Gln75, Arg81, Arg82, Lys85, Glu86, and Lys87.
Theoretical modeling of the gyroxin fraction generated a classical structure consisting of two alpha-helices, two beta-barrel structures, five disulfide bridges, and loops at positions 37, 60, 70, 99, 148, 174, and 218. These results provide information about the functional structure of gyroxin, supporting its application in the design of new drugs.