997 results for Module Modeling
Abstract:
Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as to help crystallographers evaluate experimentally solved structures. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module, Parmodel Modeling, allows several models of the same protein to be built in a reduced time by distributing the modeling processes across a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools/parmodel. (C) 2004 Elsevier B.V. All rights reserved.
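As an editorial illustration of the parallel model building this abstract describes, the sketch below distributes independent MODELLER runs across a small process pool. The alignment file name, structure codes, model count, and pool size are placeholders, and the calls follow MODELLER's documented automodel API rather than Parmodel's actual job scripts; treat the exact arguments as assumptions if your MODELLER version differs.

```python
# Minimal sketch: distributing comparative-modeling runs across worker
# processes, in the spirit of Parmodel's parallel model building.
from multiprocessing import Pool

def build_one_model(model_index):
    """Build a single comparative model; each worker builds one index."""
    from modeller import environ             # imported per worker process
    from modeller.automodel import automodel

    env = environ()
    a = automodel(env,
                  alnfile='target_template.ali',  # alignment file (assumed name)
                  knowns='template',              # template structure code (assumed)
                  sequence='target')              # target sequence code (assumed)
    a.starting_model = model_index   # build exactly one model in this worker
    a.ending_model = model_index
    a.make()
    return model_index

if __name__ == '__main__':
    n_models = 20  # "several models of the same protein"
    with Pool(processes=4) as pool:  # 4 workers standing in for cluster nodes
        done = pool.map(build_one_model, range(1, n_models + 1))
    print(f'built models: {done}')
```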
Abstract:
Two-stage isolated converters for photovoltaic (PV) applications commonly employ a high-frequency transformer on the DC-DC side, subjecting the DC-AC inverter switches to high voltages and forcing the use of IGBTs instead of low-voltage, low-loss MOSFETs. This paper presents the modeling, control and simulation of a single-phase full-bridge inverter with a high-frequency transformer (HFT) that can be used as part of a two-stage converter with a transformerless DC-DC side or as a single-stage converter (a simple DC-AC inverter) for grid-connected PV applications. The inverter is modeled to obtain a small-signal transfer function that is used to design the proportional-resonant (PR) current control regulator. A high-frequency step-up transformer results in lower-voltage switches and better efficiency compared with converters in which the transformer is used on the DC-DC side. Simulations and experimental results with a 200 W prototype are shown. © 2012 IEEE.
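A worked sketch of a proportional-resonant current regulator may help here: the ideal form G(s) = Kp + Kr·s/(s² + ω0²) has (ideally) infinite gain at the fundamental frequency ω0, which is what removes steady-state error for a sinusoidal current reference. The gains, grid frequency, and sampling rate below are illustrative assumptions, not the paper's design values.

```python
# Minimal sketch of a PR current regulator for a grid-connected inverter:
#     G(s) = Kp + Kr * s / (s^2 + w0^2)
import numpy as np
from scipy.signal import bilinear, lfilter

f0 = 60.0              # grid fundamental frequency [Hz] (assumption)
w0 = 2 * np.pi * f0
Kp, Kr = 0.5, 200.0    # illustrative gains, not the paper's values
fs = 20e3              # controller sampling rate [Hz] (assumption)

# Discretize the resonant term R(s) = Kr*s / (s^2 + w0^2) with Tustin's method.
bz, az = bilinear([Kr, 0.0], [1.0, 0.0, w0**2], fs=fs)

# Drive the regulator with a current error at the fundamental frequency.
t = np.arange(0, 0.1, 1 / fs)
error = np.sin(w0 * t)                    # i_ref - i_measured (toy signal)
u = Kp * error + lfilter(bz, az, error)   # PR control action

# The resonant output keeps growing while a persistent error at w0 remains,
# which is the mechanism that drives steady-state error to zero.
print(u[-5:])
```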
Abstract:
Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. First, information flow needs to be inferred, in addition to the correlation between genes. Second, we usually try to identify large networks from a large number of genes (parameters) using a smaller number of microarray experiments (samples). Because of this situation, which is rather frequent in bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most models are based on dimension reduction using clustering techniques, so the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We have applied the SVAR model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method is able to model gene regulatory networks in the frequent situation in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
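To make the approach concrete, the sketch below fits a lag-1 sparse VAR by running an L1-penalized regression per target gene, so the surviving coefficients mark candidate regulatory edges. This is a simplified stand-in for the paper's SVAR estimator and omits its FDR-controlling test; the penalty strength and toy data are assumptions.

```python
# Minimal sketch of a sparse VAR(1) fit: each gene's expression at time t is
# regressed on all genes at t-1 with an L1 penalty, so most coefficients
# shrink to zero and nonzero entries are candidate directed edges.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_genes, n_times = 50, 20                  # more genes than samples, as in the abstract
X = rng.normal(size=(n_times, n_genes))    # toy time-series expression matrix

past, present = X[:-1], X[1:]              # lag-1 design and response
A = np.zeros((n_genes, n_genes))           # A[i, j]: influence of gene j on gene i

for i in range(n_genes):
    model = Lasso(alpha=0.1, max_iter=10_000)   # alpha is illustrative
    model.fit(past, present[:, i])
    A[i] = model.coef_

edges = np.argwhere(A != 0)                # inferred directed edges j -> i
print(f'{len(edges)} nonzero edges out of {n_genes**2} possible')
```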
Abstract:
A one-dimensional numerical model of the marine boundary layer (MBL) was extended to describe chemical reactions in the gas phase, in aerosol particles, and in cloud droplets. One focus was the treatment of halogen reaction cycles. Where results from measurement campaigns were available, they were used to validate the model. The results of earlier box-model studies were confirmed: the acid-catalyzed activation of bromine from sea-salt aerosols, the importance of halogen radicals for the destruction of O3, the potential role of BrO in the oxidation of DMS, and that of HOBr and HOCl in the oxidation of S(IV). It was shown that accounting for the vertical profiles of meteorological and chemical quantities is very important; this is reflected in the fact that maxima of sea-salt aerosol acidity and of reactive halogens were found at the top of the MBL. Furthermore, the importance of sulfate aerosols in actively recycling less reactive bromine species into photolyzable ones was demonstrated. Clouds have large effects on the evolution and the diurnal cycle of the halogens, and these effects are not restricted to the cloud layers themselves: the diurnal cycle of most halogen species is altered by enhanced uptake of chemical substances into the liquid phase. These results emphasize the importance of carefully documenting the meteorological conditions during measurement campaigns (especially cloud cover and liquid water content) so that the results can be interpreted correctly and compared with model output. This one-dimensional model was used together with a box model of the MBL to estimate the effects of ship emissions on the MBL, with the dilution of the exhaust plume treated by a parameterization. The effects of the emissions are strongest when they occur in clean regions, when the MBL is shallow, and when the entrainment of background air is weak. Chemical reactions on background aerosols play only a minor role. In ocean regions with light ship traffic the effects on MBL chemistry are limited; in more heavily traveled regions the plumes of several ships overlap and produce clear effects. This estimate was compared with simulations in which the emissions were treated as continuous sources, as is done in global chemistry models. When the evolution of the exhaust plume is taken into account, the effects are considerably smaller, because the lifetime of the exhaust gases is strongly reduced in the first phase after emission.
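A minimal sketch of a parameterized plume dilution of the general kind used for the ship-emission study may clarify the approach: an emitted species relaxes toward its background mixing ratio as the plume cross-section grows after emission. The power-law expansion and all numerical values below are illustrative assumptions, not the thesis's actual parameterization.

```python
# Minimal sketch: dilution of a ship-exhaust plume toward background as the
# plume cross-section grows, here assumed to follow A(t) = A0 * (t/t0)**alpha.
import numpy as np

def plume_mixing_ratio(t, c0, c_bg, t0=1.0, alpha=0.75):
    """Mixing ratio in an expanding plume at time t [s] after emission."""
    area = (np.maximum(t, t0) / t0) ** alpha   # relative cross-section growth
    return c_bg + (c0 - c_bg) / area           # dilution toward background

t = np.array([1.0, 10.0, 100.0, 1_000.0, 10_000.0])  # seconds since emission
print(plume_mixing_ratio(t, c0=100.0, c_bg=1.0))     # e.g. a NOx-like tracer
```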
Abstract:
Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or in simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already under way in the Klavins Laboratory. In particular, I modeled and subsequently simulated a synthetic genetic circuit devised to implement a multicelled behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of the nucleic acids, transcription, translation and the methods used to regulate gene expression. An introduction to Synthetic Biology completes the section. The second chapter describes the synthetic genetic circuit that was conceived to make two different groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry in the phenotype of the microcolony. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated. The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentration of the various transcription factors that act on the different promoters of the circuit. A list of the various parameters and a justification of their values closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington, and then details a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components. The sensitivity analysis uses a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the desired outcome. Thanks to a particular kind of scatter plot, the parameters are ranked; starting from an initial condition in which all the parameters assume their nominal values, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all the cells are in the follower state and only a few are in the leader state appears to be the most difficult task: a small number of leader cells struggle to produce enough signal to turn the rest of the microcolony into the follower state. It is possible to obtain a microcolony in which the majority of cells are followers by increasing the production of signal as much as possible.
Reaching the goal of a microcolony that is split in half between leaders and followers is comparatively easy; the best strategy seems to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module. At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement the behavior; in particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential. Moreover, new synthetic components that allow the growth rate to be reliably downregulated in specific cells without side effects need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a Python script used to split the simulations across a Linux cluster, and the Matlab code developed to analyze the data.
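As a concrete illustration of the lumped expression model described in the third chapter, the sketch below integrates a single-step production term shaped by a Hill function plus first-order degradation, with a small noise term standing in for the intrinsic stochasticity the coin flipper exploits. The self-activating motif and all parameters are illustrative choices; this is not the gro model itself.

```python
# Minimal sketch: one-step (lumped transcription + translation) expression of
# a self-activating gene, integrated with Euler-Maruyama. Noise alone decides
# whether a cell ends high ("leader") or low ("follower").
import numpy as np

rng = np.random.default_rng(1)

def simulate_cell(beta=10.0, K=5.0, n=2.0, gamma=0.1,
                  x0=0.25, noise=0.05, dt=0.01, steps=10_000):
    """Integrate dx/dt = beta*x^n/(K^n + x^n) - gamma*x with additive noise."""
    x = x0                                        # start near the unstable point
    for _ in range(steps):
        production = beta * x**n / (K**n + x**n)  # Hill-shaped production
        x += (production - gamma * x) * dt        # deterministic drift
        x += noise * np.sqrt(dt) * rng.normal()   # intrinsic expression noise
        x = max(x, 0.0)                           # concentrations stay non-negative
    return x

# Genetically identical cells split into two phenotypes through noise alone,
# the symmetry break the coin flipper module relies on.
fates = [simulate_cell() for _ in range(50)]
leaders = sum(x > 5.0 for x in fates)
print(f'{leaders} leaders, {50 - leaders} followers')
```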
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while EGR maldistribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
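One of the data-processing steps mentioned, compensating transport delays, can be illustrated with a cross-correlation alignment: shift the lagging signal by the lag that maximizes its correlation with a reference. The synthetic signals and sampling rate below are assumptions; the thesis's actual delay and sensor-lag models are not reproduced here.

```python
# Minimal sketch: estimate the transport delay between a transient reference
# (e.g., a commanded engine event) and a lagging downstream measurement by
# maximizing their cross-correlation, then shift the measurement back.
import numpy as np

fs = 10.0                                # sample rate [Hz] (assumption)
t = np.arange(0, 60, 1 / fs)
reference = (np.sin(0.3 * t) > 0).astype(float)   # toy transient command
true_delay = 23                                   # samples (2.3 s transport)
measured = (np.roll(reference, true_delay)
            + 0.05 * np.random.default_rng(2).normal(size=t.size))

# Cross-correlate and take the lag with the largest correlation.
lags = np.arange(-t.size + 1, t.size)
xcorr = np.correlate(measured - measured.mean(),
                     reference - reference.mean(), mode='full')
delay = lags[np.argmax(xcorr)]

aligned = np.roll(measured, -delay)      # shift measurement into alignment
print(f'estimated transport delay: {delay / fs:.1f} s')
```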
Abstract:
Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve the empirical models of opacity or particulate matter used for engine calibration. Dimensional modeling indicated that the exhaust gas recirculation flow rate was significantly underestimated, and the volumetric efficiency overestimated, by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the fuel-oxygen ratio estimated by the electronic control module was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine-operating-parameter model input space into a more fundamental, lower-dimensional space so that a nearest-neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space compared with the engine-operating-parameter space. This more uniform, smaller, shrunken model input space might explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
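A minimal sketch of the nearest-neighbor step may make the "nonparametric reduced dimensionality" idea concrete: map raw operating parameters into a small set of more fundamental inputs, then predict emissions from the closest steady-state training points. The toy transform (a crude fuel-oxygen ratio and EGR fraction) and the data below are illustrative stand-ins for the thesis's physically derived inputs.

```python
# Minimal sketch: predict a smoke-like output by nearest neighbors in a
# transformed, lower-dimensional input space.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)

# Toy raw operating parameters: [fuel rate, air flow, EGR flow]
raw = rng.uniform([1, 10, 0], [10, 100, 30], size=(500, 3))

def to_fundamental(raw):
    """Map raw parameters to a smaller, more fundamental input space."""
    fuel, air, egr = raw.T
    fuel_o2_ratio = fuel / (0.23 * air)      # toy oxygen accounting
    egr_fraction = egr / (air + egr)
    return np.column_stack([fuel_o2_ratio, egr_fraction])

X = to_fundamental(raw)
y = 5 * X[:, 0] ** 2 + 2 * X[:, 1] + 0.05 * rng.normal(size=len(X))  # toy opacity

knn = KNeighborsRegressor(n_neighbors=5, weights='distance')
knn.fit(X, y)    # steady-state "training data"

query = to_fundamental(np.array([[5.0, 50.0, 10.0]]))   # a transient point
print(f'predicted opacity (arbitrary units): {knn.predict(query)[0]:.2f}')
```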
Abstract:
Past and future forest composition and distribution in temperate mountain ranges are strongly influenced by temperature and snowpack. We used LANDCLIM, a spatially explicit, dynamic vegetation model, to simulate forest dynamics over the last 16,000 years and compared the simulation results to pollen and macrofossil records at five sites on the Olympic Peninsula (Washington, USA). To address the hydrological effects of climate-driven variations in snowpack on simulated forest dynamics, we added a simple snow accumulation-and-melt module to the vegetation model and compared simulations with and without the module. LANDCLIM produced realistic present-day species composition with respect to elevation and precipitation gradients. Over the last 16,000 years, simulations driven by transient climate data from an atmosphere-ocean general circulation model (AOGCM) and by a chironomid-based temperature reconstruction captured the Late-glacial to Late Holocene transitions in forest communities. Overall, the reconstruction-driven vegetation simulations matched observed vegetation changes better than the AOGCM-driven simulations. This study also indicates that forest composition is very sensitive to snowpack-mediated changes in soil moisture. Simulations without the snow module showed a strong effect of snowpack on key bioclimatic variables and species composition at higher elevations. A projected upward shift of the snow line and a decrease in snowpack might lead to drastic changes in mountain forest composition and even a shift to dry meadows due to insufficient moisture availability in shallow alpine soils.
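For readers unfamiliar with snow modules of this kind, the sketch below implements a common textbook formulation, degree-day accumulation and melt: precipitation accumulates as snow below a threshold temperature, and melt is proportional to degrees above it. The threshold and melt factor are generic assumptions, not the values used in the LANDCLIM module.

```python
# Minimal sketch of a daily snow accumulation-and-melt (degree-day) module.
def step_snowpack(swe, precip_mm, temp_c,
                  t_snow=0.0,       # snow/rain threshold [deg C] (assumption)
                  melt_factor=3.0): # degree-day factor [mm per deg C per day] (assumption)
    """Advance snow water equivalent (SWE, mm) one day; return (swe, melt)."""
    if temp_c <= t_snow:
        swe += precip_mm                                  # accumulate as snow
        melt = 0.0
    else:
        melt = min(swe, melt_factor * (temp_c - t_snow))  # degree-day melt
        swe -= melt                                       # rain bypasses the pack
    return swe, melt

# One toy winter-to-spring sequence: (daily precipitation [mm], temperature [C])
weather = [(10, -5), (5, -2), (0, -1), (0, 2), (8, 4), (0, 6), (0, 8)]
swe = 0.0
for precip, temp in weather:
    swe, melt = step_snowpack(swe, precip, temp)
    print(f'T={temp:>3} C  SWE={swe:6.1f} mm  melt={melt:4.1f} mm')
```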
Abstract:
Hereditary deficiency of factor IXa (fIXa), a key enzyme in blood coagulation, causes hemophilia B, a severe X chromosome-linked bleeding disorder afflicting 1 in 30,000 males; clinical studies have identified nearly 500 deleterious variants. The X-ray structure of porcine fIXa described here shows the atomic origins of the disease, while the spatial distribution of mutation sites suggests a structural model for factor X activation by phospholipid-bound fIXa and cofactor VIIIa. The 3.0-Å-resolution diffraction data clearly show the structures of the serine proteinase module and the two preceding epidermal growth factor (EGF)-like modules; the N-terminal Gla module is partially disordered. The catalytic module, with the covalent inhibitor D-Phe-1I-Pro-2I-Arg-3I chloromethyl ketone, most closely resembles fXa but differs significantly at several positions. Particularly noteworthy is the strained conformation of Glu-388, a residue strictly conserved in known fIXa sequences but conserved as Gly among other trypsin-like serine proteinases. Flexibility apparent in the electron density, together with modeling studies, suggests that this may cause incomplete active site formation, even after zymogen activation, and hence the low catalytic activity of fIXa. The principal axes of the oblong EGF-like domains define an angle of 110 degrees, stabilized by a strictly conserved and fIX-specific interdomain salt bridge. The disorder of the Gla module, whose hydrophobic helix is apparent in the electron density, can be attributed to the absence of calcium in the crystals; we have modeled the Gla module in its calcium form by using prothrombin fragment 1. The arched module arrangement agrees with fluorescence energy transfer experiments. Most hemophilic mutation sites of surface fIX residues occur on the concave surface of the bent molecule and suggest a plausible model for the membrane-bound ternary fIXa-fVIIIa-fX complex structure: fIXa and an equivalently arranged fX arch across an underlying fVIIIa subdomain from opposite sides; the stabilizing fVIIIa interactions force the catalytic modules together, completing fIXa active site formation and catalytic enhancement.
Abstract:
This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions; this is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored, and the effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
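A minimal sketch of the modeling layer may make the approach concrete: observed behaviors are tallied per cell of a coarse-grained grid, and the learned frequencies are later queried as predictions when selecting the robot's own behavior. The grid size, class count, and query rule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: tally classified opponent behaviors on a coarse grid and
# return per-cell behavior frequencies as predictions.
import numpy as np

class CoarseGridModel:
    def __init__(self, n_classes, grid=(6, 4)):
        # counts[cls, i, j]: times behavior class cls was seen in cell (i, j);
        # start at 1 (Laplace smoothing) so unseen cells give uniform odds.
        self.counts = np.ones((n_classes, *grid))

    def _cell(self, x, y):
        """Map a normalized field position in [0, 1)^2 to a grid cell."""
        i = min(int(x * self.counts.shape[1]), self.counts.shape[1] - 1)
        j = min(int(y * self.counts.shape[2]), self.counts.shape[2] - 1)
        return i, j

    def observe(self, cls, x, y):
        """Record one classified observation at position (x, y)."""
        self.counts[(cls, *self._cell(x, y))] += 1

    def predict(self, x, y):
        """Return the estimated probability of each behavior class here."""
        cell = self.counts[(slice(None), *self._cell(x, y))]
        return cell / cell.sum()

model = CoarseGridModel(n_classes=3)
for _ in range(200):
    model.observe(cls=0, x=0.9 * np.random.rand(), y=0.2)  # toy observations
print(model.predict(0.5, 0.2))   # fed into the behavior selection module
```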
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, mounted directly on top, for liquids collection. To aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. A modular approach was taken because of the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, the specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity of 12.1 m/s between the heated surface and the reacting biomass particle. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply; the reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second; operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and had longer operating runs been attempted to offset the product losses that occur when collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency above 99% on a mass basis. This was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurements of the condensable product exiting the product collection unit; these showed that the collection efficiency was in excess of 99% on a mass basis.
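The residence-time constraint quoted above reduces to a simple plug-flow check, τ = V/Q: the vapour residence time is the ESP internal volume divided by the volumetric gas flow, and the inert gas supply must keep τ above one second. The ESP volume and the feed-rate-to-gas-flow correlation below are illustrative placeholders, not the thesis's design values.

```python
# Minimal sketch: plug-flow residence-time check for the electrostatic
# precipitator (ESP) at several biomass feed rates.
def esp_residence_time(esp_volume_m3, gas_flow_m3_per_s):
    """Plug-flow estimate: residence time [s] = volume / volumetric flow."""
    return esp_volume_m3 / gas_flow_m3_per_s

esp_volume = 0.05                     # m^3 (assumed ESP internal volume)
for feed_kg_hr in (2.3, 10.0, 30.0):
    # Assume gas flow scales linearly with biomass feed rate (toy correlation).
    gas_flow = 0.004 * feed_kg_hr     # m^3/s (illustrative)
    tau = esp_residence_time(esp_volume, gas_flow)
    verdict = 'OK' if tau >= 1.0 else 'too short: reduce inert gas or modify ESP'
    print(f'{feed_kg_hr:5.1f} kg/hr: tau = {tau:5.2f} s  [{verdict}]')
```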
Abstract:
The objective of this study is to demonstrate the use of the weak form partial differential equation (PDE) method for finite-element (FE) modeling of a new constitutive relation without the need for user subroutine programming. Viscoelastic asphalt mixtures were modeled by the weak form PDE-based FE method as the examples in this paper. A solid-like generalized Maxwell model was used to represent the deformation mechanism of a viscoelastic material, and its constitutive relations were derived and implemented in the weak form PDE module of Comsol Multiphysics, a commercial FE program. The weak form PDE modeling of viscoelasticity was verified by comparing Comsol and Abaqus simulations that employed the same loading configurations and material property inputs in virtual laboratory test simulations; both produced identical results in terms of axial and radial strain responses. The weak form PDE modeling of viscoelasticity was further validated by comparing its predictions with real laboratory test results for six types of asphalt mixtures with two air void contents and three aging periods. The viscoelastic material properties, such as the coefficients of a Prony series model for the relaxation modulus, were obtained by conversion from the master curves of dynamic modulus and phase angle. Strain responses of compressive creep tests at three temperatures and of cyclic load tests were predicted using the weak form PDE modeling and were found to be comparable with the measurements from the real laboratory tests. It was demonstrated that weak form PDE-based FE modeling can serve as an efficient way to implement new constitutive models and can free engineers from user subroutine programming.
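For reference, a generalized Maxwell (solid-like) relaxation modulus expressed as a Prony series is E(t) = E∞ + Σᵢ Eᵢ·exp(−t/τᵢ); the sketch below evaluates it and shows the decay from the instantaneous modulus toward the long-term modulus E∞ that keeps the model solid-like. The coefficients are illustrative, not the paper's fitted values.

```python
# Minimal sketch: evaluate a Prony-series relaxation modulus
#     E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
import numpy as np

E_inf = 50.0                             # long-term modulus [MPa] (assumed)
E_i = np.array([2000.0, 800.0, 150.0])   # Prony moduli [MPa] (assumed)
tau_i = np.array([0.05, 1.0, 20.0])      # relaxation times [s] (assumed)

def relaxation_modulus(t):
    """Prony-series relaxation modulus E(t) for an array of times [s]."""
    t = np.atleast_1d(t)[:, None]
    return E_inf + (E_i * np.exp(-t / tau_i)).sum(axis=1)

t = np.array([0.0, 0.1, 1.0, 10.0, 100.0])
print(np.column_stack([t, relaxation_modulus(t)]))
# E(0) = E_inf + sum(E_i) is the instantaneous (glassy) modulus; E(t) decays
# toward E_inf rather than zero, which is what makes the model solid-like.
```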
Abstract:
This thesis presents a model-based software implementation for estimating the damage of a power module inside an automotive traction inverter, with a few hardware test setups performed to support the simulation with real-life data.