866 results for modelling and simulation
Abstract:
Parallel kinematic structures are considered well-suited architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task. In fact, the direct application of traditional methods of robotics for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue: it presents a modular approach to generating the dynamic model and shows how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer-aided. The advantages of this approach are discussed in the modelling of a 3-DOF parallel asymmetric mechanism.
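For reference, Kane's equations in their standard textbook form (not reproduced from this abstract) balance the generalized active and inertia forces for each independent generalized speed u_r:

```latex
F_r + F_r^* = 0, \qquad
F_r = \sum_{i} \frac{\partial \mathbf{v}_i}{\partial u_r} \cdot \mathbf{R}_i, \qquad
F_r^* = -\sum_{i} \frac{\partial \mathbf{v}_i}{\partial u_r} \cdot m_i \mathbf{a}_i,
\qquad r = 1, \dots, n
```

Here v_i, a_i, m_i and R_i are the velocity, acceleration, mass and resultant applied force of particle (or body) i. Choosing the u_r so that the kinematic constraints are satisfied identically is what makes the formulation convenient for parallel mechanisms with redundant coordinates.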
Abstract:
Recently, an ever-increasing degree of automation has been observed in industrial automation processes. This increase is motivated by the demand for high-performance systems in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine companies are present in Italy, notably in the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact with each other in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activities inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, in support of the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of software engineering paradigms applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5, while Appendix B reports an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
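As a minimal illustration of the Discrete Event Systems viewpoint invoked here (a sketch with hypothetical states and events, not code from the thesis), a plant component can be modelled as a finite automaton with observable events and an unobservable fault event, while a diagnoser tracks the set of states consistent with what has been observed:

```python
# Minimal discrete-event sketch: a valve with a stuck-closed fault.
# All state and event names are hypothetical, for illustration only.

TRANSITIONS = {
    ("closed", "open_cmd"): "opening",
    ("opening", "opened_sensed"): "open",
    ("open", "close_cmd"): "closed",
    ("closed", "fault"): "stuck_closed",        # unobservable fault event
    ("stuck_closed", "open_cmd"): "stuck_closed",
}

def step(belief, event):
    """Update the set of states consistent with an observed event.

    Before matching the event, extend the belief with one step of
    unobservable (fault) transitions, as a classical diagnoser does.
    """
    closure = set(belief)
    for s in belief:
        nxt = TRANSITIONS.get((s, "fault"))
        if nxt:
            closure.add(nxt)
    return {TRANSITIONS[(s, event)]
            for s in closure if (s, event) in TRANSITIONS}

belief = {"closed"}
for ev in ["open_cmd"]:            # we commanded the valve open...
    belief = step(belief, ev)
print(belief)  # {'opening', 'stuck_closed'}: fault not yet isolated
```

If the subsequent "opened_sensed" event arrives, the belief collapses to {"open"} and the fault hypothesis is discarded; if it does not, the diagnoser isolates the stuck-closed fault, which is exactly the kind of online diagnostic information an active fault tolerant architecture can act on.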
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reaction addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model's goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from free on-line sources.
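For context, here is a minimal Python sketch of Gillespie's direct method, the algorithm the MS-BioNET engine builds on in an optimised many-species/many-channels form; the single degradation reaction and its rate constant below are hypothetical:

```python
import math
import random

def gillespie_direct(state, reactions, t_end):
    """Minimal Gillespie direct method.

    state     -- dict species -> molecule count
    reactions -- list of (rate_fn, stoichiometry) pairs, where
                 rate_fn(state) returns a propensity and
                 stoichiometry maps species -> count change
    """
    t = 0.0
    while t < t_end:
        props = [rate(state) for rate, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                                  # no reaction can fire
        t += -math.log(random.random()) / a0       # exponential waiting time
        r = random.random() * a0                   # pick a channel
        for p, (_, stoich) in zip(props, reactions):
            r -= p
            if r <= 0.0:
                for sp, d in stoich.items():       # apply the stoichiometry
                    state[sp] += d
                break
    return state

# Hypothetical example: molecule A degrades, A -> 0, at rate k * A
state = gillespie_direct({"A": 100},
                         [(lambda s: 0.1 * s["A"], {"A": -1})],
                         t_end=50.0)
print(state["A"])
```

In a multi-compartment setting of the kind the thesis describes, molecule-transfer channels between compartments would simply appear as additional reactions whose stoichiometry moves counts between per-compartment species.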
Abstract:
To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization, and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass, based on a set of evaluation criteria such as accessibility to biomass, the railway/road transportation network, water bodies and the workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions simultaneously. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price. Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass, because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A).
There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
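To make the weighted-sum, multi-criteria objective concrete, here is a minimal brute-force facility-location sketch in Python; all site names, costs and weights are hypothetical and only illustrate the structure of the decision, not the proposed optimization model itself:

```python
from itertools import combinations

# Hypothetical data: candidate facility sites and harvest areas are
# illustrative only; none of the numbers come from the proposal.
sites = {"S1": 5.0, "S2": 4.2, "S3": 6.1}            # annualised build cost
supply = {"H1": 30, "H2": 45}                         # biomass (kt/yr)
haul = {("H1", "S1"): 1.0, ("H1", "S2"): 2.5, ("H1", "S3"): 0.8,
        ("H2", "S1"): 2.0, ("H2", "S2"): 0.9, ("H2", "S3"): 1.7}

W_COST, W_ENERGY, W_GHG = 0.5, 0.3, 0.2               # objective weights

def objective(open_sites):
    """Weighted sum of cost, energy and GHG for a candidate layout.
    Each harvest area ships to its nearest open site."""
    build = sum(sites[s] for s in open_sites)
    transport = sum(min(haul[h, s] for s in open_sites) * q
                    for h, q in supply.items())
    # crude proxies: energy use and GHG scale with tonne-distance
    return (W_COST * (build + transport)
            + W_ENERGY * 0.4 * transport
            + W_GHG * 0.1 * transport)

best = min((frozenset(c) for k in range(1, len(sites) + 1)
            for c in combinations(sites, k)), key=objective)
print(sorted(best), round(objective(best), 2))
```

A real instance would replace the exhaustive enumeration with a mixed-integer program and feed the site list from the GIS screening step, but the trade-off being minimised is the same.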
Abstract:
This thesis develops an effective modeling and simulation procedure for a specific thermal energy storage system commonly used and recommended for various applications (such as an auxiliary energy storage system for a solar-heating-based Rankine cycle power plant). This thermal energy storage system transfers heat from a hot fluid (termed the heat transfer fluid, HTF) flowing in a tube to the surrounding phase change material (PCM). Through an unsteady melting or freezing process, the PCM absorbs or releases thermal energy in the form of latent heat. Both scientific and engineering information is obtained by the proposed first-principles-based modeling and simulation procedure. On the scientific side, the approach accurately tracks the moving melt front (modeled as a sharp liquid-solid interface) and provides all necessary information about the time-varying heat-flow rates, temperature profiles, stored thermal energy, etc. On the engineering side, the proposed approach is unique in its ability to accurately solve, both individually and collectively, all the conjugate unsteady heat transfer problems for each of the components of the thermal storage system. This yields critical system-level information on the various time-varying effectiveness and efficiency parameters for the thermal storage system.
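For reference, a sharp liquid-solid interface of the kind tracked here obeys the classical Stefan condition (standard formulation, not quoted from the thesis): the jump in conductive heat flux across the melt front s(t) pays for the latent heat of the phase change:

```latex
\rho\, L \,\frac{ds}{dt}
  = k_s \left.\frac{\partial T_s}{\partial x}\right|_{x=s(t)^{+}}
  - k_l \left.\frac{\partial T_l}{\partial x}\right|_{x=s(t)^{-}},
\qquad T_s\big(s(t),t\big) = T_l\big(s(t),t\big) = T_m
```

where rho is the PCM density, L the latent heat, k_s and k_l the solid- and liquid-phase conductivities, and T_m the melting temperature.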
Abstract:
Recently, Poland and Hungary have maintained steady economic growth rates. Money supplies are growing rather rapidly in these economies, and, by and large, exchange rates show depreciating trends; exports and prices, in turn, show steady growth. Per capita GDPs can be regarded as being at the same level, and the development stages of the two countries are similar. It is assumed that these two economies serve the same export market, in which their export goods compete. If one country expands its monetary policy, prices increase and the interest rate decreases; the exchange rate then depreciates, and exports and GDP increase as a result. At the same time, this expansionary monetary policy affects the other country through trade. This mutual relationship between the two countries can be expressed as a Nash equilibrium in game theory. In this paper, macro-econometric models of the Polish and Hungarian economies are built and the Nash equilibrium is introduced into them.
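The Nash equilibrium of such a two-country policy game can be located by best-response iteration; the quadratic payoff below is purely hypothetical and merely stands in for the macro-econometric models of the paper:

```python
import numpy as np

# Illustrative two-country game: each country picks a money-supply
# growth rate on a grid; all payoff coefficients are hypothetical.
rates = np.linspace(0.0, 0.10, 101)           # strategy grid

def payoff(own, other):
    # output gain from own expansion, minus inflation cost,
    # minus the competitive loss from the other country's expansion
    return 1.0 * own - 8.0 * own**2 - 0.5 * own * other

def best_response(other):
    return rates[np.argmax(payoff(rates, other))]

pl = hu = 0.0
for _ in range(100):                          # best-response iteration
    pl, hu = best_response(hu), best_response(pl)
print(round(pl, 3), round(hu, 3))             # fixed point = Nash equilibrium
```

With these coefficients the iteration converges to roughly 0.061 for both countries; in the paper's setting the payoffs would instead be evaluated through the estimated macro-econometric models.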
Abstract:
Overhead rigid conductor arrangements for current collection for railway traction have some advantages compared to other, more conventional, energy supply systems. They are simple, robust and easily maintained, not to mention their flexibility as to the required height for installation, which makes them particularly suitable for use in subway infrastructures. Nevertheless, due to the increasing speeds of new vehicles running on modern subway lines, a more efficient design is required for this kind of system. In this paper, the authors present a dynamic analysis of overhead conductor rail systems focused on the design of a new conductor profile with a dynamic behaviour superior to that of the system currently in use. This means that either an increase in running speed can be attained, which at present does not exceed 110 km/h, or an increase in the distance between the rigid catenary supports with the ensuing saving in installation costs. This study has been carried out using simulation techniques. The ANSYS programme has been used for the finite element modelling and the SIMPACK programme for the elastic multibody systems analysis.
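One quantity such a redesign turns on is the bending natural frequency of a span between rigid supports; a simply supported Euler-Bernoulli idealisation (all section values below are hypothetical, not the paper's profile) gives a feel for the trade-off between span length and dynamic behaviour:

```python
import math

# First bending natural frequency of one span of the conductor
# profile, idealised as a simply supported Euler-Bernoulli beam.
E = 70e9        # aluminium Young's modulus, Pa (assumed)
I = 3.0e-6      # second moment of area of the profile, m^4 (assumed)
rho_A = 6.0     # mass per unit length, kg/m (assumed)
L = 10.0        # distance between rigid catenary supports, m (assumed)

# f1 = (pi / 2 L^2) * sqrt(E I / rho A) for the first mode
f1 = (math.pi / (2 * L**2)) * math.sqrt(E * I / rho_A)
print(f"f1 = {f1:.1f} Hz")
```

A stiffer profile (larger I) or shorter span raises f1, which is the lever behind either higher running speeds or longer distances between supports.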
Abstract:
The modelling of critical infrastructures (CIs) is an important issue that needs to be properly addressed, for several reasons. It is a basic support for making decisions about operation and risk reduction. It might help in understanding high-level states at the system-of-systems layer, which are not readily evident to the organisations that manage the lower-level technical systems. Moreover, it is also indispensable for setting a common reference between operator and authorities, for agreeing on the incident scenarios that might affect those infrastructures. So far, critical infrastructures have been modelled ad hoc, on the basis of knowledge and practice derived from less complex systems. As there is no theoretical framework, most of these efforts proceed without clear guides and goals, using informally defined schemas based mostly on boxes and arrows. Different CIs (the electricity grid, telecommunications networks, emergency support, etc.) have been modelled using particular schemas that were not directly translatable from one CI to another. If there is a desire to build a science of CIs, it is because there are some observable commonalities that different CIs share. Up until now, however, those commonalities have not been adequately compiled or categorized, so building models of CIs rooted in such commonalities was not possible. This report explores the issue of which elements underlie every CI and how those elements can be used to develop a modelling language that will enable CI modelling and, subsequently, analysis of CI interactions, with a special focus on resilience.
Abstract:
Recently, we have presented some studies concerning the analysis, design and optimization of an experimental device developed in the UK, the GPTAD, which has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Based on the idea of a modification of the previous device, in this work we present a model based on the use of stents like the Solitaire™ FR, which is in contact with the clot itself. In the case of such devices, the stent is self-expandable and the extraction of the blood clot is facilitated by the stent, which must be inside the clot. Such stents are generally put in position using a guidewire inserted into the catheter. This type of modelling could potentially be useful in showing how the blood clot is moved by the various forces involved. The modelling has been undertaken by analysing resistance, compliance and inertance effects. We model an artery and blood clot for a range of guidewire forces, and in each case we determine the interaction between blood clot, stent and artery.
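A minimal sketch of the lumped-parameter (resistance-compliance-inertance) modelling mentioned above, with the guidewire pull represented as a pressure step; all parameter values are hypothetical, chosen only to make the sketch run:

```python
# Hydraulic analogue of the clot-stent-artery interaction: flow q
# through a resistance R and inertance Lin, charging a compliance C,
# driven by a pressure source p(t).  All values are hypothetical.
R, Lin, C = 1.0e8, 1.0e5, 1.0e-9    # Pa*s/m^3, Pa*s^2/m^3, m^3/Pa

def p_source(t):
    return 4000.0 if t > 0.01 else 0.0    # step pull on the guidewire

dt, T = 1e-5, 0.1
q, v = 0.0, 0.0                     # flow and stored-volume states
for k in range(int(T / dt)):
    t = k * dt
    dq = (p_source(t) - R * q - v / C) / Lin   # inertance branch
    q += dq * dt                               # explicit Euler step
    v += q * dt                                # compliance charges up
print(f"flow = {q:.3e} m^3/s, displaced volume = {v:.3e} m^3")
```

The same series branch can be duplicated for the clot, the stent and the arterial wall, so that the relative motion of the clot under different guidewire forces falls out of the coupled network.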
Abstract:
Analysis of river flow using hydraulic modelling and its implications in derived environmental applications are inextricably connected with the way in which the river boundary shape is represented. This relationship is scale-dependent upon the modelling resolution, which in turn determines the importance of the subscale performance of the model and the way subscale (surface and flow) processes are parameterised. Commonly, the subscale behaviour of the model relies upon a roughness parameterisation whose meaning depends on the dimensionality of the hydraulic model and the resolution of the topographic representation scale. The latter is, in turn, dependent on the resolution of the computational mesh as well as on the detail of the measured topographic data. Flow results are affected by these interactions between scale and subscale parameterisation according to the dimensionality approach. The aim of this dissertation is the evaluation of these interactions upon hydraulic modelling results. The current availability of high-resolution topographic sources motivates this research, which is tackled using a suitable roughness approach for each dimensionality with the purpose of assessing the interactions. A 1D HEC-RAS model, a 2D raster-based diffusion-wave model with a scale-dependent distributed roughness parameterisation, and a 3D finite volume scheme with a porosity algorithm approach to incorporate complex topography have been used. Different topographic sources are assessed using the 1D scheme. LiDAR data are used to isolate the mesh resolution from the effects of the topographic content of the DEM upon 2D and 3D flow results. A distributed roughness parameterisation, using a roughness height approach dependent upon both mesh resolution and topographic content, is developed and evaluated for the 2D scheme. Grain-size data and fractal methods are used for the reconstruction of topography with microscale information, required for some applications but not easily available. The sensitivity of hydraulic parameters to this topographic parameterisation is evaluated in the 3D scheme at different mesh resolutions. Finally, the structural variability of the simulated flow is analysed and related to scale interactions. Model simulations demonstrate (i) the importance of the topographic source in a 1D model; (ii) that the mesh resolution approach is dominant in 2D and 3D simulations, whereas in a 1D model the impacts of the topographic source and even of the roughness parameterisation are more critical; (iii) the increase in sensitivity to roughness parameterisation in 1D and 2D schemes with detailed topographic sources and finer mesh resolutions; and (iv) that topographic content and microtopography impact the vertical profile of computed 3D velocity in a depth-dependent way, whereas 2D results are not affected by topographic content variations. Finally, the spatial analysis shows that mesh resolution controls high-resolution model scale results; roughness parameterisation controls 2D simulation results for a constant mesh resolution; and topographic content and microtopography variations impact the organisation of flow results depth-dependently in a 3D scheme. Resumen (Spanish abstract): Topography plays a fundamental role in the distribution of water and energy in natural landscapes (Beven and Kirkby 1979; Wood et al. 1997).
Hydraulic simulation combined with remote-sensing terrain measurement methods constitutes a powerful research tool for understanding the behaviour of water flows as driven by the variability of the surface over which they flow. The representation and incorporation of topography in the hydraulic scheme is of crucial importance for the results and determines the development of their applications in the environmental field. Any simulation is a simplification of a real-world process, and the degree of simplification therefore determines the meaning of the simulated results. This reasoning is particularly difficult to carry over to hydraulic simulation, where aspects of scale as different as the scale of the flow processes and the scale of boundary representation are considered jointly, even in parameterisation stages (e.g. roughness parameterisation). On the one hand, this is because scale decisions condition one another (e.g. the dimensionality of the model conditions the scale of boundary representation) and therefore interact closely in their results. On the other hand, it is due to the high numerical and computational requirements of an explicit high-resolution representation of the flow processes and of the mesh discretisation. Moreover, prior to hydraulic modelling, the terrain surface over which the water flows must itself be modelled and therefore has its own representation scale, which in turn depends on the scale of the measured topographic data from which the model is built. Ultimately, it is this topography that determines the spatial behaviour of the flow. Hence the scale of the topography in its measurement and modelling stages (data resolution and topographic representation), prior to its incorporation into the hydraulic model, will itself produce an impact that accumulates with the overall impact resulting from the computational scale of the hydraulic model and its dimension. Understanding the interactions between complex boundary geometries and the flow structure using hydraulic modelling depends on the scales considered in the simplification of the hydraulic and terrain processes (model dimension, computational scale size and scale of the topographic data). The nature of the application of the hydraulic model (e.g. physical habitat, flood risk analysis, sediment transport) determines first the scale of the study and hence the detail of the processes to be simulated in the model (i.e. the dimensionality) and, consequently, the computational scale at which the calculations will be performed (i.e. the computational resolution). The latter in turn determines the geographic detail with which the boundary must be represented, in accordance with the resolution of the computational mesh. Parameterisation seeks to incorporate into the hydraulic model the quantification of the processes and physical conditions of the natural system, and must therefore include not only those processes that take place at the modelling scale, but also those that take place at a subscale level and that must be defined through scaling relationships with the explicitly modelled variables.
This parameterisation is implemented in practice by providing data to the model; hence the scale of the geographic data used to parameterise the model will not only influence the results, but will also determine the importance of the subscale behaviour of the model and the way in which those processes must be parameterised (e.g. the natural variability of the terrain within the discretisation cell, or the flow in the lateral and vertical directions in a one-dimensional model). In this thesis, the one-dimensional model HEC-RAS (HEC 1998b), a two-dimensional raster-based wave-propagation model (Yu 2005) and a three-dimensional finite volume scheme with a porosity algorithm to incorporate the topography (Lane et al. 2004; Hardy et al. 2005) have been used. The boundary geometry is defined by the topographic representation scale (mesh resolution and topographic content), which in turn depends on the scale of the cartographic source. All these scale factors interact in the response of the hydraulic model to the topography. In recent years, methods such as fractal analysis and the geostatistical techniques used to represent and analyse geographic features (e.g. in surface characterisation (Herzfeld and Overbeck 1999; Butler et al. 2001)) have been promoting new approaches to the quantification of scale effects (Lam et al. 2004; Atkinson and Tate 2000; Lam et al. 2006) through the analysis of the spatial structure of the variable (e.g. Bishop et al. 2006; Ju et al. 2005; Myint et al. 2004; Weng 2002; Bian and Xie 2004; Southworth et al. 2006; Pozdnyakova et al. 2005; Kyriakidis and Goodchild 2006). These methods quantify both the range of values of the variable present at different scales and the homogeneity or heterogeneity of the spatially distributed variable (Lam et al. 2004). In this thesis, these techniques have been used to analyse the impact of topography on the structure of the simulated hydraulic results. High-resolution remote sensing data and GIS techniques are also being used for a better understanding of scale effects in environmental models (Marceau 1999; Skidmore 2002; Goodchild 2003) and are used in this thesis. This thesis, as a body of research, addresses the interactions of these scales in hydraulic modelling from a global and interrelated point of view. However, the structure and main focus of the experiments are related to the spatial notions of representation scale in relation to a global view of the interactions between scales. In theory, the topographic representation must characterise the surface over which the water runs at a discretisation scale that is adequate (in accordance with the purpose and dimension of the model), so that it reflects the processes of interest. The roughness parameterisation must reflect the effects of the variability of the surface at scales finer than those explicitly represented in the topographic mesh (i.e. the discretisation scale). Clearly, both concepts are physically related by a
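As a point of reference for the roughness-height parameterisation discussed in this abstract (standard relations, not taken from the thesis), depth-averaged schemes commonly close the friction term with Manning's equation, and the roughness height k_s can be linked to Manning's n by a Strickler-type scaling whose constant varies by author:

```latex
v = \frac{1}{n}\, R_h^{2/3}\, S^{1/2},
\qquad
n \approx \frac{k_s^{1/6}}{26}
```

where v is the mean velocity, R_h the hydraulic radius, S the energy slope and k_s a representative roughness height (often taken from grain-size percentiles); making k_s depend on mesh resolution and topographic content is precisely what turns n into a scale-dependent, distributed parameter.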
Abstract:
Coupled device and process simulation tools, collectively known as technology computer-aided design (TCAD), have been used in the integrated circuit industry for over 30 years. These tools allow researchers to quickly converge on optimized device designs and manufacturing processes with minimal experimental expenditure. The PV industry has been slower to adopt these tools, but is quickly developing competency in using them. This paper introduces a predictive defect-engineering paradigm and simulation tool, while demonstrating its effectiveness at increasing the performance and throughput of current industrial processes. The impurity-to-efficiency (I2E) simulator is a coupled process and device simulation tool that links wafer material purity, processing parameters and cell design to device performance. The tool has been validated with experimental data and used successfully with partners in industry. The simulator has also been deployed in a free web-accessible applet, which is available for use by the industrial and academic communities.
Abstract:
In this paper, a simulation tool for assisting the deployment of wireless sensor networks is introduced, and simulation results are verified in a specific indoor environment. The simulation tool supports two modes: a deterministic mode and a stochastic mode. The deterministic mode is environment-dependent, in that information about the environment must be provided beforehand; a ray-tracing method and a deterministic propagation model are employed in order to increase the accuracy of the estimated coverage, connectivity and routing. The stochastic mode is useful for large-scale random deployment without prior knowledge of geographic information. The Dynamic Source Routing protocol (DSR) and the Ad hoc On-Demand Distance Vector Routing protocol (AODV) are implemented in order to calculate the topology of the WSN. Hence this tool gives a direct view of the performance of a WSN and assists users in finding the potential problems of a wireless sensor network before real deployment. Finally, a case study is carried out at the Centro de Electronica Industrial (CEI), and the simulation results on coverage, connectivity and routing are verified by measurement.
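A minimal sketch of the link-budget test underlying coverage and connectivity estimation, in the spirit of the stochastic mode (the deterministic mode relies on ray tracing instead); all parameter values are hypothetical:

```python
import math

# Log-distance path-loss model: nodes are connected when the received
# power stays above the radio sensitivity.  Values are hypothetical.
PL0, N_EXP, D0 = 40.0, 3.0, 1.0      # dB at reference d0, exponent, m
TX_POWER, SENSITIVITY = 0.0, -90.0   # dBm

def path_loss_db(d):
    return PL0 + 10.0 * N_EXP * math.log10(max(d, D0) / D0)

def connected(a, b):
    d = math.dist(a, b)
    return TX_POWER - path_loss_db(d) >= SENSITIVITY

nodes = [(0, 0), (12, 5), (25, 9)]   # sensor positions in metres
for i, a in enumerate(nodes):
    for j, b in enumerate(nodes[i + 1:], i + 1):
        print(i, j, "link" if connected(a, b) else "no link")
```

The resulting connectivity graph is what a routing protocol such as DSR or AODV would then operate on to compute the network topology.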
Abstract:
This paper presents an open-source simulation tool which is being developed in the frame of a European research project. The tool, whose final version will be freely available through a website, allows the modelling and design of different types of grid-connected PV systems, such as large grid-connected plants and building-integrated installations. The tool is based on previous software developed by the IES-UPM, whose models and energy-loss scenarios have been validated in the commissioning of PV projects carried out in Spain, Portugal, France and Italy, whose aggregated capacity is nearly 300 MW. This link between design and commissioning is one of the key points of the tool presented here, and is not usually addressed by present commercial software. The tool provides, among other simulation results, the energy yield, the analysis and breakdown of energy losses, and estimations of financial returns adapted to the legal and financial frameworks of each European country. Besides, educational facilities will be developed and integrated in the tool, devoted not only to learning how to use this software, but also to training users in best PV system design practices. The tool will also include the recommendations of several PV community experts, who have been invited to identify present necessities in the field of PV systems simulation: for example, the possibility of using meteorological forecasts as input data, or modelling the integration of large energy storage systems such as vanadium redox or lithium-ion batteries. Finally, it is worth mentioning that during the verification and testing stages of this software's development, it will also be open to suggestions received from the different actors of the PV community, such as promoters, installers and consultants.
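As a back-of-envelope illustration of the energy-yield figure such a tool refines with validated loss scenarios (all numbers below are hypothetical):

```python
# Crude PV energy-yield estimate; a full tool replaces the single
# performance ratio with a detailed breakdown of loss scenarios.
capacity_kwp = 1000.0        # plant nominal power (assumed)
irradiation_kwh_m2 = 1800.0  # annual in-plane irradiation (assumed)
performance_ratio = 0.80     # aggregate of all losses (assumed)

# Specific yield (kWh/kWp) scales irradiation relative to STC (1 kW/m^2)
specific_yield = irradiation_kwh_m2 * performance_ratio
annual_energy_mwh = capacity_kwp * specific_yield / 1000.0
print(f"{specific_yield:.0f} kWh/kWp -> {annual_energy_mwh:.0f} MWh/yr")
```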
Abstract:
Molecular modelling of human CYP1B1 based on homology with the mammalian P450 CYP2C5, of known three-dimensional structure, is reported. The enzyme model has been used to investigate the likely mode of binding for selected CYP1B1 substrates, particularly with regard to the possible effects of allelic variants of CYP1B1 on metabolism. In general, it appears that the CYP1B1 model is consistent with known substrate selectivity for the enzyme, and the sites of metabolism can be rationalized in terms of specific contacts with key amino acid residues within the CYP1B1 heme locus. Furthermore, a mode of binding interaction for the inhibitor α-naphthoflavone is presented which accords with currently available information. The current paper shows that a combination of molecular modelling and experimental determinations of substrate metabolism for CYP1B1 allelic variants can aid the understanding of structure-function relationships within P450 enzymes. (C) 2003 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
Patellamide D (patH₄) is a cyclic octapeptide isolated from the ascidian Lissoclinum patella. The peptide possesses a 24-azacrown-8 macrocyclic structure containing two oxazoline and two thiazole rings, each separated by an amino acid. The present spectrophotometric, electron paramagnetic resonance (EPR) and mass spectral studies show that patellamide D reacts with CuCl₂ and triethylamine in acetonitrile to form mononuclear and binuclear copper(II) complexes containing chloride. Molecular modelling and EPR studies suggest that the chloride anion bridges the copper(II) ions in the binuclear complex [Cu₂(patH₂)(μ-Cl)]⁺. These results contrast with a previous study employing both base and methanol, the latter substituting for chloride in the copper(II) complexes en route to the stable μ-carbonato binuclear copper(II) complex [Cu₂(patH₂)(μ-CO₃)]. Solvent clearly plays an important role both in stabilising these metal ion complexes and in influencing their chemical reactivities. (C) 2004 Elsevier Inc. All rights reserved.