861 results for Modeling and Simulation Challenges
Abstract:
The multi-scale synoptic circulation system in the southeastern Brazil (SEBRA) region is presented using a feature-oriented approach. Prevalent synoptic circulation structures, or "features," are identified from previous observational studies. These features include the southward-flowing Brazil Current (BC), the eddies off Cabo Sao Tome (CST - 22 degrees S) and off Cabo Frio (CF - 23 degrees S), and the upwelling region off CF and CST. Their synoptic water-mass structures are characterized and parameterized to develop temperature-salinity (T-S) feature models. Following the methodology of [Gangopadhyay, A., Robinson, A.R., Haley, P.J., Leslie, W.J., Lozano, C.J., Bisagni, J., Yu, Z., 2003. Feature-oriented regional modeling and simulation (FORMS) in the Gulf of Maine and Georges Bank. Cont. Shelf Res. 23 (3-4), 317-353], a synoptic initialization scheme for feature-oriented regional modeling and simulation (FORMS) of the circulation in this region is then developed. First, the temperature and salinity feature-model profiles are placed on a regional circulation template and objectively analyzed with available background climatology in the deep region. These initialization fields are then used for dynamical simulations via the Princeton Ocean Model (POM). The first few applications of this methodology are presented in this paper: the BC meandering, the BC-eddy interaction, and the meander-eddy-upwelling system (MEUS) simulations. Preliminary validation results include realistic wave growth, eddy formation, and sustained upwelling. Our future plans include applying these feature models with satellite data, in-situ data, and advanced data-assimilation schemes for nowcasting and forecasting the SEBRA region. (c) 2008 Elsevier Ltd. All rights reserved.
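The initialization step described above, placing synoptic feature-model profiles on a template and relaxing toward background climatology away from the features, can be sketched with a simple distance-weighted blend. The profiles and the e-folding scale below are hypothetical illustrations, not the paper's actual parameters or its objective-analysis scheme.

```python
import numpy as np

def blend_profiles(feature_T, clim_T, dist_km, e_fold_km=50.0):
    """Blend a synoptic feature-model temperature profile with background
    climatology: the feature model dominates near the feature axis and the
    field relaxes to climatology with distance (hypothetical e-folding scale)."""
    w = np.exp(-dist_km / e_fold_km)          # weight -> 1 at the feature axis
    return w * np.asarray(feature_T) + (1.0 - w) * np.asarray(clim_T)

# Idealized depth-indexed temperature profiles (degrees C)
feature_T = np.array([26.0, 22.0, 15.0, 8.0])   # warm current core (toy values)
clim_T    = np.array([24.0, 20.0, 14.0, 8.0])   # background climatology (toy)
print(blend_profiles(feature_T, clim_T, dist_km=0.0))    # pure feature model
print(blend_profiles(feature_T, clim_T, dist_km=500.0))  # essentially climatology
```

A full FORMS-style initialization would replace this weighting with a proper objective analysis of many such profiles at once.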
Abstract:
The main objective of this paper is to present modelling solutions for floating devices that can be used for harnessing energy from ocean currents. It is structured into three main parts. First, the growing interest in marine renewable energy in general, and in extracting energy from currents in particular, is presented, showing the large number of solutions that are emerging and some of the most significant types. The GESMEY generator is presented in the second part. It is based on a new concept that has been patented by the Universidad Politécnica de Madrid and is currently being developed through a collaborative agreement with the SOERMAR Foundation. The main feature of this generator is that it operates fully submerged, and no additional facilities are required to bring it to a floating state for maintenance, which greatly increases its performance. The third part of the article presents the modelling and simulation challenges that arise in the development of devices for harnessing the energy of marine currents, along with some solutions adopted within the frame of the GESMEY Project, with particular emphasis on the dynamics of the generator and its control.
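For context, the resource such devices harvest follows the same actuator-disc relation used for wind turbines, P = ½ρAv³Cp. A minimal estimate is sketched below with generic placeholder parameters, not GESMEY's actual specification:

```python
import math

def current_power_kw(rotor_diameter_m, speed_ms, cp=0.4, rho=1025.0):
    """Hydrokinetic power captured by a marine current turbine
    (generic actuator-disc estimate): P = 0.5 * rho * A * v^3 * Cp."""
    area = math.pi * (rotor_diameter_m / 2.0) ** 2   # swept rotor area, m^2
    return 0.5 * rho * area * speed_ms ** 3 * cp / 1e3  # kW

# A 2 m/s current through a hypothetical 10 m rotor at Cp = 0.4
print(round(current_power_kw(10.0, 2.0), 1))  # about 129 kW
```

The cubic dependence on current speed is what makes site selection and device dynamics (keeping the rotor aligned with the flow) so important.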
Abstract:
Petri Nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems and have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand. Therefore, HLPNs are more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is a software tool supporting the formal modeling capabilities in this framework. For analysis, this framework combines three complementary techniques: simulation, explicit state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method, but only covers some execution paths in an HLPN model. Explicit state model checking covers all the execution paths but suffers from the state explosion problem.
BMC is a tradeoff, as it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
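The token-game semantics that underlies both simulation and state-space exploration can be sketched for a low-level place/transition net; HLPNs such as PrT nets generalize this with structured tokens and transition formulas, which this toy sketch omits:

```python
# Minimal place/transition Petri net: a marking maps places to token counts,
# and a transition has pre- and post-condition multisets of places.
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from inputs, produce on outputs."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: transition t moves a token from "ready" to "done"
m0 = {"ready": 2, "done": 0}
t_pre, t_post = {"ready": 1}, {"done": 1}
while enabled(m0, t_pre):          # simulate until t is disabled
    m0 = fire(m0, t_pre, t_post)
print(m0)  # {'ready': 0, 'done': 2}
```

Simulation follows one such firing sequence; explicit state model checking enumerates all reachable markings; BMC unrolls the firing relation to a bounded depth and hands it to a SAT/SMT solver.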
Abstract:
Crop models are simplified mathematical representations of the interacting biological and environmental components of the dynamic soil–plant–environment system. Sorghum crop modeling has evolved in parallel with crop modeling capability in general since its origins in the 1960s and 1970s. Here we briefly review the trajectory of sorghum crop modeling leading to the development of advanced models. We then (i) overview the structure and function of the sorghum model in the Agricultural Production Systems sIMulator (APSIM) to exemplify advanced modeling concepts that suit both agronomic and breeding applications, (ii) review an example of the use of sorghum modeling in supporting agronomic management decisions, (iii) review an example of the use of sorghum modeling in plant breeding, and (iv) consider implications for future roles of sorghum crop modeling. Modeling and simulation provide an avenue to explore the consequences of crop management decision options in situations confronted with risks associated with seasonal climate uncertainties. Here we consider the possibility of manipulating planting configuration and density in sorghum as a means to manipulate the productivity–risk trade-off. A simulation analysis of decision options is presented and avenues for its use with decision-makers are discussed. Modeling and simulation also provide opportunities to improve breeding efficiency, either by dissecting complex traits into more amenable targets for genetics and breeding, or by trait evaluation via phenotypic prediction in target production regions to help prioritize effort and assess breeding strategies. Here we consider studies on the stay-green trait in sorghum, which confers a yield advantage in water-limited situations, to exemplify both aspects. The possible future roles of sorghum modeling in agronomy and breeding are discussed, as are opportunities related to their synergistic interaction.
The potential to add significant value to the revolution in plant breeding associated with genomic technologies is identified as the new modeling frontier.
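The density-by-season simulation analysis of the productivity–risk trade-off can be caricatured with a toy Monte Carlo model. The yield and stress functions below are entirely hypothetical, not APSIM's sorghum physiology; they only illustrate the shape of the decision problem:

```python
import random

def simulate_yields(density, n_seasons=10000, seed=1):
    """Toy Monte Carlo of the productivity-risk trade-off: higher planting
    density raises potential yield but amplifies losses in dry seasons.
    Returns (mean yield, probability of yield below 1 t/ha)."""
    random.seed(seed)
    yields = []
    for _ in range(n_seasons):
        water = random.uniform(0.0, 1.0)      # seasonal water-supply index
        potential = 2.0 + 3.0 * density       # t/ha, rises with density (toy)
        stress = max(0.0, density - water)    # dry years hurt dense stands
        yields.append(max(0.0, potential * (1.0 - stress)))
    mean_y = sum(yields) / n_seasons
    downside = sum(1 for y in yields if y < 1.0) / n_seasons
    return mean_y, downside

for d in (0.3, 0.6, 0.9):
    mean_y, risk = simulate_yields(d)
    print(f"density {d:.1f}: mean {mean_y:.2f} t/ha, P(yield < 1 t/ha) {risk:.2f}")
```

Even in this caricature, the densest stand does not maximize mean yield and carries the only non-trivial downside risk, which is the kind of trade-off a simulation analysis puts in front of decision-makers.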
Abstract:
This paper presents the steps and the challenges for implementing analytical, physics-based models for the insulated gate bipolar transistor (IGBT) and the PIN diode in hardware and more specifically in field programmable gate arrays (FPGAs). The models can be utilised in hardware co-simulation of complex power electronic converters and entire power systems in order to reduce the simulation time without compromising the accuracy of results. Such a co-simulation allows reliable prediction of the system's performance as well as accurate investigation of the power devices' behaviour during operation. Ultimately, this will allow application-specific optimisation of the devices' structure, circuit topologies as well as enhancement of the control and/or protection schemes.
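One reason such physics-based device models are hard to put in hardware is that even the simplest static diode law needs an exponential per evaluation, which FPGA implementations typically replace with a lookup table. The sketch below uses the textbook Shockley equation with placeholder constants, far simpler than the physics-based PIN/IGBT models the paper discusses, purely to illustrate the substitution:

```python
import math

def diode_current(v, i_s=1e-9, n=2.0, v_t=0.02585):
    """Static Shockley diode equation I = Is*(exp(V/(n*Vt)) - 1)
    (placeholder parameters; not a physics-based PIN model)."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# Lookup-table version: a common FPGA-friendly substitute for exp()
V_STEP = 0.01                                            # 10 mV grid
TABLE = [diode_current(k * V_STEP) for k in range(101)]  # 0..1 V

def diode_current_lut(v):
    """Nearest-grid-point lookup; a real design would interpolate."""
    k = min(100, max(0, round(v / V_STEP)))
    return TABLE[k]

print(f"I(0.7 V) = {diode_current(0.7):.4e} A")
```

Trading the transcendental evaluation for table lookups (plus interpolation and fixed-point scaling) is what makes per-switching-cycle co-simulation feasible at FPGA clock rates.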
Abstract:
Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future.
Abstract:
High fidelity simulation as a teaching and learning approach is being embraced by many schools of nursing. Our school embarked on integrating high fidelity (HF) simulation into the undergraduate clinical education program in 2011. Low and medium fidelity simulation has been used for many years, but this did not simplify the integration of HF simulation. Alongside considerations of how and where HF simulation would be integrated, issues arose with student consent and participation for observed activities; data management of video files; staff development; and conceptualising how methods for student learning could be researched. Simulation for undergraduate student nurses commenced as a formative learning activity, undertaken in groups of eight, where four students undertake the ‘doing’ role and four are structured observers, who then take a formal role in the simulation debrief. Challenges for integrating simulation into student learning included conceptualising and developing scenarios to trigger students’ decision making and application of skills, knowledge and attitudes explicit to solving clinical ‘problems’. Developing and planning scenarios for students to ‘try out’ skills and make decisions for problem solving lay beyond choosing the pre-existing scenarios built into the software. The supplied scenarios were not concept based but rather knowledge, skills and technology (of the manikin) focussed. Challenges lay in using the technology for the purpose of building conceptual mastery rather than simply because it was available. As we integrated the use of HF simulation into the final year of the program, the focus was on building skills, knowledge and attitudes that went beyond technical skill, and provided an opportunity to bridge the gap with theory-based knowledge that students often found difficult to link to clinical reality.
We wished to provide opportunities to develop experiential knowledge based on application and clinical reasoning processes in team environments where problems are encountered and where, to solve them, the nurse must show leadership and direction. Other challenges included students consenting to simulations being videotaped, and the ethical considerations of this: for example, if one student in a group of eight did not consent, did this mean they missed the opportunity to undertake simulation, or that others in the group might be disadvantaged by being unable to review their performance? This has implications for freely given consent, but also for equity of access to learning opportunities for students who wished to be taped and those who did not. Alongside this issue were the details of data management, storage and access. Developing staff with varying levels of computer skills to use the software and to undertake a different approach to being the ‘teacher’ required innovation, for which we took an experiential approach. Considering explicit learning approaches to be trialled was not a difficult proposition, but considering how to enact this as research, with issues of blinding, timetabling of blinded groups, and reducing bias when testing the results of different learning approaches, along with gaining ethical approval, was problematic. This presentation gives examples of these challenges and how we overcame them.
Abstract:
Structural Health Monitoring (SHM) systems require the integration of non-destructive technologies into structural design and operational processes. Modeling and simulation of complex NDE inspection processes are important aspects of the development and deployment of SHM technologies. Ray tracing techniques are vital simulation tools for visualizing the wave path inside a material. These techniques also help in optimizing the location of transducers and their orientation with respect to the zone of interrogation, increasing the chances of detection and identification of a flaw in that zone. While current state-of-the-art techniques such as ray tracing based on geometric principles help in such visualization, other information, such as signal losses due to the spherical or cylindrical shape of the wave front, is rarely taken into consideration. The problem becomes more complicated in the case of dispersive guided wave propagation and near-field defect scattering. We review the existing models and tools for ultrasonic NDE simulation in structural components. As an initial step, we develop a ray-tracing approach in which phase and spectral information are preserved. This enables one to study wave scattering beyond simple time-of-flight calculation of rays. Challenges in the theory and modeling of defects of various kinds are discussed. Additional considerations such as signal decay and the physics of scattering are reviewed, and the challenges involved in realistic computational implementation are discussed. The potential application of this approach to SHM system design is highlighted; by applying it to complex structural components such as airframe structures, SHM is demonstrated to provide additional value in terms of lighter weight and/or enhanced longevity, resulting from an extension of the damage tolerance design principle without compromising safety and reliability.
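A minimal version of a ray that carries phase and spreading-loss amplitude, rather than just a time of flight, might look like the sketch below. It assumes a single nondispersive ray with hypothetical parameters; for guided waves the speed would itself depend on frequency and mode:

```python
import cmath
import math

def ray_arrival(freq_hz, path_m, c_ms, spreading="spherical"):
    """Single-ray arrival preserving phase and geometric-spreading amplitude.
    Returns (time of flight, complex amplitude). Illustrative sketch only."""
    t = path_m / c_ms                          # time of flight
    if spreading == "spherical":
        amp = 1.0 / path_m                     # 1/r decay of a spherical front
    else:
        amp = 1.0 / math.sqrt(path_m)          # 1/sqrt(r) for a cylindrical front
    phase = -2.0 * math.pi * freq_hz * t       # phase accumulated along the path
    return t, amp * cmath.exp(1j * phase)

# Hypothetical 5 MHz ray over 0.1 m at a bulk-wave speed of 6300 m/s
t, a = ray_arrival(5e6, 0.1, 6300.0)
print(f"time of flight {t * 1e6:.2f} us, |amplitude| {abs(a):.1f}")
```

Summing such complex arrivals over many ray paths (and weighting by the source spectrum) is what lets a ray model predict interference and scattered waveforms instead of arrival times alone.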
Abstract:
This paper studies the stability of jointed rock slopes using our improved three-dimensional discrete element method (DEM) and physical modeling. Results show that the DEM can simulate all failure modes of rock slopes with different joint configurations. The stress in each rock block is not homogeneous, and blocks rotate as failure develops. Failure modes depend on the configuration of the joints: toppling failure is observed for the slope with straight joints, and sliding failure for the slope with staged joints. The DEM results are also compared with those of the limit equilibrium method (LEM). Without considering the joints in rock masses, the LEM predicts a much higher factor of safety than physical modeling and the DEM. The failure mode and factor of safety predicted by the DEM are in good agreement with laboratory tests for the jointed rock slopes considered.
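For comparison, the limit equilibrium calculation against which the DEM is benchmarked reduces, in its simplest planar-slide form, to a ratio of resisting to driving forces. The block parameters below are hypothetical; this single-block form also ignores the joint sets and block rotation that the DEM resolves:

```python
import math

def planar_fos(weight_kn, slope_deg, cohesion_kpa, area_m2, phi_deg):
    """Factor of safety for a single block sliding on a planar surface:
    FoS = (c*A + W*cos(beta)*tan(phi)) / (W*sin(beta))."""
    beta = math.radians(slope_deg)
    driving = weight_kn * math.sin(beta)       # downslope component of weight
    normal = weight_kn * math.cos(beta)        # normal force on the plane
    resisting = cohesion_kpa * area_m2 + normal * math.tan(math.radians(phi_deg))
    return resisting / driving

# Hypothetical 1000 kN block on a 35-degree plane, c = 20 kPa over 12 m^2, phi = 30 deg
print(round(planar_fos(1000.0, 35.0, 20.0, 12.0, 30.0), 2))
```

Because this formulation treats the mass as one rigid block on one plane, it cannot reproduce toppling or joint-controlled sliding, which is exactly the gap the DEM analysis in the study addresses.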
Abstract:
In this paper, we study the issues of modeling, numerical methods, and simulation, with comparison to experimental data, for a particle-fluid two-phase flow problem involving a solid-liquid mixed medium. The physical situation considered is a pulsed liquid fluidized bed. The mathematical model is based on the assumptions of one-dimensional flow, incompressibility in both the particle and fluid phases, equal particle diameters, and negligible wall friction on both phases. The model consists of a set of coupled differential equations describing the conservation of mass and momentum in both phases, with coupling and interaction between the two phases. We demonstrate conditions under which the system is either mathematically well posed or ill posed. We consider the general model with additional physical viscosities and/or additional virtual mass forces, both of which stabilize the system. Two numerical methods, one first-order accurate and the other fifth-order accurate, are used to solve the models. A change-of-variable technique effectively handles the changing domain and boundary conditions. The numerical methods are demonstrated to be stable and convergent through careful numerical experiments. Simulation results for a realistic pulsed liquid fluidized bed are provided and compared with experimental data. (C) 2004 Elsevier Ltd. All rights reserved.
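The flavor of a first-order scheme for such conservation laws can be shown on the simplest hyperbolic model problem, linear advection, where upwind differencing is stable under the CFL condition. This is a stand-in sketch on a periodic domain, not the paper's coupled two-phase system:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind step for u_t + a*u_x = 0 with a > 0,
    on a periodic domain (np.roll supplies the left neighbor)."""
    return u - a * dt / dx * (u - np.roll(u, 1))

# Advect a square pulse; CFL number a*dt/dx = 0.5 keeps the scheme stable
n, a, dx = 100, 1.0, 1.0 / 100
dt = 0.5 * dx / a
x = np.arange(n)
u = np.where((x > 40) & (x < 60), 1.0, 0.0)
mass0 = u.sum()
for _ in range(200):
    u = upwind_step(u, a, dx, dt)
print(abs(u.sum() - mass0) < 1e-9)   # conservative: total mass is preserved
```

The scheme is conservative and monotone but diffusive, which is why the study pairs it with a fifth-order-accurate method; the well-posedness questions for the two-phase system have no analogue in this scalar toy problem.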
Abstract:
CCR2b, a chemokine receptor for MCP-1, -2, -3, -4, plays an important role in a variety of diseases involving infection, inflammation, and/or injury, as well as being a coreceptor for HIV-1 infection. Two models of human CCR2b (hCCR2b) were generated by h
Abstract:
The chemokine receptor CCR5 is the receptor for several chemokines and the major coreceptor for entry of R5 human immunodeficiency virus type-1 strains into cells. Three-dimensional models of CCR5 were built using a homology modeling approach and a 1 ns molecular dynamics (MD) simulation. Because studies of site-directed mutagenesis and chimeric receptors have indicated that the N-terminus (Nt) and extracellular loops (ECLs) of CCR5 are important for ligand binding and viral fusion and entry, special attention was focused on disulfide bond function, conformational flexibility, hydrogen bonding, electrostatic interactions, and solvent-accessible surface area of the Nt and ECLs. We found that the extracellular segments of CCR5 formed a well-packed globular domain with complex interactions between them during most of the MD simulation, although the Nt region could at times protrude from this domain. The disulfide bond Cys20-Cys269 is essential in controlling the specific orientation of the Nt region and maintaining the conformational integrity of the extracellular domain. RMS comparison analysis between conformers revealed that the ECL1 of CCR5 stays relatively rigid, whereas the ECL2 and Nt are rather flexible. Solvent-accessible surface area calculations indicated that the charged residues within the Nt and ECL2 are often exposed to solvent. Integrating these results with available experimental data, a two-step gp120-CCR5 binding mechanism is proposed, emphasizing the dynamic interaction of the CCR5 extracellular domain with gp120. (C) 2004 Elsevier B.V. All rights reserved.
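The RMS comparison between conformers used to judge loop rigidity is, at its core, an RMSD over atomic coordinates. A minimal sketch over pre-aligned coordinates follows; the coordinates are toy values, not the CCR5 trajectory, and a real analysis would first superpose the structures:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two pre-aligned conformers
    given as lists of (x, y, z) tuples (no superposition step)."""
    n = len(coords_a)
    s = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
            for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
    return math.sqrt(s / n)

# Toy two-atom "loop" at two snapshots; a small RMSD indicates rigidity
loop_t0 = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
loop_t1 = [(0.1, 0.0, 0.0), (1.6, 0.1, 0.0)]
print(round(rmsd(loop_t0, loop_t1), 3))
```

Computing this per segment across MD snapshots is how one quantifies that ECL1 stays relatively rigid while ECL2 and the Nt fluctuate.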
Abstract:
Numerical analysis was used to study the deposition and burning characteristics of combining co-combustion with slagging combustion technologies. The pyrolysis and burning kinetic models of different fuels were implanted into the WBSF-PCC2 (wall burning and slag flow in pulverized co-combustion) computation code, and the slagging and co-combustion characteristics, especially the wall burning mechanism of different solid fuels and their effects on the whole burning behavior in the cylindrical combustor at different mixing ratios under the condition of keeping the heat input the same, were then simulated numerically. The results showed that adding wood powder at a 25% mass fraction can increase the temperature at the initial stage of combustion, which helps utilize the front space of the combustor. Adding wood powder at a 25% mass fraction can also increase the reaction rate at the initial combustion stage; the coal ignitability is improved, and the burnout efficiency of suspension and deposition particles is enhanced by about 5%, which helps coal particles burn completely and allows combustion devices to minimize their dimensions. The results also showed that adding wood powder at a proper ratio helps maintain combustion stability, not only because of the enhancement of the burning characteristics, but also because the running slag layer structure can be changed more continuously, which is very important for avoiding abnormal slag accumulation in the slagging combustor. The theoretical analysis in this paper proves that the unification of co-combustion and slagging combustion technologies is feasible, though more comprehensive and rigorous research is needed.
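The pyrolysis and char-burning kinetic models referred to are typically of Arrhenius form, k = A·exp(-Ea/(R·T)). The sketch below uses placeholder kinetic constants, not those of the WBSF-PCC2 code, to show why a hotter initial combustion stage accelerates the reaction rate so strongly:

```python
import math

def arrhenius_rate(temp_k, a=3.0e5, e_a=1.3e5):
    """Arrhenius kinetic rate k = A * exp(-Ea / (R * T)).
    A (1/s) and Ea (J/mol) here are placeholder values."""
    r_gas = 8.314  # universal gas constant, J/(mol*K)
    return a * math.exp(-e_a / (r_gas * temp_k))

# Raising the initial-stage gas temperature from 1200 K to 1500 K
# multiplies the kinetic rate by roughly an order of magnitude
ratio = arrhenius_rate(1500.0) / arrhenius_rate(1200.0)
print(f"rate(1500 K) / rate(1200 K) = {ratio:.1f}")
```

This exponential temperature sensitivity is why a modest temperature rise from added wood volatiles can noticeably improve coal ignitability and burnout in the front section of the combustor.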