525 results for Computational simulation
Abstract:
We present a virtual test bed for network security evaluation in mid-scale telecommunication networks. Migration from simulation scenarios to the test bed is supported, enabling researchers to evaluate experiments in a more realistic environment. We provide a comprehensive interface to manage, run and evaluate experiments. Using a concrete example, we show how the proposed test bed can be utilized.
Abstract:
The evolution of classic power grids to smart grids creates opportunities for most participants in the energy sector. Customers can save money by reducing energy consumption, energy providers can better predict energy demand, and the environment benefits, since lower energy consumption implies lower energy production and a decrease in emissions from plants. However, the information and communication systems supporting smart grids can also be subject to classical or new network attacks. Attacks can result in serious damage such as harming the privacy of customers, causing economic loss and even disturbing the power supply/demand balance of large regions and countries. In this paper, we give an overview of the German smart metering architecture, protocols and security. Afterwards, we present a simulation framework which enables researchers to analyze security aspects of smart metering scenarios.
Abstract:
This work identifies the limitations of n-way data analysis techniques in multidimensional stream data, such as Internet chat room communications data, and establishes a link between data collection and the performance of these techniques. Its contributions are twofold. First, it extends data analysis to multiple dimensions by constructing n-way data arrays known as higher-order tensors. Chat room tensors are generated by a simulator which collects and models actual communication data. The accuracy of the model is determined by the Kolmogorov-Smirnov goodness-of-fit test, which compares the simulated data with the observed (real) data. Second, a detailed computational comparison is performed to test several data analysis techniques, including SVD [1] and the multi-way techniques Tucker1, Tucker3 [2], and Parafac [3].
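As a hedged illustration of the two steps described above (goodness-of-fit testing of the simulator output, and construction of an n-way chat-room array), the Python sketch below compares invented simulated and observed inter-arrival times with the Kolmogorov-Smirnov test and stacks message counts into a three-way user x keyword x time-window tensor. The array sizes, variable names and the mode-0 unfolding used to mimic Tucker1 are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): compare simulated vs. observed
# inter-arrival times with the Kolmogorov-Smirnov test, then stack chat
# data into a 3-way tensor (user x keyword x time window).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical data: inter-arrival times (seconds) of chat messages.
observed = rng.exponential(scale=12.0, size=500)   # stand-in for real logs
simulated = rng.exponential(scale=11.5, size=500)  # stand-in for simulator output

stat, p_value = ks_2samp(observed, simulated)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")  # small statistic suggests a good fit

# Build an n-way array (here n = 3): message counts per
# (user, keyword, time-window) cell.
n_users, n_keywords, n_windows = 20, 50, 30
tensor = np.zeros((n_users, n_keywords, n_windows))
for user, keyword, window in zip(rng.integers(0, n_users, 2000),
                                 rng.integers(0, n_keywords, 2000),
                                 rng.integers(0, n_windows, 2000)):
    tensor[user, keyword, window] += 1

# Tucker1 on mode 0 reduces to an SVD of the mode-0 unfolding:
unfolded = tensor.reshape(n_users, -1)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
print("Leading singular values:", np.round(s[:5], 2))
```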
Abstract:
Theme paper for the Curriculum Innovation and Enhancement theme. AIM: This paper reports on a research project that trialled an educational strategy implemented in an undergraduate nursing curriculum. The project aimed to explore the effectiveness of ‘think aloud’ as a strategy for improving clinical reasoning for students in simulated clinical settings. BACKGROUND: Nurses are required to apply and utilise critical thinking skills to enable clinical reasoning and problem solving in the clinical setting (Lasater, 2007). Nursing students are expected to develop and display clinical reasoning skills in practice, but may struggle to articulate the reasons behind decisions about patient care. The ‘think aloud’ approach is an innovative learning/teaching method which can create an environment suitable for developing clinical reasoning skills in students (Banning, 2008; Lee and Ryan-Wenger, 1997). This project used the ‘think aloud’ strategy within a simulation context to provide a safe learning environment in which third year students were assisted to uncover cognitive approaches to assist in making effective patient care decisions, and to improve their confidence, clinical reasoning and active critical reflection about their practice. METHODS: In semester 2, 2011 at QUT, third year nursing students undertook high fidelity simulation (some for the first time), commencing in September 2011. There were two cohorts for strategy implementation (group 1 used think aloud as a strategy within the simulation; group 2 used no specific strategy beyond the nursing assessment frameworks used by all students) in relation to problem solving patient needs. The think aloud strategy was described to students in their pre-simulation briefing, with time allowed for clarification of the strategy. All other aspects of the simulations remained the same (resources, suggested nursing assessment frameworks, simulation session duration, size of simulation teams, preparatory materials). Ethics approval has been obtained for this project. RESULTS: Results of a qualitative analysis (in progress; to be completed by March 2012) of student and facilitator reports on students’ ability to meet the learning objectives of solving patient problems using clinical reasoning, and of their experience with the ‘think aloud’ method, will be presented. A comparison of clinical reasoning learning outcomes between the two groups will determine the effect on clinical reasoning for students responding to patient problems. CONCLUSIONS: In an environment of increasingly constrained clinical placement opportunities, exploration of alternative strategies to improve critical thinking skills and develop clinical reasoning and problem solving for nursing students is imperative in preparing nurses to respond to changing patient needs.
Abstract:
We introduce the Network Security Simulator (NeSSi2), an open source, discrete event-based network simulator. It incorporates a variety of features relevant to network security, distinguishing it from general-purpose network simulators. Compared to its predecessor NeSSi, it has been extended with a three-tier plugin architecture and a generic network model, shifting its focus towards a simulation framework for critical infrastructures. We demonstrate the gained adaptability with different use cases.
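The simulator's own API is not shown here; the following Python sketch is only a hedged illustration of the discrete event-based core idea (an event queue processed in timestamp order), not the NeSSi2 interface or its plugin architecture. All node names and events are invented.

```python
# Minimal discrete-event loop sketch (illustrative only; not the NeSSi2 API).
import heapq

events = []  # priority queue ordered by simulation time

def schedule(time, action, *args):
    """Put an event on the queue; ties are broken by insertion order."""
    heapq.heappush(events, (time, len(events), action, args))

def packet_arrival(node, packet_id):
    print(f"t={now:.2f}: node {node} received packet {packet_id}")
    # A security plugin could inspect the packet here and, for example,
    # schedule an alert event or drop the packet.

# Seed the simulation with a few packet arrivals (invented traffic).
for i, t in enumerate([0.5, 1.2, 1.2, 3.0]):
    schedule(t, packet_arrival, f"router-{i % 2}", i)

now = 0.0
while events:
    now, _, action, args = heapq.heappop(events)
    action(*args)
```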
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and comfortable, healthy environments for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Daylight factors (DF) or minimum illuminance values are currently the standard among building performance guidelines; however, previous research has shown the shortcomings of these metrics. Newer computer software for daylighting analysis offers more advanced metrics (Climate-Based Daylight Metrics, CBDM). Yet these tools (new metrics or simulation tools) are not currently understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most commonly used by industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that can import a wide variety of file types or be integrated into current 3D modelling software or packages. These tools need to be able to perform both point-in-time simulations and annual analyses. There is a current need for an open source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Current development of plug-in based software attempts to meet this need through third-party analysis, although some of these packages are heavily reliant on their host program. Such programs nevertheless allow dynamic daylighting simulation, making it easier to calculate accurate daylighting regardless of the modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
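For reference, the daylight factor mentioned above is simply the ratio of indoor to simultaneous outdoor horizontal illuminance under an overcast sky, whereas climate-based metrics count how often a threshold is met over annual occupied hours. The sketch below contrasts the two using invented illuminance values; it is not taken from any of the surveyed tools.

```python
# Hedged sketch: daylight factor vs. a crude climate-based occurrence count.
# Illuminance values below are invented for illustration.

def daylight_factor(indoor_lux: float, outdoor_lux: float) -> float:
    """DF (%) = indoor horizontal illuminance / outdoor unobstructed
    horizontal illuminance under the same overcast sky, times 100."""
    return 100.0 * indoor_lux / outdoor_lux

print(f"DF = {daylight_factor(250.0, 10000.0):.1f}%")  # 2.5%

# A climate-based style metric instead asks how often a point exceeds a
# threshold over the occupied hours of the year (here: toy hourly data).
hourly_indoor_lux = [0, 0, 120, 340, 560, 610, 480, 300, 90, 0]
threshold = 300.0  # lux, a common target in daylight autonomy metrics
fraction = sum(v >= threshold for v in hourly_indoor_lux) / len(hourly_indoor_lux)
print(f"Hours above {threshold:.0f} lux: {fraction:.0%} of occupied hours")
```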
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step, but rather require computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z − 1)/z, A ∈ R^(n×n) and b ∈ R^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur.
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
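To make the exponential Euler step concrete, the sketch below advances a small stiff linear test system u' = Au + b using u_{n+1} = u_n + h φ(hA)(Au_n + b), evaluating φ densely with scipy for this toy case. The thesis itself relies on Krylov subspace approximations of φ(A)b for the large Jacobians arising from TransPore, which this sketch does not reproduce; the test matrix and step size are invented.

```python
# Toy sketch of the exponential Euler step u_{n+1} = u_n + h*phi(h*A)*f(u_n),
# with phi(z) = (exp(z) - 1)/z evaluated densely via scipy.linalg.expm.
# Illustrative only; the thesis uses Krylov approximations of phi(A)b for
# the large Jacobians arising from the TransPore drying model.
import numpy as np
from scipy.linalg import expm, solve

def phi_matrix(M):
    """phi(M) = M^{-1} (expm(M) - I); fine for small, well-conditioned M."""
    n = M.shape[0]
    return solve(M, expm(M) - np.eye(n))

def exponential_euler(A, b, u0, h, steps):
    """Integrate u' = A u + b; the EEM step is exact for this linear problem."""
    u = u0.copy()
    phi_hA = phi_matrix(h * A)
    for _ in range(steps):
        u = u + h * (phi_hA @ (A @ u + b))
    return u

# Stiff 2x2 test problem (eigenvalues -1 and -1000), invented for illustration.
A = np.array([[-1.0, 0.0], [0.0, -1000.0]])
b = np.array([1.0, 2.0])
u0 = np.zeros(2)
print(exponential_euler(A, b, u0, h=0.1, steps=50))
# Steady state A u + b = 0 gives u = [1.0, 0.002]; the output approaches it.
```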
Abstract:
Vibration Based Damage Identification Techniques, which use modal data or their functions, have received significant research interest in recent years due to their ability to detect damage in structures and hence contribute towards the safety of those structures. In this context, Strain Energy Based Damage Indices (SEDIs), based on modal strain energy, have been successful in localising damage in structures made of homogeneous materials such as steel. However, their application to reinforced concrete (RC) structures needs further investigation due to the significant difference in the prominent damage type, the flexural crack. The work reported in this paper is an integral part of a comprehensive research program to develop and apply effective strain energy based damage indices to assess damage in reinforced concrete flexural members. This research program established (i) a suitable flexural crack simulation technique, (ii) four improved SEDIs and (iii) programmable sequential steps to minimise the effects of noise. This paper evaluates and ranks the four newly developed SEDIs and seven existing SEDIs for their ability to detect and localise flexural cracks in RC beams. Based on the results of the evaluations, it recommends the SEDIs for use with single and multiple vibration modes.
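As a hedged illustration of the general idea behind strain energy based damage indices (one basic curvature-based variant, not the four improved SEDIs developed in this program), the sketch below compares each beam element's share of modal strain energy, estimated from mode shape curvature, before and after a simulated local stiffness loss. The mode shapes, element count and damage perturbation are invented.

```python
# Hedged sketch of a simple modal-strain-energy damage indicator for a beam
# (a basic variant; not the improved SEDIs developed in this research program).
# The indicator compares each element's share of modal strain energy,
# estimated from mode-shape curvature, before and after damage.
import numpy as np

def element_energy_fractions(mode_shape, dx):
    """Fraction of modal strain energy in each element, using the curvature
    (second derivative) of the mode shape: U_j ~ (phi'')^2 * dx."""
    curvature = np.gradient(np.gradient(mode_shape, dx), dx)
    element_energy = curvature[:-1] ** 2 * dx      # one value per element
    return element_energy / element_energy.sum()

def damage_index(healthy_mode, damaged_mode, dx):
    """Index > 1 flags elements whose energy share grew after damage."""
    f_h = element_energy_fractions(healthy_mode, dx)
    f_d = element_energy_fractions(damaged_mode, dx)
    return f_d / (f_h + 1e-12)   # small epsilon guards near-zero curvature

# Toy example: first bending mode of a simply supported beam, with a
# localised curvature increase around mid-span mimicking a flexural crack.
L, n = 1.0, 101
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
healthy = np.sin(np.pi * x / L)
damaged = healthy + 0.02 * np.exp(-((x - 0.5 * L) ** 2) / 0.002)

beta = damage_index(healthy, damaged, dx)
print("Most suspect element:", int(np.argmax(beta)))  # near mid-span
```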
Abstract:
Triangle-shaped nanohole, nanodot, and lattice antidot structures in hexagonal boron nitride (h-BN) monolayer sheets are characterized with density functional theory calculations utilizing the local spin density approximation. We find that such structures may exhibit very large magnetic moments and associated spin splitting. N-terminated nanodots and antidots show strong spin anisotropy around the Fermi level, that is, half-metallicity. While B-terminated nanodots are shown to lack magnetism due to edge reconstruction, B-terminated nanoholes can retain magnetic character due to the enhanced structural stability of the surrounding two-dimensional matrix. In spite of significant lattice contraction due to the presence of multiple holes, antidot superlattices are predicted to be stable, exhibiting amplified magnetism as well as greatly enhanced half-metallicity. Collectively, the results indicate new opportunities for designing h-BN-based nanoscale devices with potential applications in the areas of spintronics, light emission, and photocatalysis.
Abstract:
Unmanned Aerial Vehicles (UAVs) have become a significant and growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen UAVs used in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the available space and the mass budget on the UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This paper presents the development of a parallel Hybrid Electric Propulsion System (HEPS) for a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the built-in Aerosonde UAV block and its parameters. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output for the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration is capable of achieving a fuel saving of 6.5% compared to the internal combustion engine (ICE)-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
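The Ideal Operating Line concept can be illustrated independently of Simulink: for each demanded power level, choose the engine speed and torque combination that minimises fuel flow. The Python sketch below does this on an invented fuel-flow surface; the speed and torque ranges and the fuel map are illustrative assumptions, not Aerosonde engine data or the AeroSim Blockset.

```python
# Hedged sketch of an Ideal Operating Line (IOL) search on an invented
# engine fuel-flow map: for each demanded power, pick the (speed, torque)
# pair that minimises fuel flow.
import numpy as np

speeds = np.linspace(2000, 7000, 51)      # rpm (illustrative range)
torques = np.linspace(1.0, 10.0, 91)      # N*m (illustrative range)
S, T = np.meshgrid(speeds, torques, indexing="ij")

power_kw = S * T * 2 * np.pi / 60 / 1000  # mechanical power in kW

# Invented fuel-flow surface (kg/h): grows with power, with an efficiency
# "sweet spot" around mid speed and upper-mid torque.
fuel_flow = 0.09 * power_kw * (1.2 + 0.5 * ((S - 4200) / 2500) ** 2
                               + 0.3 * ((T - 7.0) / 6.0) ** 2)

iol = []
for p_target in np.arange(0.5, 4.51, 0.5):        # kW
    mask = np.isclose(power_kw, p_target, atol=0.05)
    if not mask.any():
        continue
    idx = np.argmin(np.where(mask, fuel_flow, np.inf))
    i, j = np.unravel_index(idx, fuel_flow.shape)
    iol.append((p_target, speeds[i], torques[j]))

for p, n_rpm, tq in iol:
    print(f"{p:4.1f} kW -> {n_rpm:5.0f} rpm, {tq:4.1f} N*m")
```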
Abstract:
Purpose: The measurement of broadband ultrasonic attenuation (BUA) in cancellous bone for the assessment of osteoporosis follows a parabolic-type dependence with bone volume fraction, having minima corresponding to both entire bone and entire marrow. Langton has recently proposed that the primary BUA mechanism may be significant phase interference due to variations in propagation transit time through the test sample as detected over the phase-sensitive surface of the receive ultrasound transducer. This fundamentally simple concept assumes that the propagation of ultrasound through a complex solid:liquid composite sample such as cancellous bone may be considered as an array of parallel ‘sonic rays’. The transit time of each ray is defined by the proportion of bone and marrow propagated, being a minimum (tmin) solely through bone and a maximum (tmax) solely through marrow. A Transit Time Spectrum (TTS), ranging from tmin to tmax, may be defined, describing the proportion of sonic rays having a particular transit time and effectively describing the lateral inhomogeneity of transit time over the surface of the receive ultrasound transducer. Phase interference may result from the interaction of ‘sonic rays’ of differing transit times. The aim of this study was to test the hypothesis that there is a dependence of phase interference upon the lateral inhomogeneity of transit time by comparing experimental measurements and computer simulation predictions of ultrasound propagation through a range of relatively simplistic solid:liquid models exhibiting a range of lateral inhomogeneities. Methods: A range of test models was manufactured using acrylic and water as surrogates for bone and marrow respectively. The models varied in thickness in one dimension normal to the direction of propagation, hence exhibiting a range of transit time lateral inhomogeneities, ranging from minimal (single transit time) to maximal (wedge; ultimately the limiting case where each sonic ray has a unique transit time). For the experimental component of the study, two unfocused 1 MHz, ¾-inch diameter broadband transducers were utilized in transmission mode; ultrasound signals were recorded for each of the models. The computer simulation was performed in Matlab, where the transit time and relative amplitude of each sonic ray were calculated. The transit time for each sonic ray was defined as the sum of the transit times through the acrylic and water components. The relative amplitude considered the reception area for each sonic ray along with absorption in the acrylic. To replicate phase-sensitive detection, all sonic rays were summed and the output signal plotted in comparison with the experimentally derived output signal. Results: From qualitative and quantitative comparison of the experimental and computer simulation results, there is an extremely high degree of agreement, from 94.2% to 99.0%, between the two approaches, supporting the concept that propagation of an ultrasound wave, for the models considered, may be approximated by a parallel sonic ray model where the transit time of each ray is defined by the proportion of ‘bone’ and ‘marrow’. Conclusions: This combined experimental and computer simulation study has successfully demonstrated that lateral inhomogeneity of transit time has significant potential to cause phase interference if a phase-sensitive ultrasound receive transducer is implemented, as in most commercial ultrasound bone analysis devices.
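A hedged sketch of the parallel ‘sonic ray’ idea follows: each ray's transit time is the sum of its acrylic and water path times, a phase-sensitive receiver is mimicked by summing all rays, and a flat plate versus a wedge shows how lateral inhomogeneity of transit time produces phase interference. The geometry, sound speeds and pulse parameters are illustrative values, not those of the study, and absorption and reception-area weighting are omitted.

```python
# Hedged sketch of the parallel "sonic ray" model: each ray crosses some
# acrylic and some water, its transit time is the sum of the two path times,
# and a phase-sensitive receiver is mimicked by summing all rays.
# Geometry, sound speeds and pulse shape are illustrative values only.
import numpy as np

c_acrylic, c_water = 2750.0, 1480.0    # m/s, nominal sound speeds
total_path = 0.02                       # 20 mm between transducer faces

def received_signal(acrylic_thicknesses, t):
    """Sum of identical Gaussian-windowed 1 MHz pulses, one per sonic ray,
    each delayed by its own transit time (acrylic + water)."""
    signal = np.zeros_like(t)
    for d in acrylic_thicknesses:
        transit = d / c_acrylic + (total_path - d) / c_water
        signal += np.exp(-((t - transit) / 0.4e-6) ** 2) * np.sin(
            2 * np.pi * 1e6 * (t - transit))
    return signal / len(acrylic_thicknesses)

t = np.linspace(0.0, 25e-6, 5000)

# Minimal lateral inhomogeneity: every ray sees the same 10 mm of acrylic.
flat = received_signal(np.full(200, 0.010), t)

# Maximal lateral inhomogeneity: a wedge, so each ray sees a different
# acrylic thickness and the delayed pulses interfere at the receiver.
wedge = received_signal(np.linspace(0.0, 0.02, 200), t)

print("Peak amplitude, flat model :", round(flat.max(), 3))
print("Peak amplitude, wedge model:", round(wedge.max(), 3))  # reduced by phase interference
```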
Abstract:
In power hardware-in-the-loop (PHIL) simulations, a real-time simulated power system is interfaced to a piece of hardware, usually called the hardware under test (HuT). A PHIL test can be realized using several simulation tools. Among them, the Real Time Digital Simulator (RTDS) is an ideal tool for performing complex power system simulations in near real-time. Stable operation of the entire system, along with the accuracy of the simulation results, is the main concern regarding a PHIL simulation. In this paper, a simulated power network on the RTDS is interfaced to the HuT through a voltage source converter (VSC). Issues around stability and other interface problems are studied, and a new method to stabilize some unstable PHIL cases is proposed. PHIL simulation results in PSCAD and RSCAD are presented.
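As a hedged aside on why interface stability matters, the toy loop below reproduces the textbook behaviour of an ideal-transformer-type PHIL interface with a one-step feedback delay: it converges when the software-side impedance is smaller than the hardware-side impedance and diverges otherwise. The resistive values are invented, and this is not the RTDS/RSCAD setup or the stabilisation method proposed in the paper.

```python
# Toy sketch of why a PHIL interface can go unstable (ideal transformer
# method with a one-step feedback delay). Purely resistive, invented values;
# not the RTDS/RSCAD setup or the paper's proposed stabilisation method.
def phil_loop(z_sim, z_hut, v_source=100.0, steps=20):
    """Software side: Thevenin source v_source behind z_sim.
    Hardware side: resistive load z_hut. The measured hardware current is
    fed back into the simulation one time step later."""
    i_feedback = 0.0
    history = []
    for _ in range(steps):
        v_interface = v_source - z_sim * i_feedback  # sent to the amplifier
        i_feedback = v_interface / z_hut             # measured at the HuT
        history.append(v_interface)
    return history

stable = phil_loop(z_sim=5.0, z_hut=10.0)     # |z_sim/z_hut| < 1: converges
unstable = phil_loop(z_sim=20.0, z_hut=10.0)  # |z_sim/z_hut| > 1: diverges
print("stable tail  :", [round(v, 2) for v in stable[-3:]])
print("unstable tail:", [round(v, 1) for v in unstable[-3:]])
```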
Abstract:
Different types of defects can be introduced into graphene during material synthesis and significantly influence the properties of graphene. In this work, we investigated the effects of structural defects, edge functionalisation and reconstruction on the fracture strength and morphology of graphene by molecular dynamics simulations. Minimum energy path analysis was conducted to investigate the formation of Stone-Wales defects. We also employed out-of-plane perturbation and the energy minimization principle to study the possible morphology of graphene nanoribbons with edge termination. Our numerical results show that the fracture strength of graphene depends on defects and environmental temperature. However, pre-existing defects may be healed, resulting in strength recovery. Edge functionalisation can induce compressive stress and ripples in the edge areas of graphene nanoribbons. On the other hand, edge reconstruction contributes to tensile stress and a curved shape in the graphene nanoribbons.
Abstract:
Sophisticated models of human social behaviour are fast becoming highly desirable in an increasingly complex and interrelated world. Here, we propose that rather than taking established theories from the physical sciences and naively mapping them into the social world, the advanced concepts and theories of social psychology should be taken as a starting point, and used to develop a new modelling methodology. In order to illustrate how such an approach might be carried out, we attempt to model the low elaboration attitude changes of a society of agents in an evolving social context. We propose a geometric model of an agent in context, where individual agent attitudes are seen to self-organise to form ideologies, which then serve to guide further agent-based attitude changes. A computational implementation of the model is shown to exhibit a number of interesting phenomena, including a tendency for a measure of the entropy in the system to decrease, and a potential for externally guiding a population of agents towards a new desired ideology.
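As a hedged toy illustration of the kind of dynamics described (not the authors' geometric model), the sketch below lets agents repeatedly nudge one-dimensional attitudes toward a local consensus and tracks a histogram-based entropy, which tends to fall as clusters, standing in for ideologies, form. All parameters are invented.

```python
# Toy sketch (not the paper's geometric model): agents nudge their attitudes
# toward the mean attitude of nearby agents, and a histogram-based entropy
# of the attitude distribution tends to decrease as clusters form.
import numpy as np

rng = np.random.default_rng(1)

def entropy(attitudes, bins=20):
    counts, _ = np.histogram(attitudes, bins=bins, range=(-1.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

attitudes = rng.uniform(-1.0, 1.0, size=200)   # initial attitudes in [-1, 1]
print("initial entropy:", round(entropy(attitudes), 3))

for _ in range(5000):
    i = rng.integers(len(attitudes))
    # "Context": agents whose attitudes already lie near agent i's attitude.
    neighbours = np.abs(attitudes - attitudes[i]) < 0.3
    # Low-elaboration change: a small shift toward the local consensus.
    attitudes[i] += 0.2 * (attitudes[neighbours].mean() - attitudes[i])

print("final entropy  :", round(entropy(attitudes), 3))
```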
Abstract:
The contextuality of changing attitudes makes them extremely difficult to model. This paper scales up Quantum Decision Theory (QDT) to a social setting, using it to model the manner in which social contexts can interact with the process of low elaboration attitude change. The elements of this extended theory are presented, along with a proof of concept computational implementation in a low dimensional subspace. This model suggests that a society's understanding of social issues will settle down into a static or frozen configuration unless that society consists of a range of individuals with varying personality types and norms.