846 results for Tessellation-based model
Abstract:
A concurrent multi-scale model of structural behavior (CMSM-of-SB) for structural health monitoring, including model updating and validation, has been studied. The updating and validation process is discussed in detail for a reduced-scale specimen of the steel box girder in the longitudinal stiffening truss of a long-span bridge. First, the factors affecting the accuracy of the CMSM-of-SB, including the boundary restraint rigidity and the geometry and material parameters at the weld toe and its neighbourhood, are analysed using a sensitivity method. Then, sensitivity-based model updating is used to update the CMSM-of-SB, and the model is verified by calculating and comparing stresses at different locations under various loads, considering both dynamic characteristics and static responses. It is concluded that the CMSM-of-SB based on the substructure method is valid.
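As a rough illustration of the sensitivity-based updating step mentioned above, the sketch below iteratively corrects model parameters using a finite-difference sensitivity matrix and a pseudo-inverse. The response function and parameter names (a boundary stiffness and a weld-toe dimension) are hypothetical stand-ins, not the paper's bridge girder model:

```python
# A minimal sketch of sensitivity-based model updating. The response function
# and parameters below are hypothetical, not the CMSM-of-SB from the paper.
import numpy as np

def model_response(theta):
    """Hypothetical structural responses (e.g. stresses) as a function of the
    updating parameters (e.g. boundary stiffness, weld-toe dimension)."""
    k_boundary, t_weld = theta
    return np.array([2.0 * k_boundary + 0.5 * t_weld,
                     1.2 * k_boundary - 0.3 * t_weld,
                     0.7 * t_weld])

def sensitivity_matrix(theta, h=1e-6):
    """Finite-difference sensitivities d(response)/d(parameter)."""
    base = model_response(theta)
    cols = []
    for i in range(len(theta)):
        pert = theta.copy()
        pert[i] += h
        cols.append((model_response(pert) - base) / h)
    return np.column_stack(cols)

def update_parameters(theta0, measured, n_iter=10):
    """Iteratively correct parameters so model responses match measurements."""
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iter):
        residual = measured - model_response(theta)
        S = sensitivity_matrix(theta)
        theta += np.linalg.pinv(S) @ residual   # least-squares correction step
    return theta

theta_updated = update_parameters([1.0, 1.0], measured=np.array([2.6, 1.05, 0.42]))
print(theta_updated)
```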
Abstract:
The airport system is complex, and passenger dynamics within it are complicated as well. Passenger behaviours outside the standard processes are regarded as more significant in terms of public hazard and service rate issues. In this paper, we devise an individual agent decision model to simulate stochastic passenger behaviour in an airport departure terminal. Bayesian networks are incorporated into the decision-making model to infer the probabilities that passengers choose to use the various in-airport facilities. We aim to understand the dynamics of passengers' discretionary activities.
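A minimal sketch of the kind of Bayesian-network inference described above follows: it computes the probability that a passenger uses a discretionary facility from two parent variables. The variables and probability tables are hypothetical, not the paper's fitted model:

```python
# Toy two-parent Bayesian network for a passenger's facility-use decision.
# All variable names and probabilities are illustrative assumptions.
P_LONG_BUFFER = 0.6   # P(passenger has a long time buffer before boarding)
P_IN_GROUP = 0.4      # P(passenger travels in a group)
P_USE = {             # (long_buffer, in_group) -> P(use a discretionary facility)
    (True, True): 0.75,
    (True, False): 0.55,
    (False, True): 0.30,
    (False, False): 0.15,
}

def p_use_given_buffer(long_buffer: bool) -> float:
    """Marginalise the group variable out of the decision node."""
    return sum(P_USE[(long_buffer, g)] * (P_IN_GROUP if g else 1 - P_IN_GROUP)
               for g in (True, False))

def p_long_buffer_given_use() -> float:
    """Bayes' rule: posterior over the time-buffer variable given observed facility use."""
    p_use = sum(p_use_given_buffer(b) * (P_LONG_BUFFER if b else 1 - P_LONG_BUFFER)
                for b in (True, False))
    return p_use_given_buffer(True) * P_LONG_BUFFER / p_use

print(p_use_given_buffer(True))     # ~0.63 with these illustrative numbers
print(p_long_buffer_given_use())
```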
Abstract:
We consider a hybrid model created by coupling a continuum model and an agent-based model of infectious disease. The framework of the hybrid model provides a mechanism to study the spread of infection at both the individual and population levels. This approach captures the stochastic spatial heterogeneity at the individual level, which is directly related to deterministic population-level properties, and facilitates the study of spatial aspects of the epidemic process. A spatial analysis, involving counting the number of infectious agents in equally sized bins, reveals when the spatial domain is nonhomogeneous.
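The binning analysis mentioned above can be illustrated with a short sketch: infectious-agent positions (synthetic here) are counted in equally sized bins and a simple dispersion statistic flags nonhomogeneity. Nothing below reproduces the hybrid model itself:

```python
# Counting infectious agents in equally sized bins to detect spatial
# nonhomogeneity. Agent positions are synthetic, clustered near one end.
import numpy as np

rng = np.random.default_rng(0)
domain_length = 100.0
positions = np.concatenate([rng.normal(20.0, 5.0, 300),
                            rng.uniform(0.0, domain_length, 100)])
positions = positions[(positions >= 0) & (positions <= domain_length)]

n_bins = 10
counts, edges = np.histogram(positions, bins=n_bins, range=(0.0, domain_length))

# A simple homogeneity check: spread of bin counts relative to their mean.
print("counts per bin:", counts)
print("coefficient of variation:", counts.std() / counts.mean())
```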
Abstract:
Articular cartilage is a complex structure in which fluid-swollen proteoglycans are constrained within a 3D network of collagen fibrils. Because of the complexity of the cartilage structure, the relationship between its mechanical behaviours at the macro-scale level and its components at the micro-scale level is not completely understood. The research objective of this thesis is to create a new model of articular cartilage that can be used to simulate, and obtain insight into, the micro-macro interactions and mechanisms underlying its mechanical responses during physiological function. The new model has two characteristics, namely: i) it does not use a fibre-reinforced composite material idealisation, and ii) it provides a framework for probing the micro-mechanism of the fluid-solid interaction underlying the deformation of articular cartilage using simple rules of repartition instead of constitutive/physical laws and intuitive curve-fitting. Even though there are various microstructural and mechanical behaviours that could be studied, the scope of this thesis is limited to osmotic pressure formation and distribution and their influence on cartilage fluid diffusion and percolation, which in turn govern the deformation of the compression-loaded tissue. The study can be divided into two stages. In the first stage, the distributions and concentrations of proteoglycans, collagen and water were investigated using histological protocols. Based on this, the structure of cartilage was conceptualised as microscopic osmotic units consisting of these constituents, distributed according to the histological results. These units were repeated three-dimensionally to form the structural model of articular cartilage. In the second stage, cellular automata were incorporated into the resulting matrix (lattice) to simulate the osmotic pressure of the fluid and the movement of water within and out of the matrix, following the osmotic pressure gradient in accordance with the chosen rule of repartition of the pressure. The outcome of this study is a new model of articular cartilage that can be used to simulate and study the micromechanical behaviours of cartilage under different conditions of health and loading. These behaviours are illuminated at the micro-scale level using the so-called neighbourhood rules developed in the thesis in accordance with the typical requirements of cellular automata modelling. Using these rules and relevant boundary conditions to simulate pressure distribution and related fluid motion produced significant results that provide the following insights into the relationships between the osmotic pressure gradient, the associated fluid micro-movement, and the deformation of the matrix. For example, it could be concluded that:
1. It is possible to model articular cartilage with an agent-based cellular automata model and the Margolus neighbourhood rule.
2. The concept of 3D interconnected osmotic units is a viable structural model for the extracellular matrix of articular cartilage.
3. Different rules of osmotic pressure advection lead to different patterns of deformation in the cartilage matrix, giving insight into how this micro-mechanism influences macro-mechanical deformation.
4. When features such as the transition coefficient (representing permeability) are altered due to changes in the concentrations of collagen and proteoglycans (i.e. degenerative conditions), the deformation process is affected.
5. The boundary conditions also influence the relationship between the osmotic pressure gradient and fluid movement at the micro-scale level.
The outcomes are important to cartilage research, since they can be used to study micro-scale damage in the cartilage matrix. From this, related diseases and their progression can be monitored, leading to potential insight into drug-cartilage interaction for treatment. This model is an incremental step towards further computational modelling approaches for cartilage research and for other fluid-saturated tissues and material systems.
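As an illustration of the cellular-automata machinery referred to in conclusion 1, the sketch below runs a Margolus-neighbourhood update on a small lattice of water contents. The redistribution rule (equalising water within each 2x2 block) is a deliberately simple stand-in for the thesis's osmotic-pressure repartition rules:

```python
# Margolus-neighbourhood cellular automaton: water in a 2D lattice is
# redistributed within 2x2 blocks, with the block partition shifted by one
# cell on alternate steps. The equalisation rule is illustrative only.
import numpy as np

def margolus_step(water, offset):
    """One Margolus update: partition the grid into 2x2 blocks starting at
    `offset` (0 or 1 in both axes) and redistribute water inside each block."""
    w = water.copy()
    n, m = w.shape
    for i in range(offset, n - 1, 2):
        for j in range(offset, m - 1, 2):
            block = w[i:i + 2, j:j + 2]
            block[:] = block.mean()   # equalise "pressure" inside the block
    return w

rng = np.random.default_rng(1)
water = rng.uniform(0.0, 1.0, size=(8, 8))   # initial water content per cell

for step in range(20):
    water = margolus_step(water, offset=step % 2)  # alternate the block offset

print(water.round(3))  # the field smooths toward a uniform distribution
```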
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than the two-step process of encrypting the message for confidentiality and, in a separate pass, generating a Message Authentication Code (MAC) for integrity protection. AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, analysing the mechanisms for providing confidentiality and for providing integrity in AE algorithms based on stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
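A toy sketch of the "direct injection" accumulation described in the third contribution is given below: message bits are XORed straight into the state of a small linear register and a trivial filter of the final state acts as the tag. The register size, taps and filter are illustrative only and do not correspond to SSS, NLSv2 or SOBER-128:

```python
# Toy direct-injection MAC accumulation: message bits are injected into the
# internal state of a small LFSR, which is then filtered to produce a tag.
def lfsr_step(state, taps=(0, 3)):
    """Shift a 16-bit register once; feedback is the XOR of the tap positions."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1
    return ((state >> 1) | (feedback << 15)) & 0xFFFF

def mac_direct_injection(key_state, message_bits):
    """Accumulate message bits directly into the register state, then filter."""
    state = key_state
    for bit in message_bits:
        state ^= bit              # direct injection of the message bit
        state = lfsr_step(state)  # linear state update
    # A nonlinear filter would be applied here in a real cipher; folding the
    # state into 8 bits is just a stand-in tag for this sketch.
    return (state ^ (state >> 8)) & 0xFF

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(hex(mac_direct_injection(key_state=0xACE1, message_bits=message)))
```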
Abstract:
Designing the smart grid requires combining varied models, and as their number increases, so does the complexity of the software; a well-thought-out software architecture then becomes crucial. This paper presents MODAM, a framework designed to combine agent-based models in a flexible and extensible manner using well-known software engineering solutions (the OSGi specification [1] and Eclipse plugins [2]). Details on how to build a modular agent-based model for the smart grid are given, illustrated by an example on a small network.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to answer. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as it requires many interactions between agents, which can learn and have goals. With the growing availability of data and the increase in computing power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goals and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on the simulation to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels) or to describe the assets and their relation to one another (e.g. the network assets).
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster runs. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. responses to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
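The asset/agent separation described above can be illustrated with a short sketch using the battery example. MODAM itself is built on OSGi and Eclipse plugins (Java); the Python below only illustrates the design idea of keeping physical characteristics and goal-driven behaviour in separate objects, with names chosen for illustration:

```python
# Separation of physical characteristics (asset) from behaviour (agent).
# Class names, parameters and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    """Physical characteristics only: the same asset can serve many behaviours."""
    capacity_kwh: float
    depth_of_discharge: float
    lifetime_cycles: int

class PeakShavingAgent:
    """One possible behaviour: discharge the battery when demand is high."""
    def __init__(self, asset: BatteryAsset, demand_threshold_kw: float):
        self.asset = asset
        self.threshold = demand_threshold_kw
        self.stored_kwh = asset.capacity_kwh

    def step(self, demand_kw: float, dt_h: float = 1.0) -> float:
        """Return the power (kW) supplied by the battery this time step."""
        if demand_kw <= self.threshold or self.stored_kwh <= 0:
            return 0.0
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        supply = min(demand_kw - self.threshold, usable / dt_h, self.stored_kwh / dt_h)
        self.stored_kwh -= supply * dt_h
        return supply

# The same BatteryAsset could equally be driven by, say, a tariff-arbitrage agent.
battery = BatteryAsset(capacity_kwh=13.5, depth_of_discharge=0.9, lifetime_cycles=5000)
agent = PeakShavingAgent(battery, demand_threshold_kw=5.0)
print([round(agent.step(d), 2) for d in (3.0, 7.5, 9.0, 4.0)])
```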
Abstract:
Although there has been substantial research into the occupational health and safety sector over the past forty years, it has generally focused on statistical analyses of data related to costs and/or fatalities and injuries. There is a lack of mathematical modelling of the interactions between workers and the resulting safety dynamics of the workplace, and little work investigating the potential impact of different safety intervention programs prior to their implementation. In this article, we present a fundamental, differential-equation-based model of workplace safety that treats worker safety habits similarly to an infectious disease in an epidemic model. Analytical results for the model, derived via phase plane and stability analysis, are discussed. The model is coupled with a model of a generic safety strategy aimed at minimising unsafe work habits to produce an optimal control problem, which is solved using the forward-backward sweep numerical scheme implemented in Matlab.
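A compact sketch of the forward-backward sweep scheme mentioned above is given below, applied to a toy epidemic-style model of unsafe work habits with a quadratic control cost. The state equation, cost weights and bounds are hypothetical, not the article's model:

```python
# Forward-backward sweep for a toy optimal control problem: I(t) is the
# fraction of workers with unsafe habits, c(t) the intervention effort, and
# the running cost is A*I + c^2. All coefficients are illustrative.
import numpy as np

beta, gamma, A, c_max, T, N = 0.8, 0.2, 2.0, 1.0, 20.0, 2000
dt = T / N

I = np.zeros(N + 1); I[0] = 0.3      # state: unsafe-habit fraction
lam = np.zeros(N + 1)                # adjoint variable
c = np.zeros(N + 1)                  # control: intervention effort

for sweep in range(50):
    # Forward sweep: integrate the state equation with the current control.
    for k in range(N):
        dI = beta * I[k] * (1 - I[k]) - (gamma + c[k]) * I[k]
        I[k + 1] = I[k] + dt * dI
    # Backward sweep: integrate the adjoint equation from lam(T) = 0.
    lam[N] = 0.0
    for k in range(N, 0, -1):
        dlam = -(A + lam[k] * (beta * (1 - 2 * I[k]) - gamma - c[k]))
        lam[k - 1] = lam[k] - dt * dlam
    # Optimality condition dH/dc = 2c - lam*I = 0, clipped to the admissible
    # range, with relaxation for stability.
    c_new = np.clip(lam * I / 2.0, 0.0, c_max)
    c = 0.5 * c + 0.5 * c_new

print("final unsafe fraction:", round(I[-1], 3))
print("peak intervention effort:", round(c.max(), 3))
```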
Abstract:
In this paper, we consider a passivity-based approach to the design of a control law for multiple ship roll gyro-stabiliser units. We extend previous work on the control of ship roll gyro-stabilisation by considering the problem within a nonlinear framework. In particular, we derive an energy-based model using port-Hamiltonian theory and then design an active precession controller using interconnection and damping assignment passivity-based control. The design considers the possibility of having multiple gyro-stabiliser units, and the desired potential energy of the closed-loop system is chosen to behave like a barrier function, which allows us to enforce constraints on the precession angle of the gyros.
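For reference, the generic port-Hamiltonian form and one common barrier-shaped choice of desired potential are written out below. These are textbook expressions with illustrative symbols (gains k_p, k_b, precession limit alpha_max), not the controller derived in the paper:

```latex
% Generic port-Hamiltonian form: J skew-symmetric interconnection, R >= 0 damping.
\dot{x} = \bigl[J(x) - R(x)\bigr]\,\nabla H(x) + g(x)\,u, \qquad
y = g(x)^{\top}\nabla H(x)

% Illustrative barrier-shaped desired potential for a precession angle \alpha,
% growing without bound as |\alpha| approaches the limit \alpha_{\max}:
V_d(\alpha) = \tfrac{1}{2}\,k_p\,\alpha^{2}
            - k_b \ln\!\Bigl(1 - \tfrac{\alpha^{2}}{\alpha_{\max}^{2}}\Bigr)
```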
Abstract:
Passenger experience has become a major factor influencing the success of an airport. In this context, passenger flow simulation has been used in designing and managing airports. However, most passenger flow simulations fail to consider group dynamics when developing passenger flow models. In this paper, an agent-based model is presented to simulate passenger behaviour during the airport check-in and evacuation processes. The simulation results show that passenger behaviour can have a significant influence on the performance and utilisation of services in airport terminals. The model was created using AnyLogic software, and its parameters were initialised using recent research data published in the literature.
Abstract:
In this paper we implement six different boarding strategies (Wilma, Steffen, Reverse Pyramid, Random, Blocks and By-letter) in order to minimize boarding time and turnaround time for Boeing 777 and Airbus A380 aircraft, using an agent-based modelling approach. In the simulation, we divide passengers into six categories: groups of more than five people, passengers with children, gold members, first-class passengers, business-class passengers and economy-class passengers. Results from the simulation demonstrate that the Reverse Pyramid method is the best boarding method for the Boeing 777 and the Steffen method is the best boarding method for the Airbus A380.
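Boarding strategies of this kind reduce to orderings of the passenger list; the sketch below generates the boarding order for a window-middle-aisle (Wilma-style) rule and a random rule over a hypothetical single-aisle seat map. It is not the paper's simulation and omits the timing model and passenger categories:

```python
# Expressing boarding strategies as orderings of the passenger list.
# The cabin layout (30 rows, seats A-F) is a hypothetical example.
import random

ROWS = range(1, 31)                    # hypothetical single-aisle economy cabin
SEATS = "ABCDEF"                       # A/F window, B/E middle, C/D aisle
passengers = [(row, seat) for row in ROWS for seat in SEATS]

def wilma_order(pax):
    """Window seats board first, then middle, then aisle; back-to-front within each group."""
    group = {"A": 0, "F": 0, "B": 1, "E": 1, "C": 2, "D": 2}
    return sorted(pax, key=lambda p: (group[p[1]], -p[0]))

def random_order(pax):
    shuffled = pax[:]
    random.shuffle(shuffled)
    return shuffled

print(wilma_order(passengers)[:5])   # first few boarding calls under the Wilma rule
print(random_order(passengers)[:5])
```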
Abstract:
This thesis investigates the influence of passenger group dynamics on passengers' behaviour in an international airport. A simulation model is built to analyse passengers' behaviour during airport departure processes and during an emergency event. Results from the model show that passenger group dynamics have a significant influence on the performance and utilisation of airport services. The agent-based model also provides a convenient way to investigate the effectiveness of space design and service allocations, which may contribute to the enhancement of passengers' airport experiences.
Abstract:
A single plant cell was modeled with smoothed particle hydrodynamics (SPH) and a discrete element method (DEM) to study the basic micromechanics that govern cellular structural deformations during drying. This two-dimensional particle-based model consists of two components: a cell fluid model and a cell wall model. The cell fluid was approximated as a highly viscous Newtonian fluid and modeled with SPH. The cell wall was treated as a stiff semi-permeable solid membrane with visco-elastic properties and modeled as a neo-Hookean solid material using DEM. Compared to existing meshfree particle-based plant cell models, we specifically introduced cell wall–fluid attraction forces and cell wall bending stiffness effects to address the critical shrinkage characteristics of plant cells during drying. Also, a novel moisture domain-based approach was used to simulate drying mechanisms within the particle scheme. The model performance was found to be influenced mainly by the particle resolution, the initial gap between the outermost fluid particles and the wall particles, and the number of particles in the SPH influence domain. A higher-order smoothing kernel was used with an adaptive smoothing length to improve the stability and accuracy of the model. Cell deformations at different states of cell dryness were qualitatively and quantitatively compared with microscopic experimental findings on apple cells, and fairly good agreement was observed, with some exceptions. The wall–fluid attraction forces and cell wall bending stiffness were found to significantly improve the model predictions. A detailed sensitivity analysis was also conducted to further investigate the influence of the wall–fluid attraction forces, cell wall bending stiffness, cell wall stiffness and particle resolution. This novel meshfree modeling approach is highly applicable to cellular-level deformation studies of plant food materials during drying, which involve large deformations.
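As background for the SPH component described above, the sketch below implements a standard 2D cubic-spline smoothing kernel and the usual density summation over particles. The positions and masses are synthetic, and the paper itself uses a higher-order kernel with adaptive smoothing length rather than this basic form:

```python
# Standard 2D cubic-spline SPH kernel and density summation over particles.
# Particle data are synthetic; this is not the paper's cell model.
import numpy as np

def cubic_spline_2d(r, h):
    """Standard 2D cubic-spline kernel W(r, h), with q = r / h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)
    w = np.zeros_like(q)
    w = np.where(q < 1.0, sigma * (1 - 1.5 * q**2 + 0.75 * q**3), w)
    w = np.where((q >= 1.0) & (q < 2.0), sigma * 0.25 * (2 - q)**3, w)
    return w

def sph_density(positions, masses, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h) over all particles j."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_2d(r, h)).sum(axis=1)

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 1.0, size=(200, 2))     # synthetic fluid particles
m = np.full(200, 1.0 / 200)                    # equal particle masses
print(sph_density(pos, m, h=0.1)[:5])          # densities at the first particles
```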
Abstract:
This paper presents a layered framework for integrating different Socio-Technical Systems (STS) models and perspectives into a whole-of-systems model. Holistic modelling plays a critical role in the engineering of STS because of the interplay between social and technical elements within these systems and the resulting emergent behaviour. The framework decomposes STS models into components, where each component is either a static object, a dynamic object or a behavioural object. Based on existing literature, a classification of the different elements that make up STS, whether social, technical or natural environment elements, is developed; each object can in turn be classified according to the STS elements it represents. Using the proposed framework, it is possible to systematically decompose models to the extent that points of interface can be identified, and the contextual factors required to transform a component of one model to interface with another are obtained. Using an airport inbound passenger facilitation process as a case-study socio-technical system, three different models are analysed: a Business Process Modelling Notation (BPMN) model, a Hybrid Queue-based Bayesian Network (HQBN) model and an Agent Based Model (ABM). It is found that the framework enables the modeller to identify non-trivial interface points, such as between the spatial interactions of an ABM and the causal reasoning of a HQBN, and between the process activity representation of a BPMN model and simulated behavioural performance in a HQBN. Such a framework is a necessary enabler for integrating different modelling approaches in understanding and managing STS.
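The decomposition the framework proposes can be illustrated with a short sketch in which each model component is tagged as a static, dynamic or behavioural object and with the STS element it represents; matching components from different source models then suggests candidate interface points. The example components for the airport case study are illustrative labels only:

```python
# Classifying model components by object kind and STS element, then looking
# for candidate interface points across source models. Example data are illustrative.
from dataclasses import dataclass
from enum import Enum

class ObjectKind(Enum):
    STATIC = "static"
    DYNAMIC = "dynamic"
    BEHAVIOURAL = "behavioural"

class StsElement(Enum):
    SOCIAL = "social"
    TECHNICAL = "technical"
    NATURAL = "natural environment"

@dataclass
class ModelComponent:
    name: str
    kind: ObjectKind
    element: StsElement
    source_model: str   # e.g. "BPMN", "HQBN", "ABM"

components = [
    ModelComponent("terminal floor plan", ObjectKind.STATIC, StsElement.TECHNICAL, "ABM"),
    ModelComponent("passenger", ObjectKind.BEHAVIOURAL, StsElement.SOCIAL, "ABM"),
    ModelComponent("immigration processing time", ObjectKind.DYNAMIC, StsElement.TECHNICAL, "HQBN"),
]

# Candidate interface points: components from different source models that
# describe the same STS element.
for a in components:
    for b in components:
        if a.name < b.name and a.source_model != b.source_model and a.element is b.element:
            print(f"possible interface: {a.name} ({a.source_model}) <-> {b.name} ({b.source_model})")
```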
Abstract:
We describe the development and parameterization of a grid-based model of African savanna vegetation processes. The model was developed with the objective of exploring elephant effects on the diversity of savanna species and structure, and in this formulation concentrates on the relative cover of grass and woody plants, the vertical structure of the woody plant community, and the distribution of these over space. Grid cells are linked by seed dispersal and fire, and environmental variability is included in the form of stochastic rainfall and fire events. The model was parameterized from an extensive review of the African savanna literature; where parameter values were available, they varied widely. The most plausible set of parameters produced long-term coexistence between woody plants and grass, with the tree-grass balance being more sensitive to changes in parameters influencing demographic processes and drought incidence and response, and less sensitive to the fire regime. There was considerable diversity in the woody structure of savanna systems within the range of uncertainty in tree growth rate parameters. Thus, given the paucity of height growth data for woody plant species in southern African savannas, managers of natural areas should be cognizant of the growth and damage-response attributes of different tree species when considering whether to act on perceived elephant threats to vegetation. © 2007 Springer Science+Business Media B.V.
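A minimal sketch of a grid-based tree-grass dynamic of the kind described above is given below: each cell carries grass and woody cover, rainfall is stochastic, and fire (more likely with more grass fuel) knocks woody cover back. All rates and probabilities are illustrative and are not the paper's parameterisation; seed dispersal between cells is omitted:

```python
# Toy grid-based savanna dynamic: rainfall-limited growth of grass and woody
# cover per cell, with stochastic fire driven by grass fuel load.
import numpy as np

rng = np.random.default_rng(3)
size = 20
grass = np.full((size, size), 0.5)   # fractional grass cover per cell
woody = np.full((size, size), 0.3)   # fractional woody cover per cell

for year in range(200):
    rain = rng.gamma(shape=4.0, scale=150.0)          # stochastic annual rainfall (mm)
    growth = min(rain / 1200.0, 1.0)                  # rainfall-limited growth factor
    grass += 0.4 * growth * grass * (1 - grass - woody)
    woody += 0.05 * growth * woody * (1 - grass - woody)
    # Fire: probability increases with grass fuel; fire reduces woody cover more slowly.
    fire = rng.random((size, size)) < 0.1 * grass
    grass[fire] *= 0.2
    woody[fire] *= 0.7
    np.clip(grass, 0.0, 1.0, out=grass)
    np.clip(woody, 0.0, 1.0, out=woody)

print("mean grass cover:", round(grass.mean(), 2),
      "mean woody cover:", round(woody.mean(), 2))
```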