937 results for discrete nebulization
Abstract:
Discrete Event Simulation (DES) is a very popular simulation technique in Operational Research. More recently, another technique, Agent Based Simulation (ABS), has emerged. Although there is a large literature on DES and ABS individually, we have found less work that explores the capabilities of both in tackling human behaviour issues. To understand the gap between these two simulation techniques, our aim is therefore to understand how DES and ABS models differ from the real-world phenomenon when modelling and simulating human behaviour. To achieve this aim, we carried out a case study at a department store. The DES and ABS models are compared on the same problem domain, which concerns management policy in a fitting room. The behaviour of staff while working and customer satisfaction are modelled in both models to support this comparison.
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of the department's staff and customers. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study, both discrete event simulation and agent based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
Abstract:
In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is the best one for modelling human reactive behaviour in the retail sector. In order to study the output accuracy in both models, we have carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. We had to determine an efficient implementation of management policy in the store’s fitting room using DES and ABS. Overall, we have found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
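Neither abstract includes the models themselves; as an illustration of the discrete-event side of such a study, a minimal sketch of a fitting-room queue is given below. All parameter values and function names are hypothetical, not the authors' model: exponential interarrival and try-on times are assumed, and a management "policy" is reduced to the number of fitting rooms.

```python
import heapq
import random

def simulate_fitting_room(n_rooms, n_customers, mean_interarrival,
                          mean_try_on, seed=0):
    """Minimal DES of a fitting room: customers arrive, wait for a free
    room, try clothes on, and leave.  Returns the mean waiting time."""
    rng = random.Random(seed)
    # Pre-draw arrival times (Poisson arrivals assumed).
    t = 0.0
    arrivals = []
    for _ in range(n_customers):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    free_at = [0.0] * n_rooms          # time each room next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrive in arrivals:
        room_free = heapq.heappop(free_at)   # earliest available room
        start = max(arrive, room_free)       # wait if no room is free yet
        total_wait += start - arrive
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_try_on))
    return total_wait / n_customers
```

Comparing management policies then amounts to re-running the simulation with different `n_rooms` (or staffing rules) and comparing the resulting waiting times.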
Abstract:
Queueing systems constitute a central tool in modeling and performance analysis. These types of systems appear in our everyday activities, and the theory of queueing systems was developed to provide models for forecasting the behaviour of systems subject to random demand. The practical and useful applications of discrete-time queues lead researchers to continue the effort of analyzing this type of model. The present contribution thus concerns a discrete-time Geo/G/1 queue in which some messages may need a second service time in addition to the first, essential service. In day-to-day life there are numerous examples of queueing situations, for example in manufacturing processes, telecommunication and home automation, but in this paper a particular application is video surveillance with intrusion recognition, where all arriving messages require the main service and only some may require the subsidiary service provided by the server under different types of strategies. We carry out a thorough study of the model, deriving analytical results for the stationary distribution. The generating functions of the number of messages in the queue and in the system are obtained. The generating functions of the busy period, as well as of the sojourn times of a message in the server, the queue and the system, are also provided.
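The abstract states the model analytically; as a numeric companion, the special case with geometric service phases (Geo/Geo/1, an assumption here, since the paper treats general G service) can be simulated slot by slot to estimate the mean number of messages in the system. The event ordering within a slot is one of several standard conventions, chosen for simplicity:

```python
import random

def geo_queue_mean_system_size(arrival_p, service_p, second_service_prob,
                               n_slots=200_000, seed=1):
    """Discrete-time sketch of a single-server queue with an optional
    second service: each slot a message arrives w.p. arrival_p; the
    message in service completes its current phase w.p. service_p, and
    on finishing the first phase needs a second one w.p.
    second_service_prob.  Returns the time-averaged system size."""
    rng = random.Random(seed)
    queue = 0          # messages waiting (not in service)
    in_service = 0     # 0 = idle, 1 = first phase, 2 = second phase
    area = 0
    for _ in range(n_slots):
        if rng.random() < arrival_p:                  # arrival this slot
            queue += 1
        if in_service and rng.random() < service_p:   # phase completes
            if in_service == 1 and rng.random() < second_service_prob:
                in_service = 2                        # subsidiary service
            else:
                in_service = 0                        # message departs
        if in_service == 0 and queue > 0:             # start next message
            queue -= 1
            in_service = 1
        area += queue + (1 if in_service else 0)
    return area / n_slots
```

With arrival_p = 0.2, service_p = 0.6 and second_service_prob = 0.3, the offered load is 0.2 × (1 + 0.3) / 0.6 ≈ 0.43, so the system is stable and the estimate converges.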
Abstract:
When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can originate from various causes, and therefore describes different patterns. In the literature this problem is known as Missing Data. It can be handled in various ways, from discarding incomplete observations, to estimating what the missing values originally were, to simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between missing data, imputation methods and supervised classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where discrete means that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing-data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the previous problem to a special kind of dataset, the multivariate time series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation, in order finally to propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem.
The databases that characterized this problem had their own original latent values, which provides a real-world benchmark to test the algorithms developed in this thesis.
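The abstract does not name the specific techniques; two standard baselines such a comparison would include — column-wise mean imputation, and last-observation-carried-forward for time series — can be sketched as follows. Function names are illustrative, missing values are represented as None:

```python
def mean_impute(rows):
    """Column-wise mean imputation: replace missing values (None) with
    the mean of the observed values in that column."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if r[j] is None else r[j] for j in range(n_cols)]
            for r in rows]

def forward_fill(rows):
    """Last-observation-carried-forward, a common time-series imputer;
    leading missing values (before any observation) stay None."""
    out = [list(r) for r in rows]
    for j in range(len(out[0])):
        last = None
        for i in range(len(out)):
            if out[i][j] is None:
                out[i][j] = last
            else:
                last = out[i][j]
    return out
```

Mean imputation ignores temporal order, which is exactly why dedicated time-series imputers such as forward filling (or the thesis's proposed methods) can do better on multivariate series.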
Abstract:
Research has found that children with autism spectrum disorders (ASD) show significant deficits in receptive language skills (Wiesmer, Lord, & Esler, 2010). One of the primary goals of applied behavior analytic intervention is to improve the communication skills of children with autism by teaching receptive discriminations. Both receptive discriminations and receptive language entail matching spoken words with corresponding objects, symbols (e.g., pictures or words), actions, people, and so on (Green, 2001). In order to develop receptive language skills, children with autism often undergo discrimination training within the context of discrete trial training. This training entails teaching the learner how to respond differentially to different stimuli (Green, 2001). It is through discrimination training that individuals with autism learn and develop language (Lovaas, 2003). The present study compares three procedures for teaching receptive discriminations: (1) simple/conditional (Procedure A), (2) conditional only (Procedure B), and (3) conditional discrimination of two target cards (Procedure C). Six children with an autism diagnosis, ranging in age from 2 to 5 years old, were taught how to receptively discriminate nine sets of stimuli. Results suggest that the extra training steps included in the simple/conditional and conditional-only procedures may not be necessary to teach children with autism how to receptively discriminate. For all participants, Procedure C appeared to be the most efficient and effective procedure for teaching young children with autism receptive discriminations. Response maintenance and generalization probes conducted one month after the end of training indicate that even though Procedure C required fewer training sessions overall, no one procedure resulted in better maintenance and generalization than the others.
In other words, the additional training sessions required by the simple/conditional and conditional-only procedures did not improve participants' ability to respond accurately or to generalize one month after training. The present study contributes to the literature on the most efficient and effective way to teach receptive discrimination during discrete trial training to children with ASD. These findings are important, as research shows that receptive language skills are predictive of better outcomes and adaptive behaviors in the future.
Abstract:
In this paper, we develop a new family of graph kernels where the graph structure is probed by means of a discrete-time quantum walk. Given a pair of graphs, we let a quantum walk evolve on each graph and compute a density matrix with each walk. With the density matrices for the pair of graphs to hand, the kernel between the graphs is defined as the negative exponential of the quantum Jensen–Shannon divergence between their density matrices. In order to cope with large graph structures, we propose to construct a sparser version of the original graphs using the simplification method introduced in Qiu and Hancock (2007). To this end, we compute the minimum spanning tree over the commute time matrix of a graph. This spanning tree representation minimizes the number of edges of the original graph while preserving most of its structural information. The kernel between two graphs is then computed on their respective minimum spanning trees. We evaluate the performance of the proposed kernels on several standard graph datasets and we demonstrate their effectiveness and efficiency.
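The kernel definition k(G1, G2) = exp(−QJSD(ρ1, ρ2)) can be illustrated in the special case of commuting (here, diagonal) density matrices, where the quantum Jensen–Shannon divergence reduces to the classical JSD over the spectra. This is a simplifying assumption for illustration only; the paper's density matrices come from full discrete-time quantum walks and generally do not commute:

```python
import math

def von_neumann_entropy(spectrum):
    """Von Neumann entropy (base 2) of a density matrix given its
    eigenvalues; for a diagonal density matrix these are simply the
    diagonal entries."""
    return -sum(p * math.log(p, 2) for p in spectrum if p > 0)

def qjsd_kernel(spec1, spec2):
    """Kernel k = exp(-QJSD) between two graphs whose walk density
    matrices are assumed diagonal, so QJSD = H((p+q)/2) - (H(p)+H(q))/2
    over the eigenvalue lists."""
    mixture = [(a + b) / 2 for a, b in zip(spec1, spec2)]
    jsd = von_neumann_entropy(mixture) - 0.5 * (
        von_neumann_entropy(spec1) + von_neumann_entropy(spec2))
    return math.exp(-jsd)
```

Identical spectra give QJSD = 0 and hence k = 1, while orthogonal spectra such as [1, 0] and [0, 1] give QJSD = 1 bit and k = e⁻¹, matching the intuition of a similarity measure bounded away from zero.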
Abstract:
Development of methodologies for the controlled chemical assembly of nanoparticles into plasmonic molecules of predictable spatial geometry is vital in order to harness novel properties arising from the combination of the individual components constituting the resulting superstructures. This paper presents a route for fabrication of gold plasmonic structures of controlled stoichiometry obtained by the use of a di-rhenium thio-isocyanide complex as linker molecule for gold nanocrystals. Correlated scanning electron microscopy (SEM)—dark-field spectroscopy was used to characterize the obtained discrete monomer, dimer and trimer plasmonic molecules. Polarization-dependent scattering spectra of dimer structures showed a highly polarized scattering response, due to their highly anisotropic D∞h geometry. In contrast, some trimer structures displayed a symmetric geometry (D3h), which showed a small polarization-dependent response. Theoretical calculations were used to further understand and attribute the origin of the plasmonic bands arising during linker-induced formation of plasmonic molecules. The theoretical data matched the experimental data well. These results confirm that the obtained gold superstructures possess properties which are a combination of the properties of the single components and can, therefore, be classified as plasmonic molecules.
Abstract:
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. It is a well-known fact that DVMs can also have extra, so-called spurious, collision invariants in addition to the physical ones. A DVM with only physical collision invariants, and so without spurious ones, is called normal. For binary mixtures the concept of supernormal DVMs was also introduced, meaning that in addition to the DVM being normal, the restriction of the DVM to any single species is also normal. Here we introduce generalizations of this concept to DVMs for multicomponent mixtures. We also present some general algorithms for constructing such models and give some concrete examples of such constructions. One of our main results is that for any given number of species, and any given rational mass ratios, we can construct a supernormal DVM. The DVMs are constructed in such a way that for half-space problems, such as the Milne and Kramers problems, but also nonlinear ones, we obtain structures similar to those of the classical discrete Boltzmann equation for a single species, and therefore we can apply results obtained for the classical discrete Boltzmann equation.
Abstract:
This paper presents an integrated model for an offshore wind turbine, taking into consideration the contributions of marine waves and of wind speed with perturbations to the power quality of the current injected into the electric grid. The paper deals with the simulation of one floating offshore wind turbine equipped with a permanent magnet synchronous generator and a two-level converter connected to an onshore electric grid. Discrete mass modeling is assessed in order to reveal, by computing the total harmonic distortion, how the perturbations of the captured energy are attenuated at the electric grid injection point. Two torque actions are considered for the three-mass modeling: the aerodynamic torque on the flexible part and on the rigid part of the blades. In addition, a torque due to the influence of marine waves in deep water is considered. Proportional integral fractional-order control supports the control strategy. A comparison between the drive train models is presented.
Abstract:
This paper presents an integrated model for an offshore wind energy system, taking into consideration the contributions of marine waves and of wind speed with perturbations to the power quality of the current injected into the electric grid. The paper deals with the simulation of one floating offshore wind turbine equipped with a PMSG and a two-level converter connected to an onshore electric grid. Discrete mass modeling is assessed in order to reveal, by computing the THD, how the perturbations of the captured energy are attenuated at the electric grid injection point. Two torque actions are considered for the three-mass modeling: the aerodynamic torque on the flexible part and on the rigid part of the blades. In addition, a torque due to the influence of marine waves in deep water is considered. PI fractional-order control supports the control strategy. A comparison between the drive train models is presented.
Abstract:
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which is exempt from crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
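The screened Poisson equation used as a localization limiter can be illustrated in one dimension: u − ℓ²u″ = f with homogeneous Dirichlet boundary conditions, discretized by central differences and solved with the Thomas algorithm. This is a didactic sketch of the regularizing operator only, not the authors' staggered 2-D/3-D implementation; the grid, boundary conditions and variable names are assumptions:

```python
def solve_screened_poisson_1d(f, length_scale, h):
    """Solve u - l^2 u'' = f on a uniform 1-D interior grid with u = 0
    at both ends, via the Thomas algorithm for the tridiagonal system.
    The solution is a smoothed version of f, with smoothing radius
    controlled by length_scale."""
    n = len(f)
    k = length_scale ** 2 / h ** 2
    a = [-k] * n           # sub-diagonal
    b = [1 + 2 * k] * n    # main diagonal
    c = [-k] * n           # super-diagonal
    # Forward elimination.
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = f[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (f[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```

With length_scale = 0 the operator is the identity (u = f, no regularization); a positive length scale spreads a localized source over neighbouring points, which is exactly the non-local averaging effect the limiter provides for the damage field.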
Abstract:
Chemical amount values vary in a discrete or a continuous form, depending on the approach used to describe the system. In the classical sciences, the chemical amount is a property of the macroscopic system and, like any other property of the system, it varies continuously. This is neither inconsistent with the concept of indivisible particles forming the system, nor a mere approximation; it is a sound concept which enables the use of differential calculus, for instance in chemical thermodynamics. It is shown that the fundamental laws of chemistry are fully compatible with the continuous concept of the chemical amount.