952 results for discrete-event simulation
Abstract:
In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. Good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the simulation results that are free of tsunami from those with the tsunami present, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed. Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients. Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is treated as an error map introduced during the overall retrieval stage and is used to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge, through a fitting process, whether the SSHAs are truly tsunami-induced, which makes it possible to decrease the false alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results of the former tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity, which is believed to be more desirable in real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
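As a rough illustration of two ingredients mentioned above, the Python sketch below (function names and array layout invented) shows the classic Cox and Munk clean-surface mean-square-slope relation and the DDM differencing step that exposes tsunami-induced variations; the thesis' actual models are considerably richer than this.

    import numpy as np

    def cox_munk_mss(wind_speed):
        """Total clean-surface mean square slope for a 10-m wind speed (m/s),
        after the classic Cox and Munk (1954) fit: mss = 0.003 + 5.12e-3 * U."""
        return 0.003 + 5.12e-3 * wind_speed

    def tsunami_induced_variation(ddm_during, ddm_before):
        """Difference of two simulated DDMs over the same delay/Doppler grid,
        isolating the tsunami-induced variation."""
        ddm_during = np.asarray(ddm_during, dtype=float)
        ddm_before = np.asarray(ddm_before, dtype=float)
        if ddm_during.shape != ddm_before.shape:
            raise ValueError("DDMs must share the same delay/Doppler grid")
        return ddm_during - ddm_before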
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
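As a sketch of why dynamic programming sits at the core of such models, the Python fragment below (invented network representation, not the thesis code) solves the logsum value function V(k) = log sum_a exp(v(k,a) + V(k_a)) by fixed-point iteration and converts it into logit choice probabilities; convergence is assumed rather than guaranteed here.

    import numpy as np

    def solve_value_function(v, succ, dest, n_states, tol=1e-10, max_iter=1000):
        """Fixed-point iteration for V(k) = log sum_a exp(v[k][a] + V(succ[k][a])).

        v[k]    : instantaneous utilities of the actions available in state k
        succ[k] : successor states, aligned with v[k]
        dest    : absorbing destination state with V(dest) = 0
        """
        V = np.zeros(n_states)
        for _ in range(max_iter):
            V_new = np.empty(n_states)
            for k in range(n_states):
                if k == dest:
                    V_new[k] = 0.0
                    continue
                vals = [v[k][i] + V[succ[k][i]] for i in range(len(succ[k]))]
                V_new[k] = np.log(np.sum(np.exp(vals)))
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
        return V

    def choice_probabilities(V, v, succ, k):
        """Logit probabilities of each action in state k, given the value function."""
        vals = np.array([v[k][i] + V[succ[k][i]] for i in range(len(succ[k]))])
        e = np.exp(vals - vals.max())   # shift for numerical stability
        return e / e.sum()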
Abstract:
A novel surrogate model is proposed in lieu of Computational Fluid Dynamics (CFD) solvers, for fast nonlinear aerodynamic and aeroelastic modeling. A nonlinear function is identified on selected interpolation points by a discrete empirical interpolation method (DEIM). The flow field is then reconstructed using a least-squares approximation of the flow modes extracted by proper orthogonal decomposition (POD). The aeroelastic reduced-order model (ROM) is completed by introducing a nonlinear mapping function between displacements and the DEIM points. The proposed model is used to predict the aerodynamic forces due to forced motions of a NACA 0012 airfoil undergoing a prescribed pitching oscillation. To investigate aeroelastic problems at transonic conditions, aeroelastic models of a pitch/plunge airfoil and of a cropped delta wing are built using linear structural models. The presence of shock waves triggers the appearance of limit cycle oscillations (LCO), which the model is able to predict. For all cases tested, the new ROM shows the ability to replicate the nonlinear aerodynamic forces and structural displacements, and to reconstruct the complete flow field, with sufficient accuracy at a fraction of the cost of the full-order CFD model.
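The Python sketch below illustrates the two building blocks named above, POD and DEIM, in their standard published form (greedy DEIM point selection in the style of Chaturantabut and Sorensen); it is not the authors' implementation, and the snapshot matrix name is invented.

    import numpy as np

    def pod_basis(snapshots, m):
        """Leading m POD modes of a snapshot matrix (columns = flow states)."""
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :m]

    def deim_indices(U):
        """Greedy DEIM interpolation-point selection on the POD basis columns."""
        n, m = U.shape
        p = [int(np.argmax(np.abs(U[:, 0])))]
        for l in range(1, m):
            # Interpolate the new mode at the points chosen so far ...
            c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
            # ... and take the location of the largest residual as the next point.
            r = U[:, l] - U[:, :l] @ c
            p.append(int(np.argmax(np.abs(r))))
        return np.array(p)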
Abstract:
In-situ characterisation of thermocouple sensors is a challenging problem. Recently the authors presented a blind characterisation technique based on the cross-relation method of blind identification. The method allows in-situ identification of two thermocouple probes, each with a different dynamic response, using only sampled sensor measurement data. While the technique offers certain advantages over alternative methods, including low estimation variance and the ability to compensate for noise-induced bias, the robustness of the method is limited by the multimodal nature of the cost function. In this paper, a normalisation term is proposed which improves the convexity of the cost function. Further, a normalisation and bias compensation hybrid approach is presented that exploits the advantages of both normalisation and bias compensation. It is found that the optimum of the hybrid cost function is less biased and more stable than when only normalisation is applied. All results were verified by simulation.
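For orientation, a minimal Python sketch of the (normalised) cross-relation cost follows, assuming two discrete-time sensor outputs y1, y2 driven by the same input so that y1*h2 = y2*h1; the names are invented and the hybrid bias-compensation term is not reproduced here.

    import numpy as np

    def cross_relation_cost(h1_hat, h2_hat, y1, y2, normalise=True):
        """Cross-relation mismatch of two candidate sensor models.

        Assumes h1_hat and h2_hat have equal length. Dividing by the model
        energy removes the trivial scaling ambiguity and, per the paper's
        argument, improves the convexity of the cost surface.
        """
        e = np.convolve(y1, h2_hat) - np.convolve(y2, h1_hat)
        J = float(np.dot(e, e))
        if normalise:
            J /= float(np.dot(h1_hat, h1_hat) + np.dot(h2_hat, h2_hat))
        return J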
Abstract:
This paper presents a discrete formalism for temporal reasoning about actions and change, which enjoys an explicit representation of time and action/event occurrences. The formalism allows the expression of truth values for given fluents over various times, including nondecomposable points/moments and decomposable intervals. Two major problems that beset most existing interval-based theories of action and change, the so-called dividing instant problem and the intermingling problem, are absent from this new formalism. The dividing instant problem is overcome by excluding the concept of ending points of intervals, and the intermingling problem is bypassed by characterising the fundamental time structure as a well-ordered discrete set of non-decomposable times (points and moments), from which decomposable intervals are constructed. A comprehensive characterisation of the relationship between the negation of fluents and the negation of the involved sentences is formally provided. The formalism provides a flexible expression of temporal relationships between effects and their causal events, including delayed effects of events, which remain a problematic question in most existing theories of action and change.
Abstract:
We present new methodologies to generate rational function approximations of the broadband electromagnetic responses of linear, passive networks of high-speed interconnects, and to construct SPICE-compatible equivalent circuit representations of the generated rational functions. These methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input early-time transient response data to late times, while at the same time providing a means to both interpolate and extrapolate the available frequency-domain data. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its use of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full-matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, this approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
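As a pointer to the underlying machinery, the Python sketch below (invented names) fits one network-matrix element in pole-residue form, f(s) ~ sum_k r_k/(s - p_k) + d, by linear least squares for a fixed trial pole set; unlike the methodologies above, it enforces neither conjugate-pair symmetry nor passivity.

    import numpy as np

    def fit_residues(freqs, samples, poles):
        """Least-squares residues and direct term for fixed poles.

        freqs   : frequency points in Hz
        samples : complex response samples f(j*2*pi*freqs)
        poles   : fixed trial poles p_k (complex array)
        """
        s = 2j * np.pi * np.asarray(freqs, dtype=float)
        # One column per pole term 1/(s - p_k), plus a constant column for d.
        A = np.column_stack([1.0 / (s[:, None] - np.asarray(poles)),
                             np.ones_like(s)])
        x, *_ = np.linalg.lstsq(A, np.asarray(samples), rcond=None)
        return x[:-1], x[-1]    # residues r_k, direct term d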
Abstract:
Automatic analysis of human behaviour in large collections of videos is gaining interest, even more so with the advent of file sharing sites such as YouTube. However, challenges still exist owing to several factors such as inter- and intra-class variations, cluttered backgrounds, occlusion, camera motion, and scale, view and illumination changes. This research focuses on modelling human behaviour for action recognition in videos. The developed techniques are validated on large-scale benchmark datasets and applied to real-world scenarios such as soccer videos. Three major contributions are made. The first contribution is in the area of the proper choice of a feature representation for videos. This involved a study of state-of-the-art action recognition techniques, feature extraction and processing, and dimensionality reduction, so as to yield the best performance with optimal computational requirements. Secondly, temporal modelling of human behaviour is performed. This involved frequency analysis and temporal integration of local information in the video frames to yield a temporal feature vector. Current practices mostly average the frame information over an entire video and neglect the temporal order. Lastly, the proposed framework is applied and further adapted to a real-world scenario, namely soccer videos. To this end, a dataset consisting of video sequences depicting players falling is created from actual match data and used to experimentally evaluate the proposed framework.
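A minimal sketch of the temporal-modelling idea, preserving temporal order through frequency analysis rather than frame averaging, might look like the following (Python; the per-frame descriptor matrix and parameter names are invented):

    import numpy as np

    def temporal_feature_vector(frame_features, n_coeffs=4):
        """frame_features: (T, D) array of per-frame descriptors for one video.

        Keeps the magnitudes of the first n_coeffs temporal Fourier
        coefficients per feature dimension; coefficient 0 recovers the usual
        frame average, the higher ones encode temporal structure.
        """
        spectrum = np.fft.rfft(frame_features, axis=0)        # (T//2+1, D)
        mags = np.abs(spectrum[:n_coeffs]) / frame_features.shape[0]
        return mags.ravel()                                   # length n_coeffs*D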
Abstract:
A catastrophic red tide event occurred in the Strait of Hormuz, the Persian Gulf and the Gulf of Oman from late summer 2008 to spring 2009. With its devastating effects, the phenomenon shocked all the countries on the margins of the Persian Gulf and the Gulf of Oman and caused considerable losses to the fishery industries, tourism, and the trade economy of the region. In a maritime cruise carried out by the Persian Gulf and Gulf of Oman Ecological Research Institute, field data including temperature, salinity, chlorophyll-a, dissolved oxygen and algal density were obtained for this research. Satellite data were received from the MODIS, MERIS and SeaWiFS sensors. Temperature and surface chlorophyll images were obtained and compared with the field data and with data from the PROBE model. The results of the present research indicated that with the occurrence of harmful algal blooms (HABs), the chlorophyll-a and dissolved oxygen contents increased in the surface water. Maximum algal density was seen on the northern coasts of the Strait of Hormuz. Lower algal density was detected in deep water and in offshore surface water. Our results show that the algal bloom was the result of a drop in seawater temperature, water circulation, and the adverse environmental pollution caused by industrial and urban sewage entering the coastal waters of this region of the Persian Gulf. The red tide started in the Strait of Hormuz and eventually covered about 140,000 km² of the Persian Gulf and the entire Strait of Hormuz, and it persisted for 10 months, a record among algal blooms observed across the world. The temperature and chlorophyll satellite images were consistent with the values measured in the field. This indicates that satellite measurements have acceptable precision and can be used in sea monitoring and modeling.
Abstract:
Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects, including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers, and present the prospect of digital systems operating at the information-theoretic limit. As such, they are often referred to as single electronic devices. Single electronic devices exhibit self-quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions, room-temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as charge macroscopic quantum tunnelling may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from the physical system geometry; the mathematical model which should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems.
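For context, a common mathematical model for the tunnelling events referred to above is the orthodox-theory rate, quoted here in its standard textbook form (which may differ from the model ultimately adopted in the thesis):

    \Gamma = \frac{\Delta F}{e^2 R_T}\left[1 - \exp\!\left(-\frac{\Delta F}{k_B T}\right)\right]^{-1},

where \Delta F is the free-energy decrease caused by the tunnelling event, R_T the junction tunnelling resistance, and T the temperature; as T \to 0 this reduces to \Gamma = \Delta F / (e^2 R_T) for \Delta F > 0 and zero otherwise.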
Abstract:
A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, which is a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from atomistic solid-on-solid-type models in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps, composed of individual layers of atoms, on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
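For reference, one standard statement of the 1+1-dimensional BCF step-flow model mentioned in outcome (1), with adatom density c on terraces, deposition flux F, desorption time \tau, diffusivity D and atomic area \Omega (the corrected version derived in the dissertation is not reproduced here), is

    \partial_t c = D\,\partial_x^2 c + F - \frac{c}{\tau}, \qquad
    v_n = \Omega D\left(\partial_x c\,\big|_{x_n^+} - \partial_x c\,\big|_{x_n^-}\right),

where v_n is the velocity of the step at x_n, set by the net adatom flux it captures from the two adjacent terraces.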
Abstract:
Abundance and composition of marine benthic communities have been relatively well studied on the SE Brazilian coast, but little is known about the patterns controlling the distribution of their planktonic larval stages. A survey of larval abundance on the continental margin, using a Multi-Plankton Sampler, was conducted along a cross-shelf transect off Cabo Frio (23 degrees S and 42 degrees W) during a coastal upwelling event. Hydrographic conditions were monitored through discrete CTD casts. Chlorophyll-a in the top 100 m of the water column was determined, and changes in surface chlorophyll-a were estimated using SeaWiFS images. Based on the larval abundances and the meso-scale hydrodynamic scenario, our results suggest two different processes affecting larval distributions. High larval densities were found nearshore due to the upwelling event, associated with high chlorophyll-a and a strong alongshore current. On the continental slope, high larval abundance was associated with a clockwise-rotating meander, which may have entrapped larvae from a region located further north (Cabo de Sao Tome, 22 degrees S and 41 degrees W). In mid-shelf areas, our data suggest that vertical migration may occur as a response to avoid offshore transport by upwelling plumes and/or cyclonic meanders. The hydrodynamic scenario observed in the study area has two distinct yet extremely important consequences: larval retention in food-rich upwelling areas and the broadening of the tropical domain to southernmost subtropical areas.
Abstract:
The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks that replicates the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an "optimal" or "best" product batch schedule for a one-year time period. Such a batch schedule can change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation is the "best" batch schedule, the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model, an event-based discrete time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal lengths based on events that change the state of the system. These events are the arrivals/departures of the tanker ships, the openings and closures of loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluated the effect of different tanker ship delay magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
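A minimal Python sketch of such an event-based discrete time representation follows; the event names and the toy handler are invented, and the real model's cost bookkeeping is omitted.

    import heapq

    class EventSimulator:
        def __init__(self):
            self._queue = []       # (time, sequence, label, handler)
            self._seq = 0          # tie-breaker for simultaneous events
            self.now = 0.0

        def schedule(self, time, label, handler):
            heapq.heappush(self._queue, (time, self._seq, label, handler))
            self._seq += 1

        def run(self, until):
            # Advance the clock from one state-changing event to the next,
            # instead of stepping through fixed-length intervals.
            while self._queue and self._queue[0][0] <= until:
                self.now, _, label, handler = heapq.heappop(self._queue)
                handler(self)      # a handler may schedule further events

    # Hypothetical usage: a tanker arriving 4.0 hours in, which opens an
    # unloading valve half an hour later.
    def tanker_arrival(sim):
        sim.schedule(sim.now + 0.5, "open unloading valve", lambda s: None)

    sim = EventSimulator()
    sim.schedule(4.0, "tanker arrival", tanker_arrival)
    sim.run(until=24.0)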
Abstract:
Introduction: Caring for a child who has an unexpected ALTE in hospital can be stressful. An ALTE may include a cardiac arrest, a respiratory arrest or a call for immediate assistance. An international survey of practice was conducted to identify any existing interventions, to inform a PhD programme of work aimed at reducing the stress from these events through preparation and support. Purpose: The purpose of the survey was threefold: (1) describe 'normal' practice when it comes to preparing staff or providing psychological support after caring for a child who has had an ALTE; (2) determine whether there are any interventions to prepare clinical staff for the potential psychological effects of caring for a child who has an ALTE; (3) determine whether there are any interventions to support clinical staff after caring for a child who has an ALTE. Materials and Methods: An 18-item semi-structured questionnaire was designed for the study to allow respondents to describe practices within their institution and outline their opinions and professional experiences. Clinicians from selected children's and adult hospitals in Australia, Canada, New Zealand, the United Kingdom and the United States of America were contacted by telephone. Following consent, they were given the option to complete the survey via telephone, by post or online. Results: Of the 61 hospitals approached, clinicians from 44 (72%) responded. Eighteen (41%) respondents identified interventions in place to prepare nurses for an ALTE, ranging from (but not limited to) ad hoc discussions during life support training through to structured simulation training. Thirty-six (82%) respondents identified interventions in place to support nurses after an ALTE, ranging from (but not limited to) debriefing through to structured case reviews. Conclusions: Interventions varied across institutions, with no outcome or evaluation data for the interventions published to date.
Abstract:
This thesis analysed Event Related Spectral Perturbations (ERSPs), i.e. variations in signal power at a given frequency or frequency band, divided into Event Related Synchronization (ERS) in the case of a power increase and Event Related Desynchronization (ERD) in the case of a power decrease, associated with reaching movements performed with the upper limb. In particular, power in the alpha rhythm (8-12 Hz) and the beta rhythm (12-30 Hz) was considered, as these rhythms are associated with movement preparation and execution. The analysed EEG signals (60 channels) come from 14 subjects who were asked to perform reaching movements towards a target position, with timing defined by the onset of an informative visual stimulus (CUE, corresponding to the LED at the target position) and a second, imperative visual stimulus (GO) that authorizes execution of the movement towards the target. Five discrete positions are coded, at 0°, 45°, 90°, 135° and 180°, on a semicircle whose centre represents the start and return point, called the base. The signals, after preprocessing, were decomposed via wavelet transform and processed with an averaging technique, in order to highlight ERS and ERD in the alpha and beta bands associated with visual and motor events. The temporal evolution of the topological distribution of the ERSPs over the scalp was presented, and analyses were carried out splitting the data according to the direction (right, centre, left) of the target and according to its depth (near, middle, far). The results show a desynchronization (ERD) during the movement preparation phase (immediately before the GO stimulus), in both the alpha and beta bands, and suggest a modulation of the ERD as a function of the direction/depth of the target.
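A minimal Python sketch of this processing chain, with invented array names and a textbook complex Morlet wavelet in place of the exact wavelet settings used in the thesis, computes trial-averaged power and expresses it as a percent change from a pre-CUE baseline (negative values indicating ERD, positive ERS):

    import numpy as np

    def morlet_power(x, fs, freq, n_cycles=7):
        """Power of signal x at one frequency via complex Morlet convolution."""
        sigma_t = n_cycles / (2 * np.pi * freq)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
        return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

    def ersp_percent(trials, fs, freqs, baseline):
        """trials: (n_trials, n_samples); baseline: slice of pre-CUE samples.

        Returns an (n_freqs, n_samples) array of percent power change
        relative to the baseline, averaged across trials.
        """
        power = np.array([[morlet_power(tr, fs, f) for tr in trials]
                          for f in freqs])          # (n_freqs, n_trials, n_samples)
        avg = power.mean(axis=1)                    # average over trials
        ref = avg[:, baseline].mean(axis=1, keepdims=True)
        return 100.0 * (avg - ref) / ref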