922 results for large-eddy simulation
Abstract:
When designing a new passenger ship or naval vessel, or modifying an existing design, how do we ensure that the proposed design is safe from an evacuation point of view? In the wake of major maritime disasters such as the Herald of Free Enterprise and the Estonia, and in light of the growth in the numbers of high-density, high-speed ferries and large-capacity cruise ships, issues concerned with the evacuation of passengers and crew at sea are receiving renewed interest. In the maritime industry, ship evacuation models are now recognised by IMO through the publication of the Interim Guidelines for Evacuation Analysis of New and Existing Passenger Ships including Ro-Ro. This approach offers the promise of quickly and efficiently bringing evacuation considerations into the design phase, while the ship is "on the drawing board", as well as of reviewing and optimising the evacuation provision of the existing fleet. Other applications of this technology include the optimisation of operating procedures for civil and naval vessels, such as determining the optimal location of a feature such as a casino, organising major passenger movement events such as boarding/disembarkation or restaurant/theatre changes, determining lean manning requirements, and the location and number of damage control parties, etc. This paper describes the development of the maritimeEXODUS evacuation model, which is fully compliant with IMO requirements, and briefly presents an example application to a large passenger ferry.
Abstract:
The prediction of convective heat transfer in enclosures under high ventilative flow rates is primarily of interest for building design and simulation purposes. Current models are based on experiments performed forty years ago with flat plates under natural convection conditions.
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths; evaluating the important ones helps to compute tight bounds efficiently and quickly.
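Uniformization, the building block that the bounding approach above extends, turns transient analysis of a CTMC into a Poisson-weighted sum over steps of a discrete-time chain. A minimal Python sketch of that standard step (the dissertation's path-based bounds and structured state-space representation are not reproduced here; suitable for moderate values of the uniformization rate times t):

```python
import numpy as np

def transient_reward(Q, pi0, rewards, t, eps=1e-10):
    """Expected instantaneous reward at time t of a CTMC with generator Q,
    via uniformization: E[r(X_t)] = sum_k Poisson(k; Lambda*t) * pi0 P^k r."""
    Lam = (-np.diag(Q)).max()            # uniformization rate >= max exit rate
    P = np.eye(Q.shape[0]) + Q / Lam     # DTMC observed at Poisson epochs
    weight = np.exp(-Lam * t)            # Poisson weight for k = 0
    mass, k = weight, 0
    pik = np.array(pi0, dtype=float)
    result = weight * (pik @ rewards)
    while 1.0 - mass > eps:              # truncate once the remaining mass < eps
        k += 1
        pik = pik @ P
        weight *= Lam * t / k
        mass += weight
        result += weight * (pik @ rewards)
    return result

# Example: two-state repairable component, reward = probability of being 'down'.
Q = np.array([[-0.1, 0.1], [2.0, -2.0]])
print(transient_reward(Q, [1.0, 0.0], [0.0, 1.0], t=5.0))
```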
Abstract:
Background: The use of artificial endoprostheses has become a routine procedure for knee and hip joints, while ankle arthritis has traditionally been treated by means of arthrodesis. Due to its advantages, the implantation of endoprostheses is constantly increasing. While finite element analyses (FEA) of strain-adaptive bone remodelling have been carried out for the hip joint in previous studies, to our knowledge there are no investigations that have considered remodelling processes of the ankle joint. In order to evaluate and optimise new-generation implants of the ankle joint, as well as to gain additional knowledge regarding the biomechanics, strain-adaptive bone remodelling has been calculated separately for the tibia and the talus after fitting them with an implant. Methods: FE models of the bone-implant assembly for both the tibia and the talus have been developed. Bone characteristics such as the density distribution have been applied corresponding to CT scans. A force of 5,200 N, which corresponds to the compression force during normal walking of a person weighing 100 kg according to Stauffer et al., has been used in the simulation. The bone adaptation law previously developed by our research team has been used for the calculation of the remodelling processes. Results: A total bone mass loss of 2% in the tibia and 13% in the talus was calculated. The greater decline of density in the talus is due to its smaller size compared to the relatively large implant dimensions, causing remodelling processes in the whole bone tissue. In the tibia, bone remodelling processes are only calculated in areas adjacent to the implant; thus, a smaller bone mass loss than in the talus can be expected. The simulation results in the distal tibia agree well with findings reported in the literature. Conclusions: In this study, strain-adaptive bone remodelling processes are simulated using the FE method. The results contribute to a better understanding of the biomechanical behaviour of the ankle joint and hence are useful for the optimisation of the implant geometry in the future.
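The abstract does not state the team's adaptation law itself. Purely as an illustration of how strain-adaptive remodelling is typically driven in such FE studies, the sketch below performs a generic Huiskes-type density update with a "lazy zone"; all constants and the stimulus definition are placeholders, not the law used in the study.

```python
import numpy as np

def remodel_density(rho, sed, ref_stimulus, B=1.0, lazy=0.1, dt=1.0,
                    rho_min=0.01, rho_max=1.74):
    """One explicit time step of a generic strain-adaptive remodelling rule:
    density rises where the strain-energy-density stimulus exceeds the
    reference level, falls where it is below, and stays unchanged inside the
    'lazy zone'. Illustrative only; not the authors' adaptation law."""
    stimulus = sed / rho                     # strain energy density per unit mass
    drho = np.zeros_like(rho)
    high = stimulus > (1 + lazy) * ref_stimulus
    low = stimulus < (1 - lazy) * ref_stimulus
    drho[high] = B * (stimulus[high] - (1 + lazy) * ref_stimulus)
    drho[low] = B * (stimulus[low] - (1 - lazy) * ref_stimulus)
    return np.clip(rho + dt * drho, rho_min, rho_max)
```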
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques; using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which it was possible to obtain open source files.
The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have also shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both the AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron machine.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was demonstrating that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
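As a rough illustration of the kind of lock-free, gather-friendly data layout such a simulator relies on (not the actual ZSIM code), the sketch below evaluates one level of a levelised gate netlist stored in flat index arrays; every gate in a level is independent, so the gathers and element-wise logic map naturally onto SIMD units and multicore loops.

```python
import numpy as np

AND, OR, XOR, NOT = 0, 1, 2, 3

def simulate_level(values, gate_type, in_a, in_b):
    """Evaluate one level of 2-input gates. `values` holds all net values
    (0/1); in_a and in_b are index arrays (SIMD-style gathers of the two
    inputs of each gate); gate_type selects the Boolean function."""
    a = values[in_a]                     # gather first inputs
    b = values[in_b]                     # gather second inputs
    out = np.empty_like(a)
    out[gate_type == AND] = (a & b)[gate_type == AND]
    out[gate_type == OR]  = (a | b)[gate_type == OR]
    out[gate_type == XOR] = (a ^ b)[gate_type == XOR]
    out[gate_type == NOT] = (~a & 1)[gate_type == NOT]
    return out

# Example: nets 0-3 are primary inputs; gates drive nets 4 and 5.
values = np.array([0, 1, 1, 0, 0, 0], dtype=np.uint8)
gate_type = np.array([AND, XOR], dtype=np.int8)
in_a = np.array([0, 1]); in_b = np.array([1, 2])
values[4:6] = simulate_level(values, gate_type, in_a, in_b)
```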
Abstract:
Observational data and a three-dimensional numerical model (POM) are used to investigate the Persian Gulf outflow structure and its spreading pathway into the Oman Sea. The model is based on an orthogonal curvilinear coordinate system in the horizontal and a terrain-following (sigma) coordinate system in the vertical. In the simulation, the horizontal diffusivity coefficients are calculated from the Smagorinsky diffusivity formula and the eddy vertical diffusivities are obtained from a second-order turbulence closure model (namely, the Mellor-Yamada level 2.5 turbulence model). The modeling area includes the east of the Persian Gulf, the Oman Sea and a part of the north-east of the Indian Ocean. In the model, the horizontal grid spacing was set to about 3.5 km and the number of vertical levels was set to 32. The simulations show that the mean salinity of the PG outflow does not change substantially during the year and is about 39 psu, while its temperature exhibits seasonal variations. These lead to variations in outflow density such that it reaches its maximum in late winter (March) and its minimum in mid-summer (August). At the entrance to the Oman Sea, the PG outflow turns to the right due to the Coriolis effect and descends the continental slope until it reaches its equilibrium depth. The higher density of the outflow during March causes it to sink to greater depths than in August, when the density is lowest; hence, the neutral buoyancy depths of the outflow are about 500 m in March and 250 m in August. The outflow then spreads at its equilibrium depth in the Oman Sea in the vicinity of the western and southern boundaries until it approaches Ras al Hamra Cape, where the water depth suddenly begins to increase. Therefore, during March, the outflow, which is deeper and wider than in August, is more affected by the steep slope topography, and as a result of the vortex stretching mechanism and conservation of potential vorticity it separates from the lateral boundaries and finally forms an anti-cyclonic eddy in the Oman Sea. During August, by contrast, the outflow continues to move along the lateral boundaries. In addition, the interaction of the PG outflow with the tide in the Strait of Hormuz leads to intermittency in the outflow movement into the Oman Sea, which could be the major reason for the generation of Peddies (Persian Gulf water eddies) in the Oman Sea.
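For reference, the Smagorinsky horizontal diffusivity mentioned above is commonly evaluated in POM in the following standard form (quoted from memory; C is a dimensionless constant of order 0.1-0.2, and the exact coefficient used in this study is not stated in the abstract):

\[
A_M = C\,\Delta x\,\Delta y\,\tfrac{1}{2}\bigl|\nabla\mathbf{V} + (\nabla\mathbf{V})^{T}\bigr|,
\qquad
\tfrac{1}{2}\bigl|\nabla\mathbf{V} + (\nabla\mathbf{V})^{T}\bigr|
= \left[\left(\frac{\partial u}{\partial x}\right)^{2}
+ \tfrac{1}{2}\left(\frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}\right)^{2}
+ \left(\frac{\partial v}{\partial y}\right)^{2}\right]^{1/2}.
\]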
Abstract:
Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which all Argo data and half of the Argo data, respectively, are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in analyzed fields, as estimated from the differences when compared with in situ observations. For example, in the 0–300 m layer, RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation–model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are thus reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most impacted regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation can be observed when only half of the data are assimilated. Argo observations therefore matter for constraining the model solution, even in an eddy-permitting model configuration. The impact of the Argo float data assimilation on other model variables is briefly assessed: the improvement of the fit to Argo profiles does not lead globally to unphysical corrections to the sea surface temperature and sea surface height. The main conclusion is that the performance of the Mercator Ocean 0.25° global data assimilation system is heavily dependent on the availability of Argo data.
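The misfit statistics quoted above are plain RMS differences between co-located model fields and in situ profiles, together with the relative reduction attributed to Argo. A trivial sketch of these two metrics (variable names are illustrative):

```python
import numpy as np

def rms_misfit(model_vals, obs_vals):
    """RMS of model-minus-observation differences at co-located points."""
    d = np.asarray(model_vals, float) - np.asarray(obs_vals, float)
    return np.sqrt(np.nanmean(d ** 2))

def improvement_pct(rms_with_argo, rms_without_argo):
    """Relative reduction of the misfit when Argo data are assimilated."""
    return 100.0 * (1.0 - rms_with_argo / rms_without_argo)
```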
Abstract:
The focus of this research is to explore the applications of the finite difference formulation based on the latency insertion method (LIM) to the analysis of circuit interconnects. Special attention is devoted to addressing the issues that arise in very large networks such as on-chip signal and power distribution networks. We demonstrate that the LIM has the power and flexibility to handle various types of analysis required at different stages of circuit design. The LIM is particularly suitable for simulations of very large scale linear networks and can significantly outperform conventional circuit solvers (such as SPICE).
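In its basic explicit form the LIM advances branch currents and node voltages in a leapfrog fashion, much like an FDTD scheme. The sketch below shows that simplified textbook update for series R-L branches (with a voltage source E) between nodes and shunt G-C elements (with a current source H) at each node; it is not the exact formulation developed in this research.

```python
import numpy as np

def lim_step(V, I, dt, C, G, Hsrc, L, R, Esrc, frm, to):
    """One explicit leapfrog step of a basic latency insertion method:
    branch currents are advanced half a step from node voltages, then node
    voltages are advanced from the updated branch currents."""
    # Branch update: series R-L branch k connects nodes frm[k] -> to[k].
    I += (dt / L) * (V[frm] - V[to] - R * I + Esrc)
    # Node update: sum branch currents into each node, then advance voltages.
    inflow = np.zeros_like(V)
    np.subtract.at(inflow, frm, I)   # current leaves the 'from' node
    np.add.at(inflow, to, I)         # and enters the 'to' node
    V += (dt / C) * (Hsrc - G * V + inflow)
    return V, I
```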
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
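To illustrate why dynamic programming makes such models tractable, the sketch below solves the logsum (expected maximum utility) Bellman equation of a recursive-logit-style route choice model by simple fixed-point iteration; the network, utilities and absorbing destination are made-up placeholders, not the thesis's specification.

```python
import numpy as np

def solve_values(util, dest, tol=1e-10, max_iter=1000):
    """util[k, a]: deterministic utility of moving from node k to node a
    (-inf where no link exists, negative values i.e. costs elsewhere);
    dest is the absorbing destination with V(dest) = 0."""
    V = np.zeros(util.shape[0])
    for _ in range(max_iter):
        with np.errstate(divide="ignore"):
            # V(k) = log sum_a exp( u(k,a) + V(a) )
            V_new = np.log(np.exp(util + V[None, :]).sum(axis=1))
        V_new[dest] = 0.0
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    probs = np.exp(util + V[None, :] - V[:, None])   # next-link choice probabilities
    return V, probs

# Tiny example: 3-node network, destination = node 2.
NI = -np.inf
util = np.array([[NI, -1.0, -2.5],
                 [NI,  NI, -1.0],
                 [NI,  NI,  NI]])
V, P = solve_values(util, dest=2)
```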
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in an outcrop area of the Guarani Aquifer System (GAS) located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
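A minimal sketch of the spatial step of such a framework: ordinary kriging of a quantity produced by the time-series models (for example, a predicted head at a given date) from monitoring wells to an unsampled location. The exponential variogram and its parameters are placeholders, not values fitted to the GAS data.

```python
import numpy as np

def variogram(h, nugget=0.05, sill=1.0, rng=5000.0):
    """Exponential semivariogram (placeholder parameters, h in metres)."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy_obs, z_obs, xy_new):
    """Ordinary kriging estimate at xy_new from wells xy_obs with values z_obs."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)     # gamma(0) = 0 by definition
    A[n, n] = 0.0                        # Lagrange multiplier row/column
    b = np.append(variogram(np.linalg.norm(xy_obs - xy_new, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_obs)
```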
Abstract:
Fatigue damage in the connections of single mast arm signal support structures is one of the primary safety concerns because collapse could result from fatigue-induced cracking. This type of cantilever signal support structure typically has very light damping, and excessively large wind-induced vibrations have been observed. Major changes related to fatigue design were made in the 2001 AASHTO LRFD Specification for Structural Supports for Highway Signs, Luminaires, and Traffic Signals, and supplemental damping devices have been shown to be promising in reducing the vibration response and thus the fatigue load demand on mast arm signal support structures. The primary objective of this study is to investigate the effectiveness and optimal use of one type of damping device, the tuned mass damper (TMD), in vibration response mitigation. Three prototype single mast arm signal support structures with 50-ft, 60-ft, and 70-ft arm lengths, respectively, are selected for this numerical simulation study. In order to validate the finite element models for the subsequent simulation study, analytical modeling of the static deflection response of the mast arm of the signal support structures was performed and found to be close to the numerical simulation results from the beam-element-based finite element model. A 3-DOF dynamic model was then built using the analytically derived stiffness matrix for modal analysis and time history analysis. The free vibration response and forced (harmonic) vibration response of the mast arm structures from this dynamic model are observed to be in good agreement with the finite element analysis results. Furthermore, experimental results from a recent free vibration test of a full-scale 50-ft mast arm specimen in the lab are used to verify the prototype structure's fundamental frequency and viscous damping ratio. After validating the finite element models, a series of parametric studies was conducted to examine the trend and determine the optimal use of the tuned mass damper on the prototype single mast arm signal support structures by varying the following parameters: mass, frequency, viscous damping ratio, and location of the TMD. The numerical simulation results reveal that the two parameters that most influence the vibration mitigation effectiveness of the TMD on the single mast arm signal pole structures are the TMD frequency and its viscous damping ratio.
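For context, the classical Den Hartog formulas give a standard starting point for choosing TMD frequency and damping before a parametric study of the kind described above; the sketch below is generic and illustrative, not the study's own optimal values.

```python
import numpy as np

def den_hartog_tmd(m_structure, f_structure_hz, mass_ratio=0.02):
    """Classical Den Hartog tuning of a TMD attached to an (idealised,
    undamped) primary structure of modal mass m_structure and natural
    frequency f_structure_hz, for a chosen mass ratio mu."""
    mu = mass_ratio
    m_tmd = mu * m_structure
    f_tmd = f_structure_hz / (1.0 + mu)                      # optimal TMD frequency
    zeta_tmd = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio
    omega = 2.0 * np.pi * f_tmd
    k_tmd = m_tmd * omega ** 2                               # spring stiffness
    c_tmd = 2.0 * zeta_tmd * m_tmd * omega                   # dashpot coefficient
    return m_tmd, f_tmd, zeta_tmd, k_tmd, c_tmd
```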
Abstract:
Determining effective hydraulic, thermal, mechanical and electrical properties of porous materials by means of classical physical experiments is often time-consuming and expensive. Thus, accurate numerical calculations of material properties are of increasing interest in geophysical, manufacturing, bio-mechanical and environmental applications, among other fields. Characteristic material properties (e.g. intrinsic permeability, thermal conductivity and elastic moduli) depend on morphological details on the pore scale, such as the shape and size of pores and pore throats or cracks. To obtain reliable predictions of these properties it is necessary to perform numerical analyses of sufficiently large unit cells. Such representative volume elements require optimized numerical simulation techniques. Current state-of-the-art simulation tools for calculating effective permeabilities of porous materials are based on various methods, e.g. lattice Boltzmann, finite volume or explicit jump Stokes methods. All approaches still have limitations in the maximum size of the simulation domain. In response to these deficits of the well-established methods, we propose an efficient and reliable numerical method which allows intrinsic permeabilities to be calculated directly from voxel-based data obtained from 3D imaging techniques such as X-ray microtomography. We present a modelling framework based on a parallel finite difference solver, allowing the calculation of large domains with relatively low computing requirements (i.e. desktop computers). The presented method is validated on a diverse selection of materials, obtaining accurate results for a large range of porosities, wider than the ranges previously reported. Ongoing work includes the estimation of other effective properties of porous media.
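Once the pore-scale flow field has been computed, the intrinsic permeability is recovered by fitting the volume-averaged flux to Darcy's law. A minimal post-processing sketch (symbols, units and the example numbers are assumptions about a typical setup, not results from the study):

```python
def intrinsic_permeability(q_mean, mu, dp, length):
    """Darcy's law: <q> = (k / mu) * dp / L  =>  k = mu * <q> * L / dp.
    q_mean: mean Darcy flux through the sample cross-section (m/s)
    mu:     dynamic viscosity of the pore fluid (Pa s)
    dp:     pressure drop applied across the sample (Pa)
    length: sample length in the flow direction (m)
    Returns the intrinsic permeability k in m^2."""
    return mu * q_mean * length / dp

# Example: water (mu ~ 1e-3 Pa s), 1 mm sample, 100 Pa drop, mean flux 1e-6 m/s
# gives k = 1e-14 m^2, i.e. roughly 10 millidarcy.
print(intrinsic_permeability(q_mean=1e-6, mu=1e-3, dp=100.0, length=1e-3))
```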
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model by using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
Abstract:
In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is better suited to modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. We had to determine an efficient implementation of management policy in the store's fitting room using DES and ABS. Overall, we found that both simulation models were a good representation of the real system when modelling human reactive behaviour.