959 results for Feature scale simulation
Abstract:
This paper describes a new 2D model for the photospheric evolution of the magnetic carpet. It is the first in a series of papers working towards constructing a realistic 3D non-potential model for the interaction of small-scale solar magnetic fields. In the model, the basic evolution of the magnetic elements is governed by a supergranular flow profile. In addition, magnetic elements may evolve through the processes of emergence, cancellation, coalescence and fragmentation. Model parameters for the emergence of bipoles are based upon the results of observational studies. Using this model, several simulations are considered, where the range of flux with which bipoles may emerge is varied. In all cases the model quickly reaches a steady state where the rates of emergence and cancellation balance. Analysis of the resulting magnetic field shows that we reproduce observed quantities such as the flux distribution, mean field, cancellation rates, photospheric recycle time and a magnetic network. As expected, the simulation matches observations more closely when a larger, and consequently more realistic, range of emerging flux values is allowed (4×10¹⁶–10¹⁹ Mx). The model best reproduces the currently observed properties of the magnetic carpet when we take the minimum absolute flux for emerging bipoles to be 4×10¹⁶ Mx. In the future, this 2D model will be used as an evolving photospheric boundary condition for 3D non-potential modeling.
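For readers unfamiliar with this class of model, the sketch below illustrates, in Python, two of the ingredients named in the abstract: advection of point-like flux elements by a cellular flow and cancellation of opposite-polarity pairs that meet. The flow profile, units and interaction radius are illustrative assumptions, not the paper's actual prescriptions, and emergence, coalescence and fragmentation are omitted.

```python
import numpy as np

# Hypothetical sketch only: toy "supergranular" advection plus pairwise
# cancellation of point-like flux elements.  Parameters are illustrative.

rng = np.random.default_rng(0)
L = 50.0                                   # box size [Mm], illustrative
n = 200
x, y = rng.uniform(0, L, (2, n))           # element positions
flux = rng.choice([-1, 1], n) * rng.uniform(4e16, 1e19, n)   # signed flux [Mx]

def supergranular_velocity(x, y, cell=15.0, v0=0.5):
    """Toy cellular flow that sweeps elements toward cell boundaries."""
    vx = v0 * np.sin(2 * np.pi * x / cell) * np.cos(2 * np.pi * y / cell)
    vy = -v0 * np.cos(2 * np.pi * x / cell) * np.sin(2 * np.pi * y / cell)
    return vx, vy

def step(x, y, flux, dt=0.1, r_cancel=0.2):
    vx, vy = supergranular_velocity(x, y)
    x, y = (x + vx * dt) % L, (y + vy * dt) % L
    for i in range(len(flux)):             # pairwise cancellation
        if flux[i] == 0.0:
            continue
        d2 = (x - x[i]) ** 2 + (y - y[i]) ** 2
        for j in np.where((d2 < r_cancel**2) & (np.sign(flux) == -np.sign(flux[i])))[0]:
            dphi = min(abs(flux[i]), abs(flux[j]))   # flux removed from both elements
            flux[i] -= np.sign(flux[i]) * dphi
            flux[j] -= np.sign(flux[j]) * dphi
            if flux[i] == 0.0:
                break
    return x, y, flux

x, y, flux = step(x, y, flux)
print("unsigned flux after one step:", np.abs(flux).sum())
```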
Abstract:
This paper is the second in a series of studies working towards constructing a realistic, evolving, non-potential coronal model for the solar magnetic carpet. In the present study, the interaction of two magnetic elements is considered. Our objectives are to study magnetic energy build-up, storage and dissipation as a result of emergence, cancellation, and flyby of these magnetic elements. In the future these interactions will be the basic building blocks of more complicated simulations involving hundreds of elements. Each interaction is simulated in the presence of an overlying uniform magnetic field, which lies at various orientations with respect to the evolving magnetic elements. For these three small-scale interactions, the free energy stored in the field at the end of the simulation ranges from 0.2–2.1×10²⁶ ergs, whilst the total energy dissipated ranges from 1.3–6.3×10²⁶ ergs. For all cases, a stronger overlying field results in higher energy storage and dissipation. For the cancellation and emergence simulations, motion perpendicular to the overlying field results in the highest values. For the flyby simulations, motion parallel to the overlying field gives the highest values. In all cases, the free energy built up is sufficient to explain small-scale phenomena such as X-ray bright points or nanoflares. In addition, if scaled for the correct number of magnetic elements for the volume considered, the energy continually dissipated provides a significant fraction of the quiet Sun coronal heating budget.
Abstract:
We present a new model for the Sun's global photospheric magnetic field during a deep minimum of activity, in which no active regions emerge. The emergence and subsequent evolution of small-scale magnetic features across the full solar surface is simulated, subject to the influence of a global supergranular flow pattern. Visually, the resulting simulated magnetograms reproduce the typical structure and scale observed in quiet Sun magnetograms. Quantitatively, the simulation quickly reaches a steady state, resulting in a mean field and flux distribution that are in good agreement with those determined from observations. A potential coronal magnetic field is extrapolated from the simulated full Sun magnetograms, to consider the implications of such a quiet photospheric magnetic field on the corona and inner heliosphere. The bulk of the coronal magnetic field closes very low down, in short connections between small-scale features in the simulated magnetic network. Just 0.1% of the photospheric magnetic flux is found to be open at 2.5 Rʘ, around 10–100 times less than that determined for typical HMI synoptic map observations. If such conditions were to exist on the Sun, this would lead to a significantly weaker interplanetary magnetic field than is presently observed, and hence a much higher cosmic ray flux at Earth.
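As a rough back-of-envelope illustration of why so little open flux implies a weak interplanetary field, the sketch below spreads an assumed unsigned open flux uniformly over a sphere at 1 AU. Only the 0.1% open fraction comes from the abstract; the total photospheric flux value is an illustrative assumption.

```python
import numpy as np

# Illustrative estimate (not from the paper): radial IMF strength at 1 AU if
# the unsigned open flux Phi_open is spread uniformly over a sphere.

AU = 1.496e13          # cm
phi_quiet = 3e22       # assumed total unsigned photospheric flux [Mx], illustrative
f_open = 1e-3          # 0.1% open, as quoted in the abstract

phi_open = f_open * phi_quiet                    # unsigned open flux [Mx]
b_r_1au = phi_open / (4 * np.pi * AU**2) * 1e5   # Gauss -> nT
print(f"|B_r| at 1 AU ~ {b_r_1au:.3f} nT")       # far below the few-nT values observed
```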
Abstract:
When designing a new passenger ship or naval vessel, or modifying an existing design, how do we ensure that the proposed design is safe from an evacuation point of view? In the wake of major maritime disasters such as the Herald of Free Enterprise and the Estonia, and in light of the growth in the numbers of high density, high-speed ferries and large capacity cruise ships, issues concerned with the evacuation of passengers and crew at sea are receiving renewed interest. In the maritime industry, ship evacuation models are now recognised by IMO through the publication of the Interim Guidelines for Evacuation Analysis of New and Existing Passenger Ships including Ro-Ro. This approach promises to bring evacuation considerations into the design phase quickly and efficiently, while the ship is "on the drawing board", as well as to support reviewing and optimising the evacuation provision of the existing fleet. Other applications of this technology include the optimisation of operating procedures for civil and naval vessels, such as determining the optimal location of a feature such as a casino, organising major passenger movement events such as boarding/disembarkation or restaurant/theatre changes, and determining lean manning requirements and the location and number of damage control parties. This paper describes the development of the maritimeEXODUS evacuation model, which is fully compliant with IMO requirements, and briefly presents an example application to a large passenger ferry.
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to: • verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1); • verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (Section 5.3.2); • show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (Section 5.3.3); • show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive super-computers (Section 5.3.5). To evaluate ZSIM, two types of test circuits were used: 1. circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators; 2. circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files. The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles and manages the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. In contrast, when targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows: the primary achievement of this work was proving that the ZSIM architecture is faster than previously published logic simulators on low cost platforms; the secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
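The abstract does not give ZSIM's internal layout, but the flavour of a lock-free, gather-friendly structure-of-arrays gate evaluator can be sketched as follows. The netlist encoding, opcode set and the use of NumPy fancy indexing as a stand-in for SIMD gather instructions are all illustrative assumptions, not ZSIM's actual design.

```python
import numpy as np

# Illustrative sketch (not the ZSIM code): a structure-of-arrays gate-level
# simulator in which each topological level is evaluated with vectorised
# gathers -- the access pattern that SIMD gather instructions accelerate.

AND, OR, XOR, NOT = 0, 1, 2, 3

def simulate_level(values, in_a, in_b, op, out):
    a = values[in_a]          # "gather" of first operands
    b = values[in_b]          # "gather" of second operands
    res = np.empty_like(a)
    res[op == AND] = (a & b)[op == AND]
    res[op == OR]  = (a | b)[op == OR]
    res[op == XOR] = (a ^ b)[op == XOR]
    res[op == NOT] = (~a & 1)[op == NOT]
    values[out] = res         # scatter results to output slots
    return values

# Tiny example: values[0..1] are primary inputs; gates 2 and 3 form one level.
values = np.array([1, 0, 0, 0], dtype=np.uint8)
in_a = np.array([0, 0]); in_b = np.array([1, 1])
op   = np.array([AND, XOR]); out = np.array([2, 3])
print(simulate_level(values, in_a, in_b, op, out))   # -> [1 0 0 1]
```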
Abstract:
Automatic analysis of human behaviour in large collections of videos is gaining interest, even more so with the advent of file sharing sites such as YouTube. However, challenges still exist owing to several factors such as inter- and intra-class variations, cluttered backgrounds, occlusion, camera motion, scale, view and illumination changes. This research focuses on modelling human behaviour for action recognition in videos. The developed techniques are validated on large scale benchmark datasets and applied to real-world scenarios such as soccer videos. Three major contributions are made. The first contribution is in the area of the proper choice of a feature representation for videos. This involved a study of state-of-the-art techniques for action recognition, feature extraction processing and dimensionality reduction techniques so as to yield the best performance with optimal computational requirements. Secondly, temporal modelling of human behaviour is performed. This involved frequency analysis and temporal integration of local information in the video frames to yield a temporal feature vector; current practices mostly average the frame information over an entire video and neglect the temporal order. Lastly, the proposed framework is applied and further adapted to a real-world scenario, namely soccer videos. To this end, a dataset consisting of video sequences depicting events of players falling is created from actual match data and used to experimentally evaluate the proposed framework.
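One common way to realise the kind of temporal integration described above is to keep a few temporal DCT coefficients of each feature dimension instead of averaging frames; the sketch below is a generic illustration of that idea under assumed dimensions and function names, not the thesis's exact pipeline.

```python
import numpy as np
from scipy.fft import dct

# Generic illustration: a video descriptor that preserves coarse temporal
# order by keeping the first few temporal DCT coefficients per feature
# dimension, rather than averaging frame descriptors over the whole clip.

def temporal_descriptor(frame_feats, n_coeffs=4):
    """frame_feats: (T, D) per-frame features; returns a (D * n_coeffs,) vector."""
    coeffs = dct(frame_feats, axis=0, norm="ortho")[:n_coeffs]   # (n_coeffs, D)
    return coeffs.T.ravel()

T, D = 120, 64                          # e.g. 120 frames of 64-dim features (illustrative)
feats = np.random.rand(T, D)
mean_pooled = feats.mean(axis=0)        # order-agnostic baseline
temporal = temporal_descriptor(feats)   # keeps coarse temporal structure
print(mean_pooled.shape, temporal.shape)   # (64,) (256,)
```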
Abstract:
People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which has implications also for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
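To make the dynamic programming step concrete, the sketch below solves the value function of a recursive-logit-style route choice model on a toy network by fixed-point iteration; the network, link utilities and stopping rule are illustrative assumptions, not the estimation machinery developed in the thesis.

```python
import numpy as np

# Toy illustration of the dynamic-programming core of a recursive logit
# route choice model: the expected-maximum-utility value function V(k)
# satisfies  exp(V(k)) = sum_a exp(v(k,a) + V(succ(k,a))),  V(dest) = 0.

# Nodes 0..3, destination = 3; v are (negative) instantaneous link utilities.
links = {0: [(1, -1.0), (2, -1.5)],   # node: [(successor, utility), ...]
         1: [(3, -1.0), (2, -0.5)],
         2: [(3, -1.0)],
         3: []}
dest = 3

V = np.zeros(4)
for _ in range(100):                          # value iteration to a fixed point
    V_new = V.copy()
    for k, succs in links.items():
        if k == dest:
            continue
        V_new[k] = np.log(sum(np.exp(v + V[j]) for j, v in succs))
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Link choice probabilities at node 0 (logit over downstream values):
num = np.array([np.exp(v + V[j]) for j, v in links[0]])
print(V, num / num.sum())
```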
Abstract:
Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers, and present the prospect of digital systems operating at the information theoretic limit. As such, they are often referred to as single electronic devices. Single electronic devices exhibit self quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions the possibility of room temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as charge macroscopic quantum tunnelling may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from physical system geometry; the mathematical model that should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems.
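As background on the kind of tunnelling model referred to above, the standard "orthodox theory" rate for a single junction can be written down directly. The sketch below is generic textbook material rather than the thesis's own tools, and the example junction parameters are assumed.

```python
import numpy as np

# Orthodox-theory tunnelling rate across a junction of resistance R_T when the
# event lowers the system free energy by dF (background material, not the
# thesis's simulation tools).

e_charge = 1.602e-19      # C
k_B = 1.381e-23           # J/K

def tunnel_rate(dF, R_T, T):
    """Tunnelling rate [1/s]; dF > 0 means the event is energetically favourable."""
    x = dF / (k_B * T)
    if abs(x) < 1e-12:
        # dF -> 0 limit, where the rate tends to k_B*T / (e^2 R_T)
        return k_B * T / (e_charge**2 * R_T)
    return (dF / (e_charge**2 * R_T)) / (1.0 - np.exp(-x))

# Example: a 100 kOhm junction at 4.2 K with a 0.1 meV free-energy gain.
print(f"{tunnel_rate(0.1e-3 * e_charge, 1e5, 4.2):.3e} events/s")
```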
Abstract:
Fatigue damage in the connections of single mast arm signal support structures is one of the primary safety concerns because collapse could result from fatigue-induced cracking. This type of cantilever signal support structure typically has very light damping, and excessively large wind-induced vibrations have been observed. Major changes related to fatigue design were made in the 2001 AASHTO LRFD Specification for Structural Supports for Highway Signs, Luminaires, and Traffic Signals, and supplemental damping devices have been shown to be promising in reducing the vibration response and thus the fatigue load demand on mast arm signal support structures. The primary objective of this study is to investigate the effectiveness and optimal use of one type of damping device, the tuned mass damper (TMD), in vibration response mitigation. Three prototype single mast arm signal support structures with arm lengths of 50 ft, 60 ft, and 70 ft, respectively, are selected for this numerical simulation study. In order to validate the finite element models for the subsequent simulation study, analytical modeling of the static deflection response of the mast arm of the signal support structures was performed and found to be close to the numerical simulation results from the beam-element-based finite element model. A 3-DOF dynamic model was then built using an analytically derived stiffness matrix for modal analysis and time history analysis. The free vibration response and forced (harmonic) vibration response of the mast arm structures from this dynamic model are observed to be in good agreement with the finite element analysis results. Furthermore, an experimental result from a recent free vibration test of a full-scale 50-ft mast arm specimen in the lab is used to verify the prototype structure's fundamental frequency and viscous damping ratio. After validating the finite element models, a series of parametric studies was conducted to examine trends and determine the optimal use of a tuned mass damper on the prototype single mast arm signal support structures by varying the following parameters: mass, frequency, viscous damping ratio, and location of the TMD. The numerical simulation results reveal that the two parameters that most influence the vibration mitigation effectiveness of a TMD on single mast arm signal pole structures are the TMD frequency and its viscous damping ratio.
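For context on the parametric study, the classical Den Hartog formulas give a common starting point for choosing TMD frequency and damping as a function of the mass ratio. The values they produce are generic and need not coincide with the optima found for the mast arm structures in the thesis.

```python
import numpy as np

# Classical Den Hartog tuning for a TMD on a lightly damped primary structure
# under harmonic loading; shown as background, not as the study's results.

def den_hartog_tmd(mass_ratio):
    """mass_ratio mu = m_TMD / modal mass of the structure."""
    f_opt = 1.0 / (1.0 + mass_ratio)                                   # TMD/structure frequency ratio
    zeta_opt = np.sqrt(3.0 * mass_ratio / (8.0 * (1.0 + mass_ratio) ** 3))
    return f_opt, zeta_opt

for mu in (0.01, 0.02, 0.05):
    f_opt, zeta_opt = den_hartog_tmd(mu)
    print(f"mu={mu:.2f}: frequency ratio={f_opt:.3f}, TMD damping ratio={zeta_opt:.3f}")
```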
Abstract:
A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, which is a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of links between models capturing different scales, and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
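A toy 1+1-dimensional solid-on-solid kinetic Monte Carlo model, of the kind that serves as the atomistic starting point for such derivations, can be sketched in a few lines. The rate law, parameters and lattice size below are illustrative assumptions, not the models analysed in the thesis.

```python
import numpy as np

# Toy 1+1D solid-on-solid KMC: column heights on a periodic lattice, with the
# top atom of a column hopping to a neighbour at an Arrhenius rate set by its
# number of lateral bonds.  Purely illustrative parameters and rate law.

rng = np.random.default_rng(1)
N = 64
h = np.zeros(N, dtype=int)           # column heights
kT, E_bond, nu = 0.5, 1.0, 1.0       # illustrative units

def rates(h):
    left, right = np.roll(h, 1), np.roll(h, -1)
    n_bonds = (left >= h).astype(int) + (right >= h).astype(int)
    return nu * np.exp(-n_bonds * E_bond / kT)

def kmc_step(h, t):
    r = rates(h)
    total = r.sum()
    i = rng.choice(N, p=r / total)             # which column loses its top atom
    j = (i + rng.choice([-1, 1])) % N          # hop left or right
    h[i] -= 1
    h[j] += 1
    return h, t + rng.exponential(1.0 / total)

t = 0.0
for _ in range(1000):
    h, t = kmc_step(h, t)
print("surface width after 1000 hops:", h.std().round(3), "at t =", round(t, 2))
```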
Abstract:
Statistically stationary and homogeneous shear turbulence (SS-HST) is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and using compact finite differences in the direction of the shear. No remeshing is used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are “minimal” in the sense of containing on average only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the two other box dimensions should be sufficiently large (Lx ≳ 2Lz, Ly ≳ Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx ≳ 2Ly, couple with the passing period of the shear-periodic boundary condition, and develop strong unphysical linearized bursts. Within those limits, the flow shows interesting similarities and differences with other shear flows, and in particular with the logarithmic layer of wall-bounded turbulence. These are explored in some detail. They include a self-sustaining process for large-scale streaks and quasi-periodic bursting. The bursting time scale is approximately universal, ∼20 S⁻¹, and the availability of two different bursting systems allows the growth of the bursts to be related with some confidence to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the proper computational parameters, is a very promising system to study shear turbulence in general.
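The box-geometry constraints quoted above can be collected into a small helper for screening candidate domains; the thresholds are those stated in the abstract, while the function itself is merely an illustrative convenience.

```python
# Screening helper for SS-HST box dimensions; thresholds as quoted in the abstract.

def shst_box_ok(Lx, Ly, Lz):
    """Return (ok, reasons) for a candidate SS-HST box (all lengths in the same units)."""
    reasons = []
    if Lx < 2 * Lz:
        reasons.append("Lx should satisfy Lx >~ 2 Lz")
    if Ly < Lz:
        reasons.append("Ly should satisfy Ly >~ Lz")
    if Lx > 2 * Ly:
        reasons.append("Lx >~ 2 Ly couples with the shear-periodic passing period (unphysical bursts)")
    return len(reasons) == 0, reasons

print(shst_box_ok(3.0, 2.0, 1.0))   # -> (True, [])
print(shst_box_ok(5.0, 2.0, 1.0))   # flags the long-box coupling
```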
Abstract:
The need for efficient, sustainable, and planned utilization of resources is ever more critical. In the U.S. alone, buildings consume 34.8 quadrillion (10¹⁵) BTU of energy annually at a cost of $1.4 trillion. Of this energy, 58% is utilized for heating and air conditioning. Several building energy analysis tools have been developed to assess energy demands and lifecycle energy costs in buildings. Such analyses are also essential for an efficient HVAC design that overcomes the pitfalls of an under/over-designed system. DOE-2 is among the most widely known full building energy analysis models. It also constitutes the simulation engine of other prominent software such as eQUEST, EnergyPro, and PowerDOE. Therefore, it is essential that DOE-2 energy simulations be characterized by high accuracy. Infiltration is an uncontrolled process through which outside air leaks into a building. Studies have estimated infiltration to account for up to 50% of a building’s energy demand. This, considered alongside the annual cost of building energy consumption, reveals the costs of air infiltration. It also stresses the need for prominent building energy simulation engines to accurately account for its impact. In this research, the relative accuracy of current air infiltration calculation methods is evaluated against an intricate multiphysics hygrothermal CFD building envelope analysis. The full-scale CFD analysis is based on a meticulous representation of cracking in building envelopes and on real-life conditions. The research found that even the most advanced current infiltration methods, including those in DOE-2, exhibit up to 96.13% relative error versus the CFD analysis. An Enhanced Model for Combined Heat and Air Infiltration Simulation was developed. The model resulted in a 91.6% improvement in relative accuracy over current models. It reduces the error versus the CFD analysis to less than 4.5% while requiring less than 1% of the time required for such a complex hygrothermal analysis. The algorithm used in our model was demonstrated to be easy to integrate into DOE-2 and other engines as a standalone method for evaluating infiltration heat loads. This will vastly increase the accuracy of such simulation engines while maintaining the speed and ease of use that make them very widely used in building design.
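For scale, the simple air-change-style sensible infiltration load that traditional engines evaluate can be computed in a few lines. The zone volume, air change rate and temperature difference below are assumed example values, and this is emphatically not the enhanced combined heat and air infiltration model developed in the research.

```python
# Simple air-change method for the sensible infiltration heat load (baseline
# illustration only; the enhanced CFD-informed model goes well beyond this).

rho_air = 1.2       # kg/m^3
cp_air = 1006.0     # J/(kg K)

def infiltration_sensible_load(volume_m3, ach, dT_K):
    """Sensible load [W] for a zone of given volume, air changes per hour, and indoor-outdoor dT."""
    vdot = ach * volume_m3 / 3600.0          # infiltration airflow [m^3/s]
    return rho_air * cp_air * vdot * dT_K

# Example: a 500 m^3 zone, 0.5 ACH, 20 K temperature difference.
print(f"{infiltration_sensible_load(500.0, 0.5, 20.0):.0f} W")
```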
Abstract:
A subfilter-scale (SFS) stress model is developed for large-eddy simulations (LES) and is tested on various benchmark problems in both wall-resolved and wall-modelled LES. The basic ingredients of the proposed model are the model length-scale and the model parameter. The model length-scale is defined as a fraction of the integral scale of the flow, decoupled from the grid. The portion of the resolved scales (the LES resolution) appears as a user-defined model parameter, with the advantage that the user decides the LES resolution. The model parameter is determined based on a measure of LES resolution, the SFS activity. The user chooses a value for the SFS activity (based on the affordable computational budget and the expected accuracy), and the model parameter is calculated dynamically. Depending on how the SFS activity is enforced, two SFS models are proposed. In one approach the user assigns the global (volume-averaged) contribution of the SFS to the transport (global model), while in the second (local model) the SFS activity is prescribed locally (locally averaged). The models are tested on isotropic turbulence, channel flow, a backward-facing step and a separating boundary layer. In wall-resolved LES, both the global and local models perform quite accurately. Due to their near-wall behaviour, they result in accurate predictions of the flow on coarse grids. The backward-facing step also highlights the advantage of decoupling the model length-scale from the mesh. Despite the sharply refined grid near the step, the proposed SFS models yield a smooth yet physically consistent filter-width distribution, which minimizes errors when grid discontinuity is present. Finally, the model's application is extended to wall-modelled LES and is tested on channel flow and a separating boundary layer. Given the coarse resolution used in wall-modelled LES, most of the eddies near the wall become SFS, and the SFS activity must be locally increased. The results are in very good agreement with the data for the channel. Errors in the prediction of separation and reattachment are observed in the separated flow, which are somewhat improved with some modifications to the wall-layer model.
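The "global" variant of this idea can be caricatured as a feedback loop on the model coefficient: measure the volume-averaged SFS activity and nudge the coefficient toward the user's target. The update rule, variable names and numbers below are illustrative assumptions, not the thesis's actual scheme.

```python
import numpy as np

# Illustrative sketch: enforce a target SFS activity
#   s = <eps_sfs> / (<eps_sfs> + <eps_resolved>)
# by proportionally adjusting a model coefficient (not the thesis's algorithm).

def update_model_coefficient(C, eps_sfs, eps_resolved, s_target, relax=0.5):
    """eps_sfs, eps_resolved: arrays of SFS and resolved dissipation on the grid."""
    s = eps_sfs.mean() / (eps_sfs.mean() + eps_resolved.mean())
    # If the model is too dissipative (s > s_target) shrink C, otherwise grow it.
    return C * (1.0 + relax * (s_target - s) / max(s, 1e-12))

C = 0.1
eps_sfs = np.full(1000, 2.0e-3)
eps_resolved = np.full(1000, 6.0e-3)
C = update_model_coefficient(C, eps_sfs, eps_resolved, s_target=0.3)
print(round(C, 4))   # s = 0.25 here, so C grows slightly toward the 0.3 target
```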
Abstract:
The application of 3D grain-based modelling techniques is investigated in both small- and large-scale 3DEC models, in order to simulate brittle fracture processes in low-porosity crystalline rock. Mesh dependency in 3D grain-based models (GBMs) is examined through a number of cases comparing Voronoi and tetrahedral grain assemblages. Various methods are used in the generation of the tessellations, each with a number of issues and advantages. A number of comparative UCS test simulations capture the distinct failure mechanisms, strength profiles, and progressive damage development using various Voronoi and tetrahedral GBMs. Relative calibration requirements are outlined to generate similar macro-strength and damage profiles for all the models. The results confirmed a number of inherent model behaviors that arise due to mesh dependency. In Voronoi models, inherent tensile failure mechanisms are produced by internal wedging and rotation of the Voronoi grains. This results in a combined dependence on frictional and cohesive strength. In tetrahedral models, the increased kinematic freedom of grains and an abundance of straight, connected failure pathways cause a preference for shear failure. This results in an inability to develop significant normal stresses, causing a dependence on cohesional strength. In general, Voronoi models require high relative contact tensile strength values, with lower contact stiffness and contact cohesional strength, compared to tetrahedral tessellations. Upscaling of 3D GBMs is investigated for both Voronoi and tetrahedral tessellations using a case study from AECL's Mine-by Experiment at the Underground Research Laboratory. An upscaled tetrahedral model was able to reasonably simulate damage development in the roof, forming a notch geometry, by adjusting the cohesive strength. An upscaled Voronoi model underestimated the damage development in the roof and floor, and overestimated the damage in the side-walls. This was attributed to limitations of the discretization resolution.
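As an illustration of the first step of building a Voronoi grain assemblage (the thesis itself uses 3DEC's tessellation tools), the sketch below seeds random points in a unit cube and identifies the interior Voronoi cells; the seed count and domain are assumed.

```python
import numpy as np
from scipy.spatial import Voronoi

# Illustrative first step of a Voronoi grain-based model: seed points define
# grains, whose shared faces become potential fracture (contact) surfaces.
# Boundary cells extend to infinity and would need clipping to the model domain.

rng = np.random.default_rng(42)
seeds = rng.random((200, 3))               # 200 grain seed points in a unit cube
vor = Voronoi(seeds)

bounded = [r for r in vor.regions if r and -1 not in r]   # fully bounded (interior) cells
print(f"{len(bounded)} interior grains out of {len(seeds)} seeds")
```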