63 results for Natural boundary conditions
Abstract:
Verification of the dynamical Casimir effect (DCE) in optical systems is still elusive due to the very demanding requirements for its experimental implementation. This typically requires very fast changes in the boundary conditions of the problem. We show that an ensemble of two-level atoms collectively coupled to the electromagnetic field of a cavity, driven at low frequencies and close to a quantum phase transition, stimulates the production of photons from the vacuum. This paves the way for an effective simulation of the DCE through a mechanism that has recently found experimental demonstration. The spectral properties of the emitted radiation reflect the critical nature of the system and allow us to link the detection of DCE to the Kibble-Zurek mechanism for the production of defects when crossing a continuous phase transition.
Abstract:
Since TBMs are nowadays used for long Trans-Alpine tunnels, understanding the rock breaking and chipping caused by TBM disk cutters is of great interest for deep tunnelling operations. This paper presents the results of laboratory tests that simulate the action of a disk cutter at the rock tunnel face by means of an indentation tool acting on a rock specimen of suitable size, together with the related three-dimensional and two-dimensional numerical models. The numerical models reproduce the different test conditions (applied load, boundary conditions), allowing analysis of the stress distributions along possible breaking planes. The influence of a confinement-free area on one side of the specimen, which simulates the formation of a groove near the tool, is pointed out. The numerical results show satisfactory agreement with the experimental observations.
Abstract:
The ability to predict the mechanical behavior of polymer composites is crucial for their design and manufacture. Extensive studies based on both macro- and micromechanical analyses are used to develop new insights into the behavior of composites. In this respect, finite element modeling has proved to be a particularly powerful tool. In this article, we present a Galerkin scheme in conjunction with the penalty method for elasticity analyses of different types of polymer composites. In this scheme, the application of Green's theorem to the model equation results in the appearance of interfacial flux terms along the boundary between the filler and polymer matrix. It is shown that for some types of composites these terms significantly affect the stress transfer between polymer and fillers. Thus, inclusion of these terms in the working equations of the scheme preserves the accuracy of the model predictions. The model is used to predict the most important bulk property of different types of composites. Composites filled with rigid or soft particles, and composites reinforced with short or continuous fibers are investigated. For each case, the results are compared with the available experimental results and data obtained from other models reported in the literature. Effects of assumptions made in the development of the model and the selection of the prescribed boundary conditions are discussed.
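As a minimal illustration of the penalty idea mentioned above (the bar problem, material values and penalty weight are assumptions for illustration, not taken from the article): a Dirichlet condition in a 1D Galerkin finite element model can be enforced by adding a large penalty to the corresponding diagonal stiffness entry.

```python
# Sketch only: penalty enforcement of u(0)=0 in a 1D elastic bar FEM.
# All values (E, A_c, load, penalty weight) are assumed, not from the paper.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

E, A_c, L, n_el = 1.0, 1.0, 1.0, 2      # modulus, area, length, elements (assumed)
h = L / n_el
n = n_el + 1
K = [[0.0] * n for _ in range(n)]
for e in range(n_el):                   # assemble two-node bar elements
    ke = E * A_c / h
    K[e][e] += ke
    K[e][e + 1] -= ke
    K[e + 1][e] -= ke
    K[e + 1][e + 1] += ke
F = [0.0] * n
F[-1] = 1.0                             # unit end load
K[0][0] += 1e8                          # penalty term enforces u(0) ~ 0
u = solve(K, F)
print(u)                                # tip displacement u[2] ~ F*L/(E*A_c) = 1.0
```

The penalty makes the constrained equation dominate without changing the matrix size, at the cost of mild ill-conditioning; this is the usual trade-off versus direct elimination of the constrained degree of freedom.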
Abstract:
This paper presents the background rationale and key findings for a model-based study of supercritical waste heat recovery organic Rankine cycles. The paper’s objective is to cover the necessary groundwork to facilitate the future operation of a thermodynamic organic Rankine cycle model under realistic thermodynamic boundary conditions for performance optimisation of organic Rankine cycles. This involves determining the type of power cycle for organic Rankine cycles, the circuit configuration and suitable boundary conditions. The study focuses on multiple heat sources from vehicles but the findings are generally applicable, with careful consideration, to any waste heat recovery system. This paper introduces waste heat recovery and discusses the general merits of organic fluids versus water and supercritical operation versus subcritical operation from a theoretical perspective and, where possible, from a practical perspective. The benefits of regeneration are investigated from an efficiency perspective for selected subcritical and supercritical conditions. A simulation model is described with an introduction to some general Rankine cycle boundary conditions. The paper describes the analysis of real hybrid vehicle data from several driving cycles and its manipulation to represent the thermal inertia for model heat input boundary conditions. Basic theory suggests that selecting the operating pressures and temperatures to maximise the Rankine cycle performance is relatively straightforward. However, it was found that this may not be the case for an organic Rankine cycle operating in a vehicle. When operating in a driving cycle, the available heat and its quality can vary with the power output and between heat sources. For example, the available coolant heat does not vary much with the load, whereas the quantity and quality of the exhaust heat varies considerably. 
The key objective for operation in the vehicle is optimum utilisation of the available heat by delivering the maximum work output. The fluid selection process and the presentation and analysis of the final results of the simulation work on organic Rankine cycles are the subjects of two future publications.
Abstract:
The fate and cycling of two selected legacy persistent organic pollutants (POPs), PCB 153 and gamma-HCH, in the North Sea in the 21st century have been modelled with combined hydrodynamic and fate-and-transport ocean models (HAMSOM and FANTOM, respectively). To investigate the impact of climate variability on POPs in the North Sea in the 21st century, future-scenario model runs are performed for three 10-year periods up to the year 2100, using plausible levels of both in situ concentrations and atmospheric, river and open-boundary inputs. These time-slice runs under a moderate scenario (A1B) provide a sufficient basis for further analysis. For the HAMSOM and atmospheric forcing, results of the IPCC A1B (SRES) 21st century scenario are used, where surface forcing is provided by the REMO downscaling of the ECHAM5 global atmospheric model and open boundary conditions are provided by the MPIOM global ocean model.
Dry gas deposition and volatilization of gamma-HCH increase in the future relative to the present by up to 20% (in the spring and summer months for deposition, and in summer for volatilization). In the water column, the total masses of gamma-HCH and PCB 153 remain fairly steady in all three runs. In sediment, gamma-HCH increases in the future runs relative to the present, while PCB 153 decreases exponentially in all three runs, and even faster in the future, owing to the increased number of storms, the increased duration of gale-wind conditions and the increased water and air temperatures, all of which result from climate change. Annual net sinks exceed sources at the end of all periods.
Overall, the model results indicate that the climate change scenarios considered here generally have a negligible influence on the simulated fate and transport of the two POPs in the North Sea, although the increased number and magnitude of storms in the 21st century will result in POP resuspension and ensuing revolatilization events. Trends in emissions from primary and secondary sources will remain the key driver of levels of these contaminants over time.
Abstract:
For open boundary conditions (OBCs) in regional models, a nudging term added to radiative and/or advective conditions during wave or flow propagation out of the model domain of interest is widely used to prevent the predicted boundary values from drifting far from the external data, especially in long-term integrations. However, the nudging time scales are essentially unknown, leading to many empirical choices. In this paper, a method for objectively estimating nudging time scales during outward propagation is proposed, based on the internal model dynamics near the boundary. We tested this method and several other commonly used OBCs for both an idealized model domain and a realistic configuration, and the model results demonstrate that the proposed method improves the model solutions. Many similarities are found between the nudging and mixing time scales in magnitude and in spatial and temporal variation, since in this study the nudging mainly replaces the effect of the mixing terms. However, the mixing time scale is not an intrinsic property of the nudging term: in other studies the nudging term might replace terms other than the mixing terms and should then reflect other characteristic time scales.
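The nudged radiative condition described above can be sketched in one dimension as follows; the upwind discretisation and the fixed `tau` are illustrative assumptions, since the paper's contribution is precisely an objective estimate of the nudging time scale, which is not reproduced here.

```python
# Sketch (assumed discretisation): one step of a radiative open-boundary
# update with nudging, phi_t + c*phi_x = -(phi - phi_ext)/tau at the boundary.

def obc_step(phi_b, phi_in, phi_ext, c, dx, dt, tau):
    """Update boundary value phi_b from interior neighbour phi_in.

    Upwind form of the outgoing radiation condition, plus relaxation
    toward the external value phi_ext on the nudging time scale tau.
    """
    radiative = -c * dt / dx * (phi_b - phi_in)
    nudging = -dt / tau * (phi_b - phi_ext)
    return phi_b + radiative + nudging

# With an infinite nudging time scale the condition is purely radiative:
print(obc_step(1.0, 1.0, 0.0, 1.0, 1.0, 0.1, float("inf")))  # 1.0
# A short tau pulls the boundary value toward the external data:
print(obc_step(1.0, 1.0, 0.0, 1.0, 1.0, 0.1, 0.5))           # 0.8
```

The ratio dt/tau controls how strongly the boundary is tied to the external data, which is why a poorly chosen tau either over-constrains the boundary or lets it drift, as the abstract notes.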
Abstract:
Integrating analysis and design models is a complex task due to differences between the models and between the architectures of the toolsets used to create them. This complexity increases when many different tools are used for specific tasks within an analysis process. In this work, various design and analysis models are linked throughout the design lifecycle, allowing them to be moved between packages in a way not currently available. Three technologies, named Cellular Modeling, Virtual Topology and Equivalencing, are combined to demonstrate how different finite element meshes generated on abstract analysis geometries can be linked to their original geometry. Cellular models allow interfaces between adjacent cells to be extracted and exploited to transfer analysis attributes, such as mesh associativity or boundary conditions, between equivalent model representations. Virtual Topology descriptions used for geometry clean-up operations are explicitly stored so that they can be reused by downstream applications. Establishing equivalence relationships between models enables analysts to use multiple packages for specialist tasks without worrying about compatibility issues or substantial rework.
Abstract:
Defining Simulation Intent involves capturing high-level modelling and idealisation decisions in order to create an efficient and fit-for-purpose analysis. These decisions are recorded as attributes of the decomposed design space. An approach to defining Simulation Intent is described that utilises three known technologies: Cellular Modelling, the subdivision of space into volumes of simulation significance (structures, gas paths, internal and external airflows, etc.); Equivalencing, which maintains a consistent and coherent description of the equivalent representations of the spatial cells in different analysis models; and Virtual Topology, which offers tools for partitioning and de-partitioning the model without disturbing the manufacturing-oriented design geometry. The end result is a convenient framework to which high-level analysis attributes can be applied, and from which detailed analysis models can be generated with a high degree of controllability, repeatability and automation. There are multiple novel aspects to the approach, including its reusability, its robustness to changes in model topology and the inherent links created between analysis models at different levels of fidelity and physics. By utilising Simulation Intent, CAD modelling for simulation can be fully exploited and simulation workflows can be more readily automated, reducing many repetitive manual tasks (e.g. the definition of appropriate coupling between elements of different types and the application of boundary conditions). The approach has been implemented and tested with practical examples, and significant benefits are demonstrated.
Abstract:
We describe some unsolved problems of current interest; these involve quantum critical points in ferroelectrics and problems which are not amenable to the usual density functional theory, nor to classical Landau free-energy approaches (they are kinetically limited), nor even to the Landau–Kittel relationship for domain size (they do not satisfy the assumption of infinite lateral diameter), because they are dominated by finite aperiodic boundary conditions.
Abstract:
Accurate modelling of the internal climate of buildings is essential if Building Energy Management Systems (BEMS) are to efficiently maintain adequate thermal comfort. Computational fluid dynamics (CFD) models are usually used to predict internal climate. However, although CFD models provide the necessary level of accuracy, they are highly computationally expensive and cannot practically be integrated into BEMS. This paper presents and validates a CFD-ROM method for real-time simulation of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The ROMs are shown to be adequately accurate, with a total error below 5%, and to retain a satisfactory representation of the phenomena modelled. Each ROM has a time to solution under 20 seconds, which opens up the potential for integration with BEMS, giving real-time physics-based building energy modelling. A parameter study was conducted to investigate the applicability of an extracted ROM to initial boundary conditions different from those from which it was extracted. The results show that the ROMs retained satisfactory total errors when the initial conditions in the room were varied by ±5°C. This allows a finite number of ROMs to be produced with the ability to rapidly model many possible scenarios.
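One generic recipe behind such reduced order models is Galerkin projection onto a low-dimensional basis. The sketch below is an illustration of that general idea under assumed values, not the CFD-ROM extraction used in the paper: it projects a small linear heat-conduction operator onto a single mode.

```python
# Sketch (assumed toy problem): Galerkin projection of a 3x3 linear thermal
# model x' = A*x onto one basis vector, giving a 1-DOF reduced model.

def project(A, v):
    """Return the 1x1 Galerkin-projected operator (v.A.v)/(v.v)."""
    Av = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
    num = sum(v[i] * Av[i] for i in range(len(v)))
    den = sum(vi * vi for vi in v)
    return num / den

A = [[-2.0, 1.0, 0.0],
     [1.0, -2.0, 1.0],
     [0.0, 1.0, -2.0]]       # 1D heat-conduction stencil (assumed)
v = [1.0, 1.0, 1.0]          # single "mean temperature" mode (assumed basis)
a_r = project(A, v)          # reduced model: T' = a_r * T
print(a_r)                   # -2/3: decay rate of the mean mode
```

The full model is replaced by one scalar ODE that can be stepped in microseconds, which is the essence of why ROMs can reach the sub-20-second solution times reported above.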
Abstract:
This paper outlines the importance of robust interface management for facilitating finite element analysis workflows. Topological equivalences between analysis model representations are identified and maintained in an editable and accessible manner. The model and its interfaces are automatically represented using an analysis-specific cellular decomposition of the design space. Rework of boundary conditions following changes to the design geometry or the analysis idealization can be minimized by tracking interface dependencies. Utilizing this information with the Simulation Intent specified by an analyst, automated decisions can be made to process the interface information required to rebuild analysis models. Through this work automated boundary condition application is realized within multi-component, multi-resolution and multi-fidelity analysis workflows.
Abstract:
Low-velocity impact damage can drastically reduce the residual strength of a composite structure, even when the damage is barely visible. The ability to computationally predict the extent of damage and the compression-after-impact (CAI) strength of a composite structure can potentially allow a larger design space to be explored without significant time and cost penalties. A high-fidelity three-dimensional composite damage model, predicting both low-velocity impact damage and the CAI strength of composite laminates, has been developed and implemented as a user material subroutine in the commercial finite element package ABAQUS/Explicit. The intralaminar component of the damage model accounts for physically based tensile and compressive failure mechanisms of the fibres and matrix under a three-dimensional stress state. Cohesive behaviour was employed to model interlaminar failure between plies, with a bi-linear traction–separation law capturing damage onset and subsequent damage evolution. The virtual tests, set up in ABAQUS/Explicit, were executed in three steps: the first to capture the impact damage, the second to stabilize the specimen by imposing the new boundary conditions required for compression testing, and the third to predict the CAI strength. The observed intralaminar damage features, the delamination damage area and the residual strength are discussed. The predicted impact damage and CAI strength correlated well with experimental testing, without the need for the model calibration that is often required with other damage models.
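The bi-linear traction–separation law mentioned above can be sketched as follows; the stiffness and displacement thresholds are illustrative assumptions, not the paper's calibration.

```python
# Sketch (assumed parameters): bi-linear cohesive law with damage onset at
# delta0 and complete interface failure at deltaf.

def traction(delta, K0=1.0e5, delta0=1.0e-3, deltaf=1.0e-2):
    """Return (traction, damage) for an opening displacement delta."""
    if delta <= delta0:
        return K0 * delta, 0.0           # undamaged linear-elastic branch
    if delta >= deltaf:
        return 0.0, 1.0                  # fully failed: no load transfer
    # linear softening: damage grows from 0 at delta0 to 1 at deltaf
    d = deltaf * (delta - delta0) / (delta * (deltaf - delta0))
    return (1.0 - d) * K0 * delta, d

print(traction(5.0e-4))     # on the elastic branch
print(traction(5.5e-3))     # halfway down the softening branch
print(traction(2.0e-2))     # (0.0, 1.0): no load-carrying capacity left
```

The area under the traction–separation curve is the fracture energy of the interface, which is why the two thresholds, rather than the peak traction alone, govern delamination growth.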
Abstract:
Lap joints are widely used in the manufacture of stiffened panels and influence local panel sub-component stability, defining buckling unit dimensions and boundary conditions. Using the Finite Element method it is possible to model joints in great detail and predict panel buckling behaviour with accuracy. However, when modelling large panel structures such detailed analysis becomes computationally expensive. Moreover, the impact of local behaviour on global panel performance may reduce as the scale of the modelled structure increases. Thus this study presents coupled computational and experimental analysis, aimed at developing relationships between modelling fidelity and the size of the modelled structure, when the global static load to cause initial buckling is the required analysis output. Small, medium and large specimens representing welded lap-joined fuselage panel structure are examined. Two element types, shell and solid-shell, are employed to model each specimen, highlighting the impact of idealisation on the prediction of welded stiffened panel initial skin buckling.
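For context on why the boundary conditions set by a joint matter for initial skin buckling, the classical thin-plate result sigma_cr = k * pi^2 * E / (12 * (1 - nu^2)) * (t/b)^2 can be evaluated for different edge restraints. The material and geometry values below are assumptions for illustration, not the paper's specimens.

```python
# Sketch (assumed values): critical elastic buckling stress of a flat plate,
# showing the sensitivity to the boundary-condition coefficient k.
import math

def sigma_cr(E, nu, t, b, k):
    """Critical buckling stress of a thin plate, thickness t, width b."""
    return k * math.pi ** 2 * E / (12.0 * (1.0 - nu ** 2)) * (t / b) ** 2

E, nu, t, b = 70e9, 0.33, 2e-3, 150e-3   # aluminium skin bay (assumed)
print(sigma_cr(E, nu, t, b, 4.0))        # simply supported edges, k = 4
print(sigma_cr(E, nu, t, b, 6.97))       # clamped edges give a higher k
```

Because the joint shifts the effective k between these bounds, an idealisation that misrepresents the joint restraint directly scales the predicted initial buckling load.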
Abstract:
This paper presents the numerical simulation of the ultimate behaviour of 85 one-way and two-way spanning, laterally restrained concrete slabs of variable thickness, span, reinforcement ratio, strength and boundary conditions, reported in the literature by different authors. The developed numerical model is described and all of its assumptions are set out. ABAQUS, a finite element analysis suite, was employed, using its non-linear implicit static general analysis method. Other analysis methods, such as explicit dynamic analysis and the Riks method, are also discussed in general terms of their application. The aim is to demonstrate the ability and efficacy of FEA to simulate the ultimate load behaviour of slabs with different material properties and boundary conditions. The authors' intention is to present a numerical model that provides consistent predictions of the ultimate behaviour of laterally restrained slabs, which could be used as an alternative to expensive real-life testing as well as for the design and assessment of new and existing structures. The enhanced strength of laterally restrained slabs compared with conventional design-method predictions is attributed to compressive membrane action (CMA), an inherent phenomenon of laterally restrained concrete beams and slabs. The numerical predictions obtained from the developed model correlated well with the experimental results and with those obtained from the CMA method developed at Queen's University Belfast, UK.
Abstract:
The Arc-Length Method is a solution procedure that enables a generic non-linear problem to pass limit points. Some examples are provided of solutions to mode-jumping problems using a commercial finite element package, and further investigations are carried out on a simple structure for which the numerical solution can be compared with an analytical one. It is shown that the Arc-Length Method is not reliable when bifurcations are present in the primary equilibrium path; the presence of very sharp snap-backs or special boundary conditions may also cause convergence difficulty at limit points. An improvement to the predictor used in the incremental procedure is suggested, together with a reliable criterion for selecting either solution of the quadratic arc-length constraint. The gap that is sometimes observed between the experimental load level of mode-jumping and its arc-length prediction is explained through an example.
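The root-selection issue for the quadratic arc-length constraint can be sketched as follows. The criterion shown, continuing along the previous displacement increment, is one common choice and is an assumption here, not necessarily the criterion proposed in the paper.

```python
# Sketch (assumed 1-DOF setting): picking between the two roots of the
# quadratic arc-length constraint a*x^2 + b*x + c = 0.
import math

def pick_root(a, b, c, du_prev, du_t, du_bar):
    """Return the root whose predicted increment du_bar + x*du_t
    points along the previous increment du_prev (no path doubling-back)."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("no real root: reduce the arc-length radius")
    r1 = (-b + math.sqrt(disc)) / (2.0 * a)
    r2 = (-b - math.sqrt(disc)) / (2.0 * a)
    def forward(x):
        return (du_bar + x * du_t) * du_prev
    return r1 if forward(r1) >= forward(r2) else r2

print(pick_root(1.0, 0.0, -1.0, 1.0, 1.0, 0.0))    # 1.0: keep advancing
print(pick_root(1.0, 0.0, -1.0, -1.0, 1.0, 0.0))   # -1.0: path has reversed
```

An unreliable criterion at this step is exactly what lets the iteration double back along the equilibrium path near sharp snap-backs, which is the failure mode the abstract describes.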