952 results for Full-scale Physical Modelling
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building on the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies that overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error procedures are needed. The method requires a mass matrix, or at least an estimate of the floor masses; a stiffness matrix may be used but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. The interpolation step can make use of a reduced-order stiffness matrix, a backward-difference matrix, or a central-difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
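A minimal numerical sketch of this kind of pipeline is given below; it is not the dissertation's implementation. Hypothetical acceleration records at three instrumented floors are stacked into Hankel matrices, a singular value decomposition supplies the dominant temporal components, a natural cubic spline stands in for the curvature-energy-minimising interpolation over the un-instrumented floors, and assumed floor masses convert the reconstructed accelerations into inertial forces and isolator shear.

```python
import numpy as np
from scipy.interpolate import CubicSpline


def hankel(signal, depth):
    """Row-wise Hankel matrix of one acceleration record (delayed copies)."""
    n_cols = signal.size - depth + 1
    return np.stack([signal[i:i + n_cols] for i in range(depth)])


# Hypothetical inputs, for illustration only.
dt = 0.01                                    # sampling interval [s]
t = np.arange(0.0, 20.0, dt)
n_floors = 6
instrumented = np.array([0, 2, 5])           # floors carrying accelerometers
acc_meas = np.array([np.sin(2 * np.pi * (1 + k) * t) for k in range(len(instrumented))])
masses = np.full(n_floors, 4.0e5)            # estimated floor masses [kg]

# 1. Stack Hankel matrices of the measured records row-wise and take the SVD.
H = np.vstack([hankel(a, depth=40) for a in acc_meas])
_, _, Vt = np.linalg.svd(H, full_matrices=False)
comps = Vt[:4]                               # dominant temporal principal components

# 2. Least-squares amplitude of each component at each instrumented floor.
n_cols = comps.shape[1]
amps_meas = acc_meas[:, :n_cols] @ np.linalg.pinv(comps)      # (n_meas, n_comp)

# 3. Interpolate the amplitudes over all floors; a natural cubic spline minimises
#    the integrated-curvature energy among interpolants of the known values.
amps_all = CubicSpline(instrumented, amps_meas, bc_type='natural')(np.arange(n_floors))

# 4. Reconstruct floor accelerations and form inertial forces with the mass matrix.
acc_all = amps_all @ comps                   # (n_floors, n_time)
inertial_forces = masses[:, None] * acc_all  # f_j(t) = m_j * a_j(t)
base_shear = -inertial_forces.sum(axis=0)    # force transmitted through the isolators
```

The natural cubic spline is used here only because it minimises the integrated second-derivative (curvature) energy among interpolants of the known values; the reduced-order stiffness or difference-matrix formulation described above would take its place in the actual method.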
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of a complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear-wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing the peak floor responses and the force-displacement relations within the isolation system obtained from the OpenSees simulations with the recorded measurements. General explanations and implications of the effects of soil-structure interaction are described, supported by storey drifts, floor acceleration and displacement responses, and force-displacement relations.
Abstract:
Modelling the susceptibility of permafrost slopes to disturbance can identify areas at risk of future disturbance and lead to safer infrastructure and resource development in the Arctic. In this study, we use terrain attributes derived from a digital elevation model, an inventory of permafrost slope disturbances known as active-layer detachments (ALDs), and generalised additive modelling to produce a map of permafrost slope disturbance susceptibility for an area on northern Melville Island, in the Canadian High Arctic. By examining terrain variables and their relative importance, we identified factors important for initiating slope disturbance. The model was calibrated and validated using 70 and 30 per cent, respectively, of a data-set of 760 mapped ALDs, including disturbed and randomised undisturbed samples. The generalised additive model calibrated and validated very well, with areas under the receiver operating characteristic curve of 0.89 and 0.81, respectively, demonstrating its effectiveness at predicting disturbed and undisturbed samples. ALDs were most likely to occur below the marine limit, on slope angles between 3 and 10°, and in areas with low potential incoming solar radiation (north-facing slopes).
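As an illustration of this kind of workflow (not the authors' code), the sketch below fits a binomial generalised additive model to hypothetical terrain predictors and scores it with the area under the ROC curve. The 70/30 split mirrors the study design, while the pygam library, the predictor columns and the synthetic labels are assumptions made purely for the example.

```python
import numpy as np
from pygam import LogisticGAM, s
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical predictor table: slope angle [deg], elevation above marine limit [m],
# potential incoming solar radiation [W m^-2]; label 1 = disturbed (ALD), 0 = undisturbed.
n = 760
X = np.column_stack([rng.uniform(0, 25, n),
                     rng.uniform(0, 120, n),
                     rng.uniform(80, 250, n)])
y = ((X[:, 0] > 3) & (X[:, 0] < 10) & (X[:, 2] < 150)).astype(int)
y = np.where(rng.random(n) < 0.1, 1 - y, y)        # label noise so the fit is not degenerate

# 70 % calibration / 30 % validation split, as in the study design.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=1, stratify=y)

# One smooth term per terrain attribute.
gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X_cal, y_cal)

print("calibration AUC:", roc_auc_score(y_cal, gam.predict_proba(X_cal)))
print("validation  AUC:", roc_auc_score(y_val, gam.predict_proba(X_val)))
```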
Abstract:
A major weakness of the loading models proposed in recent years for pedestrians walking on flexible structures is the range of uncorroborated assumptions made in their development. This applies to the spatio-temporal characteristics of pedestrian loading and to the nature of multi-object interactions. To alleviate this problem, a framework for determining localised pedestrian forces on full-scale structures is presented using wireless attitude and heading reference systems (AHRS). An AHRS comprises a triad of tri-axial accelerometers, gyroscopes and magnetometers managed by a dedicated data-processing unit, allowing motion in three-dimensional space to be reconstructed. A pedestrian loading model based on a single-point inertial measurement from an AHRS is derived and shown to perform well against benchmark data collected on an instrumented treadmill. Unlike other models, the current model does not take any predefined form, nor does it require any extrapolation as to the timing and amplitude of pedestrian loading. In order to assess correctly the influence of the moving pedestrian on the behaviour of a structure, an algorithm for tracking the point of application of the pedestrian force is developed based on data from a single AHRS attached to a foot. A set of controlled walking tests with a single pedestrian is conducted on a real footbridge for validation purposes. A remarkably good match between the measured and simulated bridge response is found, confirming the applicability of the proposed framework.
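The basic idea of a single-point inertial loading model can be sketched with Newton's second law; the snippet below is a simplified illustration rather than the paper's derivation. The AHRS output, the body mass, the pacing frequency and the constant walking speed standing in for the foot-tracking algorithm are all assumed values.

```python
import numpy as np

# Hypothetical AHRS output: orientation-corrected vertical acceleration of the
# body centre of mass [m/s^2], already resolved in the global frame.
dt = 1.0 / 128.0                        # assumed AHRS sampling interval [s]
t = np.arange(0.0, 10.0, dt)
a_vertical = 0.3 * 9.81 * np.sin(2 * np.pi * 1.9 * t)   # ~1.9 Hz pacing, illustrative

body_mass = 75.0                        # pedestrian mass [kg] (assumed)

# Single-point inertial loading model: the force exerted on the deck equals the
# weight plus the inertia of the accelerating body mass (Newton's second law).
grf_vertical = body_mass * (9.81 + a_vertical)

# Point of application: with a foot-mounted AHRS, the forward position could be
# estimated by strapdown integration with zero-velocity updates during stance;
# here a constant pacing speed stands in for that tracking step.
walking_speed = 1.4                     # [m/s] (assumed)
x_position = walking_speed * t          # location of the force along the deck

# The moving load (grf_vertical applied at x_position) can then drive a modal
# model of the footbridge to simulate its response for comparison with measurements.
```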
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, so a static partition of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload of a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of loads to processes must change dynamically at run-time in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics computational modelling code.
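The trigger-and-repartition idea behind dynamic load balancing can be illustrated with a toy example; this is not PHYSICA's partitioner, and the greedy assignment below merely stands in for a proper mesh-partitioning step. The cost data, process count and imbalance threshold are all assumed.

```python
import numpy as np

def imbalance(loads):
    """Load-imbalance factor: maximum per-process work divided by the mean."""
    loads = np.asarray(loads, dtype=float)
    return loads.max() / loads.mean()

def rebalance(cell_costs, n_procs):
    """Greedy repartition: assign the costliest cells to the least-loaded process."""
    order = np.argsort(cell_costs)[::-1]
    proc_load = np.zeros(n_procs)
    owner = np.empty(cell_costs.size, dtype=int)
    for c in order:
        p = proc_load.argmin()
        owner[c] = p
        proc_load[p] += cell_costs[c]
    return owner, proc_load

# Hypothetical per-cell costs measured during the previous time step.
rng = np.random.default_rng(3)
cell_costs = rng.gamma(shape=2.0, scale=1.0, size=10_000)
n_procs = 16

# Current (static) partition: contiguous blocks of cells.
static_loads = cell_costs.reshape(n_procs, -1).sum(axis=1)

THRESHOLD = 1.05          # tolerate 5 % imbalance before paying the migration cost
if imbalance(static_loads) > THRESHOLD:
    owner, new_loads = rebalance(cell_costs, n_procs)
    print(f"imbalance {imbalance(static_loads):.2f} -> {imbalance(new_loads):.2f}")
```

In a real unstructured-mesh code the repartitioning would also weigh the cost of migrating cells and preserving locality, which the greedy assignment ignores.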
Abstract:
A subfilter-scale (SFS) stress model is developed for large-eddy simulation (LES) and is tested on various benchmark problems in both wall-resolved and wall-modelled LES. The basic ingredients of the proposed model are the model length-scale and the model parameter. The model length-scale is defined as a fraction of the integral scale of the flow, decoupled from the grid. The portion of the scales that is resolved (the LES resolution) appears as a user-defined model parameter, with the advantage that the user decides the LES resolution. The model parameter is determined from a measure of LES resolution, the SFS activity: the user prescribes a value of the SFS activity (based on the affordable computational budget and the expected accuracy), and the model parameter is calculated dynamically. Depending on how the SFS activity is enforced, two SFS models are proposed: in one approach the user assigns the global (volume-averaged) contribution of the SFS to the transport (global model), while in the second (local model) the SFS activity is prescribed locally (locally averaged). The models are tested on isotropic turbulence, channel flow, a backward-facing step and a separating boundary layer. In wall-resolved LES, both the global and local models perform accurately; owing to their near-wall behaviour, they yield accurate predictions of the flow on coarse grids. The backward-facing step also highlights the advantage of decoupling the model length-scale from the mesh: despite the sharply refined grid near the step, the proposed SFS models yield a smooth yet physically consistent filter-width distribution, which minimises errors where grid discontinuities are present. Finally, the model is extended to wall-modelled LES and tested on channel flow and a separating boundary layer. Given the coarse resolution used in wall-modelled LES, most of the eddies near the wall become subfilter-scale, and the SFS activity must be increased locally. The results are in very good agreement with the reference data for the channel. Errors in the prediction of separation and reattachment are observed in the separated flow; these are somewhat improved by modifications to the wall-layer model.
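The feedback idea of prescribing a target SFS activity and adjusting the model parameter to meet it can be sketched as below; this is not the paper's closure. The kinetic-energy-based activity measure, the relaxation update and all numbers are assumptions chosen only to illustrate the mechanism of the global (volume-averaged) enforcement.

```python
import numpy as np

def sfs_activity(k_res, k_sfs):
    """SFS activity taken here as the fraction of turbulent kinetic energy held
    by the subfilter scales, s = <k_sfs> / (<k_res> + <k_sfs>) (one common measure)."""
    return np.mean(k_sfs) / (np.mean(k_res) + np.mean(k_sfs))

def adjust_parameter(C, s_now, s_target, relax=0.5):
    """Nudge the model parameter so the realised SFS activity approaches the
    user-prescribed target (global enforcement, illustrative relaxation update)."""
    return C * (1.0 + relax * (s_target - s_now) / max(s_target, 1e-12))

# Hypothetical volume-averaged energies from a running LES.
k_res, k_sfs = 0.9, 0.15      # resolved and modelled TKE [m^2/s^2]
C = 0.1                       # current model parameter
s_target = 0.2                # prescribed SFS activity, e.g. resolve ~80 % of the TKE

s_now = sfs_activity(k_res, k_sfs)
C_new = adjust_parameter(C, s_now, s_target)
print(f"SFS activity {s_now:.3f}, parameter {C:.3f} -> {C_new:.3f}")
```

In the local variant described above, the same adjustment would be applied with locally averaged energies instead of a single volume average.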
Abstract:
This paper provides information on the experimental set-up, data collection methods and results to date for the project 'Large-scale modelling of coarse-grained beaches', undertaken at the Large Wave Channel (GWK) of FZK in Hannover by an international group of researchers in spring 2002. The main objective of the experiments was to provide full-scale measurements of cross-shore processes on gravel and mixed beaches for the verification and further development of cross-shore numerical models of gravel and mixed-sediment beaches. Identical random and regular wave tests were undertaken for a gravel beach and a mixed sand/gravel beach set up in the flume. Measurements included profile development, water surface elevation along the flume, internal pressures in the swash zone, piezometric head levels within the beach, run-up, flow velocities in the surf zone and sediment size distributions. The purpose of the paper is to present to the scientific community the experimental procedure, a summary of the data collected and some initial results, as well as a brief outline of the on-going research being carried out with the data by different research groups. The experimental data are available to the entire scientific community following submission of a statement of objectives, specification of data requirements and an agreement to abide by the GWK and EU protocols. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
In an open channel, the transition from super- to sub-critical flow is a flow singularity (the hydraulic jump) characterised by a sharp rise in free-surface elevation, strong turbulence and air entrainment in the roller. A key feature of the hydraulic jump is the strong free-surface aeration and air-water flow turbulence. In the present study, similar experiments were conducted with identical inflow Froude numbers Fr1 using a geometric scaling ratio of 2:1. The results of the Froude-similar experiments showed drastic scale effects in the smaller hydraulic jumps in terms of void fraction, bubble count rate and bubble chord time distributions. Void fraction distributions implied comparatively greater detrainment at low Reynolds numbers, yielding lesser aeration of the jump roller. The dimensionless bubble count rates were significantly lower in the smaller channel, especially in the mixing layer. The bubble chord time distributions were quantitatively close in both channels and did not scale according to a Froude similitude. The hydraulic jump remains a fascinating two-phase flow motion that is still poorly understood.
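The origin of such scale effects follows from standard Froude scaling: keeping Fr1 identical while halving the geometric scale forces the Reynolds number down by a factor of about 2.8, so air entrainment processes governed by viscosity and surface tension are not reproduced. A short worked example of the scale ratios (standard similitude relations, not experiment-specific values) is given below.

```python
import math

# Froude similitude for a 2:1 geometric scale (length ratio L_r = 2):
# Fr = V / sqrt(g * d) is kept identical, so velocities and times scale with
# sqrt(L_r), unit discharges q = V * d with L_r**1.5, and the Reynolds number
# Re = V * d / nu with L_r**1.5 when the same fluid is used in both channels.
L_r = 2.0
velocity_ratio = math.sqrt(L_r)          # ~1.41
time_ratio = math.sqrt(L_r)              # ~1.41
unit_discharge_ratio = L_r ** 1.5        # ~2.83
reynolds_ratio = L_r ** 1.5              # ~2.83 -> the smaller jump has a much lower Re

print(velocity_ratio, unit_discharge_ratio, reynolds_ratio)
```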
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. The method is incompatible with computer modelling and simulation techniques, and it might not be applicable to the design of fine-grinding ball mills or of ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated against a wide range of full-scale circuit data, so their accuracy is questionable; some of these methods also require expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. The procedure uses data from two laboratory tests to determine the parameters of a ball mill model; a set of scale-up criteria then scales up these parameters. The scaled-up parameters are used to simulate the steady-state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
Abstract:
A new ball mill scale-up procedure is developed which uses laboratory data to predict the performance of full-scale ball mill circuits. The procedure involves two laboratory tests, which provide the data for determining the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The scaled-up parameters are used to simulate the steady-state performance of the full-scale mill circuit. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw. A worked example shows how the new ball mill scale-up procedure is executed. This worked example uses laboratory data to predict the performance of a full-scale re-grind mill circuit, consisting of a ball mill in closed circuit with hydrocyclones. The full-scale ball mill has a diameter (inside liners) of 1.85 m. The scale-up procedure shows that the full-scale circuit produces a product (hydrocyclone overflow) with an 80% passing size of 80 μm and has a recirculating load of 173%. The calculated power draw of the full-scale mill is 92 kW. (C) 2001 Elsevier Science Ltd. All rights reserved.
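The overall structure of such a procedure, lab-fitted parameters, scale-up criteria, then a circuit simulation, is sketched below. This is a structural sketch only: the parameter names, the square-root diameter dependence and the feed rate are placeholders, not the paper's criteria, and the closing mass balance simply shows what a 173% recirculating load implies for mill throughput.

```python
from dataclasses import dataclass

@dataclass
class MillParams:
    """Placeholder ball mill model parameters fitted from the two laboratory
    tests (the actual parameter set is defined by the paper's mill model)."""
    breakage_rate: float       # [1/h]
    discharge_rate: float      # [1/h]

def scale_up(lab: MillParams, d_lab_m: float, d_full_m: float) -> MillParams:
    """Apply scale-up criteria to the laboratory parameters. The square-root
    diameter dependence here is purely illustrative."""
    ratio = (d_full_m / d_lab_m) ** 0.5
    return MillParams(lab.breakage_rate * ratio, lab.discharge_rate * ratio)

# Laboratory parameters scaled up to the 1.85 m (inside liners) re-grind mill.
full_params = scale_up(MillParams(breakage_rate=0.8, discharge_rate=1.2),
                       d_lab_m=0.30, d_full_m=1.85)

# Steady-state mass balance around the closed mill/hydrocyclone loop: with a
# recirculating load R (here 173 %, as in the worked example), the mill must
# process (1 + R) times the fresh feed rate.
fresh_feed_tph = 10.0                         # hypothetical fresh feed [t/h]
recirculating_load = 1.73
mill_throughput_tph = fresh_feed_tph * (1.0 + recirculating_load)
print(full_params, f"mill throughput ≈ {mill_throughput_tph:.1f} t/h")
```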
Abstract:
A new model to predict the extent of crushing around a blasthole is presented. The model is based on the back-analysis of a comprehensive experimental program that included the direct measurement of the zone of crushing in 92 blasting tests on concrete blocks using two commercial explosives. The concrete blocks ranged from low through medium to high strength and measured 1.5 m in length, 1.0 m in width and 1.1 m in height. A dimensionless parameter called the crushing zone index (CZI) is introduced. This index measures the crushing potential of a charged blasthole and is a function of the borehole pressure, the unconfined compressive strength of the rock material, the dynamic Young's modulus and Poisson's ratio. It is shown that the radius of crushing is a function of the CZI and the blasthole radius. A good correlation between the new model and the measured results was obtained. A number of previously proposed models could not approximate the conditions measured in the experimental work, and there are noted discrepancies between the different approaches reviewed, particularly for smaller diameter holes and low-strength rock conditions. The new model has been verified against full-scale tests reported in the literature. Results from this validation and the model evaluations show its applicability to production blasting. (C) 2003 Elsevier Science Ltd. All rights reserved.
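The dependency structure described above (CZI built from borehole pressure, UCS and dynamic elastic properties; crushing radius from the CZI and the hole radius) can be sketched as follows. The functional form, coefficients and exponents below are illustrative assumptions, not the published relations; the fitted constants should be taken from the paper itself.

```python
def crushing_zone_index(p_b, sigma_c, e_dyn, nu_dyn):
    """Dimensionless crushing potential of a charged blasthole.
    Illustrative form only: borehole pressure cubed over (rock stiffness * UCS^2),
    with an assumed stiffness K = E_dyn / (1 + nu_dyn)."""
    k = e_dyn / (1.0 + nu_dyn)
    return p_b**3 / (k * sigma_c**2)

def crushing_radius(r_0, czi, a=1.0, b=0.2):
    """Crushing radius as the blasthole radius times a power law of the CZI.
    a and b are placeholder constants standing in for the fitted values."""
    return r_0 * a * czi**b

# Hypothetical inputs (SI units): 2 GPa borehole pressure, 60 MPa UCS,
# 40 GPa dynamic modulus, dynamic Poisson's ratio 0.25, 50 mm hole radius.
czi = crushing_zone_index(p_b=2.0e9, sigma_c=60.0e6, e_dyn=40.0e9, nu_dyn=0.25)
r_c = crushing_radius(r_0=0.05, czi=czi)
print(f"CZI = {czi:.1f}, crushing radius ≈ {r_c:.3f} m")
```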
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns; this is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of 137Cs activity, given the measurements taken in the Briansk region following the Chernobyl accident.
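The idea of mixing short- and large-scale models can be illustrated with a kernel built as a weighted sum of two RBF kernels at different length-scales, fed to a standard SVR through a precomputed kernel. This is a generic sketch, not the paper's algorithm: in the multi-scale SVR the weights and length-scales are selected from the data rather than fixed by hand, and the coordinates and values below are synthetic.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

# Hypothetical 2-D measurement locations and values (e.g. activity levels).
rng = np.random.default_rng(7)
X = rng.uniform(0.0, 100.0, size=(300, 2))          # coordinates [km]
y = np.sin(X[:, 0] / 20.0) + 0.3 * np.sin(X[:, 1] / 2.0) + 0.05 * rng.standard_normal(300)

def multiscale_kernel(A, B, w_short=0.5, w_long=0.5, len_short=2.0, len_long=20.0):
    """Convex mixture of two RBF kernels with short and long length-scales.
    The weights and length-scales here are illustrative placeholders."""
    k_short = rbf_kernel(A, B, gamma=1.0 / (2.0 * len_short**2))
    k_long = rbf_kernel(A, B, gamma=1.0 / (2.0 * len_long**2))
    return w_short * k_short + w_long * k_long

K_train = multiscale_kernel(X, X)
model = SVR(kernel="precomputed", C=10.0, epsilon=0.05).fit(K_train, y)

# Prediction on a regular mapping grid.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = model.predict(multiscale_kernel(grid, X)).reshape(gx.shape)
```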
Abstract:
Hydrogen stratification and atmosphere mixing are very important phenomena in nuclear reactor containments when severe accidents are studied and simulated. Hydrogen generation, distribution and accumulation in certain parts of the containment may pose a great risk of a pressure increase induced by hydrogen combustion, and thus challenge the integrity of the NPP containment. Accurate prediction of hydrogen distribution is therefore important for the safety design of an NPP. Modelling methods typically used for containment analyses include both lumped parameter and field codes. The lumped parameter method is universally used in containment codes because of its versatility, flexibility and simplicity; it allows fast, full-scale simulations in which different containment geometries with the relevant engineering safety features can be modelled. Lumped parameter gas stratification and mixing modelling methods are presented and discussed in this master's thesis. Experimental research is widely used in containment analyses. The HM-2 experiment on hydrogen stratification and mixing, conducted at the THAI facility in Germany, is calculated with the APROS lumped parameter containment package and with the APROS 6-equation thermal hydraulic model. The main purpose was to study whether the convection term included in the momentum conservation equation of the 6-equation model gives any remarkable advantage over the simplified lumped parameter approach. Finally, a simple containment test case (a high steam release into a narrow steam generator room inside a large dry containment) was calculated with both APROS models. In this case, the aim was to determine extreme containment conditions in which the effect of the convection term was expected to be largest. The calculation results showed that both the APROS containment model and the 6-equation model could reproduce the hydrogen stratification in the THAI test well if the vertical nodalisation was dense enough. However, in more complicated cases the numerical diffusion may distort the results. The calculation of light-gas stratification could probably be improved by applying a second-order discretisation scheme to the modelling of gas flows. If the gas flows are relatively high, the convection term of the momentum equation is necessary to model the pressure differences between adjacent nodes reasonably.
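The numerical-diffusion issue noted above can be illustrated with a toy one-dimensional node stack (this is not APROS): a first-order upwind scheme smears a sharp light-gas front, and the smearing shrinks as the nodalisation is refined or a higher-order scheme is adopted. All geometry, velocity and timing values below are assumed for the illustration.

```python
import numpy as np

def advect_upwind(c, velocity, dz, dt, n_steps):
    """First-order upwind advection of a gas concentration up a vertical node
    stack; the scheme's truncation error behaves like an artificial diffusion."""
    c = c.copy()
    for _ in range(n_steps):
        flux = velocity * c                       # donor-node (upwind) flux
        c[1:] += dt / dz * (flux[:-1] - flux[1:])
        c[0] += dt / dz * (0.0 - flux[0])         # no inflow at the bottom boundary
    return c

def run(n_nodes, height=10.0, velocity=0.5, t_end=8.0):
    dz = height / n_nodes
    dt = 0.4 * dz / velocity                      # CFL-limited time step
    z = (np.arange(n_nodes) + 0.5) * dz
    c0 = np.where(z < 2.0, 1.0, 0.0)              # sharp light-gas layer at the bottom
    return advect_upwind(c0, velocity, dz, dt, int(round(t_end / dt)))

for n in (10, 40, 160):                           # coarse vs. dense nodalisation
    c = run(n)
    smeared = ((c > 0.05) & (c < 0.95)).sum() * (10.0 / n)
    print(f"{n:4d} nodes: front smeared over ~{smeared:.2f} m")
```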
Abstract:
An investigation is made of the impact of a full linearized physical (moist) parameterization package on extratropical singular vectors (SVs) using the ECMWF Integrated Forecasting System (IFS). Comparison is made, for one particular period, with a dry physical package including only vertical diffusion and surface drag. The crucial extra ingredient in the full package is found to be large-scale latent heat release. Consistent with basic theory, its inclusion results in a shift to smaller horizontal scales and enhanced growth of the SVs. Whereas T42 resolution is sufficient for the dry SVs, the moist SVs require T63 to resolve their structure and growth. A 24-h optimization time appears appropriate for the moist SVs because of their larger growth compared with the dry SVs. Like dry SVs, moist SVs tend to occur in regions of high baroclinicity, but their location is also influenced by the availability of moisture. The most rapidly growing SVs appear to enhance or reduce large-scale rain in regions ahead of major cold fronts: the enhancement occurs in and ahead of a cyclonic perturbation, and the reduction in and ahead of an anticyclonic perturbation. Most of the moist SVs for this situation are slightly modified versions of the dry SVs; however, some occur in new locations and have particularly confined structures. The most rapidly growing SV is shown to exhibit quite linear behavior in the nonlinear model as it grows from 0.5 to 12 hPa in one day. At five times this amplitude the structure is similar, but the growth is about half as large, as the perturbation damps a potential vorticity (PV) trough or produces a cutoff, depending on its sign.
Abstract:
The EU FP7 project MEGAPOLI: "Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation" (http://megapoli.info) brings together leading European research groups, state-of-the-art scientific tools and key players from non-European countries to investigate the interactions among megacities, air quality and climate. MEGAPOLI bridges the spatial and temporal scales that connect local emissions, air quality and weather with global atmospheric chemistry and climate. The paper discusses the suggested concept of multi-scale integrated modelling of megacity impacts on air quality and climate, and vice versa. This requires considering different spatial and temporal dimensions: time scales from seconds and hours (to understand the interaction mechanisms) up to years and decades (to consider climate effects); spatial resolutions, with model down- and up-scaling from street to global scale; and two-way interactions between meteorological and chemical processes.
Abstract:
It is becoming increasingly important that we can understand and model flow processes in urban areas. Applications such as weather forecasting, air quality and sustainable urban development rely on accurate modelling of the interface between an urban surface and the atmosphere above. This review gives an overview of the current understanding of turbulence generated by an urban surface up to a few building heights, the layer called the roughness sublayer (RSL). High-quality datasets are also identified which can be used in the development of suitable parameterisations of the urban RSL. Datasets derived from physical and numerical modelling, and from full-scale observations in urban areas, now exist across a range of urban-type morphologies (e.g. street canyons, cubes, idealised and realistic building layouts). Results show that the urban RSL depth falls within 2-5 times the mean building height and is not easily related to morphology. Systematic perturbations away from uniform layouts (e.g. varying building heights) have a significant impact on RSL structure and depth. Considerable fetch is required to develop an overlying inertial sublayer, where turbulence is more homogeneous, and some authors have suggested that the "patchiness" of urban areas may prevent inertial sublayers from developing at all. Turbulence statistics suggest similarities between vegetation and urban canopies, but key differences are emerging. There is no consensus as to suitable scaling variables, e.g. friction velocity above the canopy vs. the square root of the maximum Reynolds stress, or mean vs. maximum building height. The review includes a summary of existing modelling practices and highlights research priorities.