977 results for Subgrid Scale Model


Relevance:

90.00%

Publisher:

Abstract:

The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to average out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; as we show, this makes the noise more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test them in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are worse than those obtained with the plane-parallel, maximum-random-overlap representation of clouds. However, applying the methods suggested in this article reduces the noise enough to give near-surface temperature forecasts that improve on the plane-parallel, maximum-random-overlap ones. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is kept sufficiently small.
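The sampling step at the heart of McICA can be illustrated with a toy subcolumn generator. In the sketch below, the function name, the 112 g-points and the random-overlap assumption are all ours for illustration; operational generators honour a prescribed overlap rule such as maximum-random.

```python
import numpy as np

def mcica_sample(cloud_frac, n_gpoints, rng):
    """Draw one random subcolumn per spectral g-point (toy McICA sampler).

    cloud_frac : per-layer cloud fraction profile, values in [0, 1].
    Returns a boolean (n_gpoints, n_layers) array, True where the sampled
    subcolumn is cloudy.  Layers are sampled independently (random overlap)
    purely for illustration.
    """
    u = rng.random((n_gpoints, len(cloud_frac)))
    return u < cloud_frac        # cloudy where the draw falls below the fraction

rng = np.random.default_rng(0)
profile = np.array([0.0, 0.2, 0.8, 0.8, 0.1, 0.0])   # idealized mid-level cloud
subcols = mcica_sample(profile, n_gpoints=112, rng=rng)

# Averaged over many g-points the sampled cloudiness converges to the
# profile: McICA is unbiased, and its error is purely random noise.
sampled_frac = subcols.mean(axis=0)
```

Because each column sees only one random subcolumn per g-point, the radiative fluxes inherit exactly the conditional random noise the abstract describes; averaging over time or space removes it in climate runs but not on NWP scales.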

Relevance:

90.00%

Publisher:

Abstract:

We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is more due to the “thermal” forcing, whereas it is more due to the “wind” forcing in the North Pacific; in the Southern Ocean, the “thermal” and “wind” forcing have a comparable influence. In the ocean adjacent to Antarctica the “thermal” forcing leads to an inflow of warmer waters on the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by the increase of downward heat flux by the mean circulation and comparable decrease of upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea, but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.

Relevance:

90.00%

Publisher:

Abstract:

The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we consider a location-scale model for bivariate survival times, using a copula to model the dependence of bivariate survival data. For the proposed model, we consider inferential procedures based on maximum likelihood, and gains in efficiency from bivariate models are examined in the censored-data setting. Simulation studies spanning different parameter settings, sample sizes and censoring percentages are performed, and the results are compared with the performance of the bivariate regression model for matched paired survival data. Sensitivity analysis methods such as local and total influence are presented and derived under three perturbation schemes. The martingale marginal and deviance marginal residual measures are used to check the adequacy of the model, and we propose a new measure, which we call the modified deviance component residual. The methodology is illustrated on a lifetime data set for kidney patients.
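As a hedged illustration of the copula idea (the paper's specific copula and location-scale margins are not reproduced here), the sketch below samples dependent bivariate survival times from a Clayton copula with exponential margins; all parameter values are invented.

```python
import numpy as np

def clayton_sample(n, theta, rng):
    """Sample (u1, u2) from a Clayton copula by conditional inversion.
    Kendall's tau for this family is theta / (theta + 2)."""
    u1 = rng.random(n)
    w = rng.random(n)
    u2 = ((w ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2

rng = np.random.default_rng(1)
theta = 2.0                        # positive dependence, Kendall's tau = 0.5
u1, u2 = clayton_sample(20_000, theta, rng)

# Exponential margins stand in for the paper's location-scale margins:
# inverting each marginal survival function gives dependent lifetimes.
t1 = -np.log(1.0 - u1) / 0.5       # rate 0.5
t2 = -np.log(1.0 - u2) / 0.8       # rate 0.8
```

The same construction underlies likelihood-based inference: the copula density times the two marginal densities gives the joint likelihood contribution of an uncensored pair.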

Relevance:

90.00%

Publisher:

Abstract:

Increasing efforts have been made to engage children in the design of the built environment, and several participatory models have been developed. The aim of this paper is to propose a pedagogical model for children's genuine participation in architectural design, developed in an architectural education context. According to this pedagogical model, children (primary school students) and youth (university architecture students) work in teams to develop the architectural design proposals. This model was developed through a joint educational project between Deakin University and Wales Street Primary School (both institutions are based in Victoria, Australia). In the four-week duration of the project, first year architecture students worked with Grade 3 and 4 primary school children to design a school playground. The final product of the project was a 1:20 scale model of a playground, which was installed and presented at the end of the fourth week. The project received positive feedback from all the participants, including children, architecture students, university lecturers, primary school teachers and architects. In addition, it achieved a high level of children's genuine participation. This model can be refined and applied in new situations, and potentially with other primary schools working with Deakin University.

Relevance:

90.00%

Publisher:

Abstract:

The laboratory model is considered in this thesis. Information gained from this investigation has not been transferred to the larger industrial machines. Some of the factors noted concerning the efficiency of the laboratory shaking table are inherent in this small-scale model only.

Relevance:

90.00%

Publisher:

Abstract:

The Princeton Ocean Model is used to study the circulation features in the Pearl River Estuary and their responses to tide, river discharge, wind, and heat flux in the winter dry and summer wet seasons. The model has an orthogonal curvilinear grid in the horizontal plane with variable spacing from 0.5 km in the estuary to 1 km on the shelf and 15 sigma levels in the vertical direction. The initial conditions and the subtidal open boundary forcing are obtained from an associated larger-scale model of the northern South China Sea. Buoyancy forcing uses the climatological monthly heat fluxes and river discharges, and both the climatological monthly wind and the realistic wind are used in the sensitivity experiments. The tidal forcing is represented by sinusoidal functions with the observed amplitudes and phases. In this paper, the simulated tide is first examined. The simulated seasonal distributions of the salinity, as well as the temporal variations of the salinity and velocity over a tidal cycle are described and then compared with the in situ survey data from July 1999 and January 2000. The model successfully reproduces the main hydrodynamic processes, such as the stratification, mixing, frontal dynamics, summer upwelling, two-layer gravitational circulation, etc., and the distributions of hydrodynamic parameters in the Pearl River Estuary and coastal waters for both the winter and the summer season.
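Tidal forcing of the sinusoidal form described above can be sketched as follows; the constituent amplitudes and phases here are invented for illustration, whereas the model uses the observed values.

```python
import numpy as np

# Open-boundary tidal elevation as a sum of sinusoidal constituents,
# eta(t) = sum_i A_i * sin(omega_i * t + phi_i).
CONSTITUENTS = {  # name: (amplitude [m], period [h], phase [rad]) -- made up
    "M2": (0.45, 12.42, 0.3),
    "S2": (0.18, 12.00, 1.1),
    "K1": (0.35, 23.93, 2.0),
    "O1": (0.28, 25.82, 0.7),
}

def tidal_elevation(t_hours):
    """Sea-surface elevation at an open-boundary point at time t (hours)."""
    return sum(amp * np.sin(2.0 * np.pi / period * t_hours + phase)
               for amp, period, phase in CONSTITUENTS.values())

t = np.linspace(0.0, 48.0, 1000)   # two days of boundary forcing
eta = tidal_elevation(t)
```

Superposing semidiurnal (M2, S2) and diurnal (K1, O1) constituents in this way reproduces the mixed tidal character typical of the South China Sea shelf.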

Relevance:

90.00%

Publisher:

Abstract:

We present and analyze a subgrid viscosity Lagrange-Galerkin method that combines the subgrid eddy viscosity method proposed in W. Layton, A connection between subgrid scale eddy viscosity and mixed methods, Appl. Math. Comp., 133:147-157, 2002, with a conventional Lagrange-Galerkin method in the framework of P1 ⊕ cubic bubble finite elements. This results in an efficient and easy-to-implement stabilized method for convection-dominated convection-diffusion-reaction problems. Numerical experiments support the numerical analysis results and show that the new method is more accurate than the conventional Lagrange-Galerkin one.
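Under assumed notation (V_h the P1 ⊕ bubble space, Π_H the projection onto the coarse P1 scales, μ_T the subgrid eddy viscosity, X^n the foot of the characteristic), one time step of such a method can be sketched as:

```latex
% Sketch only: Lagrange-Galerkin transport step plus Layton-type subgrid
% eddy viscosity acting on the fluctuating (bubble) scales (I - \Pi_H).
\frac{1}{\Delta t}\bigl(u_h^{n+1} - u_h^{n}\circ X^{n},\, v_h\bigr)
+ \varepsilon\bigl(\nabla u_h^{n+1},\, \nabla v_h\bigr)
+ \mu_T\bigl(\nabla (I-\Pi_H)\,u_h^{n+1},\, \nabla (I-\Pi_H)\,v_h\bigr)
= \bigl(f^{n+1},\, v_h\bigr)
\qquad \forall\, v_h \in V_h .
```

The key design choice is that the extra viscosity μ_T is confined to the subgrid fluctuations, so the accuracy of the coarse scales is not degraded as it would be with a uniform artificial viscosity.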

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this thesis is to study the approximation of thermal transport phenomena in glazed buildings through their scale replicas. The central task is therefore the comparison of the thermal performance of undistorted scale models with the corresponding thermal performance of their full-scale prototypes; indoor air temperatures of the scale model and the corresponding prototype are the data compared. The first chapter of the State of the Art offers a historical review of the uses of scale models, from antiquity to the present day. The State of the Technique section presents the benefits and difficulties associated with their use. The State of the Research section then reviews current scientific papers and theses on scale models, focusing on functional scale models: scale models that additionally replicate one or more functions of their prototypes. Scale models may be distorted or undistorted. Distorted scale models contain intentional changes in dimensions or in constructive characteristics in order to obtain a specific response, for example a specific thermal performance. Undistorted scale models preserve, as far as possible, the dimensional proportions and constructive characteristics of their reference prototypes. Undistorted functional scale models are especially useful for architects because they can serve simultaneously as functional elements of analysis and as decision-making aids during constructive design. Despite this versatility, such models have been used very little for studying the thermal performance of buildings.
Subsequently, the theories for analyzing the thermal data collected from the scale models, and their applicability to the corresponding full-scale prototypes, are presented. The experiments, carried out both in the laboratory and outdoors, are then described. First, experiments with simple cubic models at different scales, subjected to the same environmental conditions, are explained. A step forward is then taken with simultaneous outdoor tests of a full-scale prototype, the Prototype Workshop of the School of Architecture of Madrid (ETSAM, Technical University of Madrid), and its undistorted scale model, a replica of a relatively simple lightweight glazed construction. For the analysis of the experimental data, known theories and resources are applied: direct comparisons, statistical analyses, dimensional analysis and simulations. Simulations allow flexible comparisons with the experimental data; besides the commercial simulation software EnergyPlus, a simulation algorithm was developed ad hoc for this research. Finally, the discussion and conclusions of this research are presented.

Relevance:

90.00%

Publisher:

Abstract:

Different parameterizations of subgrid-scale fluxes are used in a nonhydrostatic, anelastic mesoscale model to study their influence on simulated Arctic cold-air outbreaks. A local closure, a profile closure and two nonlocal closure schemes are applied, including an improved scheme based on other nonlocal closures that accounts for continuous subgrid-scale fluxes at the top of the surface layer and a Prandtl number that varies continuously with stratification. In the limit of neutral stratification the improved scheme gives eddy diffusivities similar to the other parameterizations, whereas for strongly unstable stratification they become much larger and turbulent transports correspondingly more efficient. Comparison of model results with observations shows that the simple nonlocal closure schemes simulate a convective boundary layer more realistically than the local or profile closure schemes. The improvements are due to the nonlocal formulation of the eddy diffusivities and to the inclusion of a heat transport that is independent of local gradients (countergradient transport).
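The countergradient (nonlocal) correction mentioned above can be sketched as follows; the profile, the diffusivity K and the symbol gamma are illustrative assumptions, not values from the study.

```python
import numpy as np

def turbulent_heat_flux(theta, z, K, gamma):
    """Turbulent heat flux with a countergradient term:
    flux = -K * (d(theta)/dz - gamma).  With gamma > 0 the scheme can
    carry heat upward even against a weakly stable local gradient, which
    a purely local (gamma = 0) closure cannot do."""
    return -K * (np.gradient(theta, z) - gamma)

z = np.linspace(10.0, 1000.0, 100)         # heights [m]
theta = 290.0 + 1.0e-4 * z                 # weakly stable potential temperature [K]

local_flux = turbulent_heat_flux(theta, z, K=50.0, gamma=0.0)        # downward
nonlocal_flux = turbulent_heat_flux(theta, z, K=50.0, gamma=7.0e-4)  # upward
```

The sign reversal between the two calls is exactly the countergradient transport of convective boundary layers: heat moves up the column although the local gradient alone would drive it down.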

Relevance:

90.00%

Publisher:

Abstract:

Large-eddy simulation is used to predict heat transfer in the separated and reattached flow regions downstream of a backward-facing step. Simulations were carried out at a Reynolds number of 28 000 (based on the step height and the upstream centreline velocity) with a channel expansion ratio of 1.25. The Prandtl number was 0.71. Two subgrid-scale models were tested, namely the dynamic eddy-viscosity, eddy-diffusivity model and the dynamic mixed model. Both models showed good overall agreement with available experimental data. The simulations indicated that the peak in heat-transfer coefficient occurs slightly upstream of the mean reattachment location, in agreement with experimental data. The results of these simulations have been analysed to discover the mechanisms that cause this phenomenon. The peak in heat-transfer coefficient shows a direct correlation with the peak in wall shear-stress fluctuations. It is conjectured that the peak in these fluctuations is caused by an impingement mechanism, in which large eddies, originating in the shear layer, impact the wall just upstream of the mean reattachment location. These eddies cause a 'downwash', which increases the local heat-transfer coefficient by bringing cold fluid from above the shear layer towards the wall.
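Both subgrid-scale models tested above are eddy-viscosity closures at heart. The sketch below shows the basic, non-dynamic Smagorinsky form of such a closure for a 2-D field; the study's dynamic models compute the coefficient on the fly from the resolved scales, so the fixed Cs here is purely illustrative.

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Static Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| in 2-D,
    where |S| = sqrt(2 S_ij S_ij) is the strain-rate magnitude and Delta
    the filter width."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    return (cs * delta) ** 2 * s_mag

# Plane shear du/dy = 1 gives |S| = 1; solid-body rotation gives nu_t = 0,
# since rotation contributes no strain.
nu_shear = smagorinsky_nu_t(0.0, 1.0, 0.0, 0.0, delta=0.1)
nu_rotation = smagorinsky_nu_t(0.0, 1.0, -1.0, 0.0, delta=0.1)
```

An eddy-diffusivity model for heat then sets the subgrid diffusivity to kappa_t = nu_t / Pr_t with a turbulent Prandtl number, which is the structure of the dynamic eddy-viscosity, eddy-diffusivity model named in the abstract.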

Relevance:

90.00%

Publisher:

Abstract:

Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult, due in part to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems.

First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments, but this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with them.

Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments.

Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. Real-time network simulation thus not only alleviates the burden of developing separate models for applications in simulation, but, as real systems are included in the network model, also increases the confidence level of network simulation. This work presents a scalable and flexible framework for integrating real-world applications with real-time simulation.
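The structural-duplication idea, namely many nodes sharing one copy of identical state, is essentially the flyweight pattern. A minimal sketch (all class and field names are invented, not from the dissertation):

```python
class SharedConfig:
    """Flyweight: one shared object per distinct router configuration."""
    _pool = {}

    def __new__(cls, bandwidth, queue_len):
        key = (bandwidth, queue_len)
        if key not in cls._pool:
            obj = super().__new__(cls)
            obj.bandwidth, obj.queue_len = bandwidth, queue_len
            cls._pool[key] = obj
        return cls._pool[key]          # reuse the pooled instance

class Router:
    __slots__ = ("node_id", "config")  # per-node state stays tiny
    def __init__(self, node_id, bandwidth, queue_len):
        self.node_id = node_id
        self.config = SharedConfig(bandwidth, queue_len)

# 100,000 routers drawn from a single configuration share one config object.
routers = [Router(i, 10_000, 64) for i in range(100_000)]
assert routers[0].config is routers[99_999].config
```

Memory then scales with the number of *distinct* configurations rather than the number of nodes, which is what makes Internet-scale topologies fit in memory.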

Relevance:

90.00%

Publisher:

Abstract:

Transient simulations are widely used in studying past climate because they allow direct comparison with existing proxy records. However, multi-millennial transient simulations with coupled climate models are computationally very expensive, so several acceleration techniques are commonly applied when recreating past climate numerically. In this study, we compare results from transient simulations of the present and the last interglacial, with and without acceleration of the orbital forcing, using the comprehensive coupled climate model CCSM3 (Community Climate System Model 3). Our study shows that in low-latitude regions, the simulation of long-term variations in interglacial surface climate is not significantly affected by the acceleration technique (with an acceleration factor of 10), so large-scale model-data comparison of surface variables is not hampered. However, in high-latitude regions where the surface climate is directly connected to the deep ocean, e.g. the Southern Ocean or the Nordic Seas, acceleration-induced biases in the sea-surface temperature evolution may occur, with potential influence on the dynamics of the overlying atmosphere. The data provided here are decadal means from both the accelerated and the non-accelerated runs.
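The acceleration technique amounts to advancing the orbital-parameter calendar faster than the model clock. A minimal sketch with an acceleration factor of 10 as in the study (the function name and start year are illustrative):

```python
def orbital_year(model_year, start_year, accel=10):
    """Calendar year whose orbital parameters the model uses at `model_year`.
    With accel = 10, each simulated year sweeps ten years of orbital
    forcing, so the slow insolation signal is compressed tenfold."""
    return start_year + accel * model_year

# A 12,500-year stretch of orbital forcing is traversed in 1,250 model years.
span = orbital_year(1250, -125_000) - orbital_year(0, -125_000)
```

The trade-off described in the abstract follows directly: fast surface processes keep pace with the compressed forcing, while the deep ocean, whose adjustment takes centuries to millennia, cannot, producing the high-latitude biases.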

Relevance:

80.00%

Publisher:

Abstract:

This paper assesses and compares the performance of two daylight collection strategies, one passive and one active, for large-scale mirrored light pipes (MLPs) illuminating deep-plan buildings. Both strategies use laser cut panels (LCPs) as the main component of the collection system. The passive system comprises LCPs in pyramid form, whereas the active system uses a tilted LCP on a simple rotation mechanism that rotates 360° in 24 hours. Performance is assessed by scale-model testing under sunny sky conditions and by mathematical modelling. Results show average illuminance levels ranging from 50 to 250 lux for the pyramid LCP and from 150 to 200 lux for the rotating LCP. Both systems improve the performance of an MLP: compared with an open pipe, the pyramid LCP increases performance by a factor of 2.5 and the rotating LCP by a factor of 5, particularly for low sun elevation angles.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes automation of the digging cycle of a mining rope shovel, covering autonomous dipper (bucket) filling and methods to detect when to disengage the dipper from the bank. Novel techniques for overcoming dipper stall and for online estimation of dipper "fullness" are described, together with in-field experimental results of laser DTM generation, machine automation and digging obtained with a 1/7th-scale model rope shovel. © 2006 Wiley Periodicals, Inc.

Relevance:

80.00%

Publisher:

Abstract:

This paper examines the feasibility of automating the dragline bucket excavators used to strip overburden from open-cut mines; in particular, automatic control of the bucket carry angle and bucket trajectory are addressed. The open-loop dynamics of a 1:20 scale model dragline bucket are identified through measurement of the frequency response between carry angle and drag motor input voltage. A strategy for automatic control of the carry angle is devised and implemented using bucket angle and rate feedback. System compensation and tuning are explained, and closed-loop frequency and time responses are measured.
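Carry-angle control with angle and rate feedback is a classical proportional-derivative loop. The sketch below simulates such a loop on a crude second-order bucket model; the gains, inertia and damping are invented for illustration and are not the paper's identified dynamics.

```python
def pd_carry_angle_step(angle, rate, target, kp, kd, dt,
                        inertia=1.0, damping=0.2):
    """One Euler step of angle + rate feedback on a second-order bucket model.
    The drag-motor command u combines proportional (angle) and derivative
    (rate) feedback, mirroring 'bucket angle and rate feedback'."""
    u = kp * (target - angle) - kd * rate      # drag-motor command
    accel = (u - damping * rate) / inertia
    rate = rate + accel * dt
    angle = angle + rate * dt
    return angle, rate

angle, rate = 0.0, 0.0
for _ in range(5000):                          # 50 s at dt = 0.01 s
    angle, rate = pd_carry_angle_step(angle, rate, target=0.3,
                                      kp=4.0, kd=2.0, dt=0.01)
# angle settles at the 0.3 rad carry-angle setpoint with zero residual rate
```

The rate-feedback term supplies the damping that the lightly damped pendulum-like bucket lacks, which is why angle feedback alone would oscillate.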