146 results for real-scale modelling


Relevance: 100.00%

Abstract:

A new dual-scale modelling approach is presented for simulating the drying of a wet hygroscopic porous material that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of wood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradients of moisture content and temperature on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic mass and thermal fluxes to be defined as averages of the microscopic fluxes over the unit cell. This novel formulation accounts for the intricate coupling of heat and mass transfer at the microscopic scale but reduces to a classical homogenisation approach if a linear relationship is assumed between the microscopic gradient and flux. Simulation results for a sample of spruce wood highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to propose the form of the macroscopic fluxes prior to the simulations because these are determined as a direct result of the dual-scale formulation.
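
To make the flux-averaging idea concrete, a minimal sketch follows; the notation is illustrative and not taken from the paper. The microscopic moisture (or temperature) field on the unit cell $\Omega$ is decomposed into the imposed macroscopic gradient plus a periodic fluctuation, and the macroscopic flux is recovered as the cell average of the microscopic flux:

\[
X_\mu(\mathbf{x}) = \nabla X_M \cdot \mathbf{x} + \tilde{X}(\mathbf{x}), \qquad \tilde{X} \text{ periodic on } \partial\Omega,
\qquad
\mathbf{q}_M = \frac{1}{|\Omega|} \int_{\Omega} \mathbf{q}_\mu \, \mathrm{d}\Omega .
\]

If $\mathbf{q}_\mu$ is assumed to depend linearly on the local gradient, the average collapses to $\mathbf{q}_M = -\mathbf{D}_{\mathrm{eff}} \nabla X_M$, which is the classical homogenisation limit mentioned above.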

Relevance: 100.00%

Abstract:

Unsaturated water flow in soil is commonly modelled using Richards’ equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity, etc.) to be characterised. Naturally occurring soils, however, are heterogeneous in nature; that is, they are composed of a number of interwoven homogeneous soils, each with its own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards’ equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, where the heterogeneous medium is replaced with a fictitious homogeneous medium that attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than the scale of the heterogeneities). Using homogenisation theory, a macroscopic equation can be derived that takes the form of Richards’ equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails in cases where the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include, at each point in the macroscopic domain, an additional flow equation at the scale of the heterogeneities (microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, which allow irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner. Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
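
For context, the governing equation referred to above can be stated explicitly; in its mixed (head-based) form, Richards’ equation reads

\[
\frac{\partial \theta(h)}{\partial t} = \nabla \cdot \big[ K(h)\, \nabla (h + z) \big],
\]

where $\theta$ is the volumetric water content, $h$ the pressure head, $K(h)$ the unsaturated hydraulic conductivity and $z$ the vertical coordinate. In the two-scale setting described above, an equation of this form is solved on the macroscopic domain and, at each macroscopic point, again on a local cell representing the heterogeneities, with the two levels exchanging averaged quantities; the precise coupling terms are model-specific and are not reproduced here.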

Relevance: 90.00%

Abstract:

A mathematical model for the galvanostatic discharge and recovery of porous, electrolytic manganese dioxide cathodes, similar to those found within primary alkaline batteries, is presented. The phenomena associated with discharge are modelled over three distinct size scales: a cathodic (or macroscopic) scale, a porous manganese oxide particle (or microscopic) scale, and a manganese oxide crystal (or submicroscopic) scale. The physical and chemical coupling between these size scales is included in the model. In addition, the model explicitly accounts for the graphite phase within the cathode. The model is used to predict the effects that manganese oxide particle size and proton diffusion have on cathodic discharge, as well as the effects of intraparticle voids and the microporous electrode structure.
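
As an illustrative example of the kind of microscopic-scale transport such multi-scale models resolve (the equation below is a standard sub-model and is not reproduced from the paper), proton diffusion in an approximately spherical manganese oxide particle of radius $R$ can be written as

\[
\frac{\partial c}{\partial t} = \frac{1}{r^2} \frac{\partial}{\partial r}\!\left( r^2 D \frac{\partial c}{\partial r} \right), \qquad 0 < r < R,
\]

with symmetry at $r = 0$ and a surface flux at $r = R$ set by the local discharge current. The cathodic and crystal scales carry their own balance equations, which the abstract does not state and which are therefore not reconstructed here.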

Relevance: 90.00%

Abstract:

The field of epigenetics looks at changes in chromosomal structure that affect gene expression without altering the DNA sequence. A large-scale modelling project to better understand these mechanisms is gaining momentum. Early advances in genetics led to the all-genetic paradigm: phenotype (an organism's characteristics/behaviour) is determined by genotype (its genetic make-up). This was later amended and expressed by the well-known formula P = G + E, encompassing the notion that the visible characteristics of a living organism (the phenotype, P) are a combination of hereditary genetic factors (the genotype, G) and environmental factors (E). However, this formula fails to explain why, in diseases such as schizophrenia, we still observe differences between identical twins. Furthermore, the identification of environmental factors (such as smoking and air quality for lung cancer) is relatively rare. The formula also fails to explain cell differentiation from a single fertilized cell. In the wake of early work by Waddington, more recent results have emphasized that the expression of the genotype can be altered without any change in the DNA sequence. This phenomenon has been tagged as epigenetics. To form the chromosome, DNA strands roll over nucleosomes, each a cluster of nine proteins (histones), as detailed in Figure 1. Epigenetic mechanisms involve inherited alterations in these two structures, e.g. through attachment of a functional group (methyl, acetyl or phosphate) to the amino acids. These 'stable alterations' arise during development and cell proliferation and persist through cell division. While the information within the genetic material is not changed, the instructions for its assembly and interpretation may be. Modelling this new paradigm, P = G + E + EpiG, is the object of our study.

Relevance: 80.00%

Abstract:

This paper discusses two different approaches to teaching design and their modes of delivery, and reflects upon their successes and failures. Two small groups of third year design students were given projects focussing on the incorporation of daylighting into architectural design, in studios with different design themes. In association with the curriculum, the themes were Digital Tools and Sustainability. Although both studios had the topic of daylighting, their aims and methodologies were different. The Digital Tools studio's aim was to teach how to design daylighting by using a digital tool, whereas the Sustainability studio aimed at using scale modelling as a tool to learn about daylighting and integrate it into design. Positive results for student learning success within the University context were the students' chance to learn and practice new skills (using a new tool for designing), the integration of the tutors' extensive research expertise into their teaching practice, and the students' construction of their own understanding of knowledge in a student-centred educational environment. This environment created a very positive attitude, in the form of idea exchange and collaboration among the Digital Tools studio students in the discussion forum. The Sustainability group students were enthusiastic about designing and testing various proposals. The problems that both studios experienced were mainly related to timing: synchronising with the other groups in their studios, and learning a new skill on top of an already complicated process of design learning, were the main setbacks.

Relevance: 80.00%

Abstract:

Diffusion in a composite slab consisting of a large number of layers provides an ideal prototype problem for developing and analysing two-scale modelling approaches for heterogeneous media. Numerous analytical techniques have been proposed for solving the transient diffusion equation in a one-dimensional composite slab consisting of an arbitrary number of layers. Most of these approaches, however, require the solution of a complex transcendental equation arising from a matrix determinant for the eigenvalues that is difficult to solve numerically for a large number of layers. To overcome this issue, in this paper, we present a semi-analytical method based on the Laplace transform and an orthogonal eigenfunction expansion. The proposed approach uses eigenvalues local to each layer that can be obtained either explicitly, or by solving simple transcendental equations. The semi-analytical solution is applicable to both perfect and imperfect contact at the interfaces between adjacent layers and either Dirichlet, Neumann or Robin boundary conditions at the ends of the slab. The solution approach is verified for several test cases and is shown to work well for a large number of layers. The work is concluded with an application to macroscopic modelling where the solution of a fine-scale multilayered medium consisting of two hundred layers is compared against an “up-scaled” variant of the same problem involving only ten layers.
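
To make the problem class concrete (this is the standard formulation of one-dimensional multilayer diffusion, not a verbatim reproduction of the paper's equations): on layer $i$ occupying $(l_{i-1}, l_i)$ with diffusivity $D_i$,

\[
\frac{\partial u_i}{\partial t} = D_i \frac{\partial^2 u_i}{\partial x^2}, \qquad l_{i-1} < x < l_i, \quad i = 1,\dots,m,
\]

with continuity of flux $D_i \,\partial_x u_i = D_{i+1} \,\partial_x u_{i+1}$ at each interface $x = l_i$, together with either $u_i = u_{i+1}$ (perfect contact) or a contact-resistance condition of the form $D_i \,\partial_x u_i = H_i (u_{i+1} - u_i)$ (imperfect contact), and Dirichlet, Neumann or Robin conditions at $x = l_0$ and $x = l_m$. The semi-analytical approach described above expands the solution in eigenfunctions that are local to each layer, so the eigenvalues come from single-layer problems rather than from one large transcendental determinant.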

Relevance: 60.00%

Abstract:

Fire safety has become an important part of structural design due to the ever increasing loss of property and lives during fires. The fire rating of load bearing wall systems made of light gauge steel frames (LSF) is determined using fire tests based on the standard time-temperature curve given in ISO 834. However, modern residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence a detailed fire research study into the performance of load bearing LSF walls was undertaken using a series of realistic design fire curves developed based on Eurocode parametric curves and Barnett's BFD curves. It included both full scale fire tests and numerical studies of LSF walls without any insulation, and of the recently developed externally insulated composite panels. This paper presents the details of the fire tests first, followed by the numerical models of the tested LSF wall studs. It shows that suitable finite element models can be developed to predict the fire rating of load bearing walls under real fire conditions. The paper also describes the structural and fire performance of externally insulated LSF walls in comparison to non-insulated walls under real fires, and highlights the effects of standard and real fire curves on the fire performance of LSF walls.
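
The two reference heating regimes mentioned above can be generated with a few lines of code. The sketch below is illustrative only: the ISO 834 expression is the standard one, while the Eurocode parametric curve is given only in its heating phase and takes the time-expansion factor gamma as an input rather than deriving it from the compartment's opening factor and thermal inertia, so values should be checked against EN 1991-1-2 before any design use.

import math

def iso834_temperature(t_min: float) -> float:
    """ISO 834 standard fire curve: gas temperature in deg C at time t_min (minutes)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def eurocode_parametric_heating(t_hours: float, gamma: float) -> float:
    """Heating phase of the Eurocode parametric fire curve (EN 1991-1-2, Annex A).

    t_hours : time in hours
    gamma   : time-expansion factor computed from the compartment's opening
              factor and thermal inertia (not derived here; see EN 1991-1-2).
    """
    t_star = gamma * t_hours
    return 20.0 + 1325.0 * (1.0
                            - 0.324 * math.exp(-0.2 * t_star)
                            - 0.204 * math.exp(-1.7 * t_star)
                            - 0.472 * math.exp(-19.0 * t_star))

if __name__ == "__main__":
    # Compare the standard curve with a parametric curve; gamma = 1 gives a
    # heating phase close to ISO 834.
    for minutes in (15, 30, 60, 90):
        print(minutes, round(iso834_temperature(minutes)),
              round(eurocode_parametric_heating(minutes / 60.0, gamma=1.0)))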

Relevance: 50.00%

Abstract:

Fire safety has become an important part of structural design due to the ever increasing loss of property and lives during fires. Conventionally, the fire rating of load bearing wall systems made of light gauge steel frames (LSF) is determined using fire tests based on the standard time-temperature curve given in ISO 834 (ISO, 1999), which originated from the use of wood burning furnaces in the early 1900s. However, modern commercial and residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence a detailed fire research study into the performance of LSF walls was undertaken using real fire curves developed from Eurocode parametric curves (ECS, 2002) and Barnett's BFD curves (Barnett, 2002), through both full scale fire tests and numerical studies. It included LSF walls without any insulation, and the recently developed externally insulated composite panel system. This paper presents the details of the numerical studies and their results, and also includes brief details of the development of the real building fire curves and the experimental studies.
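
The BFD design fires referred to here and in the previous entry are single-expression compartment fire curves. As commonly quoted (the exact formulation and the fitting of the parameters should be taken from Barnett, 2002, which this listing does not reproduce), the gas temperature has the form

\[
T_g = T_a + T_m \, e^{-z}, \qquad z = \frac{(\ln t - \ln t_m)^2}{s_c},
\]

where $T_a$ is the ambient temperature, $T_m$ the maximum gas temperature rise, $t_m$ the time at which that maximum occurs and $s_c$ a shape constant chosen to reflect the compartment's fuel load and ventilation.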

Relevance: 40.00%

Abstract:

Vigilance declines when people are exposed to highly predictable and uneventful tasks. Monotonous tasks provide little cognitive and motor stimulation and contribute to human error. This paper aims to model and detect vigilance decline in real time through participants' reaction times during a monotonous task. A lab-based experiment adapting the Sustained Attention to Response Task (SART) is conducted to quantify the effect of monotony on overall performance. Relevant parameters are then used to build a model that detects hypovigilance throughout the experiment. The accuracy of different mathematical models is compared for detecting lapses in vigilance in real time, minute by minute, during the task. We show that monotonous tasks can lead to an average decline in performance of 45%. Furthermore, vigilance modelling enables vigilance decline to be detected from reaction times with an accuracy of 72% and a 29% false alarm rate. Bayesian models are identified as better suited to detecting lapses in vigilance than Neural Networks and Generalised Linear Mixed Models. This modelling could be used as a framework to detect vigilance decline in any human performing monotonous tasks.
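
As a rough illustration of the kind of classifier such a study compares, the sketch below aggregates trial-level reaction times into per-minute features and applies a two-class Gaussian naive Bayes model. This is not the authors' model: the feature choice, the simulated data, the labelling threshold and the class definitions are assumptions made purely for illustration.

import numpy as np

def minute_features(rt_seconds, trials_per_minute=20):
    """Aggregate trial-level reaction times into per-minute mean and standard deviation."""
    rt = np.asarray(rt_seconds, dtype=float)
    n_minutes = len(rt) // trials_per_minute
    rt = rt[:n_minutes * trials_per_minute].reshape(n_minutes, trials_per_minute)
    return np.column_stack([rt.mean(axis=1), rt.std(axis=1)])

class SimpleGaussianNB:
    """Two-class Gaussian naive Bayes (0 = alert, 1 = hypovigilant)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # Log-likelihood of each class under independent Gaussian features.
        ll = -0.5 * (((X[:, None, :] - self.means_) ** 2) / self.vars_
                     + np.log(2 * np.pi * self.vars_)).sum(axis=2)
        return self.classes_[np.argmax(ll + np.log(self.priors_), axis=1)]

# Usage sketch: simulate gradually slowing reaction times, label each minute as
# hypovigilant when its mean RT exceeds an assumed criterion, then train and predict.
rng = np.random.default_rng(0)
rts = rng.normal(0.4, 0.05, 600) + np.linspace(0.0, 0.3, 600)
X = minute_features(rts)
y = (X[:, 0] > 0.55).astype(int)
model = SimpleGaussianNB().fit(X, y)
print(model.predict(X[-5:]))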

Relevance: 40.00%

Abstract:

Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigations into the modelling of wireless networks with periodic real-time traffic. Considering the widely used IEEE 802.11 standard, with the focus on its distributed coordination function (DCF), for soft-real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. Performance indices to be evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantee in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
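
To give a flavour of the fixed-point analysis that DCF Markov models lead to, the sketch below solves the classical saturated-traffic model of Bianchi (2000) for the per-slot transmission probability tau and the conditional collision probability p. It is not the periodic real-time traffic model developed in the paper above; the number of stations n, the minimum contention window W and the number of backoff stages m are the usual inputs of the classical model.

def bianchi_fixed_point(n: int, W: int = 32, m: int = 5,
                        iters: int = 10000, damping: float = 0.1):
    """Damped fixed-point iteration for Bianchi's saturated 802.11 DCF model.

    Returns (tau, p): per-slot transmission probability and conditional
    collision probability for n contending stations.
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
        tau += damping * (tau_new - tau)
    p = 1.0 - (1.0 - tau) ** (n - 1)
    return tau, p

if __name__ == "__main__":
    # Collision probability grows with the number of stations, which is what
    # drags down throughput and inflates delay in saturated conditions.
    for n in (5, 10, 20, 50):
        tau, p = bianchi_fixed_point(n)
        print(f"n={n:3d}  tau={tau:.4f}  p={p:.4f}")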