25 results for Effective medium theory
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Effective medium approximations for the frequency-dependent and complex-valued effective stiffness tensors of cracked/porous rocks with multiple solid constituents are developed on the basis of the T-matrix approach (based on integral equation methods for quasi-static composites), the elastic-viscoelastic correspondence principle, and a unified treatment of the local and global flow mechanisms, which is consistent with the principle of fluid mass conservation. The main advantage of using the T-matrix approach, rather than the first-order approach of Eshelby or the second-order approach of Hudson, is that it produces physically plausible results even when the volume concentrations of inclusions or cavities are no longer small. The new formulae, which operate with an arbitrary homogeneous (anisotropic) reference medium and contain terms of all orders in the volume concentrations of solid particles and communicating cavities, explicitly account for inclusion shape and spatial distribution independently. We show analytically that an expansion of the T-matrix formulae to first order in the volume concentration of cavities (in agreement with the dilute estimate of Eshelby) has the correct dependence on the properties of the saturating fluid, in the sense that it is consistent with the Brown-Korringa relation, when the frequency is sufficiently low. We present numerical results for the (anisotropic) effective viscoelastic properties of a cracked permeable medium with finite storage porosity, indicating that the complete T-matrix formulae (including the higher-order terms) are generally consistent with the Brown-Korringa relation, at least if we assume the spatial distribution of cavities to be the same for all cavity pairs.
We have found an efficient way to treat statistical correlations in the shapes and orientations of the communicating cavities, and also obtained a reasonable match between theoretical predictions (based on a dual porosity model for quartz-clay mixtures, involving relatively flat clay-related pores and more rounded quartz-related pores) and laboratory results for the ultrasonic velocity and attenuation spectra of a suite of typical reservoir rocks. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Ellipsometry and atomic force microscopy (AFM) were used to study the film thickness and the surface roughness of both 'soft' and solid thin films. 'Soft' polymer thin films of polystyrene and poly(styrene-ethylene/butylene-styrene) block copolymer were prepared by spin-coating onto planar silicon wafers. Ellipsometric parameters were fitted by the Cauchy approach using a two-layer model with planar boundaries between the layers. The smooth surfaces of the prepared polymer films were confirmed by AFM. There is good agreement between AFM and ellipsometry in the 80-130 nm thickness range. Semiconductor surfaces (Si) obtained by anisotropic chemical etching were investigated as an example of a randomly rough surface. To define roughness parameters by ellipsometry, the top rough layers were treated as thin films according to the Bruggeman effective medium approximation (BEMA). Surface roughness values measured by AFM and ellipsometry show the same tendency of increasing roughness with increased etching time, although the AFM results depend on the window size used. The combined use of both methods appears to offer the most comprehensive route to quantitative surface roughness characterisation of solid films. Copyright (c) 2007 John Wiley & Sons, Ltd.
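For two constituents with real permittivities, the Bruggeman (BEMA) mixing equation used above for the rough surface layer can be solved directly. The sketch below is a minimal illustration, assuming a 50/50 void/silicon mixture with illustrative permittivity values; the function name and parameters are not from the paper.

```python
def bruggeman_effective_permittivity(eps1, eps2, f1, tol=1e-12):
    """Solve the two-phase Bruggeman (BEMA) mixing equation for the
    effective permittivity e of a composite layer:
        f1*(eps1 - e)/(eps1 + 2e) + (1 - f1)*(eps2 - e)/(eps2 + 2e) = 0
    Real positive permittivities assumed; solved by bisection."""
    f2 = 1.0 - f1

    def residual(e):
        return (f1 * (eps1 - e) / (eps1 + 2 * e)
                + f2 * (eps2 - e) / (eps2 + 2 * e))

    # The residual is strictly decreasing in e and changes sign on
    # [min(eps1, eps2), max(eps1, eps2)], so bisection converges.
    lo, hi = sorted((eps1, eps2))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: rough silicon surface modelled as 50% void (eps = 1.0)
# and 50% silicon (eps = 11.7, an illustrative value).
eps_eff = bruggeman_effective_permittivity(1.0, 11.7, 0.5)
```

The effective value always lies between the two constituent permittivities, which is one reason BEMA gives physically sensible roughness-layer models.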
Abstract:
This chapter considers two questions. Firstly, in what ways might drama be an effective medium through which to explore difficult and sensitive issues that concern teenagers? And secondly, what ethical questions surround the use of drama to explore such issues? A practical workshop on teenage suicide is used as a platform for a discussion on the use and implications of different drama strategies and the role of humour as a critical lens and distancing device. The work of actual teenagers is used to illustrate the effectiveness of the techniques in both raising awareness and facilitating both critical and creative responses to the delicate issue explored in the workshop.
Abstract:
25 years ago, when the Durham conferences were in full swing, I presented results of investigations on language and behaviour in autism. I tentatively proposed that early language in autism might tell us about the cognitive skills of people with ASD, and that the behaviour might lead to greater understanding of which brain systems might be affected. In this presentation, I will update these topics and present a summary of other work I have been involved with in attempting to improve the lives of people with autism and their families. Data on three people with autism at the early stages of speech development showed an unusual pattern of learning colour and number names early. One possibility was that this skill represented a sign of weak central coherence – they only attended to one dimension. Colleagues of mine were equally puzzled, so we tried to find out if my results could be replicated – they were not (see Schafer, Williams & Smith, 2014). Instead we found this pattern was also seen in Down Syndrome, but that early vocabulary in autism was associated with low Colorado Meaningfulness, at least in comprehension. The Colorado Meaningfulness of a word is a measure of how many words can be associated with it and often involves extensive use of context. Our data suggest that the number of contexts in which a particular word can appear has a role in determining vocabulary in ASD, which is consistent with the weak central coherence theory of autism. In the course of this work I also came across a group of young people with autism who appeared to have a written vocabulary but not a spoken one. It seems possible that print might be a medium of communication when speech is not. Repetitive behaviour in autism remains a mystery. We can use functional analysis to determine why the behaviour occurs, but a worryingly large percentage of behaviours are described as being internally driven or sensory reinforced.
What does that mean in terms of brain activity – could it be a system analogous to epilepsy, where brain activity becomes inappropriately synchronised? At the moment I cannot claim to have solved this problem, but if sensation is a driver then sensory interventions should make a difference. Data from a recent study will be presented to suggest that for some individuals this is the case. Social behaviour remains the key, however, and it remains to be seen whether social behaviour can be aided. One route that has potential is direct teaching of skills through drama and working with others who do not have social difficulties of the same type. The picture is complicated by changes in social skills with age and experience, but the failure of people with ASD to interact when in settings of social contact is little researched.
Abstract:
Small and medium-sized enterprises (SMEs) play an important role in the European economy. A critical challenge faced by SME leaders, as a consequence of the continuing digital technology revolution, is how to optimally align business strategy with digital technology to fully leverage the potential offered by these technologies in pursuit of longevity and growth. There is a paucity of empirical research examining how e-leadership in SMEs drives successful alignment between business strategy and digital technology, fostering longevity and growth. To address this gap, in this paper we develop an empirically derived e-leadership model. Initially we develop a theoretical model of e-leadership drawing on strategic alignment theory. This provides a theoretical foundation for how SMEs can harness digital technology in support of their business strategy, enabling sustainable growth. An in-depth empirical study was undertaken, interviewing 42 successful European SME leaders to validate, advance and substantiate our theoretically driven model. The outcome of this two-stage process – inductive development of a theoretically driven e-leadership model and deductive advancement to develop a complete model through in-depth interviews with successful European SME leaders – is an e-leadership model with specific constructs fostering effective strategic alignment. The resulting diagnostic model enables SME decision makers to exercise effective e-leadership by creating productive alignment between business strategy and digital technology, improving longevity and growth prospects.
Abstract:
Temperature and ozone observations from the Microwave Limb Sounder (MLS) on the EOS Aura satellite are used to study equatorial wave activity in the autumn of 2005. In contrast to previous observations for the same season in other years, the temperature anomalies in the middle and lower tropical stratosphere are found to be characterized by a strong wave-like eastward progression with zonal wave number equal to 3. Extended empirical orthogonal function (EOF) analysis reveals that the wave 3 components detected in the temperature anomalies correspond to a slow Kelvin wave with a period of 8 days and a phase speed of 19 m/s. Fluctuations associated with this Kelvin wave mode are also apparent in ozone profiles. Moreover, as expected from linear theory, the ozone fluctuations observed in the lower stratosphere are in phase with the temperature perturbations, and peak around 20–30 hPa where the mean ozone mixing ratios have the steepest vertical gradient. A search for other Kelvin wave modes has also been made using both the MLS observations and the analyses from one experiment in which MLS ozone profiles are assimilated into the European Centre for Medium-Range Weather Forecasts (ECMWF) data assimilation system via a 6-hourly 3D-Var scheme. Our results show that the characteristics of the wave activity detected in the ECMWF temperature and ozone analyses are in good agreement with MLS data.
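The EOF step mentioned above can be illustrated in miniature. Below is a sketch of ordinary EOF analysis for a two-point anomaly field with illustrative data; the study's extended variant additionally embeds time-lagged copies of the field before the same eigen-decomposition.

```python
import math

def leading_eof(anomalies):
    """Leading EOF of a time series of two-point spatial anomaly fields.

    Builds the 2x2 spatial covariance matrix and returns its principal
    eigenvector (the EOF pattern, unit-normalised) together with the
    fraction of total variance that pattern explains."""
    n = len(anomalies)
    c11 = sum(x * x for x, _ in anomalies) / n
    c22 = sum(y * y for _, y in anomalies) / n
    c12 = sum(x * y for x, y in anomalies) / n
    # Closed-form eigenvalues of the symmetric 2x2 covariance matrix.
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1 = tr / 2 + disc  # leading eigenvalue
    # Eigenvector for lam1 (handle the degenerate c12 == 0 case).
    if abs(c12) > 1e-15:
        v = (lam1 - c22, c12)
    else:
        v = (1.0, 0.0) if c11 >= c22 else (0.0, 1.0)
    norm = math.hypot(*v)
    eof = (v[0] / norm, v[1] / norm)
    explained = lam1 / tr if tr > 0 else 0.0
    return eof, explained

# Illustrative wave-like anomalies: the two points co-vary, so a single
# pattern should capture nearly all of the variance.
data = [(1.0, 0.9), (-1.1, -1.0), (0.8, 0.7), (-0.9, -0.8), (1.2, 1.1)]
eof, frac = leading_eof(data)
```

In practice the covariance (or its SVD equivalent) is taken over many grid points and, for extended EOF, over lagged copies of the field, but the decomposition logic is the same.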
Abstract:
We consider the problem of determining the pressure and velocity fields for a weakly compressible fluid flowing in a two-dimensional reservoir in an inhomogeneous, anisotropic porous medium, with vertical side walls and variable upper and lower boundaries, in the presence of vertical wells injecting or extracting fluid. Numerical solution of this problem may be expensive, particularly in the case that the depth scale of the layer h is small compared to the horizontal length scale l. This is a situation which occurs frequently in the application to oil reservoir recovery. Under the assumption that epsilon=h/l<<1, we show that the pressure field varies only in the horizontal direction away from the wells (the outer region). We construct two-term asymptotic expansions in epsilon in both the inner (near the wells) and outer regions and use the asymptotic matching principle to derive analytical expressions for all significant process quantities. This approach, via the method of matched asymptotic expansions, takes advantage of the small aspect ratio of the reservoir, epsilon, at precisely the stage where full numerical computations become stiff, and also reveals the detailed structure of the dynamics of the flow, both in the neighborhood of wells and away from wells.
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models provides a mathematically robust framework for describing the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is well justified. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed; these are necessary because, owing to dispersion, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
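The Lorentzian, Debye and Drude responses named above have standard closed forms, and a "linear combination of simple oscillator functions" is then just a weighted sum. A minimal sketch with illustrative parameter values (this is the textbook frequency-domain form, not the state-space formulation used in the work):

```python
def debye(omega, eps_inf, delta_eps, tau):
    """Debye relaxation: eps(w) = eps_inf + delta_eps / (1 + 1j*w*tau)."""
    return eps_inf + delta_eps / (1 + 1j * omega * tau)

def lorentz(omega, eps_inf, f, omega0, gamma):
    """Lorentz oscillator: strength f, resonance omega0, damping gamma."""
    return eps_inf + f * omega0**2 / (omega0**2 - omega**2 - 1j * gamma * omega)

def drude(omega, eps_inf, omega_p, gamma):
    """Drude free-carrier response with plasma frequency omega_p."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

def mixture(omega, terms):
    """Dielectric response of an arbitrary medium as a linear combination
    of simple oscillator terms: terms is a list of (weight, model, args)."""
    return sum(weight * model(omega, *args) for weight, model, args in terms)
```

For example, a medium with one Debye and one Lorentz contribution is `mixture(omega, [(0.5, debye, (2.0, 3.0, 1e-12)), (0.5, lorentz, (1.0, 2.0, 5e12, 1e11))])`; fitting the weights and parameters to a measured transient is then a system-identification problem of the kind the abstract describes.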
Abstract:
1. The management of threatened species is an important practical way in which conservationists can intervene in the extinction process and reduce the loss of biodiversity. Understanding the causes of population declines (past, present and future) is pivotal to designing effective practical management. This is the declining-population paradigm identified by Caughley. 2. There are three broad classes of ecological tool used by conservationists to guide management decisions for threatened species: statistical models of habitat use, demographic models and behaviour-based models. Each of these is described here, illustrated with a case study and evaluated critically in terms of its practical application. 3. These tools are fundamentally different. Statistical models of habitat use and demographic models both use descriptions of patterns in abundance and demography, in relation to a range of factors, to inform management decisions. In contrast, behaviour-based models describe the evolutionary processes underlying these patterns, and derive such patterns from the strategies employed by individuals when competing for resources under a specific set of environmental conditions. 4. Statistical models of habitat use and demographic models have been used successfully to make management recommendations for declining populations. To do this, assumptions are made about population growth or vital rates that will apply when environmental conditions are restored, based on either past data collected under favourable environmental conditions or estimates of these parameters when the agent of decline is removed. As a result, they can only be used to make reliable quantitative predictions about future environments when a comparable environment has been experienced by the population of interest in the past. 5. Many future changes in the environment driven by management will not have been experienced by a population in the past.
Under these circumstances, vital rates and their relationship with population density will change in the future in a way that is not predictable from past patterns. Reliable quantitative predictions about population-level responses then need to be based on an explicit consideration of the evolutionary processes operating at the individual level. 6. Synthesis and applications. It is argued that evolutionary theory underpins Caughley’s declining-population paradigm, and that it needs to become much more widely used within mainstream conservation biology. This will help conservationists examine critically the reliability of the tools they have traditionally used to aid management decision-making. It will also give them access to alternative tools, particularly when predictions are required for changes in the environment that have not been experienced by a population in the past.
Abstract:
Building services are worth about 2% of GDP and are essential for the effective and efficient operation of buildings. It is increasingly recognised that the value of a building is related to the way it supports the client organisation's ongoing business operations. Building services are central to the functional performance of buildings and provide the necessary conditions for the health, well-being, safety and security of occupants. They frequently comprise several technologically distinct sub-systems, and their design and construction require the involvement of numerous disciplines and trades. Designers and contractors working on the same project are frequently employed by different companies. Materials and equipment are supplied by a diverse range of manufacturers. Facilities managers are responsible for the operation of building services in use. Coordination between these participants is crucially important for achieving optimum performance, but is too often neglected, leaving room for serious faults; effective integration is therefore essential. Modern technology offers increasing opportunities for integrated personal-control systems for lighting, ventilation and security, as well as interoperability between systems. Opportunities for a new mode of systems integration are provided by the emergence of PFI/PPP procurement frameworks. This paper attempts to establish how systems integration can be achieved in the process of designing, constructing and operating building services. The essence of the paper is therefore to envisage the emergent organisational responses to the realisation of building services as an interactive systems network.
Abstract:
Genetic algorithms (GAs) have been introduced into site layout planning, as reported in a number of studies. In these studies, the objective functions were defined so as to employ the GAs in searching for the optimal site layout. However, few studies have investigated the actual closeness of relationships between site facilities, even though it is these relationships that ultimately govern the site layout. This study determined that the underlying factors of site layout planning for medium-size projects include work flow, personnel flow, safety and environment, and personal preferences. By finding the weightings on these factors and the corresponding closeness indices between each pair of facilities, a closeness relationship was deduced. Two contemporary mathematical approaches - fuzzy logic theory and an entropy measure - were adopted in deriving these results, in order to minimize the uncertainty and vagueness of the collected data and improve the quality of the information. GAs were then applied to search for the optimal site layout in a medium-size government project using the GeneHunter software. The objective function involved minimizing the total travel distance. An optimal layout was obtained within a short time. This reveals that the application of GAs to site layout planning is highly promising and efficient.
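The optimisation step described above can be sketched as a small permutation GA: each chromosome assigns facilities to site slots, and fitness is the closeness-weighted total travel distance. This is an illustrative sketch, not the GeneHunter configuration; all names, operators and values are hypothetical.

```python
import random

def layout_cost(perm, closeness, dist):
    """Total weighted travel distance: closeness[i][j] is the closeness
    index between facilities i and j, dist[a][b] the distance between
    site slots a and b, and perm[i] the slot assigned to facility i."""
    n = len(perm)
    return sum(closeness[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(i + 1, n))

def ga_site_layout(closeness, dist, pop_size=40, generations=200, seed=0):
    """Minimal permutation GA: tournament selection, order crossover,
    swap mutation, best-so-far elitism."""
    rng = random.Random(seed)
    n = len(closeness)
    cost = lambda p: layout_cost(p, closeness, dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            p1 = min(rng.sample(pop, 3), key=cost)
            p2 = min(rng.sample(pop, 3), key=cost)
            # Order crossover: copy a slice from p1, fill the rest from p2.
            a, b = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]
            fill = [g for g in p2 if g not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            # Swap mutation.
            if rng.random() < 0.2:
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        gen_best = min(pop, key=cost)
        if cost(gen_best) < cost(best):
            best = gen_best
    return best

# Illustrative 4-facility problem: facilities 0 and 1 strongly need adjacency.
closeness = [[0, 9, 1, 1], [9, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
# Site slots on a line at positions 0, 1, 2, 3.
dist = [[abs(a - b) for b in range(4)] for a in range(4)]
best = ga_site_layout(closeness, dist)
```

On this toy problem the GA should place facilities 0 and 1 in adjacent slots, since that pair carries nearly all of the closeness weight.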
Abstract:
The assessment of building energy efficiency is one of the most effective measures for reducing building energy consumption. This paper proposes a holistic method (HMEEB) for assessing and certifying building energy efficiency based on the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. HMEEB has three main features: (i) it provides a method to assess and certify building energy efficiency, and serves as an analytical tool to identify improvement opportunities; (ii) it combines a wealth of information on building energy efficiency assessment, including the identification of indicators and a weighting mechanism; and (iii) it provides a method to identify and deal with the inherent uncertainties in the assessment procedure. This paper demonstrates the robustness, flexibility and effectiveness of the proposed method, using two examples to assess the energy efficiency of two residential buildings, both located in the ‘Hot Summer and Cold Winter’ zone in China. The proposed certification method provides detailed recommendations for policymakers in the context of carbon emission reduction targets and promoting energy efficiency in the built environment. The method is transferable to other countries and regions, using the indicator weighting system to accommodate local climatic, economic and social factors.
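Dempster's rule of combination, the core of the D-S machinery such a method builds on, can be sketched compactly. The indicator names and mass values below are illustrative, not from the paper.

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses.  Mass assigned to conflicting
    (empty-intersection) pairs is discarded and the rest renormalised."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Illustrative example: two indicators both lean towards "efficient",
# with the remaining mass on the whole frame (ignorance).
E, I = "efficient", "inefficient"
theta = frozenset({E, I})
m1 = {frozenset({E}): 0.6, theta: 0.4}
m2 = {frozenset({E}): 0.7, theta: 0.3}
m = combine_dempster(m1, m2)
```

Combining the two bodies of evidence concentrates mass on "efficient" while leaving a small residue of ignorance, which is exactly the behaviour an evidential-reasoning assessment exploits when aggregating indicators.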
Abstract:
Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is the case because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an ‘operational’ algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range for which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming turbulent kinetic energy dissipation to eddy dissipation rate, which is becoming the worldwide ‘standard’ turbulence metric. In a comparison with the original method and with the Graphical Turbulence Guidance second version (GTG2) automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.