53 results for local-to-zero analysis
Abstract:
Sensory thresholds are often collected through ascending forced-choice methods. Group thresholds are important for comparing stimuli or populations, yet the method has two problems: an individual may guess the correct answer by chance at any concentration step, and an individual may detect correctly at low concentrations but become adapted or fatigued at higher concentrations. The survival-analysis method deals with both issues. Individual sequences of incorrect and correct answers are adjusted, taking into account the group performance at each concentration; the adjustment reduces the probability that runs of consecutive correct answers are counted as detection when they could have arisen by chance. Adjusted sequences are then submitted to survival analysis to determine group thresholds. The technique was applied to an aroma threshold and a taste threshold study. It resulted in group thresholds similar to those from ASTM or logarithmic-regression procedures. Significant differences in taste thresholds between younger and older adults were found. The approach provides a more robust technique than previous estimation methods.
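A minimal sketch of the survival-analysis step only, assuming hypothetical forced-choice data in which each panellist either first detects at some concentration step or never detects (right-censored). The function, data and 0.5 read-off are illustrative assumptions, not the authors' adjustment procedure or implementation.

```python
import numpy as np

def kaplan_meier(step_of_detection, detected):
    """Kaplan-Meier estimate of the 'still not detecting' curve.

    step_of_detection : concentration step at which each panellist is scored
                        as detecting (or their last step, if they never detect).
    detected          : True if the panellist detected, False if censored.
    """
    steps = np.unique(step_of_detection)
    surv, s = [], 1.0
    for t in steps:
        at_risk = np.sum(step_of_detection >= t)
        events = np.sum((step_of_detection == t) & detected)
        s *= 1.0 - events / at_risk
        surv.append((t, s))
    return surv

# Illustrative data: steps at which 10 panellists are scored; two never detect.
step = np.array([2, 3, 3, 4, 4, 5, 6, 6, 6, 6])
event = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=bool)

for t, s in kaplan_meier(step, event):
    print(f"step {t}: fraction not yet detecting = {s:.2f}")
# A group threshold can be read off as the step where this curve crosses 0.5.
```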
Abstract:
In 2006 the UK government announced a move to zero carbon homes by 2016. The demand posed a major challenge to policy makers and construction professionals, entailing a protracted process of policy design. The task of giving content to this target is used to explore the role of evidence in the policy process. Whereas much literature on policy and evidence treats evidence as an external input, independent of politics, this paper explores the ongoing mutual constitution of both. Drawing on theories of policy framing and the sociology of classification, the account follows the story of a policy for Zero Carbon Homes through the parameters and values used to specify the target. Particular attention is given to the role of Regulatory Impact Assessments (RIAs) and to the creation of a new policy venue, the Zero Carbon Hub. The analysis underlines the way in which choices about how to model and measure the aims potentially transform them, the importance of policy venues for transparency, and the role of RIAs in the authorization of particular definitions. A more transparent, open approach to policy formulation is needed, in which the framing of evidence is recognized as an integral part of the policy process.
Abstract:
Numerical simulations are performed to assess the influence of the large-scale circulation on the transition from suppressed to active convection. The model tool is a coupled-column model: two cloud-resolving models fully coupled via a large-scale circulation, which is derived from the requirement that the instantaneous domain-mean potential temperature profiles of the two columns remain close to each other, the so-called weak temperature gradient (WTG) approach. The transition simulations are initialized from coupled-column simulations over non-uniform surface forcing, and the transition is forced within the dry column by changing the local and/or remote surface forcing to uniform surface forcing across the columns. As the strength of the circulation is reduced to zero, moisture is recharged into the dry column and a transition to active convection occurs once the column is sufficiently moistened to sustain deep convection. Direct effects of changing the surface forcing occur over the first few days only; afterwards, it is the evolution of the large-scale circulation that systematically modulates the transition, with contributions approximately equally divided between its heating and moistening effects. A transition time is defined to summarize the evolution from suppressed to active convection: it is the time when the rain rate within the dry column is halfway to the mean value obtained at equilibrium over uniform surface forcing. The transition time is around twice as long for a transition that is forced remotely as for one that is forced locally. Simulations in which both local and remote surface forcings are changed produce intermediate transition times.
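A schematic of the weak temperature gradient coupling described above, in a hypothetical two-column setting: a large-scale vertical velocity is diagnosed so that it would remove the inter-column potential-temperature difference over a relaxation time-scale. The grid, profiles and time-scale are illustrative assumptions, not the authors' coupled-column code.

```python
import numpy as np

# Illustrative vertical grid and potential-temperature profiles (K)
z = np.linspace(0.0, 15e3, 60)                     # height (m)
theta_moist = 300.0 + 4.0e-3 * z                   # warmer, convectively active column
theta_dry = theta_moist - 0.5 * np.exp(-z / 5e3)   # slightly cooler, dry column

tau = 2.0 * 3600.0                                 # assumed WTG relaxation time-scale (s)
dtheta_dz = np.gradient(theta_dry, z)              # static stability of the dry column

# WTG-style diagnostic: the large-scale vertical velocity whose adiabatic
# warming/cooling would relax the temperature difference over time tau.
w_ls = (theta_dry - theta_moist) / (tau * np.maximum(dtheta_dz, 1e-4))

print("peak large-scale subsidence in the dry column: %.4f m/s" % w_ls.min())
```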
Abstract:
At Hollow Banks Quarry, Scorton, located just north of Catterick (N Yorks.), a highly unusual group of 15 late Roman burials was excavated between 1998 and 2000. The small cemetery consists of almost exclusively male burials, dated to the fourth century. An unusually large proportion of these individuals were buried with crossbow brooches and belt fittings, suggesting that they may have been serving in the late Roman army or administration and may have come to Scorton from the Continent. Multi-isotope analyses (carbon, nitrogen, oxygen and strontium) of nine sufficiently well-preserved individuals indicate that seven males, all equipped with crossbow brooches and/or belt fittings, were not local to the Catterick area and that at least six of them probably came from the European mainland. For a tenth individual, only dietary (carbon and nitrogen isotope) analysis was possible, and it also suggests a non-local origin. At Scorton it appears that the presence of crossbow brooches and belts in the grave was more important for suggesting non-British origins than whether or not they were worn. This paper argues that cultural and social factors played a crucial part in the creation of funerary identities and highlights the need for both multi-proxy analyses and the careful contextual study of artefacts.
Abstract:
The Kelvin-Helmholtz (KH) problem, with zero stratification, is examined as a limiting case of the Rayleigh model of a single shear layer whose width tends to zero. The transition of the Rayleigh modal dispersion relation to the KH one, as well as the disappearance of the supermodal transient growth in the KH limit, are both rationalized from the counter-propagating Rossby wave perspective.
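For reference, a sketch of the limit in question, assuming the standard piecewise-linear Rayleigh profile with uniform streams ±U0 joined by a shear layer of half-width L (the paper's own notation may differ):

```latex
\[
  c^{2} \;=\; \frac{U_0^{2}}{(2kL)^{2}}\Bigl[(1-2kL)^{2} - e^{-4kL}\Bigr]
  \qquad \text{(Rayleigh shear layer)}
\]
\[
  (1-2kL)^{2} - e^{-4kL} \;=\; -4k^{2}L^{2} + O\!\bigl(k^{3}L^{3}\bigr)
  \;\;\Longrightarrow\;\;
  c^{2} \;\to\; -U_0^{2},\qquad c \;\to\; \pm\, i\,U_0 \quad (kL \to 0),
\]
```

which recovers the unstratified KH (vortex-sheet) result, in which every wavenumber is unstable with growth rate kU0.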
Abstract:
Turbulence statistics obtained by direct numerical simulations are analysed to investigate spatial heterogeneity within regular arrays of building-like cubical obstacles. Two different array layouts are studied, staggered and square, both at a packing density of $\lambda_p = 0.25$. The flow statistics analysed are the mean streamwise velocity ($\overline{u}$), shear stress ($\overline{u'w'}$), turbulent kinetic energy ($k$) and dispersive stress fraction ($\tilde{u}\tilde{w}$). The spatial flow patterns and spatial distribution of these statistics in the two arrays are found to be very different. Local regions of high spatial variability are identified. The overall spatial variances of the statistics are shown to be generally very significant in comparison with their spatial averages within the arrays. Above the arrays the spatial variances, as well as the dispersive stresses, decay rapidly to zero. The heterogeneity is explored further by separately considering six different flow regimes identified within the arrays, described here as: channelling region, constricted region, intersection region, building-wake region, canyon region and front-recirculation region. It is found that the flow in the first three regions is relatively homogeneous, but that spatial variances in the latter three regions are large, especially in the building-wake and canyon regions. The implication is that, in general, the flow immediately behind (and, to a lesser extent, in front of) a building is much more heterogeneous than elsewhere, even in the relatively dense arrays considered here. Most of the dispersive stress is concentrated in these regions. Considering the experimental difficulties of obtaining enough point measurements to form a representative spatial average, the error incurred by degrading the sampling resolution is investigated. It is found that a good estimate for both area and line averages can be obtained using a relatively small number of strategically located sampling points.
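A minimal sketch of the double-averaging decomposition behind the dispersive stress and spatial variance discussed above, assuming time-averaged velocity fields on a regular horizontal grid at one height; the fields below are synthetic stand-ins, not the DNS output.

```python
import numpy as np

# Illustrative time-averaged fields on a horizontal (x, y) grid at one height
rng = np.random.default_rng(0)
nx = ny = 64
ubar = 2.0 + 0.3 * rng.standard_normal((nx, ny))   # time-mean streamwise velocity
wbar = 0.1 * rng.standard_normal((nx, ny))         # time-mean vertical velocity

# Horizontal spatial average and the dispersive (spatial) fluctuations
U = ubar.mean()                  # <ubar>
W = wbar.mean()                  # <wbar>
u_tilde = ubar - U               # spatial deviation of the time mean
w_tilde = wbar - W

dispersive_stress = (u_tilde * w_tilde).mean()     # <u~ w~>
spatial_variance_u = (u_tilde ** 2).mean()         # spatial variance of ubar

print("spatial average of ubar:   ", U)
print("spatial variance of ubar:  ", spatial_variance_u)
print("dispersive stress <u~ w~>: ", dispersive_stress)

# Effect of degrading the sampling resolution: average over a coarse subset
# of "measurement points" and compare with the full-field spatial average.
coarse_estimate = ubar[::8, ::8].mean()
print("estimate from an 8x8-subsampled grid:", coarse_estimate)
```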
Abstract:
On the time scale of a century, the Atlantic thermohaline circulation (THC) is sensitive to the global surface salinity distribution. The advection of salinity toward the deep convection sites of the North Atlantic is one of the driving mechanisms of the THC, and it has both a northward and a southward contribution. The northward salinity advection (Nsa) is related to evaporation in the subtropics and contributes to increased salinity in the convection sites. The southward salinity advection (Ssa) is related to the Arctic freshwater forcing and, on the contrary, tends to diminish salinity in the convection sites. THC changes result from a delicate balance between these opposing mechanisms. In this study we evaluate these two effects using the IPSL-CM4 ocean-atmosphere-sea-ice coupled model (used for IPCC AR4). Perturbation experiments have been integrated for 100 years under modern insolation and trace gases, with river runoff and evaporation minus precipitation successively set to zero for the ocean during the coupling procedure. This allows the effects of Nsa and Ssa to be estimated with their specific time scales. It is shown that the convection sites in the North Atlantic exhibit different sensitivities to these processes. The Labrador Sea exhibits a dominant sensitivity to local forcing and Ssa with a typical time scale of 10 years, whereas the Irminger Sea is mostly sensitive to Nsa with a 15-year time scale. The GIN Seas respond to both effects, with a time scale of 10 years for Ssa and 20 years for Nsa. It is concluded that, in the IPSL-CM4, the global freshwater forcing damps the THC on centennial time scales.
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
A remote haploscopic photorefractor was used to assess objective binocular vergence and accommodation responses in 157 full-term healthy infants aged 1-6 months while they fixated a brightly coloured target moving between fixation distances of 2, 1, 0.5 and 0.33 m. Vergence and accommodation response gains matured rapidly from 'flat' neonatal responses at an intercept of approximately 2 dioptres (D) for accommodation and 2.5 metre angles (MA) for vergence, reaching adult-like values at 4 months. Vergence gain was marginally higher in females (p = 0.064), but accommodation gain was higher (p = 0.034) and the accommodative intercept closer to zero (p = 0.004) in males in the first 3 months, as they relaxed accommodation more appropriately for distant targets. More females showed flat accommodation responses (p = 0.029). More males behaved hypermetropically in the first two months of life, but when these hypermetropic infants were excluded from the analysis, the gender difference remained. Gender differences disappeared after three months. Responses were variable, and infants could respond appropriately on both measures simultaneously, on neither, or on only one, at all ages. If accommodation was appropriate (gain between 0.7 and 1.3; r² > 0.7) but vergence was not, males over- and under-converged equally, while the females who accommodated appropriately were more likely to over-converge (p = 0.008). The apparent earlier maturity of the male accommodative responses may be due to refractive error differences, but could also reflect a gender-specific male preference for blur cues while females show an earlier preference for disparity, which may underpin the earlier emerging, disparity-dependent stereopsis and full vergence found in females in other studies.
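A small illustration of how a response gain and intercept of the kind quoted above can be computed, assuming paired stimulus demands and measured responses; the values are made up and the study's actual fitting procedure is not specified in the abstract.

```python
import numpy as np

# Target demands for the four fixation distances (dioptres: 1 / distance in m)
demand = 1.0 / np.array([2.0, 1.0, 0.5, 0.33])       # 0.5, 1.0, 2.0, ~3.0 D

# Hypothetical accommodation responses of one infant (dioptres)
response = np.array([1.1, 1.4, 2.3, 2.9])

# Gain = slope of response against demand; intercept in dioptres
slope, intercept = np.polyfit(demand, response, 1)
r = np.corrcoef(demand, response)[0, 1]

print(f"gain = {slope:.2f}, intercept = {intercept:.2f} D, r^2 = {r**2:.2f}")
# With criteria like those quoted in the abstract, a response might be called
# "appropriate" if 0.7 <= gain <= 1.3 and r^2 > 0.7.
```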
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt, which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated; these include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2, which led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer: turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. Errors of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
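The point about clipping negative tracer values can be illustrated with a toy one-dimensional semi-Lagrangian step using cubic Lagrange interpolation. This is only a schematic of the mechanism, not the UM's advection scheme; the grid, Courant number and tracer field are made up.

```python
import numpy as np

nx = 100
x = np.arange(nx)
# Sharp-edged tracer "puff": exact zeros next to a steep gradient
q = np.where((x > 40) & (x < 60), 1.0, 0.0)

courant = 0.3            # fraction of a grid cell advected per step
xd = x - courant         # departure points for one semi-Lagrangian step

def cubic_interp(q, xd):
    """Cubic Lagrange interpolation of q at the departure points (periodic)."""
    i = np.floor(xd).astype(int)
    a = xd - i                                   # fractional position in the cell
    qm1, q0, q1, q2 = (q[(i + k) % len(q)] for k in (-1, 0, 1, 2))
    return (-a * (a - 1) * (a - 2) / 6 * qm1
            + (a + 1) * (a - 1) * (a - 2) / 2 * q0
            - (a + 1) * a * (a - 2) / 2 * q1
            + (a + 1) * a * (a - 1) / 6 * q2)

q_new = cubic_interp(q, xd)
print("minimum after advection:", q_new.min())              # small negative undershoots
print("total mass before clipping:", q_new.sum())
print("total mass after clipping :", np.clip(q_new, 0, None).sum())  # inflated by clipping
```

Setting the undershoots to zero removes negative mass but leaves the overshoots untouched, so the clipped field carries more total tracer than the unclipped one; a flux-corrected (or otherwise conservative, shape-preserving) step avoids this.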
Abstract:
The Stokes drift induced by surface waves distorts turbulence in the wind-driven mixed layer of the ocean, leading to the development of streamwise vortices, or Langmuir circulations, on a wide range of scales. We investigate the structure of the resulting Langmuir turbulence, and contrast it with the structure of shear turbulence, using rapid distortion theory (RDT) and kinematic simulation of turbulence. Firstly, these linear models show clearly why elongated streamwise vortices are produced in Langmuir turbulence, when Stokes drift tilts and stretches vertical vorticity into horizontal vorticity, whereas elongated streaky structures in the streamwise velocity fluctuations (u) are produced in shear turbulence, because there is a cancellation in the streamwise vorticity equation and instead it is vertical vorticity that is amplified. Secondly, we develop scaling arguments, illustrated by analysing data from LES, indicating that Langmuir turbulence is generated when the deformation of the turbulence by mean shear is much weaker than the deformation by the Stokes drift. These scalings motivate a quantitative RDT model of Langmuir turbulence that accounts for deformation of turbulence by Stokes drift and blocking by the air-sea interface, and that is shown to yield profiles of the velocity variances in good agreement with LES. The physical picture that emerges, at least in the LES, is as follows. Early in the life cycle of a Langmuir eddy, initial turbulent disturbances of vertical vorticity are amplified algebraically by the Stokes drift into elongated streamwise vortices, the Langmuir eddies. The turbulence is thus in a near two-component state, with the streamwise fluctuations suppressed relative to the vertical and spanwise components. Near the surface, over a depth of order the integral length scale of the turbulence, the vertical velocity (w) is brought to zero by blocking at the air-sea interface. Since the turbulence is nearly two-component, this vertical energy is transferred into the spanwise fluctuations, considerably enhancing them at the interface. After a time of order half the eddy decorrelation time, nonlinear processes, such as distortion by the strain field of the surrounding eddies, arrest the deformation and the Langmuir eddy decays. Presumably, Langmuir turbulence then consists of a statistically steady state of such Langmuir eddies. The analysis thus provides a dynamical connection between the flow structures in LES of Langmuir turbulence and the dominant balance between Stokes production and dissipation in the turbulent kinetic energy budget found by previous authors.
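As a schematic of the tilting mechanism described above, with assumed notation (u_s(z) the Stokes drift, ω_x and ω_z the streamwise and vertical vorticity); this is the generic Craik-Leibovich/rapid-distortion tilting term, not necessarily the paper's exact formulation:

```latex
\[
  \frac{\partial \omega_x}{\partial t} \;\approx\; \omega_z\,\frac{\mathrm{d}u_s}{\mathrm{d}z}
  \qquad\Longrightarrow\qquad
  \omega_x(t) \;\approx\; \omega_x(0) + \omega_z(0)\,\frac{\mathrm{d}u_s}{\mathrm{d}z}\,t ,
\]
```

i.e. vertical vorticity is tilted by the vertical shear of the Stokes drift into streamwise vorticity that grows algebraically in time, consistent with the life-cycle picture above.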
Abstract:
The problems encountered by individuals with disabilities when accessing large public buildings are described, and a solution based on the generation of virtual models of the built environment is proposed. These models are superimposed on a control network infrastructure of the kind currently utilised in intelligent building applications such as lighting, heating and access control. The use of control network architectures facilitates the creation of distributed models that closely mirror both the physical and control properties of the environment. The model of the environment is kept local to the installation, which allows the virtual representation of a large building to be decomposed into an interconnecting series of smaller models. This paper describes two methods of interacting with the virtual model: firstly, a two-dimensional aural representation that can be used as the basis of a portable navigational device; and secondly, an augmented reality system called DAMOCLES that overlays additional information on a user's normal field of view. The provision of virtual environments offers new possibilities in the man-machine interface, so that intuitive access to network-based services and control functions can be given to a user.
Abstract:
An error polynomial is defined, the coefficients of which indicate the difference at any instant between a system and a lower-order model approximating it. It is shown how the Markov parameters and time-series proportionals of the model can be matched with those of the system by setting error-polynomial coefficients to zero. Also discussed is the way in which the error between system and model can be considered as a filtered form of an error input function specified by means of model parameter selection.
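A schematic of how zeroing error-polynomial coefficients can enforce such matching, under assumed notation (full system G(s) = N(s)/D(s), reduced-order model R(s) = P(s)/Q(s)); the paper's own definitions may differ:

```latex
\[
  G(s) - R(s) \;=\; \frac{N(s)\,Q(s) - P(s)\,D(s)}{D(s)\,Q(s)} \;=\; \frac{E(s)}{D(s)\,Q(s)} .
\]
```

Setting the highest-order coefficients of E(s) to zero matches the leading terms of the expansions of G and R about s = infinity (the Markov parameters), while setting the lowest-order coefficients to zero matches the expansions about s = 0, i.e. the low-frequency behaviour associated with the time-series proportionals referred to above; the free coefficients of P and Q are chosen accordingly.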
Abstract:
In recent years, there has been an increase in research on conventions motivated by the game-theoretic contributions of the philosopher David Lewis. Prior to this surge in interest, discussions of convention in economics had been tied to the analysis of John Maynard Keynes's writings. These literatures are distinct and have very little overlap. Yet this confluence of interests raises interesting methodological questions. Does the use of a common term, convention, denote a set of shared concerns? Can we identify what differentiates the game theoretic models from the Keynesian ones? This paper maps out the three most developed accounts of convention within economics and discusses their relations with each other in an attempt to provide an answer.
Abstract:
Global agreements have proliferated in the past ten years. One of these is the Kyoto Protocol, which contains provisions for emissions reductions through carbon trading under the Clean Development Mechanism (CDM). The CDM is a market-based instrument that allows companies in Annex I countries to offset their greenhouse gas emissions through energy and tree offset projects in the global South. I set out to examine the governance challenges posed by the institutional design of carbon sequestration projects under the CDM. I examine three global narratives associated with the design of CDM forest projects, specifically North-South knowledge politics, green developmentalism, and community participation, and subsequently assess how these narratives match local practices in two projects in Latin America. Findings suggest that governance problems operate at multiple levels and that the rhetoric of global carbon actors often presents these schemes in one light, while the rhetoric of those who are immediately involved locally may be different. I also stress the alarmist discourse that blames local people for the problems of environmental change. The case studies illustrate the need for vertical communication and interaction and nested governance arrangements, as well as horizontal arrangements. I conclude that the global framing of forests as offsets requires better integration of local relationships to forests and their management, and more effective institutions at multiple levels to link the very local to the very large scale when dealing with carbon sequestration in the CDM.