940 results for local-to-zero analysis
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
A remote haploscopic photorefractor was used to assess objective binocular vergence and accommodation responses in 157 full-term healthy infants aged 1-6 months while fixating a brightly coloured target moving between fixation distances of 2, 1, 0.5 and 0.33 m. Vergence and accommodation response gain matured rapidly from 'flat' neonatal responses at an intercept of approximately 2 dioptres (D) for accommodation and 2.5 metre angles (MA) for vergence, reaching adult-like values at 4 months. Vergence gain was marginally higher in females (p = 0.064), but accommodation gain was higher (p = 0.034) and the accommodative intercept closer to zero (p = 0.004) in males in the first 3 months, as they relaxed accommodation more appropriately for distant targets. More females showed flat accommodation responses (p = 0.029). More males behaved hypermetropically in the first two months of life, but when these hypermetropic infants were excluded from the analysis, the gender difference remained. Gender differences disappeared after three months. Responses were variable, and at all ages infants could behave appropriately on both measures, on neither, or on only one. If accommodation was appropriate (gain between 0.7 and 1.3; r² > 0.7) but vergence was not, males over- and under-converged equally, while the females who accommodated appropriately were more likely to over-converge (p = 0.008). The apparent earlier maturity of the male accommodative responses may be due to refractive error differences, but could also reflect a gender-specific male preference for blur cues, while females show an earlier preference for disparity, which may underpin the earlier-emerging, disparity-dependent stereopsis and full vergence found in females in other studies.
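The study's "appropriate response" criterion (gain between 0.7 and 1.3 with r² > 0.7) amounts to fitting a straight line of measured response against target demand. The sketch below, with made-up numbers for a single hypothetical infant (not data from the study), shows how such a gain, intercept and r² might be computed and the criterion applied.

```python
import numpy as np

fixation_m = np.array([2.0, 1.0, 0.5, 0.33])   # fixation distances used in the study (m)
demand = 1.0 / fixation_m                      # demand in dioptres (or metre angles)
response = np.array([0.9, 1.3, 2.1, 3.1])      # hypothetical responses for one infant

gain, intercept = np.polyfit(demand, response, 1)   # slope of response vs demand = gain
r2 = np.corrcoef(demand, response)[0, 1] ** 2
appropriate = 0.7 <= gain <= 1.3 and r2 > 0.7       # the abstract's criterion

print(f"gain = {gain:.2f}, intercept = {intercept:.2f}, r^2 = {r2:.2f}, appropriate = {appropriate}")
```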
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. Timing errors of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
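As a hedged illustration of the advection issue described above (a deliberately simplified 1-D cubic-interpolation semi-Lagrangian step, not the UM's actual monotone scheme), the sketch below shows how interpolation undershoots near a sharp tracer gradient produce negative values, and how clipping them to zero creates spurious mass unless some conservation fix is applied; a crude global rescaling stands in here for a proper flux-corrected transport step.

```python
import numpy as np

nx, dx, dt, u = 200, 1.0, 0.5, 1.3             # 1-D periodic grid and a constant wind
x = np.arange(nx) * dx
q = np.where(np.abs(x - 50) < 5, 1.0, 0.0)     # a sharp-edged tracer puff

def semi_lagrangian_step(q, u, dt, dx):
    """Advect q by tracing back to departure points and cubic-interpolating (periodic)."""
    xd = np.arange(len(q)) - u * dt / dx       # departure points in index space
    i = np.floor(xd).astype(int)
    a = xd - i                                 # fractional position within the cell
    qm1, q0, q1, q2 = (q[(i + k) % len(q)] for k in (-1, 0, 1, 2))
    # Cubic Lagrange weights for nodes at offsets -1, 0, 1, 2:
    w_m1 = -a * (a - 1) * (a - 2) / 6
    w_0 = (a + 1) * (a - 1) * (a - 2) / 2
    w_1 = -(a + 1) * a * (a - 2) / 2
    w_2 = (a + 1) * a * (a - 1) / 6
    return w_m1 * qm1 + w_0 * q0 + w_1 * q1 + w_2 * q2

mass0 = q.sum()
clipped = 0.0
for _ in range(100):
    q = semi_lagrangian_step(q, u, dt, dx)
    clipped += -q[q < 0].sum()     # cubic interpolation undershoots at the sharp edges
    q = np.clip(q, 0.0, None)      # "set negatives to zero": adds spurious tracer mass
    q *= mass0 / q.sum()           # crude global fixer, a stand-in for flux-corrected transport

print("tracer mass spuriously created by clipping over 100 steps:", round(clipped, 3))
```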
Abstract:
The Stokes drift induced by surface waves distorts turbulence in the wind-driven mixed layer of the ocean, leading to the development of streamwise vortices, or Langmuir circulations, on a wide range of scales. We investigate the structure of the resulting Langmuir turbulence, and contrast it with the structure of shear turbulence, using rapid distortion theory (RDT) and kinematic simulation of turbulence. Firstly, these linear models show clearly why elongated streamwise vortices are produced in Langmuir turbulence, when Stokes drift tilts and stretches vertical vorticity into horizontal vorticity, whereas elongated streaky structures in streamwise velocity fluctuations (u) are produced in shear turbulence, because there is a cancellation in the streamwise vorticity equation and instead it is vertical vorticity that is amplified. Secondly, we develop scaling arguments, illustrated by analysing data from LES, that indicate that Langmuir turbulence is generated when the deformation of the turbulence by mean shear is much weaker than the deformation by the Stokes drift. These scalings motivate a quantitative RDT model of Langmuir turbulence that accounts for deformation of turbulence by Stokes drift and blocking by the air–sea interface, and that is shown to yield profiles of the velocity variances in good agreement with LES. The physical picture that emerges, at least in the LES, is as follows. Early in the life cycle of a Langmuir eddy, initial turbulent disturbances of vertical vorticity are amplified algebraically by the Stokes drift into elongated streamwise vortices, the Langmuir eddies. The turbulence is thus in a near two-component state, with the streamwise velocity fluctuations suppressed and the spanwise and vertical fluctuations enhanced. Near the surface, over a depth of order the integral length scale of the turbulence, the vertical velocity (w) is brought to zero by blocking of the air–sea interface. Since the turbulence is nearly two-component, this vertical energy is transferred into the spanwise fluctuations, considerably enhancing the spanwise velocity variance at the interface. After a time of order half the eddy decorrelation time, the nonlinear processes, such as distortion by the strain field of the surrounding eddies, arrest the deformation and the Langmuir eddy decays. Presumably, Langmuir turbulence then consists of a statistically steady state of such Langmuir eddies. The analysis then provides a dynamical connection between the flow structures in LES of Langmuir turbulence and the dominant balance between Stokes production and dissipation in the turbulent kinetic energy budget, found by previous authors.
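A common back-of-envelope indicator of the regime described above is the turbulent Langmuir number of McWilliams et al. (1997), La_t = (u*/U_s(0))^(1/2), which compares shear-driven and Stokes-drift-driven forcing. The short sketch below evaluates it for an assumed monochromatic deep-water wave and friction velocity; the values are illustrative, not taken from the paper.

```python
import numpy as np

g = 9.81
amp, wavelength = 1.0, 60.0          # monochromatic deep-water wave (assumed values)
k = 2 * np.pi / wavelength
omega = np.sqrt(g * k)               # deep-water dispersion relation
Us0 = omega * k * amp**2             # surface Stokes drift, roughly 0.11 m/s here
u_star = 0.01                        # waterside friction velocity in m/s (assumed)

La_t = np.sqrt(u_star / Us0)         # turbulent Langmuir number
print(f"U_s(0) = {Us0:.3f} m/s, La_t = {La_t:.2f} (values ~0.3 indicate the Langmuir regime)")
```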
Abstract:
The problems encountered by individuals with disabilities when accessing large public buildings are described, and a solution based on the generation of virtual models of the built environment is proposed. These models are superimposed on a control network infrastructure currently utilised in intelligent building applications such as lighting, heating and access control. The use of control network architectures facilitates the creation of distributed models that closely mirror both the physical and control properties of the environment. The model of the environment is kept local to the installation, which allows the virtual representation of a large building to be decomposed into an interconnecting series of smaller models. This paper describes two methods of interacting with the virtual model: first, a two-dimensional aural representation that can be used as the basis of a portable navigational device; and second, an augmented reality system called DAMOCLES that overlays additional information on a user's normal field of view. The provision of virtual environments offers new possibilities in the man-machine interface, so that intuitive access to network-based services and control functions can be given to a user.
Abstract:
An error polynomial is defined, the coefficients of which indicate the difference at any instant between a system and a model of lower order approximating the system. It is shown how Markov parameters and time series proportionals of the model can be matched with those of the system by setting error polynomial coefficients to zero. Also discussed is the way in which the error between system and model can be considered as being a filtered form of an error input function specified by means of model parameter selection.
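A minimal sketch of the error-polynomial idea, in my own notation rather than necessarily the paper's: form the error E(s) = G(s) - R(s) between a third-order system G = n/d and a first-order model R = k/(s + a) over a common denominator, and set the leading coefficients of its numerator (the error polynomial) to zero, which forces the model's first Markov parameters to match those of the system.

```python
import sympy as sp

s, z, k, a = sp.symbols("s z k a")

n = 2*s**2 + 5*s + 1              # illustrative system numerator
d = s**3 + 6*s**2 + 11*s + 6      # illustrative system denominator
p, q = k, s + a                   # reduced model R(s) = k / (s + a)

err_num = sp.expand(n*q - p*d)                 # numerator of E(s): the error polynomial
coeffs = sp.Poly(err_num, s).all_coeffs()      # highest power of s first

# Two free model parameters, so zero the two leading error-polynomial coefficients.
sol = sp.solve(coeffs[:2], [k, a], dict=True)[0]
print("matched model parameters:", sol)        # expect k = 2, a = 7/2

# Verify: the 1/s expansions (whose coefficients are the Markov parameters) agree to two terms.
for name, expr in [("system", n/d), ("model ", (p/q).subs(sol))]:
    print(name, sp.series(expr.subs(s, 1/z), z, 0, 3))
```

Zeroing the trailing (lowest-order) coefficients of the same error polynomial instead matches the expansion about s = 0, which appears to be the time-domain matching the abstract refers to.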
Abstract:
In recent years, there has been an increase in research on conventions motivated by the game-theoretic contributions of the philosopher David Lewis. Prior to this surge in interest, discussions of convention in economics had been tied to the analysis of John Maynard Keynes's writings. These literatures are distinct and have very little overlap. Yet this confluence of interests raises interesting methodological questions. Does the use of a common term, convention, denote a set of shared concerns? Can we identify what differentiates the game theoretic models from the Keynesian ones? This paper maps out the three most developed accounts of convention within economics and discusses their relations with each other in an attempt to provide an answer.
Abstract:
Global agreements have proliferated in the past ten years. One of these is the Kyoto Protocol, which contains provisions for emissions reductions by trading carbon through the Clean Development Mechanism (CDM). The CDM is a market-based instrument that allows companies in Annex I countries to offset their greenhouse gas emissions through energy and tree offset projects in the global South. I set out to examine the governance challenges posed by the institutional design of carbon sequestration projects under the CDM. I examine three global narratives associated with the design of CDM forest projects, specifically North-South knowledge politics, green developmentalism, and community participation, and subsequently assess how these narratives match with local practices in two projects in Latin America. Findings suggest that governance problems are operating at multiple levels and that the rhetoric of global carbon actors often casts these schemes in one light, while the rhetoric of those immediately involved locally may be different. I also highlight the alarmist discourse that blames local people for the problems of environmental change. The case studies illustrate the need for vertical communication and interaction and nested governance arrangements, as well as horizontal arrangements. I conclude that the global framing of forests as offsets requires better integration of local relationships to forests and their management, and more effective institutions at multiple levels to link the very local to the very large scale when dealing with carbon sequestration in the CDM.
Abstract:
This study focuses on the wealth-protective effects of socially responsible firm behavior by examining the association between corporate social performance (CSP) and financial risk for an extensive panel data sample of S&P 500 companies between the years 1992 and 2009. In addition, the link between CSP and investor utility is investigated. The main findings are that corporate social responsibility is negatively but weakly related to systematic firm risk and that corporate social irresponsibility is positively and strongly related to financial risk. The fact that both conventional and downside risk measures lead to the same conclusions adds convergent validity to the analysis. However, the risk-return trade-off appears to be such that no clear utility gain or loss can be realized by investing in firms characterized by different levels of social and environmental performance. Overall volatility conditions of the financial markets are shown to play a moderating role in the nature and strength of the CSP-risk relationship.
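To make the contrast between conventional and downside risk measures concrete, here is a small, hedged sketch with synthetic monthly returns (not the paper's S&P 500 panel or its exact risk definitions): market beta as the conventional systematic-risk measure, and a below-target semideviation as one possible downside measure.

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.005, 0.04, 240)                     # 20 years of monthly market returns
firm = 0.002 + 0.9 * market + rng.normal(0, 0.05, 240)    # a firm with true beta of about 0.9

beta = np.cov(firm, market)[0, 1] / np.var(market, ddof=1)       # conventional systematic risk
downside = np.sqrt(np.mean(np.minimum(firm - 0.0, 0.0) ** 2))    # semideviation below a zero target

print(f"beta = {beta:.2f}, downside semideviation = {downside:.3f}")
```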
Abstract:
This paper aims to clarify the potential confusion about the application of attribution analysis to real estate portfolios. Its three primary objectives are:
· To review, and as far as possible reconcile, the varying approaches to attribution analysis evident in the literature.
· To give a clear statement of the purposes of attribution analysis, and its meaning for real-world property managers.
· To show, using real portfolio data from IPD's UK performance measurement service, the practical implications of applying different attribution methods.
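As context for the comparison of methods, the sketch below shows one common attribution formulation: a Brinson-style split of active return into allocation and selection effects by sector. It is illustrative only, with made-up weights and returns, and is not necessarily any of the specific methods reviewed in the paper or used by IPD.

```python
import numpy as np

sectors = ["Retail", "Office", "Industrial"]
wp = np.array([0.50, 0.30, 0.20])   # portfolio sector weights
wb = np.array([0.40, 0.40, 0.20])   # benchmark sector weights
rp = np.array([0.06, 0.03, 0.08])   # portfolio sector returns
rb = np.array([0.05, 0.04, 0.07])   # benchmark sector returns

Rb = (wb * rb).sum()                 # total benchmark return
allocation = (wp - wb) * (rb - Rb)   # reward for over/underweighting sectors
selection = wp * (rp - rb)           # reward for property selection within sectors

for s, alloc, sel in zip(sectors, allocation, selection):
    print(f"{s:10s} allocation {alloc:+.4f}  selection {sel:+.4f}")
# With this convention the two effects sum exactly to the active return:
print("total active return:", round((wp * rp).sum() - Rb, 4))
```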
Abstract:
This paper reviews the literature on the distribution of commercial real estate returns. There is growing evidence that the assumption of normality in returns is not safe. Distributions are found to be peaked, fat-tailed and, tentatively, skewed. There is some evidence of compound distributions and non-linearity. Publicly traded real estate assets (such as property company or REIT shares) behave in a fashion more similar to other common stocks. However, as in equity markets, it would be unwise to assume normality uncritically. Empirical evidence for UK real estate markets is obtained by applying distribution-fitting routines to IPD Monthly Index data for the aggregate index and selected sub-sectors. It is clear that normality is rejected in most cases. It is often argued that observed differences in real estate returns are a measurement issue resulting from appraiser behaviour. However, unsmoothing the series does not assist in modelling returns. A large proportion of returns are close to zero. This would be characteristic of a thinly traded market where new information arrives infrequently. Analysis of quarterly data suggests that, over longer trading periods, return distributions may conform more closely to those found in other asset markets. These results have implications for the formulation and implementation of a multi-asset portfolio allocation strategy.
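Two of the steps referred to above, testing returns for normality and first-order "unsmoothing" of an appraisal-based series, can be sketched as follows. The code uses synthetic data and an assumed smoothing parameter in a Geltner-style reverse filter, r_true[t] = (r_obs[t] - a·r_obs[t-1]) / (1 - a); it is not the IPD data or the exact routines used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_r = 0.01 * rng.standard_t(4, size=360)     # fat-tailed "true" monthly returns
alpha = 0.6                                     # assumed appraisal-smoothing parameter
obs_r = np.empty_like(true_r)
obs_r[0] = true_r[0]
for t in range(1, len(true_r)):                 # appraisal smoothing as an AR(1) filter
    obs_r[t] = alpha * obs_r[t - 1] + (1 - alpha) * true_r[t]

unsmoothed = (obs_r[1:] - alpha * obs_r[:-1]) / (1 - alpha)   # reverse the filter

for name, series in [("observed", obs_r), ("unsmoothed", unsmoothed)]:
    stat, p = stats.jarque_bera(series)
    print(f"{name:10s} excess kurtosis = {stats.kurtosis(series):5.2f}  Jarque-Bera p = {p:.4f}")
```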
Abstract:
The broad picture of the cultural and chronological succession from the Epipalaeolithic to the Neolithic in the southern Levant is generally well understood. However, at a more detailed, local level, many questions remain unanswered. In this paper we examine the archaeological record of cultural developments in southern Jordan and the Negev. Focusing on a series of 14C dates from the early occupation of the PPNA site of WF16, we provide a critical review of dating evidence for the region. This review suggests that, while the 14C chronology is ambiguous and problematic, there is good evidence for a local historical development from the Harifian variant of the Natufian to the early PPNA, well to the south of any core Mediterranean woodland zone. This stresses the importance of considering developments at local scales of analysis, and suggests that the Neolithic transition occurred within a framework of many interacting sub-regional provinces.
Abstract:
Open solar flux (OSF) variations can be described by the imbalance between source and loss terms. We use spacecraft and geomagnetic observations of OSF from 1868 to the present and assume the OSF source, S, varies with the observed sunspot number, R. Computing the required fractional OSF loss, χ, reveals a clear solar cycle variation, in approximate phase with R. While peak R varies significantly from cycle to cycle, χ is surprisingly constant in both amplitude and waveform. Comparisons of χ with measures of heliospheric current sheet (HCS) orientation reveal a strong correlation. The cyclic nature of χ is exploited to reconstruct OSF back to the start of sunspot records in 1610. This agrees well with the available spacecraft, geomagnetic, and cosmogenic isotope observations. Assuming S is proportional to R yields near-zero OSF throughout the Maunder Minimum. However, χ becomes negative during periods of low R, particularly the most recent solar minimum, meaning OSF production is underestimated. This is related to continued coronal mass ejection (CME) activity, and therefore OSF production, throughout solar minimum, despite R falling to zero. Correcting S for this produces a better match to the recent solar minimum OSF observations. It also results in a cycling, nonzero OSF during the Maunder Minimum, in agreement with cosmogenic isotope observations. These results suggest that during the Maunder Minimum the HCS tilt cycled as it has over recent solar cycles, and the CME rate was roughly constant at the levels measured during the most recent two solar minima.
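The source-loss balance described above can be written as OSF(t+1) = OSF(t) + S(t) - χ(t)·OSF(t), with S assumed proportional to the sunspot number R and χ the fractional loss per time step. The sketch below iterates this balance with placeholder numbers (not the paper's fitted values) purely to show the mechanics.

```python
import numpy as np

years = np.arange(1996, 2009)                                      # one illustrative solar cycle
R = 120 * np.maximum(np.sin(np.pi * (years - 1996) / 11), 0.05)    # crude sunspot-number proxy
k_source = 0.01                                                    # OSF production per unit R (placeholder)
chi = 0.15 + 0.10 * np.sin(np.pi * (years - 1996) / 11)            # loss fraction, roughly in phase with R

osf = np.zeros(len(years))
osf[0] = 4.0                                    # initial open flux (arbitrary units)
for t in range(1, len(years)):
    osf[t] = osf[t - 1] + k_source * R[t] - chi[t] * osf[t - 1]    # source minus fractional loss

for y, f in zip(years, osf):
    print(int(y), round(float(f), 2))
```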
Abstract:
The paper reviews recent models that have applied the techniques of behavioural economics to the analysis of the tax compliance choice of an individual taxpayer. The construction of these models is motivated by the failure of the Yitzhaki version of the Allingham–Sandmo model to predict correctly the proportion of taxpayers who will evade and the effect of an increase in the tax rate upon the chosen level of evasion. Recent approaches have applied non-expected utility theory to the compliance decision and have addressed social interaction. The models we describe are able to match the observed extent of evasion and correctly predict the tax effect but do not have the parsimony or precision of the Yitzhaki model.
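For readers unfamiliar with the benchmark being criticised, here is a hedged numerical sketch of the Yitzhaki version of the Allingham-Sandmo model: a taxpayer with CRRA utility chooses how much income to declare when audits occur with probability p and the fine is proportional to the evaded tax. With the illustrative parameters below (my choices, not values from the paper), declared income rises with the tax rate, the comparative static that conflicts with the evidence.

```python
import numpy as np

W, p, f, rho = 100.0, 0.3, 2.0, 2.0      # income, audit probability, fine rate, risk aversion

def utility(c):
    return np.log(c) if rho == 1 else c ** (1 - rho) / (1 - rho)   # CRRA utility

def optimal_declaration(t, grid=np.linspace(0.01, 100.0, 5000)):
    """Maximise expected utility over declared income X by a crude grid search."""
    c_not_audited = W - t * grid                        # pay tax only on declared income
    c_audited = c_not_audited - f * t * (W - grid)      # audited: also a fine on the evaded tax
    eu = (1 - p) * utility(c_not_audited) + p * utility(np.maximum(c_audited, 1e-9))
    return grid[np.argmax(eu)]

for t in (0.2, 0.3, 0.4):
    # Declared income rises with t here, i.e. evasion falls as the tax rate increases.
    print(f"tax rate {t:.1f}: declared income = {optimal_declaration(t):.1f}")
```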
Abstract:
Robust and physically understandable responses of the global atmospheric water cycle to a warming climate are presented. By considering interannual responses to changes in surface temperature (T), observations and AMIP5 simulations agree on an increase in column-integrated water vapor at the rate 7 %/K (in line with the Clausius-Clapeyron equation) and of precipitation at the rate 2-3 %/K (in line with energetic constraints). Using simple and complex climate models, we demonstrate that radiative forcing by greenhouse gases is currently suppressing global precipitation (P) at ~ -0.15 %/decade. Along with natural variability, this can explain why observed trends in global P over the period 1988-2008 are close to zero. Regional responses in the global water cycle are strongly constrained by changes in moisture fluxes. Model simulations show an increased moisture flux into the tropical wet region at 900 hPa and an enhanced outflow (of smaller magnitude) at around 600 hPa with warming. Moisture transport explains an increase in P in the wet tropical regions and small or negative changes in the dry regions of the subtropics in CMIP5 simulations of a warming climate. For AMIP5 simulations and satellite observations, the heaviest 5-day rainfall totals increase in intensity at ~15 %/K over the ocean with reductions at all percentiles over land. The climate change response in CMIP5 simulations shows consistent increases in P over ocean and land for the highest intensities, close to the Clausius-Clapeyron scaling of 7 %/K, while P declines for the lowest percentiles, indicating that interannual variability over land may not be a good proxy for climate change. The local changes in precipitation and its extremes are highly dependent upon small shifts in the large-scale atmospheric circulation and regional feedbacks.
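The ~7 %/K figure quoted above follows from the Clausius-Clapeyron relation, d ln e_s/dT = L/(R_v T²); the short check below evaluates it with standard constants at a representative surface temperature (a worked example, not a calculation from the paper).

```python
L_v = 2.5e6     # latent heat of vaporisation of water, J kg^-1
R_v = 461.5     # specific gas constant for water vapour, J kg^-1 K^-1
T = 288.0       # representative surface temperature, K

rate = L_v / (R_v * T ** 2)     # fractional increase of saturation vapour pressure per kelvin
print(f"Clausius-Clapeyron rate ~ {100 * rate:.1f} %/K")   # about 6.5 %/K, i.e. roughly 7 %/K
```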