15 results for Empirical Bayes Methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
For vegetated surfaces, calculation of soil heat flux, G, with the Exact or Analytical method requires a harmonic analysis of below-canopy soil surface temperature to obtain the shape of the diurnal course of G. When determining G with remote sensing methods, only the composite (vegetation plus soil) radiometric brightness temperature is available. This paper presents a simple equation that relates the sum of the harmonic terms derived for the composite radiometric surface temperature to that of the below-canopy soil surface temperature. The thermal inertia, Γ, for which a simple equation was presented in a companion paper (paper I), is used to set the magnitude of G. To assess the success of the method proposed in this paper for estimating the diurnal shape of G, 'remote' estimates were compared with in situ values calculated at the field sites described. This comparison indicated that the proposed method was suitable for estimating the shape of G for a variety of vegetation types and densities. The approach outlined in paper I to obtain Γ was then combined with the estimated harmonic terms to produce estimates of G, which were compared with values predicted by empirical remote methods from the literature. This indicated that the method proposed across papers I and II gave reliable estimates of G and, in comparison to the other methods, resulted in more realistic predictions for vegetated surfaces. This set of equations can also be used for bare and sparsely vegetated soils, making it a universally applicable method.
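As an illustration of the harmonic approach this abstract describes, the sketch below fits the first few diurnal harmonics to a surface temperature series and applies the standard analytical relation in which each temperature harmonic of angular frequency nω contributes to G with amplitude scaled by Γ√(nω) and phase advanced by π/4. The function name, the three-harmonic default, and the least-squares fit are illustrative assumptions, not the authors' code.

```python
import numpy as np

def soil_heat_flux_harmonic(t_hours, T_surf, gamma, n_harmonics=3):
    """Estimate the diurnal course of soil heat flux G from surface temperature.

    Fits T(t) = T0 + sum_n [a_n cos + b_n sin](n*omega*t) by least squares,
    then applies the analytical relation: each temperature harmonic of
    angular frequency n*omega contributes to G with amplitude scaled by
    gamma*sqrt(n*omega) and phase advanced by pi/4.
    """
    omega = 2 * np.pi / 86400.0          # diurnal angular frequency [rad/s]
    t = np.asarray(t_hours) * 3600.0     # time in seconds
    T = np.asarray(T_surf, dtype=float)

    cols = [np.ones_like(t)]
    for n in range(1, n_harmonics + 1):
        cols += [np.cos(n * omega * t), np.sin(n * omega * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), T, rcond=None)

    G = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        a, b = coef[2 * n - 1], coef[2 * n]    # cos and sin coefficients
        A = np.hypot(a, b)                     # harmonic amplitude [K]
        phi = np.arctan2(a, b)                 # phase of A*sin(n*omega*t + phi)
        G += gamma * A * np.sqrt(n * omega) * np.sin(n * omega * t + phi + np.pi / 4)
    return G
```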
Abstract:
The objective of this book is to present the quantitative techniques commonly employed in empirical finance research, together with real-world, state-of-the-art research examples. Each chapter is written by international experts in their field. The unique approach is to describe a question or issue in finance and then demonstrate the methodologies that may be used to solve it. All of the techniques described are used to address real problems rather than being presented for their own sake, and the areas of application have been carefully selected so that a broad range of methodological approaches can be covered. The book is aimed primarily at doctoral researchers and academics engaged in conducting original empirical research in finance; it will also be useful to researchers in the financial markets and to advanced Masters-level students writing dissertations.
Integrating methods for developing sustainability indicators that can facilitate learning and action
Abstract:
Bossel's (2001) systems-based approach for deriving comprehensive indicator sets provides one of the most holistic frameworks for developing sustainability indicators. It ensures that indicators cover all important aspects of system viability, performance, and sustainability, and recognizes that a system cannot be assessed in isolation from the systems upon which it depends and which in turn depend upon it. In this reply, we show how Bossel's approach is part of a wider convergence toward integrating participatory and reductionist approaches to measure progress toward sustainable development. However, we also show that further integration of these approaches may be able to improve the accuracy and reliability of indicators to better stimulate community learning and action. Only through active community involvement can indicators facilitate progress toward sustainable development goals. To engage communities effectively in the application of indicators, these communities must be actively involved in developing, and even in proposing, indicators. The accuracy, reliability, and sensitivity of the indicators derived from local communities can be ensured through an iterative process of empirical and community evaluation. Communities are unlikely to invest in measuring sustainability indicators unless monitoring provides immediate and clear benefits. However, in the context of goals, targets, and/or baselines, sustainability indicators can more effectively contribute to a process of development that matches local priorities and engages the interests of local people.
Abstract:
An alternative approach to research is described that has been developed through a succession of significant construction management research projects. The approach follows the principles of iterative grounded theory, whereby researchers iterate between alternative theoretical frameworks and emergent empirical data. Of particular importance is an orientation toward mixing methods, thereby overcoming the existing tendency to dichotomize quantitative and qualitative approaches. The approach is positioned against the existing contested literature on grounded theory, and the possibility of engaging with empirical data in a “theory free” manner is discounted. Emphasis instead is given to the way in which researchers must be theoretically sensitive as a result of being steeped in relevant literatures. Knowledge of existing literatures therefore shapes the initial research design; but emergent empirical findings cause fresh theoretical perspectives to be mobilized. The advocated approach is further aligned with notions of knowledge coproduction and the underlying principles of contextualist research. It is this unique combination of ideas which characterizes the paper's contribution to the research methodology literature within the field of construction management. Examples are provided and consideration is given to the extent to which the emergent findings are generalizable beyond the specific context from which they are derived.
Abstract:
The paper draws on three case studies of regional construction firms operating in the UK. The case studies provide new insights into the ways in which such firms strive to remain competitive. Empirical data were derived from multiple interactions with senior personnel within each firm; data collection methods included semi-structured interviews, informal interactions, archival research, and workshops. The initial research question was informed by existing resource-based theories of competitiveness and an extensive review of construction-specific literature. However, emergent empirical findings progressively pointed towards the need to mobilise alternative theoretical models that emphasise localised learning and embeddedness. The findings point towards the importance of de-centralised structures that enable multiple business units to become embedded within localised markets. A significant degree of autonomy is essential to facilitate entrepreneurial behaviour. In essence, sustained competitiveness was found to rest on the way de-centralised business units enact ongoing processes of localised learning. Once local business units have become embedded within localised markets, the essential challenge is how to encourage continued entrepreneurial behaviour while maintaining some degree of centralised control and coordination. This presents a number of tensions and challenges which play out differently across each of the three case studies.
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models lies in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from the 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived; one of these is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
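A minimal sketch of the kind of EM-style update the abstract describes: each contour point is matched to a posterior-weighted (Bayes least-squares) mean of nearby candidate features rather than to the single strongest feature, after which a 2-D similarity pose is refitted by least squares. The Gaussian weighting, the fixed sigma, and the function signature are illustrative assumptions, not the published MLR model.

```python
import numpy as np

def em_contour_step(contour, candidates, sigma=3.0):
    """One EM-style pose update for an active contour (illustrative).

    contour    : (N, 2) array of model contour points
    candidates : list of (M_i, 2) arrays of image features near point i
    Each contour point is matched to the posterior-weighted mean of its
    candidates (a Bayes least-squares estimate); a 2-D similarity
    transform is then fitted by ordinary least squares and applied.
    """
    targets = np.empty_like(contour, dtype=float)
    for i, feats in enumerate(candidates):
        d2 = np.sum((feats - contour[i]) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / sigma ** 2)
        w /= w.sum()                        # posterior weights over candidates
        targets[i] = w @ feats              # marginalized target location

    # Fit x' = s*R*x + t, with s*R encoded as (a, -b; b, a)
    x, y = contour[:, 0], contour[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    rhs = np.concatenate([targets[:, 0], targets[:, 1]])
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return contour @ np.array([[a, b], [-b, a]]) + np.array([tx, ty])
```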
Abstract:
Transient neural assemblies mediated by synchrony in particular frequency ranges are thought to underlie cognition. We propose a new approach to their detection, using empirical mode decomposition (EMD), a data-driven approach that removes the need for arbitrary bandpass filter cut-offs. Phase locking is sought between modes. We explore the features of EMD, including a quantitative assessment of its ability to preserve the phase content of signals, and proceed to develop a statistical framework with which to assess synchrony episodes. Furthermore, we propose a new approach to ensure signal decomposition using EMD. We adapt the Hilbert spectrum to a time-frequency representation of phase locking and successfully locate synchrony in time and frequency between synthetic signals reminiscent of EEG. We compare our approach, which we call EMD phase locking analysis (EMDPL), with existing methods and show that it offers improved time-frequency localisation of synchrony.
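A crude, self-contained sketch of the two ingredients EMDPL combines: EMD sifting with cubic-spline envelopes, and a Hilbert-transform phase-locking value between modes. The fixed sift count, the IMF limit, and the absence of proper boundary handling are simplifying assumptions; this is not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema, hilbert

def sift_imf(x, n_sift=10):
    """Extract one intrinsic mode function with a fixed number of sift passes."""
    h = x.copy()
    for _ in range(n_sift):
        mx = argrelextrema(h, np.greater)[0]   # indices of local maxima
        mn = argrelextrema(h, np.less)[0]      # indices of local minima
        if len(mx) < 3 or len(mn) < 3:         # too few extrema for envelopes
            break
        t = np.arange(len(h))
        upper = CubicSpline(mx, h[mx])(t)      # upper envelope
        lower = CubicSpline(mn, h[mn])(t)      # lower envelope
        h = h - (upper + lower) / 2.0          # subtract local mean envelope
    return h

def emd(x, n_imfs=5):
    """Crude EMD: repeatedly sift an IMF out of the residual."""
    imfs, resid = [], np.asarray(x, dtype=float).copy()
    for _ in range(n_imfs):
        imf = sift_imf(resid)
        imfs.append(imf)
        resid = resid - imf
    return imfs

def plv(a, b):
    """Phase-locking value between two signals via the Hilbert transform."""
    dphi = np.angle(hilbert(a)) - np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Applied to matching IMFs of two channels that share a common rhythm, plv should approach 1 for the mode carrying that rhythm and stay low elsewhere.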
Abstract:
Transient episodes of synchronisation of neuronal activity in particular frequency ranges are thought to underlie cognition. Empirical mode decomposition phase locking (EMDPL) analysis is a method for determining the frequency and timing of phase synchrony that is adaptive to intrinsic oscillations within data, alleviating the need for arbitrary bandpass filter cut-off selection. It is extended here to address the choice of reference electrode and the removal of spurious synchrony resulting from volume conduction. Spline Laplacian transformation and independent component analysis (ICA) are performed as preprocessing steps, and preservation of phase synchrony between synthetic signals, combined using a simple forward model, is demonstrated. The method is contrasted with the use of bandpass filtering following the same preprocessing steps, and filter cut-offs are shown to influence synchrony detection markedly. Furthermore, an approach to the assessment of multiple EEG trials using the method is introduced, and the assessment of the statistical significance of phase locking episodes is extended to render it adaptive to local phase synchrony levels. EMDPL is validated in the analysis of real EEG data recorded during finger tapping. The time course of event-related (de)synchronisation (ERD/ERS) is shown to differ from that of longer-range phase locking episodes, implying different roles for these different types of synchronisation. It is suggested that the increase in phase locking that occurs just prior to movement, coinciding with a reduction in power (or ERD), may result from selection of the neural assembly relevant to the particular movement.
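One simple way to attach a significance threshold to phase locking episodes is sketched below, under the assumption of circular-shift surrogates; this is an illustrative stand-in for the paper's adaptive procedure, not its actual method.

```python
import numpy as np

def plv_threshold(a, b, plv_fn, n_surr=200, alpha=0.05, seed=None):
    """Surrogate significance threshold for phase locking between a and b.

    Circularly shifting b destroys any genuine phase relationship while
    preserving its spectrum; the (1 - alpha) quantile of the surrogate
    PLVs serves as the threshold an observed episode must exceed.
    plv_fn is any phase-locking measure, e.g. the plv sketched above.
    """
    rng = np.random.default_rng(seed)
    surr = [plv_fn(a, np.roll(b, rng.integers(1, len(b))))
            for _ in range(n_surr)]
    return np.quantile(surr, 1.0 - alpha)
```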
Abstract:
The A-Train constellation of satellites provides a new capability to measure vertical cloud profiles, yielding more detailed information on ice-cloud microphysical properties than has been possible before. A variational radar–lidar ice-cloud retrieval algorithm (VarCloud) takes advantage of the complementary nature of the CloudSat radar and the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar to provide a seamless retrieval of ice water content, effective radius, and extinction coefficient from the thinnest cirrus (seen only by the lidar) to the thickest ice cloud (penetrated only by the radar). In this paper, several versions of the VarCloud retrieval are compared with the CloudSat standard ice-only retrieval of ice water content, two empirical formulas that derive ice water content from radar reflectivity and temperature, and retrievals of vertically integrated properties from the Moderate Resolution Imaging Spectroradiometer (MODIS) radiometer. The retrieved variables typically agree to within a factor of 2, on average, and most of the differences can be explained by differing microphysical assumptions. For example, the ice water content comparison illustrates the sensitivity of the retrievals to the assumed ice particle shape: if ice particles are modeled as oblate spheroids rather than spheres for radar scattering, the retrieved ice water content is reduced by on average 50% in clouds with a reflectivity factor larger than 0 dBZ. VarCloud retrieves optical depths that are on average a factor of 2 lower than those from MODIS, which can be explained by the different assumptions on particle mass and area; if VarCloud mimics the MODIS assumptions, better agreement is found in effective radius but optical depth is then overestimated. MODIS predicts the mean vertically integrated ice water content to be around a factor of 3 lower than that from VarCloud for the same retrievals, however, because the MODIS algorithm assumes that its retrieved effective radius (which is mostly representative of cloud top) is constant throughout the depth of the cloud. These comparisons highlight the need to refine microphysical assumptions in all retrieval algorithms, and the need for future studies to compare not only mean values but full probability density functions.
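The empirical radar formulas mentioned in this abstract are typically log-linear regressions of ice water content on reflectivity and temperature. The sketch below shows only the generic form; the default coefficients are illustrative placeholders, not the fitted values of any published formula.

```python
import numpy as np

def iwc_from_z_t(Z_dbz, T_c, a=0.06, b=-0.02, c=-1.7):
    """Generic empirical ice water content from reflectivity and temperature.

    log10(IWC) = a*Z + b*T + c is the usual form of such regressions;
    the default coefficients here are placeholders for illustration.
    Z_dbz: radar reflectivity [dBZ]; T_c: temperature [deg C];
    returns IWC in g m^-3.
    """
    return 10.0 ** (a * np.asarray(Z_dbz) + b * np.asarray(T_c) + c)
```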
Abstract:
The physical and empirical relationships used by microphysics schemes to control the rate at which vapor is transferred to ice crystals growing in supercooled clouds are compared with laboratory data to evaluate the realism of various model formulations. Ice crystal growth rates predicted from capacitance theory are compared with measurements from three independent laboratory studies. When the growth is diffusion-limited, the predicted growth rates are consistent with the measured values to within about 20% in 14 of the experiments analyzed, over the temperature range −2.5° to −22°C. Only two experiments showed significant disagreement with theory (growth rate overestimated by about 30%–40% at −3.7° and −10.6°C). Growth predictions using various ventilation factor parameterizations were also calculated and compared with supercooled wind tunnel data. It was found that neither of the standard parameterizations used for ventilation adequately described both needle and dendrite growth; however, by choosing habit-specific ventilation factors from previous numerical work it was possible to match the experimental data in both regimes. The relationships between crystal mass, capacitance, and fall velocity were investigated based on the laboratory data. It was found that for a given crystal size the capacitance was significantly overestimated by two of the microphysics schemes considered here, yet for a given crystal mass the growth rate was underestimated by those same schemes because of unrealistic mass/size assumptions. The fall speed for a given capacitance (controlling the residence time of a crystal in the supercooled layer relative to its effectiveness as a vapor sink, and the relative importance of ventilation effects) was found to be overpredicted by all the schemes in which fallout is permitted, implying that the modeled crystals reside for too short a time within the cloud layer and that the parameterized ventilation effect is too strong.
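For reference, the capacitance-theory growth rate against which the laboratory data are compared has the standard textbook form dm/dt = 4πCf_v s_i/(F_k + F_d). The sketch below implements it with approximate constant values and a Magnus-type fit for the saturation vapor pressure over ice; these particular values are assumptions for illustration, not the schemes evaluated in the paper.

```python
import numpy as np

def mass_growth_rate(C, T, s_i, fv=1.0):
    """Capacitance-theory diffusional growth rate dm/dt of an ice crystal.

    dm/dt = 4*pi*C*fv*s_i / (Fk + Fd), where Fk is the heat-conduction
    term and Fd the vapor-diffusion term. C: capacitance [m];
    T: temperature [K]; s_i: fractional supersaturation over ice;
    fv: ventilation factor. Constants are textbook approximations.
    """
    Ls = 2.83e6   # latent heat of sublimation [J/kg]
    Rv = 461.5    # specific gas constant of water vapor [J/(kg K)]
    Ka = 2.4e-2   # thermal conductivity of air [W/(m K)]
    Dv = 2.2e-5   # diffusivity of water vapor in air [m^2/s]
    # Magnus-type fit for saturation vapor pressure over ice [Pa]
    e_si = 611.15 * np.exp(22.46 * (T - 273.15) / (T - 0.53))
    Fk = (Ls / (Rv * T) - 1.0) * Ls / (Ka * T)
    Fd = Rv * T / (Dv * e_si)
    return 4.0 * np.pi * C * fv * s_i / (Fk + Fd)
```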
Abstract:
Using the plausible model of activated carbon proposed by Harris and co-workers, together with grand canonical Monte Carlo simulations, we study the applicability of the standard methods widely used in adsorption science for describing adsorption data on microporous carbons. Two carbon structures are studied: one with a narrow distribution of micropores up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride, and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots. Moreover, for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that some empirical relationships obtained from experiment can be successfully recovered using the simulated data. Next we test the applicability of models related to Dubinin's theory, including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results demonstrate the limits and applications of the models studied in the field of carbon porosity characterization.
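A minimal sketch of the Dubinin-Radushkevich analysis applied to an isotherm: the DR equation ln W = ln W0 − (A/(βE0))², with adsorption potential A = RT ln(p0/p), is linearized so that a straight-line fit of ln W against A² yields the micropore volume W0 and characteristic energy E0. The function signature and the default affinity coefficient β = 1 are illustrative assumptions.

```python
import numpy as np

def dr_fit(p_rel, W, T, beta=1.0):
    """Fit the Dubinin-Radushkevich equation to an adsorption isotherm.

    Linearizes ln W = ln W0 - (A/(beta*E0))^2, A = R*T*ln(p0/p), so a
    straight-line fit of ln W against A^2 yields the micropore volume
    W0 (from the intercept) and characteristic energy E0 (from the slope).
    p_rel: relative pressures p/p0; W: adsorbed amounts; T: temperature [K].
    """
    R = 8.314  # gas constant [J/(mol K)]
    A2 = (R * T * np.log(1.0 / np.asarray(p_rel))) ** 2
    slope, intercept = np.polyfit(A2, np.log(W), 1)
    return np.exp(intercept), 1.0 / (beta * np.sqrt(-slope))
```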
Abstract:
Current methods for estimating event-related potentials (ERPs) assume stationarity of the signal. Empirical Mode Decomposition (EMD) is a data-driven decomposition technique that does not assume stationarity. We evaluated an EMD-based method for estimating the ERP. On simulated data, EMD substantially reduced background EEG while retaining the ERP. EMD-denoised single trials also estimated the shape, amplitude, and latency of the ERP better than raw single trials. On experimental data, EMD-denoised trials revealed event-related differences between two conditions (conditions A and B) more effectively than trials low-pass filtered at 40 Hz. EMD also revealed event-related differences in both conditions that were clearer and of longer duration than those revealed by low-pass filtering at 40 Hz. Thus, EMD-based denoising is a promising data-driven, nonstationary method for estimating ERPs and should be investigated further.
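A hedged sketch of the denoising step: decompose each trial into IMFs and reconstruct from the slower modes only. Which IMFs carry the ERP is data-dependent, so the default subset here is purely illustrative, and emd_fn stands for any EMD routine (such as the crude sift sketched earlier in this listing).

```python
import numpy as np

def emd_denoise_trial(trial, emd_fn, keep=slice(2, None)):
    """Denoise a single ERP trial by partial EMD reconstruction.

    emd_fn decomposes the trial into IMFs (fastest modes first); the
    trial is rebuilt from a subset of modes, dropping the fastest,
    noise-dominated ones. keep=slice(2, None) is only an illustrative
    default, not a recommended setting.
    """
    imfs = np.asarray(emd_fn(np.asarray(trial, dtype=float)))
    return imfs[keep].sum(axis=0)
```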
Abstract:
This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence of unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertising, and a test of the frequency of unsolicited commercial email (UCE) received by customers as a consequence of registering and submitting personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveals a significant gap between EU data protection law in theory and in practice. Although a wide majority of the websites in the sample (69%) have a system in place to ask for separate consent to engage in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The UCE test shows that only one in three websites (30.5%) respects the data subject's wish not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance by UK online service providers with essential requirements of data protection law, and suggests that standards of implementation, information, and supervision by the UK authorities are inadequate, especially in light of the clarifications provided at EU level.
Abstract:
Empirical Mode Decomposition is presented as an alternative to traditional analysis methods for decomposing geomagnetic time series into spectral components, with comments on the algorithm and its variations. Using this technique, planetary wave modes of 5-, 10-, and 16-day mean periods can be extracted from the magnetic field components of three different stations in Germany. In a second step, the amplitude modulation functions of these wave modes are shown to contain a significant contribution from solar cycle variation, through correlation with smoothed sunspot numbers. Additionally, the data indicate connections with geomagnetic jerk occurrences, supported by a second data set providing reconstructed near-Earth magnetic field values for 150 years. Since jerks are usually attributed to internal dynamo processes within the Earth's outer core, the question of which phenomenon influences which is briefly discussed.
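A small sketch of the second step described above, under the assumption that the amplitude modulation of a planetary-wave IMF is taken as its Hilbert envelope and compared with a smoothed sunspot series by Pearson correlation; the resampling and function name are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_correlation(imf, sunspots):
    """Correlate an IMF's amplitude modulation with a sunspot series.

    The Hilbert envelope gives the IMF's amplitude modulation; the
    smoothed sunspot series is linearly resampled to the same length
    and the Pearson correlation coefficient is returned.
    """
    env = np.abs(hilbert(imf))
    s = np.interp(np.linspace(0.0, 1.0, len(env)),
                  np.linspace(0.0, 1.0, len(sunspots)), sunspots)
    return np.corrcoef(env, s)[0, 1]
```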
Abstract:
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented. In some cases we observe an advantage in the use of biased weight estimates, and we present some support for their use, but we advocate caution.
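A schematic of the random-weight idea, assuming hypothetical placeholder functions throughout: the unknown 1/Z(θ) inside each importance weight is replaced by a simulation-based estimate, and a Bayes factor is formed as the ratio of two such evidence estimates. This is a sketch of the general construction, not the paper's specific estimators.

```python
import numpy as np

def evidence_random_weight(y, prior_sample, unnorm_lik, inv_Z_est, n=1000):
    """Random-weight importance sampling estimate of the model evidence.

    For a doubly intractable likelihood q(y|theta)/Z(theta), draw theta
    from the prior and replace the unknown 1/Z(theta) with a
    simulation-based estimate inv_Z_est(theta). If inv_Z_est is unbiased
    the weights average to the evidence; biased estimates trade this
    exactness for lower variance. All four arguments are hypothetical
    placeholders for model-specific routines.
    """
    w = [unnorm_lik(y, th) * inv_Z_est(th)
         for th in (prior_sample() for _ in range(n))]
    return float(np.mean(w))

# A Bayes factor is then the ratio of two such estimates:
# BF12 = evidence_random_weight(y, prior1, q1, invZ1) / \
#        evidence_random_weight(y, prior2, q2, invZ2)
```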