985 results for Spatial Empirical Bayes Smoothing


Relevance:

30.00%

Publisher:

Abstract:

Neutral and niche theories give contrasting explanations for the maintenance of tropical tree species diversity. Both have some empirical support, but methods to disentangle their effects have not yet been developed. We applied a statistical measure of spatial structure to data from 14 large tropical forest plots to test a prediction of niche theory that is incompatible with neutral theory: that species in heterogeneous environments should separate out in space according to their niche preferences. We chose plots across a range of topographic heterogeneity, and tested whether pairwise spatial associations among species were more variable in more heterogeneous sites. We found strong support for this prediction: the variance in the spatial structure of species pairs increased markedly with topographic heterogeneity across sites. We interpret this pattern as evidence of pervasive niche differentiation, whose importance grows with increasing environmental heterogeneity.
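
A minimal sketch of the kind of site-level statistic this test needs, assuming "pairwise spatial association" is measured as the correlation of species abundances across quadrats (the paper's actual measure may differ); all data below are synthetic placeholders:

```python
import numpy as np

def association_variance(abundance):
    """abundance: (n_species, n_quadrats) counts for one forest plot.
    Returns the variance of all pairwise spatial associations."""
    corr = np.corrcoef(abundance)                  # species-by-species associations
    pairs = corr[np.triu_indices_from(corr, k=1)]  # one value per species pair
    return np.var(pairs)

rng = np.random.default_rng(0)
flat = rng.poisson(5, (30, 100))        # stand-in for a homogeneous plot
dissected = rng.poisson(5, (30, 100))   # stand-in for a heterogeneous plot
print(association_variance(flat), association_variance(dissected))
# The test then regresses this variance against topographic heterogeneity
# across the 14 plots; niche differentiation predicts a positive slope.
```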

Relevance:

30.00%

Publisher:

Abstract:

A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data.
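
As an illustration of what such indicators look like in code, here is a minimal sketch, assuming moment-based spatial variance and skewness and a rook-contiguity lattice Moran's I (the paper's toolbox is richer and pairs each signal with a null model):

```python
import numpy as np

def spatial_indicators(z):
    """z: 2-D array, one snapshot of the gridded system state."""
    d = z - z.mean()
    var = d.var()
    skew = (d**3).mean() / d.std()**3
    # Moran's I with rook (4-neighbour) contiguity on the lattice
    num = (d[:-1, :] * d[1:, :]).sum() + (d[:, :-1] * d[:, 1:]).sum()
    n_pairs = d[:-1, :].size + d[:, :-1].size
    moran = (num / n_pairs) / var
    return var, skew, moran

rng = np.random.default_rng(1)
snapshot = rng.normal(size=(64, 64))       # placeholder for simulated data
print(spatial_indicators(snapshot))
# Rising variance, skewness and spatial autocorrelation as the control
# parameter changes are the trends interpreted against a null model.
```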

Relevance:

30.00%

Publisher:

Abstract:

Using US data for the period 1967:5-2002:4, this paper empirically investigates the performance of a Federal Reserve reaction function (FRF) that (i) allows for the presence of switching regimes, (ii) considers the long-short term spread in addition to the typical variables, (iii) uses an alternative monthly indicator of general economic activity suggested by Stock and Watson (1999), and (iv) considers interest rate smoothing. The estimation results show the existence of three switching regimes, two characterized by low volatility and one by high volatility. Moreover, the scale of the responses of the Federal funds rate to movements in the rate of inflation and the economic activity index depends on the regime. The estimation results also show robust empirical evidence that the importance of the term spread in the FRF has increased over the sample period and that the FRF has been more stable during the term of office of Chairman Greenspan than in the pre-Greenspan period.
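
A hedged sketch of how such a regime-switching reaction function might be estimated with statsmodels' Markov-switching regression; the variable names and synthetic data below are placeholders, and the paper's exact specification is only approximated:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 420                                   # roughly 1967:5-2002:4, monthly
df = pd.DataFrame({"infl": rng.normal(4, 2, n),     # inflation
                   "act": rng.normal(0, 1, n),      # activity index
                   "spread": rng.normal(1, 1, n)})  # long-short spread
df["ffr"] = 2 + 0.5 * df["infl"] + 0.5 * df["act"] + rng.normal(0, 1, n)
df["ffr_lag"] = df["ffr"].shift(1).bfill()          # interest rate smoothing

model = sm.tsa.MarkovRegression(df["ffr"], k_regimes=3,
                                exog=df[["infl", "act", "spread", "ffr_lag"]],
                                switching_variance=True)
res = model.fit()
print(res.summary())                          # regime-specific responses
print(res.smoothed_marginal_probabilities)    # which regime prevails when
```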

Relevance:

30.00%

Publisher:

Abstract:

Using US data for the period 1967:5-2002:4, this paper empirically investigates the performance of an augmented version of the Taylor rule (ATR) that (i) allows for the presence of switching regimes, (ii) considers the long-short term spread in addition to the typical variables, (iii) uses an alternative monthly indicator of general economic activity suggested by Stock and Watson (1999), and (iv) considers interest rate smoothing. The estimation results show the existence of switching regimes, one characterized by low volatility and the other by high volatility. Moreover, the scale of the responses of the Federal funds rate to movements in the term spread, inflation and the economic activity index depends on the regime. The estimation results also show robust empirical evidence that the ATR has been more stable during the term of office of Chairman Greenspan than in the pre-Greenspan period. However, a closer look at the Greenspan period shows the existence of two alternative regimes and that the response of the Federal funds rate to inflation has not been significant during this period once the term spread is considered.
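
For reference, a generic partial-adjustment form of such a rule (an illustration only; the paper's exact specification may differ) is, in LaTeX notation:

```latex
% target rate with inflation \pi_t, activity y_t and the long-short spread s_t,
% plus interest-rate smoothing; under regime switching the coefficients and
% the error variance are indexed by a latent Markov state
i_t^{*} = \alpha + \beta \pi_t + \gamma y_t + \delta s_t, \qquad
i_t = \rho\, i_{t-1} + (1 - \rho)\, i_t^{*} + \varepsilon_t
```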

Relevance:

30.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
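
The flavour of the EC2 criterion can be conveyed with a noiseless toy sketch, assuming deterministic predicted responses; the hypotheses, tests and priors below are made up, and BROAD's noisy, accelerated version is substantially more involved:

```python
import itertools

def edge_weight(prior, classes, alive):
    """Total weight of edges linking hypotheses from different theory classes."""
    return sum(prior[a] * prior[b]
               for a, b in itertools.combinations(alive, 2)
               if classes[a] != classes[b])

def next_test(prior, classes, outcomes, alive):
    """Choose the test that cuts the most edge weight in expectation."""
    def expected_remaining(test):
        rem = 0.0
        for y in {outcomes[test][h] for h in alive}:
            keep = [h for h in alive if outcomes[test][h] == y]
            p_y = sum(prior[h] for h in keep)
            rem += p_y * edge_weight(prior, classes, keep)
        return rem
    return min(outcomes, key=expected_remaining)   # min remaining = max cut

# Four parameterized hypotheses grouped into two theory classes, two tests;
# outcomes[test][h] is the choice hypothesis h predicts for that test.
prior = [0.3, 0.2, 0.3, 0.2]
classes = ["EV", "EV", "PT", "PT"]       # expected value vs prospect theory
outcomes = {"lottery_pair_A": [0, 0, 1, 1],
            "lottery_pair_B": [0, 1, 0, 1]}
print(next_test(prior, classes, outcomes, alive=[0, 1, 2, 3]))
# Prints "lottery_pair_A": it separates the two classes, cutting every edge.
```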

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects were given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries were selected using BROAD, and 57 subjects from Caltech and UCLA were incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories showed limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects were described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
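
The competing discount functions, written out side by side as a minimal sketch (parameter values are illustrative; fixed-cost discounting, which subtracts a lump cost from any delayed reward, is omitted):

```python
import numpy as np

t = np.arange(52.0)                            # delay, e.g. in weeks
exponential = 0.97 ** t                        # D(t) = delta**t
hyperbolic = 1.0 / (1.0 + 0.08 * t)            # D(t) = 1/(1 + k*t)
beta, delta = 0.8, 0.99                        # "present bias" parameters
quasi_hyperbolic = np.where(t == 0, 1.0, beta * delta ** t)
k, a = 0.08, 2.0
generalized_hyperbolic = (1.0 + k * t) ** (-a / k)
print(exponential[26], hyperbolic[26], quasi_hyperbolic[26],
      generalized_hyperbolic[26])              # value of a reward 26 weeks out
```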

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
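
A standard illustration of this mechanism (not the thesis's proof): exponential discounting at rate r applied to a logarithmically compressed subjective clock τ(t) produces generalized hyperbolic discounting in objective time:

```latex
\tau(t) = \tfrac{1}{k}\ln(1 + k t)
\quad\Longrightarrow\quad
D(t) = e^{-r\,\tau(t)} = (1 + k t)^{-r/k}
```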

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways systematically different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
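
A minimal sketch of the kind of loss-averse choice model this suggests, assuming a logit discrete-choice form with a reference price; the variable names, reference-price rule and parameter values are all hypothetical:

```python
import numpy as np

def choice_probs(prices, ref_prices, a=1.0, eta=0.5, lam=2.25):
    """Logit choice probabilities with a reference-dependent price term;
    lam > 1 makes losses (price above reference) loom larger than gains."""
    gap = ref_prices - prices                   # positive gap = discount (gain)
    gainloss = np.where(gap >= 0, gap, lam * gap)
    v = -a * prices + eta * gainloss            # deterministic utility
    ev = np.exp(v - v.max())
    return ev / ev.sum()

# an item and its close substitute; during the sale the item's reference
# price is still the regular price, afterwards the reference has adapted down
print(choice_probs(np.array([3.00, 3.20]), np.array([3.50, 3.20])))  # on sale
print(choice_probs(np.array([3.50, 3.20]), np.array([3.00, 3.20])))  # sale ends
```

The second call shows the predicted excess substitution: once the reference has adapted to the sale price, the restored regular price registers as a loss and demand shifts to the substitute beyond what the price difference alone implies.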

In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

30.00%

Publisher:

Abstract:

Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
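
Of the two decomposition techniques, proper orthogonal decomposition is standard and compact enough to sketch; synthetic data stands in for the LES ensembles here, and the empirical resolvent-mode decomposition (the thesis's novel technique) is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_snapshots = 2000, 200
X = rng.normal(size=(n_points, n_snapshots))   # placeholder snapshot matrix
X = X - X.mean(axis=1, keepdims=True)          # subtract the temporal mean

# Economy SVD: columns of U are POD modes, s**2 ranks them by energy
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by the first 5 modes:", energy[:5].sum())
# Empirical resolvent-mode decomposition instead ranks forcing structures
# by the gain of the linearized operator rather than by energy.
```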

Relevance:

30.00%

Publisher:

Abstract:

Acquired immunodeficiency syndrome (AIDS) is a public health problem that has reached major proportions. In the absence of an effective vaccine or treatment for the disease, efforts must be concentrated on prevention. The health policies adopted by the Brazilian government have stabilized the disease in the country among the younger age group, although this trend has not been observed in the older age groups. The objectives of this study are to examine the incidence of AIDS among older individuals in the municipality of Niterói, RJ, by sex, age, period and birth cohort for 1982-2011, and to analyse the spatial dynamics of the AIDS epidemic among the elderly (individuals aged 60 or over) in the state of Rio de Janeiro for the period 1997-2011. Population data by age, sex and group were obtained from population censuses, the 1996 population count, intercensal projections, and records from the Notifiable Diseases Information System, the Mortality Information System and the Laboratory Test Control System. Incidence rates per 100,000 were calculated for the geographic units from the number of new AIDS cases in individuals aged 60 or over and the municipal population in the same age group. Global Moran's I was computed to assess the spatial dependence of the rates. Moran maps were constructed to reveal potentially distinct regimes of spatial correlation in different subregions. Probability distributions and the empirical Bayes method were applied to correct the AIDS incidence rates. There were 575 AIDS cases among Niterói residents aged ≥50 years. An increasing trend in incidence rates over time was detected for both sexes. In the study of the spatial dynamics of AIDS incidence in the elderly in Rio de Janeiro, 1997-2011, rates among men and women fluctuated throughout the period. No significant global correlation could be detected using the global Moran index. On the southeastern coast of the state, where the large metropolitan areas (Rio de Janeiro and Niterói) are located, clusters of municipalities with rates of up to 20 cases per 100,000 inhabitants were observed. This concentration becomes more pronounced in later periods, when the epidemic appears to spread gradually from the southern coast to the north of Rio de Janeiro.
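
For readers unfamiliar with the two spatial tools named here, a minimal sketch of global empirical Bayes rate smoothing (Marshall's global estimator) and global Moran's I; the case counts, populations and contiguity matrix are invented:

```python
import numpy as np

def eb_smooth(cases, pop):
    """Marshall's global empirical Bayes estimator: shrink raw rates toward
    the global mean, shrinking small-population (unstable) areas hardest."""
    r = cases / pop
    m = cases.sum() / pop.sum()            # global mean rate
    s2 = np.sum(pop * (r - m) ** 2) / pop.sum() - m / pop.mean()
    s2 = max(s2, 0.0)                      # variance estimate floored at zero
    w = s2 / (s2 + m / pop)                # per-area shrinkage weight
    return w * r + (1 - w) * m

def morans_i(x, W):
    """Global Moran's I for values x and a symmetric binary weight matrix W."""
    d = x - x.mean()
    return len(x) / W.sum() * (d @ W @ d) / (d @ d)

cases = np.array([40, 1, 60, 20, 90])
pop = np.array([12000, 3000, 80000, 9000, 150000])
W = np.array([[0, 1, 1, 0, 0], [1, 0, 1, 0, 0], [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 1], [0, 0, 1, 1, 0]])   # invented contiguity
rates = eb_smooth(cases, pop) * 100_000            # per 100,000 inhabitants
print(rates, morans_i(rates, W))
```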

Relevance:

30.00%

Publisher:

Abstract:

A new approach is proposed to simulate splash erosion on local soil surfaces. Without the effect of wind and other raindrops, the impact of each free-falling raindrop was considered an independent event from the stochastic viewpoint. The erosivity of a single raindrop, which depends on its kinetic energy, was computed from an empirical relationship in which the kinetic energy is expressed as a power function of the equivalent diameter of the raindrop. An empirical linear function combining the kinetic energy and soil shear strength was used to estimate the amount of soil particles detached by a single raindrop. Considering an ideal local soil surface of size 1 m x 1 m, the expected number of received free-falling raindrops of different diameters per unit time was described by combining the raindrop size distribution function and the terminal velocity of raindrops. The total splash amount was taken as the sum of the amounts detached by all raindrops in the rainfall event. The total splash amount per unit time was subdivided into three components: net splash amount, single impact amount and re-detachment amount. The re-detachment amount was obtained from a spatial geometric probability derived using the Poisson function, in which overlapping impacted areas were considered. The net splash amount was defined as the mass of soil particles collected outside the splash dish. It was estimated by another spatial geometric probability in which the average splashed distance, related to the median grain size of the soil, and the effects of other impacted soil particles and other free-falling raindrops were considered. Splash experiments under artificial rainfall were carried out to validate the applicability and accuracy of the model. Our simulated results suggested that the net splash amount and re-detachment amount were small parts of the total splash amount; their proportions were 0.15% and 2.6%, respectively. The comparison of simulated data with measured data showed that this model can simulate the soil-splash process successfully, requiring only the rainfall intensity and the original soil properties, including initial bulk density, water content, median grain size and some empirical constants related to the soil surface shear strength, the raindrop size distribution function and the average splashed distance.
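
The Poisson overlap argument behind the re-detachment term can be sketched briefly; the crater-size assumption and all numbers here are illustrative, not the paper's calibration:

```python
import numpy as np

def overlap_probability(n_drops, crater_area, plot_area=1.0):
    """Probability that a new impact overlaps at least one earlier crater,
    assuming crater centres form a spatial Poisson process."""
    lam = n_drops / plot_area              # impacts per unit area
    return 1.0 - np.exp(-lam * crater_area)

d = 2e-3                                   # raindrop diameter (m)
crater = np.pi * (2.5 * d / 2) ** 2        # assumed crater ~2.5 drop diameters
print(overlap_probability(n_drops=5000, crater_area=crater))
# ~0.09: under these assumptions roughly 9% of impacts strike soil that was
# already detached once, which is what feeds the re-detachment component.
```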

Relevance:

30.00%

Publisher:

Abstract:

Gridded sound speed data were calculated using Del Grosso's formulation from the temperature and salinity data at the PN section in the East China Sea covering 92 cruises between February 1978 and October 2000. The vertical gradients of sound speed are mainly related to the seasonal variations, and the strong horizontal gradients are mainly related to the Kuroshio and the upwelling. The standard deviations show that great variations of sound speed exist in the upper layer and in the slope zone. Empirical orthogonal function analysis shows that contributions of surface heating and the Kuroshio to sound speed variance are almost equivalent.

Relevance:

30.00%

Publisher:

Abstract:

A problem with use of the geostatistical kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation. This is because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models and the coefficients used in kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models. Optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
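
A sketch of the design logic, assuming a spherical variogram and a square grid whose worst-case prediction point is a cell centre; the variogram parameters are illustrative:

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=500.0):
    """Spherical variogram model (illustrative parameters)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0.0, 0.0, g))

def ok_variance(spacing, gamma):
    """Ordinary kriging variance at the centre of a square grid cell,
    predicted from the four surrounding grid nodes."""
    pts = spacing * np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
    target = spacing * np.array([0.5, 0.5])
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0                        # unbiasedness (Lagrange) row/column
    A[:n, :n] = gamma(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    b = np.append(gamma(np.linalg.norm(pts - target, axis=1)), 1.0)
    lam = np.linalg.solve(A, b)            # kriging weights + multiplier
    return lam @ b                         # sigma^2 = lambda . gamma0 + mu

for spacing in (50, 100, 200, 400):        # widen until a tolerance is exceeded
    print(spacing, round(ok_variance(spacing, spherical), 4))
```

Repeating this with a locally fitted variogram per sub-region yields different tolerable spacings, which is the paper's argument against a single global design.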

Relevance:

30.00%

Publisher:

Abstract:

The valuation of environmental benefits has been well researched in the forestry sector. This is not generally the case in the agricultural sector, although schemes to compensate farmers for the provision of officially defined environmental benefits are already in place throughout the European Union. This paper draws on empirical findings from forestry and deductions from economic theory to challenge the notion of the universality of such benefits. Empirical findings from forestry suggest recreational use value is location specific rather than widely spread. Household utility theory predicts zero willingness to pay to maintain the status quo level of a previously unpaid-for environmental benefit (when provision is not perceived as under risk) but a positive willingness to pay for an increase. Thus, non-use values cannot be attributed to the major part of the existing commercial forestry area but only to spatially restricted schemes such as additional afforestation or the preservation of ancient natural woodlands.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to identify, clarify and tabulate the various managerial issues encountered, to aid in the management of the complex health and safety concerns which occur within a confined construction site environment.

Design/methodology/approach – This is achieved by conducting extensive qualitative and quantitative research in the form of case studies, interviews and a questionnaire survey.

Findings – The leading managerial issues in the management of health and safety on a confined construction site are found to be: “Difficulty to move materials around site safely”; “Lack of adequate room for the effective handling of materials”; “Difficulty in ensuring site is tidy and all plant and materials are stored safely”; “Close proximity of individuals to operation of large plant and machinery”; and joint fifth “Difficulty in ensuring proper arrangement and collection of waste materials on-site” along with “Difficulty in controlling hazardous materials and equipment on site”.

Practical implications – The practical implication of these results is that, with the sustained development of urban centres on a global scale and the increasing complexity of architectural designs, most on-site project management professionals face the onerous task of completing often intricate designs within a limited spatial environment, under strict health and safety parameters.

Originality/value – The value of the findings is that, where on-site management professionals successfully identify the managerial issues highlighted here, the successful management of health and safety on a confined construction site is attainable.

Relevance:

30.00%

Publisher:

Abstract:

Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis uses methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures, together with statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but its Type I error rates were poorly controlled, so it did not show the improvements in performance under model selection achieved by the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
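
As a minimal illustration of one method in the comparison, generalized least squares with a known exponential spatial covariance (in practice the covariance parameters are estimated, e.g. by REML, which is where the compared methods differ):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
coords = rng.uniform(0, 100, (n, 2))                 # synthetic sample locations
X = np.column_stack([np.ones(n), rng.normal(size=n)])
D = np.linalg.norm(coords[:, None] - coords[None], axis=2)
Sigma = np.exp(-D / 20.0)                            # exponential covariance
y = X @ np.array([1.0, 0.5]) + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("GLS:", beta_gls, "OLS:", beta_ols)            # true beta is (1.0, 0.5)
```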

Relevance:

30.00%

Publisher:

Abstract:

This study presents the findings of an empirical channel characterisation for an ultra-wideband off-body optic fibre-fed multiple-antenna array within an office and corridor environment. The results show that for received power experiments, the office and corridor were best modelled by lognormal and Rician distributions, respectively [for both line of sight (LOS) and non-LOS (NLOS) scenarios]. In the office, LOS measurements for the mean excess delay (τ) and RMS delay spread (τRMS) were both described by the normal distribution for all channels, whereas NLOS measurements for τ and τRMS were Nakagami and Weibull distributed, respectively. For the corridor, LOS measurements for τ and τRMS were either Nakagami or normally distributed for all channels, with NLOS measurements for τ and τRMS being Nakagami and normally distributed, respectively. This work also shows that the achievable diversity gain was influenced by both mutual coupling and cross-correlation coefficients. Although the best diversity gain was 1.8 dB for three-channel selective diversity combining, the authors present recommendations for improving these results.
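
The distribution-fitting step can be sketched with scipy.stats, fitting each candidate by maximum likelihood and comparing log-likelihoods; the samples below are synthetic stand-ins for the sounder measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
power_db = rng.normal(-60, 4, 1000)            # stand-in received-power samples
samples = power_db - power_db.min() + 1e-6     # several candidates need x > 0

candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "Rician": stats.rice, "Nakagami": stats.nakagami,
              "Weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(samples)                 # maximum-likelihood fit
    ll = dist.logpdf(samples, *params).sum()
    print(f"{name}: log-likelihood = {ll:.1f}")
# Model choice is then made by log-likelihood, AIC or a goodness-of-fit test.
```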

Relevance:

30.00%

Publisher:

Abstract:

With the proliferation of geo-positioning and geo-tagging techniques, spatio-textual objects that possess both a geographical location and a textual description are gaining in prevalence, and spatial keyword queries that exploit both location and textual description are gaining in prominence. However, the queries studied so far generally focus on finding individual objects that each satisfy a query rather than finding groups of objects where the objects in a group together satisfy a query.

We define the problem of retrieving a group of spatio-textual objects such that the group's keywords cover the query's keywords and such that the objects are nearest to the query location and have the smallest inter-object distances. Specifically, we study three instantiations of this problem, all of which are NP-hard. We devise exact solutions as well as approximate solutions with provable approximation bounds. In addition, we solve the problem of retrieving the top-k groups for the three instantiations, and study a weighted version of the problem that incorporates object weights. We present empirical studies that offer insight into the efficiency of the solutions, as well as the accuracy of the approximate solutions.
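
A hedged sketch of the greedy flavour of such approximation algorithms: repeatedly add the object that covers uncovered query keywords at the least distance cost. The cost function and data are illustrative; the paper's algorithms and bounds differ by instantiation:

```python
import numpy as np

def greedy_group(objects, query_loc, query_kws):
    """objects: list of (location, keyword_set) pairs."""
    query_kws = set(query_kws)
    group, covered = [], set()
    while not query_kws <= covered:
        best, best_score = None, float("inf")
        for loc, kws in objects:
            gain = len((kws & query_kws) - covered)
            if gain == 0 or (loc, kws) in group:
                continue
            cost = np.linalg.norm(np.asarray(loc) - np.asarray(query_loc))
            score = cost / gain            # distance paid per new keyword
            if score < best_score:
                best, best_score = (loc, kws), score
        if best is None:
            return None                    # query keywords cannot be covered
        group.append(best)
        covered |= best[1] & query_kws
    return group

objs = [((1, 1), {"coffee", "wifi"}), ((5, 5), {"pizza"}),
        ((2, 0), {"pizza", "parking"})]
print(greedy_group(objs, (0, 0), {"coffee", "pizza"}))
```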