41 results for [JEL:C1] Mathematical and Quantitative Methods - Econometric and Statistical Methods: General
in CentAUR: Central Archive University of Reading - UK
Abstract:
Ranald Roderick Macdonald (1945-2007) was an important contributor to mathematical psychology in the UK, as a referee and action editor for the British Journal of Mathematical and Statistical Psychology and as a participant in and organizer of the British Psychological Society's Mathematics, Statistics and Computing Section meetings. This appreciation argues that his most important contribution was to the foundations of significance testing, where his concern about what information is relevant in interpreting the results of significance tests led him to become a persuasive advocate for the 'Weak Fisherian' form of hypothesis testing.
Abstract:
Answering many of the critical questions in conservation, development and environmental management requires integrating the social and natural sciences. However, understanding the array of available quantitative methods and their associated terminology presents a major barrier to successful collaboration. We provide an overview of quantitative socio-economic methods that distils their complexity into a simple taxonomy. We outline how each has been used in conjunction with ecological models to address questions relating to the management of socio-ecological systems. We review the application of social and ecological quantitative concepts to agro-ecology and classify the approaches used to integrate the two disciplines. Our review included all published integrated models from 2003 to 2008 in 27 journals that publish agricultural modelling research. Although our focus is on agro-ecology, many of the results are broadly applicable to other fields involving an interaction between human activities and ecology. We found 36 papers that integrated social and ecological concepts in a quantitative model. Four different approaches to integration were used, depending on the scale at which human welfare was quantified. Most models viewed humans as pure profit maximizers, both when calculating welfare and when predicting behaviour. Synthesis and applications: We reached two main conclusions based on our taxonomy and review. The first is that quantitative methods that extend predictions of behaviour and measurements of welfare beyond a simple market value basis are underutilized by integrated models. The second is that the accuracy of prediction for integrated models remains largely unquantified. Addressing both problems requires researchers to reach a common understanding of modelling goals and data requirements during the early stages of a project.
Abstract:
A precipitation downscaling method is presented using precipitation from a general circulation model (GCM) as predictor. The method extends a previous method from monthly to daily temporal resolution. The simplest form of the method corrects for biases in wet-day frequency and intensity. A more sophisticated variant also takes account of flow-dependent biases in the GCM. The method is flexible and simple to implement. It is proposed here as a correction of GCM output for applications where sophisticated methods are not available, or as a benchmark for the evaluation of other downscaling methods. Applied to output from reanalyses (ECMWF, NCEP) in the region of the European Alps, the method is capable of reducing large biases in the precipitation frequency distribution, even for high quantiles. The two variants exhibit similar performance, but the ideal choice of method can depend on the GCM/reanalysis, and it is recommended to test the methods in each case. Limitations of the method are found in small areas with unresolved topographic detail that influences higher-order statistics (e.g. high quantiles). When used as a benchmark for three regional climate models (RCMs), the corrected reanalysis and the RCMs perform similarly in many regions, but the added value of the latter is evident for high quantiles in some small regions.
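As a rough illustration of the simplest variant described in this abstract (correcting wet-day frequency and intensity), the sketch below is an assumption-laden reconstruction, not the paper's actual method: the function name, the quantile-threshold logic and the mean-based rescaling are all hypothetical choices.

```python
import numpy as np

def correct_precip(gcm_precip, obs_wet_frac, obs_wet_mean):
    """Hypothetical wet-day frequency and intensity correction.

    Step 1: pick a threshold on the GCM daily series so that the fraction
    of days above it equals the observed wet-day fraction.
    Step 2: zero the sub-threshold days and rescale the remaining wet-day
    amounts so their mean equals the observed mean wet-day intensity.
    """
    p = np.asarray(gcm_precip, dtype=float)
    thr = np.quantile(p, 1.0 - obs_wet_frac)       # frequency correction
    corrected = np.where(p > thr, p, 0.0)          # dry days set to zero
    wet = corrected > 0
    if wet.any():
        # intensity correction: match the observed mean wet-day amount
        corrected[wet] *= obs_wet_mean / corrected[wet].mean()
    return corrected
```

The flow-dependent variant mentioned in the abstract would condition these corrections on the circulation state, which is beyond this sketch.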
Abstract:
The congruential rule advanced by Graves for polarization basis transformation of the radar backscatter matrix is now often misinterpreted as an example of consimilarity transformation. However, consimilarity transformations imply a physically unrealistic antilinear time-reversal operation. This is just one of the approaches found in the literature to the description of transformations where the role of conjugation has been misunderstood. In this paper, the different approaches are examined, in particular with respect to the role of conjugation. In order to justify and correctly derive the congruential rule for polarization basis transformation and properly place the role of conjugation, the origin of the problem is traced back to the derivation of the antenna height from the transmitted field. In fact, careful consideration of the role played by the Green’s dyadic operator relating the antenna height to the transmitted field shows that, under general unitary basis transformation, it is not justified to assume a scalar relationship between them. Invariance of the voltage equation shows that antenna states and wave states must in fact lie in dual spaces, a distinction not captured in conventional Jones vector formalism. Introducing spinor formalism, and with the use of an alternate spin frame for the transmitted field, a mathematically consistent implementation of the directional wave formalism is obtained. Examples are given comparing the wider generality of the congruential rule in both active and passive transformations with the consimilarity rule.
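For readers comparing the two rules contrasted in this abstract, standard linear-algebra notation (not drawn from the paper itself) distinguishes them as follows, with $U$ a unitary basis-change matrix, $S$ the backscatter matrix, and $\overline{U}$ the elementwise complex conjugate:

```latex
\[
S' = U\,S\,U^{\mathsf{T}} \quad \text{(congruential rule)},
\qquad
S' = U\,S\,\overline{U}^{-1} \quad \text{(consimilarity)} .
\]
```

The conjugation in the consimilarity rule is precisely what makes the map antilinear, which is why the abstract describes it as a physically unrealistic time-reversal-like operation.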
Abstract:
We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
Abstract:
We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
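The analysis update described in this abstract (and the closely related record above) can be caricatured in a few lines. This is a minimal sketch assuming a fixed proportionality constant `r` between thickness and concentration increments; the variable names, the clipping, and `r` itself are illustrative assumptions, not the ECHAM5/MPI-OM implementation.

```python
import numpy as np

def analysis_update(conc, thick, conc_obs, dt, tau, r):
    """One Newtonian-relaxation analysis step (illustrative sketch only).

    conc, thick : model sea-ice concentration and mean-thickness fields
    conc_obs    : observed/analysed concentration
    dt, tau     : time step and relaxation time scale
    r           : assumed proportionality constant linking the thickness
                  increment to the concentration increment
    """
    d_conc = (dt / tau) * (conc_obs - conc)        # Newtonian relaxation
    d_thick = r * d_conc                           # proportional update
    new_conc = np.clip(conc + d_conc, 0.0, 1.0)    # concentration in [0, 1]
    new_thick = np.maximum(thick + d_thick, 0.0)   # thickness non-negative
    return new_conc, new_thick
```

The point made in the abstract is that coupling the thickness increment to the concentration increment in this proportional way outperforms updates that hold either mean or actual ice thickness fixed.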
Abstract:
The paper discusses the observed and projected warming in the Caucasus region and its implications for glacier melt and runoff. A strong positive trend in summer air temperatures of 0.05 degrees C per year is observed in the high-altitude areas, driving strong glacier melt and a continuous decline in glacier mass balance. A warming of 4-7 degrees C and 3-5 degrees C is projected for the summer months in 2071-2100 under the A2 and B2 emission scenarios respectively, suggesting that enhanced glacier melt can be expected. The expected changes in winter precipitation will not compensate for the summer melt, and glacier retreat is likely to continue. However, a projected small increase in both winter and summer precipitation, combined with the enhanced glacier melt, will result in summer runoff in the currently glaciated region of the Caucasus increasing by more than 50% compared with the baseline period (independent of whether the region is glaciated at the end of the twenty-first century).
Abstract:
In this paper, we address issues in the segmentation of remotely sensed LIDAR (LIght Detection And Ranging) data. The LIDAR data, which were captured by an airborne laser scanner, contain 2.5-dimensional (2.5D) terrain surface height information, e.g. houses, vegetation, flat fields, rivers and basins. Our aim in this paper is to segment ground (flat field) from non-ground (houses and high vegetation) in hilly urban areas. By projecting the 2.5D data onto a surface, we obtain a texture map as a grey-level image. Based on this image, Gabor wavelet filters are applied to generate Gabor wavelet features. These features are then grouped into various windows. Among these windows, a combination of their first- and second-order statistics is used as a measure to determine the surface properties. The test results show that ground areas can successfully be segmented from LIDAR data. Most buildings and high vegetation can be detected. In addition, the Gabor wavelet transform can partially remove hill or slope effects in the original data by tuning the Gabor parameters.
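A minimal sketch of the Gabor-feature step described above: a real-valued Gabor kernel plus mean/standard-deviation statistics over non-overlapping windows of a filter response. All parameter choices and function names are hypothetical, not the authors' implementation.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=15):
    """Real-valued Gabor kernel: Gaussian envelope times an oriented cosine
    carrier (illustrative parameterisation)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def window_stats(response, win=8):
    """First- and second-order statistics (mean, std) over non-overlapping
    windows of a filter response, as a crude texture measure."""
    h, w = response.shape
    h, w = h - h % win, w - w % win                # crop to whole windows
    blocks = response[:h, :w].reshape(h // win, win, w // win, win)
    return blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))
```

In a full pipeline the height-derived grey-level image would be convolved with a bank of such kernels at several frequencies and orientations before the window statistics are computed and thresholded into ground/non-ground.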
Abstract:
The development of a combined engineering and statistical Artificial Neural Network model of UK domestic appliance load profiles is presented. The model uses diary-style appliance-use data and a survey questionnaire collected from 51 suburban households and 46 rural households during the summers of 2010 and 2011 respectively. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with backpropagation training and a 12:10:24 architecture. Model outputs include appliance load profiles, which can be applied to the fields of energy planning (micro-renewables and smart grids), building simulation tools and energy policy.
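The 12:10:24 feed-forward architecture with backpropagation can be sketched as follows. The abstract's prototype is in MATLAB; this NumPy version, with a sigmoid hidden layer, linear outputs (e.g. 24 hourly load values) and squared-error loss, is an illustrative assumption and not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 inputs -> 10 hidden units -> 24 outputs (sizes from the abstract;
# everything else here is an illustrative assumption)
W1 = rng.normal(0.0, 0.1, (12, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.1, (10, 24)); b2 = np.zeros(24)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)          # hidden layer (10 units)
    return h, h @ W2 + b2             # linear output (24 values)

def train_step(x, y, lr=0.01):
    """One backpropagation step with squared-error loss 0.5*||out - y||^2."""
    global W1, b1, W2, b2
    h, out = forward(x)
    err = out - y                     # dL/d(out)
    gW2 = np.outer(h, err); gb2 = err
    dh = (W2 @ err) * h * (1.0 - h)   # backprop through the sigmoid
    gW1 = np.outer(x, dh); gb1 = dh
    W2 -= lr * gW2; b2 -= lr * gb2    # gradient-descent updates
    W1 -= lr * gW1; b1 -= lr * gb1
    return 0.5 * np.sum(err**2)
```

Training iterates `train_step` over the survey-derived input features and measured load profiles until the loss stops improving.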