13 results for Dynamic data analysis

in University of Queensland eSpace - Australia


Relevance: 100.00%

Abstract:

Tropical deforestation is the major contemporary threat to global biodiversity, because a diminishing extent of tropical forests supports the majority of the Earth's biodiversity. Forest clearing is often spatially concentrated in regions where human land use pressures, either planned or unplanned, increase the likelihood of deforestation. However, it is not a random process, but often moves in waves originating from settled areas. We investigate the spatial dynamics of land cover change in a tropical deforestation hotspot in the Colombian Amazon. We apply a forest cover zoning approach which permitted: calculation of colonization speed; comparative spatial analysis of patterns of deforestation and regeneration; analysis of spatial patterns of mature and recently regenerated forests; and the identification of local-level hotspots experiencing the fastest deforestation or regeneration. The colonization frontline moved at an average of 0.84 km yr⁻¹ from 1989 to 2002, resulting in the clearing of 3400 ha yr⁻¹ of forests beyond the 90% forest cover line. The dynamics of forest clearing varied across the colonization front according to the amount of forest in the landscape, but was spatially concentrated in well-defined 'local hotspots' of deforestation and forest regeneration. Behind the deforestation front, the transformed landscape mosaic is composed of cropping and grazing lands interspersed with mature forest fragments and patches of recently regenerated forests. We discuss the implications of the patterns of forest loss and fragmentation for biodiversity conservation within a framework of dynamic conservation planning.
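The colonization speed reported above is a simple rate of frontline advance. A minimal sketch, with hypothetical frontline positions chosen so the result reproduces the reported ~0.84 km per year over 1989-2002:

```python
# Minimal sketch: average advance of a colonization front between two
# survey years. The frontline positions below are hypothetical, chosen
# to reproduce the reported figure; they are not the study's data.
def front_speed(pos_start_km, pos_end_km, year_start, year_end):
    """Average frontline advance in km per year."""
    return (pos_end_km - pos_start_km) / (year_end - year_start)

speed = front_speed(0.0, 10.92, 1989, 2002)
print(round(speed, 2))  # → 0.84
```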

Relevance: 100.00%

Abstract:

The development of scramjet propulsion for alternative launch and payload delivery capabilities has been composed largely of ground experiments for the last 40 years. With the goal of validating the use of short duration ground test facilities, a ballistic reentry vehicle experiment called HyShot was devised to achieve supersonic combustion in flight above Mach 7.5. It consisted of a double wedge intake and two back-to-back constant area combustors; one supplied with hydrogen fuel at an equivalence ratio of 0.34 and the other unfueled. Of the two flights conducted, HyShot 1 failed to reach the desired altitude due to booster failure, whereas HyShot 2 successfully accomplished both the desired trajectory and satisfactory scramjet operation. Postflight data analysis of HyShot 2 confirmed the presence of supersonic combustion during the approximately 3 s test window at altitudes between 35 and 29 km. Reasonable correlation between flight and some preflight shock tunnel tests was observed.

Relevance: 100.00%

Abstract:

Research in conditioning (all the processes of preparation for competition) has used group research designs, where multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to find out the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of the following review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs, and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data are discussed, along with their advantages and limitations. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
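As an illustration of how AB single-subject data might be analysed, the sketch below compares a hypothetical athlete's baseline (A) and intervention (B) phases. The data values, and the choice of level change and percentage of non-overlapping data (PND) as metrics, are illustrative assumptions, not the review's prescribed method:

```python
# Toy AB single-subject analysis: a baseline phase (A) followed by an
# intervention phase (B) for one athlete. All scores are hypothetical.
from statistics import mean

baseline = [52.1, 51.8, 52.4, 52.0, 51.9]      # phase A performance scores
intervention = [53.0, 53.4, 53.9, 54.1, 54.5]  # phase B performance scores

# Change in level: difference between the two phase means.
level_change = mean(intervention) - mean(baseline)

# Percentage of non-overlapping data (PND): share of B points that
# exceed the best baseline point, a common single-subject metric.
pnd = sum(x > max(baseline) for x in intervention) / len(intervention)

print(round(level_change, 2), pnd)  # → 1.74 1.0
```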

Relevance: 100.00%

Abstract:

In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs; non-static output quantities, input prices and costs of adjustment; technological change; quasi-fixed inputs; and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
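The headline finding, that static models overstate savings when adjustment is costly, can be shown with a toy calculation. The quadratic adjustment-cost form and all numbers below are assumptions for illustration, not the paper's DEA specification:

```python
# Toy illustration: cost saving from moving an input toward its optimum,
# with and without a (hypothetical) quadratic cost of adjustment.
def static_saving(current_x, optimal_x, price):
    """Saving a static model reports: jump straight to the optimum."""
    return price * (current_x - optimal_x)

def dynamic_saving(current_x, optimal_x, price, adj_cost):
    """Saving net of a quadratic adjustment cost on the same move."""
    step = current_x - optimal_x
    return price * step - adj_cost * step ** 2

s_static = static_saving(100, 80, price=10.0)
s_dynamic = dynamic_saving(100, 80, price=10.0, adj_cost=0.25)
print(s_static, s_dynamic)  # → 200.0 100.0: the static figure is double
```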

Relevance: 90.00%

Abstract:

Quantile computation has many applications including data mining and financial data analysis. It has been shown that an ε-approximate summary can be maintained so that, given a quantile query (φ, ε), the data item at rank ⌈φN⌉ may be approximately obtained within rank error precision εN over all N data items in a data stream or in a sliding window. However, scalable online processing of massive continuous quantile queries with different φ and ε poses a new challenge because the summary is continuously updated with new arrivals of data items. In this paper, first we aim to dramatically reduce the number of distinct query results by grouping a set of different queries into a cluster so that they can be processed virtually as a single query while the precision requirements from users can be retained. Second, we aim to minimize the total query processing costs. Efficient algorithms are developed to minimize the number of times clusters must be reprocessed and to produce the minimum number of clusters, respectively. The techniques are extended to maintain near-optimal clustering when queries are registered and removed in an arbitrary fashion against whole data streams or sliding windows. In addition to theoretical analysis, our performance study indicates that the proposed techniques are indeed scalable with respect to the number of input queries as well as the number of items and the item arrival rate in a data stream.
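The clustering idea can be sketched as follows: each query (φ, ε) tolerates any answer whose rank lies within φ ± ε, so queries whose tolerance intervals share a common point can be served by one representative. A greedy interval-stabbing sweep, used here as an illustrative stand-in for the paper's algorithms, yields a minimum set of such clusters:

```python
# Hedged sketch: group quantile queries (phi, eps) whose tolerance
# intervals [phi - eps, phi + eps] can be stabbed by a single point.
# The greedy sweep by right endpoint gives a minimum number of clusters.
def cluster_queries(queries):
    """queries: list of (phi, eps) pairs; returns a list of clusters."""
    intervals = sorted(queries, key=lambda q: q[0] + q[1])  # by right end
    clusters, stab = [], None
    for phi, eps in intervals:
        if stab is None or phi - eps > stab:
            stab = phi + eps          # open a new cluster at this right end
            clusters.append([])
        clusters[-1].append((phi, eps))
    return clusters

qs = [(0.50, 0.01), (0.505, 0.01), (0.90, 0.01)]
print(len(cluster_queries(qs)))  # → 2: the two median-area queries share a cluster
```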

Relevance: 90.00%

Abstract:

The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
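The imputation step described above, where missing responses are drawn from their conditional distribution at each Gibbs iteration alongside the parameters, can be sketched with a toy model. A Bernoulli response with a Beta(1, 1) prior stands in here for the paper's multinomial hierarchical model; the data and priors are illustrative assumptions:

```python
# Toy Gibbs sampler with data augmentation: missing binary responses are
# imputed from the current parameter draw, then the parameter is drawn
# from its full conditional given observed plus imputed data.
import random

random.seed(0)
observed = [1, 1, 0, 1, 0, 1, 1]  # hypothetical employment indicators
n_missing = 3                      # responses missing in one wave

theta, draws = 0.5, []
for _ in range(5000):
    # 1. Impute each missing response given the current theta.
    imputed = [1 if random.random() < theta else 0 for _ in range(n_missing)]
    # 2. Draw theta from its Beta full conditional given all responses.
    k = sum(observed) + sum(imputed)
    n = len(observed) + n_missing
    theta = random.betavariate(1 + k, 1 + n - k)
    draws.append(theta)

# Marginally the imputed values carry no extra information, so the
# posterior mean should sit near the observed-data value 6/9 ≈ 0.67.
print(round(sum(draws) / len(draws), 2))
```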

Relevance: 90.00%

Abstract:

The importance of availability of comparable real income aggregates and their components to applied economic research is highlighted by the popularity of the Penn World Tables. Any methodology designed to achieve such a task requires the combination of data from several sources. The first is purchasing power parities (PPP) data available from the International Comparisons Project roughly every five years since the 1970s. The second is national level data on a range of variables that explain the behaviour of the ratio of PPP to market exchange rates. The final source of data is the national accounts publications of different countries which include estimates of gross domestic product and various price deflators. In this paper we present a method to construct a consistent panel of comparable real incomes by specifying the problem in state-space form. We present our completed work as well as briefly indicate our work in progress.
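One way a state-space formulation can handle benchmark PPP observations that arrive only every few years is a filter that predicts every period and updates only when a benchmark exists. A minimal local-level Kalman filter sketch, with all series values and variances hypothetical rather than taken from the paper:

```python
# Hedged sketch: local-level (random walk) state-space model filtered
# over a series with missing observations. In gap years the filter only
# predicts; when a benchmark arrives it updates. Numbers are illustrative.
def kalman_local_level(ys, q=0.1, r=0.05):
    """ys: observations with None where missing; returns filtered means."""
    x, p = 0.0, 1.0          # state mean and variance (vague start)
    out = []
    for y in ys:
        p = p + q            # predict: state follows a random walk
        if y is not None:    # update only when a benchmark exists
            k = p / (p + r)  # Kalman gain
            x = x + k * (y - x)
            p = (1 - k) * p
        out.append(x)
    return out

# Benchmarks every five periods, as with quinquennial PPP rounds.
series = [1.00, None, None, None, None, 1.20, None, None, None, None, 1.35]
filtered = kalman_local_level(series)
print(round(filtered[-1], 2))  # → 1.34
```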