30 results for random walk and efficiency
Abstract:
This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These show only a very modest improvement over forecasts generated by a random walk model. The Pesaran–Timmermann test and a comparison with forecasts generated artificially show that even the best models display no evidence of market-timing ability.
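The random-walk benchmark and the two evaluation criteria named in this abstract can be sketched as follows. This is a minimal illustration on synthetic returns, not the paper's models or data:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.006, size=500)  # synthetic daily returns

# A random walk in prices implies the best forecast of tomorrow's return is zero;
# a naive alternative forecasts tomorrow's return with today's return.
rw_forecast = np.zeros(len(returns) - 1)
naive_forecast = returns[:-1]
actual = returns[1:]

# Mean squared error of each forecast
mse_rw = np.mean((actual - rw_forecast) ** 2)
mse_naive = np.mean((actual - naive_forecast) ** 2)

# Sign prediction: fraction of days where forecast sign matches realised sign
sign_hits = np.mean(np.sign(naive_forecast) == np.sign(actual))
```

For i.i.d. returns the naive forecast roughly doubles the random walk's MSE and hits the correct sign about half the time, which is why beating the random walk on these criteria is a meaningful hurdle.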
Abstract:
Unlike most other biological species, humans can use cultural innovations to occupy a range of environments, raising the intriguing question of whether human migrations move relatively independently of habitat or show preferences for familiar ones. The Bantu expansion that swept out of West Central Africa beginning ∼5,000 y ago is one of the most influential cultural events of its kind, eventually spreading over a vast geographical area a new way of life in which farming played an increasingly important role. We use a new dated phylogeny of ∼400 Bantu languages to show that migrating Bantu-speaking populations did not expand from their ancestral homeland in a “random walk” but, rather, followed emerging savannah corridors, with rainforest habitats repeatedly imposing temporal barriers to movement. When populations did move from savannah into rainforest, rates of migration were slowed, delaying the occupation of the rainforest by on average 300 y, compared with similar migratory movements exclusively within savannah or within rainforest by established rainforest populations. Despite humans' unmatched ability to produce innovations culturally, unfamiliar habitats significantly alter the route and pace of human dispersals.
Abstract:
Background and Aims: Phosphate (Pi) is one of the most limiting nutrients for agricultural production in Brazilian soils due to low soil Pi concentrations and rapid fixation of fertilizer Pi by adsorption to oxidic minerals and/or precipitation by iron and aluminum ions. The objectives of this study were to quantify phosphorus (P) uptake and use efficiency in cultivars of the species Coffea arabica L. and Coffea canephora L., and group them in terms of efficiency and response to Pi availability. Methods: Plants of 21 cultivars of C. arabica and four cultivars of C. canephora were grown under contrasting soil Pi availabilities. Biomass accumulation, tissue P concentration and accumulation and efficiency indices for P use were measured. Key Results: Coffee plant growth was significantly reduced under low Pi availability, and P concentration was higher in cultivars of C. canephora. The young leaves accumulated more P than any other tissue. The cultivars of C. canephora had a higher root/shoot ratio and were significantly more efficient in P uptake, while the cultivars of C. arabica were more efficient in P utilization. Agronomic P use efficiency varied among coffee cultivars and E16 Shoa, E22 Sidamo, Iêmen and Acaiá cultivars were classified as the most efficient and responsive to Pi supply. A positive correlation between P uptake efficiency and root to shoot ratio was observed across all cultivars at low Pi supply. These data identify Coffea genotypes better adapted to low soil Pi availabilities, and the traits that contribute to improved P uptake and use efficiency. These data could be used to select current genotypes with improved P uptake or utilization efficiencies for use on soils with low Pi availability and also provide potential breeding material and targets for breeding new cultivars better adapted to the low Pi status of Brazilian soils. 
This could ultimately reduce the use of Pi fertilizers in tropical soils, and contribute to more sustainable coffee production.
Abstract:
Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, which takes advantage of the computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.
Abstract:
New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration data sets extend an additional 2000 yr, from 0-26 cal kyr BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision, and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically-dated tree-ring samples, converted with a box diffusion model to marine mixed-layer ages, cover the period from 0-10.5 cal kyr BP. Beyond 10.5 cal kyr BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific C-14 reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring data sets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al. (this issue).
Abstract:
A new calibration curve for the conversion of radiocarbon ages to calibrated (cal) ages has been constructed and internationally ratified to replace IntCal98, which extended from 0-24 cal kyr BP (Before Present, 0 cal BP = AD 1950). The new calibration data set for terrestrial samples extends from 0-26 cal kyr BP, but with much higher resolution beyond 11.4 cal kyr BP than IntCal98. Dendrochronologically-dated tree-ring samples cover the period from 0-12.4 cal kyr BP. Beyond the end of the tree rings, data from marine records (corals and foraminifera) are converted to the atmospheric equivalent with a site-specific marine reservoir correction to provide terrestrial calibration from 12.4-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a coherent statistical approach based on a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The tree-ring data sets, sources of uncertainty, and regional offsets are discussed here. The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed in brief, but details are presented in Hughen et al. (this issue a). We do not make a recommendation for calibration beyond 26 cal kyr BP at this time; however, potential calibration data sets are compared in another paper (van der Plicht et al., this issue).
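The core idea of the random walk model in the two calibration abstracts above, an underlying curve that evolves as a random walk and is observed with measurement error, can be sketched with a simple Kalman filter on invented numbers. The Buck and Blackwell approach also propagates uncertainty in the calendar ages, which is omitted here for brevity:

```python
import numpy as np

# Hypothetical data: the "true" curve is a pure random walk on a calendar-age
# grid, observed with independent measurement error (the C-14 age error).
rng = np.random.default_rng(1)
n = 200
true_curve = np.cumsum(rng.normal(0.0, 10.0, n))   # random-walk evolution
obs = true_curve + rng.normal(0.0, 25.0, n)        # noisy determinations

# Kalman filter matching the random-walk state model
q, r = 10.0 ** 2, 25.0 ** 2   # step variance and measurement variance
x, p = obs[0], r
filtered = np.empty(n)
for i, y in enumerate(obs):
    p = p + q                  # predict: one random-walk step
    k = p / (p + r)            # Kalman gain
    x = x + k * (y - x)        # update with the new determination
    p = (1 - k) * p
    filtered[i] = x
```

Because the state model pools information across neighbouring grid points, the filtered curve recovers the underlying walk with lower error than the raw determinations, which is the motivation for fitting the calibration curve this way rather than point by point.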
Abstract:
The results from three types of study with broilers, namely nitrogen (N) balance, bioassays and growth experiments, provided the data used herein. Sets of data on N balance and protein accretion (bioassay studies) were used to assess the ability of the monomolecular equation to describe the relationship between (i) N balance and amino acid (AA) intake and (ii) protein accretion and AA intake. The model estimated the levels of isoleucine, lysine, valine, threonine, methionine, total sulphur AAs and tryptophan resulting in zero balance to be 58, 59, 80, 96, 23, 85 and 32 mg/kg live weight (LW)/day, respectively. These estimates show good agreement with those obtained in previous studies. For the growth experiments, four models, specifically re-parameterized for analysing energy balance data, were evaluated for their ability to determine crude protein (CP) intake at maintenance and efficiency of utilization of CP intake for producing gain. They were: a straight line, two equations representing diminishing returns behaviour (monomolecular and rectangular hyperbola) and one equation describing smooth sigmoidal behaviour with a fixed point of inflexion (Gompertz). The estimates of CP requirement for maintenance and efficiency of utilization of CP intake for producing gain varied from 5.4 to 5.9 g/kg LW/day and 0.60 to 0.76, respectively, depending on the models.
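The monomolecular (diminishing-returns) form used in the abstract above, and the zero-balance intake it implies, can be written down directly; the parameter values below are invented for illustration, not estimates from the broiler data:

```python
import math

# Monomolecular response: N(x) = Nmax - (Nmax - N0) * exp(-k * x), where x is
# amino acid intake (mg/kg LW/day), N0 the (negative) N balance at zero intake
# and Nmax the asymptotic balance. All parameter values are illustrative.
def n_balance(x, n0=-80.0, nmax=250.0, k=0.01):
    return nmax - (nmax - n0) * math.exp(-k * x)

def zero_balance_intake(n0=-80.0, nmax=250.0, k=0.01):
    # Solving N(x) = 0 analytically gives x0 = ln((nmax - n0) / nmax) / k
    return math.log((nmax - n0) / nmax) / k

x0 = zero_balance_intake()
```

The per-amino-acid maintenance estimates quoted in the abstract (e.g. 58 mg/kg LW/day for isoleucine) are exactly this x0 quantity, one fit per amino acid.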
Abstract:
We developed a stochastic simulation model incorporating most processes likely to be important in the spread of Phytophthora ramorum and similar diseases across the British landscape (covering Rhododendron ponticum in woodland and nurseries, and Vaccinium myrtillus in heathland). The simulation allows for movements of diseased plants within a realistically modelled trade network and long-distance natural dispersal. A series of simulation experiments was run with the model, varying epidemic pressure and the linkage between natural vegetation and the horticultural trade, with or without disease spread in commercial trade, and with or without inspections-with-eradication, giving a 2 × 2 × 2 × 2 factorial design started at 10 arbitrary locations spread across England. Fifty replicate simulations were made at each set of parameter values. Individual epidemics varied dramatically in size due to stochastic effects throughout the model. Across a range of epidemic pressures, the size of the epidemic was 5-13 times larger when commercial movement of plants was included. A key unknown factor in the system is the area of susceptible habitat outside the nursery system. Inspections, with a probability of detection and efficiency of infected-plant removal of 80% and made at 90-day intervals, reduced the size of epidemics by about 60% across the three sectors with a density of 1% susceptible plants in broadleaf woodland and heathland. Reducing this density to 0.1% largely isolated the trade network, so that inspections reduced the final epidemic size by over 90%, and most epidemics ended without escape into nature. Even in this case, however, major wild epidemics developed in a few percent of cases. Provided the number of new introductions remains low, the current inspection policy will control most epidemics. However, as the rate of introduction increases, it can overwhelm any reasonable inspection regime, largely due to spread prior to detection.
Abstract:
It has previously been shown that experimental infections of the parasitic trematode Schistosoma mansoni, the adult worms of which reside in the blood stream of the mammalian host, significantly reduced atherogenesis in apolipoprotein E gene knockout (apoE(-/-)) mice. These effects occurred in tandem with a lowering of serum total cholesterol levels in both apoE(-/-) and random-bred laboratory mice and a beneficial increase in the proportion of HDL to LDL cholesterol. To better understand how the parasitic infections induce these effects, we have here investigated the involvement of adult worms and their eggs on lipids in the host. Our results indicate that the serum cholesterol-lowering effect is mediated by factors released from S. mansoni eggs, while the presence of adult worms seemed to have had little or no effect. It was also observed that high levels of lipids, particularly triacylglycerols and cholesteryl esters, present in the uninfected livers of both random-bred and apoE(-/-) mice fed a high-fat diet were not present in livers of the schistosome-infected mice.
Abstract:
This paper presents several new families of cumulant-based linear equations with respect to the inverse filter coefficients for deconvolution (equalisation) and identification of nonminimum phase systems. Based on noncausal autoregressive (AR) modelling of the output signals and three theorems, these equations are derived for the cases of 2nd-, 3rd- and 4th-order cumulants, respectively, and can be expressed in identical or similar forms. The algorithms constructed from these equations are simpler in form, but can offer more accurate results than the existing methods. Since the inverse filter coefficients are simply the solution of a set of linear equations, their uniqueness can normally be guaranteed. Simulations are presented for the cases of skewed series, unskewed continuous series and unskewed discrete series. The results of these simulations confirm the feasibility and efficiency of the algorithms.
Abstract:
The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns is examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitised returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with a drift is the most successful for the UK, Belgian and Dutch returns and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess more fully their forecast performance against more traditional models.
Abstract:
This paper uses an entropy-based information approach to determine if farmland values are more closely associated with urban pressure or farm income. The basic question is: how much information on changes in farm real estate values is contained in changes in population versus changes in returns to production agriculture? Results suggest population is informative, but changes in farmland values are more strongly associated with changes in the distribution of returns. However, this relationship is not true for every region nor does it hold over time, as for some regions and time periods changes in population are more informative. Results have policy implications for both equity and efficiency.
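One entropy-based way to ask "which series carries more information about farmland values", in the spirit of the abstract above, is a discretised mutual-information estimate. The sketch below uses synthetic data and is not the paper's measure or dataset:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Discretise two series and estimate I(X;Y) = sum p(x,y) log(p(x,y)/(p(x)p(y)))
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
returns = rng.normal(size=2000)
land_values = 0.8 * returns + rng.normal(scale=0.6, size=2000)  # driven by returns
population = rng.normal(size=2000)                              # independent series

mi_returns = mutual_information(land_values, returns)
mi_population = mutual_information(land_values, population)
```

Here the dependent pair yields a clearly larger information estimate than the independent pair, mirroring the paper's finding that changes in returns are more informative about farmland values than changes in population.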
Abstract:
Understanding how species and ecosystems respond to climate change has become a major focus of ecology and conservation biology. Modelling approaches provide important tools for making future projections, but current models of the climate-biosphere interface remain overly simplistic, undermining the credibility of projections. We identify five ways in which substantial advances could be made in the next few years: (i) improving the accessibility and efficiency of biodiversity monitoring data, (ii) quantifying the main determinants of the sensitivity of species to climate change, (iii) incorporating community dynamics into projections of biodiversity responses, (iv) accounting for the influence of evolutionary processes on the response of species to climate change, and (v) improving the biophysical rule sets that define functional groupings of species in global models.
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends on both how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
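A two-regime SETAR data-generating process of the kind referred to above can be simulated and forecast in a few lines. The coefficients below are invented for illustration and are not those estimated by Kräger and Kugler:

```python
import numpy as np

# SETAR(2; 1, 1): an AR(1) in each regime, regime chosen by the sign of the
# previous observation (threshold 0, delay 1). Coefficients are illustrative.
def setar_step(y_prev, eps):
    if y_prev <= 0.0:
        return 0.5 * y_prev + eps    # lower regime
    return -0.3 * y_prev + eps       # upper regime

rng = np.random.default_rng(3)
y = [0.0]
for _ in range(500):
    y.append(setar_step(y[-1], rng.normal(scale=0.1)))
y = np.array(y)

# One-step-ahead point forecast: the regime is known from the last observation,
# so the conditional mean is piecewise linear in it.
def setar_forecast(y_last):
    return 0.5 * y_last if y_last <= 0.0 else -0.3 * y_last

f = setar_forecast(y[-1])
rw_f = 0.0  # the competing random-walk forecast of the next return
```

The asymmetry between regimes is what the abstract's generalized impulse response functions portray: a positive and a negative shock of the same size decay at different rates.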
Abstract:
The induction of classification rules from previously unseen examples is one of the most important data mining tasks in science as well as commercial applications. In order to reduce the influence of noise in the data, ensemble learners are often applied. However, most ensemble learners are based on decision tree classifiers, which are affected by noise. The Random Prism classifier has recently been proposed as an alternative to the popular Random Forests classifier, which is based on decision trees. Random Prism is based on the Prism family of algorithms, which is more robust to noise. However, like most ensemble classification approaches, Random Prism also does not scale well on large training data. This paper presents a thorough discussion of Random Prism and a recently proposed parallel version of it called Parallel Random Prism. Parallel Random Prism is based on the MapReduce programming paradigm. The paper provides, for the first time, a novel theoretical analysis of the proposed technique and an in-depth experimental study showing that Parallel Random Prism scales well on a large number of training examples, a large number of data features and a large number of processors. The expressiveness of the decision rules that our technique produces makes it a natural choice for Big Data applications where informed decision making increases the user’s trust in the system.
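The map/reduce decomposition that Parallel Random Prism exploits, independent base classifiers trained on samples of the data (map) combined by voting (reduce), can be sketched with a deliberately simple threshold-rule learner standing in for Prism. Everything below is synthetic and illustrative; it is neither the Prism algorithm nor the Parallel Random Prism implementation:

```python
import numpy as np

# Toy data: one informative feature separates two classes
rng = np.random.default_rng(4)
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)

def train_one_rule(args):
    # "Map" task: learn a single threshold rule on a bootstrap sample
    # (a stand-in for a Prism base classifier, not Prism itself).
    Xb, yb = args
    best = (-1.0, 0, 0.0, 1)  # (accuracy, feature, threshold, sign)
    for f in range(Xb.shape[1]):
        for t in np.quantile(Xb[:, f], [0.25, 0.5, 0.75]):
            pred = (Xb[:, f] > t).astype(int)
            acc = np.mean(pred == yb)
            cand = (acc, f, t, 1) if acc >= 0.5 else (1 - acc, f, t, -1)
            if cand[0] > best[0]:
                best = cand
    return best[1:]

# Map phase: bootstrap samples are independent, hence trivially parallelisable
samples = []
for _ in range(15):
    idx = rng.integers(0, len(X), len(X))
    samples.append((X[idx], y[idx]))
rules = list(map(train_one_rule, samples))

# Reduce phase: majority vote of the rule ensemble
def predict(rules, X):
    votes = np.zeros(len(X))
    for f, t, sign in rules:
        pred = (X[:, f] > t).astype(int)
        votes += pred if sign == 1 else 1 - pred
    return (votes > len(rules) / 2).astype(int)

accuracy = np.mean(predict(rules, X) == y)
```

Because each map task touches only its own sample, replacing the sequential `map` with a distributed one (as MapReduce does) changes nothing in the logic, which is the scaling property the paper's experiments examine.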