30 results for Predictive Mean Squared Efficiency
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
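The evaluation statistics quoted in this abstract (mean bias, root-mean-squared error, and relative deviation) can be reproduced from paired retrieved and reference effective radii. Below is a minimal sketch, assuming two co-located NumPy arrays and taking relative deviation as RMSE divided by the mean reference value; all names and values are illustrative, not data from the study.

```python
import numpy as np

def evaluate_retrieval(r_retrieved, r_reference):
    """Bias, RMSE and relative deviation between retrieved and reference effective radii (μm)."""
    diff = r_retrieved - r_reference
    bias = diff.mean()                                  # mean over-/under-estimate
    rmse = np.sqrt(np.mean(diff ** 2))                  # root-mean-squared error
    rel_dev = rmse / r_reference.mean() * 100.0         # relative deviation, percent
    return bias, rmse, rel_dev

# Synthetic example: a retrieval with a small negative bias and random scatter
rng = np.random.default_rng(0)
r_true = rng.uniform(6.0, 14.0, size=500)               # "true" effective radii, μm
r_ret = r_true - 0.8 + rng.normal(0.0, 1.5, size=500)
print(evaluate_retrieval(r_ret, r_true))
```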
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends on both how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
Abstract:
This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with those generated by the commonly used GARCH(1, 1) model. An examination of the orders of models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
Abstract:
This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These show only a very modest improvement over forecasts generated by a random walk model. The Pesaran–Timmerman test and a comparison with forecasts generated artificially show that even the best models have no evidence of market timing ability.
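As an illustration of the two evaluation criteria named in this abstract, the sketch below computes mean squared error and a sign-prediction (directional accuracy) hit rate for a candidate forecast against a random-walk benchmark. The series and the toy forecasting rule are placeholders, not data or models from the paper.

```python
import numpy as np

def mse(actual, forecast):
    return np.mean((actual - forecast) ** 2)

def sign_hit_rate(actual, forecast):
    """Fraction of periods in which the forecast has the same sign as the realised return."""
    return np.mean(np.sign(actual) == np.sign(forecast))

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.006, size=1000)   # placeholder daily returns
model_fcst = 0.1 * np.roll(returns, 1)        # toy AR(1)-style forecast (wraps first value)
rw_fcst = np.zeros_like(returns)              # random walk: forecasts no change in the rate

print("MSE model:", mse(returns, model_fcst), " MSE RW:", mse(returns, rw_fcst))
print("sign hit rate (model):", sign_hit_rate(returns, model_fcst))
```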
Abstract:
High resolution surface wind fields covering the global ocean, estimated from remotely sensed wind data and ECMWF wind analyses, have been available since 2005 with a spatial resolution of 0.25 degrees in longitude and latitude, and a temporal resolution of 6 h. Their quality is investigated through various comparisons with surface wind vectors from 190 buoys moored in various oceanic basins, from research vessels and from QuikSCAT scatterometer data taken during 2005-2006. The NCEP/NCAR and NCDC blended wind products are also considered. The comparisons performed during January-December 2005 show that speeds and directions compare well to in-situ observations, including from moored buoys and ships, as well as to the remotely sensed data. The root-mean-squared differences of the wind speed and direction for the new blended wind data are lower than 2 m/s and 30 degrees, respectively. These values are similar to those estimated in the comparisons of hourly buoy measurements and QuikSCAT near-real-time retrievals. At global scale, it is found that the new products compare well with the wind speed and wind vector components observed by QuikSCAT. No significant dependencies on the QuikSCAT wind speed or on the oceanic region considered are evident.
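The root-mean-squared differences quoted above can be computed from collocated product/buoy pairs; for direction, the differences must first be wrapped into ±180°. A minimal sketch (array names are assumptions):

```python
import numpy as np

def rms_speed_diff(speed_product, speed_buoy):
    """RMS difference of wind speed, in the same units as the inputs (e.g. m/s)."""
    return np.sqrt(np.mean((speed_product - speed_buoy) ** 2))

def rms_direction_diff(dir_product, dir_buoy):
    """RMS difference of wind direction, with angles wrapped into [-180, 180] degrees."""
    d = (dir_product - dir_buoy + 180.0) % 360.0 - 180.0
    return np.sqrt(np.mean(d ** 2))
```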
Abstract:
During the development of new therapies, it is not uncommon to test whether a new treatment works better than the existing treatment for all patients who suffer from a condition (full population) or for a subset of the full population (subpopulation). One approach that may be used for this objective is to have two separate trials, where in the first trial, data are collected to determine whether the new treatment benefits the full population or the subpopulation. The second trial is a confirmatory trial to test the new treatment in the population selected in the first trial. In this paper, we consider the more efficient two-stage adaptive seamless designs (ASDs), where in stage 1, data are collected to select the population to test in stage 2. In stage 2, additional data are collected to perform confirmatory analysis for the selected population. Unlike the approach that uses two separate trials, for ASDs, stage 1 data are also used in the confirmatory analysis. Although ASDs are efficient, using stage 1 data both for selection and for confirmatory analysis introduces selection bias and, consequently, statistical challenges in making inference. We focus on point estimation for such trials. We describe the extent of bias for estimators that ignore the multiple hypotheses and the selection of the population that is most likely to give positive trial results based on observed stage 1 data. We then derive conditionally unbiased estimators and examine their mean squared errors for different scenarios.
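The selection bias discussed in this abstract is easy to demonstrate by simulation: when stage 1 data are used both to pick the better-looking population and in the final pooled estimate, the naive estimator overstates the true effect. A rough sketch under simplified assumptions (normal outcomes, two candidate populations with identical true effects, equal stage sizes; all names and numbers are hypothetical, and this is not the estimator derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = {"full": 0.2, "sub": 0.2}   # identical true effects in both populations
n_stage, n_trials = 100, 20000
naive = []

for _ in range(n_trials):
    # Stage 1: observe both populations, keep the one that looks best
    stage1 = {k: rng.normal(mu, 1.0, n_stage).mean() for k, mu in true_effect.items()}
    selected = max(stage1, key=stage1.get)
    # Stage 2: collect confirmatory data only for the selected population
    stage2_mean = rng.normal(true_effect[selected], 1.0, n_stage).mean()
    # Naive estimator pools stage 1 and stage 2 data for the selected population
    naive.append(0.5 * (stage1[selected] + stage2_mean))

naive = np.array(naive)
print("true effect:", 0.2)
print("naive mean:", naive.mean(),
      " bias:", naive.mean() - 0.2,
      " MSE:", np.mean((naive - 0.2) ** 2))
```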
Abstract:
We apply the Coexistence Approach (CoA) to reconstruct mean annual precipitation (MAP), mean annual temperature (MAT), mean temperature of the warmest month (MTWA) and mean temperature of the coldest month (MTCO) at 44 pollen sites on the Qinghai–Tibetan Plateau. The modern climate ranges of the taxa are obtained (1) from county-level presence/absence data and (2) from data on the optimum and range of each taxon from Lu et al. (2011). The CoA based on the optimum and range data yields better predictions of observed climate parameters at the pollen sites than that based on the county-level data. The presence of arboreal pollen, most of which is derived from outside the region, distorts the reconstructions. More reliable reconstructions are obtained using only the non-arboreal component of the pollen assemblages. The root-mean-squared error (RMSE) of the MAP reconstructions is smaller than the RMSE of MAT, MTWA and MTCO, suggesting that precipitation gradients are the most important control of vegetation distribution on the Qinghai–Tibetan Plateau. Our results show that CoA could be used to reconstruct past climates in this region, although in areas characterized by open vegetation the most reliable estimates will be obtained by excluding possible arboreal contaminants.
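At its core, the Coexistence Approach is an interval intersection: for every taxon present in a sample, take its modern climate tolerance interval and report the range over which all intervals overlap. A minimal sketch with made-up taxon tolerances (not values from the study):

```python
def coexistence_interval(tolerances):
    """Intersect (lower, upper) climate tolerance intervals of the taxa present in a sample.

    tolerances: list of (lower, upper) tuples for one climate variable (e.g. MAP in mm).
    Returns the coexistence interval, or None if the taxa share no common range.
    """
    lower = max(lo for lo, hi in tolerances)
    upper = min(hi for lo, hi in tolerances)
    return (lower, upper) if lower <= upper else None

# Hypothetical MAP tolerances (mm) for taxa identified in one pollen sample
sample_taxa = [(150, 600), (250, 900), (200, 500)]
print(coexistence_interval(sample_taxa))   # -> (250, 500)
```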
Abstract:
Field studies were carried out on the water and sediment dynamics in the tropical, macro-tidal Daly Estuary. The estuary is shallow, very turbid, about 100 km long, and its entrance is funnel-shaped. In the wet, high-flow season, normal tidal ranges can be suppressed in the estuary, depending on inflow rates, and freshwater becomes dominant up to the mouth. At that time a fraction of the fine sediment load is exported offshore as a bottom-tagging nepheloid layer after the sediment falls out of suspension from the thin, near-surface river plume. The remaining fraction and the riverine coarse sediment form a large sediment bar 10 km long, up to 6 m in height and extending across the whole width of the channel near the mouth. This bar, as well as shoals in the estuary, partially ponds the mid- to upper estuary. The bar builds up from the deposition of riverine sediment during a wet season with high runoff and can raise the mean water level by up to 2 m in the upper estuary in the low-flow season. This ponding effect takes about three successive dry years to disappear, as the sediment forming the bar is redistributed over the estuary by tidal pumping of fine and coarse sediment in the dry season, which is the low-flow season. The swift reversal of the tidal currents from ebb to flood results in macro-turbulence that lasts about 20 min. Bed-load transport is preferentially landward and occurs only for water currents greater than 0.6 m s−1. This high threshold velocity suggests that the sand may be cemented by the mud. The Daly Estuary is thus a leaky sediment trap with an efficiency varying both seasonally and inter-annually.
Abstract:
We evaluate the profitability and technical efficiency of aquaculture in the Philippines. Farm-level data are used to compare two production systems corresponding to the intensive monoculture of tilapia in freshwater ponds and the extensive polyculture of shrimps and fish in brackish water ponds. Both activities are very lucrative, with brackish water aquaculture achieving the higher level of profit per farm. Stochastic frontier production functions reveal that technical efficiency is low in brackish water aquaculture, with a mean of 53%, explained primarily by the operator's experience and by the frequency of his visits to the farm. In freshwater aquaculture, the farms achieve a mean efficiency level of 83%. The results suggest that the provision of extension services to brackish water fish farms might be a cost-effective way of increasing production and productivity in that sector. By contrast, technological change will have to be the driving force of future productivity growth in freshwater aquaculture.
Abstract:
An automatic nonlinear predictive model-construction algorithm is introduced based on forward regression and the predicted-residual-sums-of-squares (PRESS) statistic. The proposed algorithm is based on the fundamental concept of evaluating a model's generalisation capability through cross-validation. This is achieved by using the PRESS statistic as a cost function to optimise model structure. In particular, the proposed algorithm is developed with the aim of achieving computational efficiency, such that the computational effort, which would usually be extensive in the computation of the PRESS statistic, is reduced or minimised. The computation of PRESS is simplified by avoiding a matrix inversion through the use of the orthogonalisation procedure inherent in forward regression, and is further reduced significantly by the introduction of a forward-recursive formula. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated procedure without resorting to any other validation data set for iterative model evaluation. Numerical examples are used to demonstrate the efficacy of the algorithm.
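For a linear-in-the-parameters model, the PRESS statistic does not require refitting the model n times: with the hat matrix H = Φ(ΦᵀΦ)⁻¹Φᵀ, the leave-one-out residuals are e_i/(1 − h_ii), so PRESS = Σ_i (e_i/(1 − h_ii))². The sketch below shows only this standard shortcut with plain least squares; the further savings described in the abstract (forward-regression orthogonalisation and the forward-recursive formula) are not reproduced here.

```python
import numpy as np

def press_statistic(Phi, y):
    """PRESS for a linear-in-parameters model y ≈ Phi @ theta, via the leave-one-out shortcut."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    residuals = y - Phi @ theta
    # leverages h_ii = diag(Phi (Phi^T Phi)^-1 Phi^T)
    H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
    h = np.diag(H)
    loo_residuals = residuals / (1.0 - h)   # leave-one-out prediction errors
    return np.sum(loo_residuals ** 2)
```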
Abstract:
This study investigated the potential application of mid-infrared spectroscopy (MIR, 4,000–900 cm−1) for the determination of milk coagulation properties (MCP), titratable acidity (TA), and pH in Brown Swiss milk samples (n = 1,064). Because MCP directly influence the efficiency of the cheese-making process, there is strong industrial interest in developing a rapid method for their assessment. Currently, the determination of MCP involves time-consuming laboratory-based measurements, and it is not feasible to carry out these measurements on the large numbers of milk samples associated with milk recording programs. Mid-infrared spectroscopy is an objective and nondestructive technique providing rapid real-time analysis of food compositional and quality parameters. Analysis of milk rennet coagulation time (RCT, min), curd firmness (a30, mm), TA (SH°/50 mL; SH° = Soxhlet-Henkel degree), and pH was carried out, and MIR data were recorded over the spectral range of 4,000 to 900 cm−1. Models were developed by partial least squares regression using untreated and pretreated spectra. The MCP, TA, and pH prediction models were improved by using the combined spectral ranges of 1,600 to 900 cm−1, 3,040 to 1,700 cm−1, and 4,000 to 3,470 cm−1. The root mean square errors of cross-validation for the developed models were 2.36 min (RCT, range 24.9 min), 6.86 mm (a30, range 58 mm), 0.25 SH°/50 mL (TA, range 3.58 SH°/50 mL), and 0.07 (pH, range 1.15). The most successfully predicted attributes were TA, RCT, and pH. The model for the prediction of TA provided approximate prediction (R2 = 0.66), whereas the predictive models developed for RCT and pH could discriminate between high and low values (R2 = 0.59 to 0.62). It was concluded that, although the models require further development to improve their accuracy before their application in industry, MIR spectroscopy has potential application for the assessment of RCT, TA, and pH during routine milk analysis in the dairy industry. The implementation of such models could be a means of improving MCP through phenotype-based selection programs and of amending milk payment systems to incorporate MCP into their payment criteria.
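A sketch of a partial least squares calibration with cross-validation, of the kind used above to obtain root mean square errors of cross-validation (RMSECV). The spectra and reference values are random placeholders, and scikit-learn is assumed rather than the chemometrics software actually used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: MIR absorbance spectra (n_samples x n_wavenumbers); y: reference RCT in minutes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))          # placeholder spectra
y = rng.normal(20.0, 5.0, size=200)      # placeholder rennet coagulation times

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()   # 10-fold cross-validated predictions

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2_cv = np.corrcoef(y, y_cv)[0, 1] ** 2
print(f"RMSECV = {rmsecv:.2f} min, R2(cv) = {r2_cv:.2f}")
```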
Abstract:
This study examines the numerical accuracy, computational cost, and memory requirements of self-consistent field theory (SCFT) calculations when the diffusion equations are solved with various pseudo-spectral methods and the mean-field equations are iterated with Anderson mixing. The different methods are tested on the triply periodic gyroid and spherical phases of a diblock-copolymer melt over a range of intermediate segregations. Anderson mixing is found to be somewhat less effective with the pseudo-spectral methods than when combined with the full-spectral method, but it nevertheless functions admirably well provided that a large number of histories is used. Of the different pseudo-spectral algorithms, the 4th-order one of Ranjan, Qin and Morse performs best, although not quite as efficiently as the full-spectral method.
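Anderson mixing accelerates the fixed-point iteration w ← g(w) of the mean-field equations by combining a history of previous iterates. The sketch below is a generic Anderson-acceleration update, not the specific SCFT implementation tested in the paper; variable names and the simple-mixing fraction are assumptions.

```python
import numpy as np

def anderson_step(x_hist, g_hist, mix=0.1):
    """One Anderson-mixing update from histories of iterates x_k and their maps g(x_k).

    x_hist, g_hist: lists of 1-D arrays, oldest first; at least two entries are required.
    """
    X = np.array(x_hist)
    G = np.array(g_hist)
    R = G - X                                # residuals r_k = g(x_k) - x_k
    dR = np.diff(R, axis=0)                  # residual differences between successive iterates
    # least-squares coefficients gamma minimising ||R[-1] - gamma @ dR||
    gamma, *_ = np.linalg.lstsq(dR.T, R[-1], rcond=None)
    x_bar = X[-1] - gamma @ np.diff(X, axis=0)
    g_bar = G[-1] - gamma @ np.diff(G, axis=0)
    return (1.0 - mix) * x_bar + mix * g_bar  # blend with simple-mixing fraction `mix`
```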
Abstract:
We qualitatively describe the condition of communally managed rangelands in the Transkei, South Africa, using GIS and high-resolution near-infrared imagery. Using livestock census data from 28 magisterial districts in the Transkei, we explored the trends in livestock biomass from 1923 to 1998. The area had been subjected to intensive herbivory by domestic livestock during that period, and the high livestock biomass had been blamed for the perceived degradation or ‘overgrazing’ of the region. Our assessment used the concept of rain-use efficiency (RUE, kg dry matter ha−1 mm−1) to determine whether there is evidence of change in the efficiency of the system to produce domestic livestock. We calculated RUE from annual livestock numbers and the mean annual rainfall for each district. We found no evidence of a decline in rain-use efficiency between the two assessment periods (1923–1944, 1945–1998). There was evidence of a shift in the ratio of sheep to goats between 1923 and 1998, with goat numbers increasing (greater than twofold) relative to sheep in eight districts. This trend may be associated with changes in the structure of vegetation. We conclude that this region is not showing evidence of system run-down that affects domestic livestock production.
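Rain-use efficiency as used above is a simple ratio per district and year: dry-matter production (here proxied through livestock biomass) per hectare, divided by mean annual rainfall. A sketch of the calculation with hypothetical figures (not values from the study):

```python
def rain_use_efficiency(dry_matter_kg, area_ha, annual_rainfall_mm):
    """Rain-use efficiency in kg dry matter per hectare per millimetre of rainfall."""
    return dry_matter_kg / area_ha / annual_rainfall_mm

# Hypothetical district: 120,000 t dry matter produced on 300,000 ha with 650 mm rainfall
print(rain_use_efficiency(120_000_000, 300_000, 650))   # ~0.62 kg ha-1 mm-1
```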
Abstract:
Purpose – Investors are now able to analyse more noise-free news to inform their trading decisions than ever before. Their expectation that more information means better performance is not supported by previous psychological experiments, which argue that too much information actually impairs performance. The purpose of this paper is to examine whether the degree of information explicitness improves stock market performance. Design/methodology/approach – An experiment is conducted in a computer laboratory using a trading simulation based on a real market shock. Participants’ performance efficiency and effectiveness are measured separately. Findings – The results indicate that the explicitness of information neither improves nor impairs participants’ performance effectiveness from the perspectives of returns, share and cash positions, and trading volumes. However, participants’ performance efficiency is significantly affected by information explicitness. Originality/value – The novel approach and findings of this research add to the knowledge of the impact of information explicitness on the quality of decision making in a financial market environment.
Abstract:
The hypothesis that foraging male and female Coccinella septempunctata L. would exhibit a turning bias when walking along a branched linear wire in a Y-maze was tested. Individuals were placed repeatedly in the maze. Approximately 45% of all individuals tested displayed significant turning biases, with a similar number of individuals biased to the left and to the right. In the maze, right-handed individuals turned right at 84.4% of turns and left-handed individuals turned left at 80.2% of turns. A model of the searching efficiency of C. septempunctata in dichotomous branched environments showed that model coccinellids with greater turning biases discovered a higher proportion of the plant for a given number of searches than those with no bias. A modification of the model to investigate foraging efficiency, by calculating the mean time taken by individuals to find randomly distributed aphid patches, suggested that on four different sizes of plants, with a variety of aphid patch densities, implementing a turning bias was a significantly more efficient foraging strategy than no bias. In general, the benefits to foraging of implementing a turning bias increased with the degree of the bias. It may be beneficial for individuals in highly complex branched environments to have a turning bias slightly lower than 100% in order to benefit from increased foraging efficiency without walking in circles. Foraging bias benefits increased with increasing plant size and decreasing aphid density. In comparisons of two different plant morphologies, one with a straight stem and side branches and one with a symmetrically branched morphology, there were few significant differences in the effects of turning biases on foraging efficiency between morphologies.