16 results for logarithmic sprayer
at Queensland University of Technology - ePrints Archive
Abstract:
We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov decision process (MDP). OLP uses its experience so far to estimate the MDP. It chooses actions by optimistically maximizing estimated future rewards over a set of next-state transition probabilities that are close to the estimates, a computation that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P) log T of the reward obtained by the optimal policy, where C(P) is an explicit, MDP-dependent constant. OLP is closely related to an algorithm proposed by Burnetas and Katehakis, with four key differences: OLP is simpler; it does not require knowledge of the supports of transition probabilities; the proof of the regret bound is simpler; but its regret bound is a constant factor larger than the regret of their algorithm. OLP is also similar in flavor to an algorithm recently proposed by Auer and Ortner, but OLP is simpler and its regret bound has a better dependence on the size of the MDP.
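The optimistic maximization over nearby transition probabilities has a greedy closed-form solution: shift as much probability mass as the confidence set allows onto the highest-value next state, removing it from the lowest-value states. A minimal sketch in Python, assuming an L1 ball of radius eps around the empirical estimate (the radius and function names are illustrative, not the paper's exact construction):

```python
import numpy as np

def optimistic_value(p_hat, v, eps):
    """Maximize p @ v over distributions p with ||p - p_hat||_1 <= eps.

    The LP has a greedy closed form: add up to eps/2 probability mass
    to the highest-value next state and remove the same amount from
    the lowest-value states.
    """
    p = np.array(p_hat, dtype=float)
    v = np.asarray(v, dtype=float)
    order = np.argsort(v)              # states in ascending order of value
    best = order[-1]
    add = min(eps / 2.0, 1.0 - p[best])
    p[best] += add                     # extra mass on the best state
    for s in order:                    # take mass back from the worst states
        if add <= 0:
            break
        if s == best:
            continue
        take = min(p[s], add)
        p[s] -= take
        add -= take
    return float(p @ v)
```

With eps = 0 this reduces to the plain empirical estimate, so the amount of optimism is controlled entirely by the confidence-ball radius.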
Abstract:
This paper provides an empirical estimation of energy efficiency and other proximate factors that explain energy intensity in Australia for the period 1978-2009. The analysis is performed by decomposing the changes in energy intensity into energy efficiency, fuel mix and structural changes using sectoral and sub-sectoral levels of data. The results show that the driving forces behind the decrease in energy intensity in Australia are the efficiency effect and the sectoral composition effect, with the former found to be more prominent than the latter. Moreover, the favourable impact of the composition effect has slowed consistently in recent years. A perfect positive association characterizes the relationship between energy intensity and carbon intensity in Australia. The decomposition results indicate that Australia needs to improve energy efficiency further to reduce energy intensity and carbon emissions. © 2012 Elsevier Ltd.
Abstract:
Background: The objective of this study was to scrutinize number line estimation behaviors displayed by children in mathematics classrooms during the first three years of schooling. We extend existing research by not only mapping potential logarithmic-linear shifts but also providing a new perspective by studying in detail the estimation strategies of individual target digits within a number range familiar to children. Methods: Typically developing children (n = 67) from Years 1–3 completed a number-to-position numerical estimation task (0–20 number line). Estimation behaviors were first analyzed via logarithmic and linear regression modeling. Subsequently, using an analysis of variance we compared the estimation accuracy of each digit, thus identifying target digits that were estimated with the assistance of arithmetic strategy. Results: Our results further confirm a developmental logarithmic-linear shift when utilizing regression modeling; however, uniquely we have identified that children employ variable strategies when completing numerical estimation, with levels of strategy advancing with development. Conclusion: In terms of the existing cognitive research, this strategy factor highlights the limitations of any regression modeling approach, or alternatively, it could underpin the developmental time course of the logarithmic-linear shift. Future studies need to systematically investigate this relationship and also consider the implications for educational practice.
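The logarithmic-versus-linear comparison is typically made by fitting both models to each child's estimates and comparing goodness of fit. A minimal sketch, assuming ordinary least-squares fits and an R² comparison (function names are illustrative):

```python
import numpy as np

def fit_models(targets, estimates):
    """Compare linear vs logarithmic fits to number-line estimates.

    Returns R^2 for y = a*x + b and for y = a*ln(x + 1) + b
    (the +1 keeps the 0 endpoint of the number line usable).
    """
    x = np.asarray(targets, dtype=float)
    y = np.asarray(estimates, dtype=float)

    def r_squared(design):
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ coef
        return 1.0 - resid.var() / y.var()

    linear = np.column_stack([x, np.ones_like(x)])
    logarithmic = np.column_stack([np.log(x + 1), np.ones_like(x)])
    return r_squared(linear), r_squared(logarithmic)
```

A logarithmic-to-linear shift would show up as the linear R² overtaking the logarithmic R² across year levels.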
Abstract:
The first representative chemical, structural, and morphological analysis of the solid particles from a single collection surface has been performed. This collection surface sampled the stratosphere between 17 and 19 km in altitude in the summer of 1981, and therefore before the 1982 eruptions of El Chichón. A particle collection surface was washed free of all particles with rinses of Freon and hexane, and the resulting wash was directed through a series of vertically stacked Nucleopore filters. The size cutoff for the solid particle collection process in the stratosphere is found to be considerably less than 1 μm. The total stratospheric number density of solid particles larger than 1 μm in diameter at the collection time is calculated to be about 2.7×10⁻¹ particles per cubic meter, of which approximately 95% are smaller than 5 μm in diameter. Previous classification schemes are expanded to explicitly recognize low atomic number material. With the single exception of the calcium-aluminum-silicate (CAS) spheres, all solid particle types show a logarithmic increase in number concentration with decreasing diameter. The aluminum-rich particles are unique in showing bimodal size distributions. In addition, spheres constitute only a minor fraction of the aluminum-rich material. About 2/3 of the particles examined were found to be shards of rhyolitic glass. This abundant volcanic material could not be correlated with any eruption plume known to have vented directly to the stratosphere. The micrometeorite number density calculated from this data set is 5×10⁻² micrometeorites per cubic meter of air, an order of magnitude greater than the best previous estimate. At the collection altitude, the maximum collision frequency of solid particles >5 μm in average diameter is calculated to be 6.91×10⁻¹⁶ collisions per second, which indicates negligible contamination of extraterrestrial particles in the stratosphere by solid anthropogenic particles.
Abstract:
Objective: Menopause is the consequence of exhaustion of the ovarian follicular pool. AMH, an indirect hormonal marker of ovarian reserve, has been recently proposed as a predictor for age at menopause. Since BMI and smoking status are relevant independent factors associated with age at menopause, we evaluated whether a model including all three of these variables could improve AMH-based prediction of age at menopause. Methods: In the present cohort study, participants were 375 eumenorrheic women aged 19–44 years and a sample of 2,635 Italian menopausal women. AMH values were obtained from the eumenorrheic women. Results: Regression analysis of the AMH data showed that a quadratic function of age provided a good description of these data plotted on a logarithmic scale, with a distribution of residual deviates that was not normal but showed significant left-skewness. Under the hypothesis that menopause can be predicted by AMH dropping below a critical threshold, a model predicting menopausal age was constructed from the AMH regression model and applied to the data on menopause. With the AMH threshold dependent on the covariates BMI and smoking status, the effects of these covariates were shown to be highly significant. Conclusions: In the present study we confirmed the good level of conformity between the distributions of observed and AMH-predicted ages at menopause, and showed that using BMI and smoking status as additional variables improves AMH-based prediction of age at menopause.
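The described prediction scheme, a quadratic fit of log AMH against age with menopause predicted when the fitted curve drops below a critical threshold, can be sketched as follows (function names and the threshold handling are illustrative assumptions, not the study's exact model):

```python
import numpy as np

def fit_log_amh(ages, amh):
    """Least-squares quadratic fit of log(AMH) as a function of age."""
    a = np.asarray(ages, dtype=float)
    design = np.column_stack([np.ones_like(a), a, a**2])
    coef, *_ = np.linalg.lstsq(design, np.log(amh), rcond=None)
    return coef  # (intercept, linear term, quadratic term)

def predicted_menopause_age(coef, log_threshold, age_grid):
    """First age on the grid at which fitted log(AMH) falls below
    the threshold; None if it never does."""
    c0, c1, c2 = coef
    fitted = c0 + c1 * age_grid + c2 * age_grid**2
    below = age_grid[fitted < log_threshold]
    return float(below[0]) if below.size else None
```

In the study the threshold itself additionally depends on BMI and smoking status; here it is a single fixed value for simplicity.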
Abstract:
Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double logarithmic and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed the best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
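The probability-of-detection calculation underlying such comparisons rests on each model's zero-count term: a single sample detects nothing with probability P(0), so n samples detect at least one insect with probability 1 - P(0)^n. A hedged sketch using the standard Poisson and negative binomial zero terms (the paper's exact parameterizations may differ):

```python
from math import exp

def p_detect_poisson(density, n_samples, sample_size=1.0):
    """P(at least one insect in n samples) when counts per sample
    are Poisson with mean density * sample_size."""
    p_zero = exp(-density * sample_size)
    return 1.0 - p_zero ** n_samples

def p_detect_negbin(density, k, n_samples, sample_size=1.0):
    """Same under a negative binomial with clumping parameter k,
    where P(0) = (1 + m/k) ** -k for mean m; small k means a more
    clumped (aggregated) spatial distribution."""
    m = density * sample_size
    p_zero = (1.0 + m / k) ** -k
    return 1.0 - p_zero ** n_samples
```

At a fixed mean density, clumping (small k) raises P(0) and therefore lowers the detection probability, which is why an aggregation-aware model matters for sampling design.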
Abstract:
This paper aims to evaluate the brand value of property in subdivision developments in the Bangkok Metropolitan Region (BMR), Thailand. The result has been determined by the application of a hedonic price model. The model is developed based on a sample of 1,755 property sales during the period 1992-2010 in eight zones of the BMR. The results indicate that a semi-logarithmic model has stronger explanatory power and is more reliable. Branding increases the property price by 12.90%. Meanwhile, the price increases by 2.96% annually; lot size and dwelling area have positive impacts on the price. In contrast, duplexes and townhouses have a negative impact on the price compared to single detached houses. Moreover, the price of properties located outside the Bangkok inner city area is reduced by 21.26% to 43.19%. These findings also contribute towards a new understanding of the positive impact of branding on the property price in the BMR. The result is useful for setting selling prices for branded and unbranded properties, and the model could provide a reference for setting property prices in subdivision developments in the BMR.
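In a semi-logarithmic hedonic model, the percentage effect of a dummy variable such as branding is recovered from its coefficient b as 100·(e^b - 1); a 12.90% premium thus corresponds to b ≈ 0.121. A minimal sketch (the function names and toy data are illustrative):

```python
import numpy as np

def percent_effect(coef):
    """Percentage price effect of a dummy variable in a semi-log
    (log-price) hedonic model: 100 * (exp(b) - 1)."""
    return 100.0 * (np.exp(coef) - 1.0)

def fit_semilog(attributes, price):
    """OLS fit of log(price) on hedonic attributes, with intercept.

    Returns the coefficient vector (intercept first)."""
    X = np.asarray(attributes, dtype=float)
    design = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(design, np.log(price), rcond=None)
    return coef
```

The exp(b) - 1 conversion matters because for dummy variables the coefficient itself only approximates the percentage effect when b is small.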
Abstract:
We study two problems of online learning under restricted information access. In the first problem, prediction with limited advice, we consider a game of prediction with expert advice, where on each round of the game we query the advice of a subset of M out of N experts. We present an algorithm that achieves O(√((N/M) T ln N)) regret on T rounds of this game. The second problem, the multiarmed bandit with paid observations, is a variant of the adversarial N-armed bandit game, where on round t of the game we can observe the reward of any number of arms, but each observation has a cost c. We present an algorithm that achieves O((cN ln N)^(1/3) T^(2/3) + √(T ln N)) regret on T rounds of this game in the worst case. Furthermore, we present a number of refinements that treat arm- and time-dependent observation costs and achieve lower regret under benign conditions. We present lower bounds that show that, apart from the logarithmic factors, the worst-case regret bounds cannot be improved.
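A standard way to obtain limited-advice guarantees of this kind is to query M experts uniformly at random each round and feed the update rule importance-weighted, unbiased loss estimates. A sketch of one round, assuming a Hedge-style exponential update with an illustrative learning rate (not necessarily the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def limited_advice_round(weights, losses, m):
    """One round of exponential weights with advice from only m of N
    experts: query m experts uniformly at random and scale their
    observed losses by N/m so the estimates are unbiased."""
    n = len(weights)
    queried = rng.choice(n, size=m, replace=False)
    est = np.zeros(n)
    est[queried] = losses[queried] * n / m   # E[est_i] = losses[i]
    eta = 0.1                                # illustrative learning rate
    new_w = weights * np.exp(-eta * est)
    return new_w / new_w.sum()
```

The N/m scaling compensates for each expert being observed only with probability m/N, which is what makes the loss estimates unbiased.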
Abstract:
This study analyzes the management of air pollutant substances in Chinese industrial sectors from 1998 to 2009. Decomposition analysis applying the logarithmic mean Divisia index is used to analyze changes in emissions of air pollutants with a focus on the following five factors: coal pollution intensity (CPI), end-of-pipe treatment (EOP), the energy mix (EM), productive efficiency change (EFF), and production scale changes (PSC). Three pollutants are the main focus of this study: sulfur dioxide (SO2), dust, and soot. The novelty of this paper is its focus on the impact of the elimination policy on air pollution management in China by type of industry, using the scale merit effect for pollution abatement technology change. First, the increase in SO2 emissions from Chinese industrial sectors caused by the growth in production scale is demonstrated. However, EOP equipment and improvements in energy efficiency have prevented an increase in SO2 emissions commensurate with the increase in production. Second, soot emissions were successfully reduced and controlled in all industries except the steel industry between 1998 and 2009, even though the production scale expanded for these industries. This reduction was achieved through improvements in EOP technology and in energy efficiency. Dust emissions decreased by nearly 65% between 1998 and 2009 in the Chinese industrial sectors. This successful reduction in emissions was achieved by implementing EOP technology and pollution prevention activities during the production processes, especially in the cement industry. Finally, pollution prevention in the cement industry is shown to result from production technology development rather than scale merit. © 2013 Elsevier Ltd. All rights reserved.
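The logarithmic mean Divisia index decomposes an emissions change additively using the logarithmic mean L(a, b) = (a - b) / (ln a - ln b) as a weight, so the factor effects sum exactly to the total change. A two-factor sketch (the study uses five factors; reducing emissions to scale times intensity here is purely for illustration):

```python
from math import log

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with the
    limit L(a, a) = a."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_two_factor(e0, e1, s0, s1):
    """Additive LMDI decomposition of the emissions change e1 - e0
    into a scale effect and an intensity effect, where emissions
    e = scale s * intensity i."""
    i0, i1 = e0 / s0, e1 / s1
    w = log_mean(e1, e0)
    scale_effect = w * log(s1 / s0)
    intensity_effect = w * log(i1 / i0)
    return scale_effect, intensity_effect
```

The decomposition is exact by construction: scale_effect + intensity_effect = w * ln(e1/e0) = e1 - e0, with no residual term, which is the main reason LMDI is preferred over simpler index methods.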
Abstract:
This study analyzes toxic chemical substance management in three U.S. manufacturing sectors from 1991 to 2008. Decomposition analysis applying the logarithmic mean Divisia index is used to analyze changes in toxic chemical substance emissions by the following five factors: cleaner production, end-of-pipe treatment, transfer for further management, mixing of intermediate materials, and production scale. Based on our results, the chemical manufacturing sector reduced toxic chemical substance emissions mainly via end-of-pipe treatment. In the meantime, transfer for further management contributed to the reduction of toxic chemical substance emissions in the metal fabrication industry. This occurred because the environmental business market expanded in the 1990s, and the infrastructure for the recycling of metal and other wastes became more efficient. Cleaner production is the main contributor to toxic chemical reduction in the electrical product industry. This implies that the electrical product industry is successful in developing a more environmentally friendly product design and production process.
Abstract:
Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we knew the empirically best learning rate from a large range that includes both conservative small values and values that are much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.
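The baseline against which such a method is judged is the empirically best learning rate on a grid, found by running exponential weights once per grid point. A brute-force sketch of that baseline (the paper's method matches it in linear time overall; this sketch deliberately does not):

```python
import numpy as np

def best_eta_on_grid(losses, etas):
    """Run exponential weights separately for each learning rate on a
    grid and return (empirically best eta, total loss per eta).

    losses: array of shape (T, n_experts) with per-round losses."""
    T, n = losses.shape
    totals = {}
    for eta in etas:
        w = np.full(n, 1.0 / n)
        total = 0.0
        for t in range(T):
            total += w @ losses[t]           # expected loss this round
            w = w * np.exp(-eta * losses[t]) # exponential weights update
            w /= w.sum()
        totals[eta] = total
    return min(totals, key=totals.get), totals
```

On easy data the empirically best eta is often at the aggressive end of the grid, which is exactly the regime where prior theoretical tunings gave no guarantee.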
Abstract:
Preface The 9th Australasian Conference on Information Security and Privacy (ACISP 2004) was held in Sydney, 13–15 July, 2004. The conference was sponsored by the Centre for Advanced Computing – Algorithms and Cryptography (ACAC), Information and Networked Security Systems Research (INSS), Macquarie University and the Australian Computer Society. The aims of the conference are to bring together researchers and practitioners working in areas of information security and privacy from universities, industry and government sectors. The conference program covered a range of aspects including cryptography, cryptanalysis, systems and network security. The program committee accepted 41 papers from 195 submissions. The reviewing process took six weeks and each paper was carefully evaluated by at least three members of the program committee. We appreciate the hard work of the members of the program committee and external referees who gave many hours of their valuable time. Of the accepted papers, there were nine from Korea, six from Australia, five each from Japan and the USA, three each from China and Singapore, two each from Canada and Switzerland, and one each from Belgium, France, Germany, Taiwan, The Netherlands and the UK. All the authors, whether or not their papers were accepted, made valued contributions to the conference. In addition to the contributed papers, Dr Arjen Lenstra gave an invited talk, entitled Likely and Unlikely Progress in Factoring. This year the program committee introduced the Best Student Paper Award. The winner of the prize for the Best Student Paper was Yan-Cheng Chang from Harvard University for his paper Single Database Private Information Retrieval with Logarithmic Communication. We would like to thank all the people involved in organizing this conference. 
In particular we would like to thank members of the organizing committee for their time and efforts, Andrina Brennan, Vijayakrishnan Pasupathinathan, Hartono Kurnio, Cecily Lenton, and members from ACAC and INSS.
Abstract:
Agility is an essential part of many athletic activities. Currently, agility drill duration is the sole criterion used for evaluation of agility performance. The relationship between drill duration and factors such as acceleration, deceleration and change of direction, however, has not been fully explored. This paper provides a mathematical description of the relationship between velocity and radius of curvature in an agility drill through implementation of a power law (PL). Two groups of skilled and unskilled participants performed a cyclic forward/backward shuttle agility test. Kinematic data were recorded using a motion capture system at a sampling rate of 200 Hz. The logarithmic relationship between tangential velocity and radius of curvature of participant trajectories in both groups was established using the PL. The slope of the regression line was found to be 0.26 and 0.36 for the skilled and unskilled groups, respectively. The slope magnitudes for both groups were approximately 0.3, close to the expected 1/3 value. The results indicate how the PL could be implemented in an agility drill, opening the way for a more representative measure of agility performance than drill duration alone.
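Estimating the power-law exponent amounts to regressing log velocity on log radius of curvature; for trajectories obeying v ∝ r^(1/3) the fitted slope is 1/3. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def power_law_slope(radius, velocity):
    """Slope of log(velocity) against log(radius of curvature).

    The one-third power law predicts a slope near 1/3; flatter slopes
    indicate less velocity modulation through curves."""
    slope, _intercept = np.polyfit(np.log(radius), np.log(velocity), 1)
    return slope
```

Applied to the trajectories described above, this regression yields the 0.26 (skilled) and 0.36 (unskilled) slopes that are compared against the expected 1/3.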
Abstract:
We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmentation of the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually for each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present without compromising on the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as on average the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
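The EXP3 scheme that the algorithm builds on samples one arm per round and updates with an importance-weighted loss estimate. A basic sketch of that base scheme (the per-arm exploration parameters the paper adds on top are omitted here; the fixed learning rate is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def exp3(loss_fn, n_arms, T, eta):
    """Basic EXP3: sample an arm from the weight distribution, observe
    only that arm's loss, and update with an unbiased importance-
    weighted loss estimate. Returns (total loss, index of the arm
    with the largest final weight)."""
    w = np.ones(n_arms)
    total = 0.0
    for t in range(T):
        p = w / w.sum()
        arm = rng.choice(n_arms, p=p)
        loss = loss_fn(t, arm)
        total += loss
        est = np.zeros(n_arms)
        est[arm] = loss / p[arm]      # unbiased estimate of the loss vector
        w *= np.exp(-eta * est)
        w /= w.max()                  # rescale to avoid underflow
    return total, int(w.argmax())
```

In the stochastic regime the weights concentrate on the arm with the smallest expected loss, which is the gap-exploitation behavior the paper's added control lever is designed to strengthen.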