Abstract:
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models that explicitly incorporate a replacement component assume an age distribution for the replacement of existing units that remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will change the mean replacement age of units. This paper develops a model for such time-varying replacement behaviour and tests it empirically in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time-varying model significantly outperformed previous models in both fitting and forecasting the sales data. Copyright (C) 2001 John Wiley & Sons, Ltd.
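A minimal sketch of a time-varying replacement-sales component of the kind described above, assuming a discretized normal replacement-age distribution whose mean shifts with a linear trend and real price; the function names, coefficients and functional forms here are illustrative, not the authors' specification:

```python
import numpy as np

def replacement_sales(past_sales, mean_age, sd_age, max_age=30):
    """Expected replacement sales this period: units sold a periods ago
    are replaced with probability taken from a discretized normal
    replacement-age distribution (an illustrative choice)."""
    ages = np.arange(1, max_age + 1)
    pdf = np.exp(-0.5 * ((ages - mean_age) / sd_age) ** 2)
    pdf /= pdf.sum()
    history = np.asarray(past_sales)[::-1][:max_age]  # most recent first
    return float(pdf[:len(history)] @ history)

def mean_replacement_age(t, real_price, a0=10.0, trend=0.1, beta=2.0):
    """Time-varying mean replacement age: base level plus a linear
    temporal trend and a real-price effect (hypothetical coefficients)."""
    return a0 + trend * t + beta * np.log(real_price)
```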
Abstract:
The principal aim of this paper is to measure the amount by which the profit of a multi-input, multi-output firm deviates from maximum short-run profit, and then to decompose this profit gap into components that are of practical use to managers. In particular, our interest is in measuring the contribution of unused capacity, along with technical and allocative inefficiency, to this profit gap. We survey existing definitions of capacity and, after discussing their shortcomings, propose a new ray economic capacity measure that involves short-run profit maximisation with the output mix held constant. We go on to describe how the gap between observed and maximum profit can be calculated and decomposed using linear programming methods. The paper concludes with an empirical illustration using data on 28 international airline companies. The empirical results indicate that these airlines achieve profit levels on average US$815m below potential, and that 70% of the gap may be attributed to unused capacity. (C) 2002 Elsevier Science B.V. All rights reserved.
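The decomposition rests on a short-run profit-maximisation linear program over the observed technology. Below is a sketch of the standard DEA-style LP that such a calculation builds on (constant returns to scale, solved with scipy for brevity); it is not the authors' ray economic capacity measure itself:

```python
import numpy as np
from scipy.optimize import linprog

def max_short_run_profit(Y, X, K, p, w, k_o):
    """Short-run profit maximisation over a CRS DEA technology.
    Y: (s, n) outputs, X: (m, n) variable inputs, K: (f, n) fixed inputs
    of the n observed firms (numpy arrays); p, w: output and
    variable-input price vectors; k_o: fixed-input endowment of the
    firm under evaluation."""
    s, n = Y.shape
    m, f = X.shape[0], K.shape[0]
    # decision vector z = [y (s), x (m), lambda (n)], all >= 0 by default
    c = np.concatenate([-p, w, np.zeros(n)])        # minimise -p'y + w'x
    A_ub = np.block([
        [np.eye(s), np.zeros((s, m)), -Y],          # y <= Y @ lambda
        [np.zeros((m, s)), -np.eye(m), X],          # X @ lambda <= x
        [np.zeros((f, s)), np.zeros((f, m)), K],    # K @ lambda <= k_o
    ])
    b_ub = np.concatenate([np.zeros(s), np.zeros(m), k_o])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    return -res.fun  # maximum attainable short-run profit
```

The gap between this optimum and observed profit is what gets decomposed into capacity, technical and allocative components.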
Abstract:
Darwin's paradigm holds that the diversity of present-day organisms has arisen via a process of genetic descent with modification, as on a bifurcating tree. Evidence is accumulating that genes are sometimes transferred not along lineages but rather across lineages. To the extent that this is so, Darwin's paradigm can apply only imperfectly to genomes, potentially complicating or perhaps undermining attempts to reconstruct historical relationships among genomes (i.e., a genome tree). Whether most genes in a genome have arisen via treelike (vertical) descent or by lateral transfer across lineages can be tested if enough complete genome sequences are used. We define a phylogenetically discordant sequence (PDS) as an open reading frame (ORF) that exhibits patterns of similarity relationships statistically distinguishable from those of most other ORFs in the same genome. PDSs represent between 6.0 and 16.8% (mean, 10.8%) of the analyzable ORFs in the genomes of 28 bacteria, eight archaea, and one eukaryote (Saccharomyces cerevisiae). In this study we developed and assessed a distance-based approach, based on mean pairwise sequence similarity, for generating genome trees. Exclusion of PDSs improved bootstrap support for basal nodes but altered few topological features, indicating that there is little systematic bias among PDSs. Many but not all features of the genome tree from which PDSs were excluded are consistent with the 16S rRNA tree.
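A sketch of the distance-based step, turning mean pairwise ORF similarities into a genome tree, using average-linkage clustering as a stand-in; the paper's actual tree-building and bootstrap procedure may differ, and `mean_sim` is assumed to be a symmetric genomes-by-genomes matrix of mean pairwise similarities:

```python
import numpy as np
from scipy.cluster.hierarchy import average
from scipy.spatial.distance import squareform

def genome_tree(mean_sim):
    """Tree from a symmetric matrix of mean pairwise ORF similarities
    between genomes (average-linkage shown for brevity)."""
    dist = 1.0 - np.asarray(mean_sim, dtype=float)  # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    # the result can be drawn with scipy.cluster.hierarchy.dendrogram
    return average(squareform(dist, checks=False))
```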
Abstract:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
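For orientation, the Gaussian case admits a closed form, as given in the anisotropy-based control literature (quoted from the standard references; notation ours): for a random vector $w \sim \mathcal{N}(0, \Sigma)$ in $\mathbb{R}^m$,
\[
\mathbf{A}(w) = \min_{\lambda > 0} D\bigl(P_w \,\|\, \mathcal{N}(0, \lambda I_m)\bigr) = -\frac{1}{2}\,\ln\det\frac{m\,\Sigma}{\operatorname{tr}\Sigma},
\]
and the $a$-anisotropic norm of a matrix $F$ is
\[
\|F\|_a = \sup\bigl\{\, \|Fw\| / \|w\| : \mathbf{A}(w) \le a \,\bigr\},
\]
with the vector norms understood in the root-mean-square (power) sense.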
Abstract:
Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advance towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a range of rock mass, blast geometry and explosive parameters. These distributions can then be carried through models of downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions: compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the deviation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve, the region of the distribution that matters most for Mine to Mill™ optimisation. The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa); further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM to Split results. Stochastic fragmentation modelling will then be conducted to take into account variation of input parameters, yielding a window of possible fragmentation distributions that can be compared with those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
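For reference, a minimal sketch of the two Kuz-Ram steps the abstract criticizes, Kuznetsov's mean-size equation feeding a Rosin-Rammler distribution, in the forms commonly quoted; coefficients should be checked against Cunningham's papers before any engineering use:

```python
import numpy as np

def kuz_ram(A, K, Q, rws, n, x=None):
    """Kuz-Ram sketch in the commonly quoted form.
    A: rock factor; K: powder factor (kg/m^3); Q: charge per hole (kg);
    rws: explosive relative weight strength (ANFO = 100);
    n: Rosin-Rammler uniformity index; x: sieve sizes (mm)."""
    if x is None:
        x = np.logspace(0, 3, 50)          # 1 mm to 1 m
    # Kuznetsov mean fragment size, converted from cm to mm
    x_mean = 10.0 * A * K**-0.8 * Q**(1.0 / 6.0) * (115.0 / rws)**0.95
    # Rosin-Rammler cumulative passing; ln 2 makes x_mean the 50% size
    passing = 1.0 - np.exp(-np.log(2.0) * (x / x_mean) ** n)
    return x_mean, passing
```

Because the whole curve hangs off a single mean size and uniformity index, the fines tail (the 1-100 mm region discussed above) has no independent degrees of freedom, which is what the CZM and TCM address.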
Abstract:
There has been a resurgence of interest in Pahl's mean trace length estimator for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. It is a very useful estimator in that it is non-parametric. However, despite some discussion regarding its statistical distribution, neither the recent works nor the original work by Pahl provides a rigorous basis for determining a confidence interval for the estimator, or a confidence region for it together with the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculation of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
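The step from classical ML theory to such intervals can be made concrete. In the window problem the data enter only through the counts of traces with zero, one or two endpoints inside the window, which are multinomial, so for any estimator that is a smooth function $g$ of the observed proportions $\hat{p}$ the delta method yields the Wald-type interval (a generic statement of the classical result, not the paper's specific derivation):
\[
g(\hat p) \pm z_{1-\alpha/2}\,\sqrt{\nabla g(\hat p)^{\mathsf T}\,\hat\Sigma\,\nabla g(\hat p)/N},
\qquad
\hat\Sigma = \operatorname{diag}(\hat p) - \hat p\,\hat p^{\mathsf T},
\]
where $N$ is the total number of sampled traces and $z_{1-\alpha/2}$ the standard normal quantile.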
Abstract:
When a mixture is confined, one of the phases can condense out. This condensate, which would otherwise be metastable in the bulk, is stabilized by the presence of surfaces. In a sphere-plane geometry, routinely used in the atomic force microscope and the surface force apparatus, it can form a bridge connecting the surfaces. The pressure drop in the bridge gives rise to additional long-range attractive forces between them. By minimizing the free energy of a binary mixture we obtain the force-distance curves as well as the structural phase diagram of the configuration with the bridge. Numerical results predict a discontinuous transition between the states with and without the bridge, and linear force-distance curves with hysteresis. We also show that a similar phenomenon can be observed in a number of different systems, e.g., liquid crystals and polymer mixtures. (C) 2004 American Institute of Physics.
Abstract:
The scope of this paper is to adapt the standard mean-variance model of Markowitz portfolio theory, creating a simulation tool that finds the optimal configuration of the portfolio aggregator and calculates its profitability and risk. Currently, there is a deep discussion under way in the power system community about the structure and architecture of the future electric system. In this environment, policy makers and electric utilities seek new approaches to access the electricity market; this creates new and challenging positions requiring innovative strategies and methodologies. Decentralized power generation is gaining relevance in liberalized markets, and small and medium-size electricity consumers are also becoming producers ("prosumers"). In this scenario an electric aggregator is an entity that joins a group of electricity clients, customers, producers and "prosumers" together as a single purchasing unit to negotiate the purchase and sale of electricity. The aggregator researches electricity prices and contract terms and conditions in order to promote better energy prices for its clients, allowing small and medium customers to benefit from improved market prices.
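A minimal sketch of the mean-variance step such a tool adapts: the classical minimum-variance weights for a target return, with only the two textbook equality constraints (an aggregator model would add its own constraints; `mu`, `Sigma` and `target_return` stand for the estimated returns, covariances and desired portfolio return):

```python
import numpy as np

def min_variance_weights(mu, Sigma, target_return):
    """Markowitz mean-variance: weights minimising portfolio variance
    w' Sigma w subject to w'mu = target_return and sum(w) = 1
    (short positions allowed in this unconstrained textbook form)."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system for the two equality constraints
    KKT = np.block([
        [2 * Sigma, mu[:, None], ones[:, None]],
        [mu[None, :], np.zeros((1, 2))],
        [ones[None, :], np.zeros((1, 2))],
    ])
    rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
    sol = np.linalg.solve(KKT, rhs)
    w = sol[:n]
    return w, float(w @ Sigma @ w)  # weights and portfolio variance
```

Sweeping `target_return` traces out the efficient frontier, from which the aggregator's profitability-risk trade-off can be read.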
Abstract:
This paper addresses the calculation of fractional-order expressions through rational fractions. The article starts by analyzing the techniques adopted in continuous-to-discrete time conversion. The problem is then re-evaluated from an optimization perspective by taking advantage of the degree of freedom provided by the generalized mean formula. The results demonstrate the superior performance of the new algorithm.
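A hedged reading of how the generalized (power) mean enters: the two classical conversion rules, the Euler (backward difference) and Tustin rules, can be blended through it, with the exponent $p$ left free for the optimization:
\[
M_p(x_1, x_2) = \left(\frac{x_1^{\,p} + x_2^{\,p}}{2}\right)^{1/p},
\qquad
s \approx M_p\!\left(\frac{1 - z^{-1}}{T},\; \frac{2}{T}\,\frac{1 - z^{-1}}{1 + z^{-1}}\right),
\]
after which a fractional power $s^{\alpha}$ of the resulting expression is expanded into a rational fraction (e.g., by truncated power series or continued-fraction expansion).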
Abstract:
Electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs), which obtain their fuel from the grid by charging a battery, are set to be introduced into the mass market and are expected to contribute to reducing oil consumption. This research studies the potential impacts on electric utilities of large-scale adoption of plug-in electric vehicles from the perspective of electricity demand, fossil fuel use, CO2 emissions and energy costs. Simulations were applied to the Portuguese case study in order to determine the optimal recharge profile and EV penetration under an energy-oriented, an emissions-oriented and a cost-oriented objective: the leveling of load profiles, the minimization of daily emissions and the minimization of daily wholesale costs, respectively. Almost all solutions point to an off-peak recharge, and moving from a peak-recharge to an off-peak-recharge scenario halves daily wholesale costs for 2 million EVs in 2020. A further 15% improvement in daily total wholesale costs is obtained under the cost-minimization objective compared with the off-peak scenario.
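A minimal sketch of the load-leveling objective: allocate the fleet's charging energy to the lowest-load hours by simple greedy water-filling. The study's actual optimisation over emissions and wholesale costs is richer, and all numbers below are placeholders:

```python
import numpy as np

def valley_fill(base_load, ev_energy, step=0.01):
    """Level the load profile: repeatedly add a small increment of EV
    charging energy to the hour with the lowest current load."""
    load = np.asarray(base_load, dtype=float).copy()
    remaining = ev_energy
    while remaining > 0:
        i = int(np.argmin(load))        # current valley hour
        inc = min(step, remaining)
        load[i] += inc
        remaining -= inc
    return load  # base load plus EV charging, flattened in the valleys

# e.g. 24 hourly base-load values in GWh and 6 GWh of EV charging:
# profile = valley_fill(hourly_base_load, ev_energy=6.0)
```

Any leftover peak in the resulting profile simply means the fleet's energy fit entirely inside the overnight valley, which is why almost all solutions point to off-peak recharging.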
Abstract:
Most small islands around the world today depend on imported fossil fuels for the majority of their energy needs, especially for transport activities and electricity production. The use of locally available renewable energy resources and the implementation of energy efficiency measures could make a significant contribution to their economic development by reducing fossil fuel imports. Electrification of vehicles has been suggested as a way to reduce pollutant emissions, to increase the security of supply of the transportation sector by reducing dependence on imported oil products, and to facilitate the accommodation of renewable electricity generation such as wind and, in the case of volcanic islands like Sao Miguel (Azores), geothermal energy, whose penetration has been limited by the valley (overnight) level of electricity consumption. In this research, three scenarios of EV penetration were studied, and it was verified that, for a 15% replacement of the light-duty (LD) fleet by EVs with 90% of their energy needs met during the night, the accommodation of 10 MW of new geothermal capacity becomes viable. Under this scenario, reductions of 8% in electricity costs, 14% in energy, and 23% in fossil fuel use and CO2 emissions for the transportation and electricity production sectors could be expected.