919 results for cost-per-wear model


Relevance: 30.00%

Abstract:

In many occupational safety interventions, the objective is to reduce both the incidence of injury and the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components, relating respectively to the effect of covariates on the incidence of claims and on the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk-assessment-team program trialled within the cleaning services of a Western Australian public hospital.
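A minimal sketch of the two-part factorization described above, on hypothetical data and with the random effects omitted for brevity (the paper fits two GLMMs; statsmodels' plain GLM is used here as a stand-in):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 1)))          # intercept + one covariate

# Hypothetical data: logistic incidence of a claim, gamma cost given a claim
p = 1.0 / (1.0 + np.exp(-(X @ [-1.0, 0.5])))
claim = rng.binomial(1, p)
cost = np.where(claim == 1,
                rng.gamma(2.0, np.exp(X @ [5.0, 0.3]) / 2.0),
                0.0)

# Component 1: incidence of claims, fitted on all observations
incidence = sm.GLM(claim, X, family=sm.families.Binomial()).fit()

# Component 2: magnitude of claims, fitted only where a claim was made
pos = cost > 0
magnitude = sm.GLM(cost[pos], X[pos],
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(incidence.params)   # covariate effects on claim incidence
print(magnitude.params)   # covariate effects on claim size, given a claim
```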

Relevance: 30.00%

Abstract:

This paper examines why practitioners and researchers get different estimates of equity value when they use a discounted cash flow (DCF) model versus a residual income (RI) model. Both models are derived from the same underlying assumption: that price is the present value of expected future net dividends discounted at the cost of equity capital. Yet in practice and in research they frequently yield different estimates. We argue that the research literature devoted to comparing the accuracy of these two models is misguided; properly implemented, both models yield identical valuations for all firms in all years. We identify how prior research has applied inconsistent assumptions to the two models and show how these seemingly small errors cause surprisingly large differences in the value estimates.
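A toy illustration of the paper's point, using made-up forecasts: when the clean-surplus relation ties dividends, earnings and book value together consistently, the dividend-discount and residual-income computations return the same value.

```python
# Hypothetical forecasts: identical assumptions feed both models (clean surplus holds)
r = 0.10                          # cost of equity capital
B0 = 100.0                        # opening book value
roe = [0.15, 0.14, 0.12, 0.11]    # forecast return on opening equity
payout = 0.40                     # dividend payout ratio

book, ni, div = [B0], [], []
for k in roe:
    earnings = k * book[-1]
    dividend = payout * earnings
    ni.append(earnings)
    div.append(dividend)
    book.append(book[-1] + earnings - dividend)   # clean surplus relation

T = len(roe)
# Dividend model: discounted dividends plus liquidation of terminal book value
v_ddm = sum(d / (1 + r) ** (t + 1) for t, d in enumerate(div)) + book[-1] / (1 + r) ** T
# Residual income model: book value plus discounted abnormal earnings
v_ri = B0 + sum((ni[t] - r * book[t]) / (1 + r) ** (t + 1) for t in range(T))

print(round(v_ddm, 6), round(v_ri, 6))   # identical under consistent assumptions
```

The "seemingly small errors" the paper refers to amount to breaking this consistency, for example discounting one model's flows at a different rate or letting book value drift away from the clean-surplus identity.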

Relevance: 30.00%

Abstract:

Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with re-developing or replacing it. This research investigates the effects that various factors have on an information system's life span by examining how they affect the system's stability. The research builds on a previously developed two-stage model of information system change, whereby an information system is either in a stable state of evolution, in which its functionality is evolving, or in a state of revolution, in which it is being replaced because it is not providing the functionality expected by its users. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (the generation of the language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

Applying programming techniques to detailed data for 406 rice farms in 21 villages for 1997 produces inefficiency measures that differ substantially from the results of simple yield and unit cost measures. For the Boro (dry) season, mean technical efficiency was 69.4 per cent, allocative efficiency was 81.3 per cent, cost efficiency was 56.2 per cent and scale efficiency was 94.9 per cent. The Aman (wet) season results are similar, but a few points lower. Allocative inefficiency is due to overuse of labour, suggesting population pressure, and of fertiliser, where recommended rates may warrant revision. Second-stage regressions show that large families are more inefficient, whereas farmers with better access to input markets, and those who do less off-farm work, tend to be more efficient. The information on the sources of inter-farm performance differentials could be used by extension agents to help inefficient farmers. There is little excuse for such sub-optimal use of survey data, which are often collected at substantial cost.
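The abstract does not name the programming technique, but efficiency measures of this kind are conventionally computed by data envelopment analysis (DEA). A minimal sketch of the input-oriented, constant-returns-to-scale programme, with toy data rather than the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def dea_technical_efficiency(X, Y):
    """Input-oriented, constant-returns (CCR) DEA.
    X: (n_farms, n_inputs), Y: (n_farms, n_outputs). Returns theta per farm."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta
        c = np.zeros(1 + n)
        c[0] = 1.0
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]        # inputs: sum_j lambda_j x_j <= theta * x_o
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T        # outputs: sum_j lambda_j y_j >= y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
        scores[o] = res.x[0]
    return scores

# Toy data: 5 farms, 2 inputs (labour, fertiliser), 1 output (rice yield)
X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])
Y = np.array([[1.], [1.], [1.], [1.], [1.]])
print(dea_technical_efficiency(X, Y))   # 1.0 = technically efficient
```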

Relevance: 30.00%

Abstract:

The majority of the world's population now resides in urban environments, and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate-spatial-resolution satellites such as Landsat, the Indian Resource Satellite and the Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of these cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image-processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane city (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs. The methods evaluated were: image classification; interpretation of aerial photographs; and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used as a basis to check the output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amounts of vegetation cover. The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution spectral mixture analysis approach with the moderate-spatial-resolution image data ensured the processing model matched the spectrally heterogeneous nature of the urban environments at the scale of Landsat Thematic Mapper data.
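A minimal sketch of constrained linear mixture analysis for a single pixel, with hypothetical endmember spectra (not the paper's) for the three VIS classes. Fractions are constrained non-negative, and the sum-to-one constraint is enforced approximately via a heavily weighted extra row:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=1000.0):
    """Constrained linear spectral unmixing: non-negative fractions that
    (approximately) sum to one, via augmented non-negative least squares.
    endmembers: (n_bands, n_classes); pixel: (n_bands,)."""
    n_classes = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_classes))])  # sum-to-one row
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical 6-band endmember spectra for Vegetation, Impervious, Soil
E = np.array([[0.05, 0.18, 0.30],
              [0.08, 0.20, 0.35],
              [0.04, 0.22, 0.40],
              [0.50, 0.25, 0.45],
              [0.30, 0.30, 0.55],
              [0.15, 0.28, 0.60]])
mixed = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]  # synthetic mixed pixel
print(unmix(mixed, E))   # ~ [0.6, 0.3, 0.1]
```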

Relevance: 30.00%

Abstract:

Objective: To assess the (i) benefits, (ii) harms and (iii) costs of continuing mammographic screening for women 70 years and over. Data sources and synthesis: (i) We conducted a MEDLINE search (1966 - July 2000) for decision-analytic models estimating life-expectancy gains from screening in older women. The five studies meeting the inclusion criteria were critically appraised using standard criteria. We estimated relative benefit from each model's estimate of the effectiveness of screening in older women relative to that in women aged 50-69 years using the same model. (ii) With data from BreastScreen Queensland, we constructed balance sheets of the consequences of screening for women in 10-year age groups (40-49 to 80-89 years), and (iii) we used a validated model to estimate the marginal cost-effectiveness of extending screening to women 70 years and over. Results: (i) For women aged 70-79 years, the relative benefit was estimated as 40%-72%, and 18%-62% with adjustment for the impact of screening on quality of life. For women over 80 years the relative benefit was about a third, and with quality-of-life adjustment only 14%, of that in women aged 50-69 years. (ii) Of 10,000 Australian women participating in ongoing screening, about 400 are recalled for further testing and, depending on age, about 70-112 undergo biopsy and about 19-80 cancers are detected. (iii) Cost-effectiveness estimates for extending the upper age limit for mammographic screening from 69 to 79 years range from $8,119 to $27,751 per quality-adjusted life-year saved, which compares favourably with extending screening to women aged 40-49 years (estimated at between $24,000 and $65,000 per life-year saved). Conclusions: Women 70 years and over, in consultation with their healthcare providers, may want to decide for themselves whether to continue mammographic screening. Decision-support materials are needed for women in this age group.
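The cost-effectiveness figures quoted above are incremental cost-effectiveness ratios; a trivial sketch of the underlying arithmetic, with made-up inputs rather than the paper's model outputs:

```python
def icer(extra_cost, extra_qalys):
    """Incremental cost-effectiveness ratio: added cost per QALY gained."""
    return extra_cost / extra_qalys

# Hypothetical: extending screening costs an extra $2.6M and gains 150 QALYs
print(icer(2_600_000, 150))   # ~ $17,333 per QALY, within the range quoted above
```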

Relevance: 30.00%

Abstract:

Loss of magnetic medium solids from dense medium circuits is a substantial contributor to operating cost. Much of this loss occurs by way of wet drum magnetic separator effluent. A model of the separator would be useful for process design, optimisation and control. A review of the literature established that, although various rules of thumb exist, largely based on empirical or anecdotal evidence, there is no model of magnetics recovery in a wet drum magnetic separator that includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and single-variable experiments, was therefore carried out using a purpose-built rig featuring a small industrial-scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in the work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. Observations made as an adjunct to this work, as well as magnetic theory, suggest that the capture of magnetic particles in the wet drum magnetic separator is a flocculation process. Such a process should be defined by a flocculation rate and a flocculation time, the latter being set by the volumetric flowrate and the volume within the separation zone. A model based on this concept and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to provide a satisfactory fit to the data over three orders of magnitude of magnetics loss. (C) 2003 Elsevier Science B.V. All rights reserved.
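The abstract does not give the model's functional form; a sketch of the fit-then-validate workflow with a generic first-order flocculation-capture model and synthetic stand-ins for the 191 trials:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def capture(Q, r_max, kV):
    """First-order flocculation capture: recovery rises as residence time
    (separation-zone volume / volumetric flowrate Q) increases; kV lumps
    the flocculation rate and zone volume into a single parameter."""
    return r_max * (1.0 - np.exp(-kV / Q))

# Synthetic trials: volumetric flowrate (m3/h) vs observed magnetics recovery
Q = rng.uniform(5.0, 60.0, size=191)
obs = np.clip(capture(Q, 0.995, 8.0) + rng.normal(0.0, 0.01, size=Q.size), 0.0, 1.0)

# Fit to a random 80% of the trials, validate on the held-out 20%
idx = rng.permutation(Q.size)
train, test = idx[:153], idx[153:]
(r_hat, kV_hat), _ = curve_fit(capture, Q[train], obs[train], p0=[1.0, 1.0])
rmse = np.sqrt(np.mean((capture(Q[test], r_hat, kV_hat) - obs[test]) ** 2))
print(r_hat, kV_hat, rmse)   # fitted parameters and hold-out error
```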

Relevance: 30.00%

Abstract:

Glycogen-accumulating organisms (GAO) have the potential to compete directly with polyphosphate-accumulating organisms (PAO) in enhanced biological phosphorus removal (EBPR) systems, as both are able to take up volatile fatty acids (VFA) anaerobically and grow on the intracellular storage products aerobically. Under anaerobic conditions GAO hydrolyse glycogen to gain energy and reducing equivalents to take up VFA and synthesise polyhydroxyalkanoate (PHA). In the subsequent aerobic stage, PHA is oxidised to gain energy for glycogen replenishment (from PHA) and for cell growth. This article describes a complete anaerobic and aerobic model for GAO based on an understanding of their metabolic pathways. The anaerobic model was developed and reported previously, while the aerobic metabolic model was developed in this study. It is based on the assumption that acetyl-CoA and propionyl-CoA go through the catabolic and anabolic processes independently. Experimental validation shows that the integrated model can predict the anaerobic and aerobic results very well. It was found in this study that at pH 7 the maximum acetate uptake rate of GAO in the anaerobic stage was slower than that reported for PAO. On the other hand, the net biomass production per C-mol of acetate added is about 9% higher for GAO than for PAO. This indicates that PAO and GAO each have certain competitive advantages during different parts of the anaerobic/aerobic process cycle. (C) 2002 Wiley Periodicals, Inc.

Relevance: 30.00%

Abstract:

Rapid prototyping (RP) is an approach for automatically building a physical object through solid freeform fabrication. RP has become a vital part of most product development processes because of the significant competitive advantages it offers over traditional manual model making. Even in academic environments, it is important to be able to quickly create accurate physical representations of concept solutions. Some of these can be used for simple visual validation, while others can be employed for ergonomic assessment by potential users, or even for physical testing. However, the cost of traditional RP methods prevents their regular use in most academic environments, and even for very preliminary prototypes in many small companies. The result is that the first physical prototypes are delayed to later stages, or that very rough mock-ups are created which are not as useful as they could be. In this paper we propose an approach for rapid and inexpensive model-making, developed in an academic context, which can be employed for a variety of objects.

Relevance: 30.00%

Abstract:

This paper explores the main determinants of the use of the cost accounting system (CAS) in Portuguese local government (PLG). Regression analysis is used to study the fit of a model of accounting change in PLG, focused on cost accounting systems oriented to activities and outputs. Based on survey data gathered from PLG, we find that the use of this information in decision-making and external reporting is still a mirage. We obtain evidence of the influence of the internal organizational context (especially a lack of support and difficulties in CAS implementation) on use for internal purposes, while the institutional environment (such as external pressures to implement the CAS) appears to be a stronger determinant of external use. The results reinforce the role of external reporting in legitimating the organization's activities to external stakeholders. Some control variables (such as political competition, usefulness and experience) also show explanatory power in the model. Some mixed results were found that call for further research. Our empirical results contribute to understanding the importance of interconnecting the contingency and institutional approaches to gain a clear picture of cost accounting change in the public sector.

Relevance: 30.00%

Abstract:

Toxic amides, such as acrylamide, are potentially harmful to human health, so there is great interest in the fabrication of compact and economical devices to measure their concentration in food products and effluents. The CHEmically Modified Field Effect Transistor (CHEMFET) based on amorphous silicon technology is a candidate for this type of application due to its low fabrication cost. In this article we use a semi-empirical model of the device to predict its performance in a solution of interfering ions. The semiconductor unit of the sensor was fabricated by the PECVD technique in the top-gate configuration. The CHEMFET simulation was performed based on the experimental current-voltage curves of the semiconductor unit and on an empirical model of the polymeric membrane. The results presented here are useful for the selection and design of CHEMFET membranes and give an idea of the limitations of the amorphous CHEMFET device. In addition to its economic advantage, the small size of this prototype makes it appropriate for in situ operation and integration in a sensor array.
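The abstract does not state the membrane model used; a common semi-empirical description of an ion-selective response in the presence of interfering ions is the Nikolsky-Eisenman equation, sketched here with hypothetical activities and selectivity coefficients:

```python
import math

def nikolsky_eisenman(a_primary, z_primary, interferers, E0=0.0, T=298.15):
    """Electrode potential of an ion-selective membrane with interfering ions.
    interferers: list of (activity, charge, selectivity_coefficient) tuples."""
    R, F = 8.314, 96485.0                       # gas and Faraday constants
    total = a_primary + sum(K * a ** (z_primary / z) for a, z, K in interferers)
    return E0 + (R * T) / (z_primary * F) * math.log(total)

# Hypothetical: monovalent primary ion at 1 mM, one monovalent interferer
# at 10 mM with selectivity coefficient K = 1e-2
print(nikolsky_eisenman(1e-3, 1, [(1e-2, 1, 1e-2)]))   # potential in volts
```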

Relevance: 30.00%

Abstract:

We study a model consisting of particles with dissimilar bonding sites ("patches"), which exhibits self-assembly into chains connected by Y-junctions, and investigate its phase behaviour by both simulations and theory. We show that, as the energy cost ε_j of forming Y-junctions increases, the extent of the liquid-vapour coexistence region at lower temperatures and densities is reduced. The phase diagram thus acquires a characteristic "pinched" shape in which the liquid branch density decreases as the temperature is lowered. To our knowledge, this is the first model in which the predicted topological phase transition between a fluid composed of short chains and a fluid rich in Y-junctions is actually observed. Above a certain threshold for ε_j, condensation ceases to exist because the entropy gain of forming Y-junctions can no longer offset their energy cost. We also show that the properties of these phase diagrams can be understood in terms of a temperature-dependent effective valence of the patchy particles. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3605703]

Relevance: 30.00%

Abstract:

Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their own decisions and interacting with other players. MASCEM provides several dynamic strategies for agents' behavior. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. The method uses a reinforcement learning algorithm to learn from experience how to choose the best from a set of possible bids. These bids are defined according to the cost function that each producer presents.
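The specific reinforcement learning algorithm is not named in the abstract; a minimal epsilon-greedy sketch of learning which bid, from a fixed set, maximizes profit in a simulated market (all parameters hypothetical):

```python
import random

class BidLearner:
    """Epsilon-greedy reinforcement learner over a fixed set of possible bids."""
    def __init__(self, bids, epsilon=0.1, alpha=0.1):
        self.bids = bids                   # candidate bid prices
        self.q = [0.0] * len(bids)         # running estimate of each bid's profit
        self.epsilon, self.alpha = epsilon, alpha

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.bids))                 # explore
        return max(range(len(self.bids)), key=self.q.__getitem__)   # exploit

    def update(self, i, profit):
        self.q[i] += self.alpha * (profit - self.q[i])   # incremental average

# Hypothetical market: a bid is dispatched only if it is at or below the
# (noisy) clearing price; profit is clearing price minus marginal cost
learner = BidLearner(bids=[30.0, 35.0, 40.0, 45.0])
marginal_cost = 28.0                       # from the producer's cost function
for _ in range(5000):
    i = learner.choose()
    clearing = random.gauss(42.0, 3.0)
    profit = clearing - marginal_cost if learner.bids[i] <= clearing else 0.0
    learner.update(i, profit)
print(max(zip(learner.q, learner.bids)))   # best estimated profit and its bid
```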

Relevance: 30.00%

Abstract:

This paper presents a distributed model predictive control (DMPC) approach for indoor thermal comfort that simultaneously optimizes the consumption of a limited shared energy resource. The control objective of each subsystem is to minimize its heating/cooling energy cost while keeping the indoor temperature and the power used within bounds. In a distributed coordinated environment, the control uses multiple dynamically decoupled agents (one per subsystem/house) that aim to satisfy the coupling constraints. According to its hourly power demand profile, each house assigns a priority level indicating how much it is willing to bid in the auction to consume the limited clean resource. This procedure allows the bid value to vary hourly, and consequently the order in which agents gain access to the clean energy also varies. In addition to the power constraints, all houses have thermal comfort constraints that must be fulfilled. The system is simulated with several houses in a distributed environment.
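The paper's coordination scheme spans many houses; as a minimal single-house sketch, one MPC step with a first-order thermal model and hypothetical parameters can be posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical first-order thermal model for one house:
# T[k+1] = a*T[k] + b*u[k] + (1 - a)*T_out
a, b, T_out, T0 = 0.9, 1.5, 5.0, 19.0
N = 12                                    # prediction horizon (hours)
price = np.linspace(1.0, 2.0, N)          # hourly energy prices
T_min, T_max, u_max = 20.0, 23.0, 3.0     # comfort band and power limit

# Express the predicted T[k] as an affine function of the inputs u[0..k-1]
G = np.zeros((N, N))                      # T[k] = free[k-1] + (G @ u)[k-1]
free = np.empty(N)
for k in range(1, N + 1):
    free[k - 1] = a**k * T0 + sum(a**(k - 1 - j) * (1 - a) * T_out
                                  for j in range(k))
    for j in range(k):
        G[k - 1, j] = a**(k - 1 - j) * b

# Minimise energy cost subject to comfort and power bounds (a linear program)
A_ub = np.vstack([G, -G])
b_ub = np.concatenate([T_max - free, free - T_min])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, u_max)] * N)
print(res.x.round(2))                     # optimal heating power profile
```

In the distributed setting described above, each house would solve a problem of this form while an auction, driven by the hourly priority levels, decides the order of access to the shared clean resource.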

Relevance: 30.00%

Abstract:

Graduate program in Animal Science - FCAV