890 results for monotone estimating


Relevance: 20.00%

Publisher:

Abstract:

Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson Systems Development (JSD). Because current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort and hence cost. To address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which indicates the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
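
To make the COCOMO step concrete, below is a minimal sketch of the basic (organic-mode) COCOMO effort equation that a size estimate in delivered source instructions would feed into. The coefficients are Boehm's published organic-mode values; `dsi_estimate` stands in for the abstract's empirically derived mapping from process-structure-chart levels, which is not reproduced here.

```python
def cocomo_effort(dsi_estimate: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO, organic mode: effort in person-months from size in KDSI."""
    kdsi = dsi_estimate / 1000.0  # thousands of delivered source instructions
    return a * kdsi ** b

# Example: a project sized at 12,000 delivered source instructions.
print(f"{cocomo_effort(12_000):.1f} person-months")  # ~32.6
```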

Relevance: 20.00%

Publisher:

Abstract:

In this paper we investigate whether consideration of store-level heterogeneity in marketing mix effects improves the accuracy of the marketing mix elasticities, fit, and forecasting accuracy of the widely applied SCAN*PRO model of store sales. Models with continuous and discrete representations of heterogeneity, estimated using hierarchical Bayes (HB) and finite mixture (FM) techniques, respectively, are empirically compared to the original model, which does not account for store-level heterogeneity in marketing mix effects, and is estimated using ordinary least squares (OLS). The empirical comparisons are conducted in two contexts: Dutch store-level scanner data for the shampoo product category, and an extensive simulation experiment. The simulation investigates how between- and within-segment variance in marketing mix effects, error variance, the number of weeks of data, and the number of stores impact the accuracy of marketing mix elasticities, model fit, and forecasting accuracy. Contrary to expectations, accommodating store-level heterogeneity does not improve the accuracy of marketing mix elasticities relative to the homogeneous SCAN*PRO model, suggesting that little may be lost by employing the original homogeneous SCAN*PRO model estimated using ordinary least squares. Improvements in fit and forecasting accuracy are also fairly modest. We pursue an explanation for this result, since research in other contexts has shown clear advantages from assuming some type of heterogeneity in market response models. In an Afterthought section, we comment on the controversial nature of our result, distinguishing factors inherent to household-level data and associated models vs. general store-level data and associated models vs. the unique SCAN*PRO model specification.
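
For orientation, the homogeneous OLS benchmark amounts to a log-linear regression of store sales on marketing-mix variables. Below is a stripped-down, single-brand sketch on simulated data; variable names are illustrative, and the full SCAN*PRO specification adds cross-brand price indices, feature/display multipliers, and weekly and store indices.

```python
import numpy as np

# Homogeneous log-linear benchmark: log unit sales regressed on the log
# price index and a promotion dummy, pooled across store-weeks.
rng = np.random.default_rng(0)
n = 500                                                # store-week observations
log_price_index = np.log(rng.uniform(0.7, 1.1, n))     # actual / regular price
promo = rng.integers(0, 2, n).astype(float)            # promotion dummy
log_sales = 4.0 - 3.2 * log_price_index + 0.5 * promo + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), log_price_index, promo])
beta, *_ = np.linalg.lstsq(X, log_sales, rcond=None)
print(f"price elasticity ≈ {beta[1]:.2f}, promo multiplier ≈ {np.exp(beta[2]):.2f}")
```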

Relevance: 20.00%

Publisher:

Abstract:

This paper explores the potential for cost savings in the general Practice units of a Primary Care Trust (PCT) in the UK. We have used Data Envelopment Analysis (DEA) to identify benchmark Practices, which offer the lowest aggregate referral and drugs costs controlling for the number, age, gender, and deprivation level of the patients registered with each Practice. For the remaining non-benchmark Practices, estimates of the potential for savings on referral and drug costs were obtained. Such savings could be delivered through a combination of the following actions: (i) reducing the levels of referrals and prescriptions without affecting their mix (£15.74m in savings were identified, representing 6.4% of total expenditure); (ii) switching between inpatient and outpatient referrals and/or drug treatment to exploit differences in their unit costs (£10.61m in savings, representing 4.3% of total expenditure); (iii) seeking a different profile of referral and drug unit costs (£11.81m in savings, representing 4.8% of total expenditure).
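
The benchmarking step rests on a standard input-oriented, constant-returns DEA linear program. The sketch below solves it with `scipy.optimize.linprog` on toy data; the paper's actual model controls for patient numbers, age, gender and deprivation and separates referral from drug costs, which this sketch omits.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score for unit `o`.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1 .. lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # outputs: -sum_j lam_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Toy illustration: one input (aggregate cost), one output (weighted patients).
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[1.0], [1.5], [1.2]])
print([round(dea_input_efficiency(X, Y, o), 2) for o in range(3)])  # [1.0, 0.75, 0.8]
```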

Relevance: 20.00%

Publisher:

Abstract:

Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance: 20.00%

Publisher:

Abstract:

Zambia and many other countries in Sub-Saharan Africa face a key challenge of sustaining high levels of coverage of AIDS treatment under prospects of dwindling global resources for HIV/AIDS treatment. Policy debate on HIV/AIDS is increasingly focused on efficiency in the use of available resources. In this chapter, we apply Data Envelopment Analysis (DEA) to estimate short-term technical efficiency of 34 HIV/AIDS treatment facilities in Zambia. The data consist of input variables such as human resources, medical equipment, building space, drugs, medical supplies, and other materials used in providing HIV/AIDS treatment. Two main outputs, namely numbers of ART-years (Anti-Retroviral Therapy-years) and pre-ART-years, are included in the model. Results show the mean technical efficiency score to be 83%, with great variability in efficiency scores across the facilities. Scale inefficiency is also shown to be significant. About half of the facilities were on the efficiency frontier. We also construct bootstrap confidence intervals around the efficiency scores.
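
As an illustration of the bootstrap step, the sketch below resamples facilities and rescores the original units against each resampled frontier. With one input and one output the constant-returns DEA score reduces to a best-ratio comparison, which keeps the example self-contained; all data are simulated, and the chapter's multi-input setting would call a full DEA solver instead. A smoothed (Simar-Wilson-type) bootstrap is the more rigorous choice, since the naive bootstrap is known to be inconsistent at the frontier.

```python
import numpy as np

def crs_scores(x, y):
    """One-input, one-output CRS efficiency: ratio relative to the best ratio."""
    ratio = y / x
    return ratio / ratio.max()

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 3.0, 34)          # e.g., staffing input per facility
y = x * rng.uniform(0.5, 1.0, 34)      # e.g., ART-years delivered

scores = crs_scores(x, y)
boot = np.empty((2000, len(x)))
for b in range(2000):
    idx = rng.integers(0, len(x), len(x))        # resample facilities
    frontier = (y[idx] / x[idx]).max()           # re-estimate the frontier
    boot[b] = (y / x) / frontier                 # rescore the original units

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"facility 0: score={scores[0]:.2f}, 95% CI=({lo[0]:.2f}, {hi[0]:.2f})")
```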

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we present a novel approach to modeling financing constraints of firms. Specifically, we adopt an approach in which firm-level investment is a nonparametric function of some relevant firm characteristics, cash flow in particular. This enables us to generate firm-year specific measures of cash flow sensitivity of investment. We are therefore able to draw conclusions about financing constraints of individual firms as well as cohorts of firms without having to split our sample on an ad hoc basis. This is a significant improvement over the stylized approach that is based on comparison of point estimates of cash flow sensitivity of investment of the average firm of ad hoc sub-samples of firms. We use firm-level data from India to highlight the advantages of our approach. Our results suggest that the estimates generated by this approach are meaningful from an economic point of view and are consistent with the literature.
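
The core idea, sketched below on simulated data, is that a local-linear fit of investment on cash flow yields an observation-specific slope, i.e., a firm-year cash-flow sensitivity. The paper's specification includes further firm characteristics; the bandwidth and variable names here are illustrative.

```python
import numpy as np

def local_linear_slope(x, y, x0, h):
    """Slope of a kernel-weighted least-squares line fit around x0 (bandwidth h)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)         # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[1]                                  # derivative estimate at x0

rng = np.random.default_rng(2)
cash_flow = rng.uniform(0.0, 1.0, 400)                         # scaled by capital
investment = 0.1 + 0.3 * cash_flow**2 + rng.normal(0, 0.02, 400)

sensitivities = np.array(
    [local_linear_slope(cash_flow, investment, c, h=0.1) for c in cash_flow]
)
# True derivative is 0.6 * cash_flow, so sensitivity rises with cash flow.
print(sensitivities[:5].round(2))
```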

Relevance: 20.00%

Publisher:

Abstract:

The UK has a relatively low ratio of business R&D to GDP (the BERD ratio) compared to other leading economies. There was also a small decline in the UK's BERD ratio in the 1990s, whereas other leading economies experienced small rises. The relatively low BERD ratio cannot be explained solely by sectoral or industry-level differences between the UK and other countries. There is, therefore, considerable interest in understanding the firm-level determinants of investment in R&D. This report was commissioned by the DTI to analyse the link between R&D and productivity for a sample of firms derived from merging the ONS's Business Research and Development Database (BERD) and the Annual Respondents Database (ARD). The analysis estimates the private rates of return to R&D, and not the social rates of return, since it is the private returns that should drive firms' decisions. A key objective of this research is to analyse the productivity of R&D in small and medium-sized enterprises (SMEs). The analysis is intended to allow comparisons with the results in Rogers (2005), which uses publicly available data on R&D in medium to large UK firms in the 1990s.
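
For context, a common way to estimate private gross rates of return to R&D (not necessarily the report's exact specification) is a Griliches-style growth regression of output growth on R&D intensity, whose coefficient is read as the gross rate of return. A sketch on simulated data follows.

```python
import numpy as np

# Griliches-style growth regression: output growth on R&D intensity
# (R&D / sales) plus an input-growth control; the intensity coefficient
# is interpreted as the gross private rate of return to R&D.
rng = np.random.default_rng(3)
n = 300
rd_intensity = rng.uniform(0.0, 0.10, n)
labour_growth = rng.normal(0.02, 0.01, n)
output_growth = (0.01 + 0.25 * rd_intensity       # true 25% gross return
                 + 0.6 * labour_growth + rng.normal(0, 0.01, n))

X = np.column_stack([np.ones(n), rd_intensity, labour_growth])
beta, *_ = np.linalg.lstsq(X, output_growth, rcond=None)
print(f"estimated gross rate of return to R&D ≈ {beta[1]:.2f}")
```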

Relevance: 20.00%

Publisher:

Abstract:

Algorithmic resources for the elaboration and identification of monotone functions are considered, and some alternative structures are introduced that are more explicit in terms of structure and quantities and can serve as elements of practical identification algorithms. General monotone recognition is considered on a multi-dimensional grid structure. A particular reconstruction problem is reduced to monotone recognition by partitioning the multi-dimensional grid into a set of binary cubes.
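
As a small worked example of monotone identification on the binary cube: a monotone Boolean function is fully determined by its minimal true points, so collecting them via membership queries reconstructs the function. The exhaustive sketch below is for illustration only; the paper concerns general multi-dimensional grids and far more query-efficient structures.

```python
from itertools import product

def minimal_true_points(f, n):
    """All x with f(x) = 1 whose immediate predecessors all give 0."""
    minimal = []
    for x in product((0, 1), repeat=n):
        if not f(x):
            continue
        preds = [x[:i] + (0,) + x[i+1:] for i in range(n) if x[i] == 1]
        if not any(f(p) for p in preds):
            minimal.append(x)
    return minimal

# Example oracle: a monotone function of 3 variables.
f = lambda x: int(x[0] and x[1] or x[2])
print(minimal_true_points(f, 3))   # [(0, 0, 1), (1, 1, 0)]
```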

Relevance: 20.00%

Publisher:

Abstract:

Using monotone bifunctions, we introduce a recession concept for general equilibrium problems, relying on a variational convergence notion. The purpose is to extend some results of P. L. Lions on variational problems. In the process we generalize some results of H. Brezis and H. Attouch on the convergence of the resolvents associated with maximal monotone operators.
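
For reference, the standard objects the abstract builds on, in the usual notation (the paper's new recession construction itself is not reproduced here):

```latex
% Equilibrium problem for a bifunction F : K \times K \to \mathbb{R}:
\[
  \mathrm{EP}(F,K):\quad \text{find } \bar x \in K \text{ such that }
  F(\bar x, y) \ge 0 \quad \forall\, y \in K .
\]
% F is monotone when
\[
  F(x,y) + F(y,x) \le 0 \quad \forall\, x, y \in K ,
\]
% and the resolvent of a maximal monotone operator A, whose convergence the
% Brezis--Attouch-type results concern, is
\[
  J^{A}_{\lambda} := (I + \lambda A)^{-1}, \qquad \lambda > 0 .
\]
```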

Relevance: 20.00%

Publisher:

Abstract:

The article presents a new method for estimating the usability of a user interface based on its model. The principal features of the method are: the creation of an expandable knowledge base of usability defects; the detection of defects from the interface model within the design phase; and informing the developer not only of the existence of defects but also advising on their elimination.
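
A hypothetical sketch of the architecture described follows: a knowledge base of defect rules, each pairing a predicate over the interface model with advice on elimination. All names (`Widget`, `RULES`, `check_model`) are illustrative, not the article's API.

```python
from dataclasses import dataclass

@dataclass
class Widget:
    kind: str
    label: str
    width_px: int

# Expandable knowledge base: (defect predicate, elimination advice) pairs.
RULES = [
    (lambda w: w.kind == "button" and not w.label,
     "Unlabelled button: add a caption or accessible name."),
    (lambda w: w.kind == "button" and w.width_px < 24,
     "Touch target too small: enlarge to at least 24 px."),
]

def check_model(model):
    """Return (widget, advice) for every defect rule the model triggers."""
    return [(w, advice) for w in model for pred, advice in RULES if pred(w)]

model = [Widget("button", "", 20), Widget("button", "OK", 64)]
for w, advice in check_model(model):
    print(f"{w.kind}: {advice}")
```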

Relevance: 20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 49J40, 49J35, 58E30, 47H05

Relevance: 20.00%

Publisher:

Abstract:

We implement a method to estimate the direct effects of foreign ownership on foreign firms' productivity and the indirect effects (or spillovers) from the presence of foreign-owned firms on other foreign and domestic firms' productivity in a unifying framework, taking interactions between firms into account. To do so, we relax a fundamental assumption made in empirical studies examining a direct causal effect of foreign ownership on firm productivity, namely that of no interactions between firms. Based on our approach, we are able to combine direct and indirect effects of foreign ownership and calculate the total effect of foreign firms on local productivity. Our results show that all these effects vary with the level of foreign presence within a cluster, an important finding for the academic literature and policy debate on the benefits of attracting foreign-owned firms.
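
An illustrative reduced form of the decomposition, on simulated data (the paper's framework models interactions between firms explicitly): log productivity is regressed on a firm's own foreign-ownership dummy (direct effect) and on the foreign share in its cluster (spillover), and a total effect at a given level of foreign presence combines the two coefficients.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
foreign = rng.integers(0, 2, n).astype(float)    # own foreign-ownership dummy
cluster_share = rng.uniform(0.0, 0.8, n)         # foreign presence in cluster
log_tfp = (1.0 + 0.20 * foreign + 0.15 * cluster_share
           + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), foreign, cluster_share])
b, *_ = np.linalg.lstsq(X, log_tfp, rcond=None)
direct, spillover = b[1], b[2]
print(f"direct={direct:.2f}, spillover at 50% presence={0.5 * spillover:.2f}, "
      f"total for a foreign firm={direct + 0.5 * spillover:.2f}")
```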

Relevance: 20.00%

Publisher:

Abstract:

2002 Mathematics Subject Classification: 62P35, 62P30.

Relevance: 20.00%

Publisher:

Abstract:

Analysis of risk measures associated with price series data movements and their prediction is of strategic importance in the financial markets, as well as to policy makers, in particular for short- and long-term planning and for setting economic growth targets. For example, oil price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practised instrument to measure risk and is evaluated by analysing the negative/positive tail of the probability distribution of the returns (profit or loss). In modelling applications, least-squares estimation (LSE)-based linear regression models are often employed for modelling and analysing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following normal or approximately normal distributions, being free of large outliers and satisfying the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian situations, especially when the errors follow fat-tailed distributions and may not possess a finite variance. This is the situation in risk analysis, which involves analysing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1, L2 and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
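
A minimal sketch of the L1/L2/L∞ comparison on a simulated fat-tailed series, together with the empirical VaR of the residual returns (the chapter's data are Iranian crude oil prices; everything below is simulated):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 300)
y = 1.0 + 2.0 * x + 0.2 * rng.standard_t(df=2, size=300)   # fat-tailed errors

X = np.column_stack([np.ones_like(x), x])

def lp_fit(p):
    """Minimise the Lp norm of the residuals (p = np.inf gives Chebyshev)."""
    if np.isinf(p):
        loss = lambda b: np.max(np.abs(y - X @ b))
    else:
        loss = lambda b: np.sum(np.abs(y - X @ b) ** p)
    return minimize(loss, x0=np.zeros(2), method="Nelder-Mead").x

for p in (1, 2, np.inf):
    b = lp_fit(p)
    print(f"L{p}-norm fit: intercept {b[0]:.2f}, slope {b[1]:.2f}")

# 5% Value-at-Risk from the L1 fit's residual returns (loss tail).
resid = y - X @ lp_fit(1)
print(f"VaR(5%) = {-np.percentile(resid, 5):.2f}")
```

With Student-t errors (two degrees of freedom) the L1 fit tends to stay close to the true line, while the L∞ fit is dragged by the extreme residuals, which is the kind of behaviour the abstract's comparison of norms is getting at.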