585 results for cost estimation
Abstract:
The Macroscopic Fundamental Diagram (MFD) relates space-mean density and flow. Because the MFD represents area-wide network traffic performance, perimeter control strategies and network-wide traffic state estimation based on the MFD concept have been studied. Most previous works have estimated the MFD from fixed sensors, such as inductive loops, which can yield biased estimates in urban networks due to queue spillovers at intersections. To overcome this limitation, recent literature reports the use of trajectory data obtained from probe vehicles. However, these studies have relied on simulated datasets; few works have discussed the limitations of real datasets and their impact on variable estimation. This study compares two methods for estimating traffic state variables of signalised arterial sections: one based on cumulative vehicle counts (CUPRITE), and one based on vehicle trajectories from taxi Global Positioning System (GPS) logs. The comparison reveals some characteristics of the taxi trajectory data available in Brisbane, Australia. The current trajectory data are limited in quantity (i.e., the penetration rate), so the traffic state variables tend to be underestimated. Nevertheless, the trajectory-based method successfully captures the features of traffic states, suggesting that taxi trajectories can be a good estimator of network-wide traffic states.
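The standard way to obtain flow, density, and speed from trajectory data like this is Edie's generalized definitions over a space-time region. The sketch below illustrates that idea only; the trajectory format (distance travelled and time spent per vehicle inside the region) is an assumption for illustration, and this is not the paper's CUPRITE method.

```python
def edie_traffic_state(trajectories, section_length, period):
    """Edie's generalized definitions over a space-time region of size
    |A| = section_length * period.

    trajectories: list of (distance_m, time_s) per observed vehicle,
    i.e. the distance each vehicle travelled and the time it spent
    inside the region (a hypothetical input format for illustration).
    """
    area = section_length * period              # |A| = L * T
    total_distance = sum(d for d, _ in trajectories)
    total_time = sum(t for _, t in trajectories)
    flow = total_distance / area                # veh/s
    density = total_time / area                 # veh/m
    speed = total_distance / total_time if total_time else float("nan")
    return flow, density, speed

# Two probe vehicles traversing a 500 m section within a 60 s window.
# With a low probe penetration rate, estimates like these describe only
# the observed subset of traffic, hence the underestimation the abstract
# reports.
q, k, v = edie_traffic_state([(500.0, 50.0), (500.0, 40.0)], 500.0, 60.0)
```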
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice for determining whether RSS is beneficial and for obtaining the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. Using the example in Nahhas, Wolfe, and Chen (2002), we demonstrate that taking two or more observations from each set, even at the optimal set size for the RSS design, can be more beneficial.
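For readers unfamiliar with ranked set sampling, the Monte Carlo sketch below shows why it can beat simple random sampling at the same number of measured units. It implements only classical one-per-set RSS, not the paper's general scheme, and does not model ranking costs; all parameter values are illustrative.

```python
import random
import statistics

def rss_cycle(m, draw):
    """One classical RSS cycle: for i = 1..m, draw a fresh set of m units,
    rank them (here by their actual values), and measure only the i-th
    ranked unit, giving m measurements per cycle."""
    return [sorted(draw() for _ in range(m))[i] for i in range(m)]

def compare_variances(m=3, cycles=20, reps=2000, seed=1):
    """Monte Carlo variance of the RSS mean vs a simple-random-sample (SRS)
    mean with the same number of measured units (m * cycles)."""
    rng = random.Random(seed)
    draw = lambda: rng.gauss(0.0, 1.0)
    n = m * cycles
    rss_means, srs_means = [], []
    for _ in range(reps):
        sample = [x for _ in range(cycles) for x in rss_cycle(m, draw)]
        rss_means.append(statistics.fmean(sample))
        srs_means.append(statistics.fmean(draw() for _ in range(n)))
    return statistics.pvariance(rss_means), statistics.pvariance(srs_means)

v_rss, v_srs = compare_variances()  # v_rss is noticeably smaller than v_srs
```

The paper's point is that once ranking itself costs something, taking several observations per ranked set can improve this trade-off further.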
Abstract:
This article develops a method for analysis of growth data with multiple recaptures when the initial ages for all individuals are unknown. The existing approaches either impute the initial ages or model them as random effects. Assumptions about the initial age are not verifiable because all the initial ages are unknown. We present an alternative approach that treats all the lengths, including the length at first capture, as correlated repeated measures for each individual. Optimal estimating equations are developed using the generalized estimating equations approach, which only requires the first two moment assumptions. Explicit expressions for estimation of both mean growth parameters and variance components are given to minimize the computational complexity. Simulation studies indicate that the proposed method works well. Two real data sets are analyzed for illustration, one from whelks (Dicathais aegrota) and the other from southern rock lobster (Jasus edwardsii) in South Australia.
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows for specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation for the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
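The AR(1) working structure recommended above has a simple closed form: correlations decay geometrically with the lag between observations in a cluster. A minimal sketch of constructing it (real GEE software additionally estimates the correlation parameter from residuals):

```python
def ar1_working_correlation(size, alpha):
    """AR(1) working correlation matrix with entries R[j][k] = alpha**|j-k|,
    the structure the abstract recommends pairing with Gaussian estimation.
    `size` is the cluster size; `alpha` the lag-1 correlation parameter."""
    return [[alpha ** abs(j - k) for k in range(size)] for j in range(size)]

# For a cluster of 4 repeated measures with alpha = 0.5:
R = ar1_working_correlation(4, 0.5)
# first row: [1.0, 0.5, 0.25, 0.125]
```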
Abstract:
We consider estimation of mortality rates and growth parameters from length-frequency data of a fish stock when there is individual variability in the von Bertalanffy growth parameter L-infinity and investigate the possible bias in the estimates when the individual variability is ignored. Three methods are examined: (i) the regression method based on the Beverton and Holt's (1956, Rapp. P.V. Reun. Cons. Int. Explor. Mer, 140: 67-83) equation; (ii) the moment method of Powell (1979, Rapp. P.V. Reun. Cons. Int. Explor. Mer, 175: 167-169); and (iii) a generalization of Powell's method that estimates the individual variability to be incorporated into the estimation. It is found that the biases in the estimates from the existing methods are, in general, substantial, even when individual variability in growth is small and recruitment is uniform, and the generalized method performs better in terms of bias but is subject to a larger variation. There is a need to develop robust and flexible methods to deal with individual variability in the analysis of length-frequency data.
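The Beverton-Holt equation referred to in method (i) gives total mortality from mean length in the catch. A minimal sketch, with illustrative numbers that are not from the abstract:

```python
def beverton_holt_z(mean_length, l_prime, l_inf, k):
    """Beverton-Holt length-based estimator of total mortality:

        Z = K * (L_inf - mean_length) / (mean_length - L')

    where mean_length is the mean length of fish at or above L', the
    smallest length fully represented in the catch. This assumes a single
    L_inf for all individuals; as the abstract notes, ignoring individual
    variability in L_inf can bias such estimates substantially."""
    return k * (l_inf - mean_length) / (mean_length - l_prime)

# Illustrative values only: mean length 40 cm above a cutoff of 30 cm,
# L_inf = 60 cm, K = 0.2 per year.
z = beverton_holt_z(mean_length=40.0, l_prime=30.0, l_inf=60.0, k=0.2)
# z = 0.2 * 20 / 10 = 0.4 per year
```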
Abstract:
In the analysis of tagging data, it has been found that the least-squares method, based on the increment function known as the Fabens method, produces biased estimates because individual variability in growth is not allowed for. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to show the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
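The classical Fabens fit that the abstract modifies regresses observed growth increments on the increment function dL = (L_inf - L1)(1 - exp(-K dt)). The sketch below simulates tag-recapture data and recovers (L_inf, K) by a crude grid-search least squares; the optimizer, grids, and simulated values are illustrative assumptions, and the paper's individual-variability extension is not shown.

```python
import math
import random

def fabens_increment(l1, dt, l_inf, k):
    """Fabens growth increment: dL = (L_inf - l1) * (1 - exp(-K * dt))."""
    return (l_inf - l1) * (1.0 - math.exp(-k * dt))

def fabens_fit(data, linf_grid, k_grid):
    """Least-squares Fabens estimate by grid search over (L_inf, K).
    data: list of (length_at_release, time_at_liberty, observed_increment)."""
    def sse(l_inf, k):
        return sum((dl - fabens_increment(l1, dt, l_inf, k)) ** 2
                   for l1, dt, dl in data)
    return min(((li, ki) for li in linf_grid for ki in k_grid),
               key=lambda p: sse(*p))

# Simulate 200 recaptures under known parameters, then refit.
rng = random.Random(0)
true_linf, true_k = 60.0, 0.3
data = []
for _ in range(200):
    l1 = rng.uniform(20.0, 50.0)          # length at release
    dt = rng.uniform(0.5, 3.0)            # years at liberty
    dl = fabens_increment(l1, dt, true_linf, true_k) + rng.gauss(0.0, 0.5)
    data.append((l1, dt, dl))

linf_hat, k_hat = fabens_fit(data,
                             [55.0 + 0.5 * i for i in range(21)],   # 55..65
                             [0.20 + 0.01 * i for i in range(21)])  # 0.2..0.4
```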
Abstract:
The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the tagging effect on growth. Distribution-free and consistent estimating functions are constructed for estimation of growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics 47, 1519-1530), who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and possible effect of tagging on growth.
Abstract:
There’s a polyester mullet skirt gracing a derrière near you. It’s short at the front, long at the back, and it’s also known as the hi-lo skirt. Like fads that preceded it, the mullet skirt has a short fashion life, and although it will remain potentially wearable for years, it’s likely to soon be heading to the charity shop or to landfill...
Abstract:
Heavy haul railway lines are important and expensive items of infrastructure operating in an environment which is increasingly focussed on risk-based management and constrained profit margins. It is vital that costs are minimised but also that infrastructure satisfies failure criteria and standards of reliability which account for the random nature of wheel-rail forces and of the properties of the materials in the track. In Australia and the USA, concrete railway sleepers/ties are still designed using methods which the rest of the civil engineering world discarded decades ago in favour of the more rational, more economical and probabilistically based, limit states design (LSD) concept. This paper describes a LSD method for concrete sleepers which is based on (a) billions of measurements over many years of the real, random wheel-rail forces on heavy haul lines, and (b) the true capacity of sleepers. The essential principles on which the new method is based are similar to current, widely used LSD-based standards for concrete structures. The paper proposes and describes four limit states which a sleeper must satisfy, namely: strength; operations; serviceability; and fatigue. The method has been applied commercially to two new major heavy haul lines in Australia, where it has saved clients millions of dollars in capital expenditure.
Abstract:
This paper investigates the effect that text pre-processing approaches have on the estimation of the readability of web pages. Readability has been highlighted as an important aspect of web search result personalisation in previous work. The most widely used text readability measures rely on surface-level characteristics of text, such as the length of words and sentences. We demonstrate that different tools for extracting text from web pages lead to very different estimations of readability. This has an important implication for search engines because search result personalisation strategies that consider users' reading ability may fail if incorrect text readability estimations are computed.
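One widely used surface-level measure of the kind the abstract refers to is Flesch Reading Ease. The sketch below shows why extraction matters: the score is computed purely from word, sentence, and syllable counts of whatever text the extractor returns. The syllable counter here is a deliberately naive stand-in for the dictionary-based counters real tools use.

```python
import re

def naive_syllables(word):
    """Very rough syllable count: number of vowel runs (an illustrative
    approximation, not what production readability tools do)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
        206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    Higher scores mean easier text. Because the input is whatever an
    extractor pulled from the page, different extractors give different
    scores for the same page."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease("The cat sat on the mat. It was happy.")
```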
Abstract:
This research was undertaken to identify the challenges and impact factors that affect the successful outcomes of heritage building projects, especially the major causes of delays and cost overruns across projects in all Australian states. The project determined and analysed the causes of such delays and programme issues emanating from the planning and execution phases, whilst also analysing the requirements for managing multiple stakeholder relationships and the influence of unforeseen technical factors. The research proposes "call for action" guidance, validated by experienced experts in heritage building projects in Australia. The proposed guidance is designed to ensure that realistic cost targets and delivery timeframes are set in future heritage projects, and that the necessary interventions are made at appropriate project stages so that decisions help to prevent time and cost overruns.
Abstract:
This paper presents an approach for dynamic state estimation of aggregated generators by introducing a new correction factor for equivalent inter-area power flows. The spread of generators from the center of inertia of each area is summarized by the correction term α on the equivalent power flow between the areas and is applied to the identification and estimation process. A nonlinear time varying Kalman filter is applied to estimate the equivalent angles and velocities of coherent areas by reducing the effect of local modes on the estimated states. The approach is simulated on two test systems and the results show the effect of the correction factor and the performance of the state estimation by estimating the inter-area dynamics of the system.
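The filter in the abstract is nonlinear and time-varying; as background only, the sketch below shows a single predict/update cycle of the simplest (scalar, linear) Kalman filter. All parameter values are illustrative assumptions, and this is not the paper's aggregated-generator model.

```python
def kalman_step(x, p, z, a=1.0, q=1e-4, h=1.0, r=1e-2):
    """One predict/update cycle of a scalar linear Kalman filter:
        predict: x <- a*x,  p <- a*p*a + q
        update:  K = p*h / (h*p*h + r); x <- x + K*(z - h*x); p <- (1 - K*h)*p
    x: state estimate, p: its variance, z: measurement,
    a: state transition, q: process noise, h: observation, r: noise variance.
    """
    x, p = a * x, a * p * a + q            # predict
    k_gain = p * h / (h * p * h + r)       # Kalman gain
    x = x + k_gain * (z - h * x)           # measurement update
    p = (1.0 - k_gain * h) * p
    return x, p

# Filtering noisy measurements of a constant state pulls the estimate
# toward it while shrinking the estimate's variance.
x, p = 0.0, 1.0
for z in [1.02, 0.98, 1.01, 0.99, 1.00]:
    x, p = kalman_step(x, p, z)
```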
Abstract:
Purpose – Preliminary cost estimates for construction projects are often the basis of financial feasibility and budgeting decisions in the early stages of planning and for effective project control, monitoring and execution. The purpose of this paper is to identify and better understand the cost drivers and factors that contribute to the accuracy of estimates in residential construction projects from the developers' perspective.

Design/methodology/approach – The paper uses a literature review to determine the drivers that affect the accuracy of developers' early-stage cost estimates and the factors influencing the construction costs of residential construction projects. It uses cost variance data and other supporting documentation collected from two case study projects in South East Queensland, Australia, along with semi-structured interviews conducted with the practitioners involved.

Findings – It is found that many cost drivers or factors of cost uncertainty identified in the literature for large-scale projects are not as apparent and relevant for developers' small-scale residential construction projects. Specifically, the certainty and completeness of project-specific information, suitability of historical cost data, contingency allowances, methods of estimating and the estimator's level of experience significantly affect the accuracy of cost estimates. Developers of small-scale residential projects use pre-established and suitably priced bills of quantities as the prime estimating method, which is considered to be the most efficient and accurate method for standard house designs. However, this method needs to be backed with the expertise and experience of the estimator.

Originality/value – There is a lack of research on the accuracy of developers' early-stage cost estimates and the relevance and applicability of cost drivers and factors in residential construction projects. This research has practical significance for improving the accuracy of such preliminary cost estimates.
Abstract:
Organisations are always focussed on ensuring that their business operations are performed in the most cost-effective manner, and that processes are responsive to ever-changing cost pressures. In many organisations, however, strategic cost-based decisions at the managerial level are not directly or quickly translatable to process-level operational support. A primary reason for this disconnect is the limited system-based support for cost-informed decisions at the process-operational level in real time. In this paper, we describe the different ways in which a workflow management system (WfMS) can support process-related decisions, guided by cost-informed considerations at the operational level, during execution. As a result, cost information is elevated from its non-functional attribute role to a first-class, fully functional process perspective. The paper defines success criteria that a WfMS should meet to provide such support, and discusses a reference implementation within the YAWL workflow environment that demonstrates how the various types of cost-informed decision rules are supported, using an illustrative example.
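As a flavour of what a cost-informed decision rule at the operational level can look like, the sketch below picks the cheapest resource able to perform a work item. The data model is entirely hypothetical and is not the YAWL cost service or the paper's rule language.

```python
def cheapest_eligible(resources, task):
    """A simple cost-informed allocation rule: among resources whose skills
    cover the task, select the one with the lowest hourly rate.

    resources: list of dicts with keys 'name', 'skills' (a set of task
    names) and 'hourly_rate' -- a hypothetical schema for illustration.
    Returns None when no resource is eligible."""
    eligible = [r for r in resources if task in r["skills"]]
    return min(eligible, key=lambda r: r["hourly_rate"]) if eligible else None

team = [
    {"name": "A", "skills": {"review", "approve"}, "hourly_rate": 80.0},
    {"name": "B", "skills": {"review"}, "hourly_rate": 55.0},
]
choice = cheapest_eligible(team, "review")   # selects resource "B"
```

In a WfMS this kind of rule would fire at allocation time, which is exactly the elevation of cost from a non-functional attribute to a functional process perspective that the paper argues for.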