2 results for cost utility analysis

at Universidad de Alicante


Relevance:

80.00%

Publisher:

Abstract:

The literature states that project duration is affected by various scope factors. Drawing on 168 building projects carried out in Spain, this paper applies multiple regression analysis to develop a model for forecasting the duration of new builds. The proposed model uses project type, gross floor area (GFA), the cost/GFA ratio and the number of floors as predictor variables. The research identified the logarithm of construction speed as the most appropriate response variable. GFA has a greater influence than cost on project duration, but both factors are needed to achieve the most accurate forecast model. We developed an analysis to verify the stability of the forecast values and showed how a model with high fit and accuracy may still display anomalous behavior in its forecasts. The sensitivity of the proposed model to the variability of construction costs was also analyzed.
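The kind of model the abstract describes can be sketched as an ordinary-least-squares fit of log(construction speed) on the stated scope factors. The data, coefficients and helper names below are hypothetical illustrations under that assumption, not the paper's actual dataset or fitted model:

```python
import numpy as np

# Each row is one (hypothetical) project:
# [project-type dummy, gross floor area (m^2), cost/GFA (EUR/m^2), floors]
projects = np.array([
    [1,  2500.0, 600.0, 4],
    [0,  8000.0, 450.0, 2],
    [1,  1200.0, 700.0, 3],
    [0, 15000.0, 500.0, 6],
    [1,  3000.0, 650.0, 5],
    [0,  6000.0, 480.0, 3],
])
durations_months = np.array([14.0, 18.0, 10.0, 24.0, 16.0, 15.0])

# Response: log of construction speed = GFA / duration,
# following the abstract's choice of response variable.
gfa = projects[:, 1]
log_speed = np.log(gfa / durations_months)

# Design matrix with an intercept column; OLS via least squares.
X = np.column_stack([np.ones(len(projects)), projects])
beta, *_ = np.linalg.lstsq(X, log_speed, rcond=None)

def predict_duration(project_type, gfa, cost_per_gfa, floors):
    """Forecast duration (months) by inverting the fitted speed model."""
    x = np.array([1.0, project_type, gfa, cost_per_gfa, floors])
    speed = np.exp(x @ beta)  # predicted GFA built per month
    return gfa / speed

est = predict_duration(1, 4000.0, 620.0, 4)
```

A real application would also report goodness-of-fit and prediction intervals, which is where the stability and sensitivity analyses mentioned in the abstract come in.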

Relevance:

40.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas: in medicine, for the volumetric reconstruction of tomography data; in robotics, to reconstruct surfaces or scenes from range-sensor information; in industrial systems, for the quality control of manufactured objects; and even in biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the nearest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to reduce the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, focusing on distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature.
In that analysis, the behavior of the algorithm in diverse topological spaces, each characterized by a different metric, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that distance calculation accounts for a significant share of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to have a significant, positive effect on the method's overall performance. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to that of the Euclidean distance on a heterogeneous set of objects, scenes and initial configurations.
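A minimal point-to-point ICP sketch illustrates the idea of swapping the Euclidean distance for a cheaper metric during matching. This is an illustrative 2D implementation with a brute-force nearest-neighbor search, not the thesis code; the `metric` parameter, data and function names are assumptions for the example, and the rigid transform is estimated with the standard Kabsch (SVD) solution:

```python
import numpy as np

def nearest_indices(src, dst, metric="euclidean"):
    """For each source point, index of its nearest destination point."""
    diff = src[:, None, :] - dst[None, :, :]
    if metric == "manhattan":
        d = np.abs(diff).sum(axis=2)   # L1: no squares or square roots
    else:
        d = (diff ** 2).sum(axis=2)    # squared Euclidean
    return d.argmin(axis=1)

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, metric="euclidean"):
    """Iteratively match points and re-estimate the rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        idx = nearest_indices(cur, dst, metric)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Usage: register a slightly rotated, shifted copy back onto the original set.
rng = np.random.default_rng(0)
dst = rng.uniform(0, 10, size=(60, 2))
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
src = dst @ R0.T + np.array([0.3, -0.2])
aligned = icp(src, dst, metric="manhattan")
err = np.abs(aligned - dst).max()
```

The matching step dominates the cost, which is why replacing the Euclidean distance there with a cheaper metric such as L1, as the abstract discusses, can pay off; in practice the brute-force search would also be replaced by a k-d tree or similar spatial index.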