472 results for Technique-cost
at Queensland University of Technology - ePrints Archive
Abstract:
With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase-I for linking. Empirical results also show that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
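The all-pairs shortest-path step of the link analysis phase can be sketched with the classic Floyd-Warshall algorithm; the service names and linking costs below are illustrative placeholders, not data from the thesis:

```python
# Floyd-Warshall all-pairs shortest paths over a small service graph.
# Edge weights model the cost of chaining one service's output into
# another's input; INF marks pairs that cannot be linked.
INF = float("inf")

def floyd_warshall(nodes, edges):
    """Return dist[u][v] = minimum cost of composing u ... v."""
    dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Hypothetical matched services and linking costs (not from the thesis).
nodes = ["geocode", "route", "tolls"]
edges = {("geocode", "route"): 1.0,
         ("route", "tolls"): 2.0,
         ("geocode", "tolls"): 5.0}
dist = floyd_warshall(nodes, edges)
# Cheapest composition from 'geocode' to 'tolls' goes via 'route'.
```

For the 10 to 15 candidate services found in phase-I, the cubic cost of Floyd-Warshall is negligible, which is one reason an all-pairs formulation is practical here.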
Abstract:
For certain continuum problems, it is desirable and beneficial to combine two different methods in order to exploit their advantages while evading their disadvantages. In this paper, a bridging transition algorithm is developed for the combination of the meshfree method (MM) with the finite element method (FEM). In this coupled method, the meshfree method is used in the sub-domain where high accuracy is required, and the finite element method is employed in the other sub-domains to improve computational efficiency. The MM domain and the FEM domain are connected by a transition (bridging) region. A modified variational formulation and the Lagrange multiplier method are used to ensure the compatibility of displacements and their gradients. To improve computational efficiency and reduce the meshing cost in the transition region, regularly distributed transition particles, which are independent of both the meshfree nodes and the FE nodes, can be inserted into the transition region. The newly developed coupled method is applied to the stress analysis of 2D solids and structures in order to investigate its performance and study its parameters. Numerical results show that the present coupled method is convergent, accurate and stable. The coupled method has promising potential for practical applications, because it takes advantage of both the meshfree method and FEM while overcoming their shortcomings.
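In generic form (the notation here is schematic, not the paper's exact formulation), the Lagrange multiplier coupling augments the total potential energy of the two sub-domains with an interface constraint term:

```latex
\Pi^{*} = \Pi_{\mathrm{MM}}\!\left(u^{\mathrm{MM}}\right)
        + \Pi_{\mathrm{FEM}}\!\left(u^{\mathrm{FEM}}\right)
        + \int_{\Gamma_{t}} \lambda \left(u^{\mathrm{MM}} - u^{\mathrm{FEM}}\right) \mathrm{d}\Gamma
```

Stationarity with respect to the multiplier field \(\lambda\) enforces \(u^{\mathrm{MM}} = u^{\mathrm{FEM}}\) on the transition interface \(\Gamma_{t}\), while stationarity with respect to the displacements recovers equilibrium in each sub-domain; analogous constraint terms can be written for the displacement gradients.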
Abstract:
The report presents a methodology for whole-of-life-cycle cost analysis of alternative treatment options for bridge structures that require rehabilitation. The methodology was developed after a review of current methods established that a life cycle analysis based on a probabilistic risk approach has many advantages, including the essential ability to consider the variability of input parameters. The input parameters for the analysis are identified as initial cost; maintenance, monitoring and repair cost; user cost; and failure cost. The methodology utilizes the Monte Carlo simulation technique to combine a number of probability distributions and so establish the distribution of whole-of-life-cycle cost. In performing the simulation, the need for a powerful software package that would work with a spreadsheet program was identified. After exploring several products on the market, the @RISK software was selected for the simulation. In conclusion, the report presents a typical decision-making scenario considering two alternative treatment options.
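The Monte Carlo step described above can be sketched in a few lines; the distributions and parameter values below are illustrative placeholders, not figures from the report (which uses the commercial @RISK package):

```python
import random

random.seed(42)

def life_cycle_cost():
    """Draw one whole-of-life-cycle cost by sampling each cost component."""
    initial = random.triangular(1.0e6, 1.6e6, 1.2e6)   # initial cost ($)
    maintenance = random.gauss(300_000, 50_000)        # maintenance/monitoring/repair
    user = random.gauss(150_000, 30_000)               # user cost
    failure = random.expovariate(1 / 50_000)           # failure cost (heavy-tailed)
    return initial + maintenance + user + failure

# Combine the component distributions by repeated sampling.
samples = [life_cycle_cost() for _ in range(20_000)]
samples.sort()
mean_cost = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]   # 90th-percentile life cycle cost
```

Repeating this for each treatment option yields a full cost distribution per option, so the decision can be based on percentiles and risk rather than a single point estimate.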
Abstract:
An estimation of costs for maintenance and rehabilitation is subject to variation due to the uncertainties of input parameters. This paper presents the results of an analysis to identify the input parameters that affect the variation in predicted road deterioration. Road data obtained from 1688 km of a national highway located in the tropical northeast of Queensland, Australia, were used in the analysis. Data were analysed using a probability-based method, the Monte Carlo simulation technique, and HDM-4's roughness prediction model. The results indicated that, among the input parameters, the variability of pavement strength, rut depth, annual equivalent axle load and initial roughness affected the variability of the predicted roughness. The second part of the paper presents an analysis to assess the variation in cost estimates due to the variability of the identified critical input parameters.
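A probability-based screening of this kind can be sketched by correlating sampled inputs with the model output; the toy roughness model and parameter ranges below are invented for illustration and are not the HDM-4 relationships used in the paper:

```python
import random

random.seed(1)

def roughness(strength, rut_depth, axle_load, initial_iri):
    # Toy progression model: weaker pavement and heavier axle loads
    # increase predicted roughness (IRI). NOT the HDM-4 model.
    return initial_iri + 0.5 * axle_load / strength + 0.2 * rut_depth

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

inputs = {n: [] for n in ("strength", "rut_depth", "axle_load", "initial_iri")}
outputs = []
for _ in range(5_000):
    s = random.uniform(2.0, 6.0)    # pavement strength (structural number)
    r = random.uniform(2.0, 15.0)   # rut depth (mm)
    a = random.uniform(0.5, 3.0)    # annual equivalent axle load
    i = random.uniform(1.5, 3.5)    # initial roughness (IRI)
    for name, v in zip(inputs, (s, r, a, i)):
        inputs[name].append(v)
    outputs.append(roughness(s, r, a, i))

# Inputs whose samples correlate strongly with the output drive its variance.
sensitivity = {name: pearson(vals, outputs) for name, vals in inputs.items()}
```

Ranking the absolute correlations identifies the critical parameters, mirroring the screening role the Monte Carlo analysis plays in the paper.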
Abstract:
Accurate owner budget estimates are critical to the initial decision-to-build process for highway construction projects. However, transportation projects have historically experienced significant construction cost overruns from the time the decision to build was taken by the owner. This paper addresses the problem of why highway projects overrun their predicted costs. It identifies the owner risk variables that contribute to significant cost overrun and then uses factor analysis, expert elicitation, and the nominal group technique to establish groups of importance-ranked owner risks. Stepwise multivariate regression analysis is also used to investigate any correlation of the percentage of cost overrun with risks, together with attributes such as highway project type, indexed cost, geographic location, and project delivery method. The research results indicate a correlation between the reciprocal of project budget size and percentage cost overrun. This can be useful for owners in determining more realistic decision-to-build highway budget estimates by taking into account the economies of scale associated with larger projects.
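The reported correlation between percentage cost overrun and the reciprocal of project budget size can be illustrated with a simple least-squares fit; the project figures below are synthetic, generated purely to show the mechanics, and are not data from the study:

```python
import random

random.seed(7)

def ols_fit(xs, ys):
    """Ordinary least-squares fit y = b0 + b1*x; returns (b0, b1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    return my - b1 * mx, b1

# Synthetic projects: overrun % shrinks as budget grows (economies of scale).
budgets = [random.uniform(2, 200) for _ in range(100)]          # $ million
overruns = [5 + 80 / b + random.gauss(0, 2) for b in budgets]   # percent

# Regress overrun % on the RECIPROCAL of budget size, as in the paper.
b0, b1 = ols_fit([1 / b for b in budgets], overruns)
predicted = b0 + b1 / 150   # expected overrun % for a hypothetical $150M project
```

A positive slope on 1/budget means small projects overrun proportionally more, which is exactly the economies-of-scale pattern the abstract describes.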
Abstract:
As civil infrastructures such as bridges age, there is a concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for structural health monitoring (SHM) of bridges. Acoustic emission is one such technique with the potential to predict failure. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequently analysing the recorded signals, which convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and joints with bolted connections, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active cracking, thus helping to prioritise maintenance work by focusing on active rather than dormant cracks. In spite of being a promising tool, some challenges still stand in the way of successful application of the AE technique. One is the large amount of data generated during testing; hence effective data analysis and management are necessary, especially for long-term monitoring. Complications also arise because a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of the damage level by appropriate analysis of the data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods and will be discussed and applied to the analysis of laboratory experimental data in this paper.
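The b-value analysis mentioned above adapts the seismological Gutenberg-Richter relation, log10 N = a - b*M, to AE data. A minimal sketch using Aki's maximum-likelihood estimator on synthetic amplitudes (the data are illustrative, not from the paper's experiments; in AE work the "magnitude" is commonly taken as the peak amplitude in dB divided by 20):

```python
import math
import random

random.seed(3)

def b_value(magnitudes, m_min):
    """Aki's maximum-likelihood b-value estimate above a completeness cutoff."""
    above = [m for m in magnitudes if m >= m_min]
    return math.log10(math.e) / (sum(above) / len(above) - m_min)

# Synthetic AE magnitudes drawn from an exponential law so that the true
# b-value is known in advance.
true_b = 1.2
mags = [random.expovariate(true_b * math.log(10)) + 2.0 for _ in range(5_000)]
estimate = b_value(mags, m_min=2.0)
```

In AE monitoring a falling b-value over successive windows is commonly read as a shift from distributed micro-cracking to localized macro-crack growth, which is what makes it useful for damage quantification.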
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The substation Load Tap Changer tap is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function is composed of the distribution line-loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple load levels. The constraints are the bus voltages and feeder currents, which must be maintained within their standard ranges. For validation of the proposed method, two case studies are tested. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
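The shape of the objective function (annual line-loss cost evaluated over a multi-level approximation of the load duration curve, plus capacitor investment cost) can be sketched as follows. The load-flow is reduced to a stub, and all prices, durations and loss figures are illustrative assumptions; a grid search stands in for the Modified Discrete PSO of the paper:

```python
# Fitness evaluation for one candidate capacitor rating. In the real
# strategy, losses_kw would come from a load-flow solution of the network
# and the search would be a Modified Discrete PSO; both are stubbed here.
ENERGY_PRICE = 0.06        # $/kWh (illustrative)
CAP_COST_PER_KVAR = 5.0    # $/kvar installed (illustrative)

# Load duration curve discretized into (load multiplier, hours/year) levels.
LDC_LEVELS = [(1.0, 1000), (0.8, 3000), (0.6, 3000), (0.4, 1760)]

def losses_kw(plan_kvar, level):
    """Stub load-flow: losses rise with load, fall with compensation."""
    base = 120.0 * level ** 2                 # uncompensated losses (kW)
    relief = min(plan_kvar, 1500) / 1500      # saturating benefit of kvar
    return base * (1 - 0.3 * relief)

def annual_cost(plan_kvar):
    loss_cost = sum(losses_kw(plan_kvar, lvl) * hrs
                    for lvl, hrs in LDC_LEVELS) * ENERGY_PRICE
    return loss_cost + CAP_COST_PER_KVAR * plan_kvar

# Crude search over discrete capacitor ratings (stand-in for the PSO).
best = min(range(0, 2001, 150), key=annual_cost)
```

The trade-off is visible even in this toy: adding kvar is worthwhile only while the marginal loss-cost saving exceeds the per-kvar investment cost.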
Abstract:
This paper presents a unified view of the relationship between quantity generating and price generating mechanisms in estimating individual prime construction costs/prices. A brief review of quantity generating techniques is provided, with particular emphasis on experientially based assumptive approaches, and this is compared with the level of pricing data available for the quantities generated in terms of the reliability of the ensuing prime cost estimates. It is argued that there is a trade-off between the reliability of quantity items and the reliability of rates. Thus it is shown that the level of quantity generation is optimised by maximising the joint reliability function of the quantity items and their associated rates. Some thoughts on how this joint reliability function can be evaluated and quantified follow. The application of these ideas is described within the overall strategy of the estimator's decision, "Which estimating technique shall I use for a given level of contract information?", and a case is made for the computer generation of estimates by several methods, with an indication of the reliability of each estimate, the ultimate choice of estimate being left to the estimator concerned. Finally, the potential for the development of automatic estimating systems within this framework is examined.
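The trade-off argued above (finer quantity breakdowns give more reliable quantities but thinner pricing data per item) can be caricatured in a few lines; both reliability curves are invented purely for illustration:

```python
# Pick the level of quantity generation that maximises the joint
# reliability of quantity items and their pricing rates. The two curves
# are hypothetical: quantity reliability improves with more detail,
# rate reliability degrades as pricing data thins out.
LEVELS = range(1, 11)   # 1 = single lump sum ... 10 = full bill of quantities

def quantity_reliability(level):
    return 1 - 0.5 / level            # more items, better-measured quantities

def rate_reliability(level):
    return 0.95 - 0.06 * (level - 1)  # sparser price data per item

def joint_reliability(level):
    return quantity_reliability(level) * rate_reliability(level)

best_level = max(LEVELS, key=joint_reliability)
```

Because one curve rises and the other falls, the product peaks at an intermediate level of detail, which is the optimisation the paper argues for.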
Abstract:
The irradiance profile around the receiver tube (RT) of a parabolic trough collector (PTC) is a key outcome of optical performance that affects the overall energy performance of the collector. Thermal performance evaluation of the RT relies on the appropriate determination of the irradiance profile. This article explains a technique in which empirical equations were developed to calculate the local irradiance as a function of the angular location on the RT of a standard PTC, using a rigorously verified Monte Carlo ray tracing model. A wide range of test conditions, including daily normal insolation, spectral selective coatings and glass envelope conditions, was selected from the data published by Dudley et al. [1]. The R² values of the equations are excellent, varying between 0.9857 and 0.9999. Therefore, these equations can be used confidently to produce realistic non-uniform boundary heat flux profiles around the RT at normal incidence for conjugate heat transfer analyses of the collector. The required inputs to the equations are the daily normal insolation and the spectral selective properties of the collector components. Since the equations are polynomial functions, data processing software can be employed to calculate the flux profile very easily and quickly. The ultimate goal of this research is to help make concentrating solar power technology cost-competitive with conventional energy technology by facilitating its ongoing research.
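Because the fitted equations are polynomials in the angular coordinate, generating the boundary heat-flux profile is a single polynomial evaluation per angle; a sketch with made-up coefficients (the actual coefficients depend on the insolation and selective coating and are given in the article, not here):

```python
def local_irradiance(theta_deg, coeffs):
    """Evaluate a polynomial flux profile at an angle via Horner's rule.
    coeffs are highest-degree first; the values used below are NOT from
    the article and serve only to show the mechanics."""
    acc = 0.0
    for c in coeffs:
        acc = acc * theta_deg + c
    return acc

# Hypothetical 4th-order fit of local flux ratio vs angular location (deg).
coeffs = [-1.2e-8, 6.0e-6, -9.0e-4, 4.0e-2, 1.0]

# Sweep the receiver circumference to build the boundary condition profile.
profile = [local_irradiance(t, coeffs) for t in range(0, 361, 5)]
```

A profile built this way can be applied directly as a non-uniform heat-flux boundary condition in a conjugate heat transfer model of the receiver tube.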
Abstract:
This paper presents a pose estimation approach that is resilient to typical sensor failures and suitable for low-cost agricultural robots. Guiding large agricultural machinery with highly accurate GPS/INS systems has become standard practice; however, these systems are inappropriate for smaller, lower-cost robots. Our positioning system estimates pose by fusing data from a low-cost global positioning sensor, low-cost inertial sensors and a new technique for vision-based row tracking. The results first demonstrate that our positioning system accurately guides a robot performing a coverage task across a 6 hectare field. The results then demonstrate that our vision-based row tracking algorithm improves the performance of the positioning system despite long periods of precision correction signal dropout and intermittent dropouts of the entire GPS sensor.
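The fusion idea can be sketched with a one-dimensional complementary filter: dead-reckoning from the inertial sensors carries the estimate between fixes, and each intermittent GPS (or vision-derived) observation pulls it back toward truth. This is a deliberate simplification of the paper's estimator, with invented noise figures:

```python
import random

random.seed(5)

def fuse(estimate, measurement, gain):
    """Complementary-filter update: blend prediction toward observation."""
    return estimate + gain * (measurement - estimate)

true_pos, est = 0.0, 0.0
errors = []
for step in range(200):
    true_pos += 0.5                              # robot advances 0.5 m
    est += 0.5 + random.gauss(0, 0.05)           # dead-reckoning with drift
    if step % 10 == 0:                           # intermittent GPS fix only
        gps = true_pos + random.gauss(0, 0.3)    # noisy low-cost GPS
        est = fuse(est, gps, gain=0.7)
    errors.append(abs(est - true_pos))

mean_error = sum(errors) / len(errors)
```

The same structure explains the paper's robustness claim: when GPS drops out entirely, the dead-reckoning (or row-tracking) term keeps the estimate bounded until the next fix arrives.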
Abstract:
INTRODUCTION: Increasing health care costs, limited resources and increased demand make cost-effective and cost-efficient delivery of Adolescent Idiopathic Scoliosis (AIS) management paramount. Rising implant costs in deformity correction surgery have prompted analysis of whether high implant densities are justified. The objective of this study was to analyse the costs of thoracoscopic scoliosis surgery, comparing initial learning-curve costs with those of the established technique and with the costs involved in posterior instrumented fusion reported in the literature. METHODS: 189 consecutive cases from April 2000 to July 2011 were assessed, with a minimum of 2 years' follow-up. Information was gathered from a prospective database covering perioperative factors, clinical and radiological outcomes, complications and patient-reported outcomes. The patients were divided into three groups to allow comparison: 1. a learning-curve cohort; 2. an intermediate cohort; and 3. a third cohort of patients treated using our established technique. Hospital finance records and implant manufacturer figures were corrected to 2013 costs. A literature review of AIS management costs and implant density in similar curve types was performed. RESULTS: The mean pre-op Cobb angle was 53° (95% CI 0.4) and was corrected post-op to a mean of 22.9° (CI 0.4). The overall complication rate was 20.6%, primarily in the first cohort, with a rate of 5.6% in the third cohort. The average total cost was $46,732, with operating room costs of $10,301 (22.0%) and ICU costs of $4,620 (9.8%). The mean number of screws placed was 7.1 (CI 0.04), with a single rod used for each case, giving average implant costs of $14,004 (29.9%). Comparison of the three groups revealed higher implant costs as the technique evolved to that in use today, from $13,049 in Group 1 to $14,577 in Group 3 (P<0.001). Conversely, operating room costs reduced from $10,621 in Group 1 to $7,573 (P<0.001) in Group 3. ICU stay was reduced from an average of 1.2 to 0 days.
In-patient stay was significantly (P=0.006) lower in Groups 2 and 3 (5.4 days) than in Group 1 (5.9 days), a reduction in cost of approximately $6,140. CONCLUSIONS: The evolution of our thoracoscopic anterior scoliosis correction has resulted in an increase in the number of levels fused and a reduction in the complication rate. Implant costs have risen as a result; however, there has been a concurrent decrease in the costs generated by operating room use, ICU and in-patient stay with increasing experience. A literature review of equivalent curve types treated posteriorly shows similar perioperative factors but higher implant density, 69-83% compared with the 50% in this study. Thoracoscopic scoliosis surgery presents a low-density, reliable, efficient and effective option for selected curves. A cost analysis of thoracoscopic scoliosis surgery using financial records and a prospectively collected database of all patients since 2000 demonstrates a clear cost advantage compared with equivalent posterior instrumentation and fusion.
Abstract:
Bearing faults are the most common cause of wind turbine failures. The unavailability and maintenance cost of wind turbines are becoming critically important with their rapid growth in electric networks. Early fault detection can reduce outage time and costs. This paper proposes Anomaly Detection (AD) machine learning algorithms for fault diagnosis of wind turbine bearings. The method was applied to a real data set, and the results are presented in this paper. For validation and comparison purposes, a set of baseline results was produced using the popular one-class SVM method to examine the ability of the proposed technique to detect incipient faults.
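As an illustration of the anomaly-detection framing (train only on healthy-bearing data, then flag departures from it), here is a minimal Gaussian-threshold detector; it is a stand-in for both the paper's algorithms and the one-class SVM baseline, applied to a synthetic vibration feature:

```python
import random

random.seed(11)

class GaussianAnomalyDetector:
    """Fit mean/std on healthy data; flag points beyond k standard deviations."""
    def __init__(self, k=3.0):
        self.k = k

    def fit(self, xs):
        n = len(xs)
        self.mu = sum(xs) / n
        self.sigma = (sum((x - self.mu) ** 2 for x in xs) / n) ** 0.5
        return self

    def is_anomaly(self, x):
        return abs(x - self.mu) > self.k * self.sigma

# Synthetic RMS vibration feature: healthy bearings ~ N(1.0, 0.1);
# an incipient bearing fault shifts the level upward.
healthy = [random.gauss(1.0, 0.1) for _ in range(1_000)]
detector = GaussianAnomalyDetector(k=3.0).fit(healthy)

faulty_reading = 1.8
flagged = detector.is_anomaly(faulty_reading)
```

The key property shared with one-class SVM is that no labelled fault examples are needed at training time, which matters because turbine fault data are scarce relative to healthy operating data.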
Abstract:
A recurring feature of modern practice is the occupational stress of project professionals, with debilitating effects on the people concerned and indirect effects on project success. Previous research outside the construction industry has used a psychology perceived stress questionnaire (PSQ) to measure occupational stress, resulting in the identification of one stressor, demand, and three sub-dimensional emotional reactions in terms of worry, tension and joy. The PSQ is translated into Chinese with a back-translation technique and used in a survey of young construction cost professionals in China. Principal component analysis and confirmatory factor analysis are used to test the divisibility of occupational stress, little mentioned in previous research on stress in the construction context. In addition, structural equation modelling is used to assess nomological validity by testing the effects of the three dimensions on organizational commitment, the main finding being that lack of joy has the sole significant effect. The three-dimensional measurement framework facilitates the standardized measurement of occupational stress. Further research will establish whether the findings are also applicable in other settings and explore the relations between stress dimensions and other managerial concepts.
Abstract:
A new technique, the reef resource inventory (RRI), was developed to map the distribution and abundance of benthos and substratum on reefs. The rapid field sampling technique uses divers to visually estimate the percentage cover of categories of benthos and substratum along 2 x 20 m plotless strip-transects positioned randomly over the tops, and systematically along the edges, of reefs. The purpose of this study was to compare the relative sampling accuracy of the RRI against the line intercept transect (LIT) technique, an international standard for sampling reef benthos and substratum. Analysis of paired sampling with LIT and RRI at 51 sites indicated that sampling accuracy was not different (P > 0.05) for 8 of the 12 benthos and substratum categories used in the study. Significant differences were attributed to small-scale patchiness and cryptic coloration of some benthos; effects associated with sampling a sparsely distributed animal along a line versus an area; difficulties in discriminating some of the benthos and substratum categories; and differences in visual acuity, since LIT measurements were taken by divers close to the seabed whereas RRI measurements were taken by divers higher in the water column. The relative cost efficiency of the RRI technique was at least three times that of LIT for all benthos and substratum categories, and as much as 10 times higher for two categories. These results suggest that the RRI can be used to obtain reliable and accurate estimates of the relative abundance of broad categories of reef benthos and substratum.