Abstract:
This paper presents an unmanned aircraft system (UAS) that uses a probabilistic model for autonomous front-on environmental sensing or photography of a target. The system is built from low-cost, readily available sensors, operates in dynamic environments, and is intended to improve the capabilities of dynamic waypoint-based navigation for low-cost UAS. The behavioural dynamics of target movement are modelled to inform the design of a Kalman filter and Markov model-based prediction algorithm. Geometrical concepts and the Haversine formula are applied to the maximum-likelihood case to predict a future state of the target, thus delivering a new waypoint for autonomous navigation. Results from the application to aerial filming with a low-cost UAS are presented, achieving the desired goal of a maintained front-on perspective without significantly constraining the route or pace of target movement.
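As a hedged sketch (the abstract gives no code), the Haversine great-circle distance it applies, plus a hypothetical helper that projects the next waypoint ahead of the target along a predicted bearing, could look like this; `waypoint_ahead` and the chosen Earth radius are illustrative assumptions, not the paper's implementation:

```python
from math import radians, degrees, sin, cos, asin, atan2, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; the paper does not state its value

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    phi1, lam1, phi2, lam2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((phi2 - phi1) / 2) ** 2 + cos(phi1) * cos(phi2) * sin((lam2 - lam1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def waypoint_ahead(lat, lon, bearing_deg, dist_km):
    """Hypothetical helper: a waypoint dist_km ahead of (lat, lon) on a bearing."""
    delta = dist_km / EARTH_RADIUS_KM          # angular distance in radians
    phi1, lam1, theta = radians(lat), radians(lon), radians(bearing_deg)
    phi2 = asin(sin(phi1) * cos(delta) + cos(phi1) * sin(delta) * cos(theta))
    lam2 = lam1 + atan2(sin(theta) * sin(delta) * cos(phi1),
                        cos(delta) - sin(phi1) * sin(phi2))
    return degrees(phi2), degrees(lam2)
```

One degree of longitude at the equator comes out near 111.2 km, which is a quick sanity check on the formula.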
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, these algorithms have not been widely adopted in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be less than the migration cost it incurs. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption and the total inter-VM traffic flow of the new placement. The GA has been implemented and evaluated experimentally, and the results show that it outperforms two well-known VM placement algorithms.
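A minimal sketch of a penalty-based GA of this kind follows. The host counts, VM loads, penalty weights and genetic operators are all invented for illustration; the paper's encoding and fitness terms may differ, but the key idea shown here matches the abstract: migrations away from the current placement are charged as a penalty inside the fitness function.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N_VMS, N_HOSTS = 12, 4
CURRENT = [random.randrange(N_HOSTS) for _ in range(N_VMS)]   # current placement
VM_LOAD = [random.uniform(0.1, 0.4) for _ in range(N_VMS)]    # hypothetical CPU demands
MIGRATION_PENALTY = 0.5  # illustrative cost charged per migrated VM

def fitness(placement):
    # energy term: number of powered-on hosts (a common proxy for energy use)
    energy = len(set(placement))
    # capacity term: overloaded hosts are penalised rather than forbidden
    over = 0.0
    for h in range(N_HOSTS):
        load = sum(VM_LOAD[v] for v in range(N_VMS) if placement[v] == h)
        over += max(0.0, load - 1.0)
    # migration term: each VM moved away from CURRENT costs MIGRATION_PENALTY
    migrations = sum(1 for v in range(N_VMS) if placement[v] != CURRENT[v])
    return energy + 10.0 * over + MIGRATION_PENALTY * migrations

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(N_HOSTS) for _ in range(N_VMS)] for _ in range(pop_size)]
    pop[0] = list(CURRENT)  # seed the population with the current placement
    for _ in range(generations):
        pop.sort(key=fitness)
        next_pop = pop[:4]                             # elitism
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)          # truncation selection
            cut = random.randrange(1, N_VMS)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                  # mutation
                child[random.randrange(N_VMS)] = random.randrange(N_HOSTS)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)

best = evolve()
```

Because the current placement is seeded into the population and elitism preserves the best individuals, the returned placement can never score worse than staying put.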
Abstract:
The design-build (DB) delivery method has been widely used in the United States due to its reputed superior cost and time performance. However, rigorous studies have produced inconclusive support, and only in terms of overall results, with few attempts being made to relate project characteristics to performance levels. This paper provides a larger and more finely grained analysis of a set of 418 DB projects from the online project database of the Design-Build Institute of America (DBIA), in terms of the time-overrun rate (TOR), early start rate (ESR), early completion rate (ECR) and cost-overrun rate (COR) associated with project type (e.g., commercial/institutional buildings and civil infrastructure projects), owners (e.g., Department of Defense and private corporations), procurement methods (e.g., ‘best value with discussion’ and qualifications-based selection), contract methods (e.g., lump sum and GMP) and LEED levels (e.g., gold and silver). The results show ‘best value with discussion’ to be the dominant procurement method and lump sum the most frequently used contract method. The DB method provides relatively good time performance, with more than 75% of DB projects completed on time or ahead of schedule. However, with more than 50% of DB projects exceeding their budgets, the DB advantage of cost saving remains uncertain. ANOVA tests indicate that DB projects using different procurement methods have significantly different time performance and that different owner types and contract methods significantly affect cost performance. In addition to contributing to empirical knowledge concerning the cost and time performance of DB projects with new solid evidence from a large sample size, the findings and practical implications of this study help owners understand the likely schedule and budget implications of their particular project characteristics.
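The one-way ANOVA comparison described above can be sketched in a few lines of plain Python. The owner groups and cost-overrun rates below are invented for illustration only; they are not the paper's data, and the paper likely used statistical software rather than a hand-rolled F statistic:

```python
# hypothetical cost-overrun rates (%) for three invented owner groups
groups = {
    "DoD":       [2.1, 3.4, 1.8, 4.0, 2.9],
    "private":   [5.6, 7.2, 6.1, 8.0, 5.9],
    "municipal": [3.0, 4.1, 3.7, 2.8, 4.4],
}

def one_way_anova_f(samples):
    """F statistic = mean square between groups / mean square within groups."""
    k = len(samples)                         # number of groups
    n = sum(len(g) for g in samples)         # total observations
    grand = sum(sum(g) for g in samples) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in samples)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in samples)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(list(groups.values()))
```

A large F relative to the F-distribution's critical value is what lets the study conclude that owner type significantly affects cost performance.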
Abstract:
Law is narration: it is narrative, narrator and the narrated. As a narrative, the law is constituted by a constellation of texts – from official sources such as statutes, treaties and cases, to private arrangements such as commercial contracts, deeds and parenting plans. All are a collection of stories: cases are narrative contests of facts and rights; statutes are recitations of the substantive and procedural bases for social, economic and political interactions; private agreements are plots for future relationships, whether personal or professional. As a narrator, law speaks in the language of modern liberalism. It describes its world in abstractions rather than in concrete experience, universal principles rather than individual subjectivity. It casts people into ‘parties’ to legal relationships; structures human interactions into ‘issues’ or ‘problems’; and tells individual stories within larger narrative arcs such as ‘the rule of law’ and ‘the interests of justice’. As the narrated, the law is a character in its own story. The scholarship of law, for example, is a type of story-telling with law as its central character. For positivists, still the dominant group in the legal genre, law is a closed system of formal rules with an “immanent rationality” and its own “structure, substantive content, procedure and tradition,” dedicated to finality of judgment. For scholars inspired by the interpretative tradition in the humanities, law is a more ambivalent character, susceptible to influences from outside its realm and masking a hidden ideological agenda under its cloak of universality and neutrality. For social scientists, law is a protagonist on a wider social stage, impacting on society, the economy and the polity in often surprising ways.
Abstract:
Cost estimating has been acknowledged as a crucial component of construction projects. Depending on the available information and project requirements, cost estimates evolve in tandem with the project lifecycle stages: conceptualisation, design development, execution and facility management. The accuracy of cost estimates is crucial to producing project tenders and, eventually, to budget management. Notwithstanding the initial slow pace of its adoption, Building Information Modelling (BIM) has successfully addressed a number of challenges previously characteristic of traditional approaches in the AEC industry, including poor communication, the prevalence of islands of information and frequent rework. It is therefore conceivable that BIM can be leveraged to address specific shortcomings of cost estimation. The impetus for leveraging BIM models for accurate cost estimation is to align budgeted and actual costs. This paper hypothesises that the accuracy of BIM-based estimation, as a more efficient mirror of traditional cost estimation processes, can be enhanced by simulating the factor variables of traditional cost estimation. Through literature reviews and preliminary expert interviews, this paper explores the factors that could potentially lead to more accurate cost estimates for construction projects. The findings show numerous factors that affect cost estimates, ranging from project information and its characteristics to the project team, clients, contractual matters and other external influences. This paper makes a particular contribution to the early phase of BIM-based project estimation.
Abstract:
Cost estimating is a key task within Quantity Surveyors’ (QS) offices. Provision of an accurate estimate is vital to ensure that the objectives of the client are met by staying within the client’s budget. Building Information Modelling (BIM) is an evolving technology that has gained attention in construction industries all over the world. Benefits from the use of BIM include cost and time savings if the processes used by the procurement team are adapted to maximise them. BIM can be used by QSs to automate aspects of quantity take-off and the preparation of estimates, decreasing turnaround time and assisting in controlling errors and inaccuracies. The Malaysian government has decided to require the use of BIM for its projects beginning in 2016. However, slow uptake is reported in the use of BIM both within companies and to support collaboration within the Malaysian industry. It has been recommended that QSs start evaluating the impact of BIM on their practices. This paper reviews the perspectives of QSs in Malaysia towards the use of BIM to achieve more dependable results in their cost estimating practice. The objectives of this paper include identifying strategies for improving practice and potential adoption drivers that lead QSs to use BIM in their construction projects. The expert interviews found that, although the interviewees still use traditional methods and do not practise BIM, they have acquired some limited knowledge of it. There are some drivers that could potentially motivate them to employ BIM in their practices. These include client demands, innovation over traditional methods, speed in estimating costs, reduced time and costs, improvement in practices and self-awareness, efficiency in projects, and competition from other companies. The findings of this paper identify the potential drivers for encouraging Malaysian Quantity Surveyors to exploit BIM in their construction projects.
Abstract:
Individuals with limb amputation fitted with conventional socket-suspended prostheses often experience socket-related discomfort leading to a significant decrease in quality of life. Bone-anchored prostheses are increasingly acknowledged as a viable alternative method of attaching an artificial limb. In this case, the prosthesis is attached directly to the residual skeleton through a percutaneous fixation. To date, a few osseointegration fixations are commercially available. Several devices are at different stages of development, particularly in Europe and the US. [1-15] Surgical procedures are clearly increasing worldwide; Australia, and Queensland in particular, has one of the fastest-growing recipient populations. Previous studies involving either screw-type implants or press-fit fixations for bone anchorage have focused on biomechanical aspects as well as the clinical benefits and safety of the procedure. In principle, bone-anchored prostheses should eliminate lifetime expenses associated with sockets and, consequently, potentially alleviate the financial burden of amputation for governmental organizations. Unfortunately, publications focusing on cost-effectiveness are sparse. In fact, only one study, published by Haggstrom et al (2012), reported that “despite significantly fewer visits for prosthetic service the annual mean costs for osseointegrated prostheses were comparable with socket-suspended prostheses”. Consequently, governmental organizations such as Queensland Artificial Limb Services (QALS) face a number of challenges in adjusting financial assistance schemes to be fair and equitable to their clients fitted with bone-anchored prostheses. Clearly, more scientific evidence extracted from governmental databases is needed to further consolidate the analyses of the financial burden associated with both methods of attachment (i.e., conventional socket prostheses and bone-anchored prostheses).
The purpose of the presentation will be to share the current outcomes of a cost-analysis study led by QALS. The specific objectives will be: • To outline methodological avenues for assessing the cost-effectiveness of bone-anchored prostheses compared to conventional socket prostheses; • To highlight the potential obstacles and limitations in cost-effectiveness analyses of bone-anchored prostheses; • To present cohort results of a cost-effectiveness analysis (QALY vs cost), including the determination of fair Incremental Cost-Effectiveness Ratios (ICERs), as well as a cost-benefit analysis comparing costs and key outcome indicators (e.g., QTFA, TUG, 6MWT, activities of daily living) over QALS funding cycles for both methods of attachment.
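The ICER mentioned in the objectives has a standard definition: the extra cost divided by the extra QALYs gained when moving from one intervention to the other. A minimal sketch, with entirely hypothetical cost and QALY figures (the study's actual numbers are not given in this abstract):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# hypothetical figures: bone-anchored vs socket prosthesis over a funding cycle
ratio = icer(cost_new=60000.0, cost_old=45000.0, qaly_new=6.0, qaly_old=5.5)
```

A ratio below the funder's willingness-to-pay threshold per QALY would support the new method of attachment.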
Abstract:
BACKGROUND The Traffic Psychology workgroup is concerned with the social, behavioural, and perceptual aspects associated with the use and non-use of bicycle helmets, in their various forms and under various cycling conditions. OBJECTIVES The objectives of WG2 are to (1) share current knowledge among the people already working in the field, (2) suggest new ideas for research on and evaluation of the design of bicycle helmets, and (3) discuss options for funding such research within the individual frameworks of the participants. Areas for research include: 3.1. The patterns of helmet use among different users: children, adults, and sports enthusiasts. 3.2. The use of helmets in different environments: rural roads, urban streets, and bike trails. 3.3. The concerns bicyclists have about their safety and the perceived impact of using helmets on comfort and convenience. 3.4. The benefit of helmets for enhancing visibility, and how variations in helmet design and colour affect daytime, night-time, and dusk visibility. 3.5. The role of helmets in the acceptance of city-wide pickup-and-drop-off bicycles. 3.6. The impact of helmets on the visual search behaviour of bicyclists.
Abstract:
RFID is an important technology for building a ubiquitous society, but an RFID system uses an open radio-frequency signal to transfer information, which poses many serious threats to its privacy and security. In general, the computing and storage resources in an RFID tag are very limited, making these security and privacy problems difficult to solve, especially for low-cost tags. To ensure the security and privacy of low-cost RFID systems, we propose a lightweight authentication protocol based on a hash function. The protocol ensures forward security and prevents information leakage, location tracing, eavesdropping, replay attacks and spoofing. It completes strong authentication of the reader to the tag through two authentication rounds, and it transfers only part of the encrypted tag identifier in each session, so it is difficult for an adversary to intercept a tag's whole identifier. The protocol is simple, requires few computing and storage resources, and is well suited to low-cost RFID systems.
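The abstract does not specify the protocol's messages, so the following is only an illustrative sketch of two ideas it names: two challenge-response rounds per authentication, and revealing only part of the hashed identifier per session. The message layout, hash choice and truncation length are assumptions, not the paper's design:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Lightweight stand-in for the protocol's hash function."""
    return hashlib.sha256(b"|".join(parts)).digest()

class Tag:
    def __init__(self, identifier: bytes):
        self._id = identifier
    def respond(self, challenge: bytes) -> bytes:
        # reveal only part of the hashed identifier in each session,
        # so an eavesdropper never sees the whole identifier
        return h(self._id, challenge)[:8]

class Reader:
    def __init__(self, known_ids):
        self._known = known_ids  # identifiers shared with legitimate tags
    def authenticate(self, tag) -> bool:
        # two independent challenge rounds; fresh nonces defeat replay
        for _ in range(2):
            nonce = secrets.token_bytes(16)
            reply = tag.respond(nonce)
            if not any(h(i, nonce)[:8] == reply for i in self._known):
                return False
        return True
```

Because each response is bound to a fresh random nonce, replaying an old response fails, and the truncated digest leaks nothing directly about the identifier.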
Abstract:
A novel, highly selective resonance light scattering (RLS) method was researched and developed for the analysis of phenol in different types of industrial water. An important aspect of the method involved the use of graphene quantum dots (GQDs), which were initially obtained from the pyrolysis of citric acid dissolved in aqueous solutions. The GQDs in the presence of horseradish peroxidase (HRP) and H2O2 were found to react quantitatively with phenol such that the RLS spectral band (310 nm) was quantitatively enhanced as a consequence of the interaction between the GQDs and the quinone formed in the above reaction. It was demonstrated that the novel analytical method had better selectivity and sensitivity for the determination of phenol in water as compared to other analytical methods found in the literature. Thus, trace amounts of phenol were detected over the linear ranges of 6.00×10−8–2.16×10−6 M and 2.40×10−6–2.88×10−5 M with a detection limit of 2.20×10−8 M. In addition, three different spiked waste water samples and two untreated lake water samples were analysed for phenol. Satisfactory results were obtained with the use of the novel, sensitive and rapid RLS method.
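Quantitation over a stated linear range like the one above rests on a linear calibration curve. As a hedged sketch, the intensities below are invented (the paper reports ranges and a detection limit, not raw calibration data), but the least-squares fit and inversion are the standard procedure:

```python
# hypothetical calibration points: phenol concentration (M) vs RLS intensity
# (a.u.), spanning part of the paper's lower linear range (6.00e-8 to 2.16e-6 M)
conc = [1.0e-7, 5.0e-7, 1.0e-6, 1.5e-6, 2.0e-6]
intensity = [12.0, 58.0, 116.0, 172.0, 230.0]   # invented, roughly linear

def fit_line(xs, ys):
    """Ordinary least squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line(conc, intensity)

def concentration(rls_signal):
    """Invert the calibration line to estimate phenol concentration."""
    return (rls_signal - intercept) / slope
```

An unknown sample's RLS reading at 310 nm is then mapped back through the fitted line, valid only inside the calibrated linear range.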
Abstract:
Rolling-element bearing failures are the most frequent problems in rotating machinery; they can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection in such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. The method employs classification techniques to discriminate defective from normal examples. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop the anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a run-to-failure bearing test. Finally, anomaly detection is compared with a popular method, the Support Vector Machine (SVM), to investigate the sensitivity and accuracy of this approach and its ability to detect anomalies in early stages.
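The kurtosis feature named above is the fourth standardised moment; it sits near 3 for Gaussian vibration and rises sharply when bearing defects inject impulses. A minimal sketch, with a simple fixed threshold standing in for the paper's trained AD classifier (the threshold and sample windows are invented):

```python
def kurtosis(xs):
    """Fourth standardised moment: m4 / m2^2; ~3 for Gaussian data,
    much higher for impulsive (defect-bearing) vibration signals."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2

def is_anomalous(window, threshold=3.5):
    # simple thresholding stand-in for the study's trained AD classifier
    return kurtosis(window) > threshold

# invented vibration windows: smooth baseline vs one with a defect impulse
healthy = [0.2, -0.1, 0.3, -0.2, 0.1, 0.0, -0.3, 0.2, -0.1, 0.1]
faulty  = [0.2, -0.1, 0.3, -0.2, 5.0, 0.0, -0.3, 0.2, -0.1, 0.1]
```

The single impulse in `faulty` drives its kurtosis well above the healthy window's, which is exactly the property that makes kurtosis a useful early-warning feature.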
Abstract:
Pattern recognition is a promising approach for the identification of structural damage using measured dynamic data. Much of the research on pattern recognition has employed artificial neural networks (ANNs) and genetic algorithms as systematic ways of matching pattern features. The selection of a damage-sensitive and noise-insensitive pattern feature is important for all structural damage identification methods. Accordingly, a neural network-based damage detection method using frequency response function (FRF) data is presented in this paper. This method can effectively consider uncertainties in the measured data from which training patterns are generated. The proposed method reduces the dimension of the initial FRF data, transforms it into new damage indices, and employs an ANN for the actual damage localization and quantification using damage patterns recognized by the algorithm. In civil engineering applications, the measurement of dynamic response under field conditions always contains noise components from environmental factors. In order to evaluate the performance of the proposed strategy with noise-polluted data, noise-contaminated measurements are also introduced to the proposed algorithm. ANNs with optimal architecture give minimum training and testing errors and provide precise damage detection results. In order to maximize damage detection performance, the optimal ANN architecture is identified by selecting the number of hidden layers and the number of neurons per hidden layer through trial and error. In real testing, the number of measurement points and the measurement locations used to obtain the structural response are critical for damage detection. Therefore, optimal sensor placement to improve damage identification is also investigated herein. A finite element model of a two-storey framed structure is used to train the neural network. The trained network shows accurate performance and gives low errors with simulated and noise-contaminated data for single and multiple damage cases. As a result, the proposed method can be used for structural health monitoring and damage detection, particularly for cases where the measurement data is very large. Furthermore, it is suggested that an optimal ANN architecture can detect damage occurrence with good accuracy and can provide damage quantification with reasonable accuracy under varying levels of damage.
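The trial-and-error architecture selection described above amounts to a grid search over hidden-layer counts and neurons per layer. In this sketch, `validation_error` is a deterministic stand-in for "train the ANN on FRF-derived damage indices and measure testing error" (its U-shaped form mimicking under- and over-fitting is an assumption, as are the candidate sizes):

```python
import itertools

def validation_error(n_layers, n_neurons):
    """Stand-in for training an ANN and returning its testing error.
    Real use would train on FRF-derived damage indices; the shape here
    simply penalises both too little and too much model capacity."""
    capacity = n_layers * n_neurons
    return abs(capacity - 24) / 24 + 0.05 * n_layers

def select_architecture(layer_options, neuron_options):
    """Trial-and-error search for the architecture with minimum test error."""
    best, best_err = None, float("inf")
    for n_layers, n_neurons in itertools.product(layer_options, neuron_options):
        err = validation_error(n_layers, n_neurons)
        if err < best_err:
            best, best_err = (n_layers, n_neurons), err
    return best, best_err

arch, err = select_architecture([1, 2, 3], [4, 8, 12, 16, 24])
```

Swapping `validation_error` for a real train-and-evaluate routine turns this skeleton into the paper's selection procedure without changing the search logic.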
Abstract:
In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that l1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.
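The patch-encode-pool pipeline above can be sketched compactly. The encoder here is a deliberately crude 1-sparse stand-in (nearest dictionary atom) rather than the paper's l1-minimisation, SANN or GMM-based encodings, and the tiny dictionary and image are invented; the averaging step, which discards spatial order to gain misalignment robustness, is the part that matches the abstract:

```python
def extract_patches(image, patch=2):
    """Slide a patch x patch window over a 2-D image (list of rows)."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            out.append([image[r + i][c + j]
                        for i in range(patch) for j in range(patch)])
    return out

def encode(patch_vec, dictionary):
    """Stand-in sparse encoder: a 1-sparse code selecting the nearest atom."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    k = min(range(len(dictionary)), key=lambda i: dist(patch_vec, dictionary[i]))
    code = [0.0] * len(dictionary)
    code[k] = 1.0
    return code

def region_descriptor(codes):
    """Average pooling: discards spatial order within the region,
    which is what gives robustness to misalignment and deformation."""
    n = len(codes)
    return [sum(c[i] for c in codes) / n for i in range(len(codes[0]))]
```

Descriptors from several regions would then be concatenated into the overall face descriptor that the verification stage compares.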
Abstract:
We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds that show that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
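The dual-LP view above makes the optimisation variable a stationary distribution mu over state-action pairs, with average cost as the inner product of mu with the cost vector. A toy illustration of just that objective (the 3-state, 2-action costs and distribution are invented, and the feature-based neighbourhood and stochastic optimisation from the paper are not shown):

```python
states, actions = 3, 2

# hypothetical per-(state, action) costs c(s, a)
cost = [[1.0, 4.0],
        [2.0, 0.5],
        [3.0, 2.5]]

# a hypothetical stationary state-action distribution mu(s, a); entries sum to 1
mu = [[0.2, 0.1],
      [0.1, 0.3],
      [0.2, 0.1]]

def average_cost(mu, cost):
    """Objective of the dual LP: sum over (s, a) of mu(s, a) * c(s, a)."""
    return sum(mu[s][a] * cost[s][a]
               for s in range(states) for a in range(actions))

# sanity check: mu must be a probability distribution over state-action pairs
assert abs(sum(sum(row) for row in mu) - 1.0) < 1e-12
```

The paper's algorithm searches over a low-dimensional, feature-defined subset of such distributions, which is why its guarantees scale with the comparison class rather than the state space.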
Abstract:
Phosphorus has a number of indispensable biochemical roles, but its pattern of natural deposition, the low solubility of phosphates and their rapid transformation to insoluble forms commonly make the element the growth-limiting nutrient, particularly in aquatic ecosystems. Famously, phosphorus that reaches water bodies is commonly the main cause of eutrophication. This undesirable process can severely affect many aquatic biotas around the world. Many management practices have been proposed, but long-term monitoring of phosphorus levels is necessary to ensure that eutrophication does not occur. Passive sampling techniques, which have been developed over the last decades, offer several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, flow-proportional loads and representative average concentrations of phosphorus in the environment. Although some types of passive samplers are commercially available, their use is still scarcely reported in the literature. In Japan, there is limited application of passive sampling techniques to monitor phosphorus, even in agricultural environments. This paper aims to introduce these relatively new P-sampling techniques and their potential for use in environmental monitoring studies.