149 results for Analysis Model

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Abstract:

With increasing pressure to provide environmentally responsible infrastructure products and services, stakeholders are placing significant focus on the early identification of the financial viability and outcomes of infrastructure projects. Traditionally, there has been an imbalance between sustainability measures and project budgets. On one hand, the industry tends to apply a first-cost mentality when developing infrastructure projects. On the other, environmental experts and technology innovators often push for the greenest products and systems with little concern for cost. This situation is changing quickly as the industry comes under pressure to continue returning a profit while better adapting to current and emerging global sustainability issues. For the infrastructure sector to contribute to sustainable development, it will need to increase value and efficiency. Thus, there is a great need for tools that enable decision makers to evaluate competing initiatives and identify the most sustainable approaches to procuring infrastructure projects. To ensure these objectives are achieved, the concept of life-cycle costing analysis (LCCA) plays a significant role in the economics of an infrastructure project. Recently, a few research initiatives have applied LCCA models to road infrastructure, but these focused on the traditional economics of a project. There is little coverage of life-cycle costing as a method to evaluate the criteria and assess the economic implications of pursuing sustainability in road infrastructure projects. To rectify this problem, this paper reviews the theoretical basis of previous LCCA models before discussing their inability to capture sustainability indicators in road infrastructure projects. It then introduces ongoing research aimed at developing a new model that integrates new cost elements, based on the sustainability indicators, with the traditional and proven LCCA approach. The research is expected to produce a working model for sustainability-based life-cycle cost analysis.
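The traditional economics underpinning LCCA reduces to discounting all costs over an asset's life to a present value; a sustainability-based variant adds further monetised cost elements to that sum. A minimal sketch of the discounting core, assuming illustrative cost categories and a 4% discount rate (none of these figures come from the paper):

```python
def life_cycle_cost(initial_cost, annual_costs, discount_rate):
    """Net present value of all costs over the analysis period.

    annual_costs: total costs (maintenance, operation, and any
    monetised sustainability elements) for years 1..N.
    """
    npv = initial_cost
    for year, cost in enumerate(annual_costs, start=1):
        npv += cost / (1 + discount_rate) ** year
    return npv

# Example: a 30-year road asset with a hypothetical extra
# "sustainability" cost element added to each year's costs.
maintenance = [120_000] * 30
sustainability_element = [15_000] * 30
total_annual = [m + s for m, s in zip(maintenance, sustainability_element)]
print(life_cycle_cost(5_000_000, total_annual, discount_rate=0.04))
```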

Relevance:

100.00%

Abstract:

Regardless of the benefits of technology, safety planners still face difficulties in explaining errors related to the use of different technologies and in evaluating how those errors impact the performance of safety decision making. This paper presents a preliminary error impact analysis testbed to model object identification and tracking errors caused by image-based devices and algorithms, and to analyze the impact of these errors on the spatial safety assessment of earthmoving and surface mining activities. More specifically, this research designed a testbed to model workspaces for earthmoving operations, to simulate safety-related violations, and to apply different object identification and tracking errors to the data collected and processed for spatial safety assessment. Three different cases were analyzed based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of the errors were investigated for safety planning purposes.
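The core of such a testbed is injecting identification and tracking errors into ground-truth trajectories and measuring how safety violations are then missed. A minimal sketch under assumed error rates and a made-up proximity rule (all constants are hypothetical, not the testbed's actual parameters):

```python
import random

# Hypothetical ground truth: (worker_xy, equipment_xy) per frame,
# with the gap between them closing over time.
frames = [((x, 0.0), (x + 12 - 0.2 * x, 0.0)) for x in range(50)]

SAFE_DISTANCE = 5.0   # assumed spatial safety threshold (metres)
MISS_RATE = 0.10      # assumed identification (false negative) rate
POS_NOISE = 0.5       # assumed tracking noise, std dev in metres

def violation(worker, equipment):
    dx, dy = worker[0] - equipment[0], worker[1] - equipment[1]
    return (dx * dx + dy * dy) ** 0.5 < SAFE_DISTANCE

true_hits, detected_hits = 0, 0
for worker, equipment in frames:
    if violation(worker, equipment):
        true_hits += 1
        if random.random() > MISS_RATE:  # object may not be identified
            noisy = (worker[0] + random.gauss(0, POS_NOISE), worker[1])
            if violation(noisy, equipment):  # tracking error may hide breach
                detected_hits += 1

print(f"{detected_hits}/{true_hits} true violations detected")
```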

Relevance:

100.00%

Abstract:

Opinion Mining is becoming increasingly important, especially for analysing and forecasting customer behaviour for business purposes. Making the right decisions about new products or services based on data about customers' characteristics translates into profit for an organisation. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers' characteristics and their comments about products (or services). The key step in achieving this objective is to transfer the comments (opinions) into a fact table with several dimensions, such as customers, products, time, and location. This research presents a comprehensive way to calculate customers' orientation for all possible product attributes. A case study is also presented to show the advantages of using OLAP and data cubes to analyse customers' opinions.
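The multidimensional model described here is essentially a star schema: each comment becomes one fact row keyed by customer, product, time, and location dimensions, carrying an opinion score as the measure. A minimal sketch using pandas, with illustrative column names and made-up polarity scores:

```python
import pandas as pd

# Each row is one comment mapped to dimension keys plus an opinion
# measure (e.g. polarity in [-1, 1] from some sentiment tool).
facts = pd.DataFrame([
    ("c1", "phoneX", "battery", "2024-01", "Brisbane",  0.8),
    ("c2", "phoneX", "battery", "2024-01", "Sydney",   -0.4),
    ("c2", "phoneX", "screen",  "2024-02", "Sydney",    0.6),
    ("c3", "phoneY", "battery", "2024-02", "Brisbane", -0.7),
], columns=["customer", "product", "attribute", "month", "city", "polarity"])

# OLAP-style aggregation: a cube slice of mean orientation per
# product attribute per month, analogous to what data cubes provide.
cube = facts.pivot_table(values="polarity",
                         index=["product", "attribute"],
                         columns="month", aggfunc="mean")
print(cube)
```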

Relevance:

100.00%

Abstract:

As e-commerce becomes more and more popular, the number of customer reviews that a product receives grows rapidly. To enhance customer satisfaction and shopping experiences, it has become important to analyse customer reviews and extract opinions on the products they buy. Thus, Opinion Mining is becoming increasingly important, especially for analysing and forecasting customer behaviour for business purposes. Making the right decisions about new products or services based on data about customers' characteristics translates into profit for an organisation. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers' characteristics and their comments about products (or services). The key step in achieving this objective is to transfer the comments (opinions) into a fact table with several dimensions, such as customers, products, time, and location. This research presents a comprehensive way to calculate customers' orientation for all possible product attributes.
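"Orientation" here is typically an aggregate of per-comment sentiment toward each product attribute. A minimal sketch of one way to compute it; the tiny lexicon and nearest-opinion-word heuristic are toy assumptions standing in for whatever NLP pipeline the architecture actually uses:

```python
from collections import defaultdict

# Toy sentiment lexicon; a real pipeline would use a proper model.
LEXICON = {"great": 1, "good": 1, "poor": -1, "terrible": -1}
ATTRIBUTES = {"battery", "screen", "price"}

reviews = [
    "great battery but poor screen",
    "terrible battery",
    "good price and good screen",
]

scores = defaultdict(list)
for review in reviews:
    words = review.split()
    for i, word in enumerate(words):
        if word in ATTRIBUTES:
            # Naive assumption: the nearest preceding opinion word
            # targets this attribute.
            for prev in reversed(words[:i]):
                if prev in LEXICON:
                    scores[word].append(LEXICON[prev])
                    break

# Orientation per attribute: mean of the opinion scores targeting it.
for attribute, vals in scores.items():
    print(attribute, sum(vals) / len(vals))
```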

Relevance:

100.00%

Abstract:

This research proposes a multi-dimensional model for Opinion Mining, which integrates customers' characteristics and their opinions about products (or services). Customer opinions are valuable for companies seeking to deliver the right products or services to their customers. This research presents a comprehensive framework for evaluating opinion orientation based on a product's hierarchy of attributes. It also provides an alternative way to obtain opinion summaries for different groups of customers and different categories of products.
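Evaluating orientation over a hierarchy of attributes amounts to rolling leaf-level opinion scores up through the attribute tree. A minimal sketch, with a made-up two-level hierarchy and scores (the framework's actual aggregation scheme may differ):

```python
# Hypothetical attribute hierarchy: parent -> child attributes.
HIERARCHY = {
    "hardware": ["battery", "screen"],
    "value":    ["price", "warranty"],
}

# Mean leaf-level orientations, e.g. produced by the fact-table step.
leaf_orientation = {"battery": -0.2, "screen": 0.6,
                    "price": 0.9, "warranty": 0.1}

def rollup(parent):
    """Orientation of a parent attribute as the mean of its children."""
    children = HIERARCHY[parent]
    return sum(leaf_orientation[c] for c in children) / len(children)

for parent in HIERARCHY:
    print(parent, rollup(parent))
```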

Relevance:

70.00%

Abstract:

This research is aimed at addressing problems in the field of asset management relating to risk analysis and decision making based on data from a Supervisory Control and Data Acquisition (SCADA) system. Determining risk likelihood in risk analysis is difficult, especially when historical information is unreliable; this is compounded in SCADA data analysis by the problem of nested data. A further problem is providing beneficial information from a SCADA system to a managerial-level information system (e.g. Enterprise Resource Planning, ERP). A Hierarchical Model is developed to address these problems. The model comprises three analyses: Hierarchical Analysis, Failure Mode and Effect Analysis, and Interdependence Analysis. The significant contributions of the model include: (a) a new risk analysis model, the Interdependence Risk Analysis Model, which does not rely on the existence of historical information because it utilises interdependence relationships to determine risk likelihood; (b) an improvement to SCADA data analysis that addresses the nested data problem through the Hierarchical Analysis; and (c) a framework for providing beneficial information from SCADA systems to ERP systems. A case study of a Water Treatment Plant is used to validate the model.
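The distinguishing idea is to derive a component's risk likelihood from how failures propagate through its interdependence relationships rather than from historical failure frequencies. A minimal sketch of that idea, with a hypothetical dependency graph and a simple OR-style propagation rule (the thesis's actual formulation may differ):

```python
from functools import lru_cache

# Hypothetical plant components and their "depends on" relationships.
DEPENDS_ON = {
    "clarifier":   ["raw_water_pump"],
    "filter_bank": ["clarifier", "scada_plc"],
    "outlet_pump": ["filter_bank"],
}

# Assumed intrinsic failure likelihoods for the source components.
INTRINSIC = {"raw_water_pump": 0.05, "scada_plc": 0.02}

@lru_cache(maxsize=None)
def likelihood(component):
    """Risk likelihood propagated through interdependence relationships.

    A component fails if anything it depends on fails:
    P(fail) = 1 - prod(1 - P(dep fails)) over its dependencies.
    """
    if component in INTRINSIC:
        return INTRINSIC[component]
    p_ok = 1.0
    for dep in DEPENDS_ON[component]:
        p_ok *= 1.0 - likelihood(dep)
    return 1.0 - p_ok

print(likelihood("outlet_pump"))
```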

Relevance:

70.00%

Abstract:

This work presents an extended Joint Factor Analysis (JFA) model that includes explicit modelling of unwanted within-session variability. The goals of the proposed extended JFA model are to improve verification performance on short utterances by compensating for the effects of limited or imbalanced phonetic coverage, and to produce a flexible JFA model that is effective over a wide range of utterance lengths without adjusting model parameters, such as retraining session subspaces. Experimental results on the 2006 NIST SRE corpus demonstrate the flexibility of the proposed model, providing competitive results over a wide range of utterance lengths without retraining and yielding modest improvements over the current state of the art in a number of conditions.
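For context, standard JFA decomposes a speaker- and session-dependent GMM mean supervector as M = m + Vy + Ux + Dz; the extension described adds an explicit within-session term to this composition. A minimal numerical sketch of the standard decomposition only, with arbitrary placeholder dimensions and random matrices (not the paper's extended model):

```python
import numpy as np

rng = np.random.default_rng(0)
D, RV, RU = 512, 10, 5  # supervector dim, speaker/session subspace ranks

m = rng.normal(size=D)           # UBM mean supervector
V = rng.normal(size=(D, RV))     # eigenvoice (speaker) subspace
U = rng.normal(size=(D, RU))     # eigenchannel (session) subspace
d = np.abs(rng.normal(size=D))   # diagonal residual term

y = rng.normal(size=RV)          # speaker factors
x = rng.normal(size=RU)          # session factors
z = rng.normal(size=D)           # residual speaker factors

# Standard JFA composition: M = m + V y + U x + D z.
M = m + V @ y + U @ x + d * z
print(M.shape)
```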

Relevance:

70.00%

Abstract:

This paper presents an extended study on the implementation of support vector machine (SVM) based speaker verification in systems that employ continuous progressive model adaptation using the weight-based factor analysis model. The weight-based factor analysis model compensates for session variations in unsupervised scenarios by incorporating trial confidence measures into the general statistics used in the inter-session variability modelling process. Employing weight-based factor analysis in Gaussian mixture models (GMMs) was recently found to provide significant performance gains in unsupervised classification. Further improvements in performance were found through the integration of SVM-based classification into the system by means of GMM supervectors. This study focuses particularly on the way a client is represented in the SVM kernel space using single and multiple target supervectors. Experimental results indicate that training client SVMs on a single target supervector maximises performance while exhibiting a certain robustness to the inclusion of impostor training data in the model. Furthermore, the inclusion of low-scoring target trials in the adaptation process is investigated; these trials were found to significantly aid performance.
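Representing a client with a single target supervector means training a two-class SVM with one positive example (the client's GMM supervector) against many impostor supervectors. A minimal sketch with scikit-learn and random stand-in supervectors; the linear kernel and all the data are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
DIM = 256  # stand-in for a GMM mean supervector dimension

# One target supervector (positive class) vs. a background of
# impostor supervectors (negative class).
target = rng.normal(loc=0.5, size=(1, DIM))
impostors = rng.normal(size=(200, DIM))

X = np.vstack([target, impostors])
y = np.array([1] + [0] * len(impostors))

# A linear kernel is a common choice for supervector SVMs.
client_svm = SVC(kernel="linear").fit(X, y)

# Verification score for a test supervector: signed margin distance.
test = rng.normal(loc=0.5, size=(1, DIM))
print(client_svm.decision_function(test))
```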

Relevance:

70.00%

Abstract:

This paper focuses on the development of an interactive test engine that uses Rasch analysis of item responses for question selection and the reporting of results. Rasch analysis is used to determine student ability and question difficulty. The model is widely used in the preparation of paper-based tests and has been the subject of particular use and development at the Australian Council for Educational Research (ACER). This paper presents an overview of an interactive implementation of the Rasch analysis model in HyperCard, in which student ability estimates are generated 'on the fly' and question difficulty values are updated from time to time. The student ability estimates are used to determine question selection and form the basis of the scoring and reporting schemes.
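Under the Rasch model, the probability that a student with ability theta answers an item of difficulty b correctly is P = 1 / (1 + exp(-(theta - b))); an "on the fly" ability estimate can be obtained by maximum likelihood over the responses so far. A minimal sketch using Newton-Raphson, with made-up item difficulties (the paper's HyperCard implementation is not reproduced here):

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(difficulties, responses, iters=20):
    """Maximum-likelihood ability estimate via Newton-Raphson."""
    theta = 0.0
    for _ in range(iters):
        ps = [p_correct(theta, b) for b in difficulties]
        grad = sum(r - p for r, p in zip(responses, ps))
        hess = -sum(p * (1 - p) for p in ps)
        theta -= grad / hess
    return theta

# Hypothetical: five items of varying difficulty; the easiest
# three were answered correctly.
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
responses = [1, 1, 1, 0, 0]
print(round(estimate_ability(difficulties, responses), 2))
```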

Relevance:

70.00%

Abstract:

Information mismatch and overload are two fundamental issues affecting the effectiveness of information filtering systems. Although both term-based and pattern-based approaches have been proposed to address these issues, neither approach alone provides a satisfactory basis for determining relevant information. This paper presents a novel two-stage decision model to solve these issues. The first stage is a novel rough analysis model that addresses the overload problem. The second stage is a pattern taxonomy mining model that addresses the mismatch problem. Experimental results on RCV1 and TREC filtering topics show that the proposed model significantly outperforms state-of-the-art filtering systems.
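The two stages play complementary roles: the first cheaply prunes the incoming stream to control overload, and the second applies a finer-grained pattern match to reduce mismatch. A minimal sketch of that control flow only; the scoring functions below are trivial stand-ins, not the paper's rough analysis or pattern taxonomy models:

```python
# Stand-in stage 1: coarse term-overlap score used to prune the stream.
def rough_score(doc_terms, topic_terms):
    return len(doc_terms & topic_terms) / len(topic_terms)

# Stand-in stage 2: finer score based on discovered term patterns
# (fixed phrases here; the paper mines a pattern taxonomy).
def pattern_score(doc_text, patterns):
    return sum(1 for p in patterns if p in doc_text)

TOPIC_TERMS = {"model", "analysis", "filtering"}
PATTERNS = ["analysis model", "information filtering"]
STAGE1_THRESHOLD = 0.3

def is_relevant(doc_text):
    terms = set(doc_text.lower().split())
    if rough_score(terms, TOPIC_TERMS) < STAGE1_THRESHOLD:
        return False                     # stage 1: drop to fight overload
    return pattern_score(doc_text.lower(), PATTERNS) > 0  # stage 2

print(is_relevant("A new analysis model for information filtering systems"))
```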

Relevance:

70.00%

Abstract:

Background: More than half of all cerebral ischemic events result from the rupture of extracranial plaques. The clinical determination of carotid plaque vulnerability is currently based solely on luminal stenosis; however, it has been increasingly suggested that plaque morphology and biomechanical stress should also be considered. We used finite element analysis based on in vivo magnetic resonance imaging (MRI) to simulate the stress distributions within plaques of asymptomatic and symptomatic individuals. Methods: Thirty nonconsecutive subjects (15 symptomatic and 15 asymptomatic) underwent high-resolution multisequence in vivo MRI of the carotid bifurcation. Stress analysis was performed based on the geometry derived from in vivo MRI of the carotid artery at the point of maximal stenosis. The finite element analysis model treated plaque components as hyperelastic. The peak stresses within the plaques of symptomatic and asymptomatic individuals were compared. Results: High stress concentrations were found at the shoulder regions of symptomatic plaques, and the maximal stresses predicted in this group were significantly higher than those in the asymptomatic group (508.2 ± 193.1 vs 269.6 ± 107.9 kPa; P = .004). Conclusions: Maximal predicted plaque stresses in symptomatic patients were higher than those predicted in asymptomatic patients by finite element analysis, suggesting that plaques with higher stresses may be more prone to becoming symptomatic and rupturing. If further validated by large-scale longitudinal studies, biomechanical stress analysis based on high-resolution in vivo MRI could potentially act as a useful tool for risk assessment of carotid atheroma. It may help identify patients with asymptomatic carotid atheroma, or with mild-to-moderate symptomatic stenoses, who are at greatest risk of developing symptoms yet fall outside current clinical guidelines for intervention.

Relevance:

60.00%

Abstract:

Decentralized and regional load-frequency control of power systems operating in normal and near-normal conditions has been well studied, and several analysis/synthesis approaches have been developed over the last few decades. However, in contingency and off-normal conditions, the existing emergency control plans, such as under-frequency load shedding, are usually applied in a centralized structure using a different analysis model. This paper discusses the feasibility of frequency-based emergency control schemes that use tie-line measurements and local information available within a control area. The conventional load-frequency control model is generalized by considering the dynamics of emergency control/protection schemes, and an analytic approach to analyzing the regional frequency response under normal and emergency conditions is presented.
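The generalisation described couples the usual load-frequency dynamics with the switching behaviour of emergency schemes. A minimal sketch of that coupling: a single-area, swing-equation-style simulation in which an under-frequency load-shedding relay trips a block of load at a threshold. All constants are illustrative assumptions, not values from the paper:

```python
# One-area frequency response with a simple UFLS relay.
H, D = 5.0, 1.0        # inertia constant (s), load damping (pu)
F_NOM = 50.0           # nominal frequency (Hz)
UFLS_TRIP = 49.0       # assumed relay threshold (Hz)
SHED_PU = 0.05         # assumed load block shed when relay trips (pu)

dt, t_end = 0.01, 10.0
delta_p = -0.10        # step generation deficit (pu)
x = 0.0                # per-unit frequency deviation
shed = False

t = 0.0
while t < t_end:
    # Swing-type dynamics: 2H dx/dt = delta_p (+ shed relief) - D x.
    power = delta_p + (SHED_PU if shed else 0.0)
    x += dt * (power - D * x) / (2 * H)
    freq = F_NOM * (1 + x)
    if not shed and freq <= UFLS_TRIP:
        shed = True    # emergency scheme acts: shed a load block
        print(f"UFLS tripped at t={t:.2f}s, f={freq:.2f} Hz")
    t += dt

print(f"final frequency: {F_NOM * (1 + x):.2f} Hz")
```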

Relevance:

60.00%

Abstract:

The current argument is that there are no indigenous people in Africa because all Africans are indigenous. The opposing view holds that only those Africans untouched by colonialism, who retain traditional cultures commensurate with attachment to the land or a distinguishable traditional lifestyle, are indigenous. This paper argues in favor of the latter. For example, modernism, materialism, ex-colonial socio-cultural impacts (such as the remnants of European legal structures and cultural scarring), globalization, and technology are international social homogenizers. People who live within this telos and do not participate in a distinct traditional culture that has been attached to the land for centuries are not indigenous. It is argued that this cultural divergence between the modern and the traditional is the major identifying point for settling the indigenous/non-indigenous African debate. Finally, the paper looks at inclusive development, how it helps to distinguish African indigeneity, and provides a new political analysis model for quantifying inclusivity.

Relevance:

60.00%

Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, enabling recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted, revealing a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, due to the similarities in how they achieve their objectives. The second theme saw the proposal of a novel model for session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all test utterances encountered during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. The selection is performed on a per-observation basis to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
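The dataset-selection idea can be sketched as: train an SVM against the full candidate impostor set, then keep the impostor examples that the training process itself marks as informative, i.e. the support vectors. A minimal sketch with scikit-learn and synthetic data; this illustrates the general support-vector-based selection idea, not the thesis's exact per-observation procedure:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
DIM = 64

# Synthetic stand-ins: one client supervector vs. a large candidate
# impostor set drawn from a diverse background.
client = rng.normal(loc=0.4, size=(1, DIM))
candidates = rng.normal(size=(1000, DIM))

X = np.vstack([client, candidates])
y = np.array([1] + [0] * len(candidates))
svm = SVC(kernel="linear").fit(X, y)

# Keep the impostor examples that became support vectors: these are
# the observations that actually shaped the decision boundary.
support_idx = svm.support_[y[svm.support_] == 0]
refined_impostors = X[support_idx]
print(f"refined background: {len(refined_impostors)} of {len(candidates)}")
```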