905 results for Advanced Model Approach (AMA)
Abstract:
In this paper, we examine the use of a Kalman filter to aid in the mission planning process for autonomous gliders. Given a set of waypoints defining the planned mission and a prediction of the ocean currents from a regional ocean model, we present an approach to determine the best constant time interval at which the glider should surface in order to maintain a prescribed tracking error while minimizing time spent on the ocean surface. We assume basic parameters for the execution of a given mission and provide the results of the Kalman filter mission planning approach. These results are compared with previous executions of the given mission scenario.
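As a rough illustration of the interval-selection idea, here is a minimal sketch, not the authors' implementation: it assumes a one-dimensional random-walk model of along-track error whose variance grows at a hypothetical rate q between surfacings, a GPS fix that resets the variance to r at each surfacing, and a search for the largest constant interval whose predicted 1-sigma error stays under the prescribed bound.

```python
import numpy as np

def max_surfacing_interval(q, r, error_bound, dt=60.0, horizon=6 * 3600):
    """Return the largest constant surfacing interval (s) whose predicted
    1-sigma along-track error stays under error_bound (m).

    q: process-noise variance rate from the current-prediction error (m^2/s)
    r: position variance right after a GPS fix at the surface (m^2)
    (all parameter values below are hypothetical)
    """
    best = None
    for interval in np.arange(dt, horizon + dt, dt):
        p, t, feasible = r, 0.0, True
        while t < interval:
            p += q * dt                 # open-loop prediction: P <- P + Q*dt
            t += dt
            if np.sqrt(p) > error_bound:
                feasible = False
                break
        if not feasible:
            break                       # variance grows monotonically, so
        best = interval                 # every longer interval fails too
    return best

# hypothetical mission: 100 m tracking bound, 5 m GPS fix error
print(max_surfacing_interval(q=0.05, r=25.0, error_bound=100.0))
```

Because the predicted variance grows monotonically between fixes, the search can stop at the first infeasible interval; under these assumptions a closed form, interval = dt * floor((bound^2 - r) / (q * dt)), would give the same answer without the loop.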
Abstract:
Developing economies accommodate more than three quarters of the world's population, which makes understanding their growth and well-being critically important. Information technology (IT) is one resource that has had a profound effect in shaping the global economy, and it is also an important resource for driving growth and development in developing economies. Investment in developing economies, however, has focused on the exploitation of labor and natural resources. Unlike in developed economies, focus on IT investment to improve the efficiency and effectiveness of business processes in developing economies has been sparse, and the mechanisms for deriving better IT-related business value are not well understood. This study develops a complementarities-based business value model for developing economies and tests the relationships between IT investments, IT-related complementarities, and business process performance. It also considers the relationship between business process performance and firm-level performance. The results suggest that coordinated investment in IT and IT-related complementarities relates favorably to business process performance, and that improvements in process-level performance lead to improvements in firm-level performance. The results also suggest that IT-related complementarities are not only a source of business value on their own, but also enhance the ability of IT resources to contribute to business process performance. This study demonstrates that a coordinated investment approach is required in developing economies: with this approach, their IT resources and IT-related complementarities would significantly help them improve their business processes and, eventually, their firm-level performance.
Abstract:
Many academic researchers have studied the selection of the design-build (DB) delivery method; however, there are few studies on the selection of DB operational variations, which poses challenges to many clients. The selection of a DB operational variation is a multi-criteria decision-making process that requires clients to objectively evaluate the performance of each DB operational variation against the selection criteria, and this evaluation process is often characterized by subjectivity and uncertainty. To address this deficiency, the current investigation aimed to establish a fuzzy multi-criteria decision-making (FMCDM) model for selecting the most suitable DB operational variation. A three-round Delphi questionnaire survey was conducted to identify the selection criteria and their relative importance. A fuzzy set theory approach, namely the modified horizontal approach with the bisector error method, was applied to establish the fuzzy membership functions, which enable clients to perform quantitative calculations on the performance of each DB operational variation. The FMCDM model was developed using the weighted mean method to aggregate the overall performance of DB operational variations with regard to the selection criteria. The proposed FMCDM model enables clients to perform quantitative calculations in a fuzzy decision-making environment and provides a useful tool for coping with different project attributes.
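As a rough illustration of the aggregation step, the sketch below applies the weighted mean method to triangular fuzzy ratings and defuzzifies each aggregate by its centroid. The variation names, ratings, and criteria weights are invented for the example, not taken from the paper's Delphi survey.

```python
# Triangular fuzzy numbers (l, m, u) rating each DB operational variation
# against three hypothetical selection criteria; weights are illustrative.
ratings = {
    "variation-A": [(5, 7, 9), (3, 5, 7), (6, 8, 9)],
    "variation-B": [(4, 6, 8), (5, 7, 9), (3, 5, 7)],
}
weights = [0.5, 0.3, 0.2]

def weighted_mean(tfns, w):
    """Aggregate triangular fuzzy ratings with the weighted mean method."""
    l = sum(wi * t[0] for wi, t in zip(w, tfns))
    m = sum(wi * t[1] for wi, t in zip(w, tfns))
    u = sum(wi * t[2] for wi, t in zip(w, tfns))
    return (l, m, u)

def centroid(tfn):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(tfn) / 3.0

for name, tfns in ratings.items():
    agg = weighted_mean(tfns, weights)
    print(name, agg, round(centroid(agg), 2))   # rank by crisp score
```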
Abstract:
Enterprise architecture (EA) management has become an intensively discussed approach to managing enterprise transformations. While there is strong interest in EA frameworks and EA modeling, a lack of knowledge remains about the theoretical foundation of EA benefits. In this paper, we identify EA success factors and EA benefits through a literature review and integrate these findings with the DeLone & McLean IS success model to propose a theoretical model explaining the realization of EA benefits. In addition, we conducted semi-structured interviews with EA experts for a preliminary validation and further exploration of the model. We see this model as a first step toward gaining insight into, and starting a discussion on, the theory of EA benefit realization. In future research, we plan to empirically validate the proposed model.
Abstract:
Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices, which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and that may contain regions with no data; such lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios, enabling us to implement a practical and efficient non-simulation-based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
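The forward recursion underlying the approximation can be written down compactly for a small lattice. The sketch below computes the exact log normalizing constant of a binary autologistic model on a regular lattice by recursing over whole-row configurations; the reduced dependence approximation then stitches such recursions on sublattices together for lattices too large to handle exactly. Parameter names and values are illustrative only, and this is not the authors' C++ implementation.

```python
import itertools
import numpy as np

def log_norm_constant(alpha, beta, rows, cols):
    """Exact log normalizing constant of a binary (0/1) autologistic model
    on a rows x cols lattice with 4-neighbour interactions, via forward
    recursion over whole-row configurations. Feasible when 2**cols is small.
    """
    configs = list(itertools.product([0, 1], repeat=cols))

    def within(r):
        # singleton terms plus horizontal interactions inside one row
        return alpha * sum(r) + beta * sum(r[j] * r[j + 1] for j in range(cols - 1))

    def between(r1, r2):
        # vertical interactions linking two consecutive rows
        return beta * sum(a * b for a, b in zip(r1, r2))

    # forward pass: f[k] = log-sum over all completed rows ending in configs[k]
    f = np.array([float(within(r)) for r in configs])
    for _ in range(rows - 1):
        g = np.empty_like(f)
        for k, r2 in enumerate(configs):
            terms = f + np.array([between(r1, r2) for r1 in configs])
            g[k] = within(r2) + np.logaddexp.reduce(terms)
        f = g
    return np.logaddexp.reduce(f)

print(log_norm_constant(alpha=0.0, beta=0.4, rows=5, cols=5))
```

For a rows x cols lattice this costs O(rows * 4^cols) operations rather than the O(2^(rows*cols)) of brute-force enumeration, which is why the recursion is run on narrow sublattices and then combined.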
Abstract:
Background: Trauma resulting from traffic crashes poses a significant problem in highly motorised countries. Over a million people worldwide are killed annually, and 50 million are critically injured, as a result of traffic collisions. In Australia, road crashes cost an average of $17 billion annually in personal loss of income and quality of life, organisational losses in productivity and workplace quality, and health care costs. Driver aggression has been identified as a key factor contributing to crashes, and many motorists report experiencing mild forms of aggression (e.g., rude gestures, horn honking). However, despite this concern, driver aggression has received relatively little attention in empirical research, and existing research has been hampered by a number of methodological and conceptual shortcomings. Specifically, there has been substantial disagreement regarding what constitutes aggressive driving, and a failure to examine both the situational factors and the emotional and cognitive processes underlying driver aggression. To enhance current understanding of aggressive driving, a model of driver aggression that highlights the cognitive and emotional processes at play in aggressive driving incidents is proposed. Aims: The research aims to improve current understanding of the complex nature of driver aggression by testing and refining a model of aggressive driving that incorporates the person-related and situational factors and the cognitive and emotional appraisal processes fundamental to driver aggression. In doing so, the research will help to provide a clear definition of what constitutes aggressive driving, to identify on-road incidents that trigger driver aggression, and to identify the emotional and cognitive appraisal processes that underlie driver aggression. Methods: The research involves three studies. First, to contextualise the model and explore the cognitive and emotional aspects of driver aggression, a diary-based study using self-reports of aggressive driving events will be conducted with a general population of drivers. These data will be supplemented by in-depth follow-up interviews with a sub-sample of participants. Second, to test the generalisability of the model, a large sample of drivers will be asked to respond to video-based scenarios depicting driving contexts derived from incidents identified in Study 1 as inciting aggression. Finally, to further operationalise and test the model, an advanced driving simulator will be used with a sample of drivers, who will be exposed to various driving scenarios expected to trigger negative emotional responses. Results: Work on the project has commenced, and progress on the first study will be reported.
Abstract:
When compared with similar joint arthroplasties, the prognosis of Total Ankle Replacement (TAR) is not satisfactory, although it shows promising results post-surgery. To date, most models do not provide the full anatomical functionality and biomechanical range of motion of the healthy ankle joint. This has sparked additional research and evaluation of clinical outcomes in order to enhance ankle prosthesis design. However, the limited biomechanical data that exist in the literature are based upon two-dimensional, discrete and outdated techniques [1] and may be inaccurate. Since accurate force estimations are crucial to prosthesis design, a paper based on a new biomechanical modeling approach, providing three-dimensional forces acting on the ankle joint and the surrounding tissues, was published recently, but the identified forces were suspected of being under-estimated. The present paper reports an attempt to improve the accuracy of the analysis by means of novel methods for kinematic processing of gait data, provided in release 4.1 of the AnyBody Modeling System (AnyBody Technology, Aalborg, Denmark). Results from the new method are shown and remaining issues are discussed.
Abstract:
In information retrieval (IR) research, increasing focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we combine the AM with Association Rule (AR) mining. The AM not only treats the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimate of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
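The chunking and rule-mining steps might look roughly like the sketch below: feedback documents are segmented into overlapping sliding windows, and {query-term subset} -> term rules are mined by support and confidence. This is a simplified stand-in for the paper's AR step (which seeds the aspect model's estimates); the window sizes, thresholds, and toy documents are all hypothetical.

```python
from itertools import combinations
from collections import Counter

def chunks(tokens, size=12, step=6):
    """Segment a feedback document into overlapping sliding-window chunks."""
    return [set(tokens[i:i + size])
            for i in range(0, max(1, len(tokens) - size + 1), step)]

def term_associations(docs, query, min_support=2, min_conf=0.3):
    """Mine rules {query-term subset} -> term from feedback-document chunks."""
    windows = [w for d in docs for w in chunks(d.split())]
    rules = {}
    for r in (1, 2):                       # subsets of the decomposed query
        for qsub in combinations(query, r):
            hits = [w for w in windows if set(qsub) <= w]
            if len(hits) < min_support:
                continue
            counts = Counter(t for w in hits for t in w if t not in query)
            for term, n in counts.items():
                conf = n / len(hits)       # confidence of qsub -> term
                if conf >= min_conf:
                    rules[(qsub, term)] = conf
    return rules

docs = ["the language model assigns probability to query terms in context",
        "feedback documents refine the language model for the query"]
print(term_associations(docs, query=("language", "model")))
```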
Abstract:
It is a big challenge to acquire correct user profiles for personalized text classification, since users may be unsure when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases, due to the term independence assumption and the uncertainties associated with them. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains the specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. The experimental results, obtained on Reuters Corpus Volume 1 and TREC topics, show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
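The evidence-combination step can be illustrated with Dempster's rule on the two-element frame {relevant, non-relevant}. The sketch below is a generic implementation of the rule, not the paper's DS model; the mass assignments are invented for the example.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination on the frame {R, N}; the mass on 'RN'
    represents uncertainty assigned to the whole frame."""
    frame = ["R", "N", "RN"]
    meet = {("R", "R"): "R", ("R", "RN"): "R", ("RN", "R"): "R",
            ("N", "N"): "N", ("N", "RN"): "N", ("RN", "N"): "N",
            ("RN", "RN"): "RN"}
    combined = {"R": 0.0, "N": 0.0, "RN": 0.0}
    conflict = 0.0
    for a in frame:
        for b in frame:
            mass = m1[a] * m2[b]
            if (a, b) in meet:
                combined[meet[(a, b)]] += mass
            else:
                conflict += mass          # R meets N: contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# two hypothetical pieces of mined evidence about a document's relevance
print(dempster_combine({"R": 0.6, "N": 0.1, "RN": 0.3},
                       {"R": 0.5, "N": 0.2, "RN": 0.3}))
```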
Abstract:
This paper presents an innovative prognostics model based on health state probability estimation embedded in a closed-loop diagnostic and prognostic system. To select an appropriate classifier for health state probability estimation in the proposed prognostic model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to progressive fault levels of three faults in an HP-LNG pump. Two sets of impeller-rubbing data were then employed to predict the pump's remnant life, based on the estimation of discrete health state probabilities using the strong classification capability of the SVM and a feature selection technique. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
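A minimal sketch of the health-state probability step follows, using a probabilistic SVM (Platt scaling via scikit-learn) on synthetic features. The four health states, the remaining-life fractions, and all data are invented stand-ins for the HP-LNG pump features described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: vibration features per observation; y: discrete health states 0 (new)
# to 3 (near failure), labelled from progressive fault severity (synthetic).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=s, scale=0.5, size=(40, 4)) for s in range(4)])
y = np.repeat(np.arange(4), 40)

model = make_pipeline(StandardScaler(), SVC(probability=True, C=10.0))
model.fit(X, y)

probs = model.predict_proba(X[:1])[0]        # P(health state | features)
state_life = np.array([1.0, 0.7, 0.4, 0.1])  # hypothetical life fraction per state
print("health-state probabilities:", np.round(probs, 3))
print("expected remaining-life fraction:", float(probs @ state_life))
```

The expected remaining-life fraction here is just a probability-weighted average over the discrete states; mapping it to remnant time would require a degradation model, which the sketch does not attempt.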
Abstract:
Society faces an unprecedented global education challenge: to equip professionals with the knowledge and skills to address emerging 21st Century challenges, spanning climate change mitigation through to adaptation measures to deal with issues such as temperature and sea level rise and diminishing fresh water and fossil fuel reserves. This paper discusses the potential for systemic and synergistic integration of curriculum with campus operations to accelerate curriculum renewal towards education for sustainable development (ESD), drawing on the authors' experiences within engineering education. The paper begins by providing a brief overview of the need for timely curriculum renewal towards ESD in tertiary education. It then highlights some examples of academic barriers that need to be overcome for integration efforts to be successful, and opportunities for promoting the benefits of such integration. The paper concludes by discussing the rationale for planning green campus initiatives within a larger system of curriculum renewal considerations, including awareness raising and developing a common understanding, identifying and mapping graduate attributes, curriculum auditing, content development and strategic renewal, and bridging and outreach.
Abstract:
Digital human modelling (DHM) has today matured from research into industrial application. In the automotive domain, DHM has become a commonly used tool in virtual prototyping and human-centred product design. While the current generation of DHM supports the ergonomic evaluation of new vehicle designs during early design stages, by modelling anthropometry, posture and motion or by predicting discomfort, the future of DHM will be dominated by CAE methods, realistic 3D design, and musculoskeletal and soft tissue modelling down to the micro-scale of molecular activity within single muscle fibres. As a driving force for DHM development, the automotive industry has traditionally used human models in the manufacturing sector (production ergonomics, e.g. assembly) and the engineering sector (product ergonomics, e.g. safety, packaging). In product ergonomics applications, DHMs share many common characteristics, creating a unique subset of DHM. These models are optimised for a seated posture, interface to a vehicle seat through standardised methods, and provide linkages to vehicle controls. As tools, they need to interface with other analytic instruments and integrate into complex CAD/CAE environments. Important aspects of current DHM research are functional analysis, model integration and task simulation. Digital (virtual, analytic) prototypes or digital mock-ups (DMU) provide expanded support for testing and verification and consider task-dependent performance and motion. Beyond rigid-body mechanics, soft tissue modelling is evolving to become standard in future DHM. When addressing advanced issues beyond the physical domain of anthropometry and biomechanics, modelling of human behaviours and skills is also integrated into DHM. The latest developments include a more comprehensive approach that implements perceptual, cognitive and performance models, representing human behaviour on a non-physiological level. Through the integration of algorithms from the artificial intelligence domain, a vision of the virtual human is emerging.
Abstract:
Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high-resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling has been proposed for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low-resolution images, both of which can be further employed to facilitate road information generation in high-resolution images. The histogram thresholding method is then chosen to classify road details in high-resolution images, where color space transformation is used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high-resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR. Object-oriented image analysis methods are employed for feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. Then the support vector machine (SVM) algorithm is applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is applied to pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland, while the road extraction algorithm for urban regions is tested using the Bundaberg datasets, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information for both datasets has been carried out. The experiments and evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, with false alarm rates for road surfaces and lane markings below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
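The marking-extraction and masking steps might be sketched as below: Otsu thresholding of the filter-enhanced image, masked by a LiDAR-derived road surface and a ground-level test on the nDSM. This is a simplified blend of the rural and urban steps described above, not the thesis implementation; the function name, height cutoff, and toy inputs are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu

def extract_markings(filtered_image, road_mask, ndsm, height_cutoff=0.5):
    """Threshold the Gabor/Gaussian-filtered image with Otsu's method and
    keep only responses that lie on the ground-level road surface."""
    t = threshold_otsu(filtered_image)
    markings = filtered_image > t
    ground = ndsm < height_cutoff          # drop buildings, vehicles, trees
    return markings & road_mask & ground

# toy inputs standing in for the Gympie / Bundaberg data
img = np.random.rand(100, 100)             # filter response
mask = np.ones((100, 100), dtype=bool)     # road surface from LiDAR intensity
ndsm = np.zeros((100, 100))                # heights above terrain
print(extract_markings(img, mask, ndsm).sum(), "marking pixels kept")
```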
Abstract:
This paper seeks to identify and quantify sources of the lagging productivity in Singapore’s retail sector, as reported in the Economic Strategies Committee 2010 report. A two-stage analysis is adopted. In the first stage, the Malmquist productivity index is employed, which provides measures of productivity change, technological change and efficiency change. In the second stage, technical efficiency estimates are regressed against explanatory variables using a truncated regression model. Technical efficiency was attributed to the quality of workers, while product assortment and competition negatively impacted efficiency.
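For reference, the first-stage index is conventionally the output-oriented Malmquist index in its geometric-mean form, which decomposes into efficiency change and technical change; here D^t denotes the period-t distance function and (x^t, y^t) the input-output bundle. The notation is assumed, as the abstract does not spell out the formula.

```latex
M^{t,t+1} =
\left[
\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}\cdot
\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2}
= \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\text{efficiency change}}
\cdot
\underbrace{\left[
\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\cdot
\frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2}}_{\text{technical change}}
```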
Abstract:
As computers approach the physical limits of the information storable in memory, new methods will be needed to further improve information storage and retrieval. We propose a quantum-inspired, vector-based approach, which offers a contextually dependent mapping from subsymbolic to symbolic representations of information. If implemented computationally, this approach would provide an exceptionally high density of information storage without the traditionally required physical increase in storage capacity. The approach is inspired by the structure of human memory and incorporates elements of Gärdenfors’ Conceptual Space approach and Humphreys et al.’s matrix model of memory.
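In the spirit of the matrix model cited above, context-dependent storage can be sketched with outer-product traces: an item is bound to a context-cue conjunction, and probing with the same conjunction retrieves it despite an interfering trace. The dimensionality, vectors, and binding scheme below are illustrative assumptions, not the proposal itself.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256

def rand_vec():
    """Random unit vector as a distributed item/context representation."""
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

context, cue, target, other = (rand_vec() for _ in range(4))

# store: bind the context-cue conjunction (elementwise product) to the target
memory = np.outer(context * cue, target)
memory += np.outer(context * rand_vec(), other)   # an interfering trace

# retrieve: probe with the same context-cue conjunction
retrieved = (context * cue) @ memory
print("match with target:", float(retrieved @ target))
print("match with other: ", float(retrieved @ other))
```

The probe recovers a noisy copy of the target whose dot product with the stored item dominates that of the interfering item, which is the context-dependent behaviour the abstract alludes to.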