948 results for Computer Engineering | Computer science
The backfilled GEI: a cross-capture modality gait feature for frontal and side-view gait recognition
Abstract:
In this paper, we propose a novel direction for gait recognition research by introducing a new capture-modality-independent, appearance-based feature which we call the Back-filled Gait Energy Image (BGEI). It can be constructed from both frontal depth images and the more commonly used side-view silhouettes, allowing the feature to be applied across these two differing capture systems using the same enrolled database. To evaluate this new feature, a frontally captured depth-based gait dataset was created containing 37 unique subjects, a subset of which also contained sequences captured from the side. The results demonstrate that the BGEI can effectively be used to identify subjects through their gait across these two differing input devices, achieving a rank-1 match rate of 100% in our experiments. We also compare the BGEI against the GEI and GEV in their respective domains, using the CASIA dataset and our depth dataset, and show that it compares favourably against them. The experiments were performed using a sparse-representation-based classifier with a locally discriminating input feature space, which shows significant improvement in performance over other classifiers used in the gait recognition literature, achieving state-of-the-art results with the GEI on the CASIA dataset.
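As a minimal illustration of the appearance-based feature family this abstract builds on, the sketch below averages aligned binary silhouettes into a gait energy image using NumPy; the back-filling step that distinguishes the BGEI, and all segmentation and alignment, are only gestured at in comments and are not the paper's actual implementation.

    import numpy as np

    def gait_energy_image(silhouettes):
        """Average aligned binary silhouettes (H x W, values 0/1) taken over
        one gait cycle to form a gait energy image (GEI)."""
        stack = np.stack([s.astype(np.float32) for s in silhouettes], axis=0)
        return stack.mean(axis=0)

    # Illustrative call with random "silhouettes"; a real pipeline would first
    # segment, align and normalise the frames, and for the BGEI would back-fill
    # the silhouette region before averaging.
    frames = [np.random.randint(0, 2, (128, 88)) for _ in range(30)]
    gei = gait_energy_image(frames)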
Abstract:
In this paper, we review the sequential slotted amplify-decode-and-forward (SADF) protocol with half-duplex single-antenna terminals and evaluate its performance in terms of pairwise error probability (PEP). We derive the PEP upper bound of the protocol and find that the achievable diversity order of the protocol is two for an arbitrary number of relay terminals. To achieve this maximum diversity order, we propose a simple precoder that is easy to implement with any number of relay terminals and transmission slots. Simulation results show that the proposed precoder achieves the maximum achievable diversity order and has BER performance similar to that of some existing precoders.
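For context, the diversity order quoted here is the high-SNR decay exponent of the pairwise error probability; in the standard formulation (the constant c below is generic, not a bound taken from the paper),

\[
P(\mathbf{s} \rightarrow \hat{\mathbf{s}}) \le \frac{c}{\rho^{d}},
\qquad
d = -\lim_{\rho \rightarrow \infty} \frac{\log P(\mathbf{s} \rightarrow \hat{\mathbf{s}})}{\log \rho},
\]

where \(\rho\) is the SNR, so the abstract's result corresponds to d = 2 irrespective of the number of relay terminals.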
Abstract:
Distributed space-time coding (DSTC) exploits the concepts of cooperative diversity and space-time coding to offer a powerful, bandwidth-efficient solution with improved diversity. In this paper, we evaluate the performance of DSTC with the slotted amplify-and-forward (SAF) protocol. Relay nodes between the source and destination nodes are grouped into two relay clusters based on their respective locations, and these clusters cooperate to transmit the space-time coded signal to the destination node in different time frames. We further extend the proposed Slotted-DSTC to the Slotted DSTC with redundant code (Slotted-DSTC-R) protocol, in which the relay nodes in both relay clusters forward the same space-time coded signal to the destination node to achieve a higher diversity order.
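The abstract does not say which space-time code the two relay clusters transmit; purely as an illustrative assumption, a two-branch distributed scheme of this kind is often described with the Alamouti code, where rows index the two transmit branches (here, the relay clusters) and columns index the two time slots of a frame:

\[
\mathbf{X} =
\begin{pmatrix}
x_1 & -x_2^{*} \\
x_2 & x_1^{*}
\end{pmatrix}.
\]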
Abstract:
In this paper, we propose a novel relay ordering and scheduling strategy for the sequential slotted amplify-and-forward (SAF) protocol and evaluate its performance in terms of the diversity-multiplexing trade-off (DMT). The relays between the source and destination are grouped into two relay clusters based on their respective locations. The proposed strategy achieves partial relay isolation and decreases the decoding complexity at the destination. We show that the DMT upper bound of the sequential-SAF protocol with the proposed strategy outperforms other amplify-and-forward protocols and is more practical than the relay-isolation assumption made in the original paper [1]. Simulation results show that the sequential-SAF protocol with the proposed strategy has better outage performance than the existing AF and non-cooperative protocols in the high-SNR regime.
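As background, the DMT referred to here is the standard Zheng-Tse trade-off between multiplexing gain r and diversity gain d(r), defined from the outage probability at SNR \(\rho\) for a rate scaling as \(R(\rho) = r \log \rho\):

\[
r = \lim_{\rho \to \infty} \frac{R(\rho)}{\log \rho},
\qquad
d(r) = -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}(r \log \rho)}{\log \rho}.
\]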
Abstract:
In this paper, we propose a novel slotted hybrid cooperative protocol named the sequential slotted amplify-decode-and-forward (SADF) protocol and evaluate its performance in terms of the diversity-multiplexing trade-off (DMT). The relays between the source and destination are divided into two groups, and each relay either amplifies or decodes the received signal. We first compute the optimal DMT of the proposed protocol under the assumption of perfect decoding at the DF relays. We then derive a closed-form DMT expression for the proposed sequential-SADF and obtain the proximity gain bound for achieving the optimal DMT. From this bound, we then find the distance ratio required to achieve the optimal DMT performance. Simulation results show that the proposed protocol with high proximity gain outperforms other cooperative communication protocols in the high-SNR regime.
Abstract:
This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA is used to assess the security of a given program, based on its design or source code, with respect to data-flow-based metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA is also capable of producing Java source code for the generated design of a given program. This source code can then be compiled, and the resulting Java bytecode program can be used by the tool to assess the program's overall security based on our security metrics.
Abstract:
Refactoring is a common approach to producing better quality software. Its impact on many software quality properties, including reusability, maintainability and performance, has been studied and measured extensively. However, its impact on the information security of programs has received relatively little attention. In this work, we assess the impact of a number of the most common code-level refactoring rules on data security, using security metrics that are capable of measuring security from the viewpoint of potential information flow. The metrics are calculated for a given Java program using a static analysis tool we have developed to automatically analyse compiled Java bytecode. We ran our Java code analyser on various programs which were refactored according to each rule. New values of the metrics for the refactored programs then confirmed that the code changes had a measurable effect on information security.
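As a toy illustration of a data-flow-style security metric (not necessarily one of the metrics used in this work), the sketch below measures the proportion of classified attributes a class exposes through non-private accessibility, computed from a simple attribute listing rather than from compiled bytecode:

    def classified_attribute_exposure(attributes):
        """attributes: list of (name, visibility, is_classified) tuples.
        Returns the fraction of classified attributes that are not private;
        lower values suggest less potential information flow."""
        classified = [a for a in attributes if a[2]]
        if not classified:
            return 0.0
        exposed = [a for a in classified if a[1] != "private"]
        return len(exposed) / len(classified)

    attrs = [("password", "private", True),
             ("token", "public", True),
             ("counter", "public", False)]
    print(classified_attribute_exposure(attrs))  # 0.5

A refactoring that changes visibility or relocates a classified attribute would change such a measure, which is the kind of effect the security metrics are used to detect.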
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
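A minimal sketch of the confidence-driven, greedy ordering idea described above, assuming hypothetical helpers: candidate_queries maps each missing cell to (query, confidence) pairs sorted by confidence, and run_imputation_query stands in for WebPut's web extraction step; in the real system, newly imputed values would also feed back into how later queries are formulated:

    def greedy_impute(missing_cells, candidate_queries, run_imputation_query):
        """Impute the cell whose best remaining query is most confident first."""
        pending = set(missing_cells)
        results = {}
        while pending:
            # Pick the cell with the most confident remaining candidate query.
            cell = max(pending, key=lambda c: candidate_queries[c][0][1])
            query, _confidence = candidate_queries[cell][0]
            value = run_imputation_query(query)  # hypothetical web lookup
            if value is not None:
                results[cell] = value
            pending.discard(cell)
        return results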
Abstract:
This paper introduces PartSS, a new partition-based filtering technique for tasks performing string comparisons under edit distance constraints. PartSS offers improvements over the state-of-the-art method NGPP through a new partitioning scheme, and also improves filtering ability by exploiting theoretical results on shifting and scaling ranges, thus accelerating the computation of edit distance between strings. PartSS filtering has been implemented within two major data integration tasks: similarity join and approximate membership extraction under edit distance constraints. Evaluation on an extensive range of real-world datasets demonstrates a major gain in efficiency over the NGPP and QGrams approaches.
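PartSS's own shifting and scaling results are not reproduced here; the sketch below only shows the generic pigeonhole idea behind partition-based filtering: if ed(s, t) <= tau and s is split into tau + 1 chunks, at least one chunk must survive unedited and therefore occur verbatim in t, so pairs with no matching chunk can be discarded before computing the edit distance:

    def partition_filter(s, t, tau):
        """Return False only when the pair (s, t) can be safely pruned."""
        n = len(s)
        if n <= tau:
            return True  # too short to partition safely; keep the pair
        k = tau + 1
        bounds = [round(i * n / k) for i in range(k + 1)]
        chunks = [s[bounds[i]:bounds[i + 1]] for i in range(k)]
        # If no chunk of s appears as a substring of t, ed(s, t) > tau.
        return any(chunk in t for chunk in chunks)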
Abstract:
Optimal Asset Maintenance decisions are imperative for efficient asset management. Decision Support Systems are often used to help asset managers make maintenance decisions, but high quality decision support must be based on sound decision-making principles. For long-lived assets, a successful Asset Maintenance decision-making process must effectively handle multiple time scales. For example, high-level strategic plans are normally made for periods of years, while daily operational decisions may need to be made within a space of mere minutes. When making strategic decisions, one usually has the luxury of time to explore alternatives, whereas routine operational decisions must often be made with no time for contemplation. In this paper, we present an innovative, flexible decision-making process model which distinguishes meta-level decision making, i.e., deciding how to make decisions, from the information gathering and analysis steps required to make the decisions themselves. The new model can accommodate various decision types. Three industrial case studies are given to demonstrate its applicability.
Abstract:
The Cross-Entropy (CE) method is an efficient technique for the estimation of rare-event probabilities and for combinatorial optimization. This work presents a novel application of the CE method to the optimization of a soft-computing controller. A fuzzy controller was designed to command an unmanned aerial system (UAS) in a collision-avoidance task. The only sensor used to accomplish this task was a forward-facing camera. The CE method is used to reach a near-optimal controller by modifying the scaling factors of the controller inputs. The optimization was realized using the ROS-Gazebo simulation system. To evaluate the optimization, a large number of tests were carried out with a real quadcopter.
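A generic Cross-Entropy optimisation loop over the controller's input scaling factors might look like the sketch below; evaluate_controller is a hypothetical stand-in for a ROS-Gazebo rollout that returns a cost (lower is better), and the Gaussian sampling model, population size and elite fraction are assumptions, not the paper's settings:

    import numpy as np

    def cross_entropy_optimize(evaluate_controller, dim, iters=20,
                               samples=50, elite_frac=0.2, seed=0):
        """Iteratively sample scaling factors, keep the elite fraction,
        and refit the sampling distribution to the elites."""
        rng = np.random.default_rng(seed)
        mean, std = np.ones(dim), np.ones(dim)
        n_elite = max(1, int(samples * elite_frac))
        for _ in range(iters):
            xs = rng.normal(mean, std, size=(samples, dim))
            costs = np.array([evaluate_controller(x) for x in xs])
            elite = xs[np.argsort(costs)[:n_elite]]
            mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        return mean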
Abstract:
With the widespread application of healthcare Information and Communication Technology (ICT), constructing a stable and sustainable data-sharing environment has attracted rapidly growing attention in both the academic research community and the healthcare industry. Cloud computing is one way to realise the long-held vision of the Healthcare Cloud (HC), which matches the need to share healthcare information directly among various health providers over the Internet, regardless of their location and the amount of data. In this paper, we discuss important research tools related to health information sharing and integration in the HC and investigate the arising challenges and issues. We describe several potential solutions that provide more opportunities to implement the EHR cloud. We also introduce the development of an HC-related collaborative healthcare research example, illustrating the prospects of applying cloud computing in health information science research.
Abstract:
In this study, we explore the design and evaluation of a mobile online discussion system for motivating students to share their learning experiences. The system supports interaction with peers and academic staff anytime and anywhere using mobile devices. The application introduces a set of features that enables customisation for different purposes. This paper describes the application and explains the motivation for developing the application. We describe the methods and results of a case study that explores usage of the application among a small group of localised participants. Finally, we discuss the implications of this work and outline future areas of research and development.
Abstract:
The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Observations with insufficient likelihood under the model are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the model depends not only on the previous state in the temporal direction, but also on the previous states of the adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information, respectively. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
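The decision step ("insufficient likelihood means abnormal") can be sketched as a simple threshold over combined log-likelihoods; the three scorer callables (for the temporal model and the two spatial-direction HMMs) and the threshold are hypothetical placeholders, with the threshold typically chosen from the likelihoods of normal training data:

    def is_abnormal(clip_features, temporal_loglik, vertical_loglik,
                    horizontal_loglik, threshold):
        """Flag a clip whose combined log-likelihood under the trained
        models falls below the chosen threshold."""
        score = (temporal_loglik(clip_features)
                 + vertical_loglik(clip_features)
                 + horizontal_loglik(clip_features))
        return score < threshold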
Abstract:
Spatio-temporal interest points are the most popular feature representation in the field of action recognition. A variety of methods have been proposed to detect and describe local patches in video, with several techniques reporting state-of-the-art performance for action recognition. However, the reported results are obtained under different experimental settings with different datasets, making it difficult to compare the various approaches. As a result, we comprehensively evaluate state-of-the-art spatio-temporal features under a common evaluation framework with popular benchmark datasets (KTH, Weizmann) and more challenging datasets such as Hollywood2. The purpose of this work is to provide guidance for researchers when selecting features for different applications with different environmental conditions. We evaluate four popular descriptors (HOG, HOF, HOG/HOF, HOG3D) using a popular bag-of-visual-features representation and Support Vector Machines (SVMs) for classification. Moreover, we provide an in-depth analysis of local feature descriptors and optimize the codebook sizes for different datasets with different descriptors. We demonstrate that motion-based features offer better performance than those that rely solely on spatial information, while features that combine both types of data are more consistent across a variety of conditions, but typically require a larger codebook for optimal performance.
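A minimal bag-of-visual-words pipeline of the kind evaluated here, assuming local descriptors (e.g. HOG/HOF) have already been extracted per video; the codebook size, descriptor dimensionality and the use of scikit-learn's KMeans and linear SVC are illustrative choices, not the paper's exact setup:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_codebook(all_descriptors, k):
        """Cluster local spatio-temporal descriptors into a k-word codebook."""
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

    def bow_histogram(codebook, descriptors):
        """Quantise one video's descriptors into a normalised word histogram."""
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Illustrative training on random vectors standing in for real descriptors.
    rng = np.random.default_rng(0)
    train_videos = [rng.normal(size=(200, 96)) for _ in range(10)]
    labels = [i % 2 for i in range(10)]
    codebook = build_codebook(np.vstack(train_videos), k=50)
    X = np.array([bow_histogram(codebook, v) for v in train_videos])
    classifier = SVC(kernel="linear").fit(X, labels)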