838 results for computer-based instrumentation
Abstract:
Prevailing video adaptation solutions change the quality of the video uniformly across the whole frame during bitrate adjustment, whereas region-of-interest (ROI)-based solutions selectively retain quality in the areas of the frame to which viewers are most likely to pay attention. ROI-based coding can improve perceptual quality and viewer satisfaction while trading off some bandwidth. However, no comprehensive study has yet measured the bitrate vs. perceptual quality trade-off. This paper proposes an ROI detection scheme for videos that is characterized by low computational complexity and robustness, and measures the bitrate vs. quality trade-off for ROI-based encoding using a state-of-the-art H.264/AVC encoder to assess the viability of this type of encoding. Results from the subjective quality test reveal that ROI-based encoding achieves a significant perceptual quality improvement over encoding with uniform quality at the cost of slightly more bits. Based on the bitrate measurements and subjective quality assessments, bitrate and perceptual quality estimation models for non-scalable ROI-based video coding (AVC) are developed, and these are found to be similar to the models for scalable video coding (SVC).
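As a rough illustration of the general idea behind ROI-based quality allocation (not the paper's detection scheme or the H.264/AVC encoder internals), the following sketch builds a per-macroblock quantisation parameter (QP) map in which macroblocks covered by an ROI mask receive a lower QP, i.e. higher quality. The function name, macroblock size, and QP offset are illustrative assumptions.

```python
import numpy as np

def roi_qp_map(roi_mask, base_qp=32, roi_qp_offset=-6, mb_size=16):
    """Build a per-macroblock QP map: macroblocks overlapping the ROI get a
    lower QP (higher quality), the background keeps the base QP.

    roi_mask: 2D boolean array, True for ROI pixels (e.g. a detected face).
    """
    h, w = roi_mask.shape
    mb_rows, mb_cols = h // mb_size, w // mb_size
    qp_map = np.full((mb_rows, mb_cols), base_qp, dtype=int)
    for r in range(mb_rows):
        for c in range(mb_cols):
            block = roi_mask[r * mb_size:(r + 1) * mb_size,
                             c * mb_size:(c + 1) * mb_size]
            if block.any():                      # macroblock touches the ROI
                qp_map[r, c] = base_qp + roi_qp_offset
    return qp_map

# toy example: a CIF-sized frame with a centred rectangular ROI
mask = np.zeros((288, 352), dtype=bool)
mask[96:192, 128:224] = True
print(roi_qp_map(mask))
```

An encoder that supports per-macroblock QP adjustment (as H.264/AVC does) could consume such a map directly.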
Abstract:
Current research in secure messaging for Vehicular Ad hoc Networks (VANETs) appears to focus on employing a digital certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates transmission delay and introduces a time-consuming verification process to VANET communications. This paper proposes a non-certificate-based public key management scheme for VANETs. A comprehensive evaluation of the performance and scalability of the proposed public key management regime is presented and compared with a certificate-based PKC through a number of quantitative analyses and simulations. Not only does this paper demonstrate that the proposal can maintain security, it also shows that it can improve overall performance and scalability at a lower cost compared with the certificate-based PKC. It is believed that the proposed scheme will add a new dimension to key management and verification services for VANETs.
Abstract:
Recommender systems are a recent invention designed to cope with the ever-growing information overload surrounding the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as the neighbours, generated from a database made up of the preferences of past users. With sufficient background information on item ratings its performance is promising, but research shows that it performs very poorly in cold-start situations where there is not enough previous rating data. As an alternative to ratings, trust between users can be used to choose the neighbours for recommendation making. Better recommendations can be achieved using an inferred trust network which mimics real-world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called the Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests; this is often assumed in recommender systems. Through a survey, we confirm that interest similarity has a positive relationship with trust and can therefore be used to generate a trust network for recommendation. In this research, we also propose a new method, called SimTrust, for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalised tagging information; however, we are interested in which resources a user chooses to tag, rather than the text of the tags applied. The commonality of the resources tagged by users can be used to form the neighbourhoods used in the automated recommender system. Our experimental results show that our proposed tag-similarity-based method outperforms the traditional collaborative filtering approach, which usually uses rating data.
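A minimal sketch of how a tag-based interest-similarity network of the kind described above could be formed, assuming similarity is measured as the overlap of the resources two users have tagged. The abstract does not give the exact SimTrust formulation; the Jaccard measure, threshold, and names below are illustrative assumptions.

```python
def interest_similarity(tagged_a, tagged_b):
    """Jaccard overlap of the *resources* two users have tagged
    (the tag text itself is ignored, as in the abstract)."""
    a, b = set(tagged_a), set(tagged_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def build_trust_network(user_resources, threshold=0.2):
    """Connect users whose tagged-resource overlap exceeds a threshold."""
    users = list(user_resources)
    edges = {}
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            sim = interest_similarity(user_resources[u], user_resources[v])
            if sim >= threshold:
                edges[(u, v)] = sim
    return edges

# toy user profiles: each user maps to the resources they have tagged
profiles = {
    "alice": ["itemA", "itemB", "itemC"],
    "bob":   ["itemB", "itemC", "itemD"],
    "carol": ["itemX"],
}
print(build_trust_network(profiles))
```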
Abstract:
The low resolution of images has been one of the major limitations in recognising humans from a distance using their biometric traits, such as face and iris. Super-resolution has been employed to improve resolution and recognition performance simultaneously; however, the majority of techniques operate in the pixel domain, such that the biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris, and has been shown to further improve recognition performance by directly super-resolving the features that are used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including face and iris. This paper proposes a framework for super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
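For readers unfamiliar with the Gabor feature domain referred to above, the following is a minimal sketch of extracting a Gabor-based feature vector by convolving an image with a bank of filters at several orientations and wavelengths. The filter parameters are illustrative, and the feature-domain super-resolution step itself is not shown.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a 2-D Gabor filter with orientation theta and wavelength lambd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * x_t / lambd))

def gabor_features(image, orientations=8, scales=(6.0, 10.0, 14.0)):
    """Stack filter responses over orientations and wavelengths into one feature vector."""
    feats = []
    for lambd in scales:
        for k in range(orientations):
            kern = gabor_kernel(theta=k * np.pi / orientations, lambd=lambd)
            feats.append(fftconvolve(image, kern, mode="same").ravel())
    return np.concatenate(feats)

# toy usage on a random 64x64 "image"
img = np.random.rand(64, 64)
print(gabor_features(img).shape)
```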
Abstract:
The literature supporting the notion that active, student-centered learning is superior to passive, teacher-centered instruction is encyclopedic (Bonwell & Eison, 1991; Bruning, Schraw, & Ronning, 1999; Haile, 1997a, 1997b, 1998; Johnson, Johnson, & Smith, 1999). Previous action research demonstrated that introducing a learning activity in class improved students' learning outcomes (Mejias, 2010). People acquire knowledge and skills through practice and reflection, not by watching and listening to others telling them how to do something. In this context, this project aims to gain more insight into the level of interactivity a class's curriculum should have and its alignment with assessment, so that the intended learning outcomes (ILOs) are achieved. In this project, interactivity is implemented in the form of problem-based learning (PBL). I present the argument that more continuous formative feedback, when implemented with the right amount of PBL, stimulates student engagement, bringing enormous benefits to student learning. Different levels of practical work (PBL) were implemented together with two different assessment approaches in two subjects. The outcomes were measured using qualitative and quantitative data to evaluate the levels of student engagement and satisfaction in terms of the ILOs.
Abstract:
We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). PLDA has been shown to be robust against pose and expression in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using PLDA: the closest-pair approach and the holistic-sets approach. To better model face appearance in video, we also propose a heteroscedastic version of PLDA which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest-pair approach achieves the best performance.
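A minimal sketch of the closest-pair idea for image-set-to-image-set matching: the score between two sets is the best score over all cross-set frame pairs. A cosine similarity stands in here for the PLDA log-likelihood-ratio score used in the paper; the function names and dimensions are illustrative.

```python
import numpy as np

def closest_pair_similarity(set_a, set_b, frame_similarity):
    """Closest-pair approach: score two image sets by their best-matching
    frame pair, where frame_similarity(x, y) is any per-frame score
    (e.g. a PLDA log-likelihood ratio in the paper's setting)."""
    return max(frame_similarity(x, y) for x in set_a for y in set_b)

def cosine(x, y):
    """Placeholder per-frame score: cosine similarity of feature vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# toy usage with random 64-dimensional frame features
gallery = [np.random.rand(64) for _ in range(5)]   # one enrolled video
probe = [np.random.rand(64) for _ in range(3)]     # one query video
print(closest_pair_similarity(gallery, probe, cosine))
```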
Abstract:
Distributed generators (DGs) are defined as generators that are connected to a distribution network. The direction of power flow and the short-circuit current in a network may change compared with a network without DGs. Conventional protective relay schemes do not meet the requirements of this emerging situation. As the number and capacity of DGs in the distribution network increase, the problem of coordinating protective relays becomes more challenging. Against this background, the protective relay coordination problem in distribution systems is investigated, with directional overcurrent relays taken as an example, and formulated as a mixed-integer nonlinear programming problem. A mathematical model describing the problem is first developed, and the well-established differential evolution algorithm is then used to solve it. Finally, a sample system is used to demonstrate the feasibility and efficiency of the developed method.
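To make the optimisation setup concrete, here is a deliberately simplified sketch of coordinating directional overcurrent relays with differential evolution (via SciPy), assuming an IEC standard-inverse characteristic, fixed fault-current multiples, and only the time-dial settings as decision variables. The paper's full mixed-integer nonlinear formulation covers far more than this toy example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Two primary/backup relay pairs, IEC standard-inverse characteristic
# t = TDS * 0.14 / (M**0.02 - 1), with fault-current multiples M assumed fixed.
M_primary = np.array([4.0, 5.0])    # fault current / pickup seen by primary relays
M_backup = np.array([2.5, 3.0])     # same faults as seen by the backup relays
CTI = 0.3                           # required coordination time interval (s)

def op_time(tds, m):
    """Relay operating time for time-dial setting tds and current multiple m."""
    return tds * 0.14 / (m ** 0.02 - 1.0)

def objective(tds):
    """Total operating time plus a penalty for violated coordination constraints."""
    t_p = op_time(tds[:2], M_primary)
    t_b = op_time(tds[2:], M_backup)
    penalty = np.sum(np.maximum(0.0, t_p + CTI - t_b) ** 2) * 1e3
    return t_p.sum() + t_b.sum() + penalty

bounds = [(0.05, 1.1)] * 4          # time-dial setting limits for all four relays
result = differential_evolution(objective, bounds, seed=1)
print(result.x, result.fun)
```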
Abstract:
This paper presents an experimental study that examines the accuracy of various information retrieval techniques for Web service discovery. The main goal of this research is to evaluate algorithms for semantic Web service discovery. The evaluation is comprehensively benchmarked using more than 1,700 real-world WSDL documents from the INEX 2010 Web Service Discovery Track dataset. For automatic search, we successfully use Latent Semantic Analysis and BM25 to perform Web service discovery. Moreover, we provide a linking analysis which automatically links possible atomic Web services to meet users' complex requirements. Our fusion engine recommends a final result to users. Our experiments show that linking analysis can improve the overall performance of Web service discovery. We also find that keyword-based search can quickly return results but has limitations in understanding users' goals.
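As an illustration of the keyword-based side of such a retrieval pipeline, below is a small self-contained Okapi BM25 scorer over tokenised documents (e.g. tokens extracted from WSDL descriptions). The parameter values and toy documents are illustrative and do not reproduce the INEX 2010 setup.

```python
import math
from collections import Counter

def bm25_scores(query_terms, documents, k1=1.5, b=0.75):
    """Score each document (a list of tokens) against the query with Okapi BM25."""
    N = len(documents)
    avgdl = sum(len(d) for d in documents) / N
    df = Counter()                       # document frequency of each term
    for d in documents:
        df.update(set(d))
    scores = []
    for d in documents:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# toy "service descriptions" as token lists
docs = [["hotel", "booking", "service"],
        ["weather", "forecast", "service"],
        ["flight", "booking"]]
print(bm25_scores(["booking", "service"], docs))
```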
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally-equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
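As a toy illustration of a design-level metric in the spirit described above (not the thesis's exact metric definitions), the sketch below measures the fraction of classified, i.e. high-security, attributes of a class that are not declared private. All names and structures here are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    visibility: str          # "public", "protected" or "private"
    classified: bool         # does it carry high-security data?

@dataclass
class ClassDesign:
    name: str
    attributes: list = field(default_factory=list)

def classified_accessibility(design):
    """Fraction of classified (high-security) attributes that are not private.
    Lower is better: exposed classified data is a potential information-flow risk."""
    classified = [a for a in design.attributes if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if a.visibility != "private"]
    return len(exposed) / len(classified)

# toy design artifact
acct = ClassDesign("Account", [
    Attribute("balance", "private", True),
    Attribute("pin", "public", True),       # classified but publicly visible
    Attribute("ownerName", "public", False),
])
print(classified_accessibility(acct))       # 0.5
```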
Abstract:
This paper describes a new system, dubbed Continuous Appearance-based Trajectory Simultaneous Localisation and Mapping (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance-based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop-closure filtering uses a probabilistic distribution of possible loop closures along the robot’s previous trajectory, which is represented as a linked list of previously visited locations connected by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao–Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop-closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM with FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.
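A minimal sketch of the particle-filter machinery referred to above: particles along the previously visited trajectory are weighted by an appearance likelihood and then resampled. Systematic resampling is shown as one standard way to counter particle deprivation, though it is not necessarily the modified scheme used in CAT-SLAM; all quantities are toy values.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: draw evenly spaced samples from the cumulative
    weight distribution, which helps counter particle deprivation."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

rng = np.random.default_rng(0)
particles = rng.uniform(0, 100, size=200)   # positions along the past trajectory (m)

# pretend the current frame matches appearances near the 42 m mark
appearance_likelihood = np.exp(-0.5 * ((particles - 42.0) / 5.0) ** 2)
weights = appearance_likelihood / appearance_likelihood.sum()

particles = particles[systematic_resample(weights, rng)]
print(particles.mean())                     # particles concentrate near the match
```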
Abstract:
Exploiting wind energy is one possible way to extend flight duration for Unmanned Aerial Vehicles. Wind energy can also be used to minimise energy consumption along a planned path. In this paper, we consider uncertain, time-varying wind fields and plan a path through them. A Gaussian distribution is used to model the uncertainty in the time-varying wind fields. We use a Markov Decision Process to plan a path based on the uncertainty of the Gaussian distribution. Simulation results are presented comparing the direct line of flight between the start and target points with our planned path in terms of energy consumption and time of travel. The result is a robust path obtained by using the most visited cells while sampling the Gaussian distribution of the wind field in each cell.
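To make the planning step concrete, here is a simplified value-iteration sketch on a grid, where wind uncertainty enters only through an expected per-cell energy cost (e.g. averaged over samples from a Gaussian wind model). The paper's MDP treats the time-varying, stochastic wind directly, so this is only an illustration of the dynamic-programming backbone; all names and values are assumptions.

```python
import numpy as np

def value_iteration(energy_cost, goal, gamma=1.0, iters=200):
    """Value iteration on a grid: each move into a cell pays that cell's
    expected energy cost; the goal cell is absorbing with zero cost-to-go."""
    rows, cols = energy_cost.shape
    V = np.zeros((rows, cols))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        V_new = V.copy()
        for r in range(rows):
            for c in range(cols):
                if (r, c) == goal:
                    continue
                best = float("inf")
                for dr, dc in moves:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        best = min(best, energy_cost[nr, nc] + gamma * V[nr, nc])
                V_new[r, c] = best
        V = V_new
    return V

# toy 5x5 field of expected energy per cell, goal in the far corner
costs = np.random.default_rng(0).uniform(1.0, 3.0, size=(5, 5))
print(value_iteration(costs, goal=(4, 4)))
```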
Abstract:
This paper introduces the Weighted Linear Discriminant Analysis (WLDA) technique, based upon the weighted pairwise Fisher criterion, for the purpose of improving i-vector speaker verification in the presence of high intersession variability. By taking advantage of the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space, the WLDA technique is shown to provide an improvement in speaker verification performance over traditional Linear Discriminant Analysis (LDA) approaches. A similar approach is also taken to extend the recently developed Source-Normalised LDA (SNLDA) into Weighted SNLDA (WSNLDA), which similarly shows an improvement in speaker verification performance in both matched and mismatched enrolment/verification conditions. Based upon the results presented in this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that both WLDA and WSNLDA are viable replacement techniques for improving the performance of LDA- and SNLDA-based i-vector speaker verification.
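A minimal sketch of a weighted pairwise LDA of the general kind described: the between-class scatter is accumulated over pairs of class means with a distance-dependent weight, so that closely spaced (easily confused) speakers dominate the projection. The specific weighting function (d to the power -4) is an illustrative assumption, not necessarily the criterion used in the paper.

```python
import numpy as np

def weighted_lda(X, y, n_dims, weight=lambda d: d ** -4):
    """Weighted pairwise LDA: build the between-class scatter from all pairs
    of class means, down-weighting well-separated pairs."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    counts = {c: np.sum(y == c) for c in classes}
    n, dim = X.shape
    Sw = np.zeros((dim, dim))
    Sb = np.zeros((dim, dim))
    for c in classes:                       # within-class scatter
        Xc = X[y == c] - means[c]
        Sw += Xc.T @ Xc
    for i, ci in enumerate(classes):        # weighted pairwise between-class scatter
        for cj in classes[i + 1:]:
            diff = (means[ci] - means[cj])[:, None]
            d = np.linalg.norm(diff)
            Sb += (counts[ci] * counts[cj] / n ** 2) * weight(d) * (diff @ diff.T)
    # leading eigenvectors of pinv(Sw) @ Sb give the projection directions
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_dims]]

# toy usage: 4 "speakers", 20 vectors each, 10-dimensional features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, size=(20, 10)) for i in range(4)])
y = np.repeat(np.arange(4), 20)
print(weighted_lda(X, y, n_dims=2).shape)   # (10, 2)
```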
Abstract:
The privacy of efficient tree-based RFID authentication protocols depends heavily on the branching factor at the top layer. Indefinitely increasing the branching factor, however, is not a viable option. This paper proposes the alternate-tree walking scheme, as well as two protocols, to circumvent this problem. The privacy of the resulting protocols is shown to be comparable to that of linear-time protocols, where there is no leakage of information, while reducing the computational load on the database by one-third of what is required by tree-based protocols during authentication. We also identify and address a limitation in quantifying privacy in RFID protocols.
Abstract:
A novel concept for producing high dc voltage for pulsed-power applications is proposed in this paper. The topology consists of an LC resonant circuit supplied by a tuned alternating waveform produced by an inverter. The control scheme is based on detecting variations in the resonant frequency and adjusting the inverter's switching signal patterns to produce a square waveform with exactly the same frequency. The capacitor voltage therefore oscillates divergently with increasing amplitude. A simple one-stage capacitor-diode voltage multiplier (CDVM) connected to the resonant capacitor then rectifies the alternating voltage and gives a dc level equal to twice the input voltage amplitude. The produced high voltage then appears in the form of high-voltage pulses across the load. A basic model is simulated using the MATLAB Simulink platform, and the results are included in the paper.
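The divergent oscillation described above can be illustrated with an idealised, lossless series LC circuit driven by a square wave at its resonant frequency: the capacitor-voltage amplitude grows with every cycle. The component values below are arbitrary assumptions, and the resonant-frequency-tracking control and the CDVM stage are not modelled.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ideal series LC circuit driven by a square wave at its resonant frequency.
L, C = 1e-3, 1e-6                       # 1 mH, 1 uF -> f0 ~ 5.03 kHz
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))
V_in = 100.0                            # inverter output amplitude (V)

def lc(t, x):
    """State derivatives for inductor current i and capacitor voltage v_c."""
    i, v_c = x
    v_src = V_in * np.sign(np.sin(2 * np.pi * f0 * t))   # square-wave drive
    return [(v_src - v_c) / L, i / C]

sol = solve_ivp(lc, (0.0, 20 / f0), [0.0, 0.0], max_step=1 / (200 * f0))
v_c = sol.y[1]
print("capacitor voltage peak after 20 cycles:", v_c.max())
# a one-stage CDVM on top of this would rectify v_c to roughly twice its peak as dc
```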
Abstract:
Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open-source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options, including rapid codebook training and interest-point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and demonstrate the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios.