668 results for Bitrate overhead
Abstract:
Our daily lives become more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, from payment systems to assisting the lives of elderly or disabled people. Security threats for these devices become increasingly dangerous since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform which allows modification even at the operating system level. Therefore, third-party developers have the opportunity to develop kernel-based low-level security tools, which is unusual for smartphone platforms. Android quickly gained popularity among smartphone developers and beyond, since it is based on Java on top of "open" Linux, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, for example, which holds the greatest market share among all smartphone OSs, closed critical APIs to common developers and introduced application certification, since this OS was the main target for smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. In this work, we present our current results in analyzing the security of Android smartphones with a focus on its Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution in this work is three-fold. First, we analyze the Android framework and the Linux kernel to check security functionalities. We survey well-accepted security mechanisms and tools which can increase device security. We provide descriptions of how to adopt these security tools on the Android kernel, and provide their overhead analysis in terms of resource usage. As open smartphones are released and may increase their market share similarly to Symbian, they may attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature and intrusion detection methods in the Android environment. We focus on monitoring events in the kernel; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment. Our third contribution involves initial results of our malware detection mechanism based on static function call analysis. We identified approximately 105 Executable and Linking Format (ELF) executables installed on the Linux side of Android. We perform a statistical analysis of the function calls used by these applications. The results of the analysis can be compared to those of newly installed applications to detect significant differences. Additionally, certain function calls indicate malicious activity. Therefore, we present a simple decision tree for deciding the suspiciousness of the corresponding application. Our results present a first step towards detecting malicious applications on Android-based devices.
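The abstract does not detail the static analysis itself; as a rough illustration of the general approach (not the authors' actual tool), one might extract the imported function calls of ELF binaries with the standard `nm` utility, compare them against baseline statistics, and apply a simple decision rule. The suspicious-call list and thresholds below are hypothetical.

```python
import subprocess
from collections import Counter

# Hypothetical list of calls often associated with malicious activity.
SUSPICIOUS = {"fork", "execve", "system", "ptrace", "setuid"}

def extract_calls(elf_path):
    """Imported (undefined) symbols of an ELF binary, i.e. the external
    functions it calls, listed by the standard `nm` tool."""
    out = subprocess.run(["nm", "-D", "--undefined-only", elf_path],
                         capture_output=True, text=True, check=True)
    return {line.split()[-1] for line in out.stdout.splitlines() if line.strip()}

def baseline_stats(elf_paths):
    """How many known-good executables use each function call."""
    counts = Counter()
    for path in elf_paths:
        counts.update(extract_calls(path))
    return counts

def classify(elf_path, baseline, n_baseline, rare_freq=0.05):
    """Toy decision tree: flag binaries using suspicious calls outright,
    then binaries whose calls are mostly rare in the baseline."""
    calls = extract_calls(elf_path)
    if calls & SUSPICIOUS:
        return "suspicious"
    rare = sum(1 for c in calls if baseline[c] / n_baseline < rare_freq)
    return "needs review" if calls and rare > len(calls) / 2 else "benign"
```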
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled well by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only for initial logins and do not protect against unlocked-workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, where keystroke dynamics based on free text is used continuously to verify the identity of a user in real time. We improve existing keystroke-dynamics-based verification schemes in four aspects. First, we improve scalability by using a constant number of users instead of the whole user space to verify the identity of the target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
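The paper's new distance measure is not specified in the abstract; as a hedged sketch of the kind of free-text comparison involved, the snippet below builds digraph-latency profiles and scores them with the classic "degree of disorder" measure in the style of Gunetti and Picardi, which work like this paper improves upon. The data format is hypothetical.

```python
import statistics

def digraph_profile(events):
    """events: time-ordered (key, press_time_seconds) pairs from free text.
    Returns the mean latency for each observed key pair (digraph)."""
    latencies = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        latencies.setdefault((k1, k2), []).append(t2 - t1)
    return {dg: statistics.mean(ts) for dg, ts in latencies.items()}

def disorder_distance(p, q):
    """Degree-of-disorder distance over shared digraphs: sort the shared
    digraphs by latency in each profile and measure how far the two
    orderings disagree, normalized to [0, 1]."""
    shared = list(set(p) & set(q))
    n = len(shared)
    if n < 2:
        return 1.0  # not enough common digraphs to compare
    order_p = sorted(shared, key=lambda d: p[d])
    pos_q = {d: i for i, d in enumerate(sorted(shared, key=lambda d: q[d]))}
    disorder = sum(abs(i - pos_q[d]) for i, d in enumerate(order_p))
    max_disorder = (n * n - (n % 2)) / 2  # maximum total displacement
    return disorder / max_disorder
```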
Abstract:
An algorithm for computing dense correspondences between images of a stereo pair or image sequence is presented. The algorithm can make use of both standard matching metrics and the rank and census filters, two filters based on order statistics which have been applied to the image matching problem. Their advantages include robustness to radiometric distortion and amenability to hardware implementation. Results obtained using both real stereo pairs and a synthetic stereo pair with ground truth were compared. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
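For reference, the two order-statistics filters are straightforward to state. Below is an unoptimized sketch of both transforms; the window sizes are illustrative, not the paper's settings.

```python
import numpy as np

def rank_transform(img, r=3):
    """Rank transform: each pixel becomes the number of pixels in its
    (2r+1)x(2r+1) window that are darker than the centre pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(win < img[y, x])
    return out

def census_transform(img, r=1):
    """Census transform: each pixel becomes a bit string encoding which
    neighbours are darker than the centre pixel (r <= 3 so the code
    fits in 64 bits)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            code = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    code = (code << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = code
    return out
```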
Abstract:
The rank and census are two filters based on order statistics which have been applied to the image matching problem for stereo pairs. Advantages of these filters include their robustness to radiometric distortion and small amounts of random noise, and their amenability to hardware implementation. In this paper, a new matching algorithm is presented, which provides an overall framework for matching and is used to compare the rank and census techniques with standard matching metrics. The algorithm was tested using both real stereo pairs and a synthetic pair with ground truth. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census filters have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
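A minimal version of such a matching framework, using winner-takes-all aggregation of Hamming distances between census codes, might look like the sketch below; it is simpler than the paper's algorithm and its parameters are hypothetical.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two census codes (count of differing bits)."""
    return bin(int(a) ^ int(b)).count("1")

def disparity_map(census_l, census_r, max_disp=32, r=2):
    """For each left pixel, pick the disparity whose window sum of
    Hamming distances against the right image is smallest."""
    h, w = census_l.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            best, best_d = None, 0
            for d in range(max_disp):
                cost = sum(hamming(census_l[y + dy, x + dx],
                                   census_r[y + dy, x + dx - d])
                           for dy in range(-r, r + 1)
                           for dx in range(-r, r + 1))
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```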
Abstract:
The increasing demand for mobile video has attracted much attention from both industry and researchers. To satisfy users and to facilitate the usage of mobile video, providing optimal quality to the users is necessary. As a result, quality of experience (QoE) has become an important focus in measuring the overall quality perceived by end-users, from the aspects of both objective system performance and subjective experience. However, due to the complexity of user experience and the diversity of resources (such as videos, networks and mobile devices), it is still challenging to develop QoE models for mobile video that can represent how user-perceived value varies with changing conditions. Previous QoE modelling research has two main limitations: the aspects influencing QoE are insufficiently considered, and acceptability as a measure of user value is seldom studied. Focusing on these QoE modelling issues, two aims are defined in this thesis: (i) investigating the key influencing factors of mobile video QoE; and (ii) establishing QoE prediction models based on the relationships between user acceptability and the influencing factors, in order to help provide optimal mobile video quality. To achieve the first goal, a comprehensive user study was conducted. It investigated the main impacts on user acceptance: video encoding parameters such as quantization parameter, spatial resolution, frame rate and encoding bitrate; video content type; mobile device display resolution; and user profiles including gender, preference for video content, and prior viewing experience. Results from both quantitative and qualitative analysis revealed the significance of these factors, as well as how and why they influenced user acceptance of mobile video quality. Based on the results of the user study, statistical techniques were used to generate a set of QoE models that predict the subjective acceptability of mobile video quality from a group of measurable influencing factors, including encoding parameters and bitrate, content type, and mobile device display resolution. By applying the proposed QoE models in a mobile video delivery system, optimal decisions can be made for determining proper video coding parameters and for delivering the most suitable quality to users. This would lead to consistent user experience across different mobile video content and efficient resource allocation. The findings of this research enhance the understanding of user experience in the field of mobile video, which will benefit mobile video design and research. This thesis presents a way of modelling QoE by emphasising user acceptability of mobile video quality, which provides a strong connection between technical parameters and user-desired quality. Managing QoE based on acceptability offers the potential to adapt to resource limitations and achieve an optimal QoE in the provision of mobile video content.
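The thesis abstract does not give the model form; as a hedged illustration of how acceptability might be regressed on measurable factors, one could fit a logistic regression. All feature names and data below are hypothetical, not the thesis's data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per viewing session.
# Columns: encoding bitrate (kbps), frame rate (fps), spatial resolution
# (pixel count), device display resolution (pixel count).
X = np.array([
    [200,  12.5,  320 * 240,  480 * 800],
    [600,  25.0,  640 * 480,  480 * 800],
    [1200, 25.0, 1280 * 720, 1080 * 1920],
    [100,  12.5,  320 * 240, 1080 * 1920],
])
y = np.array([0, 1, 1, 0])  # user judged the quality acceptable (1) or not (0)

model = LogisticRegression().fit(np.log(X), y)  # log scale for wide-ranged factors
p_accept = model.predict_proba(np.log([[800, 25.0, 640 * 480, 480 * 800]]))[0, 1]
print(f"predicted acceptability: {p_accept:.2f}")
```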
Abstract:
Building and maintaining software are not easy tasks. However, thanks to advances in web technologies, a new paradigm is emerging in software development. The Service Oriented Architecture (SOA) is a relatively new approach that helps bridge the gap between business and IT and also helps systems remain flexible. However, there are still several challenges with SOA. As the number of available services grows, developers are faced with the problem of discovering the services they need. Public service repositories such as Programmable Web provide only limited search capabilities. Several mechanisms have been proposed to improve web service discovery by using semantics. However, most of these require manually tagging the services with concepts in an ontology. Adding semantic annotations is a non-trivial process that requires a certain skill set from the annotator, as well as the availability of domain ontologies that include the concepts related to the topics of the service. These issues have prevented such mechanisms from becoming widespread. This thesis focuses on two main problems. First, to avoid the overhead of manually adding semantics to web services, several automatic methods to include semantics in the discovery process are explored. Although experimentation with some of these strategies has been conducted in the past, the results reported in the literature are mixed. Second, Wikipedia is explored as a general-purpose ontology. The benefit of using it as an ontology is assessed by comparing these semantics-based methods to classic term-based information retrieval approaches. The contribution of this research is significant because, to the best of our knowledge, a comprehensive analysis of the impact of using Wikipedia as a source of semantics in web service discovery does not exist. The main output of this research is a web service discovery engine that implements these methods, together with a comprehensive analysis of the benefits and trade-offs of these semantics-based discovery approaches.
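As a sketch of the term-based retrieval baseline the thesis compares against, a minimal TF-IDF discovery engine might look like the following (the service registry is invented). Semantics-based methods, e.g. Wikipedia-derived relatedness, aim to bridge exactly the vocabulary mismatches such a baseline misses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical service registry: name -> textual description.
services = {
    "WeatherLookup": "returns the current weather forecast for a given city",
    "FxRates": "currency exchange rate conversion service",
    "GeoCoder": "converts street addresses into latitude and longitude",
}

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(services.values())

def discover(query, top_k=2):
    """Rank services by cosine similarity between the query and descriptions."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return sorted(zip(services, scores), key=lambda pair: -pair[1])[:top_k]

print(discover("weather forecast service"))
```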
Abstract:
Predicate encryption (PE) is a new primitive which supports flexible control over access to encrypted data. In PE schemes, users' decryption keys are associated with predicates f, and ciphertexts encode attributes a that are specified during the encryption procedure. A user can successfully decrypt if and only if f(a) = 1. In this thesis, we investigate several properties that are crucial to PE. We focus on the expressiveness of PE, Revocable PE, and Hierarchical PE (HPE) with forward security. For all proposed systems, we provide a security model and analysis using the widely accepted computational complexity approach. Our first contribution is to explore the expressiveness of PE. Existing PE supports a wide class of predicates such as conjunctions of equality, comparison and subset queries, disjunctions of equality queries, and, more generally, arbitrary combinations of conjunctive and disjunctive equality queries. We advance PE to evaluate more expressive predicates, e.g., disjunctive comparison or disjunctive subset queries. Such expressiveness is achieved at the cost of computational and space overhead. To improve performance, we appropriately revise the PE to reduce the computational and space cost. Furthermore, we propose a heuristic method to reduce disjunctions in the predicates. Our schemes are proved secure in the standard model. We then introduce the concept of Revocable Predicate Encryption (RPE), which extends the previous PE setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created. We propose two RPE schemes. Our first scheme, termed Attribute-Hiding RPE (AH-RPE), offers attribute hiding, which is the standard PE property. Our second scheme, termed Full-Hiding RPE (FH-RPE), offers even stronger privacy guarantees: apart from possessing the attribute-hiding property, the scheme also ensures that no information about revoked users is leaked from a given ciphertext. The proposed schemes are also proved secure under well-established assumptions in the standard model. Secrecy of decryption keys is an important prerequisite for the security of (H)PE, and compromised private keys must be immediately replaced. The notion of Forward Security (FS) reduces the damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. We present the first Forward-Secure Hierarchical Predicate Encryption (FS-HPE) scheme that is proved secure in the standard model. Our FS-HPE scheme offers several desirable properties: time-independent delegation of predicates (to support dynamic behavior for delegation of decryption rights to new users), local update of users' private keys (i.e., no master authority needs to be contacted), forward security, and an encryption process that does not require knowledge of predicates at any level, including when those predicates join the hierarchy.
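As background context (not part of this thesis's constructions): many of the predicate classes mentioned reduce to inner-product predicates in the style of Katz, Sahai and Waters, where f_v(a) = 1 iff the inner product of v and a is zero. A toy evaluation of that plaintext logic, with all of the cryptography omitted:

```python
def inner_product_predicate(v, a, q):
    """f_v(a) = 1 iff <v, a> = 0 (mod q): the predicate class to which
    equality, conjunction and polynomial-based queries can be reduced."""
    return sum(vi * ai for vi, ai in zip(v, a)) % q == 0

# Equality query "x == 5" as an inner product: attribute x is encoded as
# (1, x) and the predicate as (-5, 1), so <v, a> = x - 5.
q = 2**31 - 1
encode_attr = lambda x: (1, x)
equality_pred = lambda c: (-c % q, 1)
print(inner_product_predicate(equality_pred(5), encode_attr(5), q))  # True
print(inner_product_predicate(equality_pred(5), encode_attr(7), q))  # False
```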
Abstract:
Singapore is a highly urbanized city-state where walking is an important mode of travel. Pedestrians form about 25% of road fatalities every year, making them one of the most vulnerable road user groups in Singapore. Engineering measures such as the provision of overhead pedestrian crossings and raised zebra crossings tend to address pedestrian safety in general, but there may be occasions where pedestrians are particularly vulnerable, so that targeted interventions are more appropriate. The objective of this study is to identify factors and situations that affect the injury severity of pedestrians involved in traffic crashes. Six years of crash data from 2003 to 2008, containing around four thousand pedestrian crashes at roadway segments, were analyzed. Injury severity of pedestrians, recorded as slight injury, major injury or fatal, was modeled as a function of roadway characteristics, traffic features, environmental factors and pedestrian demographics using an ordered probit model. Results suggest that the injury severity of pedestrians involved in crashes at night is higher, indicating that pedestrian visibility at night is a key issue in pedestrian safety. The likelihood of fatal or serious injuries is higher for crashes on roads with high speed limits, in the center and median lanes of multi-lane roads, in school zones, on roads with two-way divided traffic, and when pedestrians cross the road. Elderly pedestrians appear to be involved in fatal and serious injury crashes more often when they attempt to cross the road without using nearby crossing facilities. Specific countermeasures are recommended based on the findings of this study.
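The abstract names the estimator but not its formulation; a hedged sketch of fitting an ordered probit with statsmodels is shown below, using synthetic stand-in data since the study's crash records are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical crash records standing in for the real data set.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "night": rng.integers(0, 2, n),           # crash occurred at night
    "speed_limit": rng.choice([50, 60, 70, 80], n),
    "elderly": rng.integers(0, 2, n),         # pedestrian aged 65+
})
# Latent severity driving the ordered outcome (coefficients invented).
latent = 0.8 * df.night + 0.03 * df.speed_limit + 0.5 * df.elderly + rng.normal(size=n)
df["severity"] = pd.cut(latent, [-np.inf, 2.2, 3.2, np.inf],
                        labels=["slight", "major", "fatal"])

model = OrderedModel(df["severity"], df[["night", "speed_limit", "elderly"]],
                     distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.params)  # positive coefficients raise the odds of severer injury
```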
Abstract:
An Application-Specific Instruction-set Processor (ASIP) is a specialized processor tailored to run a particular application, or set of applications, efficiently. However, when there are multiple candidate applications in an application domain, it is difficult and time-consuming to find the optimum set of applications to implement. Existing ASIP design approaches perform this selection manually based on a designer's knowledge. We help cut down the number of candidate applications by devising a classification method that clusters similar applications based on the special-purpose operations they share. This provides a significant reduction in the comparison overhead while producing customized ASIP instruction sets which can benefit a whole family of related applications. Our method gives users the ability to quantify the degree of similarity between the sets of shared operations in order to control the size of clusters. A case study involving twelve algorithms confirms that our approach can successfully cluster similar algorithms together based on the similarity of their component operations.
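A minimal sketch of similarity-controlled clustering follows, using Jaccard similarity on shared operation sets; the operation names, threshold and greedy strategy are illustrative, not the paper's exact method.

```python
def jaccard(a, b):
    """Similarity between two sets of special-purpose operations."""
    return len(a & b) / len(a | b)

def cluster(apps, threshold):
    """Greedy clustering: an application joins the first cluster whose
    representative shares enough operations, else starts a new cluster.
    The threshold lets users control how tight the clusters are."""
    clusters = []
    for name, ops in apps.items():
        for members in clusters:
            if jaccard(ops, apps[members[0]]) >= threshold:
                members.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical applications and the special operations they use.
apps = {
    "fft": {"mul_acc", "bit_rev", "butterfly"},
    "fir": {"mul_acc", "saturate"},
    "aes": {"sbox", "xor_block", "rot_word"},
    "des": {"sbox", "xor_block", "permute"},
}
print(cluster(apps, threshold=0.25))  # -> [['fft', 'fir'], ['aes', 'des']]
```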
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery, covering both subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited study of QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users' acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective video quality assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models achieve high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
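Of the three comparison metrics, PSNR is simple enough to state inline; a minimal sketch (with synthetic frame data) is:

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB, one of the objective baselines
    against which the A-QoE models were compared."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.full((720, 1280), 128, dtype=np.uint8)  # hypothetical flat frame
noise = np.random.default_rng(0).integers(-2, 3, ref.shape).astype(np.int16)
noisy = (ref + noise).clip(0, 255).astype(np.uint8)
print(f"{psnr(ref, noisy):.1f} dB")
```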
Abstract:
The geographic location of cloud data storage centres is an important issue for many organisations and individuals, due to various regulations that require data and operations to reside in specific geographic locations. Thus, cloud users may want to be sure that their stored data have not been relocated to unknown geographic regions, which may compromise the security of the data. Albeshri et al. (2012) combined proof-of-storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme involves unnecessary delay when utilising typical POS schemes, due to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computational overhead at the server side. We show how this can maintain the same level of security while achieving more accurate geographic assurance.
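The abstract omits protocol details; the flavour of the distance-bounding component can be sketched as timing rapid challenge-response rounds and converting the best round-trip time into an upper bound on distance. The transport object and fibre signal speed below are assumptions, not GeoProof specifics.

```python
import os
import time

C_FIBRE = 2e8  # approximate signal speed in optical fibre, metres/second

def distance_bound(channel, rounds=16):
    """Rapid phase of a distance-bounding protocol: the verifier times each
    short challenge-response exchange; the fastest round trip bounds how far
    away the prover can be. `channel.exchange(challenge)` is a hypothetical
    transport primitive that returns the prover's response."""
    best_rtt = float("inf")
    for _ in range(rounds):
        challenge = os.urandom(1)
        t0 = time.perf_counter()
        response = channel.exchange(challenge)  # verification of the response
        rtt = time.perf_counter() - t0          # against the protocol is omitted
        best_rtt = min(best_rtt, rtt)
    # One-way distance: half the round trip at signal speed.
    return best_rtt / 2 * C_FIBRE  # metres (upper bound on prover distance)
```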
Abstract:
For industrial wireless sensor networks, maintaining the routing path for a high packet delivery ratio is one of the key objectives in network operations. It is important both to provide a high data delivery rate and to guarantee timely delivery of data packets at the sink node. Most proactive routing protocols for sensor networks are based on simple periodic updates to distribute the routing information. A faulty link causes packet loss and retransmission at the source until periodic route update packets are issued and the link has been identified as broken. We propose a new proactive route maintenance process where the periodic update is backed up by a secondary layer of local updates repeated at shorter periods for timely discovery of broken links. The proposed route maintenance scheme improves the reliability of the network by decreasing packet loss due to delayed identification of broken links. We show by simulation that the proposed mechanism performs better than existing popular routing protocols (AODV, AOMDV and DSDV) in terms of end-to-end delay, routing overhead and packet reception ratio.
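A minimal sketch of the two-layer timer idea follows; the periods and timeout policy are hypothetical, not the paper's tuned values.

```python
import dataclasses
import time

@dataclasses.dataclass
class LinkMonitor:
    """Two-layer route maintenance: a slow global routing update backed by
    fast, lightweight local probes of each neighbour."""
    global_period: float = 30.0   # full routing update, as in plain periodic protocols
    local_period: float = 3.0     # short-period local neighbour probe
    last_seen: dict = dataclasses.field(default_factory=dict)

    def on_probe_reply(self, neighbour):
        """Record that a local probe to this neighbour was answered."""
        self.last_seen[neighbour] = time.monotonic()

    def broken_links(self, missed_probes=3):
        """Declare a link broken after several missed local probes, long
        before the next global update would reveal the failure."""
        now = time.monotonic()
        deadline = missed_probes * self.local_period
        return [n for n, t in self.last_seen.items() if now - t > deadline]
```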
Abstract:
We present two unconditionally secure protocols for private set disjointness tests. To provide intuition for our protocols, we give a naive example that applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the intersection cardinality; more specifically, it discloses a lower bound on it. By using Lagrange interpolation, we provide a protocol for the honest-but-curious case that does not reveal any additional information. Finally, we describe a protocol that is secure against malicious adversaries, in which a verification test is applied to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first to be designed without a generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
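For intuition about the algebra underneath such protocols (with all of the privacy machinery stripped away): a set can be encoded as the roots of a polynomial over a prime field, and disjointness holds iff that polynomial vanishes nowhere on the other party's set. A plain, non-private sketch:

```python
P = 2**61 - 1  # a prime modulus (Mersenne prime)

def set_poly_eval(alice_set, x):
    """Evaluate p(x) = prod over s in A of (x - s), mod P; the elements of
    A are exactly the roots of p."""
    result = 1
    for s in alice_set:
        result = result * (x - s) % P
    return result

def disjoint(alice_set, bob_set):
    """Disjoint iff p is nonzero on every element of Bob's set."""
    return all(set_poly_eval(alice_set, b) != 0 for b in bob_set)

print(disjoint({1, 2, 3}, {4, 5}))  # True
print(disjoint({1, 2, 3}, {3, 4}))  # False
```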
Abstract:
NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme which has been standardized by IEEE, but until recently its security analysis relied only on heuristic arguments. Recently, Stehlé and Steinfeld showed that a slight variant (which we call pNE) can be proven secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE, called NTRUCCA, that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and that incurs only a constant-factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model based on standard cryptographic assumptions. As an intermediate step, we present a construction of an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
Abstract:
We study the multicast stream authentication problem when an opponent can drop, reorder and inject data packets into the communication channel. In such a model, packet overhead and computational efficiency are two parameters to be taken into account when designing a multicast stream protocol. In this paper, we propose to use two families of erasure codes to deal with this problem, namely rateless codes and maximum distance separable (MDS) codes. Our constructions have the following advantages. First, the packet overhead is small. Second, the number of signature verifications to be performed at the receiver is O(1). Third, every receiver is able to recover all the original data packets emitted by the sender despite the losses and injections that occurred during transmission.
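To illustrate the MDS ingredient: the sketch below is a toy Reed-Solomon-style erasure code over a prime field, where any k of the n shares recover the k data symbols by Lagrange interpolation. The field size and framing are illustrative; real constructions operate on whole packets and are combined with signatures, which this sketch omits.

```python
import random

P = 2**61 - 1  # prime field modulus

def encode(data, n):
    """data: k field elements, the coefficients of a degree-(k-1) polynomial.
    Returns n shares (i, p(i)); losing up to n - k of them is harmless."""
    def poly(x):
        y = 0
        for c in reversed(data):
            y = (y * x + c) % P  # Horner evaluation
        return y
    return [(i, poly(i)) for i in range(1, n + 1)]

def _mul_linear(coeffs, xj):
    """Multiply a coefficient list (lowest degree first) by (x - xj) mod P."""
    out = [0] * (len(coeffs) + 1)
    for t, b in enumerate(coeffs):
        out[t + 1] = (out[t + 1] + b) % P
        out[t] = (out[t] - xj * b) % P
    return out

def decode(shares, k):
    """Recover the k coefficients from any k shares via Lagrange interpolation."""
    shares = shares[:k]
    result = [0] * k
    for i, (xi, yi) in enumerate(shares):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                basis = _mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for t, b in enumerate(basis):
            result[t] = (result[t] + scale * b) % P
    return result

shares = encode([5, 7, 11], n=6)   # k = 3 symbols spread over 6 packets
random.shuffle(shares)             # simulate loss and reordering
print(decode(shares[:3], k=3))     # -> [5, 7, 11] from any 3 surviving shares
```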