243 results for Verification


Relevance: 10.00%

Abstract:

Flood-related scientific and community-based data are rarely systematically collected and analysed in the Philippines. Over the last decades the Pagsangaan River Basin, Leyte, has experienced several flood events, yet documentation describing flood characteristics such as extent, duration or height is close to non-existent. To address this issue, computerised flood modelling was used to reproduce past events for which data were available for at least partial calibration and validation. The model was also used to provide scenario-based predictions under A1B climate change assumptions for the area. The most important input for flood modelling is a Digital Elevation Model (DEM) of the river basin. No accurate topographic maps or Light Detection And Ranging (LIDAR)-generated data are available for the Pagsangaan River. Therefore, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Map (GDEM), Version 1, was chosen as the DEM. Although its horizontal spatial resolution of 30 m is desirable, it contains substantial vertical errors. These were identified, different correction methods were tested, and the resulting DEM was used for flood modelling. The above-mentioned data were combined with cross-sections at various strategic locations of the river network, meteorological records, river water levels, and current velocities to develop the 1D-2D flood model. SOBEK was used as the modelling software to create different rainfall scenarios, including historic flooding events. Owing to the lack of scientific data for verifying model quality, interviews with local stakeholders served as the gauge for judging the quality of the generated flood maps. According to interviewees, the model reflects reality more accurately than previously available flood maps. The resulting flood maps are now used by the operations centre of a local flood early warning system for warnings and evacuation alerts.
Furthermore, these maps can serve as a basis for identifying flood hazard areas for spatial land use planning purposes.
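The abstract does not state which DEM correction methods were tested; as a minimal, hypothetical illustration of one common approach, a moving-median filter can suppress the kind of spurious elevation spikes found in ASTER GDEM v1 (grid values below are invented):

```python
import numpy as np

def median_filter_dem(dem, size=3):
    """Suppress spiky vertical errors in a DEM with a moving-median filter.

    dem: 2D array of elevations in metres; size: odd window width.
    """
    pad = size // 2
    padded = np.pad(dem, pad, mode="edge")
    out = np.empty_like(dem, dtype=float)
    rows, cols = dem.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Toy 30 m grid with one spurious spike, a typical GDEM v1 artefact.
dem = np.array([[10., 10., 10.],
                [10., 95., 10.],
                [10., 10., 10.]])
smoothed = median_filter_dem(dem)
print(smoothed[1, 1])  # spike replaced by the neighbourhood median: 10.0
```

A real workflow would also validate the corrected surface against surveyed cross-sections before hydraulic modelling.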

Relevance: 10.00%

Abstract:

Model-based testing (MBT) relies on models of a system under test and/or its environment to derive test cases for the system. This paper discusses the process of MBT and defines a taxonomy that covers the key aspects of MBT approaches. It is intended to help with understanding the characteristics, similarities and differences of those approaches, and with classifying the approach used in a particular MBT tool. To illustrate the taxonomy, a description of how three different examples of MBT tools fit into the taxonomy is provided.
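As a toy illustration of the MBT process the paper classifies, the sketch below (model and names are invented, not from the paper) derives test cases from a small finite-state model of a system under test by covering every transition:

```python
# Minimal model-based testing sketch: a finite-state model of the system
# under test, mapping (state, event) -> next state.
model = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "view"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def transition_tour(model, start):
    """Greedy walk covering every transition at least once (toy algorithm;
    real MBT tools compute connecting paths instead of 'teleporting')."""
    remaining = set(model)
    state, trace = start, []
    while remaining:
        step = next(((s, e) for (s, e) in remaining if s == state), None)
        if step is None:
            step = next(iter(remaining))   # simplification: jump to any
            state = step[0]                # uncovered transition's source
        trace.append(step)
        remaining.discard(step)
        state = model[step]
    return trace

tests = transition_tour(model, "logged_out")
print(len(tests))  # all 3 transitions covered
```

Each `(state, event)` pair in the trace corresponds to one abstract test step; a concrete MBT tool would then map these steps onto executable calls against the system under test.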

Relevance: 10.00%

Abstract:

In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
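The encode-then-pool pipeline can be sketched as follows. This is a simplified stand-in, not the paper's implementation: it uses a random dictionary and a few ISTA iterations for the l1-minimisation step, with invented dimensions throughout:

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(D, x, lam=0.1, iters=50):
    """A few ISTA iterations approximating the l1-regularised code of
    patch x over dictionary D (toy stand-in for a real l1 solver)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - (D.T @ (D @ a - x)) / L    # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return a

# Hypothetical setup: 16-dim patch vectors, 32-atom random dictionary.
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)

def region_descriptor(patches):
    """Average-pool the sparse codes of a region's patches; the pooling
    deliberately discards spatial order, as the paper describes."""
    return np.mean([ista(D, p) for p in patches], axis=0)

region_patches = [rng.standard_normal(16) for _ in range(8)]
desc = region_descriptor(region_patches)
print(desc.shape)  # (32,) -- one region descriptor
```

Because pooling is an average, the descriptor is identical for any ordering of the patches, which is the source of the robustness to misalignment noted in the abstract.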

Relevance: 10.00%

Abstract:

The growth of online services such as e-banking and webmail, in which users are verified by a username and password, is increasingly exploited by identity theft. Identity theft is a fraud in which someone pretends to be someone else in order to steal money or obtain other benefits. To overcome this problem, an additional security layer is required. Over the last decades, verifying users during login based on their keystroke dynamics has been proposed: an impostor then has to type in a similar way to the real user, in addition to knowing the username and password. However, verifying users at login is not enough, since a logged-in workstation or mobile device is vulnerable to impostors whenever the user leaves the machine. Users therefore need to be verified continuously, based on their activities. Within the last decade there has been growing interest in and use of biometric tools; however, these are often costly and require additional hardware. Behavioral biometrics, in which users are verified based on their keyboard and mouse activities, potentially presents a good solution. In this paper we discuss the problem of identity theft and propose behavioral biometrics as a solution. We survey existing studies, list the open challenges, and propose solutions.
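A minimal sketch of login-time keystroke verification, assuming a simple z-score comparison against an enrolled timing profile (the surveyed systems are far more elaborate; all latencies and the threshold are illustrative):

```python
import statistics

def enroll(samples):
    """Build a per-interval profile (mean, stdev) from enrolment typing
    samples; each sample is a list of inter-key latencies in ms."""
    return [(statistics.mean(col), statistics.stdev(col) or 1e-6)
            for col in zip(*samples)]

def verify(profile, attempt, threshold=2.0):
    """Accept if the mean absolute z-score of the attempt's timings is small."""
    z = [abs(t - mu) / sd for t, (mu, sd) in zip(attempt, profile)]
    return sum(z) / len(z) < threshold

# Hypothetical inter-key latencies (ms) for one passphrase, three enrolments.
enrolment = [[120, 95, 140, 110], [125, 90, 150, 105], [118, 99, 145, 112]]
profile = enroll(enrolment)
print(verify(profile, [122, 94, 143, 108]))   # True: genuine-looking attempt
print(verify(profile, [300, 40, 400, 20]))    # False: impostor-looking attempt
```

Even with matching credentials, the second attempt is rejected because its timing pattern deviates too far from the enrolled profile.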

Relevance: 10.00%

Abstract:

There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement of expensive devices and the risk of stolen bio-templates. Moreover, in existing approaches authentication is usually performed only once, at the start of a session. Non-intrusive, continuous monitoring of user activities emerges as a promising way to harden the authentication process via a further category: iii-2) how someone behaves. In recent years various keystroke-dynamics behavior-based approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches that allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions. Unsolved problems include, for instance, scalability, high response times and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
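The free-text, continuous setting can be sketched with a sliding window of digraph (key-pair) latencies scored against a per-user reference profile; the digraphs, latencies and threshold below are illustrative only, not the work's actual method:

```python
class ContinuousVerifier:
    """Toy free-text verifier: compares digraph latencies observed in a
    sliding window against a per-user reference profile."""

    def __init__(self, reference):
        self.reference = reference   # digraph -> expected latency (ms)

    def window_score(self, events):
        """Mean relative deviation of observed digraph latencies from the
        profile; lower means more similar to the enrolled user."""
        devs = []
        for digraph, latency in events:
            expected = self.reference.get(digraph)
            if expected:
                devs.append(abs(latency - expected) / expected)
        return sum(devs) / len(devs) if devs else float("inf")

reference = {"th": 80, "he": 95, "in": 88}        # invented profile
verifier = ContinuousVerifier(reference)
window = [("th", 84), ("he", 90), ("in", 92)]     # one monitoring window
print(verifier.window_score(window) < 0.25)       # True: matches the profile
```

In a deployed system, windows whose score exceeds the threshold would trigger re-authentication rather than an immediate lockout, keeping the monitoring transparent to the genuine user.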

Relevance: 10.00%

Abstract:

Model calculations, which include the effects of turbulence during subsequent solar nebula evolution after the collapse of a cool interstellar cloud, can reconcile some of the apparent differences between physical parameters obtained from theory and the cosmochemical record. Two important aspects of turbulence in a protoplanetary cloud are the growth and transport of solid grains. While the physical effects of the process can be calculated and compared with the probable remains of the nebula formation period, the more subtle effects on primitive grains and their survival in the cosmochemical record cannot be readily evaluated. The environment offered by the Space Station (or Space Shuttle) experimental facility can provide the vacuum and low-gravity conditions, for sufficiently long time periods, required for experimental verification of these cosmochemical models.

Relevance: 10.00%

Abstract:

Aim To provide an overview of key governance matters relating to medical device trials and practical advice for nurses wishing to initiate or lead them. Background Medical device trials, which are formal research studies that examine the benefits and risks of therapeutic, non-drug treatment medical devices, have traditionally been the purview of physicians and scientists. The role of nurses in medical device trials has historically been as data collectors or co-ordinators rather than as principal investigators. More recently, nurses have played an increasing role in initiating and leading medical device trials. Review Methods A review article of nurse-led trials of medical devices. Discussion Central to the quality and safety of all clinical trials is adherence to the International Conference on Harmonization Guidelines for Good Clinical Practice, the internationally agreed standard for the ethically and scientifically sound design, conduct and monitoring of a medical device trial, as well as the analysis, reporting and verification of the data derived from that trial. Key considerations include the class of the medical device, the type of medical device trial, the regulatory status of the device, implementation of standard operating procedures, obligations of the trial sponsor, indemnity of relevant parties, scrutiny of the trial conduct, trial registration, and reporting and publication of the results. Conclusion Nurse-led trials of medical devices are demanding but rewarding research enterprises. As nursing practice and research increasingly embrace technical interventions, it is vital that nurse researchers contemplating such trials understand and implement the principles of Good Clinical Practice to protect both study participants and the research team.

Relevance: 10.00%

Abstract:

A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in fact the result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process costs massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence process in a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups with a smaller number of damage candidates; the converged population of each group then evolves as an initial population of the next layer, where the groups merge into larger groups. In a damage detection process featuring ML-GA, parallel computation can be implemented, so the optimization performance and computational efficiency can be enhanced. In order to assess the proposed algorithm, the modal strain energy correlation (MSEC) has been considered as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model have been employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA only converges to a local solution.
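The layering idea can be sketched as follows. The objective here is a toy quadratic surrogate, not the MSEC index, and the damage vector, group split and GA parameters are all illustrative:

```python
import random
random.seed(1)

TRUE = [0, 0, 0.3, 0, 0, 0.5, 0, 0]   # invented "true" damage severities

def fitness(x):
    # Toy surrogate for a modal-strain-energy correlation objective.
    return -sum((a - b) ** 2 for a, b in zip(x, TRUE))

def ga(pop, idx, gens=60):
    """Simple elitist GA that evolves only the variables in `idx`."""
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: len(pop) // 2]            # keep the fitter half
        children = []
        while len(pop) + len(children) < 20:
            a, b = random.sample(pop, 2)
            child = a[:]
            for i in idx:
                child[i] = random.choice((a[i], b[i]))   # crossover
                if random.random() < 0.2:                # mutation
                    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop += children
    return sorted(pop, key=fitness, reverse=True)

def random_pop(idx, n=20):
    pop = []
    for _ in range(n):
        x = [0.0] * 8
        for i in idx:
            x[i] = random.random()
        pop.append(x)
    return pop

# Layer 1: two groups, each searching half of the damage candidates
# (these sub-searches could run in parallel).
g1 = ga(random_pop(range(0, 4)), range(0, 4))
g2 = ga(random_pop(range(4, 8)), range(4, 8))
# Layer 2: merge the converged groups into the initial population of a
# full search over all candidates.
merged = [[a[i] if i < 4 else b[i] for i in range(8)] for a, b in zip(g1, g2)]
best = ga(merged, range(8))[0]
print([round(v, 1) for v in best])
```

Because each layer-1 group searches a smaller candidate space, the merged layer-2 population starts near the optimum instead of from random guesses, which is the efficiency gain the abstract attributes to ML-GA.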

Relevance: 10.00%

Abstract:

Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of cryptographic tasks, in particular partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours.
We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme that has this property and a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
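The thesis's provably secure puzzle is not reproduced here; a generic Rivest-style repeated-squaring puzzle illustrates the core asymmetry it exploits, namely that the issuer verifies cheaply via a trapdoor while the solver must perform t sequential squarings (all parameters are toy-sized):

```python
# Toy repeated-squaring puzzle. The issuer knows the factorisation of N
# (the trapdoor) and can reduce the exponent modulo phi(N); the solver,
# lacking it, must perform t sequential modular squarings.
p, q = 10007, 10009              # secret primes (far too small for real use)
N, phi = p * q, (p - 1) * (q - 1)
g, t = 2, 5000                   # base and difficulty (number of squarings)

# Issuer: one fast modular exponentiation using the trapdoor.
expected = pow(g, pow(2, t, phi), N)

# Solver: t sequential squarings, with no known shortcut.
x = g % N
for _ in range(t):
    x = x * x % N

print(x == expected)  # True
```

The issuer's cost is essentially constant in t, whereas the solver's cost grows linearly with t, which is exactly the kind of imbalance that lets a server ration its computational resources under denial-of-service load.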

Relevance: 10.00%

Abstract:

Purpose: The precise shape of the three-dimensional dose distributions created by intensity-modulated radiotherapy means that the verification of patient position and setup is crucial to the outcome of the treatment. In this paper, we investigate and compare the use of two different image calibration procedures that allow extraction of patient anatomy from measured electronic portal images of intensity-modulated treatment beams. Methods and Materials: Electronic portal images of the intensity-modulated treatment beam delivered using the dynamic multileaf collimator technique were acquired. The images were formed by measuring a series of frames or segments throughout the delivery of the beams. The frames were then summed to produce an integrated portal image of the delivered beam. Two different methods for calibrating the integrated image were investigated with the aim of removing the intensity modulations of the beam. The first involved a simple point-by-point division of the integrated image by a single calibration image of the intensity-modulated beam delivered to a homogeneous polymethyl methacrylate (PMMA) phantom. The second calibration method is known as the quadratic calibration method and required a series of calibration images of the intensity-modulated beam delivered to different thicknesses of homogeneous PMMA blocks. Measurements were made using two different detector systems: a Varian amorphous silicon flat-panel imager and a Theraview camera-based system. The methods were tested first using a contrast phantom before images were acquired of intensity-modulated radiotherapy treatment delivered to the prostate and pelvic nodes of cancer patients at the Royal Marsden Hospital. Results: The results indicate that the calibration methods can be used to remove the intensity modulations of the beam, making it possible to see the outlines of bony anatomy that could be used for patient position verification. 
This was shown for both posteriorly and laterally delivered fields. Conclusions: Very little difference between the two calibration methods was observed, so the simpler division method, requiring only a single extra calibration measurement and much simpler computation, was favored. This new method could provide a complementary tool to existing position verification methods, and it has the advantage of being completely passive, requiring no further dose to the patient and using only the treatment fields.
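The simple division method amounts to an element-wise ratio of two images; a minimal sketch with invented pixel values:

```python
import numpy as np

# Point-by-point division of the integrated portal image by a calibration
# image of the same modulated beam through a homogeneous phantom. What
# remains is the patient-induced attenuation (toy 2x2 pixel grids).
integrated = np.array([[180.0, 240.0],
                       [ 90.0, 300.0]])   # IMRT beam through the patient
calibration = np.array([[200.0, 300.0],
                        [100.0, 300.0]])  # same beam through flat PMMA

ratio = np.where(calibration > 0, integrated / calibration, 0.0)
print(ratio)  # beam modulation cancels; residual structure is the anatomy
```

Because both images share the same intensity-modulation pattern, the ratio cancels it, leaving only the patient's attenuation map from which bony anatomy can be outlined.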

Relevance: 10.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
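The final stages described above (randomization, learnt quantization thresholds, binary encoding) can be sketched as follows. The projection matrix, features and training set are invented; real schemes key the projection and quantiser, and the dissertation's HOS/Radon method is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Randomization stage: a secret random projection of the feature vector.
P = rng.standard_normal((8, 16))

def project(feature):
    return P @ feature

# Quantizer training: learn per-dimension thresholds (here, medians) from
# projected training features -- the step the dissertation highlights.
training = [project(rng.standard_normal(16)) for _ in range(200)]
thresholds = np.median(training, axis=0)

def robust_hash(feature):
    """1-bit quantisation of the projected feature against the thresholds."""
    return (project(feature) > thresholds).astype(np.uint8)

x = rng.standard_normal(16)
h1 = robust_hash(x)
h2 = robust_hash(x + 0.01 * rng.standard_normal(16))  # minor perturbation
print(int(np.sum(h1 != h2)))  # Hamming distance stays small for similar inputs
```

The same property that gives robustness (nearby inputs map to nearby hashes) is also why the learnt thresholds leak information, motivating the dissertation's analysis of quantizer training as a security factor.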

Relevance: 10.00%

Abstract:

Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A research project has been conducted with the aim of developing concentrated plasticity methods suitable for practical advanced analysis of steel frame structures comprising non-compact sections. This paper contains a comprehensive set of analytical benchmark solutions for steel frames comprising non-compact sections, which can be used to verify the accuracy of simplified concentrated plasticity methods of advanced analysis. The analytical benchmark solutions were obtained using a distributed plasticity shell finite element model that explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. A brief description and verification of the shell finite element model is provided in this paper.

Relevance: 10.00%

Abstract:

Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for certain combination of instances. Heuristic rules and diversity measures have been commonly used for classifier selection but it is shown that optimal performance is achieved for the `best combination performance' rule. As the search complexity for this rule increases exponentially with the addition of classifiers, a measure - the sequential error ratio (SER) - is proposed in this work that is specifically adapted to the characteristics of sequential fusion architecture. The proposed measure can be used to select a classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone or internet based access control and to other systems such as multiple finger print and multiple handwriting sample based identity verification systems.
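The n-instance, m-sample architecture can be sketched as follows. The SER measure itself is not detailed in the abstract, so only the sequential accept/reject structure it serves is shown, with invented score functions and thresholds standing in for HMM speaker-model scores:

```python
# Sequential decision fusion: the claimant must pass every classifier
# stage (AND over the n instances), with up to m attempts at each stage
# (OR over the m samples).
def sequential_fusion(stages, samples, m=2, threshold=0.5):
    for classifier in stages:                 # AND over instances
        for _ in range(m):                    # OR over samples
            if classifier(next(samples)) >= threshold:
                break                         # this stage passed
        else:
            return False                      # m rejects -> overall reject
    return True                               # all stages passed

# Illustrative "scores" streamed from repeated utterances.
stages = [lambda s: s, lambda s: s]           # trivial stand-in score functions
genuine = iter([0.7, 0.8])                    # passes each stage first try
impostor = iter([0.2, 0.1])                   # exhausts m attempts at stage 1
print(sequential_fusion(stages, genuine))     # True
print(sequential_fusion(stages, impostor))    # False
```

The trade-off the abstract mentions is visible here: raising m lowers false rejects (more chances per stage) while adding stages lowers false alarms (more independent checks), and classifier selection decides which instances fill those stages.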

Relevance: 10.00%

Abstract:

Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.

Relevance: 10.00%

Abstract:

New Zealand and Australia are leading the world in terms of automated land registry systems. Landonline was introduced some ten years ago in New Zealand, and the Electronic Conveyancing National Law (ECNL) is to be released over the next few years in support of a national electronic conveyancing system to be used throughout Australia. With the assistance of three proof requirements developed for this purpose, this article measures the integrity of both systems against the old, manual Torrens system. The authors take the position that any introduced system should have at least the same level of integrity and safety as the originally conceived manual system. The authors argue that both Landonline and the ECNL, as presently set up, have less credibility than the manual system as it was designed to operate, leading to the possibility of increased fraud or misuse.