923 results for correctness verification


Relevance: 10.00%

Abstract:

Embedded real-time programs rely on external interrupts to respond to events in their physical environment in a timely fashion. Formal program verification theories, such as the refinement calculus, are intended for development of sequential, block-structured code and do not allow for asynchronous control constructs such as interrupt service routines. In this article we extend the refinement calculus to support formal development of interrupt-dependent programs. To do this we: use a timed semantics, to support reasoning about the occurrence of interrupts within bounded time intervals; introduce a restricted form of concurrency, to model composition of interrupt service routines with the main program they may preempt; introduce a semantics for shared variables, to model contention for variables accessed by both interrupt service routines and the main program; and use real-time scheduling theory to discharge timing requirements on interruptible program code.
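
As an illustration of the real-time scheduling theory the abstract invokes to discharge timing requirements, the sketch below applies the standard fixed-priority response-time recurrence R_i = C_i + sum over higher-priority tasks of ceil(R_i / T_j) * C_j to a hypothetical task set of interrupt handlers and a main task; the task set is invented for illustration and is not taken from the article.

# Sketch (hypothetical task set): standard fixed-priority response-time analysis,
# R_i = C_i + sum_{j in hp(i)} ceil(R_i / T_j) * C_j, iterated to a fixed point.
from math import ceil

def response_time(tasks, i):
    """tasks: list of (C, T) pairs sorted by decreasing priority; returns R_i or None."""
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        interference = sum(ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_new = C_i + interference
        if R_new == R:
            return R if R <= T_i else None   # None: deadline (= period) missed
        R = R_new

# Example: two interrupt handlers plus a main task (times in microseconds).
tasks = [(50, 1000), (150, 5000), (400, 10000)]
for i in range(len(tasks)):
    print(i, response_time(tasks, i))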

Relevance: 10.00%

Abstract:

Client puzzles are moderately hard cryptographic problems (neither easy nor impossible to solve) that can be used as a counter-measure against denial-of-service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial-of-service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than Rivest et al.'s time-lock puzzle.
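
For context, the sketch below illustrates the repeated-squaring (Rivest-style time-lock) construction that the abstract uses as a comparison baseline: the solver must perform t sequential modular squarings, while the puzzle generator, who knows the factorisation of n, can compute and verify the answer with one short exponentiation. The toy primes and parameters are illustrative only and far too small for real use; this is not the authors' proposed puzzle.

# Sketch: Rivest-style non-parallelisable puzzle via repeated squaring mod n = p*q.
# Toy parameters only; real deployments use large random primes.
p, q = 1000003, 1000033
n, phi = p * q, (p - 1) * (q - 1)
t = 100000            # number of sequential squarings the solver must perform
x = 2                 # puzzle base

# Generator/verifier shortcut: knows phi(n), so the exponent 2^t can be reduced first.
expected = pow(x, pow(2, t, phi), n)

# Solver: no trapdoor, must square sequentially (inherently non-parallelisable).
y = x
for _ in range(t):
    y = (y * y) % n

assert y == expected
print("solution", y, "verified")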

Relevance: 10.00%

Abstract:

In the long term, with the development of skill, knowledge, exposure and confidence within the engineering profession, rigorous analysis techniques have the potential to become a reliable and far more comprehensive method for the design and verification of the structural adequacy of OPS, write Nimal J Perera, David P Thambiratnam and Brian Clark. This paper explores the potential to enhance operator safety of self-propelled mechanical plant subjected to roll-over and the impact of falling objects, using the non-linear and dynamic response simulation capabilities of analytical processes to supplement the quasi-static testing methods prescribed in International and Australian Codes of Practice for bolt-on Operator Protection Systems (OPS) that are post-fitted. The paper is based on research carried out by the authors at the Queensland University of Technology (QUT) over a period of three years, involving instrumented prototype tests, scale-model tests in the laboratory and rigorous analysis using validated Finite Element (FE) models. The FE codes used were ABAQUS for implicit analysis and LS-DYNA for explicit analysis. The rigorous analysis and dynamic simulation technique described in the paper can be used to investigate the structural response to accident scenarios such as multiple roll-overs, the impact of multiple objects and combinations of such events, and thereby enhance the safety and performance of Roll Over and Falling Object Protection Systems (ROPS and FOPS). The analytical techniques are based on sound engineering principles and well-established practice for investigating dynamic impact on self-propelled vehicles, and they are used in many other similar applications where experimental techniques are not feasible.
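
As a rough illustration of the explicit dynamic simulation approach mentioned above (the class of time-stepping scheme used by codes such as LS-DYNA), the sketch below integrates a single-degree-of-freedom model of a falling mass striking a spring-damper standing in for a protective member. All parameters are hypothetical and the model is not drawn from the paper.

# Sketch (illustrative parameters): explicit time-stepping of a single-degree-of-freedom
# model of a 500 kg object falling 3 m onto a spring-damper representing an OPS member.
import numpy as np

m, k, c = 500.0, 2.0e6, 2.0e3        # mass [kg], contact stiffness [N/m], damping [N s/m]
g = 9.81
v = -(2 * g * 3.0) ** 0.5            # impact velocity after a 3 m free fall [m/s]
dt = 1.0e-5                          # step well below the stability limit ~2/omega_n
n_steps = 20000

u = np.zeros(n_steps)                # displacement of the falling mass [m]
peak_force = 0.0
for i in range(1, n_steps):
    in_contact = u[i - 1] <= 0.0     # the member resists only while compressed
    f_int = (k * u[i - 1] + c * v) if in_contact else 0.0
    a = -f_int / m - g               # equation of motion: m*a = -f_int - m*g
    v += a * dt
    u[i] = u[i - 1] + v * dt
    peak_force = max(peak_force, abs(f_int))

print(f"peak deflection {u.min()*1000:.1f} mm, peak contact force {peak_force/1e3:.0f} kN")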

Relevance: 10.00%

Abstract:

Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of a virtual environment's content is carried entirely by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D game and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
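
To make the idea of quantifying image correctness concrete, the sketch below computes a few generic per-pixel error measures between a rendered frame and a reference image. It is a simplification for illustration only, not the thesis's model-based or connectionist approach.

# Sketch: a generic way to quantify visual (in)consistency between a rendered
# frame and a reference image -- an illustration of turning "correctness" into
# a number, not the thesis's method.
import numpy as np

def visual_error(rendered: np.ndarray, reference: np.ndarray) -> dict:
    """Both images are HxWx3 float arrays with values in [0, 1]."""
    diff = rendered.astype(np.float64) - reference.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    psnr = float("inf") if mse == 0 else 10 * np.log10(1.0 / mse)
    # Fraction of pixels whose colour deviates noticeably from the reference.
    bad = float(np.mean(np.linalg.norm(diff, axis=-1) > 0.1))
    return {"mse": mse, "psnr_db": psnr, "bad_pixel_ratio": bad}

# Toy usage: a reference frame and a slightly corrupted rendering.
ref = np.random.default_rng(0).random((64, 64, 3))
out = ref.copy()
out[10:20, 10:20] += 0.3            # simulated rendering defect
print(visual_error(np.clip(out, 0, 1), ref))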

Relevance: 10.00%

Abstract:

Current research in secure messaging for Vehicular Ad hoc Networks (VANETs) appears to focus on employing a digital certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates transmission delay and introduces a time-consuming verification process to VANET communications. This paper proposes a non-certificate-based public key management scheme for VANETs. A comprehensive evaluation of the performance and scalability of the proposed public key management regime is presented, and it is compared to a certificate-based PKC through a number of quantified analyses and simulations. Not only does this paper demonstrate that the proposal can maintain security, it also shows that it can improve overall performance and scalability at a lower cost than the certificate-based PKC. It is believed that the proposed scheme will add a new dimension to key management and verification services for VANETs.
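
As a back-of-envelope illustration of the verification-overhead argument, the sketch below compares total per-message verification delay under a certificate-based scheme (certificate check plus signature check) with a non-certificate-based scheme (key lookup plus signature check). The millisecond figures are hypothetical placeholders, not measurements from the paper.

# Sketch (hypothetical timings): per-message verification overhead, certificate-based
# vs. non-certificate-based key management in a VANET.
CERT_VERIFY_MS = 4.0      # verify the sender's certificate chain
SIG_VERIFY_MS = 2.0       # verify the message signature itself
KEY_LOOKUP_MS = 0.2       # resolve the sender's public key without a certificate

def total_delay_ms(n_msgs: int, certificate_based: bool) -> float:
    per_msg = (CERT_VERIFY_MS + SIG_VERIFY_MS) if certificate_based else (KEY_LOOKUP_MS + SIG_VERIFY_MS)
    return n_msgs * per_msg

for n in (100, 1000):
    print(n, "msgs:", total_delay_ms(n, True), "ms (cert) vs", total_delay_ms(n, False), "ms (non-cert)")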

Relevance: 10.00%

Abstract:

Purpose: James Clerk Maxwell is usually recognized as being the first, in 1854, to consider using inhomogeneous media in optical systems. However, some fifty years earlier Thomas Young, stimulated by his interest in the optics of the eye and accommodation, had already modeled some applications of gradient-index optics. These applications included using an axial gradient to provide spherical-aberration-free optics and a spherical gradient to describe the optics of the atmosphere and of the eye lens. We evaluated Young's contributions. Method: We attempted to derive Young's equations for axial and spherical refractive index gradients. Ray tracing was used to confirm the accuracy of the formulae. Results: We did not confirm Young's equation for the axial gradient providing aberration-free optics, but derived a slightly different equation. We confirmed the correctness of his equations for the deviation of rays in a spherical index gradient and for the focal length of a lens with a nucleus of fixed index surrounded by a cortex of index reducing towards the edge. Young claimed that the equation for focal length applied to a lens with part of the constant-index nucleus of the sphere removed, such that the loss of focal length was a quarter of the thickness removed, but this is not strictly correct. Conclusion: Young's theoretical work in gradient-index optics received no acknowledgement from either his contemporaries or later authors. While his model of the eye lens is not an accurate physiological description of the human lens, with the index reducing least quickly at the edge, it represented a bold attempt to approximate the characteristics of the lens. Thomas Young's work deserves wider recognition.
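
A modern way to check such results is numerical ray tracing through a gradient-index medium using the standard ray equation d/ds(n dr/ds) = grad n. The sketch below does this for an illustrative spherically symmetric index profile, which is a stand-in rather than Young's actual lens model.

# Sketch: numerical ray tracing in a spherically symmetric gradient-index medium
# using the ray equation d/ds(n dr/ds) = grad n. The parabolic index profile is
# an illustrative stand-in, not Young's model of the eye lens.
import numpy as np

n0, a, R = 1.42, 0.05, 3.0e-3          # centre index, index drop to the edge, lens radius [m]

def n(r_vec):
    r2 = min(float(np.dot(r_vec, r_vec)), R**2)   # clamp: constant index outside the lens
    return n0 - a * r2 / R**2

def grad_n(r_vec, h=1e-7):
    return np.array([(n(r_vec + h * e) - n(r_vec - h * e)) / (2 * h) for e in np.eye(3)])

def trace(r, direction, ds=1e-6, steps=6000):
    """March along arc length with d = n * dr/ds: dr/ds = d/n, dd/ds = grad n."""
    d = direction * n(r)
    path = [r.copy()]
    for _ in range(steps):
        r = r + ds * d / n(r)
        d = d + ds * grad_n(r)
        path.append(r.copy())
    return np.array(path)

ray = trace(np.array([0.0, 0.5e-3, -2.9e-3]), np.array([0.0, 0.0, 1.0]))
print("ray end point [m]:", ray[-1])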

Relevance: 10.00%

Abstract:

Digital human modelling (DHM) has today matured from research into industrial application. In the automotive domain, DHM has become a commonly used tool in virtual prototyping and human-centred product design. While this generation of DHM supports the ergonomic evaluation of new vehicle design during early design stages of the product, by modelling anthropometry, posture, motion or predicting discomfort, the future of DHM will be dominated by CAE methods, realistic 3D design, and musculoskeletal and soft tissue modelling down to the micro-scale of molecular activity within single muscle fibres. As a driving force for DHM development, the automotive industry has traditionally used human models in the manufacturing sector (production ergonomics, e.g. assembly) and the engineering sector (product ergonomics, e.g. safety, packaging). In product ergonomics applications, DHM share many common characteristics, creating a unique subset of DHM. These models are optimised for a seated posture, interface to a vehicle seat through standardised methods and provide linkages to vehicle controls. As a tool, they need to interface with other analytic instruments and integrate into complex CAD/CAE environments. Important aspects of current DHM research are functional analysis, model integration and task simulation. Digital (virtual, analytic) prototypes or digital mock-ups (DMU) provide expanded support for testing and verification and consider task-dependent performance and motion. Beyond rigid body mechanics, soft tissue modelling is evolving to become standard in future DHM. When addressing advanced issues beyond the physical domain, for example anthropometry and biomechanics, modelling of human behaviours and skills is also integrated into DHM. Latest developments include a more comprehensive approach through implementing perceptual, cognitive and performance models, representing human behaviour on a non-physiologic level. Through integration of algorithms from the artificial intelligence domain, a vision of the virtual human is emerging.

Relevance: 10.00%

Abstract:

Construction is undoubtedly the most dangerous industry in Hong Kong, being responsible for 76 percent of all fatal industrial accidents in the region – around twenty times more than any other industry. In this paper, it is argued that while this rate can largely be reduced by improved production practices in isolation from the project's physical design, there is also some scope for the design team to contribute to site safety. A new safety assessment method, the Virtual Safety Assessment System (VSAS), is described to offer this assistance. It involves presenting individual construction workers with 3D virtual risky scenarios from their project and a range of possible actions to select from. The method provides an analysis of results, including an assessment of the correctness or otherwise of the user's selections, contributing to an iterative process of retraining and testing until a satisfactory level of knowledge and skill is achieved.
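
A minimal sketch of the assess-and-retrain loop just described follows; the scenarios, actions and pass threshold are hypothetical and are not taken from VSAS itself.

# Sketch (hypothetical data): score a worker's scenario selections against the safe
# actions and repeat training rounds until a pass threshold is reached.
PASS_THRESHOLD = 0.8

def score_round(selected: dict, safe_actions: dict) -> float:
    correct = sum(1 for scenario, action in selected.items() if action == safe_actions[scenario])
    return correct / len(safe_actions)

safe_actions = {"scaffold_edge": "attach_harness", "crane_lift": "clear_exclusion_zone"}
attempts = [  # selections made by one worker over successive training rounds
    {"scaffold_edge": "lean_over", "crane_lift": "clear_exclusion_zone"},
    {"scaffold_edge": "attach_harness", "crane_lift": "clear_exclusion_zone"},
]

for round_no, selected in enumerate(attempts, start=1):
    s = score_round(selected, safe_actions)
    print(f"round {round_no}: score {s:.0%}")
    if s >= PASS_THRESHOLD:
        print("satisfactory level reached")
        break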

Relevance: 10.00%

Abstract:

Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of them. This can detrimentally impact the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics that they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link these to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.

Relevance: 10.00%

Abstract:

The work presented in this poster outlines the steps taken to model a 4 mm conical collimator (BrainLab, Germany) on a Novalis Tx linear accelerator (Varian, Palo Alto, USA) capable of producing a 6 MV photon beam for the treatment of Stereotactic Radiosurgery (SRS) patients. Verification of this model was performed by measurements in liquid water and in virtual water. The measurements involved scanning depth dose curves and profiles in a water tank, plus measurement of output factors in virtual water using Gafchromic® EBT3 film.
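
For illustration, a relative output factor of the kind measured here is the ratio of the small-field dose reading to the reference-field reading under matched conditions; the sketch below shows that calculation with placeholder values, not data from the poster.

# Sketch (illustrative readings): relative output factor as the ratio of the dose
# reading for the 4 mm cone to the reading for the reference field, measured at
# the same depth and monitor units. Values are placeholders.
readings = {              # net film response converted to dose [Gy]
    "ref_field": 2.00,
    "cone_4mm":  1.34,
}

output_factor = readings["cone_4mm"] / readings["ref_field"]
print(f"relative output factor (4 mm cone): {output_factor:.3f}")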

Relevance: 10.00%

Abstract:

The time-consuming and labour-intensive task of identifying individuals in surveillance video is often challenged by poor resolution and the sheer volume of stored video. Faces or identifying marks such as tattoos are often captured at too coarse a resolution for direct matching by machine or human vision. Object tracking and super-resolution can then be combined to facilitate the automated detection and enhancement of areas of interest. The object tracking process enables the automatic detection of people of interest, greatly reducing the amount of data for super-resolution. Smaller regions such as faces can also be tracked, and a number of instances of such regions can then be utilized to obtain a super-resolved version for matching. The performance improvement from super-resolution is demonstrated using a face verification task. It is shown that there is a consistent improvement of approximately 7% in verification accuracy, using both Eigenface and Elastic Bunch Graph Matching approaches for automatic face verification, starting from faces with an eye-to-eye distance of 14 pixels. The visual improvement in image fidelity of super-resolved images over low-resolution and interpolated images is demonstrated on a small database. Current research and future directions in this area are also summarized.
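
The sketch below is a heavily simplified version of the multi-frame idea: upsample several tracked low-resolution crops, align them, and average. Real super-resolution pipelines use sub-pixel registration and reconstruction; this only illustrates why multiple tracked instances help and is not the paper's method.

# Sketch: simplified multi-frame super-resolution -- upsample each tracked
# low-resolution face crop, align it to the first frame by an integer pixel shift,
# and average the aligned frames.
import numpy as np
from scipy.ndimage import zoom, shift

def simple_superres(crops, factor=4):
    upsampled = [zoom(c.astype(np.float64), factor, order=3) for c in crops]
    ref = upsampled[0]
    aligned = [ref]
    for img in upsampled[1:]:
        # crude integer alignment: pick the shift that minimises the squared difference
        best = min(((dy, dx) for dy in range(-3, 4) for dx in range(-3, 4)),
                   key=lambda s: np.sum((shift(img, s) - ref) ** 2))
        aligned.append(shift(img, best))
    return np.mean(aligned, axis=0)

# Toy usage with random "frames"; real input would be tracked face regions.
rng = np.random.default_rng(1)
frames = [rng.random((16, 16)) for _ in range(5)]
print(simple_superres(frames).shape)   # -> (64, 64)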

Relevance: 10.00%

Abstract:

Voltage drop and rise at network peak and off-peak periods, along with voltage unbalance, are the major power quality problems in low voltage distribution networks. Utilities usually try to adjust transformer tap changers as a solution to voltage drop, and to distribute loads equally among phases as a solution to the voltage unbalance problem. On the other hand, ever-increasing energy demand, along with the need for cost reduction and higher reliability, is driving modern power systems towards Distributed Generation (DG) units. These can take the form of small rooftop photovoltaic cells (PV), Plug-in Electric Vehicles (PEVs) or Micro Grids (MGs). Rooftop PVs, typically with power levels ranging from 1-5 kW and installed by householders, are gaining popularity due to their financial benefits for the householders. PEVs will also soon emerge in residential distribution networks; they behave as a large residential load while being charged, while later generations are expected to support the network as small DG units that transfer the energy stored in their batteries back into the grid. Furthermore, MGs, which are clusters of loads and several DG units such as diesel generators, PVs, fuel cells and batteries, have recently been introduced into distribution networks. Voltage unbalance in the network can increase due to uncertainties in the connection points of PVs and PEVs, their nominal capacities and their times of operation. It is therefore of high interest to investigate voltage unbalance in these networks as a result of the integration of MGs, PVs and PEVs into low voltage networks. In addition, the network might experience non-standard voltage drop due to high penetration of PEVs being charged at night, or non-standard voltage rise due to high penetration of PVs and PEVs feeding electricity back into the grid during off-peak periods. In this thesis, a voltage unbalance sensitivity analysis and stochastic evaluation are carried out for PVs installed by householders, with installation point, nominal capacity and penetration level treated as uncertainties. A similar analysis is carried out for PEV penetration with the network operating in two different modes: grid-to-vehicle and vehicle-to-grid. Conventional methods for improving voltage unbalance in these networks are then discussed, followed by new and efficient methods for improving the voltage profile at network peak and off-peak periods and for reducing voltage unbalance. Voltage unbalance reduction is also investigated for MGs, and new improvement methods are proposed and applied to the MG test bed planned to be established at Queensland University of Technology (QUT). MATLAB and PSCAD/EMTDC simulation software is used to verify the analyses and proposals.
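
For reference, voltage unbalance in such studies is commonly quantified as the voltage unbalance factor (VUF): the ratio of negative- to positive-sequence voltage obtained from the symmetrical-component (Fortescue) transform. The sketch below computes it for illustrative phase voltages, not results from the thesis.

# Sketch: voltage unbalance factor (VUF) = |V_negative_sequence| / |V_positive_sequence|
# from the Fortescue transform. The phase voltages below are illustrative.
import numpy as np

a = np.exp(2j * np.pi / 3)                    # 120-degree rotation operator

def vuf_percent(va, vb, vc):
    v1 = (va + a * vb + a**2 * vc) / 3        # positive-sequence component
    v2 = (va + a**2 * vb + a * vc) / 3        # negative-sequence component
    return 100 * abs(v2) / abs(v1)

# Phase voltages [V] for a lightly unbalanced LV feeder (e.g. single-phase PV on phase a).
va = 245 * np.exp(1j * np.deg2rad(0))
vb = 230 * np.exp(1j * np.deg2rad(-120))
vc = 228 * np.exp(1j * np.deg2rad(121))
print(f"VUF = {vuf_percent(va, vb, vc):.2f} %")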

Relevance: 10.00%

Abstract:

The thesis presented in this paper is that the land frauds committed by Matthew Perrin in Queensland and inflicted upon Roger Mildenhall in Western Australia demonstrate the need for urgent procedural reform of the conveyancing process. Should this not occur, calls to reform the substantive principles of the Torrens system will be heard throughout the jurisdictions that adopt title by registration, particularly in those places where immediate indefeasibility is still the norm. This paper closely examines the factual matrix behind both of these frauds and asks what steps should have been taken to prevent them. With 2012 bringing Australian legislation embedding a national e-conveyancing system and a new Land Transfer Act for New Zealand, we ask what legislative measures should be introduced to minimise the potential for such fraud. In undertaking this study, we reflect on whether the activities of Perrin and the criminals responsible for stealing Mildenhall's land would have succeeded under the present system of automated registration used in New Zealand.

Relevance: 10.00%

Abstract:

Food modelling systems such as the Core Foods and the Australian Guide to Healthy Eating are frequently used as nutritional assessment tools for menus in 'well' groups (such as boarding schools, prisons and mental health facilities), with the draft Foundation and Total Diets (FATD) being the latest revision. The aim of this paper is to apply the FATD to an assessment of food provision in a long-stay, 'well' group setting to determine its usefulness as a tool. A detailed menu review was conducted in a 1000-bed male prison, including verification of all recipes. Full diet histories, covering foods consumed from the menu and self-funded snacks, were collected for 106 prisoners. Both the menu and the diet histories were analysed according to core foods, with recipes used to assist in quantifying mixed dishes. Average core food intakes were compared with the Foundation Diet recommendations (FDR) for males. Results showed that the standard menu provided sufficient quantity for 8 of 13 FDRs; however, it was low in nuts, legumes and refined cereals, and marginally low in fruits and orange vegetables. The average prisoner diet achieved 9 of 13 FDRs, notably with margarines and oils at less than half, and legumes at one-seventh, of the recommended amounts. Overall, although the menu and prisoner diets could easily be assessed using the FDRs, provision was not consistent with the recommendations. In long-stay settings, other Nutrient Reference Values not modelled in the FATD need consideration, in particular the Suggested Dietary Targets, and professional judgement is required in interpretation.
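
A minimal sketch of the comparison step, average daily serves per core food group checked against the Foundation Diet recommendations, is shown below; the serve counts and targets are placeholders, not figures from the study.

# Sketch (placeholder numbers): comparing average daily serves per core food group
# against Foundation Diet recommendations (FDR) and flagging shortfalls.
fdr_targets = {"wholegrain cereals": 6, "legumes": 1, "nuts": 1, "fruit": 2}
menu_serves = {"wholegrain cereals": 6.5, "legumes": 0.15, "nuts": 0.1, "fruit": 1.8}

for group, target in fdr_targets.items():
    provided = menu_serves.get(group, 0.0)
    status = "meets" if provided >= target else "below"
    print(f"{group:20s} {provided:4.1f} / {target} serves -> {status}")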

Relevance: 10.00%

Abstract:

This paper investigates the effects of limited speech data in the context of speaker verification using a probabilistic linear discriminant analysis (PLDA) approach. Being able to reduce the length of required speech data is important to the development of automatic speaker verification systems in real-world applications. When sufficient speech is available, previous research has shown that heavy-tailed PLDA (HTPLDA) modeling of speakers in the i-vector space provides state-of-the-art performance; however, the robustness of HTPLDA to limited speech resources in development, enrolment and verification is an important issue that has not yet been investigated. In this paper, we analyze speaker verification performance with regard to the duration of utterances used both for speaker evaluation (enrolment and verification) and for score normalization and PLDA modeling during development. Two different approaches to total-variability representation are analyzed within the PLDA approach to show improved performance in short-utterance mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development. The results presented in this paper, using the NIST 2008 Speaker Recognition Evaluation dataset, suggest that the HTPLDA system continues to achieve better performance than Gaussian PLDA (GPLDA) as evaluation utterance lengths are decreased. We also highlight the importance of matching durations for score normalization and PLDA modeling to the expected evaluation conditions. Finally, we found that a pooled total-variability approach to PLDA modeling can achieve better performance than the traditional concatenated total-variability approach for short utterances in mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development.
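
For background, the sketch below shows a Gaussian PLDA verification score under the two-covariance model: the log-likelihood ratio between the same-speaker and different-speaker hypotheses for a pair of i-vectors. The dimensions and covariances are toy values; this illustrates GPLDA scoring in general, not the paper's HTPLDA system.

# Sketch: Gaussian PLDA verification score under the two-covariance model.
# For i-vectors x1, x2 with between-speaker covariance B and within-speaker
# covariance W, score = log p([x1;x2] | same speaker) - log p([x1;x2] | different speakers).
import numpy as np
from scipy.stats import multivariate_normal

def plda_llr(x1, x2, B, W):
    d = len(x1)
    joint = np.concatenate([x1, x2])
    tot = B + W
    cov_same = np.block([[tot, B], [B, tot]])       # shared speaker factor -> cross-covariance B
    cov_diff = np.block([[tot, np.zeros((d, d))], [np.zeros((d, d)), tot]])
    return (multivariate_normal.logpdf(joint, mean=np.zeros(2 * d), cov=cov_same)
            - multivariate_normal.logpdf(joint, mean=np.zeros(2 * d), cov=cov_diff))

rng = np.random.default_rng(0)
d = 5                                   # toy i-vector dimension (real systems use ~400-600)
B = np.eye(d) * 0.7                     # between-speaker (speaker variability)
W = np.eye(d) * 0.3                     # within-speaker (channel/session variability)
spk = rng.multivariate_normal(np.zeros(d), B)
enrol = spk + rng.multivariate_normal(np.zeros(d), W)
test_same = spk + rng.multivariate_normal(np.zeros(d), W)
test_diff = rng.multivariate_normal(np.zeros(d), B + W)
print("same-speaker LLR:", plda_llr(enrol, test_same, B, W))
print("diff-speaker LLR:", plda_llr(enrol, test_diff, B, W))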