971 results for Data integrity.
Abstract:
[ES] The project is aimed at achieving secure wireless communication for a network of IP sensors. On the one hand, the 6LoWPAN protocol allows the data to be transmitted over IPv6; on the other, the LADON protocol provides the security services of authentication, data integrity, authorization and access control.
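The abstract names the security services provided (authentication, data integrity, authorization, access control) but not LADON's message formats, so the sketch below only illustrates generic per-message integrity protection for sensor data sent over IPv6: an HMAC tag computed with a shared key and verified before a reading is trusted. The key, field names and port are assumptions, not part of the described project.

```python
# Illustrative sketch only: generic HMAC-based integrity protection for a sensor
# reading sent over IPv6 (not the actual LADON protocol, whose formats the
# abstract does not describe).
import hashlib
import hmac
import json
import socket

SHARED_KEY = b"example-shared-key"   # hypothetical key established by the security protocol
TAG_LEN = 32                         # length of an HMAC-SHA256 tag in bytes

def protect(reading: dict) -> bytes:
    """Serialize a sensor reading and append an HMAC-SHA256 integrity tag."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> dict:
    """Recompute the tag and reject tampered messages before using the data."""
    payload, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return json.loads(payload)

# Sending over IPv6, which is what 6LoWPAN ultimately exposes to the application:
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
message = protect({"sensor": "node-1", "temperature_c": 21.5})
sock.sendto(message, ("::1", 5683))
assert verify(message)["sensor"] == "node-1"
```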
Abstract:
Mobile agent technology is a new paradigm for future network computing and is particularly well suited to electronic commerce. Its security is a major factor determining whether mobile agent technology can be widely adopted, and data integrity is one of the key problems. The design and security analysis of mobile agent data integrity protection protocols are a central and difficult topic in mobile agent security research. Starting from a study of data integrity properties in the mobile agent environment, this thesis gives a new definition of mobile agent data integrity and, on that basis, presents an effective method for analysing the integrity properties of mobile agent protocols; the method is validated on representative protocols, and a configurable mobile agent data integrity protection protocol is designed to address the flaws uncovered in those analyses. The main contributions are: (1) A redefinition of mobile agent data integrity. By distinguishing security goals and execution environments, the connections and differences between mobile agent data integrity and traditional message integrity are analysed in terms of both node data integrity and path integrity, and a new characterization and formal description of mobile agent data integrity is given, providing a basis for the formal verification and design of such protocols. (2) A CSP-based formal model of typical mobile agent data integrity protection protocols. By classifying participants according to their level of trust, a generic model structure for this class of protocols is established; participant processes and message sets are described in CSP, yielding a formal model that can represent protocol executions over multiple rounds and with an indeterminate number of participants. (3) A formal specification of mobile agent path integrity and an analysis method based on order functions. The specification relates path integrity to the uniqueness of path evidence; a path integrity proof theorem based on order functions gives a method for proving the uniqueness of mobile agent path evidence, forming a complete formal method for analysing mobile agent path integrity. (4) New attacks on typical mobile agent data integrity protection protocols. Applying the order-function-based analysis method to the PM protocol and the Cheng-Wei protocol reveals flaws in their path integrity protection, and probabilistic attacks exploiting those flaws are constructed, validating the model and the analysis method. (5) Security conditions for mobile agent data integrity protection protocols and the design of a configurable mobile agent integrity protection protocol. Under certain conditions the protocol prevents the newly discovered attacks, and formal analysis under those conditions shows that it guarantees data integrity.
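The abstract does not reproduce the message formats of the PM or Cheng-Wei protocols, so the sketch below only illustrates the hash-chaining idea that underlies many mobile agent data integrity protections: each visited host binds its offer to the chain built so far, and the originator recomputes the chain to detect modification or re-ordering. The function and field names are illustrative assumptions.

```python
# Minimal sketch of hash chaining for mobile agent data integrity (a generic
# illustration, not the thesis's configurable protocol).
import hashlib

def chain_entry(prev_digest: bytes, host_id: str, offer: bytes) -> bytes:
    """Each visited host binds its data to the chain so far: any later
    modification or re-ordering changes every subsequent digest."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(host_id.encode())
    h.update(offer)
    return h.digest()

def verify_chain(seed: bytes, entries: list[tuple[str, bytes]], final_digest: bytes) -> bool:
    """The agent's originator recomputes the chain over the collected data."""
    digest = seed
    for host_id, offer in entries:
        digest = chain_entry(digest, host_id, offer)
    return digest == final_digest

# Usage: the originator issues a fresh seed; hosts extend the chain in visit order.
seed = hashlib.sha256(b"nonce-from-originator").digest()
collected = [("shopA", b"price=10"), ("shopB", b"price=9")]
digest = seed
for host, offer in collected:
    digest = chain_entry(digest, host, offer)
assert verify_chain(seed, collected, digest)
```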
Abstract:
Quantitative optical spectroscopy has the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in resource-limited communities. However, clinical studies to validate the use of this technology in resource-limited settings require low power consumption and good quality control that is minimally influenced by the operator or by variable environmental conditions in the field. The goal of this study was to evaluate the effects of two sources of potential error, calibration and pressure, on the extraction of absorption and scattering properties of normal cervical tissues in a resource-limited setting in Leogane, Haiti. Our results show that self-calibrated measurements improved scattering measurements through real-time correction of system drift, in addition to minimizing the time required for post-calibration. Variations in pressure (tested without the potential confounding effects of calibration error) caused local changes in vasculature and scatterer density that significantly impacted the tissue absorption and scattering properties. Future spectroscopic systems intended for clinical use, particularly where operator training is not viable and environmental conditions are unpredictable, should incorporate a real-time self-calibration channel and collect diffuse reflectance spectra at a consistent pressure to maximize data integrity.
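The instrument design behind the self-calibration channel is not given in the abstract, so the sketch below shows one plausible form of real-time drift correction: each tissue spectrum is divided by a concurrently acquired reference-channel spectrum, and that ratio is normalized by the same ratio measured once on a reflectance standard. The variable names and calibration model are assumptions.

```python
# Hedged sketch of a self-calibration correction (assumed model: divide each tissue
# spectrum by a concurrently acquired reference-channel spectrum to cancel lamp and
# detector drift, then normalize by the same ratio measured on a reflectance standard).
import numpy as np

def self_calibrated_reflectance(tissue, reference, standard_tissue, standard_reference):
    """All inputs are spectra (arrays over wavelength) in raw detector counts."""
    drift_corrected = np.asarray(tissue, float) / np.asarray(reference, float)
    standard_ratio = np.asarray(standard_tissue, float) / np.asarray(standard_reference, float)
    return drift_corrected / standard_ratio

# Example with synthetic spectra: a 10% lamp drift affects both channels equally
# and therefore cancels in the calibrated result.
wavelengths = np.linspace(450, 600, 151)                  # nm
true_reflectance = 0.3 + 0.1 * np.sin(wavelengths / 20)
drifted_lamp = 1000.0 * 1.1                               # lamp intensity after drift
tissue = true_reflectance * drifted_lamp
reference = np.full_like(wavelengths, drifted_lamp)
standard_tissue = 1000.0                                  # standard measured before drift
standard_reference = np.full_like(wavelengths, 1000.0)
calibrated = self_calibrated_reflectance(tissue, reference, standard_tissue, standard_reference)
assert np.allclose(calibrated, true_reflectance)
```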
Abstract:
Realizing scalable performance on high performance computing systems is not straightforward for single-phenomenon codes (such as computational fluid dynamics [CFD]). This task is magnified considerably when the target software involves the interactions of a range of phenomena that have distinctive solution procedures involving different discretization methods. Addressing the key issues of retaining data integrity and preserving the ordering of the calculation procedures is a significant problem. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes using iterative solution procedures. A mesh-partitioning-based SPMD approach is used. However, because different variables use distinct discretization schemes, distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous/synchronous communications) using a variety of strategies for mapping the mesh partition onto the processor topology.
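The abstract describes a mesh-partitioning-based SPMD strategy without giving code, so the sketch below (a generic halo exchange with mpi4py) only illustrates the communication pattern that keeps overlapping cells consistent between neighbouring partitions; the partition arrays are assumed inputs, and JOSTLE itself is not involved.

```python
# Generic SPMD halo-exchange sketch (mpi4py). The multiphysics code and the JOSTLE
# partitioner are not reproduced; the partition description below is an assumed input.
# Run with e.g.: mpirun -n 4 python halo_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical partition description for this rank: which local cells must be
# sent to each neighbour, and how many halo cells are received from each.
send_cells = {(rank + 1) % size: np.array([0, 1, 2])}
recv_counts = {(rank - 1) % size: 3}

field = np.full(10, float(rank))     # owned-cell values for one solved variable

def exchange_halos(field, send_cells, recv_counts):
    """Post non-blocking receives and sends for halo data, then wait for completion."""
    requests, received, send_bufs = [], {}, []
    for nbr, count in recv_counts.items():
        buf = np.empty(count, dtype=field.dtype)
        received[nbr] = buf
        requests.append(comm.Irecv(buf, source=nbr, tag=0))
    for nbr, cells in send_cells.items():
        buf = np.ascontiguousarray(field[cells])
        send_bufs.append(buf)                        # keep send buffers alive until Waitall
        requests.append(comm.Isend(buf, dest=nbr, tag=0))
    MPI.Request.Waitall(requests)
    return received

halos = exchange_halos(field, send_cells, recv_counts)
```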
Abstract:
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs — both on the client and server sides — to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length and location of a RAID array on a per file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely-used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
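The paper's GPU kernels and Lustre integration are not described in the abstract, so the sketch below only shows the dual-parity computation that defines a RAID-6 stripe (P as XOR parity, Q as a weighted sum over GF(2^8)); the per-stripe layout and block sizes are illustrative assumptions.

```python
# Sketch of RAID-6 style dual parity: P = XOR parity, Q = Reed-Solomon parity
# over GF(2^8) with generator 2 and reduction polynomial 0x11d.
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) using the RAID-6 polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def raid6_parity(data_blocks: list[bytes]) -> tuple[bytes, bytes]:
    """Compute the P (XOR) and Q (weighted GF(2^8)) parity blocks for one stripe."""
    length = len(data_blocks[0])
    p = bytearray(length)
    q = bytearray(length)
    for i, block in enumerate(data_blocks):
        g = 1
        for _ in range(i):
            g = gf_mul(g, 2)          # generator 2 raised to the block index
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g, byte)
    return bytes(p), bytes(q)

# Usage: four equally sized data blocks per stripe; with both P and Q stored,
# any two lost blocks of the stripe are recoverable in full RAID-6.
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p, q = raid6_parity(stripe)
```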
Abstract:
A report of the new Bourzutschky/Konoval chess endgame depth record of DTC = 330 moves, and of the progress of the Kryukov peer-group to disseminate Nalimov's DTM EGTs. The question of data integrity and assurance is raised.
Abstract:
Nowadays the use of information and communication technology is becoming prevalent in many aspects of healthcare services, from patient registration to consultation, treatment and pathology test requests. Manual interface techniques have dominated data-capture activities in primary and secondary care settings for decades. Despite the improvements made in IT, usability issues still remain with I/O devices such as the computer keyboard, touch-sensitive screens, light pens and barcodes. Furthermore, clinicians have to use several computer applications when providing healthcare services to patients. One of the problems faced by medical professionals is the lack of data integrity between the different software applications, which in turn can hinder the provision of healthcare services tailored to the needs of patients. The use of digital pen and paper technology integrated with legacy medical systems holds the promise of improving healthcare quality. This paper discusses the issue of data integrity in e-health systems and proposes the modelling of "Smart Forms" via semiotics to potentially improve integrity between legacy systems, making the work of medical professionals easier and improving the quality of care in primary care practices and hospitals.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
As a positive response to requests coming from the legal world, which is often too distant from the scientific one, the aim is to develop a system that is technically sound and legally clear, intended to improve the search for the truth. The objective is to create a versatile, easy-to-use tool to be made available to the judicial authority (A.G.) and, where appropriate, to the investigating police (P.G.), allowing the investigation to continue very rapidly and at a considerably lower cost to the justice system than a standard court-appointed expert examination (CTU). The project will focus on forensic analysis of digital media involved in various types of proceedings for which a CTU or an expert report would normally be requested. The scientific trial provides for a system of direct participation of the P.G. and the A.G. in the forensic analysis, making the contents of the seized media available in the form of a virtual machine so that they can be examined just like the original media. In this way the technical consultant (CT) becomes a mere guide for the PG and the AG within the digital forensic investigation, accompanying the judge and the parties towards a better understanding of the information requested by the court's questions. The key phases of the trial are: • repeatability of the operations performed • clear guidelines for the chain of custody from the moment the media are taken into charge • methods of storing and transmitting the data that guarantee their integrity and confidentiality • reduced time and cost compared with standard CTUs/expert reports • direct viewing by the parties and the judge of the contents of the analysed media, restricted to the information relevant for the purposes of justice
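The abstract lists integrity and confidentiality of the stored and transmitted data among the key phases but does not specify procedures, so the sketch below shows one common way to support the integrity part: hashing a forensic image when it is taken into charge and re-verifying the digest on every transfer. The file names and log format are hypothetical.

```python
# Sketch of chain-of-custody integrity support via cryptographic hashing
# (illustrative only; not the project's actual procedures).
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large images do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_entry(path: str, handler: str, action: str) -> dict:
    """One chain-of-custody record: who did what to which image, and its digest."""
    return {
        "image": path,
        "sha256": sha256_of_file(path),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Example: record acquisition, later verify the digest is unchanged before analysis.
with open("seized_disk.dd", "wb") as f:        # stand-in for a real acquired image
    f.write(b"placeholder image contents")
log = [custody_entry("seized_disk.dd", "operator-01", "acquired")]
assert sha256_of_file("seized_disk.dd") == log[0]["sha256"]
print(json.dumps(log, indent=2))
```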
Abstract:
This study examines passenger air bag (PAB) performance in a fixed vehicle environment using Partial Low Risk Deployment (PLRD) as a strategy. The development follows test methods against actual baseline vehicle data and Federal Motor Vehicle Safety Standard 208 (FMVSS 208). FMVSS 208 states that PAB compliance in vehicle crash testing can be met using one of three deployment methods. The primary method suppresses PAB deployment, using a seat weight sensor or occupant classification sensor (OCS), for three-year-old and six-year-old occupants, including the presence of a child seat. A second method, PLRD, allows deployment for occupants of all sizes, suppressing only for the presence of a child seat. A third method, Low Risk Deployment (LRD), allows PAB deployment in all conditions and for all statures, including any and all child seats. This study outlines a PLRD development solution for achieving FMVSS 208 performance. The results should provide an option for system implementation, including opportunities for system efficiency and other considerations. The objective is to achieve performance levels similar to, or incrementally better than, the baseline vehicle's New Car Assessment Program (NCAP) star rating, and to define systemic flexibility whereby restraint features can be added or removed while keeping occupant performance consistent with the baseline. A certified vehicle's air bag system will typically remain in production until the vehicle platform is redesigned. The strategy for testing the PLRD hypothesis was, first, to match the baseline out-of-position (OOP) occupant performance for the three- and six-year-old requirements; second, to improve the 35 mph belted 5th-percentile-female NCAP star rating over the baseline vehicle; and third, to establish an equivalent FMVSS 208 certification for the 25 mph unbelted 50th-percentile male. The FMVSS 208 high-speed requirement defines the federal minimum crash performance required for frontal crash-test compliance, while the intent of the NCAP 5-star rating is to provide the consumer with information about crash protection beyond what is required by federal law. In this study, two vehicle segments were used for testing, to compare and contrast with their baseline vehicles' performance. Case Study 1 (CS1) used a crossover vehicle platform and Case Study 2 (CS2) used a small-segment vehicle platform as their baselines. In each case study the restraint system came from a different restraint supplier, and each case reflected that supplier's approach to PLRD. CS1 incorporated a downsized twin-shaped bag, a carryover inflator, standard vents, and a strategically positioned bag diffuser to help disperse the flow of gas and improve OOP performance; the twin-shaped bag, with two segregated sections (lobes), enabled high-speed baseline performance correlation on the HYGE sled. CS2 used an asymmetric (square-shaped) PAB with standard-size vents, including a passive vent, to obtain OOP performance similar to the baseline; the asymmetric bag shape also helped enable high-speed performance improvements over the baseline in HYGE sled testing. The anticipated CS1 baseline vehicle-pulse-index (VPI) target was in the range of 65-67; however, actual dynamic vehicle (barrier) testing produced the highest crash pulse of the previously tested vehicles, with a VPI of 71. The result of the 35 mph NCAP barrier test was a solid 4-star (4.7-star) rating.
In CS2, the HYGE sled development VPI range derived from the baseline was 61-62. The actual NCAP test produced a chest deflection of 26 mm versus the anticipated baseline target of 12 mm. This was initially attributed to the vehicle's significant VPI increase to 67, but a subsequent root-cause investigation confirmed a data integrity issue caused by the instrumentation. In an effort to establish a true vehicle test data point, a second NCAP test was performed but faced similar instrumentation issues: the chest deflection hit the target at 12.1 mm, but a femur load spike similar to the baseline now skewed the results. Given the noted improvement in chest deflection, the NCAP result was assessed as directionally capable of 5-star performance; although the actual rating was 3-star because of the instrumentation issues, data extrapolation raised the rating to 5-star. In both cases no structural changes were made to the surrogate vehicle, and the results in each case matched their respective baseline vehicle platforms. These results showed that PLRD is viable for further development and production implementation.
Abstract:
The project comprises the design and study of a software application for managing the simulation of a radar system. The prototype of this simulation environment was implemented in Matlab, initially considered the most suitable language for processing the signals that radar systems use in their calculations, although it has proven less adequate for the other core processes the simulation environment must provide. The software developed by SAP for managing the E.R.P.s (Enterprise Resource Planning systems) of large companies was chosen as a model, because its design and functionality are particularly well suited to the orderly, integrated management of large amounts of diverse data in a way that protects data integrity. Designing and implementing the full environment is an enormously complex task that would require the effort of a considerable number of people, so this project has been limited to a basic prototype with a minimal set of features, while indicating and preparing the path that future additions of functionality and improvements should follow. Functionally, that is, independently of the specific implementation with which the simulation environment is built, its features are divided into blocks. Each block groups the components related to a specific aspect of the simulation; for example, block 1 covers everything related to the target to be detected. The user of the simulation environment interacts with the system by executing so-called transactions: logical groupings of related data to be entered into or queried from the system that can be executed independently. One example of a transaction maintains a target trajectory together with its parameters, but a transaction can also be, for example, the application that manages the users with access to the environment. Transactions are thus the minimal component through which the user interacts with the system. The graphical interface offered to the user is based on modes, which can be thought of as mutually independent "windows" within which the user executes transactions. The user can work with as many modes in parallel as desired and switch between them at will. The software was written following an object-oriented methodology, seeking to maximize code reuse and the configurability of its functionality, i.e. the same code performing different tasks depending on configuration data. An important feature incorporated to help guarantee data integrity is a syntactic data dictionary. To provide data persistence between user sessions, a virtual relational database was implemented (expected to be replaced in the future by a real one) that handles tables, key fields, etc., storing all the data of the environment: both the configuration data maintained by administrators and developers and the master and transactional data managed by the end users of the simulation environment.
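The abstract mentions a syntactic data dictionary used to guarantee data integrity without detailing it, so the sketch below (in Python rather than the prototype's Matlab) shows one plausible form: each field has a declared domain, and a record is checked against the dictionary before a transaction writes it to the virtual database. The field names and domains are illustrative assumptions.

```python
# Sketch of a syntactic data dictionary check (illustrative; not the project's
# actual dictionary). Every field has a declared domain, and values are validated
# before being committed to a table.
import re

DATA_DICTIONARY = {
    "target_id":   {"type": str, "pattern": r"^T\d{4}$"},
    "range_km":    {"type": float, "min": 0.0, "max": 500.0},
    "velocity_ms": {"type": float, "min": -3000.0, "max": 3000.0},
}

def check_record(record: dict) -> list[str]:
    """Return a list of integrity violations; an empty list means the record is valid."""
    errors = []
    for field, spec in DATA_DICTIONARY.items():
        if field not in record:
            errors.append(f"missing field {field}")
            continue
        value = record[field]
        if not isinstance(value, spec["type"]):
            errors.append(f"{field}: expected {spec['type'].__name__}")
            continue
        if "pattern" in spec and not re.match(spec["pattern"], value):
            errors.append(f"{field}: value {value!r} does not match {spec['pattern']}")
        if "min" in spec and not (spec["min"] <= value <= spec["max"]):
            errors.append(f"{field}: {value} outside [{spec['min']}, {spec['max']}]")
    return errors

# A transaction would call check_record before committing to the virtual database.
print(check_record({"target_id": "T0042", "range_km": 120.5, "velocity_ms": 240.0}))  # []
print(check_record({"target_id": "42", "range_km": 900.0, "velocity_ms": 240.0}))     # two violations
```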
Abstract:
Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give-way line at a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules are often changed and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses a feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1. © 2006 Springer-Verlag Berlin Heidelberg.
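The paper encodes such rules in a GML 3.1 feature-property model, which is not reproduced here; the sketch below only illustrates checking a parameterized offset rule of the kind quoted above (a give-way line kept 1100-3000 mm from the crossing edge), with the rule expressed as data and the geometries handled by shapely. The parameter names and sample geometries are assumptions.

```python
# Sketch of a parameterized geometric rule check (illustrative; not the paper's
# GML encoding). The rule is plain data so it can vary by country or feature type.
from shapely.geometry import LineString

OFFSET_RULE = {"min_offset_m": 1.1, "max_offset_m": 3.0}   # 1100-3000 mm, per the example

def check_offset(give_way_line: LineString, crossing_edge: LineString,
                 rule: dict, sample_step_m: float = 0.25) -> bool:
    """Sample points along the give-way line and require each point's distance
    to the crossing edge to fall inside the parameterized band."""
    length = give_way_line.length
    n = max(2, int(length / sample_step_m) + 1)
    for i in range(n):
        point = give_way_line.interpolate(length * i / (n - 1))
        d = point.distance(crossing_edge)
        if not (rule["min_offset_m"] <= d <= rule["max_offset_m"]):
            return False
    return True

# Example: a give-way line drawn 2 m away from, and parallel to, the crossing edge passes.
crossing_edge = LineString([(0, 0), (10, 0)])
give_way_line = LineString([(0, 2.0), (10, 2.0)])
print(check_offset(give_way_line, crossing_edge, OFFSET_RULE))   # True
```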
Abstract:
The advent of personal communication systems within the last decade has depended upon the utilization of advanced digital schemes for source and channel coding and for modulation. The inherently digital nature of the communications processing has allowed the convenient incorporation of cryptographic techniques to implement security in these communications systems. There are various security requirements, of both the service provider and the mobile subscriber, which may be provided for in a personal communications system. Such security provisions include the privacy of user data, the authentication of communicating parties, the provision of data integrity, and the provision of both location confidentiality and party anonymity. This thesis is concerned with an investigation of the private-key and public-key cryptographic techniques pertinent to the security requirements of personal communication systems, and an analysis of the security provisions of Second-Generation personal communication systems is presented. Particular attention has been paid to the properties of the cryptographic protocols employed in current Second-Generation systems. It has been found that certain security-related protocols implemented in the Second-Generation systems have specific weaknesses. A theoretical evaluation of these protocols has been performed using formal analysis techniques, and certain assumptions made during the development of the systems are shown to contribute to the security weaknesses. Various attack scenarios which exploit these protocol weaknesses are presented. The Fiat-Shamir zero-knowledge cryptosystem is presented as an example of how asymmetric-algorithm cryptography may be employed as part of an improved security solution. Various modifications to this cryptosystem have been evaluated, and their critical parameters are shown to be capable of being optimized to suit particular applications. The implementation of such a system using current smart card technology has been evaluated.
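As background for the cryptosystem named above, the sketch below runs one round of the basic Fiat-Shamir zero-knowledge identification scheme; the modulus and secret are toy values, and the thesis's modifications and optimized parameters are not reproduced.

```python
# One round of the basic Fiat-Shamir zero-knowledge identification scheme.
# Toy parameters for illustration only; real deployments use a large modulus n = p*q.
import secrets

n = 3233                      # toy modulus n = 61 * 53
s = 123                       # prover's secret, coprime to n
v = pow(s, 2, n)              # public key v = s^2 mod n

def fiat_shamir_round() -> bool:
    # Commitment: prover picks random r and sends x = r^2 mod n.
    r = secrets.randbelow(n - 2) + 1
    x = pow(r, 2, n)
    # Challenge: verifier sends a random bit e.
    e = secrets.randbelow(2)
    # Response: prover sends y = r * s^e mod n.
    y = (r * pow(s, e, n)) % n
    # Verification: accept iff y^2 == x * v^e (mod n).
    return pow(y, 2, n) == (x * pow(v, e, n)) % n

# A cheating prover succeeds in a single round with probability 1/2, so the
# round is repeated until the desired assurance level is reached.
assert all(fiat_shamir_round() for _ in range(20))
```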