942 results for security architectures
Abstract:
One hundred and twenty subjects with Chagas' cardiopathy and 120 non-infected subjects were randomly selected from first-time claimants of sickness benefits at the National Institute of Social Security (INPS) in Goiás. Cases of Chagas' cardiopathy were defined on the basis of a serological test, a history of residence in an endemic area, and clinical and/or electrocardiogram (ECG) alterations suggestive of Chagas' cardiomyopathy. Controls were defined as subjects with at least two negative serological tests. Cases and controls were compared in the analysis with respect to age, sex, place of birth, migration history, socio-economic level, occupation, physical exertion at work, age at affiliation and years of contribution to the social security scheme, clinical course of their disease, and ECG abnormalities. Chagas' disease patients were younger than the other subjects and predominantly of rural origin. Non-infected subjects had a better socio-economic level, performed more skilled activities and had fewer job changes than cases. No important difference was observed in age at affiliation to the INPS. About 60% of cases claimed benefits within the first four years of contribution, whereas among controls this proportion was 38.5%. Cases were involved, proportionally more than controls, in "heavy" activities. Risks of 2.3 (95% CL 1.5-4.6) and 1.8 (95% CL 1.2-3.5) were obtained comparing "heavy" and "moderate" physical activity, respectively, against "light". A relative risk of 8.5 (95% CL 4.9-14.8) associated with the presence of cardiopathy was estimated by comparing the initial sample of seropositive subjects with the controls. High relative risks were observed for right bundle branch block (RR = 37.1, 95% CL 8.8-155.6) and left anterior hemiblock (RR = 4.4, 95% CL 2.1-9.1).
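The relative risks quoted above follow the standard 2×2-table estimator with a Wald confidence interval. As a hedged illustration only — the study's underlying cell counts are not given in the abstract, so the counts below are hypothetical — a minimal Python sketch:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk for a 2x2 table, with a Wald 95% CI.

    a: exposed cases,   b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases.
    """
    p_exposed = a / (a + b)
    p_unexposed = c / (c + d)
    rr = p_exposed / p_unexposed
    # Standard error of log(RR), Wald approximation
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts for illustration only (not from the study):
rr, ci = relative_risk(30, 30, 15, 45)
print(round(rr, 2), tuple(round(x, 2) for x in ci))  # 2.0 (1.21, 3.32)
```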
Abstract:
The WS-* family of specifications defines a security model for web services based on the concepts of claim, security token and Security Token Service (STS). In this model, the security information of message originators (identity, privileges, etc.) is represented by sets of claims contained in security tokens. These security tokens are issued to and obtained by message originators either through legacy protocols or through special services, called Security Token Services, using the operations and protocols defined in the WS-Trust specification. The Security Token Service concept is not used only in the context of web services: proposals such as the Information Cards model, applicable in the context of web applications, also use it. Security Token Services play several roles, depending on the information present in the issued token. Examples are the role of Identity Provider, when the issued tokens carry identity information, or the role of Policy Decision Point, when the issued tokens define authorizations. This document describes the design of a software library for building Security Token Services, as defined in the WS-Trust standard, targeting the .NET 3.5 platform. A flexible and extensible architecture is proposed, so as to support new versions of the standards and the several variation points of Security Token Services, namely: the type of the issued security tokens and of the claims they contain, the inference of claims, and the authentication methods of the requesting entities. Implementation aspects of this architecture are presented, namely its integration with the WCF platform, its extensibility, and its support for models and systems outside the standard.
Finally, the test platforms implemented to validate the library are described, together with the library's extension modules for supporting the model associated with Information Cards and the OpenID model, and for integration with the Authorization Manager.
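As a hedged sketch of the claim/token/STS model described above — not the library's actual .NET/WCF API; every class and parameter name below is hypothetical — a minimal Python model of an STS with pluggable authentication and claim inference:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    # A claim is a typed statement about the requester (identity, role, ...)
    claim_type: str
    value: str

@dataclass
class SecurityToken:
    token_type: str
    claims: list

class SecurityTokenService:
    """Minimal STS model: authenticate a requester, infer claims, issue a token."""

    def __init__(self, authenticate, infer_claims, token_type="urn:example:token"):
        self.authenticate = authenticate  # pluggable authentication method
        self.infer_claims = infer_claims  # pluggable claim inference
        self.token_type = token_type      # variation point: issued token type

    def issue(self, credentials):
        subject = self.authenticate(credentials)
        if subject is None:
            raise PermissionError("authentication failed")
        return SecurityToken(self.token_type, self.infer_claims(subject))

# Acting as an Identity Provider: issued tokens carry identity claims.
sts = SecurityTokenService(
    authenticate=lambda cred: cred.get("user") if cred.get("pw") == "secret" else None,
    infer_claims=lambda subject: [Claim("urn:example:name", subject)],
)
token = sts.issue({"user": "alice", "pw": "secret"})
print(token.claims[0].value)  # alice
```

Swapping the `infer_claims` function for one that emits authorization claims would turn the same skeleton into a Policy Decision Point, which is the kind of variation point the architecture is meant to support.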
Abstract:
One of the major problems that prevents the spread of elections with the possibility of remote voting over electronic networks, also called Internet Voting, is the use of unreliable client platforms, such as the voter's computer and the Internet infrastructure connecting it to the election server. A computer connected to the Internet is exposed to viruses, worms, Trojans, spyware, malware and other threats that can compromise the election's integrity. For instance, it is possible to write a virus that changes the voter's vote to a predetermined vote on election day. Another possible attack is the creation of a fake election web site where a malicious vote program manipulates the voter's vote (a phishing/pharming attack). Such attacks may not disturb the election protocol and can therefore remain undetected in the eyes of the election auditors. We propose the use of Code Voting to overcome the insecurity of the client platform. Code Voting consists in creating a secure communication channel for the voter's vote between the voter and a trusted component attached to the voter's computer. Consequently, no one controlling the voter's computer can change the voter's vote. The trusted component can then process the vote according to a cryptographic voting protocol, enabling cryptographic verification on the server side.
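The core of Code Voting is that the untrusted machine only ever sees an opaque code, not the candidate it stands for. A hedged, deliberately simplified sketch (it omits the trusted component, vote-confirmation codes and the cryptographic protocol; all names are illustrative):

```python
import secrets

def make_code_sheet(candidates, code_len=6):
    """Generate one voter's code sheet: a random numeric code per candidate.

    The sheet is delivered out of band (e.g. by post), so malware on the
    voter's computer cannot map a code back to a candidate.
    """
    codes = {}
    for cand in candidates:
        while True:
            code = "".join(secrets.choice("0123456789") for _ in range(code_len))
            if code not in codes:  # codes must be unique within a sheet
                codes[code] = cand
                break
    return codes

def cast(code, sheet):
    """Trusted side: translate the submitted code back into a vote."""
    return sheet.get(code)  # unknown codes are rejected (None)

sheet = make_code_sheet(["Alice", "Bob", "Carol"])
code_for_bob = next(c for c, cand in sheet.items() if cand == "Bob")
assert cast(code_for_bob, sheet) == "Bob"
```

A compromised client that tampers with the submitted code can only produce an invalid code or a blind guess; it cannot steer the vote to a chosen candidate, which is exactly the property the abstract relies on.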
Abstract:
The characteristics of tunable wavelength filters based on a-SiC:H multilayered stacked p-i-n cells are studied both theoretically and experimentally. The optical transducers were produced by PECVD and tested for proper fine-tuning of the cyan and yellow fluorescent protein emissions. The active device consists of p-i'(a-SiC:H)-n/p-i(a-Si:H)-n heterostructures sandwiched between two transparent contacts. Experimental data on spectral response analysis, current-voltage characteristics, and color and transmission rate discrimination are reported. The cyan and yellow fluorescent input channels were transmitted together, each one with a specific transmission rate and a different intensity. The multiplexed optical signal was analyzed by reading out the generated photocurrents under positive and negative applied voltages. Results show that the optimized optical transducer is capable of combining the transient fluorescent signals into a single output signal without losing any specificity (color and intensity). It acts as a voltage-controlled optical filter: when the applied voltages are chosen appropriately, the transducer can separately select the cyan and yellow channel emissions (wavelength and frequency) and also quantify their relative intensities. A theoretical analysis supported by numerical simulation is presented.
Abstract:
Distribution systems are the first to experience the benefits of smart grids. The smart grid concept impacts internal legislation and standards in both grid-connected and isolated distribution systems. Demand-side management, the main feature of smart grids, acquires a clear meaning in low-voltage distribution systems. In these networks, various coordination procedures are required between domestic, commercial and industrial consumers, producers and the system operator. Obviously, a technical basis for bidirectional communication is the prerequisite for developing such coordination procedures. The main coordination is required when the operator tries to dispatch the producers according to their own preferences without neglecting its inherent responsibility. Maintenance decisions are first determined by the generating companies, and the operator then has to check, and possibly modify, them for final approval. In this paper, generation scheduling from the viewpoint of a distribution system operator (DSO) is formulated. The traditional task of the DSO is securing network reliability and quality. The effectiveness of the proposed method is assessed by applying it to 6-bus and 9-bus distribution systems.
Abstract:
Urban Computing (UrC) provides users with situation-appropriate information by considering the context of users, devices, and the social and physical environment in urban life. With social network services, UrC makes it possible for people with common interests to organize a virtual society through the exchange of context information among them. In these settings, people and personal devices are vulnerable to fake and misleading context information transferred from unauthorized and unauthenticated servers by attackers. So-called smart devices, which run automatically on certain context events, are even more vulnerable if they are not prepared for attacks. In this paper, we illustrate some UrC service scenarios and present the relevant context information, possible threats, protection methods, and secure context management for people.
Abstract:
Master's in Computer Engineering (Mestrado em Engenharia Informática)
Abstract:
Red, green and blue optical signals were directed to an a-SiC:H multilayered device, each one with a specific transmission rate. The combined optical signal was analyzed by reading out the generated photocurrent under different applied voltages. Results show that when a chromatic time-dependent wavelength combination with different transmission rates irradiates the multilayered structure, the device operates as a tunable wavelength filter and can be used in wavelength-division multiplexing systems for short-range communications. An application to fluorescent protein detection is presented. (C) 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Abstract:
The relation between information/knowledge expression and physical expression can be regarded as one of the elements of ambient intelligent (AmI) computing [2],[3]. Moreover, because there are so many contexts around users and spaces as a user moves, all applications that use AmI are based on the relation between user devices and environments. In these situations, the AmI system may output wrong results derived from unreliable contexts injected by attackers. Recently, approaches based on establishing a trusted server have been adopted, so finding secure contexts and building contexts of a higher security level for safe communication have gained importance. Attackers may place their devices on the expected path of users in order to obtain user information illegally, or they may try to broadcast spam to users. This paper is an extension of [11], which studies the Security Grade Assignment Model (SGAM) to set up a Cyber-Society Organization (CSO).
Abstract:
The demonstration proposal builds on the capabilities of a wireless biometric badge [4], which integrates a localization and tracking service along with an automatic personal identification mechanism, to show how a full system architecture is devised to enable the control of physical access to restricted areas. The system leverages the availability of a novel IEEE 802.15.4/Zigbee Cluster Tree network model, enhanced security levels, and respect for all users' privacy.
Abstract:
The problem of providing hybrid wired/wireless communications for factory automation systems is still an open issue, notwithstanding the fact that some solutions already exist. This paper describes the role of simulation tools in the validation and performance analysis of two wireless extensions for the PROFIBUS protocol. In one of them, the Intermediate Systems, which connect wired and wireless network segments, operate as repeaters; in the other, the Intermediate Systems operate as bridges. We also describe how the analytical analysis proposed for these kinds of networks can be used for setting some network parameters and for guaranteeing the real-time behaviour of the system. Additionally, we compare the bridge-based solution's simulation results with the analytical results.
Abstract:
We analyze the advantages and drawbacks of a vector delay/frequency-locked loop (VDFLL) architecture with respect to the conventional scalar and the vector delay-locked loop (VDLL) architectures for GNSS receivers in harsh scenarios that include ionospheric scintillation, multipath, and high-dynamics motion. The VDFLL consists of a bank of code and frequency discriminators feeding a central extended Kalman filter (EKF) that estimates the receiver's position, velocity, and clock bias. Both the code and frequency loops are closed vectorially through the EKF. The VDLL closes the code loop vectorially and the phase loops through individual PLLs, while the scalar receiver closes both loops by means of individual, independent PLLs and DLLs.
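The central EKF in the VDFLL is, at its core, a predict/update recursion driven by the discriminator outputs. As a hedged illustration of that recursion only — a scalar, linear Kalman step estimating a single random-walk state such as clock bias, not the paper's full multi-state EKF — a minimal sketch:

```python
def kf_step(x, P, z, q, r):
    """One predict/update cycle of a scalar Kalman filter (random-walk state).

    x, P : current state estimate and its variance
    z    : new measurement (e.g. a discriminator output)
    q, r : process and measurement noise variances
    """
    # Predict: random-walk model, so the estimate stays put and variance grows
    P = P + q
    # Update: blend prediction and measurement via the Kalman gain
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

# Toy example: estimating a constant bias of 5.0 from noisy measurements
x, P = 0.0, 100.0
for z in [5.2, 4.9, 5.1, 5.0]:
    x, P = kf_step(x, P, z, q=0.01, r=0.5)
```

In the vector architectures, the same gain-weighted blending happens jointly across all channels, which is what lets strong channels aid weak ones under scintillation or multipath.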
Abstract:
Work presented in the scope of the Master's programme in Computer Engineering (Mestrado em Engenharia Informática), as a partial requirement for obtaining the Master's degree in Computer Engineering.
Abstract:
The LMS plays an indisputable role in the majority of eLearning environments. This type of eLearning system is often used for presenting, solving and grading simple exercises. However, exercises from complex domains, such as computer programming, require heterogeneous systems such as evaluation engines, learning-object repositories and exercise resolution environments. Coordinating networks of such disparate systems is rather complex. This work presents a standards-based approach for coordinating a network of eLearning systems that supports the resolution of exercises. The proposed approach uses a pivot component embedded in the LMS with two roles: providing an exercise resolution environment and coordinating the communication between the LMS and the other systems, which expose their functions as web services. The integration of the pivot component with the LMS relies on the Learning Tools Interoperability specification. The approach is validated through the integration of the component with LMSs from two vendors.
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer tries to obtain the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%.
Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of the peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
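The paper's actual cycle-count equation is not reproduced in the abstract. As a hedged illustration of how such a model ties execution cycles to architectural parameters, here is a sketch for a blocked dense matrix multiplication; the tiling assumption (three b×b tiles per core's local memory) and all parameter names are mine, not the paper's:

```python
def matmul_cycles(n, cores, local_mem_words, mem_bw_words_per_cycle,
                  flops_per_core_per_cycle=2):
    """Rough cycle estimate for a dense n x n matrix multiplication
    on a hypothetical many-core chip.

    Compute cycles: 2*n^3 flops spread over the cores.
    Memory cycles : with b x b tiles held in local memory, a blocked
    algorithm moves roughly 2*n^3/b words to/from external memory.
    The slower term dominates (compute/transfer overlap assumed).
    """
    b = int((local_mem_words // 3) ** 0.5)  # three b x b tiles per core
    compute = 2 * n**3 / (cores * flops_per_core_per_cycle)
    traffic = 2 * n**3 / b                  # words exchanged with external memory
    memory = traffic / mem_bw_words_per_cycle
    return max(compute, memory)

# 1024x1024 matrices, 64 cores, 12K words of local memory, 4 words/cycle of bandwidth:
print(matmul_cycles(1024, 64, 3 * 64 * 64, 4))  # 16777216.0 (compute-bound)
```

A model in this spirit makes the trade-off visible: enlarging local memory increases the tile size b and cuts external traffic, while adding cores only helps until the memory term takes over, which matches the abstract's remark that area efficiency is limited by the memory bandwidth considered.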