932 results for document security
Abstract:
McInnes, C., 'HIV/AIDS and national security', in: AIDS and Governance, N. Poku, A. Whiteside and B. Sandkjaer (eds.),(Aldershot: Ashgate, 2007), pp.93-111 RAE2008
Abstract:
Williams, Mike, Culture and Security: Symbolic Power and the Politics of International Security (Oxon: Routledge, 2007), pp.xii+172 RAE2008
Abstract:
Among the many issues raised in the White Book on National Security of the Republic of Poland (WBNSRP) were those related to the functioning of the Common Security and Defence Policy (CSDP). The White Book indicated the CSDP's importance for the security of Poland, as well as the need for EU Member States to broaden collaboration in the sphere of security and defence. It also emphasised the key problems arising in the context of the CSDP and indicated their causes. The aim of the article is to present the factors responsible for the weakening of CSDP effectiveness that were taken into account in the White Book, and then to provide a framework for their analysis in the light of further scholarship.
Abstract:
The aim of this study is to analyse the assumptions of the cyberspace protection policy of the Republic of Poland as presented in the document entitled Polityka ochrony cyberprzestrzeni Rzeczypospolitej Polskiej (Cyberspace Protection Policy of the Republic of Poland), published in 2013 by the Ministry of Administration and Digitization and the Internal Security Agency. The article analyses the postulates and guidelines contained in that document and confronts its assumptions with the elements of the cyberspace protection system of the Republic of Poland. One must agree with the authors of this strategy that a state of complete ICT security is impossible to achieve; one can speak only of attaining a certain acceptable level of it. Achieving that goal should be substantially aided by implementing the priorities of the cyberspace protection policy of the Republic of Poland, in particular: defining the competences of the entities responsible for cyberspace security; creating and implementing a cyberspace security management system that is coherent across all government administration entities, together with guidelines in this respect for non-public entities; creating a durable system of coordination and information exchange between the entities responsible for cyberspace security and cyberspace users; and raising cyberspace users' awareness of security methods and measures.
Abstract:
The text addresses the issue of information security as exemplified by clandestine collaboration and the influence exerted by Internal Security Agency (ABW) officers upon journalists. It analyzes the de lege lata regulations as well as the de lege ferenda ones. As for the former, the penal provisions of the Act, that is Articles 153b–153d (Chapter 10a), are applicable, whereas for the latter the applicable regulations are Articles 197–199 of the 2013 Bill (Chapter 10). In both the 2002 Act on the Internal Security Agency and the Foreign Intelligence Agency and the 2013 draft Bill on the Internal Security Agency, the legislator penalizes officers' use of information acquired while fulfilling, or in connection with, official duties for the purpose of affecting the operation of public authority bodies, entrepreneurs, or broadcasters, editors-in-chief, journalists and persons conducting publishing activity. The text also analyzes regulations concerned with the penalization of clandestine collaboration by ABW officers with a broadcaster, an editor-in-chief, a journalist or a person conducting publishing activity.
Abstract:
The paper reviews the provisions of the White Book on National Security of the Republic of Poland. It states that the issue of health security is not given adequate significance there. The accessibility of health services is considered, in general, solely in terms of their availability. The assumptions concerning the concept of providing the number of beds required in a state of threat to national security and in time of war do not take into account the current socio-economic conditions and need to be reviewed. The conclusions emphasize the dilemmas that emerge as a result of the unilateral promotion of a single category of national security, that is military security, in the context of ensuring health security.
Abstract:
We present a type system, StaXML, which employs stacked type syntax to represent essential aspects of the potential roles of XML fragments in the structure of complete XML documents. The simplest application of this system is to enforce well-formedness upon the construction of XML documents without requiring the use of templates or balanced "gap plugging" operators; this allows it to be applied to programs written according to common imperative web scripting idioms, particularly the echoing of unbalanced XML fragments to an output buffer. The system can be extended to verify particular XML applications such as XHTML, and to identify individual XML tags constructed from their lexical components. We also present StaXML for PHP, a prototype precompiler for the PHP4 scripting language which infers StaXML types for expressions without assistance from the programmer.
Abstract:
With the increasing demand for document transfer services such as the World Wide Web comes a need for better resource management to reduce the latency of documents in these systems. To address this need, we analyze the potential for document caching at the application level in document transfer services. We have collected traces of actual executions of Mosaic, reflecting over half a million user requests for WWW documents. Using those traces, we study the tradeoffs between caching at three levels in the system, and the potential for use of application-level information in the caching system. Our traces show that while a high hit rate in terms of URLs is achievable, a much lower hit rate is possible in terms of bytes, because most profitably-cached documents are small. We consider the performance of caching when applied at the level of individual user sessions, at the level of individual hosts, and at the level of a collection of hosts on a single LAN. We show that the performance gain achievable by caching at the session level (which is straightforward to implement) is nearly all of that achievable at the LAN level (where caching is more difficult to implement). However, when resource requirements are considered, LAN level caching becomes much more desirable, since it can achieve a given level of caching performance using a much smaller amount of cache space. Finally, we consider the use of organizational boundary information as an example of the potential for use of application-level information in caching. Our results suggest that distinguishing between documents produced locally and those produced remotely can provide useful leverage in designing caching policies, because of differences in the potential for sharing these two document types among multiple users.
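The abstract's URL-versus-byte hit-rate distinction can be shown with a toy trace and an unbounded cache (a deliberate simplification of the paper's three-level trace study; the trace below is invented for illustration):

```python
def hit_rates(trace):
    """trace: list of (url, size_bytes) requests, in order.
    With an unbounded cache, every repeat request is a hit.
    Returns (url_hit_rate, byte_hit_rate)."""
    seen = set()
    hits = bytes_hit = total_bytes = 0
    for url, size in trace:
        total_bytes += size
        if url in seen:
            hits += 1
            bytes_hit += size
        else:
            seen.add(url)
    return hits / len(trace), bytes_hit / total_bytes

# Many repeats of a small page, plus two one-off large documents:
trace = [("a.html", 1000)] * 8 + [("big1.dat", 100000), ("big2.dat", 100000)]
url_rate, byte_rate = hit_rates(trace)
print(round(url_rate, 2), round(byte_rate, 3))  # → 0.7 0.034
```

As in the trace study, the URL hit rate is high because the profitably-cached documents are small and frequently repeated, while the byte hit rate stays low because the one-off large documents dominate the traffic volume.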
Abstract:
We analyzed the logs of our departmental HTTP server http://cs-www.bu.edu as well as the logs of the more popular Rolling Stones HTTP server http://www.stones.com. These servers have very different purposes; the former caters primarily to local clients, whereas the latter caters exclusively to remote clients all over the world. In both cases, our analysis showed that remote HTTP accesses were confined to a very small subset of documents. Using a validated analytical model of server popularity and file access profiles, we show that by disseminating the most popular documents on servers (proxies) closer to the clients, network traffic could be reduced considerably, while server loads are balanced. We argue that this process could be generalized so as to provide for an automated demand-based duplication of documents. We believe that such server-based information dissemination protocols will be more effective at reducing both network bandwidth and document retrieval times than client-based caching protocols [2].
Abstract:
Wireless Intrusion Detection Systems (WIDS) monitor 802.11 wireless frames (Layer-2) in an attempt to detect misuse. What distinguishes a WIDS from a traditional Network IDS is the ability to utilize the broadcast nature of the medium to reconstruct the physical location of the offending party, as opposed to its possibly spoofed (MAC addresses) identity in cyber space. Traditional Wireless Network Security Systems are still heavily anchored in the digital plane of "cyber space" and hence cannot be used reliably or effectively to derive the physical identity of an intruder in order to prevent further malicious wireless broadcasts, for example by escorting an intruder off the premises based on physical evidence. In this paper, we argue that Embedded Sensor Networks could be used effectively to bridge the gap between digital and physical security planes, and thus could be leveraged to provide reciprocal benefit to surveillance and security tasks on both planes. Toward that end, we present our recent experience integrating wireless networking security services into the SNBENCH (Sensor Network workBench). The SNBENCH provides an extensible framework that enables the rapid development and automated deployment of Sensor Network applications on a shared, embedded sensing and actuation infrastructure. The SNBENCH's extensible architecture allows an engineer to quickly integrate new sensing and response capabilities into the SNBENCH framework, while high-level languages and compilers allow novice SN programmers to compose SN service logic, unaware of the lower-level implementation details of tools on which their services rely. In this paper we convey the simplicity of the service composition through concrete examples that illustrate the power and potential of Wireless Security Services that span both the physical and digital plane.
Abstract:
The Java programming language has been widely described as secure by design. Nevertheless, a number of serious security vulnerabilities have been discovered in Java, particularly in the component known as the Bytecode Verifier. This paper describes a method for representing Java security constraints using the Alloy modeling language. It further describes a system for performing a security analysis on any block of Java bytecodes by converting the bytes into relation initializers in Alloy. Any counterexamples found by the Alloy analyzer correspond directly to insecure code. Analysis of a real-world malicious applet is given to demonstrate the efficacy of the approach.
Abstract:
The TCP/IP architecture was originally designed without taking security measures into consideration. Over the years, it has been subjected to many attacks, which has led to many patches to counter them. Our investigations into the fundamental principles of networking have shown that carefully following an abstract model of Interprocess Communication (IPC) addresses many problems [1]. Guided by this IPC principle, we designed a clean-slate Recursive INternet Architecture (RINA) [2]. In this paper, we show how, without the aid of cryptographic techniques, the bare-bones architecture of RINA can resist most of the security attacks faced by TCP/IP. We also show how hard it is for an intruder to compromise RINA. Then, we show how RINA inherently supports security policies in a more manageable, on-demand basis, in contrast to the rigid, piecemeal approach of TCP/IP.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships.
Abstract:
Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from incorrect labels that are simultaneously inconsistent and correct. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/techlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution.
Abstract:
Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain.
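The kind of unsupervised hierarchy discovery this abstract describes (car is a vehicle; vehicle is man-made) can be caricatured with a simple subset test over one-to-many labeled examples. This is not ARTMAP, and the objects and labels below are invented; it only sketches the inference that multi-level relationships fall out of co-occurrence in one-to-many labelings:

```python
def infer_hierarchy(examples):
    """examples: list of label sets, one per training object
    (one-to-many labeling, e.g. {'car', 'vehicle', 'man-made'}).
    Label X is placed below label Y when every object labeled X
    is also labeled Y (strict subset of supporting objects)."""
    labels = set().union(*examples)
    support = {l: {i for i, s in enumerate(examples) if l in s}
               for l in labels}
    return sorted(
        (x, y)
        for x in labels for y in labels
        if x != y and support[x] < support[y]  # strict subset test
    )

objs = [
    {"car", "vehicle", "man-made"},
    {"truck", "vehicle", "man-made"},
    {"building", "man-made"},
]
print(infer_hierarchy(objs))
```

On this toy data the subset test recovers, without any supervised labeling of relationships, that car and truck sit below vehicle, and that vehicle and building sit below man-made.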