957 results for GENESIS (Computer system)


Relevance:

80.00%

Publisher:

Abstract:

Even though web security protocols are designed to make computer communication secure, it is widely known that there is potential for security breakdowns at the human-machine interface. This paper examines findings from a qualitative study investigating how security decisions are identified and made on the web. The study was designed to uncover how security is perceived in an individual user's context. Study participants were tertiary-qualified individuals, with a focus on HCI designers, security professionals and the general population. The study identifies that security frameworks for the web are inadequate from an interaction perspective, with even tertiary-qualified users having a poor or partial understanding of security, of which they themselves are acutely aware. The result is that individuals feel they must protect themselves on the web. The findings contribute a significant mapping of the ways in which individuals reason and act to protect themselves on the web. We use these findings to highlight the need to design for trust at three levels, and the need to ensure that HCI design does not undermine the users' main identified protection mechanism: separation.

Relevance:

80.00%

Publisher:

Abstract:

A fundamental part of many authentication protocols that authenticate a party to a human involves the human recognizing or otherwise processing a message received from that party. Examples include typical implementations of Verified by Visa, in which a message previously stored by the human at a bank is sent back by the bank to authenticate itself to the human, and the expectation that humans will recognize or verify an extended validation certificate in an HTTPS context. This paper presents general definitions and building blocks for the modelling and analysis of human recognition in authentication protocols, allowing the creation of proofs for protocols that include humans. We cover both generalized trawling attacks and human-specific targeted attacks. As examples of the range of uses of our construction, we use the model presented in this paper to prove the security of a mutual authentication login protocol and a human-assisted device pairing protocol.
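
As a concrete illustration of the recognition step described above (the names and stored message below are hypothetical, not taken from the paper), the sketch shows a Verified-by-Visa-style exchange in which the human authenticates the bank by recognizing a previously stored personal assurance message:

```python
# Minimal sketch, assuming a previously stored "personal assurance message";
# the paper's formal model is far more general than this toy check.

stored_at_bank = {"alice": "my cat's name is Turing"}   # set up once by the human

def bank_sends_assurance(user):
    """The bank replays the stored message at login to prove it is the real bank."""
    return stored_at_bank.get(user)

def human_recognizes(shown_message, remembered_message):
    # Security of this step rests entirely on the human's ability to recognize
    # the message -- exactly the ability the paper's definitions model.
    return shown_message == remembered_message

print(human_recognizes(bank_sends_assurance("alice"), "my cat's name is Turing"))  # True
```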

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines its legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. We finally present a scenario of intrusion into a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host level.
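
To make the labelling idea concrete, here is a minimal, hedged sketch (plain Python, not the paper's Linux Security Module; the object and label names are invented for illustration) of taint labels attached to system objects, propagated along an information flow, and checked against a per-group policy:

```python
# Illustrative sketch only: fine-grained taint labels per piece of information,
# propagated on flows and checked against a group-level policy.

class TaintedObject:
    """A system object (file, socket, IPC abstraction, memory mapping) carrying taint labels."""
    def __init__(self, name, labels=None):
        self.name = name
        self.labels = set(labels or ())

def flow(src, dst, policy):
    """Propagate src's labels to dst if the group policy allows each label to reach dst."""
    for label in src.labels:
        if dst.name not in policy.get(label, set()):
            raise PermissionError(f"flow of '{label}' into '{dst.name}' violates the policy")
    dst.labels |= src.labels   # each piece of information keeps its own label

# Example group policy: label -> names of objects/hosts allowed to receive it
policy = {"customer-db": {"app-server", "backup-host"}}

db   = TaintedObject("customer-db-file", {"customer-db"})
sock = TaintedObject("app-server")
flow(db, sock, policy)       # allowed; sock now carries the label
print(sock.labels)           # {'customer-db'}
```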

Relevance:

80.00%

Publisher:

Abstract:

Social networking sites (SNSs), with their large numbers of users and large information base, seem to be perfect breeding grounds for exploiting the vulnerabilities of people, the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as “social engineering.” While technology-based security has been addressed by research and may be well understood, social engineering is more challenging to understand and manage, especially in new environments such as SNSs, owing to characteristics of SNSs that reduce the ability of users to detect attacks and increase the ability of attackers to launch them. This work contributes to the knowledge of social engineering by presenting the first two conceptual models of social engineering attacks in SNSs: a phase-based model and a source-based model, along with an intensive and comprehensive overview of different aspects of social engineering threats in SNSs.

Relevance:

80.00%

Publisher:

Abstract:

While social engineering represents a real and ominous threat to many organizations, companies, governments, and individuals, social networking sites (SNSs) have been identified as among the most common means of social engineering attacks. Owing to factors that reduce the ability of users to detect social engineering tricks and increase the ability of attackers to launch them, SNSs seem to be perfect breeding grounds for exploiting the vulnerabilities of people, the weakest link in security. This work contributes to the knowledge of social engineering by identifying the different entities and sub-entities that affect social engineering-based attacks in SNSs. Moreover, this paper includes an intensive and comprehensive overview of different aspects of social engineering threats in SNSs.

Relevance:

80.00%

Publisher:

Abstract:

There is no doubt that social engineering plays a vital role in compromising most security defenses and in attacks on people, organizations, companies, and even governments. It is the art of deceiving and tricking people into revealing critical information or performing an action that benefits the attacker in some way. Fraudulent and deceptive people have long used social engineering traps and tactics through information technology, such as e-mails, social networks, web sites, and applications, to trick victims into obeying them, accepting threats, and falling victim to various crimes and attacks such as phishing, sexual abuse, financial abuse, identity theft, impersonation, physical crime, and many other forms of attack. Although organizations, researchers, practitioners, and lawyers recognize the severe risk of social engineering-based threats, there is a severe lack of understanding and control of such threats. One side of the problem is perhaps the unclear concept of social engineering itself, as well as the complexity of understanding how humans behave toward, approach, accept, and fail to recognize threats or the deception behind them. The aim of this paper is to define social engineering drawing on theories from related disciplines such as psychology, sociology, information technology, marketing, and behaviourism. We hope, through this work, to help researchers, practitioners, lawyers, and other decision makers gain a fuller picture of social engineering and, therefore, to open new directions of collaboration toward detecting and controlling it.

Relevance:

80.00%

Publisher:

Abstract:

The advances made within the aviation industry over the past several decades have significantly improved the availability, affordability and convenience of air travel and have been greatly beneficial in both social and economic terms. Air transport has developed into an irreplaceable service relied on by millions of people each day, and as such airports have become critical elements of national infrastructure that facilitate the movement of people and goods. As components of critical infrastructure (CI), airports are integral parts of a national economy, supporting regional as well as national trade, commercial activity and employment. Therefore, any disruption or crisis which impacts the continuity of operations at airports can have significant negative consequences for the airport as a business, for the local economy and other nodes of transport infrastructure, as well as for society. Due to the highly dynamic and volatile environment in which airports operate, the aviation industry has faced many different challenges over the years, ranging from terrorist attacks such as September 11, to health crises such as the SARS epidemic, to system breakdowns such as the recent computer system outage at Virgin Blue Airlines in Australia. All these events have highlighted the vulnerability of airport systems to a range of disturbances as well as the gravity and widespread impact of any kind of discontinuity in airport functions. Such incidents thus emphasise the need to increase the resilience and reliability of airports and to ensure business continuity in the event of a crisis...

Relevance:

80.00%

Publisher:

Abstract:

Enterprises, both public and private, have rapidly commenced using the benefits of enterprise resource planning (ERP) combined with business analytics and “open data sets”, which are often outside the control of the enterprise, to gain further efficiencies, build new service operations and increase business activity. In many cases, these business activities are based around relevant software systems hosted in a “cloud computing” environment. “Garbage in, garbage out”, or “GIGO”, is a term dating from the 1960s, long used to describe the problems of unqualified dependency on information systems. A more pertinent variation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems, such as ERP and usage of open datasets in a cloud environment, the ability to verify the authenticity of the data sets used may be almost impossible, resulting in dependence upon questionable results. Illicit data set “impersonation” becomes a reality. At the same time, the ability to audit such results may be an important requirement, particularly in the public sector. This paper discusses the need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment and analyses some current technologies that are offered and which may be appropriate. However, severe limitations in addressing these requirements have been identified, and the paper proposes further research work in the area.

Relevance:

80.00%

Publisher:

Abstract:

An increasing number of countries are faced with an aging population increasingly in need of healthcare services. For any e-health information system, the need for increased trust by such clients, who potentially have little knowledge of any security scheme involved, is paramount. In addition, the scalability of any such system has become a critical aspect of system design, development and ongoing management. Meanwhile, cryptographic systems provide the security provisions needed for confidentiality, authentication, integrity and non-repudiation. Cryptographic key management, however, must be secure, yet efficient and effective in developing an attitude of trust in system users. Digital certificate-based Public Key Infrastructure has long been the technology of choice, or of availability, for information security/assurance; however, there appears to be a notable lack of successful implementations and deployments globally. Moreover, recent issues with the security of associated Certificate Authorities have damaged trust in these schemes. This paper proposes the adoption of a centralised public key registry structure, a non-certificate-based scheme, for large-scale e-health information systems. The proposed structure removes complex certificate management, revocation and certificate validation structures while maintaining overall system security. Moreover, the registry concept may be easier for both healthcare professionals and patients to understand and trust.
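
The contrast with certificate-based PKI can be sketched in a few lines; this is a loose illustration of the registry concept under assumed names, not the paper's actual design:

```python
# Hedged sketch: a trusted central registry maps an identity directly to its
# current public key, so clients perform a lookup instead of building and
# validating a certificate chain or consulting revocation information.

registry = {}   # identity -> public key, maintained by the trusted registry operator

def register(identity, public_key):
    registry[identity] = public_key          # registration stands in for certificate issuance

def revoke(identity):
    registry.pop(identity, None)             # revocation is simply removal from the registry

def lookup(identity):
    try:
        return registry[identity]            # no chain building, no CRL/OCSP checks
    except KeyError:
        raise LookupError(f"no key registered for {identity}")

register("dr.smith@hospital.example", "-----BEGIN PUBLIC KEY----- ...")   # placeholder key material
print(lookup("dr.smith@hospital.example"))
```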

Relevance:

80.00%

Publisher:

Abstract:

Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as by identifying new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a “cloud computing” environment. The over 50-year-old phrase related to mistrust in computer systems, namely “garbage in, garbage out” or “GIGO”, is used to describe the problems of unqualified and unquestioning dependency on information systems. However, a more relevant GIGO interpretation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems based around ERP and open datasets as well as “big data” analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable and unverifiable results. Illicit “impersonation” of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. The pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment is discussed in this paper. Some current and appropriate technologies on offer are also examined. However, severe limitations in addressing the problems identified are found, and the paper proposes further necessary research work for the area. (Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
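
One simple building block for the authenticity and integrity services argued for above is to check a data set's digest against a value recorded by an independent registry or naming service before the data enters the analytics pipeline. The sketch below is only an illustration of that idea, not a technology proposed in the paper:

```python
# Hedged sketch: verify an open data set against a digest registered elsewhere,
# so an "impersonated" or modified data set is rejected before analysis.
import hashlib

def dataset_digest(path):
    """SHA-256 digest of a (possibly large) data set file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path, registered_digest):
    """True only if the local copy matches the digest recorded by the registry."""
    return dataset_digest(path) == registered_digest
```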

Relevance:

80.00%

Publisher:

Abstract:

Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they would require minimal intervention to guarantee high effectiveness. Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, the source of data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: Findings show that Anonym compares well with the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality in the presence of sufficient training data.
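
The sketch below is a heavily simplified stand-in for what a de-identifier does, not the Anonym tool itself: regex patterns of the kind used as pattern-matching features, applied directly as redaction rules, plus the precision/recall measures quoted above.

```python
# Illustration only: Anonym uses CRF classifiers with linguistic, lexical and
# pattern-matching features; here the patterns alone drive a toy redactor.
import re

PATTERNS = {
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
}

def redact(text):
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

def precision_recall(true_ids, predicted_ids):
    """Standard definitions: precision = TP / predicted, recall = TP / actual."""
    tp = len(true_ids & predicted_ids)
    precision = tp / len(predicted_ids) if predicted_ids else 0.0
    recall    = tp / len(true_ids) if true_ids else 0.0
    return precision, recall

print(redact("Seen on 12/03/2014, contact 555-123-4567."))   # Seen on [DATE], contact [PHONE].
```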

Relevance:

80.00%

Publisher:

Abstract:

Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, be it intentionally, by accident, or following (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which a value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving (a form of self-enforcement) and would give binding authorities such as CAs a cryptographic argument to resist legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
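
The extraction property can be illustrated with a toy scheme; the sketch below uses a Schnorr-style linear trick over a prime field purely for illustration, not the paper's sign-agnostic quadratic residue construction, and its parameters are not secure.

```python
# Toy double-authentication-preventing signature: the same per-subject value r
# is reused, so two signatures on one subject with different messages reveal sk.
import hashlib, hmac, secrets

q  = (1 << 127) - 1                      # a Mersenne prime modulus (toy size)
sk = secrets.randbelow(q)                # signer's secret key
k  = secrets.token_bytes(32)             # PRF key deriving per-subject randomness

def H(subject, message):
    return int.from_bytes(hashlib.sha256(subject + b"|" + message).digest(), "big") % q

def sign(subject, message):
    r = int.from_bytes(hmac.new(k, subject, "sha256").digest(), "big") % q
    return (r + H(subject, message) * sk) % q     # same r whenever the subject repeats

def extract(subject, m1, s1, m2, s2):
    """Two signatures on one subject but different messages leak the secret key."""
    h1, h2 = H(subject, m1), H(subject, m2)
    return ((s1 - s2) * pow((h1 - h2) % q, -1, q)) % q

s1 = sign(b"example.com", b"public key A")
s2 = sign(b"example.com", b"public key B")
assert extract(b"example.com", b"public key A", s1, b"public key B", s2) == sk
```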

Relevance:

80.00%

Publisher:

Abstract:

Distributed Network Protocol Version 3 (DNP3) is the de facto communication protocol for power grids. Standards-based interoperability among devices has made the protocol useful to other infrastructures such as water, sewage, oil and gas. DNP3 is designed to facilitate interaction between master stations and outstations. In this paper, we apply a formal modelling methodology called Coloured Petri Nets (CPN) to create an executable model of the DNP3 protocol. The model facilitates analysis of the protocol to ensure that it behaves as expected. We also illustrate how to verify and validate the behaviour of the protocol, using the CPN model and the corresponding state space tool to determine whether there are insecure states. With this approach, we were able to identify a Denial of Service (DoS) attack against the DNP3 protocol.
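
CPN models are written in CPN ML and analysed with CPN Tools, so the Python fragment below is only a loose analogue of the state-space step: enumerating reachable states of a toy master/outstation model and flagging one that the specification would consider insecure. It is not the paper's DNP3 model.

```python
# Toy reachability analysis: breadth-first enumeration of protocol states,
# stopping when a state violating the intended property is found.
from collections import deque

def transitions(state):
    authenticated, pending = state
    yield (authenticated, pending + 1)            # master sends a request
    yield (True, pending)                         # outstation authenticates the master
    if pending:
        yield (authenticated, pending - 1)        # outstation answers a pending request

def insecure(state):
    authenticated, pending = state
    return not authenticated and pending > 0      # traffic handled without authentication

seen, frontier = set(), deque([(False, 0)])       # initial state: unauthenticated, no requests
while frontier:
    state = frontier.popleft()
    if state in seen or state[1] > 3:             # bound the toy state space
        continue
    seen.add(state)
    if insecure(state):
        print("insecure state reachable:", state)
        break
    frontier.extend(transitions(state))
```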

Relevance:

80.00%

Publisher:

Abstract:

Secure protocols for password-based user authentication are well studied in the cryptographic literature but have failed to see widespread adoption on the Internet; most proposals to date require extensive modifications to the Transport Layer Security (TLS) protocol, making deployment challenging. Recently, a few modular designs have been proposed in which a cryptographically secure password-based mutual authentication protocol is run inside a confidential (but not necessarily authenticated) channel such as TLS; the password protocol is bound to the established channel to prevent active attacks. Such protocols are useful in practice for a variety of reasons: security no longer relies on users' ability to validate server certificates, and the design can potentially be implemented with no modifications to the secure channel protocol library. We provide a systematic study of such authentication protocols. Building on recent advances in modelling TLS, we give a formal definition of the intended security goal, which we call password-authenticated and confidential channel establishment (PACCE). We show generically that combining a secure channel protocol, such as TLS, with a password authentication protocol, where the two protocols are bound together using either the transcript of the secure channel's handshake or the server's certificate, results in a secure PACCE protocol. Our prototype based on TLS is available as a cross-platform client-side Firefox browser extension and a server-side web application, both of which can easily be installed on deployed web browsers and servers.
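
The binding idea can be sketched as follows; this is a rough illustration under assumed names, not the paper's protocol or its proof, and a real design would run a password-authenticated key exchange rather than a bare HMAC over the password.

```python
# Hedged sketch: the password-based authenticator covers a channel-binding
# value (e.g. a hash of the TLS handshake transcript or of the server's
# certificate), so a relay over a different channel yields a tag that fails.
import hashlib, hmac

def channel_binding(handshake_transcript: bytes) -> bytes:
    return hashlib.sha256(handshake_transcript).digest()

def client_tag(password: bytes, binding: bytes) -> bytes:
    return hmac.new(password, b"PACCE-client|" + binding, "sha256").digest()

def server_verify(password: bytes, binding: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(client_tag(password, binding), tag)

cb  = channel_binding(b"...handshake bytes as seen by the client...")
tag = client_tag(b"correct horse battery staple", cb)
print(server_verify(b"correct horse battery staple", cb, tag))   # True only on the same channel
```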

Relevance:

80.00%

Publisher:

Abstract:

Building information models have created a paradigm shift in how buildings are built and managed by providing a dynamic repository for building data that is useful in many new operational scenarios. This change has also created an opportunity to use building information models as an integral part of security operations, and especially as a tool to facilitate fine-grained access control to building spaces in smart buildings and critical infrastructure environments. In this paper, we identify the requirements for a security policy model for such an access control system and discuss why existing policy models are not suitable for this application. We propose a new policy language extension to XACML, with BIM-specific data types and functions based on the IFC specification, which we call BIM-XACML.
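
The paper's contribution is an XACML extension rather than program code, but the kind of fine-grained, space-aware decision such policies express can be hinted at with a small sketch; the space identifiers, roles and default-deny rule below are illustrative assumptions only.

```python
# Illustrative sketch of a space-aware access decision in the spirit of
# XACML's deny-unless-permit combining; not BIM-XACML or the IFC data types.

building_spaces = {"room-3.14": {"zone": "server-room", "floor": 3}}

policy_rules = [
    # (role, zone, effect)
    ("facility-manager", "server-room", "Permit"),
    ("visitor",          "server-room", "Deny"),
]

def access_decision(role, space_id):
    zone = building_spaces[space_id]["zone"]
    for rule_role, rule_zone, effect in policy_rules:
        if rule_role == role and rule_zone == zone:
            return effect
    return "Deny"                                 # default deny when no rule matches

print(access_decision("visitor", "room-3.14"))             # Deny
print(access_decision("facility-manager", "room-3.14"))    # Permit
```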