858 results for IT Security, Internet, Personal Firewall, Security Mechanism, Security System, Security Threat, Security Usability, Security Vulnerability


Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is an emerging technology that brings the power of the cloud to many technical solutions. E-learning is one such technology: implementing cloud capabilities in an existing e-learning system enhances the functionality offered to e-learners. Cloud technology has numerous advantages over traditional e-learning systems. However, security is a major concern in cloud-based e-learning, so security measures are essential to prevent the loss of users' valuable data through security vulnerabilities. This paper investigates the security issues involved in cloud-based e-learning technology, with the aim of suggesting remedies in the form of security measures and security management standards. These will help to overcome the security threats in cloud-based e-learning technology. Solving these key problems will also encourage the widespread adoption of cloud computing in educational institutes.

Relevance:

100.00%

Publisher:

Abstract:

New national infrastructure initiatives such as the National Broadband Network (NBN) allow small and medium-sized enterprises (SMEs) in Australia to have greater access to cost-effective Cloud computing. However, the ability of Cloud computing to store data remotely and share services in a dynamic environment brings with it security and privacy concerns. Evaluating these concerns is critical to addressing the underutilisation of Cloud computing and leveraging the benefits of the costly NBN investment. This paper examines the influence of privacy and security factors on Cloud adoption by Australian SMEs in metropolitan and regional areas. Data were collected from 150 Australian SMEs (specifically, 79 metropolitan and 71 regional SMEs) and structural equation modelling was used for the analysis. The findings reveal that privacy and security factors do not significantly influence the decision-making of Australian SMEs in the adoption of Cloud computing. Moreover, the results indicate that Cloud computing adoption is not influenced by the geographical location (i.e., metropolitan or regional) of the SMEs. The findings extend the current understanding of Cloud computing adoption by Australian SMEs. The results will be useful to SMEs, Cloud service providers and policy makers devising Cloud security and privacy policies.

Relevance:

100.00%

Publisher:

Abstract:

This study examines Quantum Key Distribution (QKD), a new invention with the potential to bring sweeping changes to information security. The authors' goal is to present QKD as a product in the field of IT security and to examine the arguments for and against a corporate decision to install it. Their work takes both a technical and a business perspective. They first identify the factors that motivate using Quantum Key Distribution over traditional methods, and assess under which circumstances QKD would need to be used in daily business. Furthermore, to evaluate the limits of its broad adoption, they survey the vendors and the properties of commercially available QKD products. Bearing all this in mind, they identify numerous factors that can influence corporate decision-making regarding the installation of QKD. The authors compare the traditional and the new tools of key distribution from an IT-security and business perspective. They also estimate the value of the information to be protected, which could support a subsequent cost-benefit analysis, and from this derive which organizations form QKD's potential target group. Finally, the authors attempt to determine the ideal moment for an organization to invest in Quantum Key Distribution.
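As background for the product discussion, the key-establishment idea behind QKD can be illustrated with the sifting phase of BB84, the canonical QKD protocol. This is an idealized sketch, with no eavesdropper, channel noise, error correction, or privacy amplification, and it is not specific to any vendor's product:

```python
import random

def bb84_sift(n_bits, seed=None):
    """Simulate the sifting phase of BB84 (idealized: no eavesdropper, no noise).

    Alice encodes each random bit in a randomly chosen basis ('+' or 'x');
    Bob measures in his own random basis.  Where the bases happen to match,
    Bob's outcome equals Alice's bit and the bit joins the shared sifted
    key; mismatched positions are publicly discarded.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]
    # With no noise or eavesdropping, matching bases give identical bits.
    sifted = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
              if a == b]
    return alice_bits, alice_bases, bob_bases, sifted
```

On average about half the positions survive sifting; real systems follow this with error estimation, reconciliation, and privacy amplification before the key is usable.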

Relevance:

100.00%

Publisher:

Abstract:

Collaborative sharing of information is an increasingly needed technique for achieving complex goals in today's fast-paced, technology-dominated world. Personal Health Record (PHR) systems have become a popular research area for sharing patients' information quickly among health professionals. PHR systems store and process sensitive information, so proper security mechanisms are needed to protect patients' private data. Thus, the access control mechanisms of a PHR should be well defined, and PHRs should be stored in encrypted form. Cryptographic schemes that enforce access policies based on user attributes offer a suitable solution for this purpose. Since attribute-based encryption can resolve these problems, we propose a patient-centric framework that protects PHRs against untrusted service providers and malicious users. In this framework, we use the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme as an efficient cryptographic technique, enhancing the security and privacy of the system as well as enabling access revocation. Patients can encrypt their PHRs and store them on untrusted storage servers. They also maintain full control over access to their PHR data by assigning attribute-based access control to selected data users and revoking unauthorized users instantly. To evaluate our system, we implemented a CP-ABE library and web services as part of our framework. We also developed an Android application based on the framework that allows users to register with the system, encrypt their PHR data and upload it to the server, while authorized users can download PHR data and decrypt it. Finally, we present experimental results and a performance analysis, which show that deploying the proposed system would be practical.
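The actual CP-ABE construction rests on pairing-based cryptography and is beyond a short listing, but the ciphertext-policy idea itself, that a decryptor succeeds only if its attribute set satisfies the policy embedded in the ciphertext, can be sketched as plain policy-tree evaluation. The attribute names below are invented for illustration:

```python
def satisfies(policy, attributes):
    """Evaluate a CP-ABE-style access policy tree against a set of attributes.

    A policy is either a single attribute name (a string), or a tuple
    ("and" | "or", [child policies]).  In real CP-ABE this check is what
    the cryptography enforces: decryption only works when it holds.
    """
    if isinstance(policy, str):
        return policy in attributes
    op, children = policy
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "and" else any(results)

# Hypothetical policy a patient might attach to a record:
# doctor AND (cardiology OR emergency)
policy = ("and", ["doctor", ("or", ["cardiology", "emergency"])])
```

A cardiologist with attributes `{"doctor", "cardiology"}` satisfies this policy, while a user holding only `{"doctor"}` or `{"nurse", "emergency"}` does not.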

Relevance:

100.00%

Publisher:

Abstract:

Privacy is an important component of freedom and plays a key role in protecting fundamental human rights. It is becoming increasingly difficult to ignore the fact that without appropriate levels of privacy, a person's rights are diminished. Users want to protect their privacy, particularly in privacy-invasive areas such as social networks. However, social network users seldom know how to protect their own privacy through online mechanisms. What is required is an emerging concept that gives users legitimate control over their own personal information, whilst preserving and maintaining the advantages of engaging with online services such as social networks. This paper reviews Privacy by Design (PbD) and shows how it applies to diverse privacy areas. Such an approach will help mitigate many of the privacy issues in online information systems and can be a potential pathway for protecting users' personal information. The research has posed many questions in need of further investigation for different open-source distributed social networks. Findings from this research will lead to a novel distributed architecture that provides more transparent and accountable privacy for the users of online information systems.

Relevance:

100.00%

Publisher:

Abstract:

High-end network security applications demand high-speed operation and large rule-set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput. We have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to the Distributed Crossproducting of Field Labels (DCFL) technique to achieve a scalable and high-performance architecture. The implemented Firewall takes advantage of the inherent structure and redundancy of the rule set by using our DCFL Extended (DCFLE) algorithm. The use of the DCFLE algorithm results in both speed and area improvements when implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High-throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Using TCAM for port-range matching is expensive, as the range-to-prefix conversion results in a large number of prefixes, leading to storage inefficiency. Extended TCAM (ETCAM) is fast and the most storage-efficient solution for range matching. We present, for the first time, a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on a Virtex-II Pro FPGA based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with a 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of the logic resources and slightly over one third of the memory resources of the XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packets/s, corresponding to a 16 Gbps link rate for the worst-case packet size. A Firewall rule update involves only memory re-initialization in software, without any hardware change.
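The storage blow-up from range-to-prefix conversion mentioned above is easy to reproduce. The following is a sketch of the conventional expansion of a port range into ternary prefixes (not the paper's DCFLE or ETCAM design):

```python
def range_to_prefixes(lo, hi, bits=16):
    """Split an inclusive integer range into ternary prefixes.

    Each prefix is returned as (value, prefix_len): the top prefix_len
    bits of a bits-wide field are fixed, the rest are wildcards.  A
    conventional TCAM needs one entry per prefix, which is where the
    storage inefficiency of range matching comes from.
    """
    prefixes = []
    while lo <= hi:
        # Largest power-of-two aligned block that starts at lo...
        size = lo & -lo if lo else 1 << bits
        # ...and still fits inside the remaining range.
        while size > hi - lo + 1:
            size >>= 1
        prefixes.append((lo, bits - size.bit_length() + 1))
        lo += size
    return prefixes
```

For example, the common rule "port >= 1024" (range 1024-65535) expands to 6 prefixes, while the worst case for 16-bit ports, the range 1-65534, expands to 30 entries.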

Relevance:

100.00%

Publisher:

Abstract:

The SafeWeb anonymizing system has been lauded by the press and loved by its users; self-described as "the most widely used online privacy service in the world," it served over 3,000,000 page views per day at its peak. SafeWeb was designed to defeat content blocking by firewalls and to defeat Web server attempts to identify users, all without degrading Web site behavior or requiring users to install specialized software. In this article we describe how these fundamentally incompatible requirements were realized in SafeWeb's architecture, resulting in spectacular failure modes under simple JavaScript attacks. These exploits allow adversaries to turn SafeWeb into a weapon against its users, inflicting more damage on them than would have been possible if they had never relied on SafeWeb technology. By bringing these problems to light, we hope to remind readers of the chasm that continues to separate popular and technical notions of security.
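The class of failure described here can be illustrated with a toy rewriting pass. The proxy endpoint below is hypothetical, and SafeWeb's actual rewriter was far more elaborate, but the structural problem is the same: a URL that a script assembles at runtime never appears literally in the page source, so it escapes rewriting and the browser contacts the origin directly, unmasking the user.

```python
import re

# Hypothetical rewriting endpoint, for illustration only.
PROXY = "https://proxy.example/fetch?u="

def rewrite_links(html):
    """Naive rewriting-proxy pass: prefix every literal URL with the proxy.

    Only URLs visible as literals in the page text are caught; anything a
    script constructs at runtime passes through untouched.
    """
    return re.sub(r'https?://[^\s"\'<>]+',
                  lambda m: PROXY + m.group(0), html)

page = '''<a href="http://example.org/page">link</a>
<script>var u = "http" + "://tracker" + ".example/beacon";
new Image().src = u;</script>'''
```

Running `rewrite_links(page)` rewrites the static anchor but leaves the concatenated beacon URL intact, so the fetch it triggers bypasses the proxy entirely.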

Relevance:

100.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, and so on. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session, and therefore highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures the sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation; large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users, using two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices and the analysis of network traffic.
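The n-gram profiling idea can be sketched in a few lines. The paper's actual models, features, and web-log parsing are richer, and the action names below are invented for illustration:

```python
from collections import Counter

def ngrams(actions, n=2):
    """All contiguous n-grams of a sequence of user actions."""
    return list(zip(*(actions[i:] for i in range(n))))

def build_profile(sessions, n=2):
    """Profile = frequency of action n-grams across a user's past sessions."""
    profile = Counter()
    for s in sessions:
        profile.update(ngrams(s, n))
    return profile

def anomaly_score(profile, session, n=2):
    """Fraction of the session's n-grams never seen in the profile.

    0.0 means behaviour fully consistent with history; values near 1.0
    mean a large deviation, a possible sign of a hijacked session.
    """
    grams = ngrams(session, n)
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / len(grams)
```

A session replaying familiar action patterns scores 0.0, while a session made of never-before-seen action pairs scores 1.0 and would be flagged for review.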

Relevance:

100.00%

Publisher:

Abstract:

Statutory adjudication was introduced in security of payment legislation to resolve payment disputes in the construction industry quickly and fairly. One interesting feature of some legislation is an express, limited right for aggrieved parties to apply for review of erroneous adjudication decisions. The Singapore legislation has no equivalent elsewhere in that it provides a full review mechanism for erroneous determinations, recognising that adjudicators often have to grapple with complex issues and a sheer volume of documents within a very tight timeframe. This paper discusses the various review mechanisms for erroneous adjudication determinations, then asks whether an appropriately devised legislative review mechanism on the merits should be an essential characteristic of any effective statutory adjudication scheme. The paper concludes by making the case that an appropriately designed review mechanism, as proposed in the paper, could be the most pragmatic and effective measure to improve the quality of adjudication outcomes and increase disputants' confidence in statutory adjudication. This paper is based upon a paper by the author which received a High Commendation in the Student Division of the Society of Construction Law Australia Brooking Prize for 2016.

Relevance:

100.00%

Publisher:

Abstract:

Smartphones became targets for malware in June 2004, and the malware count increased steadily until the introduction of a mandatory application-signing mechanism for Symbian OS in 2006. From that point on, little news could be read on this topic. Even with new emerging smartphone platforms such as Android and iPhone, malware writers seemed to lose interest in writing malware for smartphones, giving users an inappropriate feeling of safety. In this paper, we revisit smartphone malware evolution, completing the list of appearances until the end of 2008. To contribute to smartphone malware research, we extend this list with descriptions of possible techniques for creating the first malware for the Android platform. Our approach involves the use of undocumented Android functions that enable us to execute native Linux applications even on retail Android devices. This can be exploited to create malicious Linux applications and daemons that use various methods to attack a device. In this manner, we also show that it is possible to bypass the Android permission system by using native Linux applications.

Relevance:

100.00%

Publisher:

Abstract:

Due to the popularity of security cameras in public places, it is of interest to design an intelligent system that can efficiently detect events automatically. This paper proposes a novel algorithm for multi-person event detection. To ensure greater than real-time performance, features are extracted directly from compressed MPEG video. A novel histogram-based feature descriptor that captures the angles between extracted particle trajectories is proposed, which allows us to capture the motion patterns of multi-person events in the video. To alleviate the need for fine-grained annotation, we propose the use of Labelled Latent Dirichlet Allocation, a weakly supervised method that allows the use of coarse temporal annotations, which are much simpler to obtain. This novel system is able to run at approximately ten times real-time, while preserving state-of-the-art detection performance for multi-person events on a 100-hour real-world surveillance dataset (TRECVid SED).
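The shape of an angle-based trajectory descriptor can be illustrated as follows. This simplification works on net trajectory directions over invented (x, y) point lists rather than the particle trajectories extracted from MPEG motion vectors, so it sketches the feature's structure, not the paper's exact pipeline:

```python
import math

def direction(traj):
    """Net direction (angle in radians) of a trajectory of (x, y) points."""
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    return math.atan2(y1 - y0, x1 - x0)

def angle_histogram(trajectories, bins=8):
    """Histogram over pairwise angular differences between trajectories.

    Each pair contributes the absolute angle between their net directions,
    folded into [0, pi] and binned: a fixed-length descriptor of how
    people move relative to each other (parallel, converging, opposing).
    """
    hist = [0] * bins
    dirs = [direction(t) for t in trajectories]
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            d = abs(dirs[i] - dirs[j]) % (2 * math.pi)
            if d > math.pi:
                d = 2 * math.pi - d
            b = min(int(d / math.pi * bins), bins - 1)
            hist[b] += 1
    return hist
```

Two people walking in parallel put all the mass in the first bin, while two people walking toward each other put it in the last bin, so events such as meetings or crowd convergence leave distinct histogram signatures.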

Relevance:

100.00%

Publisher:

Abstract:

IT consumerization is both a major opportunity and a significant challenge for organizations. However, IS research has so far hardly discussed the implications for IT management. In this paper we address this topic by empirically identifying organizational themes of IT consumerization and conceptually exploring their direct and indirect effects on the business value of IT, IT capabilities, and the IT function. More specifically, based on two case studies, we identify eight organizational themes: consumer IT strategy, policy development and responsibilities, consideration of employees' private lives, user involvement in IT-related processes, individualization, updated IT infrastructure, end-user support, and data and system security. The contributions of this paper are: (1) the identification of organizational themes for IT consumerization; (2) the proposed effects on the business value of IT, IT capabilities and the IT function; and (3) the combination of empirical insights into IT consumerization with managerial theories in the IS discipline.

Relevance:

100.00%

Publisher:

Abstract:

The Legacy of Poverty: a study of the substance and continuity of cultural knowledge in Finnish biographical and proverbial texts. The study focuses on the cultural knowledge and shared understanding that ordinary people, "folk", have of the concepts and ideas surrounding rural poverty in Finland. Throughout the 19th century and well into the 20th century, the majority of the population remained agrarian and poor. By the 1950s, most people still lived in rural areas, and a majority of them earned their living primarily from agriculture and forestry. Urbanization proceeded rapidly from the 1960s onwards. Even though the Nordic welfare state was firmly established in Finland by the 1970s, old forms of agrarian poverty remained in the culture. The source material for the study consists of 99 biographies and 502 proverbs; the biographical texts include written autobiographies and interviewed biographies. The primary analytical concept is called "poverty speech". The poverty speech has been analyzed through three questions: What connotations do people attach to poverty when they speak about it? What sort of social relations arise when people speak about poverty? How is the past experience of poverty constructed in the present, welfare-state context? Cultural knowledge is a theoretical and analytical tool that enables people to categorize information, and the three questions above are crucial in revealing the schematic structure that people use to communicate about agrarian poverty. Categories are analyzed and processed in terms of cultural themes that contain the ideals and stereotypes of spoken motifs and sub-themes. Applying these theoretical and analytical premises to the poverty speech has revealed four cultural themes. The first theme is Power: the social connotations in the poverty speech mostly concern relations to the better-off, for poverty does not exist without an awareness of welfare, i.e. the understanding of a certain standard of welfare above one's own. The second theme concerns family ties as a resource and welfare network: in poverty speech, marriage is represented as a means to upgrade one's livelihood, and family members are described as supporting one another but at the same time as being antagonists. The third theme, Work, represents the work ethic connected to poverty: hard work is attached to eligibility for "a good life", which in Finland meant becoming an owner-occupier of a cottage or a flat. The fourth theme is Security: resentment of unfair treatment is expressed through moral superiority and rational explanations. The ruling classes of agrarian society are portrayed as evil and selfish, with no social conscience, because they did not provide enough assistance to those who needed it. In the period when the welfare benefit system was undeveloped, the poor expected wealthier people to contribute to the distribution of material wealth. On the premises of cultural knowledge, both oral and written traditions are about human thinking: they deal with topics, ideas and evaluations that are relevant to their bearers. Many elements expressed in poverty speech, such as classifications and customs derived from the rural world, have been carried over to the next generation in newer contexts and a different cultural environment.
Keywords: cultural knowledge, cognitive categorization, poverty, life stories, proverbs

Relevance:

100.00%

Publisher:

Abstract:

A routing protocol in a mobile ad hoc network (MANET) should be secure against both outside attackers, which do not hold valid security credentials, and inside attackers, which are compromised nodes in the network. Outside attackers can be prevented with the help of an efficient key management protocol and cryptography; to prevent inside attackers, however, these must be accompanied by an intrusion detection system (IDS). In this paper, we propose a novel secure routing with integrated localized key management (SR-LKM) protocol, which aims to prevent both inside and outside attackers. The localized key management mechanism is not dependent on any routing protocol; thus, unlike many other existing schemes, the protocol does not suffer from the key management / secure routing interdependency problem. The key management mechanism is lightweight, as it optimizes the use of public key cryptography with the help of a novel neighbor-based handshaking and Least Common Multiple (LCM) based broadcast key distribution mechanism. The protocol is storage-scalable and its efficiency is confirmed by the results obtained from simulation experiments.