99 results for Internet security applications
Abstract:
There is value in using brands to build relationships with customers and improve brand performance on the web. Products and services are easily replicated; therefore, to simplify the buyer's decision-making process, brands have become important. Building strong brands matters because they can create contrasts with other brands, connections to consumers, and relevance through building customer relationships. Branding in an online environment is important for three reasons: security, recognition and associated costs.
Abstract:
We present a new penalty-based genetic algorithm for the multi-source and multi-sink minimum vertex cut problem, and illustrate the algorithm's usefulness with two real-world applications. The paper proves that, by exploiting domain-specific knowledge, the genetic algorithm always produces a feasible solution. The genetic algorithm has been implemented on the example applications and evaluated to show how well it scales as the problem size increases.
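A minimal sketch of what such a penalty-based genetic algorithm with a feasibility repair step could look like (the encoding, operators, and helper names `is_cut`, `repair` and `ga_min_vertex_cut` are illustrative assumptions, not the paper's implementation):

```python
import random

def is_cut(adj, sources, sinks, removed):
    """True if removing `removed` disconnects every source from every sink."""
    stack = [s for s in sources if s not in removed]
    seen = set(stack)
    while stack:
        v = stack.pop()
        if v in sinks:
            return False
        for w in adj.get(v, ()):
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return True

def repair(adj, sources, sinks, removed, candidates):
    """Domain-specific repair: add candidate vertices until the set is a cut,
    so every individual evaluated by the GA is feasible (assumes no source is
    directly adjacent to a sink)."""
    removed = set(removed)
    for v in candidates:
        if is_cut(adj, sources, sinks, removed):
            break
        removed.add(v)
    return removed

def fitness(adj, sources, sinks, removed, penalty=10**6):
    """Smaller is better: cut size, plus a large penalty if not a valid cut."""
    return len(removed) + (0 if is_cut(adj, sources, sinks, removed) else penalty)

def ga_min_vertex_cut(adj, sources, sinks, pop_size=30, generations=200):
    candidates = [v for v in adj if v not in sources and v not in sinks]
    rand_individual = lambda: {v for v in candidates if random.random() < 0.5}
    pop = [repair(adj, sources, sinks, rand_individual(), candidates)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(adj, sources, sinks, ind))
        parents, children = pop[:pop_size // 2], []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {v for v in candidates
                     if (v in a if random.random() < 0.5 else v in b)  # uniform crossover
                     or random.random() < 0.02}                        # mutation: add vertex
            children.append(repair(adj, sources, sinks, child, candidates))
        pop = parents + children
    return min(pop, key=lambda ind: fitness(adj, sources, sinks, ind))
```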
Abstract:
The Open and Trusted Health Information Systems (OTHIS) Research Group was formed in response to the health sector's privacy and security requirements for contemporary Health Information Systems (HIS). Given recent research developments in trusted computing concepts, it is now both timely and desirable to move electronic HIS towards privacy-aware and security-aware applications. We introduce the OTHIS architecture in this paper. The scheme proposes a feasible and sustainable solution for meeting real-world application security demands using commercial off-the-shelf systems and commodity hardware and software products.
Abstract:
Web 1.0 referred to the early, read-only internet; Web 2.0 refers to the ‘read-write web’ in which users actively contribute to as well as consume online content; Web 3.0 is now being used to refer to the convergence of mobile and Web 2.0 technologies and applications. One of the most important developments in mobile 3.0 is geography: with many mobile phones now equipped with GPS, mobiles promise to “bring the internet down to earth” through geographically-aware, or locative, media. The internet was earlier heralded as “the death of geography”, with predictions that, with anyone able to access information from anywhere, geography would no longer matter. But mobiles are disproving this. GPS allows the location of the user to be pinpointed, and the mobile internet allows the user to access locally-relevant information, or to upload content which is geotagged to the specific location. It also allows locally-specific content to be sent to the user when the user enters a specific space. Location-based services are one of the fastest-growing segments of the mobile internet market: the 2008 AIMIA report indicates that user access of local maps increased by 347% over the previous 12 months, and restaurant guides/reviews increased by 174%. The central tenet of cultural geography is that places are culturally constructed, comprising the physical space itself, culturally-inflected perceptions of that space, and people’s experiences of the space (Lefebvre 1991). This paper takes a cultural geographical approach to locative media, anatomising the various spaces which have emerged through locative media, or “the geoweb” (Lake 2004). The geoweb is such a new concept that to date, critical discourse has treated it as a somewhat homogenous spatial formation. In order to counter this, and in order to demonstrate the dynamic complexity of the emerging spaces of the geoweb, the paper provides a topography of different types of locative media space: including the personal/aesthetic, in which individual users geotag specific physical sites with their own content and meanings; the commercial, like the billboards which speak to individuals as they pass in Minority Report; and the social, in which one’s location is defined by the proximity of friends rather than by geography.
Abstract:
The traditional means of isolating applications from each other is the operating system's “process” abstraction. However, as applications now consist of multiple fine-grained components, the traditional process abstraction model is proving insufficient to ensure this isolation. Statistics indicate that a high percentage of software failures occur due to the propagation of component failures. These observations are further bolstered by the attempts of modern Internet browser developers, for example, to adopt multi-process architectures in order to increase robustness. Therefore, a fresh look at the available options for isolating program components is necessary, and this paper provides an overview of previous and current research in the area.
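As a simplified illustration of the multi-process isolation idea described above (not code from the paper), the sketch below runs a component in its own OS process so that even a hard crash is observed by the host rather than propagated to it:

```python
import multiprocessing as mp
import os

def risky_component(data):
    """A hypothetical component; 'bad' input simulates a hard crash."""
    if data == "bad":
        os._exit(1)          # abrupt termination of the component's process only
    print(data.upper())

def run_isolated(func, arg):
    """Run the component in a separate process and report its exit code."""
    worker = mp.Process(target=func, args=(arg,))
    worker.start()
    worker.join()
    return worker.exitcode   # a non-zero exit is observed by the host, not inherited

if __name__ == "__main__":
    print(run_isolated(risky_component, "ok"))   # 0: component ran normally
    print(run_isolated(risky_component, "bad"))  # 1: failure contained in the child
```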
Abstract:
Monitoring Internet traffic is critical to acquiring a good understanding of threats to computer and network security and to designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to the design of new honeypots or the optimization of existing ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is analyzed in time, give early signs of new vulnerabilities or outbreaks of new automated malicious code, such as worms. The outcomes of this research include:
• Identification of repeated use of attack tools and attack processes through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• Application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;
• Detection of new attacks in low-interaction honeypot traffic through the use of the principal components' residual space and the squared prediction error statistic;
• Real-time detection of new attacks using recursive principal component analysis;
• A proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
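The residual-space detection step listed above can be illustrated with a small sketch, assuming simple per-observation feature vectors and an empirical control limit (the thesis derives its threshold differently): principal components are fitted on reference traffic, and a new observation is flagged when its squared prediction error (SPE) in the residual subspace exceeds the limit.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit principal components on reference traffic feature vectors (rows of X)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_components].T            # retained loadings, shape (d, k)
    return mean, P

def spe(x, mean, P):
    """Squared prediction error of one feature vector in the residual subspace."""
    xc = x - mean
    residual = xc - P @ (P.T @ xc)     # remove the part explained by the retained PCs
    return float(residual @ residual)

def detect_new_attacks(X_ref, X_new, n_components=3, quantile=0.99):
    """Flag rows of X_new whose SPE exceeds an empirical control limit
    estimated from the reference traffic."""
    mean, P = fit_pca(X_ref, n_components)
    limit = np.quantile([spe(x, mean, P) for x in X_ref], quantile)
    return [spe(x, mean, P) > limit for x in X_new]
```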
Abstract:
The technological environment in which contemporary small and medium-sized enterprises (SMEs) operate can only be described as dynamic. The exponential rate of technological change, characterised by perceived increases in the benefits associated with various technologies, shortening product life cycles and changing standards, provides a complex and challenging operational context for the SME. The primary aim of this research was to concentrate on those SMEs that had already adopted technology, in order to identify their needs for the new mobile data technologies (MDT), the mobile Internet. The research design utilised a mixed approach whereby both qualitative and quantitative data were collected to address the question. Overall, the needs of these SMEs for MDT can be conceptualised into three areas where the technology will assist business practices: communication, eCommerce and security.
Abstract:
GMPLS is a generalized form of MPLS (MultiProtocol Label Switching). MPLS is IP-packet based and uses MPLS-TE for packet traffic engineering. GMPLS extends MPLS capabilities: it provides separation between the transmission, control and management planes and supports network management. The control plane enables applications such as traffic engineering, service provisioning, and differentiated services. The GMPLS control plane architecture includes signaling (RSVP-TE, CR-LDP) and routing (OSPF-TE, ISIS-TE) protocols. This paper provides an overview of the signaling protocols, describes their main functionalities, and provides a general evaluation of both protocols.
Abstract:
This article presents the design and implementation of a trusted sensor node that provides Internet-grade security at low system cost. We describe trustedFleck, which uses a commodity Trusted Platform Module (TPM) chip to extend the capabilities of a standard wireless sensor node to provide security services such as message integrity, confidentiality, authenticity, and system integrity based on RSA public-key and XTEA-based symmetric-key cryptography. In addition, trustedFleck provides secure storage of private keys and platform configuration registers (PCRs) to store system configurations and detect code tampering. We analyze system performance using metrics that are important for WSN applications, such as computation time, memory size, energy consumption and cost. Our results show that trustedFleck significantly outperforms previous approaches (e.g., TinyECC) in terms of these metrics while providing stronger security levels. Finally, we describe a number of applications built on trustedFleck: symmetric key management, secure RPC, secure software update, and remote attestation.
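To make the symmetric-key side concrete, below is the textbook XTEA block-encryption routine (64-bit block, 128-bit key, 32 rounds); it illustrates the cipher family named in the abstract rather than trustedFleck's embedded implementation:

```python
def xtea_encrypt_block(block, key, rounds=32):
    """Encrypt one 64-bit block given as two 32-bit words, with a 128-bit key
    given as four 32-bit words (standard XTEA, delta = 0x9E3779B9)."""
    mask, delta = 0xFFFFFFFF, 0x9E3779B9
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        t = ((((v1 << 4) & mask) ^ (v1 >> 5)) + v1) & mask
        v0 = (v0 + (t ^ ((s + key[s & 3]) & mask))) & mask
        s = (s + delta) & mask
        t = ((((v0 << 4) & mask) ^ (v0 >> 5)) + v0) & mask
        v1 = (v1 + (t ^ ((s + key[(s >> 11) & 3]) & mask))) & mask
    return v0, v1

# Example: encrypt an arbitrary block under an all-zero key.
print([hex(w) for w in xtea_encrypt_block((0x01234567, 0x89ABCDEF), (0, 0, 0, 0))])
```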
Abstract:
Today’s evolving networks are experiencing a large number of different attacks, ranging from system break-ins and infection by automatic attack tools such as worms, viruses and Trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether or not they have been legitimately allocated. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such an analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces and so unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window based non-parametric cumulative sum (CUSUM) change detection technique for identifying changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
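A minimal sketch of the general technique (a sliding-window, non-parametric CUSUM whose reference statistics adapt over time); the window size, allowance `k`, and decision threshold `h` are illustrative assumptions, not the dissertation's parameters:

```python
from collections import deque

def cusum_monitor(samples, window=60, k=0.5, h=5.0):
    """Yield (index, state) with state 'normal', 'change', or 'change-end'.
    The reference mean/std are re-estimated from a sliding window of recent
    samples, so the decision boundary in raw-traffic units adapts over time
    (a simplified stand-in for the dissertation's dynamic thresholds)."""
    history = deque(maxlen=window)
    g, in_change = 0.0, False
    for i, x in enumerate(samples):
        state = "normal"
        if len(history) >= window // 2:
            mu = sum(history) / len(history)
            var = sum((v - mu) ** 2 for v in history) / len(history)
            sigma = var ** 0.5 or 1.0
            # Accumulate positive standardized drift beyond the allowance k.
            g = max(0.0, g + (x - mu) / sigma - k)
            if not in_change and g > h:
                in_change, state = True, "change"        # abrupt change detected
            elif in_change and g == 0.0:
                in_change, state = False, "change-end"   # statistic drained: change over
        history.append(x)
        yield i, state
```

Feeding the monitor a per-interval count, such as packets per minute arriving at the unused address block, would produce 'change' events at abrupt increases and 'change-end' events once the accumulated statistic drains back to zero.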
Abstract:
Geographic information is increasingly being touted for use in research and industrial projects. While the technology is now available and affordable, there is a lack of easy-to-use software that takes advantage of geographic information. This is an important problem because the users are often researchers or scientists with limited software skills; by providing applications that are easier to use, time and financial resources can be shifted from training and better applied to the actual research and development work. A solution to this problem must cater for user and research needs. In particular, it must allow for mobile operation for fieldwork, flexibility or customisability of data input, sharing of data with other tools, and collaborative capabilities for the usual teamwork environment. This thesis has developed a new architecture and data model to achieve this solution. The result is the Mobile Collaborative Annotation framework, which provides an implementation of the new architecture and data model. Mobile Collaborative Mapping implements the framework as a Web 2.0 mashup rich internet application and has proven to be an effective solution through its positive application to a case study with fieldwork scientists. This thesis contributes to research into mobile computing, collaborative computing and geospatial systems by creating a simpler entry point to mobile geospatial applications, enabling simplified collaboration and providing tangible time savings.