880 results for Internet security applications
Abstract:
Arbor Networks' annual Internet security report for 2011/12. We will discuss this report in INFO6003 lectures.
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required in a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
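For a concrete sense of the approach, here is a minimal, real-valued sketch (not the authors' complex-valued implementation): amplitude and phase feature vectors are built from THz pulses, a single-hidden-layer extreme learning machine is trained on them, and its accuracy is contrasted with an RBF-kernel SVM. Pulse shapes, labels and the hidden-layer size are synthetic assumptions for illustration only.

```python
# Sketch only: real-valued ELM vs. RBF SVM on amplitude+phase features of toy THz pulses.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def spectral_features(pulses):
    """Concatenate amplitude and unwrapped phase of each time-domain pulse."""
    spectra = np.fft.rfft(pulses, axis=1)
    return np.hstack([np.abs(spectra), np.unwrap(np.angle(spectra), axis=1)])

class ELM:
    """Extreme learning machine: random hidden layer, ridge least-squares readout."""
    def __init__(self, n_hidden=200, ridge=1e-3):
        self.n_hidden, self.ridge = n_hidden, ridge

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = rng.standard_normal((n_features, self.n_hidden)) / np.sqrt(n_features)
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # hidden-layer activations
        T = np.eye(int(y.max()) + 1)[y]               # one-hot targets
        self.beta = np.linalg.solve(H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Toy stand-in for the binary poly-A vs poly-C problem (random data, so accuracy ~ chance).
pulses = rng.standard_normal((120, 256))
labels = rng.integers(0, 2, size=120)
X = spectral_features(pulses)

elm_acc = (ELM().fit(X[:80], labels[:80]).predict(X[80:]) == labels[80:]).mean()
svm_acc = (SVC(kernel="rbf", gamma="scale").fit(X[:80], labels[:80]).predict(X[80:]) == labels[80:]).mean()
print(f"ELM accuracy: {elm_acc:.2f}   SVM accuracy: {svm_acc:.2f}")
```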
Abstract:
Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensify. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive, but losing control of architecture, security, applications and deployment can have far-reaching and damaging regulatory consequences. Drawing on research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses, and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks, specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.
Abstract:
Electronic commerce is already a reality in Brazil. However, this form of electronic business has not yet reached its full potential, especially in consumer-oriented (B2C) transactions. Several factors have been identified as constraints on its growth, but none is as prominent and controversial as Internet security, especially in electronic transactions. This work analyses the security question from the point of view of Internet users, since users' perception of security determines their trust, and their trust influences their decision to buy online and the scope of the purchases they make over the Internet. Security, frequently seen as the great villain of the digital world, comes to be understood instead as one of the foundations of electronic commerce and, consequently, a major competitive advantage for electronic business.
Abstract:
This article studies the determinants of the labor force participation of the elderly and investigates the factors that may account for the increase in retirement in the second half of the last century. We develop a life-cycle general equilibrium model with endogenous retirement that embeds Social Security legislation and Medicare. Individuals are ex ante heterogeneous with respect to their preferences for leisure and face uncertainty about labor productivity, health status and out-of-pocket medical expenses. The model is calibrated to the U.S. economy in 2000 and is able to reproduce very closely the retirement behavior of the American population. It reproduces the peaks in the distribution of Social Security applications at ages 62 and 65 and the observed facts that low earners and unhealthy individuals retire earlier. It also matches very closely the increase in retirement from 1950 to 2000. Changes in Social Security policy - which became much more generous - and the introduction of Medicare account for most of the expansion of retirement. In contrast, the isolated impact of the increase in longevity was to delay retirement.
Abstract:
Key management is a core mechanism to ensure the security of applications and network services in wireless sensor networks. It includes two aspects: key distribution and key revocation. Many key management protocols have been designed specifically for wireless sensor networks. However, most of these protocols focus on the establishment of the required keys or the removal of compromised keys; their design does not consider support for higher-level security applications, so when such applications are later integrated into sensor networks, new mechanisms must be designed. In this paper, we propose a security framework, uKeying, for wireless sensor networks. This framework can be easily extended to support many security applications. It includes three components: a security mechanism to provide secrecy for communications in sensor networks, an efficient session key distribution scheme, and a centralized key revocation scheme. The proposed framework does not depend on a specific key distribution scheme and can be used to support many security applications, such as secure group communications. Our analysis shows that the framework is secure, efficient, and extensible. The simulation results also reveal, for the first time, that a centralized key revocation scheme can attain high efficiency.
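As a rough illustration of the centralized rekeying idea described above (not the uKeying protocol itself), the sketch below has a key server that shares a pairwise key with every node, wraps each new group session key under those pairwise keys, and revokes a node simply by rekeying the group without it. All names are assumptions for illustration, and the SHA-256 keystream "cipher" is a stand-in, not production cryptography.

```python
# Sketch: centralized session-key distribution and revocation for a sensor network.
import hashlib, secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy keystream XOR for illustration only; a real system would use AEAD."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class KeyServer:
    def __init__(self, node_ids):
        # Pairwise keys are assumed to be pre-loaded on the nodes before deployment.
        self.pairwise = {n: secrets.token_bytes(32) for n in node_ids}
        self.session_key = secrets.token_bytes(16)

    def distribute(self):
        """Wrap the current group session key for every non-revoked node."""
        msgs = {}
        for node, k in self.pairwise.items():
            nonce = secrets.token_bytes(8)
            msgs[node] = (nonce, keystream_xor(k, nonce, self.session_key))
        return msgs

    def revoke(self, node_id):
        """Centralized revocation: drop the node, then rekey the remaining group."""
        self.pairwise.pop(node_id, None)
        self.session_key = secrets.token_bytes(16)
        return self.distribute()

server = KeyServer(["n1", "n2", "n3"])
round1 = server.distribute()     # all three nodes can unwrap the session key
round2 = server.revoke("n2")     # n2 is excluded; remaining nodes receive a fresh key
print(sorted(round1), sorted(round2))
```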
Abstract:
This paper develops an Internet geographical information system (GIS) and spatial model application that provides socio-economic information and exploratory spatial data analysis for local government authorities (LGAs) in Queensland, Australia. The application aims to improve the means by which large quantities of data may be analysed, manipulated and displayed in order to highlight trends and patterns as well as provide performance benchmarking that is readily understandable and easily accessible for decision-makers. Measures of attribute similarity and spatial proximity are combined in a clustering model with a spatial autocorrelation index for exploratory spatial data analysis to support the identification of spatial patterns of change. Analysis of socio-economic changes in Queensland is presented. The results demonstrate the usefulness and potential appeal of the Internet GIS applications as a tool to inform the process of regional analysis, planning and policy.
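A worked example of the kind of spatial autocorrelation index such an application combines with attribute clustering is global Moran's I. The sketch below computes it over an assumed binary contiguity weight matrix for four toy areas; the paper's actual weighting scheme and LGA data are not reproduced here.

```python
# Sketch: global Moran's I for exploratory spatial data analysis.
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: values is (n,), weights is an (n, n) spatial weight matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()
    return (len(x) / w.sum()) * (num / (z @ z))

# Toy example: four areas in a row, neighbours share a border (rook contiguity).
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
income = [42.0, 45.0, 61.0, 64.0]   # clustered low/high values -> positive autocorrelation
print(f"Moran's I = {morans_i(income, w):.3f}")
```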
Abstract:
One of the obstacles to improved security of the Internet is ad hoc development of technologies with different design goals and different security goals. This paper proposes reconceptualizing the Internet as a secure distributed system, focusing specifically on the application layer. The notion is to redesign specific functionality, based on principles discovered in research on distributed systems in the decades since the initial development of the Internet. Because of the problems in retrofitting new technology across millions of clients and servers, any options with prospects of success must support backward compatibility. This paper outlines a possible new architecture for internet-based mail which would replace existing protocols by a more secure framework. To maintain backward compatibility, initial implementation could offer a web browser-based front end but the longer-term approach would be to implement the system using appropriate models of replication. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
With the development of Internet culture, applications are becoming simpler and simpler, and users need less IT knowledge than before; from the ‘reader’ status they have advanced to that of content creator and editor. Nowadays, the effects of the web are becoming stronger and stronger, and computer-aided work is commonplace almost everywhere. The spread of Internet applications has several reasons: first, they are widely accessible; second, their use is not tied to a single computer or network on which they have been installed. Also, the quantity of information accessible today is not even comparable to what was available before. Apart from applications that need high bandwidth or high computing capacity (for example, video editing), Internet applications are approaching the functionality of their thick-client counterparts. The most serious disadvantage of Internet applications, for security reasons, is that the resources of the client computer are either not accessible or accessible only to a restricted extent. Still, thick clients do have some advantages: better multimedia performance and more flexibility due to local resources, and the possibility of offline working.
Abstract:
The purpose of this study was to empirically investigate the adoption of retail electronic commerce (REC). REC is a business transaction which takes place over the Internet between a casual consumer and a firm. The consumer has no long-term relationship with the firm, orders a good or service, and pays with a credit card. To date, most REC applications have not been profitable. To build profitable REC applications a better understanding of the system's users is required. The research model hypothesizes that the level of REC buying is dependent upon the Buying Characteristics of Internet Use and Search Experience plus the Channel Characteristics of Beliefs About Internet Vendors and Beliefs About Internet Security. The effect of these factors is modified by Time. Additional research questions ask about the different types of REC buyers, the differences between these groups, and how these groups evolved over time. To answer these research questions I analyzed publicly available data collected over a three-year period by the Georgia Institute of Technology Graphics and Visualization Unit over the Internet. Findings indicate the model best predicts Number of Purchases in a future period, and that Buyer Characteristics are most important to this determination. Further, this model is evolving over Time, making Buyer Characteristics predict Number of Purchases better in more recent survey administrations. Buyers clustered into five groups based on level of buying, move through the various levels, and buy an increasing Number of Purchases over time. This is the first large-scale research project to investigate the evolution of REC. The implications are significant. Practitioners with casual consumer customers need to deploy a finely tuned REC strategy, understand their buyers, capitalize on the company's reputation on the Internet, install an Internet-compatible infrastructure, and web-enable order-entry/inventory/fulfillment/shipping applications. Researchers might wish to expand on the Buyer Characteristics of the model and/or explore alternative dependent variables. Further, alternative theories such as Population Ecology or Transaction Cost Economics might further illuminate this new I.S. research domain.
Abstract:
Several studies in the past have revealed that network end-user devices are left powered up 24/7, even when idle, just for the sake of maintaining Internet connectivity. Network devices normally support low-power states, but these states go unused because the devices cannot maintain network connectivity while in them. The Network Connectivity Proxy (NCP) has recently been proposed as an effective mechanism to impersonate network connectivity on behalf of high-power devices and enable them to sleep when idle without losing network presence. The NCP can efficiently proxy basic networking protocols; however, proxying Internet-based applications has no general solution, due to the dynamic and unpredictable nature of the packets these applications send and receive periodically. This paper proposes an approach for proxying Internet-based applications and presents the basic software architecture and capabilities. Further, the paper practically evaluates the proposed framework and analyzes the energy savings expected to be achievable under different realistic conditions.
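One way to picture application-level proxying is the sketch below: a proxy replays a periodic application heartbeat that the host registers before suspending. The protocol, port, payload and class names are assumptions for illustration, not the paper's framework, and a real proxy would also cover ARP/ICMP/TCP state and spoof the sleeping host's source address rather than send from its own.

```python
# Sketch: replaying registered application keep-alives on behalf of a sleeping host.
import socket, threading, time

class AppKeepAliveProxy:
    def __init__(self):
        self.rules = []             # (server_addr, payload, interval_s)
        self._stop = threading.Event()

    def register(self, server_addr, payload: bytes, interval_s: float):
        """Called by the host just before it enters a low-power state."""
        self.rules.append((server_addr, payload, interval_s))

    def run(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        next_due = {i: 0.0 for i in range(len(self.rules))}
        while not self._stop.is_set():
            now = time.monotonic()
            for i, (addr, payload, interval) in enumerate(self.rules):
                if now >= next_due[i]:
                    # Send the heartbeat in place of the sleeping host (a real NCP
                    # would spoof the host's source address via raw sockets).
                    sock.sendto(payload, addr)
                    next_due[i] = now + interval
            time.sleep(0.1)

    def stop(self):
        self._stop.set()

proxy = AppKeepAliveProxy()
proxy.register(("203.0.113.10", 5000), b"HEARTBEAT client-42", interval_s=30.0)
threading.Thread(target=proxy.run, daemon=True).start()
time.sleep(1)   # let the proxy emit the first heartbeat before the example exits
proxy.stop()
```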
Abstract:
As ever more devices are connected to the internet, and applications become ever more interactive, it becomes more important that the network can be counted on to respond reliably and without unnecessary delay. However, this is far from always the case today, as there can be many potential sources of unnecessary delay. In this thesis we focus on one of them: excess queueing delay in network routers along the path, also known as bufferbloat. We focus on the home network, and treat the issue in three stages. We examine latency variation and queueing delay on the public internet and show that significant excess delay is often present. Then, we evaluate several modern AQM algorithms and packet schedulers in a residential setting, and show that modern AQMs can almost entirely eliminate bufferbloat and extra queueing latency for wired connections, but that they are not as effective for WiFi links. Finally, we go on to design and implement a solution for bufferbloat at the WiFi link, and also design a workable scheduler-based solution for realising airtime fairness in WiFi. Also included in this thesis is a description of Flent, a measurement tool used to perform most of the experiments in the other papers and widely used in the bufferbloat community.
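To make the AQM idea concrete, here is a teaching-sized approximation of the CoDel logic underlying several of the modern AQMs referred to above: drop from the head of the queue once packet sojourn time has stayed above a small target for a full interval, and space further drops by interval/sqrt(count). This is a simplified sketch, not the RFC 8289 state machine or the thesis code.

```python
# Sketch: simplified CoDel-style dropping logic keyed on packet sojourn time.
import math
from collections import deque

TARGET = 0.005      # 5 ms acceptable standing queue delay
INTERVAL = 0.100    # 100 ms observation window

class CoDelQueue:
    def __init__(self):
        self.q = deque()              # entries are (enqueue_time, packet)
        self.first_above_time = 0.0
        self.dropping = False
        self.drop_next = 0.0
        self.count = 0

    def enqueue(self, now, packet):
        self.q.append((now, packet))

    def dequeue(self, now):
        while self.q:
            enq_time, packet = self.q.popleft()
            sojourn = now - enq_time
            if sojourn < TARGET or not self.q:
                # Standing delay is small again (or queue drained): stop dropping.
                self.first_above_time = 0.0
                self.dropping = False
                return packet
            if self.first_above_time == 0.0:
                # Delay just crossed the target: start the observation window.
                self.first_above_time = now + INTERVAL
            elif not self.dropping and now >= self.first_above_time:
                # Delay stayed above target for a whole interval: start dropping.
                self.dropping, self.count = True, 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                continue                                  # drop this head packet
            elif self.dropping and now >= self.drop_next:
                # Next scheduled drop: drop and tighten the drop schedule.
                self.count += 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                continue
            return packet
        return None

q = CoDelQueue()
q.enqueue(0.000, "pkt-1"); q.enqueue(0.001, "pkt-2")
print(q.dequeue(0.200))   # "pkt-1": delay is above target, but the interval has not yet elapsed
```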
Abstract:
In this study, we examine an important factor that affects consumers' acceptance of business-to-consumer (B2C) electronic commerce: perceived risk. The objective of this paper is to examine the definition of perceived risk in the context of B2C electronic commerce. The paper highlights the importance of perceived risk and the interwoven relation between perceived risk and trust. It discusses the problem of defining perceived risk in prior B2C research. This study proposes a new classification of consumers' perceived risk based on its sources. It highlights the importance of identifying the sources of consumers' risk perceptions in addition to the consequence dimensions. Two focus group discussion sessions were conducted to verify the proposed classification. Results indicate that Internet consumers perceive three sources of risk in B2C electronic commerce: technology, vendor, and product. © 2003 Elsevier B.V. All rights reserved.
Abstract:
Augmented Reality has changed the way human beings perceive the real world. Extending our reality into Virtual Reality makes it possible to create new experiences whose applicability is already regarded as natural in many situations. However, enabling this kind of interaction can be a complex process, both because of technological limitations and because of the management of the resources involved. The development of augmented reality projects for commercial purposes therefore often involves optimising the resources used, taking into account the limitations of the surrounding technologies (motion and voice detection systems, pattern detection, GPS, image analysis, biometric sensors, etc.). With the popularisation and acceptance of Augmented Reality techniques in many areas (medicine, education, leisure, etc.), it also becomes necessary for these techniques to work across the devices we use every day (computers, tablets, mobile phones, etc.). A common denominator among these devices is the Internet, since online applications can reach a larger number of people. The objective of this project was to create a web application using Augmented Reality techniques whose content is managed by the users. The research and development process of this work therefore included a fundamental prototyping phase to select the technologies that best fitted the type of architecture intended for the application and the development tools used by the company where the project was carried out. The final application consists of a FrontOffice, responsible for displaying and interpreting the created applications and enabling integration with other applications, and a BackOffice that allows users without programming knowledge to create new augmented reality applications and manage the multimedia content used. The developed application can serve as a basis for other applications and be reused in other contexts, always with the goal of reducing development and content-management costs, thus providing a Framework that enables content management in different areas (medicine, education, leisure, etc.), where users can create their own applications, games and work tools. During the project, the application was validated by specialists, ensuring that the proposed objectives were met.