980 results for Internet - Services
Abstract:
International Review of Law, Computers & Technology, Volume 17, Issue 2, July 2003, pages 163-174. RAE2008
Abstract:
Salmon, Naomi, 'The Internet and the Human Right to Food: The European Rapid Alert System for Food and Feed', Information and Communications Technology Law (2005) 14(1), pp. 43-57. Special Issue: Gated Communities. RAE2008
Abstract:
Jones, E. H. G., Thomas, Ned, and King, Alan, 'Machine Translation and the Internet', Mercator Media Forums (2001) 5(1), pp. 84-98. RAE2008
Abstract:
Faculty of Modern Languages: Institute of English Philology (Wydział Neofilologii: Instytut Filologii Angielskiej)
Abstract:
Master's dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Communication Sciences, specialisation in Marketing and Advertising.
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Business Sciences.
Abstract:
Master's dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Communication Sciences, specialisation in Public Relations.
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences.
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine.
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Psychology, branch of Clinical and Health Psychology.
Abstract:
The Google AdSense Program is a successful internet advertising program in which Google places contextual adverts on third-party websites and shares the resulting revenue with each publisher. Advertisers have budgets and bid on ad slots, while publishers set reserve prices for the ad slots on their websites. Following previous modelling efforts, we model the program as a two-sided market with advertisers on one side and publishers on the other. We show a reduction from the Generalised Assignment Problem (GAP) to the problem of computing the revenue-maximising allocation and pricing of publisher slots under a first-price auction. GAP is APX-hard, but a (1-1/e) approximation is known. We compute truthful and revenue-maximising prices and allocations of ad slots to advertisers under a second-price auction. The auctioneer's revenue is within a (1-1/e) factor of the second-price optimal.
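As a toy illustration of the second-price rule this abstract mentions (not the paper's algorithm, which handles multiple slots, budgets, and revenue-maximising allocation), the following Python sketch awards a single ad slot to the highest bidder at the larger of the publisher's reserve price and the second-highest bid; all names are illustrative.

    # Minimal single-slot second-price (Vickrey) auction with a reserve price.
    # Toy sketch only: the paper's setting has many slots and budgets,
    # none of which is modelled here.

    def second_price_auction(bids, reserve):
        """bids: dict advertiser -> bid; reserve: publisher's reserve price.
        Returns (winner, price), or (None, None) if no bid meets the reserve."""
        eligible = {a: b for a, b in bids.items() if b >= reserve}
        if not eligible:
            return None, None
        # Winner is the highest eligible bidder.
        winner = max(eligible, key=eligible.get)
        # Price is the larger of the reserve and the second-highest eligible bid.
        others = [b for a, b in eligible.items() if a != winner]
        price = max([reserve] + others)
        return winner, price

    if __name__ == "__main__":
        print(second_price_auction({"adv1": 3.0, "adv2": 5.0, "adv3": 4.5}, reserve=2.0))
        # -> ('adv2', 4.5): adv2 wins the slot and pays the second-highest bid.

Charging the second-highest bid rather than the winner's own bid is what makes the mechanism truthful: no advertiser can gain by bidding anything other than its true value.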
Abstract:
Internet Traffic Managers (ITMs) are special machines placed at strategic points in the Internet. itmBench is an interface that allows users (e.g. network managers, service providers, or experimental researchers) to register different traffic-control functionalities to run on one ITM or an overlay of ITMs. itmBench thus offers a tool that is extensible and powerful, yet easy to maintain. ITM traffic-control applications can be developed either with a kernel API, so they run in kernel space, or with a user-space API, so they run in user space. We demonstrate the flexibility of itmBench by showing the implementation of both a kernel module that provides a differentiated network service and a user-space module that provides an overlay routing service. Our Linux-based itmBench prototype is free software and can be obtained from http://www.cs.bu.edu/groups/itm/.
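itmBench's actual APIs are not documented in this abstract, so the Python sketch below is purely hypothetical: every name in it (ITM, register_module, diffserv_marker) is invented here, and it only illustrates the plug-in pattern the abstract describes, where a user registers a traffic-control callback that an ITM then applies to passing traffic.

    # Hypothetical sketch of the plug-in pattern itmBench describes: users
    # register traffic-control callbacks that the ITM invokes per packet.
    # None of these names come from itmBench itself.

    class ITM:
        def __init__(self):
            self.modules = []  # registered traffic-control callbacks

        def register_module(self, name, on_packet):
            self.modules.append((name, on_packet))

        def handle(self, packet):
            # Each registered module may inspect or rewrite the packet in turn.
            for name, on_packet in self.modules:
                packet = on_packet(packet)
            return packet

    def diffserv_marker(packet):
        # Toy "differentiated service": mark small packets as high priority.
        packet["priority"] = "high" if packet["size"] < 256 else "best-effort"
        return packet

    itm = ITM()
    itm.register_module("diffserv", diffserv_marker)
    print(itm.handle({"src": "10.0.0.1", "dst": "10.0.0.2", "size": 120}))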
Abstract:
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspiciously small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as to recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
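The flavour of such a transient-exploiting attack can be sketched with a toy AIMD rate controller (a generic illustration, not the paper's control-theoretic model; all parameters are invented): a brief loss burst fired once every few adaptation steps keeps the sender trapped in its additive-increase transient, so throughput collapses even though the attacker is active only a few percent of the time.

    # Toy illustration of a Reduction-of-Quality (RoQ) style attack on an
    # AIMD rate controller: brief, periodically timed loss bursts force the
    # sender back into its additive-increase transient, so average throughput
    # collapses even though the attacker fires on a tiny fraction of steps.
    # Parameters are illustrative, not taken from the paper.

    CAPACITY = 100.0        # link capacity (units per step)
    ALPHA, BETA = 1.0, 0.5  # additive increase, multiplicative decrease
    T = 10_000              # simulated time steps
    ATTACK_PERIOD = 20      # attacker fires one burst every 20 steps

    def run(attacked):
        rate, delivered, bursts = 1.0, 0.0, 0
        for t in range(T):
            burst = attacked and (t % ATTACK_PERIOD == 0)
            if burst or rate > CAPACITY:   # loss: congestion or attack burst
                rate *= BETA               # multiplicative decrease
                bursts += int(burst)
            else:
                rate += ALPHA              # additive increase
            delivered += min(rate, CAPACITY)
        return delivered / T, bursts / T

    base, _ = run(attacked=False)
    hit, duty = run(attacked=True)
    print(f"no attack: {base:.1f} units/step; under attack: {hit:.1f} "
          f"units/step, attacker active {duty:.1%} of steps")

With these toy numbers the unattacked sender settles into the usual sawtooth averaging roughly three quarters of capacity, while bursts on 5% of steps pin the rate well below half of that: the damage comes from repeatedly restarting the slow additive-increase transient, not from the attacker's own traffic volume.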
Abstract:
With the increasing demand for document transfer services such as the World Wide Web comes a need for better resource management to reduce the latency of documents in these systems. To address this need, we analyze the potential for document caching at the application level in document transfer services. We have collected traces of actual executions of Mosaic, reflecting over half a million user requests for WWW documents. Using those traces, we study the tradeoffs between caching at three levels in the system and the potential for using application-level information in the caching system. Our traces show that while a high hit rate in terms of URLs is achievable, the achievable hit rate in terms of bytes is much lower, because the most profitably cached documents are small. We consider the performance of caching when applied at the level of individual user sessions, at the level of individual hosts, and at the level of a collection of hosts on a single LAN. We show that the performance gain achievable by caching at the session level (which is straightforward to implement) is nearly all of that achievable at the LAN level (where caching is more difficult to implement). However, when resource requirements are considered, LAN-level caching becomes much more desirable, since it can achieve a given level of caching performance using a much smaller cache. Finally, we consider the use of organizational boundary information as an example of the potential for using application-level information in caching. Our results suggest that distinguishing between documents produced locally and those produced remotely can provide useful leverage in designing caching policies, because these two document types differ in their potential for sharing among multiple users.
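The gap between URL hit rate and byte hit rate reported above is easy to reproduce with a small LRU-cache simulation (a generic illustration over a synthetic trace, not the paper's Mosaic data; all sizes and counts are invented): when the popular documents are small, most requests hit the cache while most bytes still miss.

    # Small LRU-cache simulation contrasting URL hit rate with byte hit rate.
    # Generic illustration of the abstract's observation (popular documents
    # tend to be small), not a reproduction of the Mosaic trace study.
    from collections import OrderedDict
    import random

    random.seed(0)
    # Synthetic trace: a few small, popular documents and many large,
    # rarely repeated ones (sizes in bytes are illustrative).
    popular = [(f"/small/{i}", 2_000) for i in range(20)]
    trace = []
    for _ in range(10_000):
        if random.random() < 0.7:
            trace.append(random.choice(popular))  # small, hot documents
        else:
            trace.append((f"/big/{random.randrange(5000)}", 500_000))

    CACHE_BYTES = 2_000_000
    cache, used = OrderedDict(), 0  # url -> size, in LRU order
    url_hits = byte_hits = total_bytes = 0
    for url, size in trace:
        total_bytes += size
        if url in cache:
            url_hits += 1
            byte_hits += size
            cache.move_to_end(url)  # refresh LRU position
            continue
        while used + size > CACHE_BYTES and cache:  # evict least recently used
            _, old_size = cache.popitem(last=False)
            used -= old_size
        if size <= CACHE_BYTES:
            cache[url] = size
            used += size

    print(f"URL hit rate:  {url_hits / len(trace):.1%}")
    print(f"byte hit rate: {byte_hits / total_bytes:.1%}")

On this synthetic workload the URL hit rate lands near 70% while the byte hit rate stays in the low single digits, mirroring the paper's point: counting hits per request and counting hits per byte can tell very different stories when document sizes are skewed.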