978 results for: Internet. Network neutrality. Network neutrality mandates.


Relevance:

100.00%

Publisher:

Abstract:

The work presented here demonstrates the feasibility of using the single-mode fibers of an optical Internet network to deliver visible light between separate laboratories, enabling remote spectroscopy in the visible range for teaching purposes. The coupling of a broadband light source into the single-mode fiber (SMF) and the characterization of optical losses as a function of wavelength are discussed. Sample spectra were measured with a portable spectrometer controlled by an acquisition program developed in LabVIEW that allows the data to be collected and analyzed.
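The loss characterization in such a setup reduces to the standard decibel ratio, loss = 10 · log10(P_in / P_out), evaluated per wavelength. A minimal sketch of that computation (the wavelengths and power values below are hypothetical, not the paper's measurements):

```python
import numpy as np

# Hypothetical measured power before and after the fiber link (mW);
# these values are illustrative, not the paper's data.
wavelengths_nm = np.array([450, 500, 550, 600, 650, 700])
p_in_mw = np.array([1.00, 1.10, 1.20, 1.15, 1.05, 0.95])
p_out_mw = np.array([0.02, 0.05, 0.09, 0.12, 0.14, 0.15])

# Optical loss in decibels: 10 * log10(P_in / P_out), per wavelength
loss_db = 10.0 * np.log10(p_in_mw / p_out_mw)

for wl, db in zip(wavelengths_nm, loss_db):
    print(f"{wl} nm: {db:.1f} dB")
```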

Relevance:

100.00%

Publisher:

Abstract:

The ways in which Internet traffic is managed have direct consequences on Internet users’ rights as well as on their capability to compete on a level playing field. Network neutrality mandates that Internet traffic be treated in a non-discriminatory fashion in order to maximise end users’ freedom and safeguard an open Internet. This book is the result of a collective work aimed at providing deeper insight into what network neutrality is, how it relates to human rights and free competition, and how to frame this key issue properly through sustainable policies and regulations. The Net Neutrality Compendium stems from three years of discussions nurtured by the members of the Dynamic Coalition on Network Neutrality (DCNN), an open and multistakeholder group established under the aegis of the United Nations Internet Governance Forum (IGF).

Relevance:

100.00%

Publisher:

Abstract:

The ways in which Internet traffic is managed have direct consequences on Internet users’ rights as well as on their capability to compete on a level playing field. Network neutrality mandates that Internet traffic be treated in a non-discriminatory fashion in order to maximise end users’ freedom and safeguard an open Internet.

Relevance:

100.00%

Publisher:

Abstract:

When they look at Internet policy, EU policymakers seem mesmerised, if not bewitched, by the word ‘neutrality’. Originally confined to the infrastructure layer, the neutrality rhetoric is today being extended to multi-sided platforms such as search engines and, more generally, online intermediaries. Policies for search neutrality and platform neutrality are invoked to pursue a variety of policy objectives, encompassing competition, consumer protection, privacy and media pluralism. This paper analyses this emerging debate and comes to a number of conclusions. First, mandating net neutrality at the infrastructure layer might have some merit, but it certainly would not make the Internet neutral. Second, since most of the objectives initially associated with network neutrality cannot realistically be achieved by such a rule, the case for network neutrality legislation would have to stand on different grounds. Third, the fact that the Internet is not neutral is mostly a good thing for end users, who benefit from intermediaries that provide them with a selection of the over-abundant information available on the Web. Fourth, search neutrality and platform neutrality are fundamentally flawed principles that contradict the economics of the Internet. Fifth, neutrality is a very poor and ineffective recipe for media pluralism, and as such should not be invoked as the basis of future media policy. All these conclusions have important consequences for the debate on the future EU policy for the Digital Single Market.

Relevance:

100.00%

Publisher:

Abstract:

This book focuses on network management and traffic engineering for Internet and distributed computing technologies, as well as presenting emerging technology trends and advanced platforms.

Relevance:

100.00%

Publisher:

Abstract:

ACKNOWLEDGMENTS MW and RVD have been supported by the German Federal Ministry for Education and Research (BMBF) via the Young Investigators Group CoSy-CC2 (grant no. 01LN1306A). JFD thanks the Stordalen Foundation and BMBF (project GLUES) for financial support. JK acknowledges the IRTG 1740 funded by DFG and FAPESP. MT Gastner is acknowledged for providing his data on the airline, interstate, and Internet networks. P Menck kindly provided his data on the Scandinavian power grid. We thank S Willner, on behalf of the entire zeean team, for providing the data on the world trade network. All computations have been performed using the Python package pyunicorn [41], which is available at https://github.com/pik-copan/pyunicorn.

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses Internet-based robot control. It first analyzes the main components and basic characteristics of Internet network time delay, then presents the results and conclusions of network delay measurement experiments, and finally investigates a network delay prediction algorithm and its effectiveness.
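The abstract does not name the prediction algorithm it studies. A common baseline for this kind of delay estimation is the exponentially weighted moving average used for TCP round-trip-time smoothing; a minimal sketch under that assumption, with invented delay samples:

```python
def ewma_delay_estimate(delay_samples_ms, alpha=0.125):
    """Exponentially weighted moving average of measured delays:
    est <- (1 - alpha) * est + alpha * sample (as in TCP RTT smoothing)."""
    estimate = delay_samples_ms[0]
    for sample in delay_samples_ms[1:]:
        estimate = (1.0 - alpha) * estimate + alpha * sample
    return estimate

# Invented round-trip delay samples (ms) measured over an Internet path
samples = [120.0, 135.0, 128.0, 210.0, 140.0, 132.0]
print(f"predicted next delay: {ewma_delay_estimate(samples):.1f} ms")
```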

Relevance:

100.00%

Publisher:

Abstract:

Being able to classify precisely which application or program generated the flows that make up the Internet traffic on a network gives companies and organizations a useful tool for managing network resources, as well as the ability to establish policies that block or prioritize specific traffic. The proliferation of new applications and new techniques has made it difficult to detect applications through the well-known port values assigned by the IANA (Internet Assigned Numbers Authority). P2P (peer-to-peer) networks, the use of unknown or random ports, and the masquerading of many applications’ traffic as HTTP or HTTPS in order to traverse firewalls and NATs (Network Address Translation), among other practices, create the need for new traffic-detection methods. The aim of this study is to develop a set of practices that accomplish this task with techniques that go beyond the observation of ports and other well-known values. Several methodologies exist: Deep Packet Inspection (DPI), which searches for signatures, patterns built from packet contents including the payload, that characterize each application; machine-learning approaches, which apply statistical analysis to flow parameters in order to determine which application the flows are likely to belong to; and, finally, more heuristic techniques based on the researcher’s own intuition or knowledge of network traffic. Specifically, the study proposes combining some of the above techniques with data mining methods, namely Principal Component Analysis (PCA) and clustering of statistics extracted from the flows contained in network traffic captures. This entails configuring several parameters through an iterative trial-and-error process in search of a reliable traffic classification. The ideal result would be one in which each application present in the traffic is identified in a distinct cluster, or in clusters that group applications of a similar nature. To this end, traffic captures are created in a controlled environment, with each capture identified with its corresponding application, and the flows are then extracted from those captures. Next, selected parameters of the packets belonging to those flows are obtained, such as the arrival date and time or the length in octets of the IP packet. These parameters are loaded into a MySQL database and used to compute statistics that help, in a subsequent step, to classify the flows through data mining: concretely, PCA and clustering applied with the RapidMiner software. Finally, the results are presented in a confusion matrix that allows them to be properly evaluated.
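As an illustration of the proposed pipeline (statistics per flow, PCA, clustering, then a confusion matrix against the known labels), here is a minimal sketch using scikit-learn in place of RapidMiner; the feature matrix and application labels are randomly generated placeholders, not the study's data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

# Placeholder flow statistics: rows = flows, columns = per-flow features
# (e.g. mean packet length, mean inter-arrival time, byte and packet counts)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y_true = rng.integers(0, 3, size=300)  # placeholder application labels

# Standardize, project onto principal components, then cluster
X_scaled = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=3).fit_transform(X_scaled)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_pca)

# Compare cluster assignments against the known application labels
print(confusion_matrix(y_true, clusters))
```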

Relevance:

100.00%

Publisher:

Abstract:

This paper considers some of the wider contextual and policy questions arising out of four major public inquiries that took place in Australia over 2011–2012: the Convergence Review, the National Classification Scheme Review, the Independent Media Inquiry (Finkelstein Review) and the National Cultural Policy. It asks whether we are now witnessing a ‘convergent media policy moment’ akin to the ‘cultural policy moment’ theorized by Australian cultural researchers in the early 1990s, and examines the limitations of various approaches to policy analysis, including critiques of neoliberalism, in making sense of such shifts. It notes the rise of ‘soft law’ as a means of addressing the challenges of regulatory design in an era of rapid media change, with consideration of two cases: the approach to media influence taken in the Convergence Review, and the concept of ‘deeming’ developed in the National Classification Scheme Review.

Relevance:

100.00%

Publisher:

Abstract:

This chapter considers the implications of convergence for media policy from three perspectives. First, it discusses what have been the traditional concerns of media policy, and the challenges it faces, from the perspectives of public interest theories, economic capture theories, and capitalist state theories. Second, it looks at what media convergence involves, and some of the dilemmas arising from convergent media policy including: (1) determining who is a media company; (2) regulatory parity between ‘old’ and ‘new’ media; (3) treatment of similar media content across different platforms; (4) distinguishing ‘big media’ from user-created content; and (5) maintaining a distinction between media regulation and censorship of personal communication. Finally, it discusses attempts to reform media policy in light of these changes, including Australian media policy reports from 2011-12 including the Convergence Review, the Finkelstein Review of News Media, and the Australian Law Reform Commission’s National Classification Scheme Review. It concludes by arguing that ‘public interest’ approaches to media policy continue to have validity, even as they grapple with the complex question of how to understand the concept of influence in a convergent media environment.

Relevance:

100.00%

Publisher:

Abstract:

Many networks, such as social networks and organizational networks in global companies, consist of self-interested agents. The topology of these networks often plays a crucial role in important tasks such as information diffusion and information extraction. Consequently, growing a stable network with a certain topology is of interest. Motivated by this, we study the following important problem: given a desired network topology, under what conditions will best-response (link addition/deletion) strategies played by self-interested agents lead to the formation of a stable network with that topology? We study this interesting reverse-engineering problem by proposing a natural model of recursive network formation and a utility model that captures many key features. Based on this model, we analyze relevant network topologies and derive a set of sufficient conditions under which these topologies emerge as pairwise stable networks, wherein no node wants to delete any of its links and no two nodes would want to create a link between them.
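As an illustration of the notion of pairwise stability the abstract relies on, here is a minimal sketch that checks it for a given graph under a simple connections-model-style utility (the decay and cost parameters and the utility function are invented for illustration; they are not the paper's model):

```python
import networkx as nx

DELTA, COST = 0.9, 0.4  # illustrative decay-per-hop benefit and per-link cost

def utility(g, v):
    """Toy connections-model utility: each reachable node contributes
    DELTA**distance, and each link that v maintains costs COST."""
    dist = nx.single_source_shortest_path_length(g, v)
    return sum(DELTA ** d for u, d in dist.items() if u != v) - COST * g.degree(v)

def pairwise_stable(g):
    """No endpoint strictly prefers deleting an existing link, and no
    non-adjacent pair can add a link that strictly benefits one node
    without making the other worse off."""
    for u, v in list(g.edges):
        g.remove_edge(u, v)
        u_without, v_without = utility(g, u), utility(g, v)
        g.add_edge(u, v)
        if utility(g, u) < u_without or utility(g, v) < v_without:
            return False  # someone wants to delete this link
    for u, v in list(nx.non_edges(g)):
        u_before, v_before = utility(g, u), utility(g, v)
        g.add_edge(u, v)
        gain_u, gain_v = utility(g, u) - u_before, utility(g, v) - v_before
        g.remove_edge(u, v)
        if (gain_u > 0 and gain_v >= 0) or (gain_v > 0 and gain_u >= 0):
            return False  # this pair would create a link
    return True

# A 5-node star is pairwise stable for these parameter values
print(pairwise_stable(nx.star_graph(4)))
```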

Relevance:

100.00%

Publisher:

Abstract:

The creation of law, a phenomenon ‘poorly understood’ by legal science, now seems to maintain a deeply problematic intimacy with the Internet. It may even lie at the heart of the ‘great Internet shift’ toward the postmodern era. For law-making, this new legal space thus appears as a network-based accelerator, one that nonetheless respects plurality. Yet this change initiated by the Internet triggers another of equally considerable scope: it forces legal orders to evolve their internal structures as a whole and throws them into a universe in which the relevance of their action is markedly reduced. Only the European legal order, as a first sketch of the network state, would retain a certain effectiveness of action there, making it a structural guide for its struggling counterparts.

Relevance:

100.00%

Publisher:

Abstract:

The Internet has probably caused an economic and social whirlwind without equivalent in the history of the modern world. Such upheavals have led many to question the power of this tool, its freedom, and its impact on humanity. Many have thus sought to assign the Internet a central role in the ills of our century, notably by accusing it of encouraging the spread of hate speech, the corruption of morals, and the circumvention of the law. The author proposes here to take the opposite view of this pessimistic current and to demonstrate the considerable advances this remarkable tool has enabled in the field of economic and social rights. Drawing on the international texts that gave rise to these deliberately pragmatic rights, the author first specifies the content of prerogatives that directly concern everyone’s daily life yet are not genuinely applied. He then sets the international legislator’s initial objective alongside the concrete advances the network has made possible in fields as varied as freedom of expression, access to law, and the participation of all in the cultural and political life of society. This article is an attempt to restore the network’s reputation and to encourage readers to make even better use of its tremendous resources.