901 results for COMMUNICATION-NETWORKS
Abstract:
An investigation into the regionalization of communication networks, particularly television, a phenomenon that represents a reality of segmentation in mass communication. The objective is to analyse and classify regional television broadcasters with respect to their modes of local insertion, observing their specific characteristics, programming, communication strategies and actions to build identity with the communities in which they operate, and to understand how television expanded in the region since the launch of the first broadcaster, in 1988, in the city of São José dos Campos, SP. Taking as the scope of the study the free-to-air regional television broadcasters of the Vale do Paraíba, state of São Paulo, semi-structured interviews were conducted with professionals from their commercial and programming departments, and questionnaires were administered to a sample of the Vale do Paraíba population representing the potential viewers within these broadcasters' coverage area, in order to identify the receiving public's perception of the presence and activities of the local television stations. It is concluded that the broadcasters' different modes of local insertion directly influence their identity relationship with the region's viewers. (AU)
Abstract:
Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining a better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This review aims to present an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, all of which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. © 2013 IOP Publishing Ltd.
Abstract:
Advances in statistical physics relating to our understanding of large-scale complex systems have recently been successfully applied in the context of communication networks. Statistical mechanics methods can be used to decompose global system behavior into simple local interactions. Thus, large-scale problems can be solved or approximated in a distributed manner with iterative lightweight local messaging. This survey discusses how statistical physics methodology can provide efficient solutions to hard network problems that are intractable by classical methods. We highlight three typical examples in the realm of networking and communications. In each case we show how a fundamental idea of statistical physics helps solve the problem in an efficient manner. In particular, we discuss how to perform multicast scheduling with message passing methods, how to improve coding using the crystallization process, and how to compute optimal routing by representing routes as interacting polymers.
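As a loose illustration of the message-passing idea described above (not the authors' specific multicast-scheduling or polymer-based routing methods), the sketch below runs distributed min-sum message passing, essentially Bellman-Ford, to compute shortest-path routes on a small weighted graph; the graph and its weights are invented for the example.

```python
# Min-sum message passing for shortest paths: each node repeatedly updates its
# cost estimate from its neighbours' messages (illustrative only; hypothetical graph).
INF = float("inf")

# Undirected weighted graph as an adjacency dict (made-up example data).
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}

def min_sum_shortest_paths(graph, target):
    """Iteratively exchange cost messages until the estimates stop changing."""
    cost = {v: (0.0 if v == target else INF) for v in graph}
    next_hop = {v: (v if v == target else None) for v in graph}
    for _ in range(len(graph)):  # at most |V|-1 rounds of relaxation are needed
        updated = False
        for v in graph:
            for u, w in graph[v].items():
                # Message from neighbour u: "reach the target through me for cost[u] + w".
                if cost[u] + w < cost[v]:
                    cost[v], next_hop[v] = cost[u] + w, u
                    updated = True
        if not updated:
            break
    return cost, next_hop

costs, hops = min_sum_shortest_paths(graph, target="D")
print(costs)  # {'A': 4.0, 'B': 3.0, 'C': 1.0, 'D': 0.0}
print(hops)   # first hop each node would use towards D
```

In a real network each router would hold only its own cost estimate and exchange these messages with its direct neighbours, which is what makes the method distributed and lightweight.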
Abstract:
All-optical signal processing is a powerful tool for the processing of communication signals, and optical network applications have been routinely considered since the inception of optical communication. There are many successful optical devices deployed in today's communication networks, including optical amplification, dispersion compensation, optical cross connects and reconfigurable add-drop multiplexers. However, despite record-breaking performance, all-optical signal processing devices have struggled to find a viable market niche. This has been mainly due to competition from electro-optic alternatives, either from detailed performance analysis or, more usually, due to the limited market opportunity for a mid-link device. For example, a wavelength converter would compete with a reconfigured transponder, which has an additional market as an actual transponder, enabling significantly more economical development. Nevertheless, the potential performance of all-optical devices is enticing. Motivated by their prospects of eventual deployment, in this chapter we analyse the performance and energy consumption of digital coherent transponders, linear coherent repeaters and modulator-based pulse shaping/frequency conversion, setting a benchmark for the proposed all-optical implementations.
Abstract:
The article analyses the survivability of MPLS networks and their optimization. A survivability index is introduced and a method for its estimation is proposed. The problem of optimizing the structure of an MPLS network with respect to its survivability is formulated, and an algorithm for solving it is developed. The problem of reconfiguring the network in the event of element failures is also considered and a method for its solution is proposed.
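The abstract does not define the survivability index, so the sketch below only illustrates one simple connectivity-based proxy: the fraction of ingress-egress demand pairs that remain connected after a given set of links fails. The topology, demand set and failure scenario are hypothetical.

```python
from collections import deque

# Hypothetical MPLS topology (links between label-switching routers) and demand pairs.
links = [("LSR1", "LSR2"), ("LSR2", "LSR3"), ("LSR1", "LSR4"),
         ("LSR4", "LSR3"), ("LSR3", "LSR5")]
demands = [("LSR1", "LSR3"), ("LSR1", "LSR5"), ("LSR2", "LSR5")]

def connected(links, src, dst):
    """Breadth-first search over the surviving links."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def survivability_proxy(links, demands, failed_links):
    """Fraction of demand pairs still connected after the failed links are removed."""
    surviving = [l for l in links if l not in failed_links]
    ok = sum(connected(surviving, s, d) for s, d in demands)
    return ok / len(demands)

# Example: one link fails; all three demand pairs can still be rerouted, so the proxy is 1.0.
print(survivability_proxy(links, demands, failed_links={("LSR2", "LSR3")}))
```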
Abstract:
During medical emergencies, the ability to communicate the state and position of injured individuals is essential. In critical situations or crowd aggregations, this may prove difficult or even impossible due to the inaccuracy of verbal communication, the lack of precise localization of medical events, and/or the failure or congestion of infrastructure-based communication networks. In such a scenario, a temporary (ad hoc) wireless network for disseminating medical alarms to the closest hospital, or to medical field personnel, can usefully be employed to overcome these limitations. This is particularly true if the ad hoc network relies on the mobile phones that people normally carry, since they are automatically distributed where the communication needs are. Nevertheless, the feasibility and possible implications of such a network for medical alarm dissemination need to be analysed. To this aim, this paper presents a study on the feasibility of medical alarm dissemination through mobile phones in an urban environment, based on realistic people mobility. The results showed that medical alarm delivery rates depend on both population and hospital density. For the urban scenario considered, the time needed to deliver medical alarms to the nearest hospital with high reliability is in the order of minutes, demonstrating the practicability of the proposed network for medical alarm dissemination. © 2013 Elsevier Ltd. All rights reserved.
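As a toy illustration of this kind of study (not the paper's mobility traces or dissemination protocol), the following simulation places phones and hospitals at random positions, lets an alarm spread epidemically between devices within radio range, and reports whether and when it reaches a hospital. All parameter values are assumptions.

```python
import random

# Hypothetical parameters: area side (m), radio range (m), device counts, step per tick, tick count.
AREA, RANGE, N_PHONES, N_HOSPITALS, STEP, TICKS = 2000.0, 100.0, 150, 2, 20.0, 300

random.seed(1)
phones = [[random.uniform(0, AREA), random.uniform(0, AREA)] for _ in range(N_PHONES)]
hospitals = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N_HOSPITALS)]
infected = {0}  # phone 0 originates the medical alarm

def in_range(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= RANGE ** 2

for t in range(TICKS):
    # Random-walk mobility: each phone drifts a small random step (toy model).
    for p in phones:
        p[0] = min(AREA, max(0.0, p[0] + random.uniform(-STEP, STEP)))
        p[1] = min(AREA, max(0.0, p[1] + random.uniform(-STEP, STEP)))
    # Epidemic forwarding: any alarm-carrying phone passes the alarm to phones in range.
    newly = {j for i in infected for j in range(N_PHONES)
             if j not in infected and in_range(phones[i], phones[j])}
    infected |= newly
    # Delivery check: has the alarm reached a hospital's radio range?
    if any(in_range(phones[i], h) for i in infected for h in hospitals):
        print(f"alarm delivered after {t + 1} ticks, carried by {len(infected)} phones")
        break
else:
    print("alarm not delivered within the simulated window")
```

Varying N_PHONES and N_HOSPITALS in such a toy model reproduces the qualitative dependence of delivery time on population and hospital density that the abstract describes.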
Abstract:
Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high-performance communication networks are central to contemporary information technology. Computing applications in a wide range of areas, including business, communications, medical research, transportation, entertainment, and education, are transforming societies around the globe. The rapid changes in the fields of computing and information technology also make the study of ethics exciting and challenging, as nearly every day the media report on a new invention, controversy, or court ruling. This tutorial will give a broad overview of the scientific foundations, technological advances, social implications, and ethical and legal issues related to computing. It will cover milestones in computing and networking, the social context of computing, professional and ethical responsibilities, philosophical frameworks, and the social, ethical, historical, and political implications of computer and information technology. It will outline the impact of the tremendous growth of computer and information technology on people, ethics and law. Political and legal implications become clear when we analyze how technology has outpaced the legal and political arenas.
Abstract:
The Internet has become a universal communication network tool. It has evolved from a platform that supports best-effort traffic to one that now carries different traffic types, including continuous media with quality of service (QoS) requirements. As more services are delivered over the Internet, we face increasing risk to their availability, given that malicious attacks on those Internet services continue to increase. Several networks have witnessed denial of service (DoS) and distributed denial of service (DDoS) attacks over the past few years, which have disrupted the QoS of network services, thereby violating the Service Level Agreement (SLA) between the client and the Internet Service Provider (ISP). Hence DoS and DDoS attacks are major threats to network QoS. In this paper we survey techniques and solutions that have been deployed to thwart DoS and DDoS attacks, and we evaluate them in terms of their impact on network QoS for Internet services. We also present vulnerabilities in QoS protocols that, if exploited, can degrade QoS. In addition, we highlight challenges that still need to be addressed to achieve end-to-end QoS with recently proposed DoS/DDoS solutions. © 2010 John Wiley & Sons, Ltd.
Abstract:
With the widespread use of the Internet and electronic services, the need for rapid development of the electronic communication network has prompted government policy makers to conceptualize and implement development policy. The advancement of the (information) society and the use of the infocommunication services underlying it are fundamentally determined by the development of broadband infrastructure and the availability of access to the electronic communication network. The propensity of the government to play a bigger role in the field of electronic communications has increased significantly since 2011. The establishment of MVM NET Zrt. (Hungarian Electricity NET Ltd.), the reorganization of NISZ Zrt. (National Infocommunication Services Company Ltd.), the GOP 3.1.2 tender and the plan to enable a new, fourth mobile network operator to enter the market all indicate the robust intention of the government to develop this field. The study presents the intervention tools available to the state for stimulating the development of the electronic communication network. The author then analyses the four main government interventions to examine whether the decisions on the role of the state were adequately well-founded.
Abstract:
Recently, wireless network technology has grown at such a pace that scientific research has become a practical reality in a very short time span. One mobile system that features high data rates and an open network architecture is 4G. Currently, the research community and industry in the field of wireless networks are working on possible choices for solutions in the 4G system. The researcher considers the ability to guarantee reliable communication at high data rates, together with high efficiency in spectrum usage, to be one of the most important characteristics of future 4G mobile systems. In mobile wireless communication networks, one important factor is the coverage of large geographical areas. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed. Subscribers thus require a reliable satellite link to access their services when they are in remote locations where a terrestrial infrastructure is unavailable. The results show that a suitable modulation and access technique is also required in order to transmit high data rates over satellite links to mobile users. The dissertation proposes the use of OFDM (Orthogonal Frequency Division Multiplexing) for the satellite link, increasing the time diversity. This technique will allow for an increase of the data rate, as primarily required by multimedia applications, and will also make optimal use of the available bandwidth. In addition, this dissertation addresses the use of Cooperative Satellite Communications for hybrid satellite/terrestrial networks. By using this technique, the satellite coverage can be extended to areas where there is no direct link to the satellite. The issue of Cooperative Satellite Communications is addressed through a new algorithm that forwards the received data from the fixed node to the mobile node. This algorithm is efficient because it avoids unnecessary transmissions and is based on signal-to-noise ratio (SNR) measurements.
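The forwarding algorithm is only summarized above; the sketch below shows one plausible reading, in which the fixed node relays data to the mobile node only when the measured SNR of the relay link clears a threshold, so transmissions with little chance of success are suppressed. The function names, threshold and power figures are assumptions, not the dissertation's actual design.

```python
import math

def snr_db(received_power_w, noise_power_w):
    """Signal-to-noise ratio in dB from received signal and noise power (watts)."""
    return 10.0 * math.log10(received_power_w / noise_power_w)

def should_forward(relay_link_snr_db, direct_link_snr_db, threshold_db=10.0):
    """Forward over the relay only if its SNR clears the threshold and beats the
    (possibly absent) direct satellite link. The threshold value is illustrative."""
    if relay_link_snr_db < threshold_db:
        return False          # suppress transmissions unlikely to succeed
    if direct_link_snr_db is None:
        return True           # mobile node has no direct view of the satellite
    return relay_link_snr_db > direct_link_snr_db

# Example: the mobile node is shadowed (no direct satellite link).
relay_snr = snr_db(received_power_w=2.5e-9, noise_power_w=1e-10)   # about 14 dB
print(should_forward(relay_snr, direct_link_snr_db=None))          # True
print(should_forward(6.0, direct_link_snr_db=None))                # False: below threshold
```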
Abstract:
Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed Data Centers, in a fully transparent way. This need is particularly felt by scientific applications, which must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers by applying open source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to obtain evaluations with different types of TCP sample connections, in order to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to relevant design parameters.
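As a minimal, self-contained illustration of the kind of TCP throughput measurement mentioned (not the paper's OpenStack testbed), the sketch below starts a TCP sink on localhost and measures the rate at which a client can push a fixed payload to it; the port, buffer size and payload size are arbitrary.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50507           # arbitrary local endpoint for the test
PAYLOAD = b"x" * (16 * 1024)              # 16 KiB per send call
TOTAL_BYTES = 64 * 1024 * 1024            # push 64 MiB in total

def sink():
    """Accept one connection and discard everything it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass

threading.Thread(target=sink, daemon=True).start()
time.sleep(0.2)                            # give the sink a moment to start listening

sent = 0
start = time.monotonic()
with socket.create_connection((HOST, PORT)) as cli:
    while sent < TOTAL_BYTES:
        cli.sendall(PAYLOAD)
        sent += len(PAYLOAD)
elapsed = time.monotonic() - start
print(f"throughput: {sent * 8 / elapsed / 1e6:.1f} Mbit/s over {elapsed:.2f} s")
```

Pointing the client at a sink running in a remote virtual machine, instead of localhost, gives the inter-site measurement setting the abstract describes.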
Abstract:
Based on an original and comprehensive database of all fiction feature films produced in Mercosur between 2004 and 2012, the paper analyses whether the Mercosur film industry has evolved towards an integrated and culturally more diverse market. It provides a summary of policy opportunities in terms of integration and diversity, emphasizing the limited role played by regional policies. It then shows that although the Mercosur film industry remains rather disintegrated, it tends to become more integrated and culturally more diverse. From a methodological point of view, the combination of Social Network Analysis and the Stirling Model opens up interesting research tracks for analysing creative industries in terms of their market integration and cultural diversity.
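For reference, the Stirling Model scores diversity as D = sum over pairs i ≠ j of p_i * p_j * d_ij, combining the share p_i of each category (for instance, films by country of origin) with a pairwise disparity d_ij. The sketch below computes this with made-up shares and disparities; it is a generic illustration, not the paper's dataset or calibration.

```python
# Stirling diversity: sum of p_i * p_j * d_ij over all ordered pairs of distinct categories.
# The production shares and disparity values below are invented for illustration.
shares = {"AR": 0.35, "BR": 0.45, "UY": 0.10, "PY": 0.10}   # production shares, sum to 1
disparity = {                                                # pairwise distance in [0, 1]
    ("AR", "BR"): 0.6, ("AR", "UY"): 0.2, ("AR", "PY"): 0.4,
    ("BR", "UY"): 0.5, ("BR", "PY"): 0.5, ("UY", "PY"): 0.3,
}

def stirling_diversity(shares, disparity):
    total = 0.0
    cats = list(shares)
    for i, a in enumerate(cats):
        for b in cats[i + 1:]:
            d = disparity.get((a, b), disparity.get((b, a), 0.0))
            total += 2 * shares[a] * shares[b] * d   # factor 2: ordered pairs (a,b) and (b,a)
    return total

print(f"Stirling diversity: {stirling_diversity(shares, disparity):.3f}")
```

Higher values indicate production that is both spread across categories and spread across categories that differ strongly from one another.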
Abstract:
Network security monitoring remains a challenge. As global networks scale up in terms of traffic volume and speed, effective attribution of cyber attacks is increasingly difficult. The problem is compounded by a combination of other factors, including the architecture of the Internet, multi-stage attacks and increasing volumes of non-productive traffic. This paper proposes to shift the focus of security monitoring from the source to the target. Simply put, resources devoted to detection and attribution should be redeployed to efficiently monitor for targeting and to prevent attacks. The effort of detection should aim to determine whether a node is under attack and, if so, to effectively prevent the attack. This paper contributes by systematically reviewing the structural, operational and legal reasons underlying this argument, and presents empirical evidence to support a shift away from attribution in favour of a target-centric monitoring approach. A carefully designed set of experiments is presented and a detailed analysis of the results is provided.
Abstract:
This study examined team processes and outcomes among 12 multi-university distributed project teams from 11 universities during their early and late development stages over a 14-month project period. A longitudinal model of team interaction is presented and tested at the individual level to consider the extent to which both formal and informal network connections (measured as degree centrality) relate to changes in team members' individual perceptions of cohesion and conflict in their teams, and their individual performance as a team member over time. The study showed a negative network centrality-cohesion relationship with significant temporal patterns, indicating that as team members perceive less degree centrality in distributed project teams, they report more team cohesion during the last four months of the project. We also found that changes in team cohesion from the first three months (i.e., the early development stage) to the last four months (i.e., the late development stage) of the project relate positively to changes in team member performance. Although degree centrality did not relate significantly to changes in team conflict over time, a strong inverse relationship was found between changes in team conflict and cohesion, suggesting that team conflict emphasizes a different but related aspect of how individuals view their experience with the team process. Changes in team conflict, however, did not relate to changes in team member performance. Ultimately, we showed that individuals who are less central in the network and report higher levels of team cohesion performed better in distributed teams over time.
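As a small illustration of the network measure used in the study, the sketch below computes normalized degree centrality (the ties a member has, divided by the n-1 possible ties) from a hypothetical who-communicates-with-whom edge list; the names and ties are invented.

```python
# Normalized degree centrality for an undirected communication network.
members = ["ana", "ben", "chen", "dia", "eve"]
ties = [("ana", "ben"), ("ana", "chen"), ("ana", "dia"), ("ben", "chen"), ("dia", "eve")]

def degree_centrality(members, ties):
    degree = {m: 0 for m in members}
    for a, b in ties:
        degree[a] += 1
        degree[b] += 1
    # Divide by the maximum possible number of ties (n - 1) to normalize to [0, 1].
    return {m: degree[m] / (len(members) - 1) for m in members}

for member, score in sorted(degree_centrality(members, ties).items(),
                            key=lambda kv: -kv[1]):
    print(f"{member}: {score:.2f}")
```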
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the Internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
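To make the neighborhood-centric idea concrete, the sketch below extracts 1-hop ego networks (a vertex, its neighbours, and the edges among them) from an edge list and runs a simple per-neighborhood analysis, triangle counting. This is a generic illustration rather than NSCALE's API, and the graph is invented.

```python
from itertools import combinations

# Made-up undirected edge list standing in for a large graph.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6), (4, 6)]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def ego_network(v):
    """Return the vertices and edges of v's 1-hop neighborhood subgraph."""
    nodes = {v} | adj[v]
    sub_edges = {(a, b) for a, b in combinations(sorted(nodes), 2) if b in adj[a]}
    return nodes, sub_edges

def triangles_through(v):
    """Count triangles containing v: pairs of v's neighbours that are themselves linked."""
    return sum(1 for a, b in combinations(sorted(adj[v]), 2) if b in adj[a])

for v in sorted(adj):
    nodes, sub_edges = ego_network(v)
    print(f"vertex {v}: ego size {len(nodes)}, ego edges {len(sub_edges)}, "
          f"triangles {triangles_through(v)}")
```

Writing the analysis against a whole extracted neighborhood, as here, is the programming model the abstract contrasts with vertex-centric frameworks that expose only one vertex's state at a time.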