920 results for personal communications service networks
Abstract:
The study of complex networks has attracted the attention of the scientific community for many obvious reasons. A vast number of systems, from the brain to ecosystems, power grids, and the Internet, can be represented as large complex networks, i.e., assemblies of many interacting components with nontrivial topological properties. The links between these components can describe global behaviour such as Internet traffic, electricity supply, or market trends. One of the most relevant topological features of graphs representing these complex systems is community structure, whose detection aims to identify the modules and, possibly, their hierarchical organization, using only the information encoded in the graph topology. Deciphering a network's community structure is important not only for characterizing the graph topologically, but also because it yields information both on the formation of the network and on its functionality.
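As an illustration of community detection from graph topology alone (a generic example, not the specific method studied in the work above), a minimal sketch using NetworkX's greedy modularity maximization on the Zachary karate-club benchmark graph:

```python
# Minimal sketch: modularity-based community detection with NetworkX.
# Illustrative only; the benchmark graph and algorithm choice are assumptions,
# not the method used in the work summarized above.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                      # classic benchmark network
communities = greedy_modularity_communities(G)  # list of frozensets of nodes

for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
print("modularity:", round(modularity(G, communities), 3))
```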
Abstract:
Scalable video coding allows efficient provision of video services at different quality levels with different energy demands. Depending on the specific type of service and network scenario, end users and/or operators may decide to choose among different energy versus quality combinations. In order to deal with the resulting trade-off, in this paper we analyze the number of video layers that are worth receiving given the energy constraints. A single-objective optimization is proposed based on dynamically selecting the number of layers, which minimizes energy consumption subject to the constraint that a minimum quality threshold is reached. However, this approach cannot reflect the fact that the same increment of energy consumption may result in different increments of visual quality. Thus, a multiobjective optimization is proposed and a utility function is defined in order to weight the energy consumption and visual quality criteria. Finally, since the optimization solving mechanism is too computationally expensive to implement on mobile devices, a heuristic algorithm is proposed. In this way, significant reductions in energy consumption are achieved while keeping reasonable quality levels.
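A minimal sketch of the layer-selection trade-off described above, with hypothetical per-layer energy and quality figures; the paper's actual optimization and utility function are not reproduced:

```python
# Toy layer selection for scalable video: hypothetical cumulative energy and
# quality scores for receiving 1..4 layers. Values are illustrative assumptions,
# not taken from the paper.
energy  = [1.0, 1.6, 2.5, 3.8]    # energy[i] = cost of receiving layers 0..i
quality = [30.0, 34.0, 36.5, 37.5]

def min_energy_layers(q_min):
    """Single-objective: fewest layers (lowest energy) meeting a quality floor."""
    for n in range(len(quality)):
        if quality[n] >= q_min:
            return n + 1
    return len(quality)  # quality floor unreachable: receive everything

def best_utility_layers(w_q=1.0, w_e=5.0):
    """Multiobjective via a weighted utility of quality minus energy."""
    utilities = [w_q * q - w_e * e for q, e in zip(quality, energy)]
    return max(range(len(utilities)), key=utilities.__getitem__) + 1

print(min_energy_layers(q_min=35.0))   # -> 3 layers
print(best_utility_layers())           # layer count with the best quality/energy balance
```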
Abstract:
One of the major concerns in an Intelligent Transportation System (ITS) scenario, such as that found on a long-distance train service, is the provision of efficient communication services that satisfy users' expectations and fulfil even highly demanding application requirements, such as safety-oriented services. In an ITS scenario, it is common to have a significant number of onboard devices that comprise a cluster of nodes (a mobile network) demanding connectivity to outside networks. This demand has to be satisfied without service disruption; consequently, the mobility of the mobile network has to be managed. Due to the nature of mobile networks, efficient and lightweight protocols are desired in the ITS context to ensure adequate service performance. However, security is also a key factor in this scenario. Since mobility management is essential for providing communications, the protocol that manages this mobility has to be protected. Furthermore, there are safety-oriented services in this scenario, so user application data should also be protected. Nevertheless, providing security is expensive in terms of efficiency. Based on these considerations, we have developed a solution for managing network mobility in ITS scenarios: the NeMHIP protocol. This approach provides secure management of network mobility in an efficient manner. In this article, we present this protocol and the strategy developed to keep its security and efficiency at satisfactory levels. We also present the analytical models developed to quantitatively analyze the efficiency of the protocol. More specifically, we have developed models for assessing it in terms of signaling cost, which demonstrate that NeMHIP generates up to 73.47% less signaling than other relevant approaches. The results obtained therefore demonstrate that NeMHIP is the most efficient and secure solution for providing communications in mobile network scenarios such as an ITS context.
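The abstract refers to an analytical signaling-cost comparison; the sketch below shows only the generic form such a comparison might take. The handover rate, message counts, and message sizes are entirely hypothetical placeholders, not values from the NeMHIP analysis:

```python
# Generic signaling-cost comparison sketch. Per-handover message counts and
# sizes are hypothetical placeholders, not values from the NeMHIP models.
def signaling_cost(handover_rate, msgs_per_handover, avg_msg_bytes):
    """Average signaling load in bytes per second."""
    return handover_rate * msgs_per_handover * avg_msg_bytes

baseline = signaling_cost(handover_rate=0.05, msgs_per_handover=8, avg_msg_bytes=120)
proposed = signaling_cost(handover_rate=0.05, msgs_per_handover=3, avg_msg_bytes=100)

reduction = 100.0 * (baseline - proposed) / baseline
print(f"signaling reduction: {reduction:.1f}%")
```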
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
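For intuition on the symmetric-allocation case described above, a small sketch that evaluates the recovery probability when a unit-size file and budget T are spread equally over m of n nodes and a data collector accesses a uniformly random r-subset; the parameter values are illustrative assumptions, not results from the thesis:

```python
# Symmetric storage allocation sketch: spread budget T equally over m of n nodes;
# the collector accesses r nodes at random, and recovery succeeds if the accessed
# amount covers a unit-size file. Parameters are illustrative, not from the thesis.
from math import ceil
from scipy.stats import hypergeom

def recovery_prob(n, r, T, m):
    """P[successful recovery] for a symmetric allocation over m nonempty nodes."""
    k_needed = ceil(m / T)            # nonempty nodes required: (T/m)*k >= 1
    if k_needed > min(m, r):
        return 0.0
    # number of accessed nonempty nodes ~ Hypergeometric(n, m, r)
    return hypergeom(n, m, r).sf(k_needed - 1)

n, r, T = 20, 5, 3.0
best_m = max(range(1, n + 1), key=lambda m: recovery_prob(n, r, T, m))
print(best_m, round(recovery_prob(n, r, T, best_m), 4))
```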
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
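As a toy illustration of the i.i.d. erasure setting only (not the time-invariant intrasession codes constructed in the thesis), the decoding probability of a message whose k information packets are protected by an assumed MDS-style code spread over a delay window of d transmitted packets, each erased independently with probability p:

```python
# Toy decoding probability under i.i.d. erasures: a message is decodable if at
# least k of the d packets allotted to it (one per slot in its delay window)
# arrive. An MDS-style code is assumed for illustration; this is not the
# intrasession construction analysed in the thesis.
from scipy.stats import binom

def decode_prob(d, k, p):
    """P[at least k of d packets survive], each erased i.i.d. with probability p."""
    return binom.sf(k - 1, d, 1.0 - p)

for p in (0.01, 0.05, 0.1):
    print(p, round(decode_prob(d=10, k=8, p=p), 4))
```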
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
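A minimal sketch of a backpressure-style forwarding decision driven by per-object virtual-interest-packet (VIP) counts, in the spirit of the policy described above; the queue values, object names, and neighbor names are hypothetical, and the actual VIP dynamics and caching rule are not reproduced:

```python
# Backpressure-style forwarding sketch: for each link, forward the data object
# with the largest positive VIP backlog differential. VIP counts and topology
# are hypothetical; this is not the full policy proposed in the thesis.
def forwarding_decisions(local_vip, neighbor_vip):
    """local_vip: {object: count}; neighbor_vip: {neighbor: {object: count}}."""
    decisions = {}
    for nbr, remote in neighbor_vip.items():
        diffs = {obj: local_vip.get(obj, 0) - remote.get(obj, 0) for obj in local_vip}
        obj, diff = max(diffs.items(), key=lambda kv: kv[1])
        if diff > 0:                       # only forward over positive backpressure
            decisions[nbr] = obj
    return decisions

local = {"/video/a": 12, "/doc/b": 3}
neighbors = {"n1": {"/video/a": 4, "/doc/b": 9}, "n2": {"/video/a": 11, "/doc/b": 0}}
print(forwarding_decisions(local, neighbors))   # {'n1': '/video/a', 'n2': '/doc/b'}
```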
Abstract:
This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated, it may change the state of points over larger regions and longer periods of time. Networked sensing is a typical approach to geospatial event detection. In contrast to traditional sensor networks composed of a small number of high-quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks. This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights into the design of networked sensing systems for detecting geospatial events. In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.
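A minimal sketch of the kind of Bayesian fusion such a framework rests on: updating the posterior probability of an event from independent, noisy binary sensor reports. The prior, detection rate, and false-alarm rate below are assumed for illustration, not parameters from the thesis:

```python
# Bayesian fusion sketch: posterior probability of a geospatial event given
# independent binary sensor reports. Detection/false-alarm rates and the prior
# are illustrative assumptions, not parameters from the thesis.
def event_posterior(reports, prior=1e-3, p_detect=0.8, p_false_alarm=0.05):
    """reports: iterable of booleans (True = sensor fired)."""
    odds = prior / (1.0 - prior)
    for fired in reports:
        if fired:
            odds *= p_detect / p_false_alarm
        else:
            odds *= (1.0 - p_detect) / (1.0 - p_false_alarm)
    return odds / (1.0 + odds)

print(round(event_posterior([True, True, False, True]), 4))
```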
Abstract:
Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future, with high penetration of distributed inverter-based renewable generators.
Proposed solutions to power flow control problems in the literature range from fully centralized to fully local. In this thesis, we focus on the two ends of this spectrum. In the first half of the thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach to solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly meshed distribution networks. We then apply the results to the fast time-scale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.
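For a flavour of a branch-flow-style relaxation on the simplest possible case, a hedged sketch of a single-line feeder with one controllable inverter var injection, solved as a second-order-cone relaxation with CVXPY. The network data and limits are made up, and the thesis's full formulation and two-step relaxation procedure are not reproduced:

```python
# Single-line branch flow model sketch with an SOC relaxation, solved in CVXPY.
# One substation bus feeds one load bus with a controllable inverter var
# injection q_inv. All impedances, loads, and limits are illustrative.
import cvxpy as cp

r, x = 0.02, 0.01          # line resistance/reactance (p.u.)
p_load, q_load = 0.8, 0.3  # load at bus 1 (p.u.)
v0 = 1.0                   # squared substation voltage (p.u.)
q_inv_max = 0.25           # inverter var capability (p.u.)

P, Q = cp.Variable(), cp.Variable()      # sending-end line flows
ell = cp.Variable(nonneg=True)           # squared line current
v1 = cp.Variable()                       # squared voltage at bus 1
q_inv = cp.Variable()                    # inverter reactive injection

constraints = [
    P == p_load + r * ell,               # real power balance at bus 1
    Q == q_load - q_inv + x * ell,       # reactive power balance at bus 1
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * ell,
    cp.quad_over_lin(cp.hstack([P, Q]), v0) <= ell,   # SOC relaxation of l = (P^2+Q^2)/v0
    0.95**2 <= v1, v1 <= 1.05**2,
    cp.abs(q_inv) <= q_inv_max,
]
prob = cp.Problem(cp.Minimize(r * ell), constraints)  # minimize line losses
prob.solve()
print(round(prob.value, 6), round(float(q_inv.value), 3), round(float(v1.value), 4))
```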
The second half (chapters 4 and 5), however, is dedicated to studying local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse and forward engineering approach to study the recently proposed piecewise linear volt/var control curves. The aim of this dissertation is to tackle some key problems in these two areas and to contribute by providing a rigorous theoretical basis for future work.
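The local-control half of the thesis studies piecewise linear volt/var curves; a generic curve of that shape is easy to sketch. The breakpoints and var limit below are assumptions, not the curves analysed in the thesis:

```python
# Generic piecewise linear volt/var droop curve: inject vars when the voltage is
# low, absorb when it is high, with a deadband around nominal. Breakpoints and
# the var limit are illustrative, not the curves studied in the thesis.
import numpy as np

V1, V2, V3, V4 = 0.95, 0.98, 1.02, 1.05   # p.u. voltage breakpoints
Q_MAX = 0.44                              # available var capacity (p.u.)

def volt_var(v):
    """Reactive power setpoint (p.u.) as a function of measured voltage (p.u.)."""
    return np.interp(v, [V1, V2, V3, V4], [Q_MAX, 0.0, 0.0, -Q_MAX])

for v in (0.93, 0.96, 1.00, 1.03, 1.06):
    print(v, round(float(volt_var(v)), 3))
```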
Abstract:
Internal research report
Abstract:
One of the most challenging problems in mobile broadband networks is how to assign the available radio resources among the different mobile users. Traditionally, research proposals are either specific to some type of traffic or rely on computationally intensive algorithms aimed at optimizing the delivery of general-purpose traffic. Consequently, commercial networks do not incorporate these mechanisms due to the limited hardware resources at the mobile edge. Emerging 5G architectures introduce cloud computing principles to add flexible computational resources to Radio Access Networks. This paper makes use of Mobile Edge Computing concepts to introduce a new element, denoted the Mobile Edge Scheduler, aimed at minimizing the mean delay of general traffic flows in the LTE downlink. This element runs close to the eNodeB and implements a novel flow-aware and channel-aware scheduling policy in order to adapt transmissions to the available channel quality of the end users.
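As a rough illustration of a flow-aware and channel-aware downlink scheduling rule (not the exact policy of the proposed Mobile Edge Scheduler), a per-TTI choice that favours users who are both close to finishing their flow and currently seeing a good channel; the flow sizes and rates are hypothetical:

```python
# Flow-aware, channel-aware scheduling sketch: each TTI, serve the flow with the
# highest instantaneous-rate-to-remaining-size ratio (an SRPT-like rule weighted
# by channel quality). Flow sizes and rates are hypothetical; this is not the
# exact Mobile Edge Scheduler policy.
def pick_flow(flows):
    """flows: {flow_id: (remaining_bytes, instantaneous_rate_bps)}."""
    backlogged = {f: (rem, rate) for f, (rem, rate) in flows.items() if rem > 0}
    if not backlogged:
        return None
    return max(backlogged, key=lambda f: backlogged[f][1] / backlogged[f][0])

flows = {
    "ue1": (2_000_000, 10_000_000),   # large flow, good channel
    "ue2": (50_000, 2_000_000),       # short flow, mediocre channel
    "ue3": (300_000, 15_000_000),     # medium flow, excellent channel
}
print(pick_flow(flows))   # short or well-served flows win, reducing mean flow delay
```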
Abstract:
This work analyzes the formation of the digital city in social relations, highlighting the effects of guaranteeing the right to privacy in the environment of users of websites and social networks, in view of the legal repercussions of leaks of information about the personal lives of network users and of the processing of the data collected by service providers. Through cyberspace, virtual communities are formed that go beyond the need for locality and sociability, creating social isolation and abandoning face-to-face interactions in real environments, giving rise to a sociability based on individualism. We evaluate the new patterns of interaction that arise in this new format of informational collectivity and their repercussions in the field of law. In a more detailed perspective, this study indicates the situations in which Internet providers incur civil liability for unlawful acts committed by third parties, and the alternatives of a system of privacy protection for safeguarding data in the face of harm in the informational environment. The survey of possible situations of civil liability was carried out through an analysis of case law and the prevailing legal doctrine, highlighting the factual aspects that characterize them. This model imposes itself, through a hierarchical relationship, on a multiplicity of individuals, creating a perfect confinement through the exercise of biopower. These roles are reinforced by a consumerist culture and the society of the spectacle, which turns the individual into a commodity by building profiles of networked users and enabling the categorization of consumers. In this context, we present the risks of a surveillance society that emerges as a feasible product of market relations, allowing free disposal of a growing body of information. This constant surveillance invades every space, keeping custody of our behaviour regardless of time, with a relentless memory in the realm of electronic communications, making our past eternally visible and giving rise to embarrassing situations that come back to haunt us.
Abstract:
The need to manage and efficiently distribute scarce resources among a company's different operations leads firms to apply Operations Research techniques. This is the case of call centres, an emerging and dynamic sector in constant development. In this sector, workforce management requires predictive techniques to determine the appropriate number of workers and thus avoid, as far as possible, both overstaffing and understaffing. This work focuses on the study of the 112 emergency call centre of Andalusia. Starting from statistical data on the average number of calls received in each time slot, provided by the regional government of this Autonomous Community, we formulate and model the problem using Linear Programming. We then solve it with two software packages, with the aim of obtaining an optimal distribution of agents that minimizes the wage cost, since this accounts for 65% of total operating expenses. Finally, using queueing theory, we examine waiting times in the queue and calculate the target number of agents that not only minimizes the wage cost but also improves service quality by keeping waiting times reasonable.
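A minimal sketch of the shift-covering linear program described above, using SciPy; the hourly staffing requirements, shift patterns, and costs are placeholder values, not the 112 Andalusia data:

```python
# Shift-covering LP sketch: choose how many agents start on each shift so that
# every hour's staffing requirement is met at minimum wage cost. Requirements,
# shift patterns, and costs are placeholders, not the 112 Andalusia figures.
import numpy as np
from scipy.optimize import linprog

hours = 24
required = np.array([4, 3, 3, 2, 2, 3, 5, 8, 10, 11, 11, 12,
                     12, 11, 10, 10, 9, 9, 8, 7, 6, 6, 5, 4])

shift_starts = [0, 6, 10, 14, 22]              # 8-hour shifts starting at these hours
cost_per_shift = np.array([90, 80, 80, 85, 95])  # wage cost per agent on each shift

# coverage[h, s] = 1 if shift s covers hour h
coverage = np.zeros((hours, len(shift_starts)))
for s, start in enumerate(shift_starts):
    for k in range(8):
        coverage[(start + k) % hours, s] = 1

# minimize cost subject to coverage @ agents >= required, agents >= 0
res = linprog(c=cost_per_shift, A_ub=-coverage, b_ub=-required,
              bounds=[(0, None)] * len(shift_starts), method="highs")
agents = np.ceil(res.x)                        # round up for an integer staffing plan
print(dict(zip(shift_starts, agents)), "cost:", float(cost_per_shift @ agents))
```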
Abstract:
Service provisioning in assisted living environments faces distinct challenges due to the heterogeneity of networks, access technology, and sensing/actuation devices in such an environment. Existing solutions, such as SOAP-based web services, can interconnect heterogeneous devices and services, and can be published, discovered and invoked dynamically. However, it is considered heavier than what is required in the smart environment-like context and hence suffers from performance degradation. Alternatively, REpresentational State Transfer (REST) has gained much attention from the community and is considered as a lighter and cleaner technology compared to the SOAP-based web services. Since it is simple to publish and use a RESTful web service, more and more service providers are moving toward REST-based solutions, which promote a resource-centric conceptualization as opposed to a service-centric conceptualization. Despite such benefits of REST, the dynamic discovery and eventing of RESTful services are yet considered a major hurdle to utilization of the full potential of REST-based approaches. In this paper, we address this issue, by providing a RESTful discovery and eventing specification and demonstrate it in an assisted living healthcare scenario. We envisage that through this approach, the service provisioning in ambient assisted living or other smart environment settings will be more efficient, timely, and less resource-intensive.
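A hedged sketch of webhook-style RESTful discovery and eventing for a smart-home sensor. This is a generic illustration, not the specification proposed in the paper; the resource names and endpoints are made up:

```python
# Webhook-style RESTful eventing sketch with Flask: clients discover resources,
# subscribe with a callback URL, and receive an HTTP POST when a resource changes.
# Resource names and endpoints are illustrative, not the paper's specification.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
resources = {"livingroom/temperature": 21.5}
subscriptions = {}          # resource name -> list of callback URLs

@app.route("/resources", methods=["GET"])
def discover():
    return jsonify({"resources": sorted(resources)})   # simple discovery listing

@app.route("/resources/<path:name>/subscriptions", methods=["POST"])
def subscribe(name):
    callback = request.get_json()["callback"]
    subscriptions.setdefault(name, []).append(callback)
    return "", 201

@app.route("/resources/<path:name>", methods=["PUT"])
def update(name):
    resources[name] = request.get_json()["value"]
    for callback in subscriptions.get(name, []):        # event notification
        requests.post(callback, json={"resource": name, "value": resources[name]},
                      timeout=2)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```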