906 results for distribution network


Relevance:

30.00%

Publisher:

Abstract:

We studied the impact of the last glacial (late Weichselian) sea-level cycle on sediment architecture in the inner Kara Sea using high-resolution acoustic sub-bottom profiling. The acoustic lines were ground-truthed with dated sediment cores. Furthermore, we refined the location of the eastern LGM ice margin using new sub-bottom profiles. New model results of post-Last Glacial Maximum (LGM) isostatic rebound for this area allow a well-constrained interpretation of the acoustic units in terms of sequence stratigraphy. The lowstand (or regressive) systems tract sediments are absent; they are represented instead by an unconformity atop Pleistocene sediments on the shelf and by a major incised dendritic paleo-river network. The subsequent transgressive and highstand systems tracts are best preserved in the incised channels and the recent estuaries, while only minor sediment accumulation is documented on the adjacent shelf areas. The Kara Sea can be subdivided into three areas: (A) estuaries, (B) the shelf and (C) deeper-lying areas, which together accumulated a total of 114 × 10^10 t of Holocene sediments.

Relevance:

30.00%

Publisher:

Abstract:

Planktonic foraminiferal assemblages and artificial neural network estimates of sea-surface temperature (SST) at ODP Site 1123 (41°47.2'S, 171°29.9'W; 3290 m deep), east of New Zealand, reveal a high-resolution history of glacial-interglacial (G-I) variability at the Subtropical Front (STF) for the last 1.2 million years, including the Mid-Pleistocene climate transition (MPT). Most G-I cycles of ~100 kyr duration have short periods of cold glacial and warm deglacial climate centred on glacial terminations, followed by long temperate interglacial periods. During glacial-deglacial transitions, maximum abundances of subantarctic and subtropical taxa coincide with SST minima and maxima, and lead ice volume by up to 8 kyr. Such relationships reflect the competing influence of subantarctic and subtropical surface inflows during glacial and deglacial periods, respectively, suggesting alternating polar and tropical forcing of southern mid-latitude ocean climate. The lead of SSTs and subtropical inflow over ice volume points to tropical forcing of southern mid-latitude ocean climate during deglacial warming. This contrasts with the established hypothesis that southern hemisphere ocean climate is driven by the influence of continental glaciations. Based on wholesale changes in the subantarctic and subtropical faunas, the last 1.2 million years are subdivided into four distinct periods of ocean climate. 1) The pre-MPT (1185-870 ka) has high-amplitude 41-kyr fluctuations in SST, superimposed on a general cooling trend and heightened productivity, reflecting long-term strengthening of the subantarctic inflow under an invigorated Antarctic Circumpolar Current. 2) The early MPT (870-620 ka) is marked by abrupt warming during MIS 21, followed by a period of unstable periodicities within the 40-100 kyr orbital bands, decreasing SST amplitudes, and long intervals of temperate interglacial climate punctuated by short glacial and deglacial phases, reflecting lower meridional temperature gradients. 3) The late MPT (620-435 ka) encompasses an abrupt decrease in the subantarctic inflow during MIS 15, followed by a period of warm, equable climate. Poorly defined, low-amplitude G-I variations in SSTs during this interval are consistent with a relatively stable STF and evenly balanced subantarctic and subtropical inflows, possibly in response to smaller, less dynamic polar ice sheets. 4) The post-MPT (435-0 ka) is marked by a major climatic deterioration during MIS 12 and a return to higher-amplitude, 100-kyr-frequency SST variations, superimposed on a long-term trend towards cooler SSTs and increased mixed-layer productivity as the subantarctic inflow strengthened and the polar ice sheets expanded.

Relevance:

30.00%

Publisher:

Abstract:

Understanding species distribution patterns and the corresponding environmental determinants is a crucial step in the development of effective strategies for the conservation and management of plant communities and ecosystems. A central prerequisite is therefore the biogeographical and macroecological analysis of the factors and processes that determine the contemporary, potential and future geographic distribution of species. This thesis was conducted in the framework of the BIOMAPS-BIOTA project at the Nees Institute of Biodiversity of Plants, funded by the German Federal Ministry of Education and Research (BMBF). The study investigated patterns of plant species richness and phytogeographic regions under contemporary environmental conditions and forecasted future climate change in an area of West Africa covering five countries: Benin, Burkina Faso, Côte d'Ivoire, Ghana and Togo. Firstly, geographic patterns of vascular plant species richness were depicted at a relatively fine spatial resolution based on the potential distribution of 3,393 species. Species richness closely follows the steep climatic gradient in the region, with a high concentration of species in the most humid areas in the south and a decrease towards the drier northern areas. The investigation of the effectiveness of the existing network of protected areas shows an overall good coverage of species in the study area. However, the proportion of covered species is considerably lower at the national scale for some countries, calling for additional protected areas in order to adequately cover the maximum number of plant species in these countries. Secondly, based on the potential distribution ranges of vascular plant species, seven phytogeographic regions were delineated that broadly reflect the vegetation zones defined by White (1983). However, notable differences from White's (1983) delineation occur at the margins of some regions, corresponding to a general southward shift of all regions: an expansion of the Sahel vegetation zone is observed in the north, while the rainforest zone shrinks in the far south. This is alarming, since the rainforest shelters a high number of species and a high proportion of range-restricted or endemic species despite its relatively small extent compared to the other regions. Finally, the evaluation of the potential impact of climate change on plant species richness in the study area reveals a severe loss of future suitable habitat for up to 50% of species per grid cell, particularly in the rainforest region. Moreover, the analysis of the possible shift of phytogeographic regions shows, in general, a strong deterioration of the West African rainforest. In contrast, the drier areas expand continuously, although a slight gain in species numbers can be observed in some particular regions. The overall lesson of this study is that the West African rainforest should be designated a high-priority area for plant biodiversity conservation, since it is subject to severe contemporary and projected future threats.

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses the issue of upgrading industrial clusters from the perspective of external linkages. It is often taken for granted that in most developing countries, due to limited domestic markets and poor traditional commercial networks, industrial clusters are able to upgrade only when they are involved in global value chains. However, the rise of China's industrial clusters challenges this view. Historically, China has had many industrial clusters with their own traditional commercial networks. This fact, combined with its huge population, resulted in the formation of a unique external linkage for China's industrial clusters after the socialist planning period ended. In concrete terms, since the 1980s a traditional commercial institution, the transaction market, began to appear in most clusters. These markets within the clusters became connected to those in the cities through interaction between traditional merchants and local governments. The result is a powerful market-network-based distribution system that has played a crucial role in enabling China's industrial clusters to respond to exploding domestic demand. This paper explains these features in detail, using Yiwu China Commodity City as a case study.

Relevance:

30.00%

Publisher:

Abstract:

IP multicast allows the efficient support of group communication services by reducing the number of IP flows needed for such communication. The increasing generalization of the use of multicast has also triggered the need to support IP multicast in mobile environments. Proxy Mobile IPv6 (PMIPv6) is a network-based mobility management solution, where the functionality to support terminal movement resides in the network. Recently, a baseline solution was adopted for multicast support in PMIPv6. This base solution is inefficient in multicast routing because it may require multiple copies of a single stream to be received by the same access gateway. Nevertheless, there is an alternative solution for supporting multicast in PMIPv6 that avoids this issue. This paper evaluates by simulation the scalability of both solutions under realistic conditions, and analyzes the sensitivity of the two proposals to a number of parameters.
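The duplication problem of the base solution can be shown with a toy copy count at a single mobile access gateway (MAG); the tuple-based model and all names below are illustrative assumptions, not the simulator used in the paper:

```python
# Toy comparison of multicast stream copies arriving at one MAG
# (hypothetical model, not the paper's simulation setup).

def copies_base(subscriptions):
    """Base solution: the MAG receives one copy of a group per LMA
    through which some attached mobile node subscribed to it."""
    return len({(lma, group) for (lma, group) in subscriptions})

def copies_direct(subscriptions):
    """Alternative solution: the MAG joins each group once,
    regardless of how many LMAs its subscribers are anchored to."""
    return len({group for (_lma, group) in subscriptions})

# Three mobile nodes at the same MAG, anchored at two LMAs,
# all watching multicast group G1; one also watches G2.
subs = [("LMA1", "G1"), ("LMA2", "G1"), ("LMA1", "G1"), ("LMA1", "G2")]
print(copies_base(subs))    # 3: G1 arrives twice (once per LMA) plus G2
print(copies_direct(subs))  # 2: one copy per distinct group
```

With per-LMA delivery, the same G1 stream crosses the access link twice; the alternative collapses it to a single copy per group, which is the inefficiency the simulation study quantifies.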

Relevance:

30.00%

Publisher:

Abstract:

Communications-Based Train Control (CBTC) systems require high-quality radio data communications for train signaling and control. Currently, most of these systems use the 2.4 GHz band with proprietary radio transceivers and leaky feeders as the distribution system. All of them demand a high-QoS radio network to improve the efficiency of railway networks. We present narrow-band, broadband and data-correlated measurements taken in the Madrid underground with a transmission system at 2.4 GHz in a 2 km test network in subway tunnels. The proposed architecture has a strong overlap between cells to improve reliability and QoS. The radio planning of the network is carefully described and modeled with narrow-band and broadband measurements and statistics. The result is a network with 99.7% of packets transmitted correctly and an average propagation delay of 20 ms. These results fulfill the QoS specifications of CBTC systems.
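Aggregates of this kind can be computed from a packet trace with a small helper; the trace below is synthetic, invented only to match the reported 99.7% delivery and 20 ms figures (a constant per-packet delay is assumed):

```python
# Minimal QoS summary over a packet trace (illustrative sketch,
# not the measurement pipeline used in the study).

def qos_summary(packets):
    """packets: list of (delivered, delay_ms); delay is None for lost packets."""
    delays = [d for ok, d in packets if ok]
    ratio = len(delays) / len(packets)          # delivery ratio
    avg_delay = sum(delays) / len(delays)       # mean delay of delivered packets
    return ratio, avg_delay

# Synthetic trace: 997 of 1000 packets delivered, each with a 20 ms delay.
trace = [(True, 20.0)] * 997 + [(False, None)] * 3
ratio, avg_delay = qos_summary(trace)
print(f"delivery ratio = {ratio:.1%}, mean delay = {avg_delay:.1f} ms")
```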

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives, between variables and objectives, and between variables, as learnt in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to approximating the Pareto set, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the walking fish group (WFG) problem set, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and for different objective space dimensions, and achieves comparable results on some of them.
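The estimate-sample loop at the heart of any EDA can be illustrated with a deliberately simplified version: independent univariate Gaussians instead of the paper's multi-dimensional Bayesian network, and a single objective (the sphere function) instead of a Pareto front:

```python
import random
import statistics

def sphere(x):
    # a standard single-objective test function (minimum 0 at the origin)
    return sum(v * v for v in x)

def eda(dim=5, pop=100, elite=30, gens=50, seed=1):
    rng = random.Random(seed)
    mu = [0.0] * dim
    sigma = [5.0] * dim
    best = None
    for _ in range(gens):
        # sample a population from the current probabilistic model
        popu = [[rng.gauss(mu[i], sigma[i]) for i in range(dim)]
                for _ in range(pop)]
        popu.sort(key=sphere)
        best = popu[0]
        sel = popu[:elite]
        # re-estimate the model from the selected individuals
        for i in range(dim):
            col = [ind[i] for ind in sel]
            mu[i] = statistics.fmean(col)
            sigma[i] = max(statistics.pstdev(col), 1e-3)
    return best

print(sphere(eda()))  # close to 0
```

The proposed algorithm replaces the independent Gaussians with a Bayesian network learned jointly over objectives and variables, so the selected set also reveals the problem structure rather than only per-variable statistics.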

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes an optimization relaxation approach based on the analogue Hopfield Neural Network (HNN) for cluster refinement of pre-classified Polarimetric Synthetic Aperture Radar (PolSAR) image data. We consider the initial classification provided by the maximum-likelihood classifier based on the complex Wishart distribution, which is then supplied to the HNN optimization approach. The goal is to improve the classification results obtained by the Wishart approach. The classification improvement is verified by computing a cluster separability coefficient and a measure of homogeneity within the clusters. During the HNN optimization process, two consistency coefficients are computed for each pixel at each iteration, taking into account two types of relations between the pixel under consideration and its neighbors. Based on these coefficients and on the information coming from the pixel itself, the pixel under study is re-classified. Different experiments verify that the proposed approach outperforms other strategies, achieving the best results in terms of separability while trading off homogeneity and preserving the relevant structures in the image. The performance is also measured in terms of central processing unit (CPU) time.
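The neighborhood-consistency idea behind the re-classification step can be sketched schematically; the update rule below is a plain relaxation with assumed weights, not the paper's HNN energy function or its two consistency coefficients:

```python
import numpy as np

# Schematic neighborhood relaxation of an initial (e.g. Wishart-based)
# label map: each pixel is re-classified toward the class favoured by
# its own evidence and its 4-neighbours.

def relax(prob, iters=10, w_self=0.6, w_neigh=0.4):
    """prob: (H, W, K) array of per-pixel class probabilities."""
    p = prob.copy()
    for _ in range(iters):
        # average class support from the 4-neighbourhood (wraps at edges)
        neigh = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                 np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
        p = w_self * p + w_neigh * neigh
        p /= p.sum(axis=2, keepdims=True)      # renormalise
    return p.argmax(axis=2)

# A noisy two-class map: left half class 0, right half class 1,
# with one contaminated pixel that the relaxation should flip back.
p = np.zeros((8, 8, 2))
p[:, :4, 0], p[:, 4:, 1] = 0.9, 0.9
p[:, :4, 1], p[:, 4:, 0] = 0.1, 0.1
p[4, 2] = [0.3, 0.7]                           # isolated outlier
labels = relax(p)
print(labels[4, 2])  # 0: smoothed back to its region's class
```

The outlier is re-classified because its neighbors' support outweighs its own weak evidence, which is the same intuition the HNN consistency coefficients formalise.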

Relevance:

30.00%

Publisher:

Abstract:

Through the use of the Distributed Fiber Optic Temperature (DFOT) measurement method, it is possible to measure temperature at small intervals (on the order of centimeters) over long distances (on the order of kilometers) with high temporal frequency and great accuracy. The heat pulse method consists of applying a known amount of heat to the soil and monitoring the temperature evolution, which depends primarily on the soil moisture content. The combination of both methods, called the active heat pulse method with fiber optic temperature sensing (AHFO), allows accurate soil moisture content measurements. To experimentally study the wetting pattern, i.e. the shape, size and water distribution from a drip irrigation emitter, a soil column 0.5 m in diameter and 0.6 m high was built. Inside the column, a fiber optic cable with a stainless steel sheath was placed forming three concentric helixes of diameters 0.2 m, 0.4 m and 0.6 m, yielding a network of 148 measurement points. Before, during and after the irrigation event, heat pulses were applied by supplying an electrical power of 20 W/m to the steel sheath. The soil moisture content was measured with a capacitive sensor in one location at depths of 0.1 m, 0.2 m, 0.3 m and 0.4 m during the irrigation. It was also determined by the gravimetric method at several locations and depths before and immediately after the irrigation. The evolution of the emitter bulb's dimensions and shape was satisfactorily measured during infiltration. Furthermore, some characteristics of the bulb that are difficult to predict (e.g. preferential flow) were detected. The results indicate that AHFO is a useful tool for estimating the wetting pattern of drip irrigation emitters in soil columns and show a high potential for field use.

Relevance:

30.00%

Publisher:

Abstract:

The Video on Demand (VoD) service is becoming a dominant service in the telecommunication market due to the great convenience it offers in the choice of content items and their independent viewing times. However, it comes with the downsides of high server storage and capacity demands, because of the large variety of content items and the high amount of traffic generated to serve all requests. Storing part of the popular content on the peers brings certain advantages, but it still has issues regarding the overall traffic in the core of the network and scalability. Therefore, we propose a P2P-assisted model for streaming VoD contents that takes advantage of the clients' unused uplink and storage capacity to serve the requests of other clients, and we present popularity-based schemes for distributing both popular and unpopular contents on the peers. The proposed model and schemes are shown to reduce the streaming traffic in the core of the network, improve the responsiveness of the system and increase its scalability.
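A popularity-based placement of the kind mentioned above can be sketched as follows; the Zipf request model, the half-replication rule and all parameter values are illustrative assumptions, not the paper's actual schemes:

```python
import random

# Illustrative popularity-based placement: requests follow a Zipf-like
# law, popular titles are replicated on many peers, and the most
# requested tail titles get a single copy each.

def zipf_weights(n, s=0.8):
    w = [1.0 / (i + 1) ** s for i in range(n)]
    total = sum(w)
    return [x / total for x in w]

def peer_hit_ratio(n_titles=1000, n_peers=200, slots_per_peer=5,
                   popular_fraction=0.1, seed=7):
    rng = random.Random(seed)
    popular = int(n_titles * popular_fraction)
    slots = n_peers * slots_per_peer
    # half the slots replicate the popular head; the other half hold
    # one copy each of the next most requested titles
    stored = set(range(popular)) | set(range(popular, popular + slots // 2))
    weights = zipf_weights(n_titles)
    requests = rng.choices(range(n_titles), weights=weights, k=20000)
    hits = sum(r in stored for r in requests)
    return hits / len(requests)

print(f"requests served from peers: {peer_hit_ratio():.1%}")
```

Because request popularity is heavily skewed, storing a modest fraction of the library on the peers already offloads most requests from the core, which is the effect the proposed schemes exploit.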

Relevance:

30.00%

Publisher:

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integrating inputs from other neurons and determines which neurons receive a neuron's output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously defines a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge. Uncertainty is a key feature of many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon).
These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated. Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixture-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline (B-spline) interpolation, where a density is approximated as a linear combination of basis splines. The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers. Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics.
Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are empirically evaluated over real datasets. We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features. A web application is developed to retrieve the experts’ classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. 
A thorough analysis of the consensus model identifies different behaviors among the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology. Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
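As a much reduced sketch of the directional classifiers described above, the following fits a univariate von Mises distribution per class in closed form (mean direction plus the standard Best-Fisher approximation of the concentration kappa) and uses it as a one-feature naive Bayes; the two-class setup and all data are illustrative assumptions:

```python
import math
import random

def fit_von_mises(angles):
    """Closed-form estimates: mean direction and Best-Fisher kappa."""
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    mu = math.atan2(s, c)
    r = math.hypot(c, s)                     # mean resultant length
    if r < 0.53:
        kappa = 2 * r + r ** 3 + 5 * r ** 5 / 6
    elif r < 0.85:
        kappa = -0.4 + 1.39 * r + 0.43 / (1 - r)
    else:
        kappa = 1 / (r ** 3 - 4 * r ** 2 + 3 * r)
    return mu, kappa

def log_vm_pdf(x, mu, kappa):
    # I0(kappa) via its power series (enough terms for moderate kappa)
    i0 = sum((kappa / 2) ** (2 * k) / math.factorial(k) ** 2
             for k in range(30))
    return kappa * math.cos(x - mu) - math.log(2 * math.pi * i0)

rng = random.Random(0)
# two classes of axon-like directions concentrated around 0 and pi/2
cls0 = [rng.gauss(0.0, 0.3) for _ in range(300)]
cls1 = [rng.gauss(math.pi / 2, 0.3) for _ in range(300)]
params = [fit_von_mises(cls0), fit_von_mises(cls1)]
x = 0.2                                      # a new direction to classify
scores = [log_vm_pdf(x, mu, k) for mu, k in params]  # equal priors
print("predicted class:", scores.index(max(scores)))
```

Treating the angle with ordinary Gaussian statistics would break at the 0/2π wrap-around, which is exactly why the dissertation resorts to von Mises and von Mises-Fisher distributions for directional variables.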

Relevance:

30.00%

Publisher:

Abstract:

The demand for video contents has rapidly increased in the past years as a result of the wide deployment of IPTV and the variety of services offered by network operators. One of the services that has become especially attractive to customers is real-time Video on Demand (VoD), because it offers immediate streaming of a large variety of video contents. The price that the operators have to pay for this convenience is increased traffic in their networks, which are becoming more congested due to the higher demand for VoD contents and the increased quality of the videos. Therefore, one of the main objectives of this thesis is to find solutions that reduce the traffic in the core of the network, keeping the quality of service at a satisfactory level and reducing the traffic cost.
The thesis proposes a system of hierarchical structure of streaming servers that runs an algorithm for optimal placement of the contents according to the users’ behavior and the state of the network. Since any algorithm for optimal content distribution reaches a limit upon which no further improvements can be made, including service customers themselves (the peers) in the streaming process can further reduce the network traffic. This process is achieved by taking advantage of the control that the operator has in the privately managed networks over the Set-Top Boxes placed at the clients’ premises. The operator reserves certain storage and streaming capacity on the peers to store the video contents and to stream them to the other clients in order to alleviate the streaming servers. Because of the inability of the peers to completely substitute the streaming servers, the thesis proposes a system for peer-assisted streaming. Some of the important questions addressed in the thesis are how the system parameters and the various distributions of the video contents on the peers would impact the overall system performance. In order to give answers to these questions, the thesis proposes a precise and flexible stochastic model that takes into consideration parameters like uplink and storage capacity of the peers, number of peers, size of the video content library, size of contents and content distribution scheme to estimate the benefits of the peer-assisted streaming. The work also proposes an extended version of the mathematical model by including the failure probability of the peers and their recovery time in the set of parameters. These models are used as tools for conducting thorough analyses of the peer-assisted system for VoD streaming for the wide range of defined parameters.
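The kind of quantity such a stochastic model estimates can be sketched in a few lines: given a Zipf-like popularity distribution over the video library, the number of most-popular titles cached on the peers, and the aggregate peer uplink capacity, one can compute the fraction of streaming demand the peers absorb. All figures below are hypothetical; this is a back-of-the-envelope illustration, not the model proposed in the thesis.

```python
# Sketch: fraction of VoD streaming demand that peers can serve, assuming
# Zipf-distributed content popularity and peer caching of the top titles.

def zipf_popularity(n_titles, s=0.8):
    """Normalized Zipf popularity over titles ranked 1..n_titles."""
    weights = [1.0 / (r ** s) for r in range(1, n_titles + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def peer_offload_fraction(n_titles, cached_titles, peer_uplink_streams,
                          total_demand_streams, s=0.8):
    pop = zipf_popularity(n_titles, s)
    # Demand (in concurrent streams) for the cached, most popular titles.
    cached_demand = sum(pop[:cached_titles]) * total_demand_streams
    # Peers serve only what both their caches hold and their uplinks can carry.
    served_by_peers = min(cached_demand, peer_uplink_streams)
    return served_by_peers / total_demand_streams
```

Because Zipf popularity is heavily skewed, caching even a small fraction of the library on the peers can offload a large share of the demand, which is why the content distribution scheme appears as a key model parameter.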

Relevância:

30.00% 30.00%

Publicador:

Resumo:

One of the main obstacles to the widespread adoption of quantum cryptography has been the difficulty of integration into standard optical networks, largely due to the tremendous difference in power between classical signals and the single-photon-level quantum signals used for quantum key distribution. This makes the technology expensive and hard to deploy. In this letter, we show an easy and straightforward method for integrating quantum cryptography into optical access networks. In particular, we analyze how a quantum key distribution system can be seamlessly integrated into a standard access network based on the passive optical and time division multiplexing paradigms. The novelty of this proposal lies in the selective post-processing, which allows secret keys to be distilled while avoiding the noise produced by other network users. Importantly, the proposal requires neither modification of the quantum or classical hardware specifications nor any synchronization mechanism between the network and the quantum cryptography devices.
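The selective post-processing idea can be sketched concretely: in a time-division-multiplexed PON each user owns recurring time slots, so key distillation can simply discard detection events that fall outside the slots of the intended user, removing the noise contributed by other users. The slot length and event format below are hypothetical illustrations, not the scheme's actual parameters.

```python
# Sketch: slot-based sifting of detection events in a TDM access network.
SLOT_US = 125  # hypothetical TDM slot length in microseconds

def sift_by_slot(detections, my_slot, n_slots):
    """Keep only the bits whose detection time falls in my_slot.

    detections: list of (timestamp_us, bit) tuples from the QKD receiver.
    """
    kept = []
    for t, bit in detections:
        slot = int(t // SLOT_US) % n_slots
        if slot == my_slot:
            kept.append(bit)
    return kept
```

Because the filtering happens entirely in classical post-processing, the quantum hardware and the network's TDM schedule are left untouched, which matches the abstract's claim that no hardware modification is needed.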

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Abstract—In this paper we explore how recent technologies can improve the security of optical networks. In particular, we study how to use quantum key distribution (QKD) in common optical network infrastructures and propose a method to overcome its distance limitations. QKD is the first technology offering information-theoretic secret-key distribution that relies only on the fundamental principles of quantum physics. Point-to-point QKD devices have reached a mature industrial state; however, these devices are severely limited in distance, since signals at the quantum level (e.g., single photons) are highly affected by losses in the communication channel and in intermediate devices. To overcome this limitation, intermediate nodes (i.e., repeaters) are used. Both quantum-regime and trusted classical repeaters have been proposed in the QKD literature, but only the latter can be implemented in practice. As a novelty, we propose here a new QKD network model based on the use of not fully trusted intermediate nodes, referred to as weakly trusted repeaters. This approach forces the attacker to break several paths simultaneously to gain access to the exchanged key, thus significantly improving the security of the network. We formalize the model using network codes and provide real scenarios that allow users to exchange secure keys over metropolitan optical networks using only passive components. Moreover, the theoretical framework allows these scenarios to be extended not only to accommodate more complex trust constraints, but also to consider robustness and resiliency constraints on the network.
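The multi-path principle behind weakly trusted repeaters can be illustrated with the simplest possible network code, XOR secret sharing: the key is split into one share per disjoint path, and every share is needed to reconstruct it, so an attacker controlling fewer than all paths learns nothing. This is a minimal sketch of the underlying idea, not the paper's formalization.

```python
# Sketch: XOR secret sharing of a key across n disjoint network paths.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n_paths):
    """Split key into n_paths shares; all shares are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_paths - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)  # last = key XOR (all random shares)
    shares.append(last)
    return shares

def combine_key(shares):
    key = shares[0]
    for s in shares[1:]:
        key = xor_bytes(key, s)
    return key
```

Any proper subset of the shares is statistically independent of the key, which is what forces the attacker to compromise every path simultaneously.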

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Quantum Key Distribution (QKD) is maturing quickly. However, the current approaches to its use in networks require conditions that make it an expensive technology. All the QKD networks deployed to date are designed as collections of dedicated point-to-point links that use the trusted repeater paradigm. Instead, we propose a novel network model in which QKD systems simultaneously use quantum and conventional signals that are wavelength-multiplexed over a common communication infrastructure. Signals are transmitted end-to-end within a metropolitan area using only optical components. The model resembles a commercial telecom network and takes advantage of existing components, thus allowing for a cost-effective and reliable deployment.
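A rough loss-budget calculation shows why metropolitan distances are the natural scope for such end-to-end optical QKD: quantum signals in standard single-mode fiber at 1550 nm are attenuated by roughly 0.2 dB/km, and each passive component (splitter, mux/demux) adds insertion loss, so channel transmittance decays exponentially with distance. The figures below are typical textbook values, not measurements from the proposed network.

```python
# Sketch: single-photon channel transmittance over a metro fiber link.
FIBER_LOSS_DB_PER_KM = 0.2   # typical single-mode fiber loss at 1550 nm

def channel_transmittance(distance_km, component_losses_db=()):
    """Fraction of photons surviving fiber plus passive-component losses."""
    total_db = FIBER_LOSS_DB_PER_KM * distance_km + sum(component_losses_db)
    return 10 ** (-total_db / 10)
```

For example, 20 km of fiber plus a 3 dB passive splitter gives a 7 dB budget, i.e., about 20% of the photons survive, which is workable for a metro area but explains why long-haul links need repeaters.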