26 results for Network scale-up method


Relevance:

30.00%

Publisher:

Abstract:

A major global concern today is the continuing rise in the cost of energy, driven by demand and by environmental impact. This escalation forces a systematic search for better systems that reduce that cost. In wireless mobile communications, the energy savings obtained by improving the efficiency of base station equipment are insufficient, so solutions must also be found at the architecture level. LTE defines repeaters (relays) as a low-consumption resource for extending network coverage and/or capacity. This dissertation evaluates an energy-saving method based on replacing a central base station, surrounded by other base stations, with a given number of repeaters. The resulting coverage and capacity are assessed, as well as the energy saved. The results show that replacing one base station, completely surrounded by other stations, with between 1 and 9 repeaters, depending on the inter-site distance (ISD, up to 1750 m), can save up to €1,000.00 per year and 20 kW per day. A system-level energy efficiency gain of up to 13% is also observed.
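The trade-off described above comes down to simple arithmetic: the saving is the base station's consumption minus that of the repeaters replacing it. The sketch below illustrates this with assumed power draws and an assumed electricity tariff; none of the figures are taken from the dissertation.

# Minimal sketch of the energy/cost trade-off behind replacing one eNB with
# N repeaters. All power draws and tariff values below are illustrative
# assumptions, not figures from the dissertation.

ENB_POWER_KW = 1.0         # assumed average draw of one macro base station
RELAY_POWER_KW = 0.1       # assumed average draw of one LTE repeater
TARIFF_EUR_PER_KWH = 0.12  # assumed electricity tariff

def daily_energy_saving_kwh(n_relays: int) -> float:
    """Energy saved per day when one eNB is replaced by n_relays repeaters."""
    return 24.0 * (ENB_POWER_KW - n_relays * RELAY_POWER_KW)

def annual_cost_saving_eur(n_relays: int) -> float:
    """Cost saved per year for the same substitution."""
    return 365.0 * daily_energy_saving_kwh(n_relays) * TARIFF_EUR_PER_KWH

if __name__ == "__main__":
    for n in range(1, 10):  # the study considers 1 to 9 repeaters
        print(f"{n} repeaters: {daily_energy_saving_kwh(n):5.1f} kWh/day, "
              f"€{annual_cost_saving_eur(n):7.2f}/year")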

Relevance:

30.00%

Publisher:

Abstract:

With the increasing complexity of current networks, the need for Self-Organizing Networks (SON), which aim to automate most of the associated radio planning and optimization tasks, has become evident. Within SON, this paper aims to optimize the Neighbour Cell List (NCL) for Long Term Evolution (LTE) evolved NodeBs (eNBs). An algorithm composed of three decision methods was developed: distance-based, Radio Frequency (RF) measurement-based, and Handover (HO) statistics-based. The distance-based decision proposes a new NCL taking into account the eNB location and interference tiers, based on the quadrants method. The other two decisions consider signal strength measurements and HO statistics, respectively; they also rank each eNB and decide on neighbour relation additions/removals based on user-defined constraints. The algorithms were developed and implemented on top of an existing professional radio network optimization tool. Several case studies were produced using real data from a Portuguese LTE mobile operator. © 2014 IEEE.
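The abstract does not detail the quadrants method, so the sketch below is only one plausible interpretation of the distance-based decision: candidate neighbours are grouped by quadrant around the serving eNB and the nearest sites in each quadrant are proposed for the NCL.

# Illustrative sketch of a distance/quadrant-based neighbour pre-selection.
# The exact "quadrants method" used in the paper is not detailed in the
# abstract; this interpretation simply keeps the closest candidates in each
# of the four quadrants around the serving eNB.
import math
from typing import Dict, List, Tuple

def quadrant(dx: float, dy: float) -> int:
    """Quadrant index (0..3) of a candidate relative to the serving eNB."""
    return (0 if dx >= 0 else 1) if dy >= 0 else (3 if dx >= 0 else 2)

def propose_ncl(serving: Tuple[float, float],
                candidates: Dict[str, Tuple[float, float]],
                per_quadrant: int = 2) -> List[str]:
    """Return a candidate NCL: nearest `per_quadrant` eNBs in each quadrant."""
    buckets: Dict[int, List[Tuple[float, str]]] = {q: [] for q in range(4)}
    for name, (x, y) in candidates.items():
        dx, dy = x - serving[0], y - serving[1]
        buckets[quadrant(dx, dy)].append((math.hypot(dx, dy), name))
    ncl = []
    for q in range(4):
        ncl += [name for _, name in sorted(buckets[q])[:per_quadrant]]
    return ncl

# Example: serving eNB at the origin, a few candidate sites around it.
print(propose_ncl((0.0, 0.0), {
    "eNB_A": (500.0, 200.0), "eNB_B": (-300.0, 900.0),
    "eNB_C": (-700.0, -100.0), "eNB_D": (150.0, -650.0),
}))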

Relevance:

30.00%

Publisher:

Abstract:

The emergence of smartphones with Wireless LAN (WiFi) network interfaces brought new challenges to application developers. The expected increase in user connectivity will raise users' expectations, for example regarding the performance of background applications. Unfortunately, the number and breadth of studies on the new patterns of user mobility and connectivity that result from the emergence of smartphones are still insufficient to support this claim. This paper contributes preliminary results from a large-scale study of the usage patterns of about 49000 devices and 31000 users who accessed at least one access point of the eduroam WiFi network on the campuses of the Lisbon Polytechnic Institute. The results confirm that the increasing number of smartphones has led to significant changes in usage patterns, with an impact on the amount of traffic and on users' connection time.
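For a sense of the aggregation behind such usage figures, the sketch below computes per-device-type connection time and traffic from WiFi session records; the column names are a hypothetical schema, not the actual eduroam accounting format used in the study.

# Sketch of the kind of aggregation such a study involves: connection time and
# traffic per device type from WiFi session records. The columns below are a
# hypothetical schema, not the eduroam log format used in the paper.
import pandas as pd

sessions = pd.DataFrame({
    "user":        ["u1", "u1", "u2", "u3"],
    "device_type": ["smartphone", "laptop", "smartphone", "laptop"],
    "start":       pd.to_datetime(["2012-03-01 09:00", "2012-03-01 10:30",
                                   "2012-03-01 09:15", "2012-03-01 11:00"]),
    "end":         pd.to_datetime(["2012-03-01 09:20", "2012-03-01 12:00",
                                   "2012-03-01 10:45", "2012-03-01 11:40"]),
    "bytes":       [15e6, 320e6, 80e6, 45e6],
})

sessions["duration_min"] = (sessions["end"] - sessions["start"]).dt.total_seconds() / 60
summary = sessions.groupby("device_type").agg(
    sessions=("user", "count"),
    median_duration_min=("duration_min", "median"),
    total_traffic_mb=("bytes", lambda b: b.sum() / 1e6),
)
print(summary)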

Relevance:

30.00%

Publisher:

Abstract:

We present an analysis and characterization of the regional seismicity recorded by a temporary broadband seismic network deployed in the Cape Verde archipelago between November 2007 and September 2008. The detection of earthquakes was based on spectrograms, allowing discrimination from low-frequency volcanic signals and resulting in 358 events, of which 265 were located, with magnitudes usually smaller than 3. For the locations, a new 1-D P-velocity model was derived for the region, showing a crust consistent with an oceanic crustal structure. The seismicity is located mostly offshore the westernmost and geologically youngest areas of the archipelago, near the islands of Santo Antao and Sao Vicente in the NW and Brava and Fogo in the SW. The SW cluster has a lower occurrence rate and corresponds to seismicity concentrated mainly along an alignment between Brava and the Cadamosto seamount, presenting normal faulting mechanisms. The NW cluster, located offshore SW of Santo Antao, was so far unknown and concentrates around a recently recognized submarine cone field; it presents focal depths extending from the crust to the upper mantle and suggests volcanic unrest. No evident temporal behaviour could be perceived, although the events tend to occur in bursts of activity lasting a few days. In this recording period, no significant activity was detected at Fogo volcano, the most active volcanic edifice in Cape Verde. The characteristics of the seismicity point mainly to a volcanic origin. The correlation of the recorded seismicity with active volcanic structures agrees with the tendency for a westward migration of volcanic activity in the archipelago indicated by the geologic record. (C) 2014 Elsevier B.V. All rights reserved.
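The abstract notes only that detection relied on spectrograms to separate earthquakes from low-frequency volcanic signals; the sketch below illustrates one simple way to do such screening. The 5 Hz cutoff and the power-ratio threshold are assumptions, not the paper's actual criteria.

# Illustrative spectrogram-based screening: flag time windows whose average
# spectral power above a cutoff frequency dominates the power below it
# (earthquake-like), as opposed to low-frequency volcanic signals.
import numpy as np
from scipy.signal import spectrogram

def high_frequency_windows(trace, fs, cutoff_hz=5.0, ratio_min=2.0):
    """Return start times (s) of windows dominated by power above cutoff_hz."""
    f, t, Sxx = spectrogram(trace, fs=fs, nperseg=int(4 * fs))
    high = Sxx[f >= cutoff_hz].mean(axis=0)
    low = Sxx[(f >= 0.5) & (f < cutoff_hz)].mean(axis=0) + 1e-20
    return t[high / low > ratio_min]

# Synthetic example: 60 s of noise with a 10 Hz burst between 30 s and 35 s.
fs = 100.0
time = np.arange(0, 60, 1 / fs)
trace = 0.1 * np.random.default_rng(0).standard_normal(time.size)
trace[3000:3500] += np.sin(2 * np.pi * 10.0 * time[3000:3500])
print(high_frequency_windows(trace, fs))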

Relevance:

30.00%

Publisher:

Abstract:

Finding the structure of a confined liquid crystal is a difficult task, since both the density and order-parameter profiles are nonuniform. Starting from a microscopic model and density-functional theory, one has to either (i) solve a nonlinear, integral Euler-Lagrange equation, or (ii) perform a direct multidimensional free energy minimization. The traditional implementations of both approaches are computationally expensive and plagued with convergence problems. Here, as an alternative, we introduce an unsupervised variant of the multilayer perceptron (MLP) artificial neural network for minimizing the free energy of a fluid of hard nonspherical particles confined between planar substrates of variable penetrability. We then test our algorithm by comparing its results for the structure (density-orientation profiles) and equilibrium free energy with those obtained by standard iterative solution of the Euler-Lagrange equations and with Monte Carlo simulation results. Very good agreement is found, and the MLP method proves competitively fast, flexible, and refinable. Furthermore, it can be readily generalized to the richer patterned-substrate geometries that are now experimentally realizable but very problematic for conventional theoretical treatments.
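Since the abstract only names the approach, here is a minimal sketch of the underlying idea under much simpler assumptions: a small MLP parameterizes the density profile and plain gradient descent minimizes a free-energy functional over its weights. The ideal-gas functional, external potential, and hyperparameters are placeholders for the much richer hard-particle functional of the paper.

# Toy sketch: an MLP produces rho(z) and is trained, unsupervised, by
# descending a free-energy functional. The functional here is the ideal gas
# in an external potential (exact minimizer rho = exp(mu - V_ext)); gradients
# are taken by finite differences for brevity.
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 101)             # positions between the substrates
dz = z[1] - z[0]
mu = 0.0                                    # chemical potential (reduced units)
V_ext = 4.0 * (z - 0.5) ** 2                # assumed soft confining potential

n_hidden = 8                                # one hidden layer of tanh units
params = 0.1 * rng.standard_normal(3 * n_hidden + 1)

def rho(p):
    """Density profile produced by the MLP; softplus keeps it positive."""
    W1, b1 = p[:n_hidden], p[n_hidden:2 * n_hidden]
    W2, b2 = p[2 * n_hidden:3 * n_hidden], p[-1]
    h = np.tanh(np.outer(z, W1) + b1)
    return np.log1p(np.exp(h @ W2 + b2))

def omega(p):
    """Grand-potential functional of the ideal gas in V_ext."""
    r = rho(p)
    return np.sum(r * (np.log(r) - 1.0) + r * (V_ext - mu)) * dz

# Unsupervised training: descend the free energy itself (no target profiles),
# using forward-difference gradients.
eps, lr = 1e-6, 0.05
for _ in range(3000):
    base = omega(params)
    grad = np.array([(omega(params + eps * np.eye(params.size)[i]) - base) / eps
                     for i in range(params.size)])
    params -= lr * grad

print("max deviation from exact profile:",
      np.max(np.abs(rho(params) - np.exp(mu - V_ext))))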

Relevance:

30.00%

Publisher:

Abstract:

In this article, physical layer awareness in access, core, and metro networks is addressed, and a Physical Layer Aware Network Architecture Framework for the Future Internet is presented and discussed, as proposed within the framework of the European ICT Project 4WARD. Current limitations and shortcomings of the Internet architecture are driving research trends at a global scale toward a novel, secure, and flexible architecture. This Future Internet architecture must allow for the co-existence and cooperation of multiple networks on common platforms, through the virtualization of network resources. Possible solutions span a full range of technologies, from fiber backbones to wireless access networks. The virtualization of physical networking resources will enhance the possibility of handling different profiles, while providing the impression of mutual isolation. This abstraction strategy requires well-designed mechanisms to deal with channel impairments and requirements, in both wireless (access) and optical (core) environments.

Relevance:

30.00%

Publisher:

Abstract:

Lisboa, cidade cenográfica is an installation that results from a process of successive recordings of moments, experiences, and ways of living the city of Lisbon, with different narratives. Using a method of assemblage of elements taken from the street and reconstructing volumetric block compositions that include graphic images, light and sound sources, and various textures, I produced an installation meant to be occupied, as if it were the very process of wandering through a city, in this case Lisbon. The final installation, Lisboa, cidade cenográfica, is itself a maquette, a starting point for another, almost endless process that would lead to another installation, one that would swallow us and take hold of our presence. By manipulating different scales, compositions, and spatial morphologies, one would obtain an almost endless installation, like the city itself. The current installation is like the synthesis of a Fóssil Urbano (Urban Fossil). In observing and capturing images of the city, care was taken to do so at different times of day. The sounds used in the installation were recorded in the streets of Lisbon and range from church bells to birdsong, planes flying overhead, traffic with its horns, and ambulance sirens, among others. As part of the development of the project and of this descriptive report (Memória Descritiva), I asked several people for Cartas de Lisboa (Letters from Lisbon), testifying to how they inhabit or have inhabited the city. Through the headphones present in the installation one hears the poem Lisbon Revisited (1923), by Álvaro de Campos, thus completing the ambient sound of Lisboa, cidade cenográfica. That poem, audible only in this way, subtly superimposes itself on the other ambient sounds (outside the headphones).

Relevance:

30.00%

Publisher:

Abstract:

Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral data sets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral data sets on two different NVIDIA GPU architectures, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
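To illustrate why a low number of endmembers allows far fewer measurements than spectral bands, here is a toy compressive-sensing sketch in which each pixel is compressed by a random matrix and recovered by least squares on a known low-dimensional spectral subspace. P-HYCA itself also estimates that subspace and exploits spatial structure; all sizes below are arbitrary.

# Simplified compressive-sensing illustration: hyperspectral pixels lying in a
# p-dimensional spectral subspace can be recovered from m << L random
# measurements by solving a small least-squares problem per pixel. This is a
# toy stand-in for P-HYCA, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)
L, p, n_pixels, m = 200, 5, 1000, 20        # bands, endmembers, pixels, measurements

E = rng.random((L, p))                      # assumed endmember signatures
A = rng.dirichlet(np.ones(p), n_pixels).T   # abundances (sum to one)
X = E @ A                                   # clean hyperspectral pixels (L x N)

Phi = rng.standard_normal((m, L)) / np.sqrt(m)             # random measurement matrix
Y = Phi @ X + 0.001 * rng.standard_normal((m, n_pixels))   # compressed pixels

# Reconstruction: solve (Phi E) a = y per pixel, then map back with E.
A_hat, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_hat = E @ A_hat

print("compression ratio:", L / m)
print("relative reconstruction error:",
      np.linalg.norm(X_hat - X) / np.linalg.norm(X))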

Relevance:

30.00%

Publisher:

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring the pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
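As context for the "pure pixel assumption is violated" setting that SISAL targets, the toy sketch below builds a scene in which every pixel is a genuine mixture and shows that the true endmembers are not among the observed pixels, which is why a minimum-volume method is required. The mixing model and data sizes are arbitrary; the SISAL solver itself is not reproduced here.

# Quick illustration of the setting SISAL addresses: when every pixel is a
# genuine mixture, the true endmembers never appear in the data, so they
# cannot be picked from the observed pixels and a minimum-volume approach
# is needed. Data dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
L, p, N = 50, 3, 2000                       # bands, endmembers, pixels
E = rng.random((L, p))                      # assumed endmember signatures

A = rng.dirichlet(np.ones(p), N).T          # abundances
A = A[:, A.max(axis=0) < 0.8]               # keep only well-mixed pixels
Y = E @ A                                   # observed (mixed) pixels

# Distance from each true endmember to its closest observed pixel.
d = np.linalg.norm(Y[:, :, None] - E[:, None, :], axis=0).min(axis=0)
print("closest observed pixel to each endmember:", np.round(d, 3))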

Relevance:

30.00%

Publisher:

Abstract:

The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, in which the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral data sets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived to properly exploit the graphics processing unit (GPU) architecture at low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral data sets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
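A minimal serial sketch of the per-pixel problem being parallelized, assuming a known spectral library and enforcing only the non-negativity constraint on the abundances (sum-to-one can be added with the usual row-augmentation trick); the library, data, and sizes are synthetic.

# Serial sketch of semisupervised, library-based unmixing: each pixel is
# unmixed against a known spectral library under a non-negativity constraint
# (one NNLS problem per pixel). On a GPU these independent problems map
# naturally to one thread/block per pixel. Library and data are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
L, m, N = 100, 30, 500                   # bands, library size, pixels

library = rng.random((L, m))             # assumed spectral library
true_idx = rng.choice(m, size=3, replace=False)
abund = rng.dirichlet(np.ones(3), N).T   # each pixel mixes 3 library members
Y = library[:, true_idx] @ abund + 0.001 * rng.standard_normal((L, N))

# Pixel-by-pixel non-negative least squares against the whole library.
X_hat = np.column_stack([nnls(library, Y[:, j])[0] for j in range(N)])

active = np.argsort(X_hat.sum(axis=1))[-3:]      # most-used library members
print("true members:", sorted(true_idx), "recovered:", sorted(active.tolist()))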

Relevance:

30.00%

Publisher:

Abstract:

Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPU). The method is based on the vertex component analysis (VCA), a geometry-based and highly parallelizable approach. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral data sets without using any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral data sets reveal considerable acceleration factors, up to 24 times.
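For context, the geometric core of VCA can be sketched in a few lines: at each step the data are projected onto a direction orthogonal to the subspace spanned by the endmembers found so far, and the pixel with the largest projection becomes the next endmember. The sketch below omits VCA's SNR-dependent preprocessing (PCA or projective projection) and is only an illustration of that idea; the heavy projections over all pixels are what a GPU implementation parallelizes.

# Simplified VCA-style endmember extraction: repeatedly project the data onto
# a direction orthogonal to the endmembers found so far and take the most
# extreme pixel. VCA's SNR-dependent dimensionality-reduction step is omitted.
import numpy as np

def vca_core(Y: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """Return indices of p extreme pixels of Y (bands x pixels)."""
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], 0))            # endmembers found so far
    indices = []
    for _ in range(p):
        w = rng.standard_normal(Y.shape[0])
        if E.shape[1] > 0:                   # remove the component in span(E)
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        f = w / np.linalg.norm(w)
        idx = int(np.argmax(np.abs(f @ Y)))  # most extreme pixel along f
        indices.append(idx)
        E = np.column_stack([E, Y[:, idx]])
    return np.array(indices)

# Toy data: mixtures of 4 signatures, with a few nearly pure pixels included.
rng = np.random.default_rng(5)
L, p, N = 60, 4, 1000
M = rng.random((L, p))
A = rng.dirichlet(0.2 * np.ones(p), N).T     # sparse Dirichlet -> near-pure pixels
Y = M @ A
print("selected pixel indices:", vca_core(Y, p))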