932 results for Information dispersal algorithm


Relevance:

20.00%

Publisher:

Abstract:

IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA

Relevance:

20.00%

Publisher:

Abstract:

Paper presented at ECKM 2010, the 11th European Conference on Knowledge Management, 2–3 September 2010, Famalicão, Portugal. URL: http://www.academic-conferences.org/eckm/eckm2010/eckm10-home.htm

Relevance:

20.00%

Publisher:

Abstract:

All everyday activities take place in space, and it is around space that all information and knowledge revolve. These are the key elements in the organisation of territories, so their creation, use and distribution should occur in a balanced way throughout the whole territory, allowing all individuals to participate in an egalitarian society in which the flow of knowledge can take precedence over the flow of interests. The information society depends, to a large extent, on the technological capacity to disseminate information, and consequently knowledge, throughout the territory, thereby creating the conditions for more balanced development from both the social and the economic points of view and avoiding the existence of info-exclusion territories. The Internet should therefore be considered more than a mere technology, given that its importance goes well beyond the frontiers of culture and society. It is already part of daily life and of the new ways of thinking about and transmitting information, making it a basic necessity, essential for full socio-economic development. Its role as a platform for the creation and distribution of content is regarded as indispensable for education in today’s society, since it makes information a much more easily acquired good. “…in the same way that the new technologies of generation and distribution of energy allowed factories and large companies to establish themselves as the organisational bases of industrial society, so the internet today constitutes the technological base of the organisational form that characterises the Information Era: the network” (CASTELLS, 2004:15). The changes taking place today in regional and urban structures are increasingly evident, driven by a combination of factors such as faster means of transport, more efficient telecommunications, and other cheaper and more advanced information and knowledge technologies. Although their impact on society is obvious, society itself also has a strong influence on the evolution of these technologies. While physical distance has lost much of its power to explain particular phenomena of the economy and of society, other aspects such as telecommunications, new forms of mobility, innovation networks, the Internet and cyberspace have become more important and are now the subject of study and deep analysis. Geographical information science allows problems to be analysed much more rigorously, integrating the concepts of place, space and time in a much more balanced way. Among the traditional disciplines that have already found their place in this line of research, special attention can be given to a geography of new spaces which, while being neither a geography of ‘innovation’, nor of the ‘Internet’, nor even a ‘virtual’ one, can be defined as a geography of the ‘Information Society’, encompassing not only technological aspects but also a socio-economic approach. According to the latest European statistical data, Portugal shows a deficit in information and knowledge dissemination compared with its European partners. Some of the causes are well identified, such as low levels of education and weak investment in innovation and R&D in both the private and the public sectors, but others seem to be hidden behind socio-economic and technological factors.

Portugal therefore emerged naturally as the case study in a difficult quest to find the major causes of these territorial asymmetries. The substantial amount of data needed for this work was very difficult to obtain and was insufficient for the islands of Madeira and the Azores, so only Continental Portugal was considered. In an effort to understand the various aspects of the Geography of the Information Society, and bearing in mind the increasingly generalised use of information technologies together with the range of technologies available for the dissemination of information, it is important to: (i) reflect on the geography of the new socio-technological spaces; (ii) evaluate the potential for the dissemination of information and knowledge through the selection of variables that make it possible to determine the dynamics of a given territory or region; and (iii) define a Geography of the Information Society in Continental Portugal.

Relevance:

20.00%

Publisher:

Abstract:

The objective of large investments in telecommunication networks is to bring economies closer together and put an end to territorial asymmetries. The most isolated regions could be the main beneficiaries of this new wave of technological investment spreading through the territories. In the new economic scenarios created by globalisation, high-capacity backbones and a coherent information society policy are two instruments that could change the fate of regions and launch them into a context of economic development. Technology can give services or products international projection and can be the differentiating element between a national and an international economic strategy. The networks and their flows are therefore becoming two of the most important variables for economies. Measuring and representing this new informational accessibility, mapping new communities, and finding new patterns and location models could be today’s challenge. In physical, real space, location is defined by two or three geographical co-ordinates. In the virtual space of networks, or in cyberspace, geography seems incapable of defining location, because it lacks a good model. To address this problem, new fields of study grounded in geographical theories and concepts have emerged; Internet Geography, Cybergeography and the Geography of Cyberspace are just three examples. In this paper, using Internet Geography and informational cartography, it was possible to observe and analyse the spatialisation of the Internet phenomenon through the distribution of IP addresses in the Portuguese territory. This work shows the great potential and applicability of this indicator for studies of Internet dissemination and regional development. The Portuguese territory is seen in a completely new way: the distribution of IP addresses under the country-code top-level domain (.pt) can reveal new regional hierarchies. The spatial concentration or dispersion of top-level domains appears to be a good instrument for reflecting the info-structural dynamics and economic development of a territory, especially at the regional level.
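Purely as an illustration of how such an indicator can be used once IP address blocks or .pt domains have been geolocated (the geolocation itself being the substantive part of the work), the following Python sketch ranks regions by domain density; all region names and figures below are invented for the example.

```python
# Illustrative sketch only: ranking regions by .pt domain density once
# domains/IP blocks have been geolocated. All sample figures are invented.

import pandas as pd

records = pd.DataFrame({
    "region": ["Lisboa", "Porto", "Coimbra", "Faro"],
    "pt_domains": [52000, 21000, 6500, 4200],        # hypothetical counts
    "population": [2_870_000, 1_780_000, 440_000, 470_000],
})
records["domains_per_1000"] = 1000 * records.pt_domains / records.population
print(records.sort_values("domains_per_1000", ascending=False))
```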

Relevance:

20.00%

Publisher:

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space becomes even more difficult if, beyond performance and area, we also consider metrics such as performance and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to analyse the algorithms formally, considering the main architectural aspects, and to determine how each architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture was designed. The results indicate that a 100 mm² integrated circuit implementing the proposed architecture in a 65 nm technology can achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s, corresponding to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, corresponding to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for area efficiency, which is limited by the lower memory bandwidth considered. The results are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance in matrix multiplication.
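The paper's cycle-count equation is not reproduced in this abstract, but the style of analysis it describes can be sketched. Below is a minimal, hypothetical Python model relating the execution cycles of a blocked dense matrix multiplication to core count, per-core local memory and external memory bandwidth; all parameter names and values are illustrative assumptions, not the paper's equation.

```python
# A minimal, illustrative cycle-count model for dense n x n matrix
# multiplication on a hypothetical many-core chip. All parameters are
# assumptions for illustration; the paper derives its own equation.

import math

def estimated_cycles(n, cores, flops_per_cycle, local_mem_words,
                     mem_bw_words_per_cycle):
    # Largest square block with three b x b tiles (A, B, C) in local memory.
    b = int(math.sqrt(local_mem_words / 3))
    # Compute cycles: 2*n^3 floating-point operations spread over all cores.
    compute = 2 * n**3 / (cores * flops_per_cycle)
    # Off-chip traffic for b x b blocking: roughly 2*n^3/b words (A and B
    # tiles re-read once per block row/column) plus n^2 words to write C.
    traffic = 2 * n**3 / b + n**2
    transfer = traffic / mem_bw_words_per_cycle
    # Assume perfect overlap of compute and transfer: the slower dominates.
    return max(compute, transfer)

# Example: 256 cores, 2 flops/cycle each, 32 K words of local store,
# 4 words/cycle of external bandwidth.
print(estimated_cycles(n=4096, cores=256, flops_per_cycle=2,
                       local_mem_words=32768, mem_bw_words_per_cycle=4))
```

Even this toy model shows the qualitative trade-off the paper exploits: for large n the transfer term dominates unless the local memory (and hence the block size b) grows, which is why memory bandwidth limits the achievable area efficiency.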

Relevance:

20.00%

Publisher:

Abstract:

An adaptive antenna array combines the signals of its elements, under certain constraints, to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction-of-arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements so as to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is carried out for a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimates and on the beamforming algorithms. The algorithms are also compared in terms of runtime and accuracy, characteristics which depend on the SNR of the incoming signal.
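As a rough illustration of the two algorithm categories discussed (not the paper's planar-array setup), the following Python sketch runs a classical Bartlett DOA scan on simulated uniform-linear-array snapshots and then forms the matching delay-and-sum beamforming weights; the array geometry, SNR and all values are assumptions.

```python
# Hedged sketch: Bartlett DOA scan + delay-and-sum beamforming for a
# uniform linear array. The paper uses planar arrays and other algorithms.

import numpy as np

def steering_vector(n_elems, d_over_lambda, theta_rad):
    # Phase progression across the array for a plane wave from angle theta.
    k = 2 * np.pi * d_over_lambda * np.arange(n_elems)
    return np.exp(1j * k * np.sin(theta_rad))

rng = np.random.default_rng(0)
n_elems, snapshots, true_theta = 8, 200, np.deg2rad(25.0)

# Simulated snapshots: one narrowband source plus white noise.
a = steering_vector(n_elems, 0.5, true_theta)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = rng.standard_normal((n_elems, snapshots)) \
        + 1j * rng.standard_normal((n_elems, snapshots))
x = np.outer(a, s) + 0.3 * noise
R = x @ x.conj().T / snapshots              # sample covariance matrix

# Bartlett DOA scan: beamformer output power over candidate angles.
grid = np.deg2rad(np.linspace(-90, 90, 361))
p = [np.real(steering_vector(n_elems, 0.5, th).conj() @ R
             @ steering_vector(n_elems, 0.5, th)) for th in grid]
est = grid[int(np.argmax(p))]
print(f"estimated DOA: {np.degrees(est):.1f} deg")

# Beamforming: point the array at the estimated direction.
w = steering_vector(n_elems, 0.5, est) / n_elems
print(f"gain toward source: {abs(w.conj() @ a):.2f}  (1.0 = distortionless)")
```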

Relevance:

20.00%

Publisher:

Abstract:

MSc dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

The container loading problem (CLP) is a combinatorial optimization problem concerning the spatial arrangement of cargo inside containers so as to maximize the usage of space. Algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is stability. This paper addresses static stability, as opposed to dynamic stability, that is, the stability of the cargo during container loading. Two algorithms are proposed. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, taking into account static stability and loading operation efficiency constraints.
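For illustration only: the Python sketch below implements a conservative base-support stability check, a common stand-in evaluation function in CLP heuristics. It is more restrictive than the static mechanical equilibrium conditions the paper actually uses, and all names are hypothetical.

```python
# Hedged sketch: a conservative base-support stability check sometimes used
# in container loading heuristics. The paper's algorithm uses full static
# mechanical equilibrium, which is less restrictive than this criterion.

def supported_fraction(box, placed, eps=1e-9):
    """Fraction of a box's base area resting on the floor or on other boxes.

    Each box is (x, y, z, w, d, h): origin corner plus width/depth/height.
    """
    x, y, z, w, d, _ = box
    if z < eps:                      # resting directly on the container floor
        return 1.0
    supported = 0.0
    for px, py, pz, pw, pd, ph in placed:
        if abs(pz + ph - z) > eps:   # top face must touch the box's base
            continue
        # Overlap of the two rectangles in the horizontal plane.
        ox = max(0.0, min(x + w, px + pw) - max(x, px))
        oy = max(0.0, min(y + d, py + pd) - max(y, py))
        supported += ox * oy
    return supported / (w * d)

def is_statically_stable(box, placed, min_support=1.0):
    # min_support = 1.0 demands full base support; lower values relax it.
    return supported_fraction(box, placed) >= min_support

# A box with only half its base resting on another box fails the full check.
pallet = (0, 0, 0, 1.0, 1.0, 0.5)
print(supported_fraction((0.5, 0, 0.5, 1.0, 1.0, 0.5), [pallet]))    # 0.5
print(is_statically_stable((0.5, 0, 0.5, 1.0, 1.0, 0.5), [pallet]))  # False
```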

Relevance:

20.00%

Publisher:

Abstract:

Dissertation for obtaining the Master's degree in Electrical Engineering, Energy branch

Relevance:

20.00%

Publisher:

Abstract:

In this paper we address the problem of computing multiple roots of a system of nonlinear equations through the global optimization of an appropriate merit function. The search procedure for a global minimizer of the merit function is carried out by a metaheuristic, known as harmony search, which does not require any derivative information. The multiple roots of the system are sequentially determined along several iterations of a single run, where the merit function is accordingly modified by penalty terms that aim to create repulsion areas around previously computed minimizers. A repulsion algorithm based on a multiplicative kind penalty function is proposed. Preliminary numerical experiments with a benchmark set of problems show the effectiveness of the proposed method.
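A minimal Python sketch of the general idea, under stated assumptions: plain harmony search minimizing a least-squares merit function, with a multiplicative repulsion factor (here a squared hyperbolic cotangent, which may differ from the paper's exact penalty) applied around roots already found. The toy system, parameter values and penalty form are all illustrative.

```python
# Hedged sketch: harmony search minimizing a merit function for a nonlinear
# system, with a multiplicative repulsion penalty around found roots.

import numpy as np

rng = np.random.default_rng(1)

def system(x):                       # toy system: roots at (+-1, +-1)
    return np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])

def merit(x, roots, rho=5.0):
    base = float(np.sum(system(x) ** 2))     # plain least-squares merit
    for r in roots:
        d = np.linalg.norm(x - r)
        if d < 1e-12:
            return 1e12                      # sitting exactly on an old root
        # Squared coth grows like 1/d^2 near an old root, fast enough to
        # cancel the quadratic decay of the merit and create repulsion.
        base *= (1.0 / np.tanh(rho * d)) ** 2
    return base

def harmony_search(roots, lo=-2.0, hi=2.0, dim=2, hms=20, hmcr=0.9,
                   par=0.3, bw=0.05, iters=4000):
    hm = rng.uniform(lo, hi, (hms, dim))             # harmony memory
    f = np.array([merit(x, roots) for x in hm])
    for _ in range(iters):
        new = rng.uniform(lo, hi, dim)               # random components
        use_mem = rng.random(dim) < hmcr             # memory consideration
        picks = hm[rng.integers(hms, size=dim), np.arange(dim)]
        new[use_mem] = picks[use_mem]
        adjust = use_mem & (rng.random(dim) < par)   # pitch adjustment
        new[adjust] += bw * rng.uniform(-1, 1, adjust.sum())
        fn = merit(new, roots)
        worst = np.argmax(f)
        if fn < f[worst]:                            # replace worst harmony
            hm[worst], f[worst] = new, fn
    return hm[np.argmin(f)]

roots = []
for _ in range(4):                   # find the four roots sequentially
    r = harmony_search(roots)
    roots.append(r)
    print(np.round(r, 2))            # expect the four points (+-1, +-1)
```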

Relevance:

20.00%

Publisher:

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet transmitted over the NoC infrastructure. To this end, we first identify and explore some limitations in the existing recursive-calculus-based approaches for computing the Worst-Case Traversal Time (WCTT) of a packet. We then extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP), which provides tighter yet still safe estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, “Branch, Prune and Collapse” (BPC), with a configurable parameter that offers a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies that show the impact of task parameters on the WCTT estimates.
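To give a flavour of recursive-calculus-style bounds (greatly simplified with respect to the paper's wormhole NoC model and its BP/BPC algorithms), the Python sketch below iterates a classical response-time recurrence that counts only direct interference from higher-priority flows sharing links; all names and figures are invented for illustration.

```python
# Very simplified sketch of the flavour of recursive (fixed-point)
# worst-case latency analysis on which WCTT approaches build. Real NoC
# analyses (and the paper's BP/BPC algorithms) model wormhole switching,
# buffering and indirect interference; here only direct interference on
# shared links is counted, using a classic response-time recurrence.

import math

def wctt_upper_bound(c, interferers, max_iter=100):
    """c: no-load traversal time of the packet (cycles).
    interferers: list of (c_j, t_j) = basic latency and period of each
    higher-priority flow sharing at least one link with the packet."""
    r = c
    for _ in range(max_iter):
        # Each interfering flow j can delay the packet ceil(r / t_j) times
        # within a window of length r.
        nxt = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in interferers)
        if nxt == r:        # fixed point reached: r is a safe upper bound
            return r
        r = nxt
    return float("inf")     # no convergence: no bound under this model

# Packet with a 50-cycle no-load latency and two interfering flows.
print(wctt_upper_bound(50, [(20, 200), (10, 150)]))   # prints 80
```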

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new parallel implementation of a previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.
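A serial toy example of the measurement-and-recovery idea that underlies unmixing-based compressive sensing in the spirit of HYCA (the actual method also exploits spatial correlation between neighbouring pixels, and the paper's contribution is the low-level GPU implementation). Everything below, including the dimensions, is an assumption for illustration.

```python
# Minimal numpy sketch: why few measurements per pixel can suffice when the
# spectra live in the low-dimensional subspace spanned by the endmembers.

import numpy as np

rng = np.random.default_rng(2)
bands, endmembers, pixels, measurements = 200, 5, 1000, 20

E = rng.random((bands, endmembers))                # known spectral signatures
A = rng.dirichlet(np.ones(endmembers), pixels).T   # abundances (sum to 1)
X = E @ A                                          # clean hyperspectral data

H = rng.standard_normal((measurements, bands))     # random measurement matrix
Y = H @ X                                          # 20 numbers/pixel, not 200

# Recovery: since x = E a, solve the small least-squares system (H E) a = y.
A_hat, *_ = np.linalg.lstsq(H @ E, Y, rcond=None)
X_hat = E @ A_hat

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```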

Relevance:

20.00%

Publisher:

Abstract:

A new algorithm is proposed for estimating the velocity vector of moving ships from Single Look Complex (SLC) SAR data acquired in stripmap mode. The algorithm exploits both the amplitude and phase information of the Doppler decompressed data spectrum, with the aim of estimating both the azimuth antenna pattern and the backscattering coefficient as a function of the look angle. The antenna pattern estimation provides information about the target velocity, while the backscattering coefficient can be used for vessel classification. The range velocity is retrieved in the slow-time frequency domain by estimating the antenna pattern effects induced by the target motion, while the azimuth velocity is calculated from the estimated range velocity and the ship orientation. Finally, the algorithm is tested on simulated SAR SLC data.
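Two standard relations in this kind of analysis can be illustrated numerically (the paper's estimator is more elaborate, operating on the Doppler decompressed spectrum): the range velocity follows from the Doppler centroid shift, and the azimuth velocity from the range velocity and the ship orientation. All values in the Python sketch below are invented.

```python
# Hedged numerical sketch of two textbook relations; the paper's algorithm
# estimates these quantities from the antenna pattern, not as done here.

import math

wavelength = 0.055          # C-band wavelength in metres (assumed)
f_dc_shift = 120.0          # measured Doppler centroid shift in Hz (invented)

# Range velocity from the Doppler centroid shift: f_dc = 2 * v_r / lambda.
v_range = f_dc_shift * wavelength / 2.0

# Ship heading measured from the azimuth (along-track) direction, so that
# v_r = v * sin(heading) and v_az = v * cos(heading).
heading = math.radians(35.0)
v_azimuth = v_range / math.tan(heading)

speed = math.hypot(v_range, v_azimuth)
print(f"v_range = {v_range:.2f} m/s, v_azimuth = {v_azimuth:.2f} m/s, "
      f"speed = {speed:.2f} m/s")
```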

Relevance:

20.00%

Publisher:

Abstract:

This paper formulates a novel expression for entropy inspired by the properties of fractional calculus. The characteristics of the generalized fractional entropy are tested both on standard probability distributions and on real-world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful for describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several sets of data, confirming the quality of the generalization.
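For concreteness, the Python sketch below evaluates one fractional-order generalization of Shannon entropy proposed in the fractional-calculus literature; the exact expression used in the paper may differ. At α = 0 it reduces to the classical Shannon entropy, and varying α changes the sensitivity to the shape of the distribution.

```python
# Hedged sketch: a fractional-order generalization of Shannon entropy from
# the fractional-calculus literature (valid for 0 <= alpha < 1). The paper's
# exact expression may differ.

import numpy as np
from scipy.special import gamma, psi

def fractional_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # ignore zero-probability outcomes
    term = -(p ** -alpha) / gamma(alpha + 1) * (
        np.log(p) + psi(1.0) - psi(1.0 - alpha))
    return float(np.sum(p * term))

uniform = np.ones(8) / 8
peaked = np.array([0.9] + [0.1 / 7] * 7)
for a in (0.0, 0.3, 0.6):
    print(a, round(fractional_entropy(uniform, a), 4),
          round(fractional_entropy(peaked, a), 4))
# The alpha = 0 row matches Shannon entropy: ln 8 ~ 2.0794 for the uniform
# distribution, and a smaller value for the peaked one.
```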