136 results for Business enterprises - Computer networks


Relevance: 100.00%

Abstract:

The provision of security in mobile ad hoc networks is of paramount importance due to their wireless nature. However, when conducting research into security protocols for ad hoc networks it is necessary to consider these in the context of the overall system. For example, the communication delay associated with the underlying MAC layer needs to be taken into account. Nodes in mobile ad hoc networks must strictly obey the rules of the underlying MAC when transmitting security-related messages while still maintaining a certain quality of service. In this paper a novel authentication protocol, RASCAAL, is described and its performance is analysed by investigating both the communication-related effects of the underlying IEEE 802.11 MAC and the computation-related effects of the cryptographic algorithms employed. To the best of the authors' knowledge, RASCAAL is the first authentication protocol to propose the concept of dynamically formed, short-lived random clusters with no prior knowledge of the cluster head. The performance analysis demonstrates that the communication losses outweigh the computation losses with respect to energy and delay. MAC-related communication effects account for 99% of the total delay and total energy consumption incurred by the RASCAAL protocol. The results also show that a saving in communication energy of up to 12.5% can be achieved by changing the status of the wireless nodes during the course of operation. Copyright (C) 2009 G. A. Safdar and M. P. O'Neill (née McLoone).

Relevance: 100.00%

Abstract:

In this paper, a novel pattern recognition scheme, global harmonic subspace analysis (GHSA), is developed for face recognition. In the proposed scheme, global harmonic features are extracted at the semantic scale to capture the 2-D semantic spatial structures of a face image. Laplacian Eigenmap is applied to discriminate faces in their global harmonic subspace. Experimental results on the Yale and PIE face databases show that the proposed GHSA scheme improves face recognition accuracy over conventional subspace approaches, and a further investigation shows that it is also notably robust to noise.
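A minimal sketch of the overall pipeline shape, with stand-ins for the parts the abstract does not specify: low-frequency 2-D DCT coefficients play the role of the global harmonic features, scikit-learn's digits set replaces the Yale/PIE databases, and the Laplacian Eigenmap step uses SpectralEmbedding, which is transductive (all samples are embedded together before the nearest-neighbour classification).

```python
import numpy as np
from scipy.fft import dctn
from sklearn.datasets import load_digits
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in "harmonic" features: low-frequency 2-D DCT coefficients per image
# (the paper's global harmonic features at the semantic scale are not reproduced here).
def harmonic_features(img, k=6):
    coeffs = dctn(img, norm="ortho")
    return coeffs[:k, :k].ravel()

digits = load_digits()                       # toy stand-in for Yale/PIE
X = np.array([harmonic_features(im) for im in digits.images])

# Laplacian Eigenmap subspace (transductive: all samples embedded together),
# then a nearest-neighbour rule discriminates faces inside that subspace.
Z = SpectralEmbedding(n_components=10, n_neighbors=15, random_state=0).fit_transform(X)
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, digits.target, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(Z_tr, y_tr)
print("recognition accuracy on the toy data:", round(clf.score(Z_te, y_te), 3))
```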

Relevance: 100.00%

Abstract:

In this paper, a new blind and readable H.264 compressed-domain watermarking scheme is proposed in which embedding and extraction are performed using the syntactic elements of the compressed bit stream. As a result, it is not necessary to fully decode a compressed video stream in either the embedding or the extraction process. The method also presents an inexpensive spatiotemporal analysis that selects the appropriate sub-macroblocks for embedding, increasing watermark robustness while reducing its impact on visual quality. Meanwhile, the proposed method limits any bit-rate increase to an acceptable level by selecting appropriate quantized residuals for watermark insertion. Regarding watermarking demands such as imperceptibility, bit-rate control, and an appropriate level of security, a priority matrix is defined that can be adjusted according to the application requirements. The resulting flexibility expands the usability of the proposed method.
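A hedged sketch of the embedding idea only: coefficients of a toy 4x4 quantized-residual block are selected by a priority matrix and their parity is modulated to carry watermark bits, so extraction needs no full decode. Real H.264 syntax parsing, drift compensation and the paper's spatiotemporal sub-macroblock selection are omitted, and the priority values below are illustrative, not the paper's.

```python
import numpy as np

# Toy 4x4 block of quantized residuals (what entropy decoding would yield).
residuals = np.array([[ 6, -3,  0,  0],
                      [ 2,  1,  0,  0],
                      [ 0,  0,  0,  0],
                      [ 0,  0,  0,  0]])

# Priority matrix: higher value = better imperceptibility/bit-rate trade-off
# for that coefficient position (illustrative values, not the paper's).
priority = np.array([[0, 3, 1, 1],
                     [3, 2, 1, 0],
                     [1, 1, 0, 0],
                     [1, 0, 0, 0]])

def embed(block, bits, prio, threshold=2):
    """Embed bits into nonzero coefficients whose priority >= threshold."""
    out = block.copy()
    positions = [(r, c) for r, c in np.argwhere(prio >= threshold)
                 if block[r, c] != 0]
    for (r, c), bit in zip(positions, bits):
        q = out[r, c]
        if (abs(q) & 1) != bit:          # force the parity of |q| to the bit
            q += 1 if q > 0 else -1      # move away from zero, keep the sign
        out[r, c] = q
    return out

def extract(block, prio, threshold=2):
    """Blind extraction: reread the parity of the selected coefficients."""
    return [int(abs(block[r, c])) & 1
            for r, c in np.argwhere(prio >= threshold) if block[r, c] != 0]

marked = embed(residuals, [0, 1, 0], priority)
print("extracted watermark bits:", extract(marked, priority))   # -> [0, 1, 0]
```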

Relevance: 100.00%

Abstract:

This paper is concerned with the universal (blind) image steganalysis problem and introduces a novel method aimed especially at detecting spatial-domain steganographic methods. The proposed steganalyzer models linear dependencies of image rows/columns in local neighborhoods using the singular value decomposition transform and achieves content independence through a Wiener filtering process. Experimental results show that the novel method outperforms its counterparts on spatial-domain steganography. Experiments also demonstrate the method's reasonable ability to detect discrete cosine transform-based steganography as well as the perturbation quantization method.
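A minimal sketch of the feature idea under stated assumptions: a Wiener filter suppresses image content, and the linear dependencies of rows/columns in local neighbourhoods are summarised by the singular values of residual blocks. The block size, normalisation, synthetic 'images' and the omitted classifier are placeholders rather than the paper's exact design.

```python
import numpy as np
from scipy.signal import wiener

def svd_features(image, block=8):
    """Average normalised singular-value profile of residual blocks."""
    residual = image - wiener(image, mysize=3)      # content-suppressed noise
    h, w = residual.shape
    profiles = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            s = np.linalg.svd(residual[r:r + block, c:c + block],
                              compute_uv=False)
            profiles.append(s / (s.sum() + 1e-12))  # scale-invariant profile
    return np.mean(profiles, axis=0)

rng = np.random.default_rng(0)
cover = rng.normal(128.0, 20.0, size=(128, 128))    # synthetic stand-in image
# Crude stand-in for spatial-domain embedding: +/-1 changes on half the pixels.
stego = cover + rng.choice([-1.0, 0.0, 1.0], size=cover.shape, p=[0.25, 0.5, 0.25])

print("cover profile:", np.round(svd_features(cover), 4))
print("stego profile:", np.round(svd_features(stego), 4))
# A classifier (e.g. an SVM) would be trained on many such feature vectors.
```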

Relevance: 100.00%

Abstract:

The use of accelerators, with compute architectures distinct from the CPU, has become a new research frontier in high-performance computing over the past five years. This paper is a case study on how the instruction-level parallelism offered by three accelerator technologies, FPGA, GPU and ClearSpeed, can be exploited in atomic physics. The algorithm studied is the evaluation of two-electron integrals using direct numerical quadrature, a task that arises in the study of intermediate-energy electron scattering by hydrogen atoms. The results of our ‘productivity’ study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
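As a hedged illustration of what direct numerical quadrature of a two-electron integral involves, the sketch below evaluates the classic 1s-1s electron repulsion integral for hydrogen (analytic value 5/8 Hartree) in plain NumPy. The reduction to a 2-D radial integral via the 1/max(r1, r2) angular average is standard, but this toy integrand is not the paper's intermediate-energy scattering case, and no FPGA/GPU/ClearSpeed code is shown.

```python
import numpy as np

def two_electron_1s(n=200, rmax=20.0):
    """1s-1s repulsion integral by direct 2-D Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    r = 0.5 * rmax * (x + 1.0)                  # map onto [0, rmax]
    w = 0.5 * rmax * w

    f = r**2 * np.exp(-2.0 * r)                 # radial factor of the 1s density
    R1, R2 = np.meshgrid(r, r, indexing="ij")
    kernel = 1.0 / np.maximum(R1, R2)           # angular-averaged 1/|r1 - r2|

    # This double sum is the embarrassingly parallel kernel that maps well
    # onto accelerators; here it is a single vectorised contraction.
    return 16.0 * np.einsum("i,j,ij->", w * f, w * f, kernel)

print(two_electron_1s())    # ~0.625 (exact value: 5/8 Hartree)
```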

Relevance: 100.00%

Abstract:

This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to a limited number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm based on electromagnetic theory, simulating the attraction and repulsion of sample points as they move toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on a given upper boundary or ‘level’. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the ‘level’ within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method is able to produce improved solutions over those currently published. We believe this is due to the combination of the two approaches and the ability of the resulting algorithm to continue converging solutions throughout the search process.
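A compact sketch of the hybrid idea on a toy continuous benchmark rather than on timetabling: sample points attract and repel one another as in the electromagnetism-like mechanism, moves are accepted with a great-deluge rule, and the magnitude of the force drives how quickly the ‘level’ is lowered. The constants, the benchmark function and the exact force-to-level coupling are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
objective = lambda x: float(np.sum(x**2))    # toy minimisation problem
DIM, POP, ITERS = 5, 20, 300
pop = rng.uniform(-5, 5, size=(POP, DIM))
fitness = np.array([objective(p) for p in pop])
level = fitness.max()                        # initial great-deluge 'level'

for _ in range(ITERS):
    best, worst = fitness.min(), fitness.max()
    # EM-like charges: better sample points carry larger charge.
    charge = np.exp(-DIM * (fitness - best) / (worst - best + 1e-12))
    for i in range(POP):
        force = np.zeros(DIM)
        for j in range(POP):
            if i == j:
                continue
            diff = pop[j] - pop[i]
            pull = charge[i] * charge[j] / (diff @ diff + 1e-12)
            force += pull * diff if fitness[j] < fitness[i] else -pull * diff
        step = rng.uniform(0, 1) * force / (np.linalg.norm(force) + 1e-12)
        candidate = np.clip(pop[i] + step, -5, 5)
        f_new = objective(candidate)
        # Great-deluge acceptance: keep the move if it stays below the level
        # (or simply improves the current point).
        if f_new <= level or f_new <= fitness[i]:
            pop[i], fitness[i] = candidate, f_new
        # The dynamic force sets how fast the level is lowered (illustrative rate).
        level = max(level - 0.001 * np.linalg.norm(force) / POP, fitness.min())

print("best objective value found:", round(fitness.min(), 4))
```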

Relevance: 100.00%

Abstract:

This article reviews an important class of MIMO wireless communications, known collectively as turbo-MIMO systems. A distinctive property of turbo-MIMO wireless communication systems is that they can attain a channel capacity close to the Shannon limit and do so in a computationally manageable manner. The article focuses attention on a subclass of turbo-MIMO systems that use space-time coding based on bit-interleaved coded modulation. Different computationally manageable decoding (detection) strategies are briefly discussed. The article also includes computer experiments that are intended to improve the understanding of specific issues involved in the design of turbo-MIMO systems.

Relevance: 100.00%

Abstract:

In this paper, we address the problem of designing multirate codes for a multiple-input multiple-output (MIMO) system in which each antenna is encoded independently, by restricting the receiver to successive decoding with interference cancellation. Furthermore, it is assumed that the receiver knows the instantaneous fading channel states but the transmitter does not have access to them. It is well known that, in theory, minimum mean-square error (MMSE) based successive decoding of multiple-access (in multi-user communications) and MIMO channels achieves the total channel capacity. However, for this scheme to perform optimally, the optimal rates of each antenna (per-antenna rates) must be known at the transmitter. We show that, in time-varying Rayleigh MIMO channel environments, the optimal per-antenna rates at the transmitter can be estimated using only the statistical characteristics of the MIMO channel. Based on these results, multirate codes are designed using punctured turbo codes for a horizontally coded MIMO system. Simulation results show performance within about 1 to 2 dB of the MIMO channel capacity.
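A hedged sketch of the rate-estimation idea: for MMSE successive decoding with equal per-antenna power, the per-stream rates of a given channel realisation follow the chain rule of mutual information, and averaging them over many Rayleigh channel draws yields statistics-only per-antenna rate targets for the transmitter. The antenna counts, SNR, fixed decoding order and the absence of any turbo-code design are assumptions of this sketch.

```python
import numpy as np

def per_antenna_rates(H, snr):
    """Per-stream rates (bit/s/Hz) of MMSE successive decoding, stream 0 decoded first."""
    nr, nt = H.shape

    def cap(cols):                          # log2 det(I + snr/nt * H_S H_S^H)
        Hs = H[:, list(cols)]
        M = np.eye(nr) + (snr / nt) * Hs @ Hs.conj().T
        return np.linalg.slogdet(M)[1] / np.log(2)

    # Chain rule: rate of stream k = capacity with streams k..nt-1 remaining
    # minus capacity with streams k+1..nt-1 remaining.
    return np.array([cap(range(k, nt)) - cap(range(k + 1, nt)) for k in range(nt)])

rng = np.random.default_rng(0)
NT, NR, SNR, TRIALS = 4, 4, 10.0, 2000      # illustrative configuration
avg = np.zeros(NT)
for _ in range(TRIALS):
    H = (rng.standard_normal((NR, NT)) + 1j * rng.standard_normal((NR, NT))) / np.sqrt(2)
    avg += per_antenna_rates(H, SNR)
avg /= TRIALS

print("per-antenna rate targets from channel statistics:", np.round(avg, 2))
print("their sum (ergodic capacity):", round(avg.sum(), 2))
```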

Relevance: 100.00%

Abstract:

Recent theoretical investigations of spatially correlated multitransmit and multireceive (MTMR) links show that not only independent and identically distributed links but also spatially correlated links can offer linear capacity growth with an increasing number of transmit and receive antennas. In this paper, we explore the suitability of the turbo-BLAST architecture in correlated Rayleigh-fading MTMR environments. In particular, for an MTMR system with a large number of receive antennas, near-optimal performance can be achieved by the turbo-BLAST architecture in spatially and temporally correlated Rayleigh-fading environments. The performance of turbo-BLAST, in terms of both bit-error rate and spectral efficiency, is analyzed empirically in indoor and correlated outdoor environments.
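A short sketch of the channel-model claim rather than of turbo-BLAST itself: spatially correlated Rayleigh channels are drawn with a Kronecker (separable) correlation model, and the ergodic capacity is seen to keep growing roughly linearly with the antenna count. The exponential correlation profile, SNR and antenna configurations are illustrative assumptions.

```python
import numpy as np

def exp_corr(n, rho):
    """Exponential spatial correlation matrix, R[i, j] = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def ergodic_capacity(n, snr=10.0, rho=0.7, trials=500, seed=0):
    rng = np.random.default_rng(seed)
    Rt_half = np.linalg.cholesky(exp_corr(n, rho))   # transmit correlation root
    Rr_half = np.linalg.cholesky(exp_corr(n, rho))   # receive correlation root
    cap = 0.0
    for _ in range(trials):
        W = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        H = Rr_half @ W @ Rt_half.conj().T           # Kronecker-correlated channel
        M = np.eye(n) + (snr / n) * H @ H.conj().T
        cap += np.linalg.slogdet(M)[1] / np.log(2)
    return cap / trials

for n in (1, 2, 4, 8):
    print(f"{n}x{n} antennas: ~{ergodic_capacity(n):.1f} bit/s/Hz")
```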

Relevance: 100.00%

Abstract:

A new technique based on adaptive code-to-user allocation for interference management on the downlink of BPSK-based TDD DS-CDMA systems is presented. The principle of the proposed technique is to exploit the dependency of multiple-access interference on the instantaneous symbol values of the active users. The objective is to adaptively allocate the available spreading sequences to users on a symbol-by-symbol basis so as to optimize the decision variables at the downlink receivers. The presented simulations show an overall system BER performance improvement of more than an order of magnitude with the proposed technique, while the adaptation overhead is kept below 10% of the available bandwidth.
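A toy sketch of the allocation principle: for one symbol interval, knowing the users' current BPSK symbols, the base station picks the permutation of spreading codes to users that best conditions the matched-filter decision variables (here by maximising the worst user's decision variable). The code length, user count and brute-force search over permutations are illustrative; signalling the chosen permutation, which the abstract accounts for as adaptation overhead, is not modelled.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N_USERS, SPREAD = 4, 8
codes = rng.choice([-1.0, 1.0], size=(N_USERS, SPREAD))   # non-orthogonal codes
symbols = rng.choice([-1.0, 1.0], size=N_USERS)           # current BPSK symbols

def decision_variables(assignment):
    """Sign-corrected matched-filter outputs when user k uses code assignment[k]."""
    tx = symbols @ codes[list(assignment)]                 # synchronous downlink sum
    return np.array([symbols[k] * tx @ codes[assignment[k]] / SPREAD
                     for k in range(N_USERS)])

fixed = tuple(range(N_USERS))                              # static allocation
best = max(itertools.permutations(range(N_USERS)),         # symbol-by-symbol search
           key=lambda p: decision_variables(p).min())

print("static allocation, worst decision variable:  ",
      round(decision_variables(fixed).min(), 2))
print("adaptive allocation, worst decision variable:",
      round(decision_variables(best).min(), 2))
```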

Relevance: 100.00%

Abstract:

Recent years have witnessed rapidly increasing interest in the topic of incremental learning. Unlike conventional machine learning settings, the data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to abandon the traditional assumption that representative training data are available during a training period for developing decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of raw stream data into information and knowledge representations, and to accumulate experience over time to support future decision-making. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system-level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.
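A minimal skeleton of the incremental-learning setting the paper targets, not of ADAIN itself: data arrive in chunks over time, each chunk first tests the current model and then updates it (test-then-train), so no representative training set is needed up front. The synthetic drifting stream and the SGD base learner are stand-ins.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
CLASSES = np.array([0, 1])

def next_chunk(t, n=200, dim=10):
    """Synthetic non-stationary stream: class means drift slowly with time t."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(0.0, 1.0, size=(n, dim)) + np.outer(2 * y - 1, np.full(dim, 1.0 + 0.05 * t))
    return X, y

model = SGDClassifier(random_state=0)        # stand-in incremental base learner
accuracy = []
for t in range(50):                          # 50 chunks of the continuous stream
    X, y = next_chunk(t)
    if t > 0:
        accuracy.append(model.score(X, y))   # test on the new chunk first...
    model.partial_fit(X, y, classes=CLASSES) # ...then learn from it incrementally

print("prequential accuracy, first 5 chunks:", np.round(accuracy[:5], 2))
print("prequential accuracy, last 5 chunks: ", np.round(accuracy[-5:], 2))
```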

Relevance: 100.00%

Abstract:

Multicore computational accelerators such as GPUs are now commodity components for high-performance computing at scale. While such accelerators have been studied in some detail as stand-alone computational engines, their integration in large-scale distributed systems raises new challenges and trade-offs. In this paper, we present an exploration of resource management alternatives for building asymmetric accelerator-based distributed systems. We present these alternatives in the context of a capabilities-aware framework for data-intensive computing, which uses an implementation of the MapReduce programming model for accelerator-based clusters that is enhanced relative to the state of the art. The framework can transparently utilize heterogeneous accelerators to deliver high performance with low programming effort. Our work is the first to compare heterogeneous types of accelerators, GPUs and Cell processors, in the same environment, and the first to explore the trade-offs between compute-efficient and control-efficient accelerators on data-intensive systems. Our investigation shows that our framework scales well with the number of compute nodes. Furthermore, it runs simultaneously on two different types of accelerators, successfully adapts to the available resource capabilities, and performs 26.9% better on average than a static execution approach.
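A toy illustration of the capability-aware scheduling idea, not the paper's framework: map tasks are dispatched greedily to whichever heterogeneous worker would finish them earliest given its relative throughput, and the resulting makespan is compared with a static equal split. The worker names, speeds, task sizes and the simulation itself are illustrative assumptions.

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)
workers = {"gpu0": 8.0, "gpu1": 8.0, "cell0": 3.0, "cpu0": 1.0}   # relative throughput
tasks = rng.uniform(1.0, 5.0, size=200)                           # map-task work units

def capability_aware(tasks, workers):
    """Greedy earliest-finish-time dispatch across heterogeneous workers."""
    heap = [(0.0, name) for name in workers]          # (busy-until, worker)
    heapq.heapify(heap)
    for work in tasks:
        busy_until, name = heapq.heappop(heap)
        heapq.heappush(heap, (busy_until + work / workers[name], name))
    return max(t for t, _ in heap)                    # makespan

def static_split(tasks, workers):
    """Equal share of tasks per worker, ignoring capabilities."""
    chunks = np.array_split(tasks, len(workers))
    return max(chunk.sum() / speed for chunk, speed in zip(chunks, workers.values()))

print("static equal split makespan:", round(static_split(tasks, workers), 1))
print("capability-aware makespan:  ", round(capability_aware(tasks, workers), 1))
```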