79 results for Iterative Closest Point (ICP) Algorithm
Abstract:
In previous work we proposed a multi-objective traffic engineering scheme (the MHDB-S model) using different distribution trees to multicast several flows. In this paper, we propose a heuristic algorithm to create multiple point-to-multipoint (p2mp) LSPs based on the optimum sub-flow values obtained with our MHDB-S model. Moreover, a general problem for supporting multicasting in MPLS networks is the lack of labels. To reduce the number of labels used, a label space reduction algorithm is also considered.
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes possible a simple deconvolution process. Moreover, under certain conditions additional improvements may be achieved.
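As a rough illustration of the convolution approach described above, the following sketch aggregates the instantaneous bandwidth distributions of independent classes of on/off sources and applies a simple admission test. The binomial per-class distribution (the two-state special case of the multinomial), the class parameters, and the overflow criterion are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: convolution-based connection acceptance control.
# The aggregate instantaneous bandwidth distribution of independent source
# classes is the convolution of the per-class distributions.
from math import comb
import numpy as np

def class_distribution(n_sources, p_active, peak_rate):
    """Bandwidth distribution of one class of homogeneous on/off sources.

    k active sources occur with probability C(n, k) p^k (1-p)^(n-k) and
    demand k * peak_rate bandwidth units (binomial = two-state multinomial).
    """
    probs = [comb(n_sources, k) * p_active**k * (1 - p_active)**(n_sources - k)
             for k in range(n_sources + 1)]
    dist = np.zeros(n_sources * peak_rate + 1)
    dist[::peak_rate] = probs          # probability mass at k * peak_rate
    return dist

def aggregate(distributions):
    """Convolve per-class distributions into the aggregate demand distribution."""
    agg = np.array([1.0])
    for d in distributions:
        agg = np.convolve(agg, d)
    return agg

def admit(distributions, link_capacity, max_overflow_prob):
    """Accept the traffic mix if P(aggregate demand > capacity) is small enough."""
    agg = aggregate(distributions)
    return agg[link_capacity + 1:].sum() <= max_overflow_prob

# Example: two illustrative traffic classes sharing a 100-unit link.
classes = [class_distribution(40, 0.3, 2), class_distribution(10, 0.5, 5)]
print(admit(classes, link_capacity=100, max_overflow_prob=1e-3))
```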
Abstract:
The application of discriminant function analysis (DFA) is not a new idea in the study of tephrochronology. In this paper, DFA is applied to compositional datasets of two different types of tephras from Mount Ruapehu in New Zealand and Mount Rainier in the USA. The canonical variables from the analysis are further investigated with a statistical methodology for change-point problems in order to gain a better understanding of the change in compositional pattern over time. Finally, a special case of segmented regression has been proposed to model both the time of change and the change in pattern. This model can be used to estimate the age of unknown tephras using Bayesian statistical calibration.
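A two-segment ("broken-stick") regression fitted by scanning candidate breakpoints gives a feel for the segmented change-point model mentioned above; the sketch below is a simple least-squares stand-in and does not reproduce the paper's Bayesian calibration step. Variable names and the toy data are hypothetical.

```python
# Hedged sketch: two-segment regression with an unknown change point.
import numpy as np

def fit_segmented(x, y):
    """Fit y = a + b*x + c*max(x - tau, 0) by scanning candidate breakpoints tau."""
    best = None
    for tau in x[1:-1]:                                   # candidate change points
        hinge = np.maximum(x - tau, 0.0)
        X = np.column_stack([np.ones_like(x), x, hinge])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, tau, coef)
    return best                                           # (SSE, change point, coefficients)

# Toy data with a change in slope at x = 6.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 6.0, 0) + rng.normal(0, 0.2, x.size)
sse, tau_hat, coef = fit_segmented(x, y)
print(f"estimated change point ~ {tau_hat:.2f}")
```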
Abstract:
The system described herein represents the first example of a recommender system in digital ecosystems where agents negotiate services on behalf of small companies. The small companies compete not only on price or quality, but on a wider service-by-service composition achieved by subcontracting with other companies. The final result of these offerings depends on negotiations at the scale of millions of small companies. This scale requires new platforms for supporting digital business ecosystems, as well as related services like open-id, trust management, monitors and recommenders. This is done in the Open Negotiation Environment (ONE), an open-source platform that allows agents, on behalf of small companies, to negotiate and use the ecosystem services, and enables the development of new agent technologies. The methods and tools of cyber engineering are necessary to build Open Negotiation Environments that are stable, a basic condition for predictable and reliable business environments. Aiming to build stable digital business ecosystems by means of improved collective intelligence, we introduce a model of negotiation style dynamics from the point of view of computational ecology. This model inspires an ecosystem monitor as well as a novel negotiation style recommender. The ecosystem monitor provides hints to the negotiation style recommender to achieve greater stability of an open negotiation environment in a digital business ecosystem. The greater stability provides the small companies with higher predictability, and therefore better business results. The negotiation style recommender is implemented with a simulated annealing algorithm at a constant temperature, and its impact is shown by applying it to a real case of an open negotiation environment populated by Italian companies.
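Simulated annealing run at a fixed temperature reduces to a Metropolis-style accept/reject loop. The sketch below shows the general shape of such a recommender; the style labels, the instability score, and all parameters are illustrative placeholders, not the ONE platform's actual model.

```python
# Hedged sketch: constant-temperature simulated annealing (Metropolis loop).
import math
import random

STYLES = ["conceder", "matcher", "competitor", "random"]   # hypothetical style labels

def instability(assignment):
    """Placeholder objective: lower means a 'more stable' mix of negotiation styles."""
    counts = {s: assignment.count(s) for s in STYLES}
    # toy criterion: penalize an ecosystem dominated by competitors
    return counts["competitor"] ** 2 - counts["conceder"]

def recommend(n_agents=50, temperature=1.0, steps=5000, seed=0):
    random.seed(seed)
    state = [random.choice(STYLES) for _ in range(n_agents)]
    energy = instability(state)
    for _ in range(steps):
        candidate = state[:]
        candidate[random.randrange(n_agents)] = random.choice(STYLES)
        delta = instability(candidate) - energy
        # constant temperature: the acceptance probability never cools down
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            state, energy = candidate, energy + delta
    return state, energy

styles, score = recommend()
print(score)
```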
Abstract:
This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol in the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) in the physical layer.
Abstract:
An algorithm that optimizes and creates pairings for airline crews, subsequently implemented in Java.
Abstract:
We are going to implement the "GA-SEFS" approach by Tsymbal and experimentally analyse its performance depending on the classifier algorithms used in the fitness function (NB, MNge, SMO). We are also going to study the effect of adding to the fitness function a measure to control the complexity of the base classifiers.
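The sketch below shows a generic genetic-algorithm feature-selection loop in which the fitness of a feature subset is the cross-validated accuracy of a base classifier (naive Bayes here); it only illustrates the kind of fitness function discussed above and is not Tsymbal's GA-SEFS ensemble procedure. The dataset, operators, and the commented complexity penalty are all assumptions.

```python
# Hedged sketch: GA-based feature selection with a classifier-accuracy fitness.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of the base classifier on the selected features."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()
    # a complexity-control term could be subtracted here, e.g. lam * mask.sum()
    return acc

population = rng.integers(0, 2, size=(20, n_features)).astype(bool)
for _ in range(15):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-10:]]                 # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])                 # one-point crossover
        children.append(child ^ (rng.random(n_features) < 0.05))   # bit-flip mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected features:", np.flatnonzero(best))
```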
Abstract:
In this paper, we propose a methodology to determine the most efficient and least costly way of performing crew pairing optimization. We develop a methodology based on algorithm optimization in the Eclipse open-source IDE using the Java programming language to solve crew scheduling problems.
Abstract:
The longwave emission of planetary atmospheres that contain a condensable gas that absorbs in the infrared (i.e., longwave) and is in equilibrium with its liquid phase at the surface may exhibit an upper bound. Here we analyze the effect of the atmospheric absorption of sunlight on this radiation limit. We assume that the atmospheric absorption of infrared radiation is independent of wavelength except within the spectral width of the atmospheric window, where it is zero. The temperature profile in radiative equilibrium is obtained analytically as a function of the longwave optical thickness. For illustrative purposes, numerical values for the infrared atmospheric absorption (i.e., greenhouse effect) and the liquid-vapor equilibrium curve of the condensable absorbing gas refer to water. Values for the atmospheric absorption of sunlight (i.e., antigreenhouse effect) take a wide range since our aim is to provide a qualitative view of their effects. We find that atmospheres with a transparent region in the infrared spectrum do not present an absolute upper bound on the infrared emission. This result may also be found in atmospheres opaque at all infrared wavelengths if the fraction of absorbed sunlight in the atmosphere increases with the longwave opacity.
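For orientation only, the snippet below evaluates the textbook gray-atmosphere radiative-equilibrium profile, sigma*T^4(tau) = (F/2)(1 + D*tau), with F the absorbed sunlight and D a diffusivity factor; this is a standard two-stream result used here as a hedged stand-in, not the paper's analytical profile (which additionally includes a spectral window and shortwave absorption).

```python
# Hedged illustration: gray-atmosphere radiative-equilibrium temperature profile.
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_profile(tau, absorbed_sunlight=240.0, diffusivity=1.5):
    """Temperature vs. longwave optical depth, sigma*T^4 = (F/2)*(1 + D*tau)."""
    return ((absorbed_sunlight / 2.0) * (1.0 + diffusivity * tau) / SIGMA) ** 0.25

tau = np.linspace(0.0, 4.0, 9)
for t, T in zip(tau, temperature_profile(tau)):
    print(f"tau = {t:3.1f}  ->  T = {T:5.1f} K")
```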
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was specifically developed for drug discovery, is fast, and does not require human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structures of a large number of ligand-receptor complexes and then comparing the positions of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to compute GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
Abstract:
We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.
Abstract:
In this work we describe the use of bilinear statistical models as a means of factoring the shape variability into two components attributed to inter-subject variation and to the intrinsic dynamics of the human heart. We show that, provided we are given a small number of shape instances representing the same heart at different points in the same cardiac cycle, the bilinear model makes it feasible to reconstruct the shape of the heart at discrete points in that cycle. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detection are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm which processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms yields very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
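Since ASLR is described above as a variant of the LLL algorithm, a plain floating-point textbook LLL is sketched below as a reference point for what the reduction computes; it is neither the all-swap variant nor a systolic/hardware formulation, and the example basis is arbitrary.

```python
# Hedged sketch: textbook LLL lattice reduction (floating point, rows = basis vectors).
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Return an LLL-reduced copy of `basis`."""
    B = np.array(basis, dtype=float)
    n = B.shape[0]

    def gram_schmidt(B):
        Bstar = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size-reduce b_k against earlier vectors
            q = int(round(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
        Bstar, mu = gram_schmidt(B)
        # Lovász condition decides between advancing and swapping
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]        # ASLR would swap all eligible pairs per iteration
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

H = np.array([[1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], [3.0, 5.0, 6.0]])
print(lll_reduce(H))
```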
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes over block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in and shown to be able to attain the full transmission diversity. We study the iterative threshold performance of those codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1 α2)^-1, where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.
Abstract:
The present paper is aimed at identifying the effects of the Point System of Selection of immigrants in Quebec. I argue that the distribution of points results in a different composition of immigrant stocks in terms of origin mix, and not in terms of labour skills. To do so, I carry out a longitudinal descriptive analysis of the national composition of immigrants in Quebec and two other significant provinces (Ontario and British Columbia), as well as an analysis of the distribution of points in Quebec and in the rest of Canada.