17 results for Castiglione, Baltasar
at Universidad Politécnica de Madrid
A repository for integration of software artifacts with dependency resolution and federation support
Abstract:
While developing new IT products, reusability of existing components is a key aspect that can considerably improve the success rate. This fact has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity we propose a model-based repository, capable of automatically resolving the required dependencies. The repository needs to be extensible, so that new constraints can be analyzed, and must also provide federation support for integration with other sources of artifacts. The solution we propose achieves this by working with OSGi components and using OSGi itself.
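To illustrate the kind of transitive resolution such a repository must perform, the following minimal Java sketch walks a toy dependency graph and collects every artifact required, directly or indirectly, by a component. The artifact names and the hand-built graph are hypothetical; the actual repository works on OSGi metadata rather than an in-memory map.

import java.util.*;

// Minimal sketch of transitive dependency resolution over a toy artifact graph.
// Artifact names and the graph itself are hypothetical examples.
public class DependencyResolver {

    private final Map<String, List<String>> dependencies = new HashMap<>();

    public void register(String artifact, String... deps) {
        dependencies.put(artifact, Arrays.asList(deps));
    }

    // Depth-first walk that returns every artifact required, directly or transitively.
    public Set<String> resolve(String root) {
        Set<String> resolved = new LinkedHashSet<>();
        Deque<String> pending = new ArrayDeque<>();
        pending.push(root);
        while (!pending.isEmpty()) {
            String current = pending.pop();
            if (!resolved.add(current)) {
                continue; // already visited: avoids cycles and duplicates
            }
            for (String dep : dependencies.getOrDefault(current, List.of())) {
                pending.push(dep);
            }
        }
        resolved.remove(root); // report only the dependencies, not the root itself
        return resolved;
    }

    public static void main(String[] args) {
        DependencyResolver repo = new DependencyResolver();
        repo.register("app.ui", "app.core", "logging.api");
        repo.register("app.core", "logging.api", "persistence.api");
        repo.register("persistence.api", "logging.api");
        System.out.println("app.ui needs: " + repo.resolve("app.ui"));
    }
}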
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction from non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at runtime. The agent infrastructure is then described in detail, explaining its components and the relationships between them. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to the support of distributed configuration scenarios.
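As a minimal illustration of the monitoring idea (not of the article's agent infrastructure), the sketch below polls a single runtime resource through the standard JMX platform MBeans and reports significant changes; the polling interval and change threshold are arbitrary.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Minimal monitoring-agent sketch: polls a runtime resource (heap usage) through
// the standard JMX platform MBeans and reports significant changes. This is an
// illustration of the monitoring idea only, not the agent infrastructure of the article.
public class HeapMonitorAgent {

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long previous = memory.getHeapMemoryUsage().getUsed();

        for (int sample = 0; sample < 10; sample++) {
            Thread.sleep(1000); // polling interval, chosen arbitrarily
            long current = memory.getHeapMemoryUsage().getUsed();
            double change = Math.abs(current - previous) / (double) Math.max(previous, 1);
            if (change > 0.10) { // report changes larger than 10%
                System.out.printf("heap usage changed by %.0f%%: %d -> %d bytes%n",
                        change * 100, previous, current);
            }
            previous = current;
        }
    }
}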
Abstract:
Open source is a software development paradigm that has seen a huge rise in recent years. It reduces IT costs and time to market, while increasing security and reliability. However, the difficulty of integrating developments from different communities and stakeholders prevents this model from reaching its full potential. This is mainly due to the challenge of determining and locating the correct dependencies for a given software artifact. To solve this problem we propose the development of an extensible, model-based software component repository. This repository should be capable of resolving the dependencies between several components and of working with already existing repositories to access the needed artifacts transparently. It will also be easily extensible, enabling the creation of modules that support new kinds of dependencies or other existing repository technologies. The proposed solution will work with OSGi components and use OSGi itself.
Abstract:
Although most of the research on Cognitive Radio is focused on communication bands above the HF upper limit (30 MHz), Cognitive Radio principles can also be applied to HF communications to make more efficient use of the extremely scarce spectrum. In this work we treat legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations, which use the HFDVL (HF Data+Voice Link) architecture, as secondary users. Our goal is to promote an efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of the primary user activity dynamics in the HF band is developed in this work to make short-term predictions of the sojourn time of a primary user in the band and thus avoid collisions. It is based on Hidden Markov Models (HMM), which are a powerful tool for modelling stochastic processes, and is trained with real measurements of the 14 MHz band. The proposed HMM-based model achieves an average prediction error rate of 10.3% with one minute of channel knowledge, and this error can be reduced when the knowledge is extended: with the previous 8 minutes of knowledge, an average prediction error rate of 5.8% is achieved. These results suggest that the resulting activity model for the HF band could actually be used to predict primary user activity and be included in a future HF cognitive-radio-based station.
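The sketch below shows, in simplified form, how the forward recursion of a two-state HMM (idle/busy) can yield a short-term prediction of channel occupancy. The transition and emission matrices are invented placeholders, not the parameters trained on the 14 MHz measurements.

// Minimal sketch of next-slot channel-occupancy prediction with a two-state HMM
// (hidden states: IDLE, BUSY; observations: 0 = channel sensed free, 1 = busy).
// The transition (A) and emission (B) matrices below are invented placeholders,
// not the parameters trained on the 14 MHz measurements reported in the paper.
public class HmmOccupancyPredictor {

    static final double[][] A = {{0.9, 0.1},   // P(next state | current state)
                                 {0.3, 0.7}};
    static final double[][] B = {{0.95, 0.05}, // P(observation | state)
                                 {0.10, 0.90}};
    static final double[] PI = {0.5, 0.5};     // initial state distribution

    // Probability that the next observation is "busy", given the observed history.
    static double predictBusy(int[] observations) {
        double[] alpha = {PI[0] * B[0][observations[0]], PI[1] * B[1][observations[0]]};
        for (int t = 1; t < observations.length; t++) {
            double[] next = new double[2];
            for (int j = 0; j < 2; j++) {
                next[j] = (alpha[0] * A[0][j] + alpha[1] * A[1][j]) * B[j][observations[t]];
            }
            double norm = next[0] + next[1]; // normalise to avoid underflow
            alpha[0] = next[0] / norm;
            alpha[1] = next[1] / norm;
        }
        double[] predicted = new double[2];  // one-step state prediction
        for (int j = 0; j < 2; j++) {
            predicted[j] = alpha[0] * A[0][j] + alpha[1] * A[1][j];
        }
        return predicted[0] * B[0][1] + predicted[1] * B[1][1];
    }

    public static void main(String[] args) {
        int[] lastMinute = {0, 0, 1, 1, 1, 0, 1, 1}; // hypothetical sensing results
        System.out.printf("P(channel busy in next slot) = %.2f%n", predictBusy(lastMinute));
    }
}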
Abstract:
It is clear that in the near future much broader transmissions in the HF band will replace part of the current narrow-band links. Our personal view is that a true wideband signal is infeasible in this environment, because the usage is typically very intensive and may suffer interference from all over the world. Therefore, we envision that dynamic multiband transmissions may provide more satisfactory performance. From the very beginning, we observed that real links with our broadband transceiver suffered from interferences outside our multiband transmission but within the acquisition bandwidth, which degraded the expected performance. We therefore concluded that a mitigation structure is required that operates on severely saturated signals, as the interference may be of much higher power. In this paper we present a procedure based on Higher Order Crossings (HOC) statistics, which are able to extract most of the signal structure even when the amplitude is severely distorted, and which allow the estimation of the interference carrier frequency in order to command a variable notch filter that mitigates its effect in the analog domain.
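The simplest HOC statistic is the first-order zero-crossing count, and the sketch below illustrates why it survives severe saturation: hard clipping destroys the amplitude of an interfering tone but preserves its zero crossings, whose rate reveals the carrier frequency. The sample rate and tone frequency are synthetic.

// Illustrative sketch of the simplest HOC statistic (first-order zero crossings):
// hard clipping destroys the amplitude information of an interfering tone, but its
// zero-crossing rate still reveals the carrier frequency (about 2*f crossings per
// second for a sinusoid). Sample rate and tone frequency below are synthetic.
public class ZeroCrossingFrequencyEstimate {

    public static void main(String[] args) {
        double fs = 48000.0;          // sample rate (Hz)
        double fInterferer = 3100.0;  // "unknown" interferer carrier (Hz)
        int n = (int) fs;             // one second of samples

        // Severely saturated observation: only the sign of the signal survives.
        int[] clipped = new int[n];
        for (int i = 0; i < n; i++) {
            double sample = Math.sin(2 * Math.PI * fInterferer * i / fs);
            clipped[i] = sample >= 0 ? 1 : -1;
        }

        // Count sign changes and convert to a frequency estimate.
        int crossings = 0;
        for (int i = 1; i < n; i++) {
            if (clipped[i] != clipped[i - 1]) crossings++;
        }
        double estimatedHz = crossings * fs / (2.0 * n);
        System.out.printf("estimated interferer frequency: %.0f Hz%n", estimatedHz);
        // The estimate could then steer a variable notch filter at this frequency.
    }
}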
Abstract:
We envision that dynamic multiband transmissions taking advantage of receiver diversity (even for collocated antennas with different polarization or radiation pattern) will create a new paradigm for these links, guaranteeing high quality and reliability. However, there are many challenges to face regarding the use of broadband reception, where several strong interferers that are out of band (with respect to the multiband transmission) but still within the acquisition band may dramatically limit the expected performance. In this paper we address this problem by introducing a specific capability of the communication system that is able to mitigate these interferences using analog beamforming principles. Indeed, Higher Order Crossings (HOC) joint statistics of the Single-Input Multiple-Output (SIMO) system are shown to effectively determine the angle of arrival of the wavefront, even when operating on highly distorted signals.
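As a greatly simplified stand-in for the HOC joint-statistics approach, the sketch below estimates the angle of arrival with classical two-antenna phase interferometry under a narrowband plane-wave assumption; the geometry and the signal are synthetic, and this is not the method proposed in the paper.

// Simplified angle-of-arrival sketch for a two-antenna SIMO receiver: under a
// narrowband plane-wave assumption the phase difference between antennas spaced
// d apart is 2*pi*d*sin(theta)/lambda. This phase-interferometry stand-in is NOT
// the HOC joint-statistics method of the paper; geometry and signal are synthetic.
public class TwoAntennaAoa {

    public static void main(String[] args) {
        double lambda = 20.0;                    // wavelength (m), e.g. an HF carrier near 15 MHz
        double d = 8.0;                          // antenna spacing (m), kept below lambda/2
        double trueTheta = Math.toRadians(25.0); // actual angle of arrival
        int n = 2000;

        // Complex baseband samples at both antennas (real/imag stored separately).
        double phaseShift = 2 * Math.PI * d * Math.sin(trueTheta) / lambda;
        double corrRe = 0, corrIm = 0;
        for (int i = 0; i < n; i++) {
            double phi = 2 * Math.PI * 0.01 * i;      // arbitrary signal phase evolution
            double x1Re = Math.cos(phi),              x1Im = Math.sin(phi);
            double x2Re = Math.cos(phi + phaseShift), x2Im = Math.sin(phi + phaseShift);
            // Accumulate x2 * conj(x1); its angle is the average phase difference.
            corrRe += x2Re * x1Re + x2Im * x1Im;
            corrIm += x2Im * x1Re - x2Re * x1Im;
        }
        double measuredShift = Math.atan2(corrIm, corrRe);
        double estimatedTheta = Math.asin(measuredShift * lambda / (2 * Math.PI * d));
        System.out.printf("estimated angle of arrival: %.1f degrees%n",
                Math.toDegrees(estimatedTheta));
    }
}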
Abstract:
Achieving reliable communication over HF channels is known to be challenging due to the particularly hostile propagation medium. To address this problem, diversity techniques have been shown to be promising. In this paper, we demonstrate through experimental results the benefits of different diversity strategies when applied to multiple-input multiple-output (MIMO) multicarrier systems. The performance gains of polarisation, space and frequency diversity are quantified using different measurement campaigns.
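For context, the following textbook illustration shows why additional diversity branches help: with maximal ratio combining, the post-combining SNR is the sum of the per-branch SNRs. The branch SNR values are invented and are not results from the measurement campaigns.

// Textbook illustration of diversity combining: with maximal ratio combining (MRC)
// the post-combining SNR is the sum of the per-branch SNRs, so adding polarisation,
// space or frequency branches directly raises the effective SNR. The branch SNRs
// below are invented, not values measured in the campaigns reported in the paper.
public class DiversityCombiningGain {

    static double toDb(double linear) {
        return 10 * Math.log10(linear);
    }

    public static void main(String[] args) {
        double[] branchSnrDb = {6.0, 4.0, 2.5}; // hypothetical per-branch SNRs (dB)
        double combined = 0;
        for (double snrDb : branchSnrDb) {
            combined += Math.pow(10, snrDb / 10); // MRC adds SNRs in linear scale
        }
        System.out.printf("best single branch: %.1f dB%n", branchSnrDb[0]);
        System.out.printf("MRC of %d branches: %.1f dB%n", branchSnrDb.length, toDb(combined));
    }
}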
Abstract:
Cognitive Radio principles can be applied to HF communications to make more efficient use of the extremely scarce spectrum. In this contribution we focus on analyzing how the available channels are used by the legacy users, which are regarded as primary users since they are allowed to transmit without resorting to any smart procedure, and we consider the possibilities for our stations, based on the HFDVL (HF Data+Voice Link) architecture, to participate as secondary users. Our goal is to promote an efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of the primary user activity dynamics in the HF band is developed in this work. It is based on Hidden Markov Models (HMM), which are a powerful tool for modelling stochastic processes, and is trained with real measurements from the 14 MHz band.
Abstract:
The reliability of a bidirectional communication link can be guaranteed with Automatic Repeat Request (ARQ) procedures. The STANAG 5066 standard describes the ARQ procedure for HF communications, which can either be applied to existing HF physical-layer modems or adapted to future physical-layer designs. In this contribution, the physical-layer parameters of an HF modem (HFDVL), developed by the authors over the last decade, are chosen to optimize the performance of the ARQ procedure described in STANAG 5066. Besides the interleaving length, constellation size and coding type, the OFDM-based HFDVL modem permits the selection of the number of receiver antennas. It will be shown that this parameter gives additional degrees of freedom and permits reliable communication over low-SNR HF links.
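The following back-of-the-envelope sketch shows the generic ARQ arithmetic that motivates such optimization: each frame needs on average 1/(1-FER) transmissions, so any parameter that lowers the frame error rate (such as additional receive antennas) directly raises throughput. The FER values are invented and are not taken from STANAG 5066 or the HFDVL modem.

// Generic selective-repeat ARQ arithmetic, shown only to illustrate why lowering the
// frame error rate pays off: each frame needs on average 1/(1-FER) transmissions, so
// throughput efficiency is roughly (1-FER). The FER values below are invented and do
// not come from STANAG 5066 or from the HFDVL modem measurements.
public class ArqEfficiency {

    public static void main(String[] args) {
        double[] frameErrorRates = {0.30, 0.10, 0.02}; // hypothetical FERs for 1, 2, 4 RX antennas
        for (double fer : frameErrorRates) {
            double avgTransmissions = 1.0 / (1.0 - fer);
            double efficiency = 1.0 - fer;
            System.out.printf("FER=%.2f -> %.2f transmissions/frame, efficiency %.0f%%%n",
                    fer, avgTransmissions, efficiency * 100);
        }
    }
}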
Abstract:
Multi-carrier modulations are widely employed in ionospheric communications to mitigate the adverse effects of the HF channel. In this paper we show how the performance achieved by these modulations can be further improved by means of CSI-based precoding techniques, in the context of our research on interactive digital voice communications. Depending on the communication constraints and channel parameters, we show which of the studied modulations and precoding techniques should be selected in order to maximise performance.
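As one concrete example of exploiting channel state information at the transmitter, the sketch below implements textbook water-filling power allocation across subcarriers; it is a generic technique and not necessarily one of the precoders studied in the paper. The subcarrier gains are hypothetical.

import java.util.Arrays;

// Generic example of exploiting CSI at the transmitter: water-filling power allocation
// across subcarriers, p_i = max(0, mu - 1/g_i) with sum(p_i) = P, where g_i is the
// subcarrier gain-to-noise ratio. This is a standard textbook technique, not
// necessarily one of the precoding schemes studied in the paper.
public class WaterFilling {

    static double[] allocate(double[] gains, double totalPower) {
        double lo = 0, hi = totalPower + 1.0 / Arrays.stream(gains).min().orElse(1e-9);
        double[] power = new double[gains.length];
        for (int iter = 0; iter < 100; iter++) { // bisection on the water level mu
            double mu = (lo + hi) / 2, used = 0;
            for (int i = 0; i < gains.length; i++) {
                power[i] = Math.max(0, mu - 1.0 / gains[i]);
                used += power[i];
            }
            if (used > totalPower) hi = mu; else lo = mu;
        }
        return power;
    }

    public static void main(String[] args) {
        double[] subcarrierGains = {2.0, 0.8, 0.3, 1.5}; // hypothetical CSI (gain/noise)
        double[] power = allocate(subcarrierGains, 4.0);
        System.out.println("per-subcarrier power: " + Arrays.toString(power));
        // Strong subcarriers receive more power; very weak ones may be switched off.
    }
}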
Abstract:
Cloud computing has seen impressive growth in recent years, with virtualization technologies being massively adopted to create public and private IaaS (Infrastructure as a Service) solutions. Today, the interest is shifting towards the PaaS (Platform as a Service) model, which allows developers to abstract from the execution platform and focus only on functionality. There are several public PaaS offerings available, but currently no private PaaS solution is ready for production environments. To fill this gap, a new solution must be developed. In this paper we present a key element for enabling this model: a cloud repository based on the OSGi component model. The repository stores, manages, provisions and resolves the dependencies of PaaS software components and services. It can federate with other repositories located in the same or different clouds, both private and public. This way, dependencies can be fulfilled collaboratively, and new business models can be implemented.
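The sketch below illustrates the federation idea only: a repository tries to satisfy a request locally and otherwise delegates it to the repositories it federates with. All interfaces, class names, artifact identifiers and URLs are hypothetical.

import java.util.*;

// Sketch of the federation idea: a repository first tries to satisfy a dependency
// locally and, if the artifact is missing, delegates the request to the repositories
// it federates with (which may live in other private or public clouds).
// All interfaces, class names, artifact identifiers and URLs here are hypothetical.
public class FederatedRepositoryDemo {

    interface Repository {
        Optional<String> find(String artifactId); // returns the artifact location, if known
    }

    static class LocalRepository implements Repository {
        private final Map<String, String> store = new HashMap<>();
        void deploy(String artifactId, String location) { store.put(artifactId, location); }
        public Optional<String> find(String artifactId) {
            return Optional.ofNullable(store.get(artifactId));
        }
    }

    static class FederatedRepository implements Repository {
        private final LocalRepository local;
        private final List<Repository> peers;
        FederatedRepository(LocalRepository local, List<Repository> peers) {
            this.local = local;
            this.peers = peers;
        }
        public Optional<String> find(String artifactId) {
            Optional<String> hit = local.find(artifactId);
            for (Repository peer : peers) {
                if (hit.isPresent()) break;
                hit = peer.find(artifactId); // fall back to federated repositories
            }
            return hit;
        }
    }

    public static void main(String[] args) {
        LocalRepository privateRepo = new LocalRepository();
        LocalRepository publicRepo = new LocalRepository();
        publicRepo.deploy("paas.metrics.service", "https://public.cloud.example/artifacts/42");
        Repository repo = new FederatedRepository(privateRepo, List.of(publicRepo));
        System.out.println(repo.find("paas.metrics.service").orElse("not found"));
    }
}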
Abstract:
Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have given even more voice to these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this paper, we propose a management architecture that tries to tackle these problems; it offers a common way of managing several cloud solutions and an interface that can be tailored to the needs of the user. This management architecture is designed in a modular way and relies on a generic information model. We have validated our approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
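The modular design can be pictured as a driver (plugin) pattern: a common interface hides the specifics of each IaaS product, and product-specific drivers are registered behind it. The sketch below is hypothetical in its interface, method names and behaviour; it only conveys the pattern, not the paper's actual API or information model.

import java.util.*;

// Sketch of the modular-driver idea behind the management architecture: a common
// interface hides the specifics of each IaaS product, and product-specific drivers
// (here an OpenStack-flavoured one) are registered behind it. The interface, method
// names and inlined behaviour are hypothetical, not the architecture's actual API.
public class MultiCloudManagerDemo {

    interface CloudDriver {
        String createServer(String name); // returns an identifier for the new VM
        List<String> listServers();
    }

    static class OpenStackDriver implements CloudDriver {
        private final List<String> servers = new ArrayList<>();
        public String createServer(String name) {
            // A real driver would call the OpenStack APIs here.
            String id = "os-" + UUID.randomUUID();
            servers.add(name + " (" + id + ")");
            return id;
        }
        public List<String> listServers() { return servers; }
    }

    static class CloudManager {
        private final Map<String, CloudDriver> drivers = new HashMap<>();
        void register(String cloudName, CloudDriver driver) { drivers.put(cloudName, driver); }
        CloudDriver cloud(String cloudName) { return drivers.get(cloudName); }
    }

    public static void main(String[] args) {
        CloudManager manager = new CloudManager();
        manager.register("openstack-lab", new OpenStackDriver());
        manager.cloud("openstack-lab").createServer("web-frontend");
        System.out.println(manager.cloud("openstack-lab").listServers());
    }
}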
Abstract:
The size and complexity of cloud environments make them prone to failures. The traditional approach to achieving high dependability for these systems relies on constant monitoring. However, this method is purely reactive. A more proactive approach is provided by online failure prediction (OFP) techniques. In this paper, we describe an OFP system for private IaaS platforms, currently under development, that combines different types of data input, including monitoring information, event logs, and failure data. In addition, this system operates at both the physical and virtual planes of the cloud, taking into account the relationships between nodes and the failure propagation mechanisms that are unique to cloud environments.
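As a toy illustration of combining heterogeneous inputs for failure prediction, the sketch below scores a node from one monitored metric and a recent error-log count; the thresholds and scoring rule are invented, and the real OFP system is far more elaborate.

// Toy illustration of the online-failure-prediction idea: combine a monitored metric
// (e.g. memory utilisation of a node) with the number of recent error-log entries and
// raise a warning before an actual failure. The thresholds and the scoring rule are
// invented for illustration; the OFP system described in the paper is far richer.
public class NaiveFailurePredictor {

    static boolean failureLikely(double memoryUtilisation, int recentErrorLogEntries) {
        double score = 0;
        if (memoryUtilisation > 0.90) score += 0.6;   // sustained resource exhaustion
        if (recentErrorLogEntries > 20) score += 0.5; // burst of error events
        return score >= 0.6;
    }

    public static void main(String[] args) {
        System.out.println(failureLikely(0.95, 5));  // true: memory nearly exhausted
        System.out.println(failureLikely(0.60, 35)); // false: logs alone are not enough here
        System.out.println(failureLikely(0.92, 40)); // true: both signals present
    }
}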
Abstract:
Over the last few years, the service-oriented architecture paradigm has expanded considerably thanks to the growth of web and Internet technologies. The advantages of this architecture lie in offering modular, loosely coupled designs, which enable the efficient and systematic creation of distributed systems. For this kind of architecture to be possible, services must be provided with interconnection interfaces that encapsulate them while also making them easy to use. Several technologies exist for defining these interfaces. Among them, REST (REpresentational State Transfer) services are gaining increasing acceptance. This is mainly due to their scalability and the uniformity of their interfaces, which allows a greater separation between consumers and services. In fact, companies such as Yahoo, Google and Twitter define REST interfaces for accessing their services, whether for querying maps (GoogleMaps), images (Flickr) or e-mail, allowing third parties to develop clients for their services without having to be involved in their production.
Abstract:
The concept of service-oriented architecture has been extensively explored in software engineering, due to the fact that it produces architectures made up of several interconnected modules that are easy to reuse when building new systems. This approach to design would be impossible without interconnection mechanisms such as REST (Representational State Transfer) services, which allow module communication while minimizing coupling. However, this low coupling brings disadvantages, such as the lack of transparency, which makes it difficult to systematically create tests without knowledge of the inner workings of a system. In this article, we present an automatic error detection system for REST services, based on a statistical analysis of the responses produced by multiple service invocations. Thus, a service can be systematically tested without knowing its full specification. The method can find errors in REST services that could not be identified by means of traditional testing methods, and it provides limited testing coverage for services whose response format is unknown. It can also be useful as a complement to other testing mechanisms.
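A greatly simplified version of this idea is sketched below: the same endpoint is invoked repeatedly, and invocations whose status code differs from the majority or whose latency is a clear outlier are flagged. The endpoint URL is hypothetical, and the majority/outlier heuristic stands in for the statistical analysis described in the article (Java 11+ HTTP client).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.*;

// Simplified sketch of statistical error detection for a REST service: invoke the same
// endpoint repeatedly and flag invocations whose status code differs from the majority
// or whose latency is an outlier. The URL is hypothetical and this majority/outlier
// heuristic is only a stand-in for the statistical analysis described in the article.
public class RestAnomalyProbe {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.example.org/v1/items")) // hypothetical endpoint
                .GET().build();

        int samples = 20;
        int[] statuses = new int[samples];
        long[] latencies = new long[samples];
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            latencies[i] = (System.nanoTime() - start) / 1_000_000; // ms
            statuses[i] = response.statusCode();
        }

        // Majority status code and mean/stddev of latency define "normal" behaviour.
        Map<Integer, Long> counts = new HashMap<>();
        for (int s : statuses) counts.merge(s, 1L, Long::sum);
        int majority = Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
        double mean = Arrays.stream(latencies).average().orElse(0);
        double sd = Math.sqrt(Arrays.stream(latencies)
                .mapToDouble(l -> (l - mean) * (l - mean)).average().orElse(0));

        for (int i = 0; i < samples; i++) {
            boolean statusAnomaly = statuses[i] != majority;
            boolean latencyAnomaly = sd > 0 && Math.abs(latencies[i] - mean) > 3 * sd;
            if (statusAnomaly || latencyAnomaly) {
                System.out.printf("invocation %d looks anomalous: status=%d, latency=%dms%n",
                        i, statuses[i], latencies[i]);
            }
        }
    }
}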