32 results for ARTIFACTS
at Universidad Politécnica de Madrid
Abstract:
Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the images increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multi-spectral image) with the box-counting algorithm, applied through a windowing process. The average of the source image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT) using the à trous algorithm (WAT). Two scenes registered by the optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach with the WAT method allows the fusion process to be adapted to the roughness and shape of the regions present in the image to be fused, improving the quality of the fused images and their classification results compared with the original WAT method.
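As an illustration of the kind of local measurement involved, the sketch below (Python with NumPy) estimates a fractal dimension map by box counting over a sliding window; the window size, binarisation rule, and box scales are illustrative choices, not those used in the paper.

```python
import numpy as np

def box_counting_fd(window):
    """Estimate the fractal dimension of an image window by box counting."""
    # Binarise the window around its mean grey level (one simple choice).
    binary = window > window.mean()
    size = min(binary.shape)
    scales = [s for s in (2, 4, 8, 16) if s <= size]
    counts = []
    for s in scales:
        # Count boxes of side s that contain at least one foreground pixel.
        n = 0
        for i in range(0, binary.shape[0], s):
            for j in range(0, binary.shape[1], s):
                if binary[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # FD is the slope of log(count) versus log(1/scale).
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

def fd_map(image, win=16):
    """Slide a window over the image and build a fractal dimension map (FDM)."""
    h, w = image.shape
    fdm = np.zeros_like(image, dtype=float)
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            fdm[i:i + win, j:j + win] = box_counting_fd(image[i:i + win, j:j + win])
    # Index the map between 0 and 1, as done before averaging the FDMs.
    return (fdm - fdm.min()) / (fdm.max() - fdm.min() + 1e-12)
```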
A repository for integration of software artifacts with dependency resolution and federation support
Abstract:
While developing new IT products, reusability of existing components is a key aspect that can considerably improve the success rate. This has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity, we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and must also have federation support for integration with other sources of artifacts. The solution we propose achieves this by working with OSGi components and using OSGi itself.
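As a rough illustration of transitive dependency resolution (not the actual OSGi resolver, whose metadata and constraint model are far richer), the Python sketch below walks a toy registry and returns the artifacts in an installation order; the artifact names are hypothetical.

```python
from typing import Dict, List, Optional, Set

# Toy registry mapping each artifact to the artifacts it declares as dependencies.
# A real repository would read OSGi bundle metadata instead of this dictionary.
REGISTRY: Dict[str, List[str]] = {
    "app": ["http-client", "json"],
    "http-client": ["io-utils"],
    "json": [],
    "io-utils": [],
}

def resolve(artifact: str, seen: Optional[Set[str]] = None) -> List[str]:
    """Return the artifact plus its transitive dependencies in install order."""
    if seen is None:
        seen = set()
    if artifact in seen:
        return []          # already scheduled (also breaks dependency cycles)
    seen.add(artifact)
    order: List[str] = []
    for dep in REGISTRY.get(artifact, []):
        order.extend(resolve(dep, seen))
    order.append(artifact)  # install an artifact only after its dependencies
    return order

print(resolve("app"))  # ['io-utils', 'http-client', 'json', 'app']
```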
Abstract:
New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best-practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments, and domains. Hence it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, provenance of their executions, and links to the relevant associated resources, along with the provision of technological support for their preservation and efficient retrieval and reuse. In this direction, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture with a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone system of this realization, a digital library system built on top of dLibra.
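A minimal sketch of what such an aggregation might look like as a data structure is shown below; the field names and URIs are illustrative placeholders, not the actual Research Object vocabulary or the dLibra API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResearchObject:
    """A toy aggregation of the resources of a scientific investigation."""
    identifier: str
    workflow_models: List[str] = field(default_factory=list)    # URIs of workflow definitions
    provenance_traces: List[str] = field(default_factory=list)  # URIs of execution provenance
    linked_resources: List[str] = field(default_factory=list)   # datasets, papers, annotations

# Hypothetical example of aggregating a workflow, one of its runs, and its input data.
ro = ResearchObject(
    identifier="urn:example:ro:42",
    workflow_models=["http://example.org/workflows/alignment.t2flow"],
    provenance_traces=["http://example.org/runs/2013-05-01.prov"],
    linked_resources=["http://example.org/data/input.fasta"],
)
```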
Abstract:
Open source is a software development paradigm that has seen a huge rise in recent years. It reduces IT costs and time to market, while increasing security and reliability. However, the difficulty of integrating developments from different communities and stakeholders prevents this model from reaching its full potential. This is mainly due to the challenge of determining and locating the correct dependencies for a given software artifact. To solve this problem we propose the development of an extensible, model-based software component repository. This repository should be capable of resolving the dependencies between components and of working with existing repositories to access the needed artifacts transparently. It will also be easily expandable, enabling the creation of modules that support new kinds of dependencies or other existing repository technologies. The proposed solution will work with OSGi components and use OSGi itself.
Abstract:
Manufacturing technologies such as injection molding or embossing specify production limits, for instance minimum vertex radii or draft angles for demolding. In some demanding nonimaging applications, these restrictions may limit the optical efficiency of the system or give rise to undesired artifacts in the illumination pattern. A novel manufacturing concept is presented here, in which the optical surfaces are not obtained from the usual revolution symmetry with respect to a central axis (the z axis), but are instead calculated as free-form surfaces describing a spiral trajectory around the z axis. The main advantage of this new concept lies in the manufacturing process: a molded piece can easily be separated from its mold just by applying a combination of rotational movement around the z axis and linear movement along the z axis, even for negative draft angles. Some examples of this spiral symmetry are shown here, together with their simulated results.
Abstract:
Manufacturing technologies such as injection molding or embossing specify production limits, for instance minimum vertex radii or draft angles for demolding. When dealing with optical design, these restrictions may limit the optical efficiency of the system or give rise to undesired artifacts in the illumination pattern. A novel manufacturing concept is presented here, in which the optical surfaces are not obtained from the usual revolution symmetry with respect to a central axis (the z axis), but are instead calculated as free-form surfaces describing a spiral trajectory around the z axis. The main advantage of this new concept lies in the manufacturing process: a molded piece can easily be separated from its mold just by applying a combination of rotational movement around the z axis and linear movement along the z axis, even for negative draft angles. The general design procedure is described in detail.
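To illustrate the kind of symmetry involved (not the paper's actual free-form design procedure), the sketch below sweeps a 2-D profile around the z axis with a helical offset, so the resulting surface is invariant under the screw motion that allows demolding; the profile and pitch are arbitrary inputs.

```python
import numpy as np

def spiral_surface(profile_r, profile_z, pitch, n_theta=360):
    """Sweep a 2-D profile (matched r and z samples) around the z axis along a helix
    of the given pitch: rotating by dtheta while translating by pitch*dtheta/(2*pi)
    along z maps the surface onto itself, which is what permits demolding."""
    theta = np.linspace(0.0, 2 * np.pi, n_theta)
    R, T = np.meshgrid(profile_r, theta)   # radius of the profile at each angle
    Z0, _ = np.meshgrid(profile_z, theta)  # base height of the profile
    X = R * np.cos(T)
    Y = R * np.sin(T)
    Z = Z0 + pitch * T / (2 * np.pi)       # helical offset around the z axis
    return X, Y, Z

# Example: a toy conical profile swept with a pitch of 0.5 units per revolution.
r = np.linspace(1.0, 2.0, 50)
z = np.linspace(0.0, 1.0, 50)
X, Y, Z = spiral_surface(r, z, pitch=0.5)
```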
Abstract:
Manufacturing technologies such as injection molding or embossing specify production limits, for instance minimum vertex radii or draft angles for demolding. In some demanding nonimaging applications, these restrictions may limit the optical efficiency of the system or give rise to undesired artifacts in the illumination pattern. A novel manufacturing concept is presented here, in which the optical surfaces are not obtained from the usual revolution symmetry with respect to a central axis (the z axis), but are instead calculated as free-form surfaces describing a spiral trajectory around the z axis. The main advantage of this new concept lies in the manufacturing process: a molded piece can easily be separated from its mold just by applying a combination of rotational movement around the z axis and linear movement along the z axis, even for negative draft angles. Some examples of this spiral symmetry are shown here, together with their simulated results.
Abstract:
The magnetoencephalogram (MEG) is contaminated with undesired signals, called artifacts. Some of the most important are the cardiac and ocular artifacts (CA and OA, respectively) and power line noise (PLN). Blind source separation (BSS) has been used to reduce the influence of these artifacts in the data. There is a plethora of BSS-based artifact removal approaches, but few comparative analyses. In this study, MEG background activity from 26 subjects was processed with five widespread BSS techniques (AMUSE, SOBI, JADE, extended Infomax, and FastICA) and one constrained BSS (cBSS) technique. Then, the ability of several combinations of BSS algorithm, epoch length, and artifact detection metric to automatically reduce the CA, OA, and PLN was quantified with objective criteria. The results pointed to cBSS as a very suitable approach to remove the CA. Additionally, a combination of AMUSE or SOBI with artifact detection metrics based on entropy or power criteria decreased the OA. Finally, the PLN was reduced by means of a spectral metric. These findings confirm the utility of BSS in helping to remove artifacts from MEG background activity.
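As a rough illustration of the sort of artifact detection metrics that can be applied to the separated components (the actual metrics and thresholds of the study are not reproduced here), the Python sketch below computes an entropy measure and a power-line power ratio per component and flags suspicious ones against placeholder thresholds.

```python
import numpy as np

def shannon_entropy(x, bins=64):
    """Shannon entropy (bits) of a component's amplitude distribution."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def line_power_ratio(x, fs, f0=50.0, half_bw=1.0):
    """Fraction of the component's power inside a narrow band around f0 (power line)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs > f0 - half_bw) & (freqs < f0 + half_bw)
    return psd[band].sum() / psd.sum()

def flag_components(sources, fs, h_thr=4.0, p_thr=0.5):
    """Flag components whose entropy or power-line ratio crosses illustrative
    thresholds (the direction and values here are placeholders, not validated)."""
    flags = []
    for k, s in enumerate(sources):   # sources: array (n_components, n_samples)
        suspicious = shannon_entropy(s) < h_thr or line_power_ratio(s, fs) > p_thr
        flags.append((k, suspicious))
    return flags
```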
Abstract:
Conventional SAR (Synthetic Aperture Radar) techniques only consider a single reflection of the transmitted waveforms from targets. Nevertheless, today's new applications force SAR systems to work in much more complex scenes such as urban environments. As a result, multiple-bounce returns are superposed on the direct echoes. We refer to these as ghost images, since they obscure the true target image and lead to poor resolution. By applying the Time Reversal concept to SAR imaging (TR-SAR), it is possible to considerably reduce, or almost completely mitigate, these ghosting artifacts, recovering the resolution lost to multipath effects. Furthermore, focusing indicators such as entropy (E), contrast (C), and Rényi entropy (RE) provide a good focusing criterion when using TR-SAR.
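The focusing indicators mentioned above have standard textbook definitions; a minimal sketch of how they might be computed on a complex SAR image is shown below, treating the normalised intensity image as a probability distribution (the exact formulations used in the paper may differ).

```python
import numpy as np

def image_entropy(img):
    """Entropy of the normalised intensity image (lower usually means better focused)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def image_contrast(img):
    """Contrast: standard deviation over mean of the intensity (higher = better focused)."""
    intensity = np.abs(img) ** 2
    return intensity.std() / intensity.mean()

def renyi_entropy(img, alpha=3):
    """Rényi entropy of order alpha of the normalised intensity image."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```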
Abstract:
Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to use in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and with the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, reducing the computational cost by a significant factor.
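As a minimal sketch of the underlying machinery (generic snapshot POD via a thin SVD, not the paper's specific treatment of solver artifacts), the following code extracts a POD basis retaining a prescribed energy fraction and projects a new snapshot onto it.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a snapshot matrix X of shape (n_dof, n_snapshots) via thin SVD."""
    # Subtract the mean field so the modes capture fluctuations only (a common choice).
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    # Keep the smallest number of modes retaining the requested energy fraction.
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return mean, U[:, :r], s[:r]

def project(snapshot, mean, modes):
    """Reduced coordinates of a new snapshot (vector of length n_dof) in the POD basis."""
    return modes.T @ (snapshot - mean.ravel())
```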
Abstract:
High flux and high CRI may be achieved by combining different chips and/or phosphors. This, however, results in inhomogeneous sources that, when combined with collimating optics, typically produce patterns with undesired artifacts. These may be a combination of spatial, angular, or color non-uniformities. To avoid these effects, the light source needs to be mixed both spatially and angularly. Diffusers can achieve this effect, but they also increase the etendue (and reduce the brightness) of the resulting source, leading to optical systems of increased size and wider emission angles. The shell mixer is an optic comprising many lenses on a shell covering the source. These lenses perform Kohler integration to mix the emitted light, both spatially and angularly. Placed on top of a multi-chip Lambertian light source, it produces a highly homogeneous virtual source (i.e., spatially and angularly mixed), also Lambertian, located in the same position and with essentially the same size (so the average brightness is not increased). This virtual light source can then be collimated with another optic, resulting in a homogeneous pattern without color separation. Experimental measurements have shown an optical efficiency of the shell of 94% and a highly homogeneous angular intensity distribution of the collimated beams, in good agreement with the ray-tracing simulations.
Abstract:
Understanding embryogenesis in living systems requires reliable quantitative analysis of cell migration throughout all stages of development. This is a major challenge for "in toto" reconstruction based on different modalities of "in vivo" imaging techniques, given their spatio-temporal resolution and their image artifacts and noise. Several methods for cell tracking are available, but expensive manual interaction (in time and human resources) is always required to enforce coherence. Because of this limitation it is necessary to restrict the experiments or accept an uncontrolled error rate. Is it possible to obtain automated, reliable measurements of migration? Can we provide a seed for biologists to complete cell lineages efficiently? We propose a filtering technique that treats trajectories as spatio-temporal connected structures and, using multi-dimensional morphological operators, prunes out those that might introduce noise and false positives.
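A simplified sketch of the idea, using connected-component labelling in a binary (time, y, x) volume rather than the paper's multi-dimensional morphological operators, is shown below; the structuring element and the minimum track length are illustrative.

```python
import numpy as np
from scipy import ndimage

def prune_short_tracks(detections, min_length=5):
    """Treat trajectories as connected structures in a binary (T, H, W) volume and
    discard components spanning fewer than min_length time frames (likely noise)."""
    # A 3-D structuring element connects detections across space and time.
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, n = ndimage.label(detections, structure=structure)
    keep = np.zeros(detections.shape, dtype=bool)
    for k in range(1, n + 1):
        frames = np.unique(np.nonzero(labels == k)[0])  # time frames touched by track k
        if frames.size >= min_length:
            keep |= labels == k
    return keep
```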
Abstract:
Usability is a critical quality factor. Therefore, like traditional software teams, agile teams have to address usability to properly capture their users' experience. There is an interesting debate in the agile and usability communities about how to achieve this integration. Our aim is to contribute to this debate by discussing the incorporation of particular usability recommendations into user stories, one of the most popular artifacts for communicating agile requirements. In this paper, we explore the implications of usability for both the structure of user stories and the process for defining them. We discuss what changes the incorporation of particular usability issues may introduce in a user story. Although our findings require more empirical validation, we think they are a good starting point for further research along this line.
Abstract:
Interoperability on multiple levels, concerning both the ontologies themselves and their engineering activities, is a key requirement for ontology networks to be efficient, with minimal redundancy and high reuse. This requirement bears directly on software tools, which can support some interoperability levels yet can be hindered by a lack of shared models and vocabularies describing the resources to be handled, as well as the ways of handling them. Here, three examples of metalevel vocabularies are proposed, each covering at least one particular interoperability aspect: OMV for modeling the artifacts themselves, LIR for managing a multilingual layer on top of them, and C-ODO Light for modeling collaboration-supportive life cycle management tasks and processes. All of these models lend themselves to handling by dedicated software tools and are all being employed within NeOn products.
Abstract:
This thesis proposes a comprehensive approach to the monitoring and management of Quality of Experience (QoE) in multimedia delivery services over IP. It addresses the problem of preventing, detecting, measuring, and reacting to QoE degradations under the constraints of a service provider: the solution must scale to a wide IP network delivering individual media streams to thousands of users. The proposed monitoring solution is called QuEM (Qualitative Experience Monitoring). It is based on detecting degradations in the network Quality of Service (packet losses, bandwidth drops...) and mapping each degradation event to a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This mapping is based on the analysis of the transport and Network Abstraction Layer information of the coded stream, and allows a good characterization of the most relevant defects that appear in this kind of service: screen freezing, macroblocking, audio mutes, video quality drops, delay issues, and service outages. The results have been validated by subjective quality assessment tests. The methodology used for those tests has also been designed to mimic as closely as possible the conditions of a real user of such services: the impairments to be evaluated are introduced randomly in the middle of a continuous video stream.
Based on the monitoring solution, several applications have been proposed as well: an unequal error protection system which provides higher protection to the parts of the stream which are more critical for the QoE, a solution which applies the same principles to minimize the impact of incomplete segment downloads in HTTP Adaptive Streaming, and a selective scrambling algorithm which ciphers only the most sensitive parts of the media stream. A fast channel change application is also presented, as well as a discussion about how to apply the previous results and concepts in a 3D video scenario.
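As a rough illustration of the qualitative mapping idea described above (the rules below are placeholders, not the actual QuEM rule set), the following Python sketch maps a network-level degradation event to a qualitative description of its likely effect on the perceived quality.

```python
from dataclasses import dataclass

@dataclass
class DegradationEvent:
    kind: str                # e.g. "packet_loss", "bandwidth_drop" (hypothetical labels)
    stream: str              # "audio" or "video"
    affected_frame: str = "" # e.g. "I", "P", "B" for video streams

def describe_effect(event: DegradationEvent) -> str:
    """Map a network-level degradation to a qualitative QoE description.
    Illustrative rules only, not the thesis' actual classification logic."""
    if event.kind == "packet_loss" and event.stream == "audio":
        return "audio mute"
    if event.kind == "packet_loss" and event.stream == "video":
        # Losing a reference frame propagates errors until the next intra refresh.
        return "frozen picture" if event.affected_frame == "I" else "macroblocking"
    if event.kind == "bandwidth_drop":
        return "video quality drop or delay"
    return "service outage"

print(describe_effect(DegradationEvent("packet_loss", "video", "I")))  # frozen picture
```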