24 results for multi-project environment

at Universidad de Alicante


Relevance:

80.00%

Publisher:

Abstract:

A parallel algorithm to remove impulsive noise from digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. To identify the number of pixels to be allocated to the multi-core CPUs and the GPUs, a performance analysis using large images is presented, and the parallel implementation is compared on multi-core CPUs, on GPUs, and on a combination of both. Performance has been evaluated in terms of execution time and Megapixels/second. We present several optimization strategies that are especially effective for the multi-core environment and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
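As a rough illustration of the peer group idea with a Euclidean metric, the sketch below marks a pixel as impulsive when too few of its neighbours are close to it in RGB space. The threshold d, the peer count m, and the median replacement rule are illustrative assumptions; the paper's exact decision rule and its CPU/GPU work partitioning are not reproduced here.

```python
import numpy as np

def peer_group_filter(img, d=45.0, m=2):
    """Sequential sketch of peer-group impulse-noise filtering.
    img: HxWx3 float array. A pixel is kept if at least m of its
    8 neighbours lie within Euclidean distance d in RGB space;
    otherwise it is treated as impulsive and replaced by the
    channel-wise median of its 3x3 neighbourhood.
    (d, m and the replacement rule are hypothetical choices.)"""
    h, w, _ = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y-1:y+2, x-1:x+2].reshape(9, 3)
            centre = img[y, x].astype(float)
            dist = np.linalg.norm(win - centre, axis=1)
            peers = np.sum(dist <= d) - 1  # exclude the centre itself
            if peers < m:                  # declared impulsive
                out[y, x] = np.median(win, axis=0)
    return out
```

In the heterogeneous setting described by the abstract, the image rows would be split between CPU cores and the GPU according to the measured throughput of each device.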

Relevance:

40.00%

Publisher:

Abstract:

The design of fault tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques based on the selective application of redundancy have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm that can simultaneously fulfill many design goals thanks to the use of the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
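The selection step at the heart of NSGA-II is Pareto dominance: a hardened variant survives only if no other variant is at least as good on every objective and strictly better on one. A minimal sketch, with hypothetical objective tuples (e.g. execution-time overhead and residual fault rate, both minimised; the paper's actual objectives are reliability, cost and performance):

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and
    strictly better on at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates, i.e. the trade-off set
    that NSGA-II ranks first in each generation."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]
```

NSGA-II additionally ranks the dominated candidates into successive fronts and uses crowding distance to keep the front well spread, but the filter above is the core of the trade-off search the abstract describes.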

Relevance:

30.00%

Publisher:

Abstract:

Robotics is a field that presents many problems because it depends on a large number of disciplines, devices, technologies and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments, such as those of household or professional robots, presents many new challenges. To facilitate the rapid, low-cost development of robotic systems, together with code reusability, medium- and long-term maintainability and robustness, novel approaches are required that provide generic models and software systems built on paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems and inspired by the human nervous system, able to transfer the control characteristics of the biological system and to take advantage of the best properties of distributed software systems.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a proposal for a multi-modal dialogue system oriented to multilingual question-answering is presented. This system includes the following modes of access: voice, text, avatar, gestures and sign language. The proposal is oriented to the question-answering task as a user interaction mechanism. The proposal is in the first stages of its development, and the architecture is presented here for the first time, building on previous experience in question-answering and dialogue systems. The main objective of this research work is the development of a solid platform that will permit the modular integration of the proposed architecture.

Relevance:

30.00%

Publisher:

Abstract:

Several works deal with 3D data in the SLAM problem. The data come from a 3D laser sweeping unit or a stereo camera, both of which provide a huge amount of data. In this paper, we detail an efficient method to extract planar patches from raw 3D data. We then use these patches in an ICP-like method to address the SLAM problem. Using ICP with planes is not a trivial task and requires some adaptation of the original ICP. Some promising results are shown for outdoor environments.
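A basic building block for extracting planar patches from a raw point cloud is a least-squares plane fit, obtainable from the singular value decomposition of the centred points. A minimal sketch (the paper's actual segmentation and ICP adaptation are not reproduced):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an Nx3 point cloud.
    Returns a unit normal n and offset d such that n.p + d == 0
    for points p lying exactly on the plane."""
    centroid = points.mean(axis=0)
    # the right singular vector with the smallest singular value is
    # the direction of least variance, i.e. the plane normal
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid
```

An ICP variant over such patches then minimises point-to-plane (or plane-to-plane) distances instead of point-to-point ones, which drastically reduces the amount of data matched at each iteration.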

Relevance:

30.00%

Publisher:

Abstract:

Robotics is an emerging field with great activity, and one that presents several problems because it depends on a large number of disciplines, technologies, devices and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments presents many new challenges; new uses include, for example, household robots and professional robots. To facilitate the rapid, low-cost development of robotic systems, together with code reusability, medium- and long-term maintainability and robustness, novel approaches are required that provide generic models and software systems built on paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems and inspired by the human nervous system, able to transfer the control characteristics of the biological system and to take advantage of the best properties of distributed software systems. Specifically, we model the decentralized activity and hormonal variation.

Relevance:

30.00%

Publisher:

Abstract:

Context. Historically, supergiant (sg)B[e] stars have been difficult to include in theoretical schemes for the evolution of massive OB stars. Aims. The location of Wd1-9 within the coeval starburst cluster Westerlund 1 means that it may be placed into a proper evolutionary context, and we therefore aim to utilise a comprehensive multiwavelength dataset to determine its physical properties and consequently its relation to other sgB[e] stars and the global population of massive evolved stars within Wd1. Methods. Multi-epoch R- and I-band VLT/UVES and VLT/FORS2 spectra are used to constrain the properties of the circumstellar gas, while an ISO-SWS spectrum covering 2.45-45 μm is used to investigate the distribution, geometry and composition of the dust via a semi-analytic irradiated disk model. Radio emission enables a long-term mass-loss history to be determined, while X-ray observations reveal the physical nature of high-energy processes within the system. Results. Wd1-9 exhibits the rich optical emission line spectrum that is characteristic of sgB[e] stars. Likewise, its mid-IR spectrum resembles those of the LMC sgB[e] stars R66 and R126, revealing the presence of equatorially concentrated silicate dust with a mass of ~10⁻⁴ M⊙. Extreme historical and ongoing mass loss (≳10⁻⁴ M⊙ yr⁻¹) is inferred from the radio observations. The X-ray properties of Wd1-9 imply the presence of high-temperature plasma within the system and are directly comparable to a number of confirmed short-period colliding wind binaries within Wd1. Conclusions. The most complete explanation for the observational properties of Wd1-9 is that it is a massive interacting binary currently undergoing, or having recently exited from, rapid Roche-lobe overflow, supporting the hypothesis that binarity mediates the formation of (a subset of) sgB[e] stars.
The mass-loss rate of Wd1-9 is consistent with such an assertion, while viable progenitor and descendant systems are present within Wd1, and comparable sgB[e] binaries have been identified in the Galaxy. Moreover, the rarity of sgB[e] stars - only two examples are identified from a census of ~68 young massive Galactic clusters and associations containing ~600 post-Main Sequence stars - is explicable given the rapidity (~10⁴ yr) expected for this phase of massive binary evolution.

Relevance:

30.00%

Publisher:

Abstract:

Alkaline hydroxides, especially sodium and potassium hydroxides, are multi-million-ton per annum commodities and strong chemical bases with large-scale applications, some of which depend on their ability to degrade most materials, depending on the temperature used. As examples, these chemicals are involved in the manufacture of pulp and paper, textiles, biodiesels, soaps and detergents, in the removal of acid gases (e.g., SO2), and in many organic synthesis processes. Sodium and potassium hydroxides are strong and corrosive bases, but they are also very stable chemicals that melt without decomposition, NaOH at 318 ºC and KOH at 360 ºC. Hence, they can react with most materials, even relatively inert ones such as carbon materials. Thus, at temperatures higher than 360 ºC these molten hydroxides readily react with most types of carbon-containing raw materials (coals, lignocellulosic materials, pitches, etc.), as well as with most pure carbon materials (carbon fibers, carbon nanofibers and carbon nanotubes). This reaction occurs via a solid-liquid redox process in which the hydroxide (NaOH or KOH) is converted to the following main products: hydrogen, alkaline metals and alkaline carbonates, as a result of the oxidation of the carbon precursor. By controlling this reaction, and after a suitable washing process, good-quality activated carbons (ACs), a classical type of porous material, can be prepared. Such carbon activation by hydroxides, although known for a long time, continues to be under research due to the unique properties of the resulting activated carbons: high porosity development and interesting pore size distributions. These two properties are important for new applications such as gas storage (e.g., natural gas or hydrogen), the capture, storage and transport of carbon dioxide, electricity storage (EDLC supercapacitors) and pollution control.
Because these applications require new and superior-quality activated carbons, there is no doubt that, among the different existing activation processes, the one based on the chemical reaction between the carbon precursor and an alkaline hydroxide (NaOH or KOH) gives the best activation results. The present article covers different aspects of activation by hydroxides, including the characteristics of the resulting activated carbons and their performance in some environment-related applications. The following topics are discussed: i) the variables of the preparation method, such as the nature of the hydroxide, the type of carbon precursor, the hydroxide/carbon precursor ratio, the mixing procedure for carbon precursor and hydroxide (impregnation of the precursor with a hydroxide solution, or mixing both hydroxide and carbon precursor as solids), and the temperature and time of the reaction, analyzing their effect on the resulting porosity; ii) an analysis of the main reactions occurring during the activation process; iii) a comparative analysis of the porosity development obtained from different activation processes (e.g., CO2, steam, phosphoric acid and hydroxide activation); and iv) the performance of the prepared activated carbon materials in a few applications, such as VOC removal and electricity and gas storage.
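The solid-liquid redox process described above, yielding hydrogen, the alkaline metal and the alkaline carbonate, is commonly summarised in the activation literature by the following balanced global reaction (a simplification; the actual mechanism involves several intermediate steps):

```latex
6\,\mathrm{MOH} + 2\,\mathrm{C} \;\longrightarrow\;
2\,\mathrm{M} + 3\,\mathrm{H_2} + 2\,\mathrm{M_2CO_3}
\qquad (\mathrm{M} = \mathrm{Na},\ \mathrm{K})
```

The subsequent washing step removes the metal and carbonate by-products, leaving the porous carbon behind.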

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a complete system for the treatment of both the geographical and the temporal dimensions in text, and its application to information retrieval. The system was evaluated in the GeoTime tasks of the 8th and 9th NTCIR workshops, in 2010 and 2011 respectively, making it possible to compare it with contemporary approaches to the topic. In order to participate in this task we added the temporal dimension to our GIR system. The system proposed here has a modular architecture so that features can easily be added or modified. In its development, we followed a QA-based approach and used multiple search engines to improve performance.

Relevance:

30.00%

Publisher:

Abstract:

In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and the analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
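The kind of self-organizing network used to represent events can be illustrated by a single self-organizing-map update step: find the neuron closest to an input sample, then pull neighbouring neurons toward it. This is a generic textbook sketch (learning rate, neighbourhood width and grid shape are assumptions), not the paper's GPU implementation:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One self-organising-map update.
    weights: GxGxD grid of neuron weight vectors; x: D-dim input.
    Returns the updated grid."""
    # winning unit = neuron closest to the input
    dists = np.linalg.norm(weights - x, axis=-1)
    wy, wx = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood around the winner on the grid
    gy, gx = np.indices(dists.shape)
    h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2 * sigma ** 2))
    # pull each neuron toward x, weighted by its grid proximity
    return weights + lr * h[..., None] * (x - weights)
```

The winner search and the per-neuron updates are both embarrassingly parallel, which is what makes this family of networks a natural fit for the multi-GPU architecture the abstract describes.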

Relevance:

30.00%

Publisher:

Abstract:

Trachoma currently represents one of the three main causes of 'avoidable' blindness and reaches intolerable dimensions in many developing countries. It was endemic in many regions of eastern Spain until well into the twentieth century. The aim of this paper is to analyze the epidemiological development of this disease in contemporary Spain; to examine its determining factors, particularly environmental and sanitary/health factors; and, finally, to study the health care, environmental and socio-economic measures that led to its control and eradication. We believe that the historical approach not only highlights the role of environmental factors in the development of trachoma, but may also aid in understanding the current epidemiology of trachoma.

Relevance:

30.00%

Publisher:

Abstract:

Tool path generation is one of the most complex problems in Computer Aided Manufacturing. Although some efficient strategies have been developed, most of them are only useful for standard machining. Moreover, the algorithms used for tool path computation demand high computational performance, which makes their implementation on many existing systems very slow or even impractical. Hardware acceleration is an incremental solution that can be cleanly added to these systems while keeping everything else intact: it is completely transparent to the user, its cost is much lower and its development time much shorter than replacing the computers with faster ones. This paper presents an optimisation that uses a specific graphics hardware approach, exploiting the power of multi-core Graphics Processing Units (GPUs) to improve tool path computation. The improvement is applied to a highly accurate and robust tool path generation algorithm. As a case study, the paper presents a fully implemented algorithm used for turning-lathe machining of shoe lasts. A comparative study shows the gain achieved in terms of total computing time: execution is almost two orders of magnitude faster than on modern PCs.

Relevance:

30.00%

Publisher:

Abstract:

Prototype Selection (PS) algorithms allow faster Nearest Neighbor classification by keeping only the most profitable prototypes of the training set. In turn, these schemes typically lower the classification accuracy. In this work, a new strategy for multi-label classification tasks is proposed to solve this accuracy drop without the need to use the whole training set. Given a new instance, the PS algorithm is used as a fast recommender system that retrieves the most likely classes. The actual classification is then performed considering only the prototypes from the initial training set that belong to the suggested classes. Results show that this strategy provides a large set of trade-off solutions that fills the gap between PS-based classification efficiency and conventional kNN accuracy. Furthermore, this scheme is not only able, at best, to reach the performance of conventional kNN with barely a third of the distances computed, but it also outperforms the latter in noisy scenarios, proving to be a much more robust approach.
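The two-stage scheme can be sketched as follows: a reduced prototype set first recommends candidate classes, then plain kNN runs only over the training points of those classes. All names and parameters here are hypothetical, and this simplified sketch recommends a fixed number of classes per query rather than using the paper's exact recommendation rule:

```python
import numpy as np

def two_stage_classify(x, prototypes, proto_labels,
                       train, train_labels, k=3, n_rec=2):
    """Stage 1: the n_rec nearest prototypes suggest likely classes.
    Stage 2: kNN over the full training set, restricted to those
    classes. Returns the majority label among the k neighbours."""
    # stage 1: cheap pass over the small prototype set
    d = np.linalg.norm(prototypes - x, axis=1)
    candidates = {proto_labels[i] for i in np.argsort(d)[:n_rec]}
    # stage 2: exact kNN, but only on training points of the
    # suggested classes (far fewer distances than full kNN)
    mask = np.isin(train_labels, list(candidates))
    sub, sub_lab = train[mask], train_labels[mask]
    nn = np.argsort(np.linalg.norm(sub - x, axis=1))[:k]
    vals, counts = np.unique(sub_lab[nn], return_counts=True)
    return vals[np.argmax(counts)]
```

The saving comes from the mask: distances are computed only to the prototypes plus the training points of the recommended classes, rather than to the entire training set.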

Relevance:

30.00%

Publisher:

Abstract:

Studies on positive plant–plant relations have traditionally focused on pair-wise interactions. Conversely, the interaction with other co-occurring species has scarcely been addressed, despite the fact that the entire community may affect plant performance. We used woody vegetation patches as models to evaluate community facilitation in semi-arid steppes. We characterized the biotic and physical attributes of 53 woody patches (patch size, litter accumulation, canopy density, vegetation cover, species number and identity, and phylogenetic distance) and soil fertility (organic C and total N), and evaluated their relative importance for the performance of seedlings of Pistacia lentiscus, a keystone woody species in western Mediterranean steppes. Seedlings were planted underneath the patches and on their northern and southern edges. Woody patches positively affected seedling survival but not seedling growth. Soil fertility was higher underneath the patches than elsewhere. Physical and biotic attributes of the woody patches affected seedling survival, but these effects depended on microsite conditions. The composition of the community of small shrubs and perennial grasses growing underneath the patches controlled seedling performance: an increase in Stipa tenacissima and a decrease in Brachypodium retusum increased the probability of survival. The cover of these species and other small shrubs, litter depth, and community phylogenetic distance were also related to seedling survival. Seedlings planted on the northern edge of the patches were mostly affected by the attributes of the biotic community; these attributes were of lesser importance for seedlings planted underneath and on the southern edge of the patches, suggesting that constraints on seedling establishment differed within the patches.
Our study highlights the importance of taking community attributes, rather than pair-wise interactions alone, into consideration when evaluating the outcome of ecological interactions in multi-specific communities, as they have profound implications for the composition, function and management of semi-arid steppes.

Relevance:

30.00%

Publisher:

Abstract:

Feature selection is an important and active issue in clustering and classification problems. By choosing an adequate feature subset, the dimensionality of a dataset can be reduced, thus decreasing the computational complexity of classification and improving classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with only one objective, namely the classification accuracy obtained using the selected feature subset, in recent years some multi-objective approaches to this problem have been proposed. These either select features that improve not only the classification accuracy but also the generalisation capability, in the case of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features present in some of the methods used to validate the clustering/classification, in the case of unsupervised classifiers. The main contribution of this paper is a multi-objective approach for feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs), which includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The feature sets selected in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%.