13 results for Expert systems

at Universidad de Alicante


Relevance:

70.00%

Publisher:

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process required to assess their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm which can simultaneously fulfill many design goals thanks to the use of the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
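
A minimal sketch of the trade-off analysis this kind of tool automates, under assumptions: the hardened AES variants and their objective values below are hypothetical placeholders (in the paper they would come from the hardening infrastructure's overhead reports and coverage estimations), and only plain non-dominated filtering is shown, not the full NSGA-II algorithm.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one.
    Objectives (all minimized): code-size overhead, execution-time overhead, 1 - fault coverage."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated (name, objectives) pairs among the candidates."""
    return [(name, obj) for name, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates)]

# Hypothetical hardened variants of an AES kernel: (size overhead, time overhead, 1 - coverage).
variants = [
    ("no_hardening",         (0.00, 0.00, 0.55)),
    ("duplicate_registers",  (0.35, 0.28, 0.18)),
    ("duplicate_all_checks", (0.90, 0.75, 0.03)),
    ("naive_triplication",   (0.95, 0.80, 0.04)),   # dominated: more costly, no better coverage
    ("selective_vars_only",  (0.20, 0.15, 0.30)),
]

for name, obj in pareto_front(variants):
    print(name, obj)
```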

Relevance:

60.00%

Publisher:

Abstract:

In this paper we introduce a probabilistic approach to support visual supervision and gesture recognition. Task knowledge is both geometric and visual in nature and is encoded in parametric eigenspaces. Learning processes that compute modal subspaces (eigenspaces) are the core of tracking and recognition of gestures and tasks. We describe the overall architecture of the system and detail the learning processes and gesture design. Finally, we show experimental results of tracking and recognition in block-world-like assembly tasks and in general human gestures.
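
A minimal sketch of the eigenspace idea, assuming vectorized image observations (not the paper's full probabilistic tracking architecture): PCA builds one modal subspace per gesture, and a new observation is recognized by the subspace that reconstructs it with the smallest error.

```python
import numpy as np

def build_eigenspace(samples, k):
    """samples: (n_samples, n_features) array of vectorized images for one gesture."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal directions
    return mean, vt[:k]                                        # mean and k leading eigenvectors

def reconstruction_error(x, mean, basis):
    """Distance from x to the affine subspace spanned by the eigenvectors."""
    coeffs = basis @ (x - mean)
    recon = mean + basis.T @ coeffs
    return np.linalg.norm(x - recon)

def recognize(x, eigenspaces):
    """eigenspaces: dict gesture_name -> (mean, basis). Returns the closest subspace."""
    return min(eigenspaces, key=lambda g: reconstruction_error(x, *eigenspaces[g]))
```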

Relevance:

60.00%

Publisher:

Abstract:

The delineation of functional economic areas, or market areas, is a problem of high practical relevance, since functional sets such as economic areas in the US, Travel-to-Work Areas in the United Kingdom, and their counterparts in other OECD countries are the basis of many statistical operations and policy-making decisions at the local level. This is a combinatorial optimisation problem defined as the partition of a given set of indivisible spatial units (covering a territory) into regions characterised by being (a) self-contained and (b) cohesive in terms of spatial interaction data (flows, relationships). Usually, each region must reach a minimum size and self-containment level, and must be contiguous. Although these optimisation problems have typically been solved through greedy methods, a recent strand of the literature in this field has been concerned with the use of evolutionary algorithms with ad hoc operators. Although these algorithms have proved successful in improving the results of some of the more widely applied official procedures, they are so time-consuming that they cannot be applied directly to solve real-world problems. In this paper we propose a new set of group-based mutation operators, featuring general operations over disjoint groups, tailored to ensure that all the constraints are respected during the operation, thereby improving efficiency. A comparative analysis of our results with those from previous approaches shows that the proposed algorithm systematically improves on them in terms of both quality and processing time, something of crucial relevance since it allows most large, real-world problems to be dealt with in reasonable time.
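
A minimal sketch of a group-based, constraint-respecting mutation (an assumption about the flavour of operator, not the authors' exact set): one border unit is moved to a neighbouring region only if the donor region keeps its minimum size and stays contiguous, so no repair step is needed afterwards.

```python
import random

def mutate_partition(partition, adjacency, min_size):
    """partition: dict unit -> region id (int); adjacency: dict unit -> set of neighbouring units."""
    units = list(partition)
    random.shuffle(units)
    for u in units:
        neighbour_regions = {partition[v] for v in adjacency[u]} - {partition[u]}
        if not neighbour_regions:
            continue                       # u is interior to its region
        donor = [v for v in partition if partition[v] == partition[u]]
        if len(donor) <= min_size:
            continue                       # donor region would become too small
        if not region_stays_contiguous(donor, u, adjacency):
            continue                       # removing u would split the donor region
        partition[u] = random.choice(sorted(neighbour_regions))
        return partition                   # one constrained move per mutation
    return partition

def region_stays_contiguous(region_units, removed, adjacency):
    """Check connectivity of the donor region after removing one unit (depth-first search)."""
    remaining = set(region_units) - {removed}
    if not remaining:
        return False
    start = next(iter(remaining))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adjacency[v] & remaining:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == remaining
```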

Relevance:

60.00%

Publisher:

Abstract:

Automatic Text Summarization has been shown to be useful for Natural Language Processing tasks such as Question Answering or Text Classification, and for other related fields of computer science such as Information Retrieval. Since Geographical Information Retrieval can be considered an extension of the Information Retrieval field, the generation of summaries could be integrated into these systems as an intermediate stage, with the purpose of reducing document length. In this manner, the access time for information searching is improved, while relevant documents are still retrieved. Therefore, in this paper we propose the generation of two types of summaries (generic and geographical), applying several compression rates in order to evaluate their effectiveness in the Geographical Information Retrieval task. The evaluation has been carried out using GeoCLEF as the evaluation framework and following an Information Retrieval perspective, without considering the geo-reranking phase commonly used in these systems. Although single-document summarization did not perform well in general, the slight improvements obtained for some types of the proposed summaries, particularly those based on geographical information, lead us to believe that the integration of Text Summarization with Geographical Information Retrieval may be beneficial; consequently, the experimental set-up developed in this research work serves as a basis for further investigations in this field.
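
A minimal sketch of an extractive summarizer driven by a compression rate, under assumptions (plain term-frequency scoring; the paper's generic and geographical summarizers are more elaborate):

```python
import re
from collections import Counter

def summarize(text, compression=0.3):
    """Keep the top-scoring sentences until roughly `compression` of the original words remain."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'\w+', text.lower())
    freq = Counter(words)
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())),
                    reverse=True)
    budget = int(len(words) * compression)
    chosen, used = [], 0
    for s in scored:
        n = len(re.findall(r'\w+', s))
        if used + n > budget and chosen:
            break
        chosen.append(s)
        used += n
    # Restore original sentence order for readability.
    return ' '.join(s for s in sentences if s in chosen)
```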

Relevance:

60.00%

Publisher:

Abstract:

In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among its possible uses, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed to produce multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy way of reporting news is desired. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content differed from the tweets delivered by the news providers.
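
A naive baseline sketch of tweet generation from an article, under assumptions (a 140-character limit and simple frequency scoring; this is not one of the state-of-the-art summarizers evaluated in the paper):

```python
import re
from collections import Counter

TWEET_LIMIT = 140   # assumption: the classic character limit

def tweet_from_article(article, url=""):
    """Take the highest-frequency-scoring sentence and truncate it to the character budget."""
    sentences = re.split(r'(?<=[.!?])\s+', article.strip())
    freq = Counter(re.findall(r'\w+', article.lower()))
    best = max(sentences, key=lambda s: sum(freq[w] for w in re.findall(r'\w+', s.lower())))
    room = TWEET_LIMIT - (len(url) + 1 if url else 0)
    text = best if len(best) <= room else best[:room - 1].rstrip() + "…"
    return f"{text} {url}".strip()
```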

Relevance:

60.00%

Publisher:

Abstract:

This paper analyses how the productivity of the SUMA tax offices located in Spain evolved between 2004 and 2006, using the Malmquist Index based on Data Envelopment Analysis (DEA) models. It goes a step further by applying a smoothed bootstrap procedure, which improves the quality of the results by generalising the samples, so that the conclusions obtained from them can be applied in order to increase productivity levels. Additionally, the productivity effect is decomposed into two different components, efficiency change and technological change, with the objective of helping to clarify the role played by the managers and by the level of technology in the final performance figures.
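
A minimal sketch of the quantitative core under standard assumptions (input-oriented, constant-returns-to-scale DEA solved as a linear program, and the geometric-mean Malmquist index built from four such scores); the smoothed bootstrap layered on top of this is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X_ref, Y_ref, x_o, y_o):
    """X_ref: (m, n) inputs and Y_ref: (s, n) outputs of the reference technology;
    x_o, y_o: inputs/outputs of the evaluated office. Returns theta (<= 1 within-period)."""
    m, n = X_ref.shape
    s = Y_ref.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.hstack([-x_o.reshape(-1, 1), X_ref])    # X @ lambda <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y_ref])     # Y @ lambda >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -y_o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

def malmquist(X_t, Y_t, X_t1, Y_t1, j):
    """Geometric-mean Malmquist index for office j between period t and t+1."""
    d_t_t   = dea_efficiency(X_t,  Y_t,  X_t[:, j],  Y_t[:, j])
    d_t_t1  = dea_efficiency(X_t,  Y_t,  X_t1[:, j], Y_t1[:, j])
    d_t1_t  = dea_efficiency(X_t1, Y_t1, X_t[:, j],  Y_t[:, j])
    d_t1_t1 = dea_efficiency(X_t1, Y_t1, X_t1[:, j], Y_t1[:, j])
    return np.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
```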

Relevance:

60.00%

Publisher:

Abstract:

Many applications, including object reconstruction, robot guidance, and scene mapping, require the registration of multiple views of a scene to generate a complete geometric and appearance model of it. In real situations, the transformations between views are unknown and it is necessary to apply expert inference to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened research on this topic, motivating a plethora of new applications. Although these cameras have enough resolution and accuracy for many applications, some situations cannot be solved with general state-of-the-art registration methods due to the signal-to-noise ratio (SNR) and the resolution of the data provided. The problem of working with low-SNR data may, in general terms, appear in any 3D system, so novel solutions in this respect are necessary. In this paper, we propose a method, μ-MAR, able to both coarsely and finely register sets of 3D points provided by low-cost depth-sensing cameras (although it is not restricted to these sensors) into a common coordinate system. The method is able to overcome the noisy-data problem by means of a model-based multi-plane registration. Specifically, it iteratively registers 3D markers composed of multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scenario, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allow a qualitative and quantitative evaluation by means of visual inspection and the Hausdorff distance, respectively. The real-data experiments show the performance of the proposal using data acquired by a PrimeSense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The results show the good performance of μ-MAR in registering objects with high accuracy in the presence of noisy data, outperforming the existing methods.
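
A minimal sketch of the basic building block underneath any such registration step (shown for illustration, not the μ-MAR pipeline itself): the closed-form SVD (Kabsch) estimate of the rigid transform aligning matched 3D point sets.

```python
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (n, 3) arrays of corresponding points. Returns R (3x3), t (3,)
    minimizing ||R @ src.T + t - dst.T|| in the least-squares sense."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of the centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```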

Relevance:

60.00%

Publisher:

Abstract:

This paper proposes an adaptive algorithm for clustering cumulative probability distribution functions (c.p.d.f.) of a continuous random variable, observed in different populations, into the minimum number of homogeneous clusters, making no parametric assumptions about the c.p.d.f.'s. The proposed distance function for clustering c.p.d.f.'s is based on the Kolmogorov–Smirnov two-sample statistic. This test is able to detect differences in position, dispersion or shape of the c.p.d.f.'s. In our context, this statistic allows us to cluster the recorded data with a homogeneity criterion based on the whole distribution of each data set, and to decide whether or not it is necessary to add more clusters. In this sense, the proposed algorithm is adaptive, as it automatically increases the number of clusters only as necessary; therefore, there is no need to fix the number of clusters in advance. The outputs of the algorithm are, for each cluster, the common c.p.d.f. of all observed data in the cluster (the centroid) and the Kolmogorov–Smirnov statistic between the centroid and the most distant c.p.d.f. The proposed algorithm has been applied to a large data set of solar global irradiation spectra distributions. The results obtained make it possible to reduce all the information of more than 270,000 c.p.d.f.'s to only 6 different clusters, corresponding to 6 different c.p.d.f.'s.
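
A minimal sketch of the adaptive idea under simplifying assumptions (raw samples stand in for the c.p.d.f.'s, a pooled cluster sample plays the role of the centroid, and `threshold` is a hypothetical parameter): a new cluster is opened only when a sample is too far, in Kolmogorov–Smirnov distance, from every existing one.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_adaptive_clustering(samples, threshold=0.1):
    """samples: list of 1-D arrays (the observations behind each c.p.d.f.)."""
    clusters = []                                   # each cluster is a list of member samples
    for s in samples:
        dists = [ks_2samp(s, np.concatenate(c)).statistic for c in clusters]
        if dists and min(dists) <= threshold:
            clusters[int(np.argmin(dists))].append(s)
        else:
            clusters.append([s])                    # add a new cluster only when needed
    return clusters
```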

Relevance:

60.00%

Publisher:

Abstract:

This study evaluates the technical efficiency of the learning-teaching process in higher education using a three-stage procedure that offers advances over previous studies and improves the quality of the results. First, it utilizes a multiple-stage Data Envelopment Analysis (DEA) with contextual variables. Second, super-efficiency levels are calculated in order to prioritize the efficient units. Finally, through sensitivity analysis, the contribution of each key performance indicator (KPI) to the efficiency levels is established without omitting variables. The analytical data were collected from a survey completed by 633 tourism students during the 2011/12, 2012/13 and 2013/14 academic years. The results suggest that the level of satisfaction with the course, the diversity of materials and satisfaction with the teacher were the most important factors affecting teaching performance. Furthermore, the effect of the contextual variables was found to be significant.
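
A minimal sketch of the leave-one-KPI-out sensitivity idea (an assumption about the mechanics, not the study's exact procedure); for illustration it reuses the `dea_efficiency` scorer sketched in the tax-office entry above.

```python
import numpy as np

def kpi_sensitivity(X, Y, j):
    """X: (m, n) inputs, Y: (s, n) output KPIs, j: index of the evaluated unit.
    Returns, for each output KPI, the drop in the unit's score when that KPI is omitted.
    Assumes dea_efficiency(X_ref, Y_ref, x_o, y_o) from the earlier sketch is available."""
    base = dea_efficiency(X, Y, X[:, j], Y[:, j])
    drops = {}
    for r in range(Y.shape[0]):
        Y_red = np.delete(Y, r, axis=0)             # remove one KPI row
        reduced = dea_efficiency(X, Y_red, X[:, j], Y_red[:, j])
        drops[r] = base - reduced                   # large drop = KPI matters for this unit
    return drops
```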

Relevance:

60.00%

Publisher:

Abstract:

Abdominal Aortic Aneurysm is a disease related to a weakening of the aortic wall that can lead to rupture of the aorta and death. The detection of an unusual dilatation of a section of the aorta is indicative of this disease. However, it is difficult to diagnose because image-based diagnosis using computed tomography or magnetic resonance imaging is required. An automatic diagnosis system would make it possible to analyze abdominal magnetic resonance images and warn doctors if any anomaly is detected. We focus our research on magnetic resonance images because of the absence of ionizing radiation. Although there are proposals to identify this disease in magnetic resonance images, they need intervention from clinicians to be precise, and some of them are computationally expensive. In this paper we develop a novel approach to analyze abdominal magnetic resonance images and detect the lumen and the aortic wall. The method combines different algorithms in two stages to improve detection and segmentation, so it can be applied to similar problems with other types of images or structures. In the first stage, we use a spatial fuzzy C-means algorithm with morphological image analysis to detect and segment the lumen; subsequently, in the second stage, we apply a graph-cut algorithm to segment the aortic wall. The results obtained on the analyzed images are quite successful, with an average overlap of 79% between the automatic segmentation provided by our method and the aortic wall identified by a medical specialist. The main impact of the proposed method is that it works in a completely automatic way with a low computational cost, which is of great significance for any expert and intelligent system.
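
A minimal sketch of the first-stage clustering component under simplifying assumptions (standard fuzzy C-means on voxel intensities only; the paper's spatial variant, the morphological analysis and the graph cut are omitted):

```python
import numpy as np

def fuzzy_cmeans(values, c=3, m=2.0, iters=100, eps=1e-5):
    """values: 1-D array of voxel intensities. Returns (centroids, memberships)."""
    rng = np.random.default_rng(0)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per voxel
    for _ in range(iters):
        um = u ** m
        centroids = (um @ values) / um.sum(axis=1)      # fuzzy cluster centres
        dist = np.abs(values[None, :] - centroids[:, None]) + 1e-12
        new_u = 1.0 / (dist ** (2 / (m - 1)))           # standard FCM membership update
        new_u /= new_u.sum(axis=0)
        if np.abs(new_u - u).max() < eps:
            u = new_u
            break
        u = new_u
    return centroids, u
```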

Relevance:

60.00%

Publisher:

Abstract:

In this work we present a semantic framework suitable for use as a support tool for recommender systems. Our purpose is to use the semantic information provided by a set of integrated resources to enrich texts by conducting different NLP tasks: WSD, domain classification, semantic similarity and sentiment analysis. After obtaining this textual semantic enrichment, we are able to recommend similar content or even to rate texts according to different dimensions. First of all, we describe the main characteristics of the integrated semantic resources, with an exhaustive evaluation. Next, we demonstrate the usefulness of our resource in different NLP tasks and campaigns. Moreover, we present a combination of different NLP approaches that provides enough knowledge to be used as a support tool for recommender systems. Finally, we illustrate a case study with information related to movies and TV series to demonstrate that our framework works properly.
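
A minimal lexical baseline sketch of the recommendation step, under assumptions (plain TF-IDF cosine similarity; the paper would plug in the semantically enriched representations instead):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(query_text, catalogue, top_k=3):
    """catalogue: list of (title, description) pairs, e.g. movie/TV-series synopses."""
    texts = [desc for _, desc in catalogue]
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(texts + [query_text])
    query_vec, item_vecs = matrix[len(texts)], matrix[:len(texts)]
    sims = cosine_similarity(query_vec, item_vecs).ravel()
    ranked = sims.argsort()[::-1][:top_k]
    return [(catalogue[i][0], float(sims[i])) for i in ranked]
```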

Relevance:

60.00%

Publisher:

Abstract:

The Web 2.0 has resulted in a shift in how users consume and interact with information, and has introduced a wide range of new textual genres, such as reviews or microblogs, through which users communicate, exchange, and share opinions. The exploitation of all this user-generated content is of great value both for users and companies, in order to assist them in their decision-making processes. Given this context, the analysis and development of automatic methods that can help manage online information more quickly are needed. Therefore, this article proposes and evaluates a novel concept-level approach for ultra-concise abstractive opinion summarization. Our approach is characterized by the integration of syntactic sentence simplification, sentence regeneration and an internal concept representation into the summarization process, thus being able to generate abstractive summaries, which is one of the most challenging issues for this task. In order to analyze different settings for our approach, the use of the sentence regeneration module was made optional, leading to two different versions of the system (one with sentence regeneration and one without). For testing them, a corpus of 400 English texts, gathered from reviews and tweets belonging to two different domains, was used. Although both versions were shown to be reliable methods for generating this type of summary, the results obtained indicate that the version without sentence regeneration yielded better results, improving the results of a number of state-of-the-art systems by 9%, whereas the version with sentence regeneration proved to be more robust to noisy data.
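
A rough sketch of the ultra-concise idea under strong simplifications (frequent content words stand in for concepts; the syntactic simplification and sentence regeneration stages are omitted):

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "and", "or", "of", "to", "is", "it", "this", "that", "in", "was", "for"}

def ultra_concise_summary(texts, max_chars=100):
    """Pick the sentence with the best coverage of the most frequent content words across the reviews."""
    sentences = [s for t in texts for s in re.split(r'(?<=[.!?])\s+', t.strip()) if s]
    concepts = Counter(w for t in texts for w in re.findall(r'\w+', t.lower()) if w not in STOP)
    top = {w for w, _ in concepts.most_common(10)}
    best = max(sentences, key=lambda s: len(top & set(re.findall(r'\w+', s.lower()))))
    return best if len(best) <= max_chars else best[:max_chars - 1].rstrip() + "…"
```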

Relevance:

30.00%

Publisher:

Abstract:

Decision support systems (DSS) support business or organizational decision-making activities, which require access to information that is stored internally in databases or data warehouses, and externally on the Web, accessed by Information Retrieval (IR) or Question Answering (QA) systems. Graphical interfaces for querying these sources of information make it easy to constrain query formulation dynamically based on user selections, but they lack flexibility in query formulation, since the expressive power is limited by the user interface design. Natural language interfaces (NLI) are expected to be the optimal solution. However, especially for non-expert users, truly natural communication is the most difficult to realize effectively. In this paper, we propose an NLI that improves the interaction between the user and the DSS by means of referencing previous questions or their answers (i.e. anaphora, such as the pronoun reference in "What traits are affected by them?"), or by eliding parts of the question (i.e. ellipsis, such as "And to glume colour?" after the question "Tell me the QTLs related to awn colour in wheat"). Moreover, in order to overcome one of the main problems of NLIs, namely the difficulty of adapting an NLI to a new domain, our proposal is based on ontologies that are obtained semi-automatically from a framework that allows the integration of internal and external, structured and unstructured information. Therefore, our proposal can interface with databases, data warehouses, QA and IR systems. Because of the high ambiguity of natural language in the resolution process, our proposal is presented as an authoring tool that helps the user to query efficiently in natural language. Finally, our proposal is tested on a DSS case scenario about Biotechnology and Agriculture, whose knowledge base is the CEREALAB database as internal structured data, and the Web (e.g. PubMed) as external unstructured information.
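
A minimal sketch of the anaphora and ellipsis handling, built on pattern-based assumptions for the two example questions in the abstract (not the authors' ontology-driven resolution): a small dialogue state rewrites pronouns and elided fragments before the question is sent to the query back-end.

```python
import re

class DialogueContext:
    def __init__(self):
        self.last_entities = []        # e.g. the QTLs returned by the previous answer
        self.last_question = ""

    def record_answer(self, entities):
        """Store the entities of the last answer so later pronouns can refer to them."""
        self.last_entities = list(entities)

    def resolve(self, question):
        q = question.strip()
        if self.last_entities:
            # Anaphora: replace a plural pronoun with the stored referents.
            q = re.sub(r'\bthem\b', ", ".join(self.last_entities), q, flags=re.I)
        if q.lower().startswith("and ") and self.last_question:
            # Ellipsis: splice the fragment into the previous question's pattern.
            fragment = re.sub(r'^to\s+', '', q[4:].rstrip("?").strip(), flags=re.I)
            q = re.sub(r'related to .+', f'related to {fragment}?', self.last_question)
        self.last_question = q
        return q
```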