888 results for Database application, Biologia cellulare, Image retrieval
Abstract:
Current development platforms for designing spoken dialog services feature different kinds of strategies to help designers build, test, and deploy their applications. In general, these platforms are made up of several assistants that handle the different design stages (e.g. definition of the dialog flow, prompt and grammar definition, database connection, or debugging and testing of the running application). In spite of all the advances in this area, designing speech-based dialog services generally remains a time-consuming task that needs to be accelerated. In this paper we describe a complete development platform that reduces design time through several types of acceleration strategies based on information from the data model structure and database contents, as well as cumulative information obtained throughout the successive steps of the design. Thanks to these accelerations, interaction with the platform is simplified and the design is reduced, in most cases, to simple confirmations of the “proposals” that the platform automatically provides at each stage. Different kinds of proposals are available to complete the application flow, such as selecting which information slots should be requested from the user together, predefined templates for common dialogs, the most probable actions that make up each state defined in the flow, and different solutions to specific speech-modality problems such as the presentation of lists of results retrieved after querying the backend database. The platform also includes accelerations for creating speech grammars and prompts, and the SQL queries used to access the database at runtime. Finally, we describe the setup and results of simultaneous summative, subjective, and objective evaluations with different designers, carried out to test the usability of the proposed accelerations as well as their contribution to reducing design time and interaction.
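The abstract mentions that the platform automatically proposes the SQL queries used to access the backend database at runtime. A minimal sketch of that idea, assuming a purely illustrative table, slot names, and helper function (none of which come from the paper), might look like this:

```python
# Hypothetical sketch: propose a parameterized SQL query from the information slots
# filled in a dialog state. Table, column, and slot names are illustrative only.

def propose_sql_query(table, result_columns, filled_slots):
    """Build a parameterized SELECT from the slots already collected in the dialog."""
    where = " AND ".join(f"{slot} = ?" for slot in filled_slots)
    columns = ", ".join(result_columns)
    sql = f"SELECT {columns} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, [filled_slots[slot] for slot in filled_slots]

# Example: a flight-information dialog where origin and destination have been filled.
query, params = propose_sql_query(
    table="flights",
    result_columns=["flight_id", "departure_time"],
    filled_slots={"origin": "Madrid", "destination": "Paris"},
)
print(query)   # SELECT flight_id, departure_time FROM flights WHERE origin = ? AND destination = ?
print(params)  # ['Madrid', 'Paris']
```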
Abstract:
Evolvable Hardware (EH) is a technique that consists of using reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system is a fully FPGA-implemented, scalable EH platform in which the Reconfigurable processing Core (RC) can adaptively increase or decrease in size. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core able to change its size in both height and width. The system is used to implement image filters, which are generated autonomously by the evolutionary process. It is complemented with a camera that enables use of the platform in real-time applications.
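As a rough illustration of the evolutionary loop that drives such an evolvable-hardware platform, the sketch below mutates and selects candidate filter configurations; the encoding, fitness function, and population parameters are invented placeholders, not the paper's actual EA or bitstream format:

```python
# Generic evolutionary loop: evaluate candidate configurations, keep the best,
# and refill the population with mutated copies. Purely illustrative.
import random

def evaluate(configuration):
    """Placeholder fitness: in the real system this would configure the RC,
    run the candidate filter on an input image, and compare against a reference."""
    return -sum(configuration)  # dummy objective for illustration only

def mutate(configuration, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in configuration]

population = [[random.randint(0, 1) for _ in range(64)] for _ in range(8)]
for generation in range(100):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[:2]                                      # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(6)]
```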
Abstract:
NIR hyperspectral imaging (1000-2500 nm) combined with IDC allowed the detection of peanut traces down to an adulteration percentage of 0.01%. Contrary to PLSR, IDC does not require a calibration set, but uses both expert and experimental information and is suitable for the quantification of a compound of interest in complex matrices. The obtained results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures such as RT-PCR and ELISA.
Abstract:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, which affects performance since, even though the resources available per chip are increasing, the frequency of operation has stalled. Moreover, as the level of integration increases, it is difficult to keep defect density under control, so new fault-tolerance techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3) that allows dynamic and context-aware use of resources is implemented on a high-performance Wireless Sensor node (HiReCookie) to perform an image processing application.
Abstract:
Video analytics play a critical role in most recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the combination potential of popular features for vehicle classification has been reported. In this study the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, also taking the vehicle pose into account. The study unveils the limitations of single-feature based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification.
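As a hedged illustration of the classifier-fusion idea explored in the study, the sketch below combines per-feature vehicle/non-vehicle probabilities (e.g. from HOG-, PCA-, and Gabor-based classifiers) by weighted averaging; the weights, threshold, and function names are illustrative assumptions, not the paper's methodology:

```python
# Late fusion by weighted averaging of per-feature classifier probabilities.
import numpy as np

def fuse_scores(per_feature_probs, weights=None, threshold=0.5):
    """per_feature_probs: array of shape (n_classifiers, n_samples) with P(vehicle)."""
    probs = np.asarray(per_feature_probs, dtype=float)
    if weights is None:
        weights = np.full(probs.shape[0], 1.0 / probs.shape[0])
    fused = np.average(probs, axis=0, weights=weights)
    return fused, fused >= threshold

# Example: three feature-specific classifiers scoring two candidate patches.
hog, pca, gabor = [0.9, 0.4], [0.8, 0.3], [0.7, 0.6]
scores, decisions = fuse_scores([hog, pca, gabor])
print(scores, decisions)   # fused probabilities and vehicle/non-vehicle decisions
```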
Abstract:
The SWISS-PROT group at EBI has developed the Proteome Analysis Database, utilising existing resources to provide comparative analysis of the predicted protein coding sequences of the complete genomes of bacteria, archaea and eukaryotes (http://www.ebi.ac.uk/proteome/). The two main projects used, InterPro and CluSTr, give a new perspective on families, domains and sites and cover 31–67% (InterPro statistics) of the proteins from each of the complete genomes. CluSTr covers the three complete eukaryotic genomes and the incomplete human genome data. The Proteome Analysis Database is accompanied by a program designed to carry out InterPro proteome comparisons for any one proteome against one or more of the other proteomes in the database.
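A minimal sketch (not the EBI tool itself) of what an InterPro-based proteome comparison boils down to, using invented proteome sets of InterPro accessions:

```python
# Compare the InterPro entries assigned to two proteomes: shared vs. unique entries.

def compare_proteomes(interpro_a, interpro_b):
    a, b = set(interpro_a), set(interpro_b)
    return {"shared": a & b, "only_a": a - b, "only_b": b - a}

proteome_a = {"IPR000719", "IPR001245", "IPR011009"}   # illustrative kinase-related entries
proteome_b = {"IPR000719", "IPR013783"}
print(compare_proteomes(proteome_a, proteome_b))
```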
Abstract:
rSNP_Guide is a novel curated database system for the analysis of transcription factor (TF) binding to target sequences in regulatory gene regions altered by mutations. It accumulates experimental data on naturally occurring site variants in regulatory gene regions and on site-directed mutations. The system also contains web tools for SNP analysis, i.e., an applet that applies weight matrices to predict regulatory site candidates altered by a mutation. The current version of rSNP_Guide is supplemented by six sub-databases: (i) rSNP_DB, on DNA–protein interactions affected by mutations; (ii) SYSTEM, on experimental systems; (iii) rSNP_BIB, on citations to original publications; (iv) SAMPLES, on experimentally identified sequences of known regulatory sites; (v) MATRIX, on weight matrices of known TF sites; and (vi) rSNP_Report, on characteristic examples of successful rSNP_Tools implementation. These databases are useful for the analysis of natural SNPs and site-directed mutations. They are available through the Web at http://wwwmgs.bionet.nsc.ru/mgs/systems/rsnp/.
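To make the weight-matrix idea concrete, the toy sketch below scores a reference and a mutated allele against a position weight matrix; the matrix values and the site are fabricated for illustration and do not correspond to any entry in the MATRIX sub-database:

```python
# Score a short sequence window against a toy position weight matrix (PWM)
# and compare the reference and mutated alleles.

PWM = [  # one row per site position (toy 4-bp site); values are invented
    {"A": 0.2, "C": 0.1, "G": 1.5, "T": 0.1},
    {"A": 1.2, "C": 0.2, "G": 0.3, "T": 0.1},
    {"A": 0.1, "C": 0.2, "G": 0.2, "T": 1.4},
    {"A": 0.3, "C": 1.1, "G": 0.4, "T": 0.2},
]

def pwm_score(sequence):
    return sum(row[base] for row, base in zip(PWM, sequence))

reference, mutated = "GATC", "GTTC"   # SNP at the second position
print(pwm_score(reference), pwm_score(mutated))  # a drop suggests weakened TF binding
```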
Abstract:
Automatic Text Summarization has been shown to be useful for Natural Language Processing tasks such as Question Answering or Text Classification, and for related fields of computer science such as Information Retrieval. Since Geographical Information Retrieval can be considered an extension of Information Retrieval, the generation of summaries could be integrated into these systems as an intermediate stage, with the purpose of reducing document length. In this manner, the access time for information searching will be improved, while relevant documents will still be retrieved. Therefore, in this paper we propose the generation of two types of summaries (generic and geographical), applying several compression rates, in order to evaluate their effectiveness in the Geographical Information Retrieval task. The evaluation has been carried out using GeoCLEF as the evaluation framework and following an Information Retrieval perspective, without considering the geo-reranking phase commonly used in these systems. Although single-document summarization has not performed well in general, the slight improvements obtained for some types of the proposed summaries, particularly those based on geographical information, lead us to believe that the integration of Text Summarization with Geographical Information Retrieval may be beneficial; consequently, the experimental set-up developed in this research work serves as a basis for further investigations in this field.
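As a hedged sketch of compression-rate-based extractive summarization of the kind evaluated here, the code below scores sentences by word frequency with an optional boost for geographical terms and keeps the top fraction; the scoring scheme and gazetteer are illustrative placeholders, not the authors' system:

```python
# Keep the top fraction of sentences (compression rate) ranked by a simple
# frequency score, optionally boosted when geographical terms appear.
import re
from collections import Counter

def summarize(text, compression=0.3, geo_terms=frozenset()):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        base = sum(freq[t] for t in tokens) / (len(tokens) or 1)
        return base + sum(t in geo_terms for t in tokens)   # geographical boost
    keep = max(1, int(len(sentences) * compression))
    ranked = sorted(sentences, key=score, reverse=True)[:keep]
    return [s for s in sentences if s in ranked]            # preserve original order

print(summarize("Madrid is the capital of Spain. It rains a lot in Galicia. "
                "The museum opens daily.", compression=0.34, geo_terms={"madrid", "spain"}))
```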
Abstract:
The study of the Neogene (Miocene to Holocene) stratigraphic record on the glaciated Atlantic margin of NW Europe has, to date, largely been undertaken on an ad-hoc basis. Whereas a systematic approach to understanding the stratigraphic development of Palaeogene and older strata has been undertaken in areas such as the North Sea, West of Shetland and Norway, the problem of establishing a Neogene framework has been only partly addressed by academia and the oil industry. In most cases where a Neogene stratigraphy has been constructed, this has been largely in response to problem solving and risk assessment in a restricted area. Nevertheless, in the past few years it has become increasingly apparent that there is a common history in the Neogene development of the passive Atlantic margin of NW Europe, between mid-Norway and SW Ireland. The inspection and interpretation of an extensive geophysical and geological database has identified several regionally significant and correlatable unconformities along this continental margin. Thus, a regional approach to the stratigraphical development of the Neogene succession on the glaciated European Atlantic margin is undertaken in this volume.
Abstract:
"(Supported in part by Contract AT(11-1)-1018 with the U.S. Atomic Energy Commission and the Advanced Research Projects Agency.)"
Abstract:
In some forms of tourism, and perhaps particularly in the case of special interest tourism, it can be argued that tourism encounters are service relationships with emotional attachment through the special interest focus and a level of enduring involvement on the part of participants. This involvement is two-fold: first, an interest in the activity; second, a sharing with like-minded people in a social world that extends from home to tourist destination and back. Intimacies in tourism can thus be interpreted through the model of the relationship cycle comprising the stages of (A) Acquaintance, (B) Buildup, (C) Continuation and (D) Dissolution. The paper builds upon this concept by utilising ideas of other-centredness and self-centredness in personal relationships, and extends the concept of other-centredness to host environments. It also suggests that, in the academic literature about place, location may be secondary in that the quality of experience is primarily determined by the intimacies that exist between people at that place, especially those existing between visitors.
Abstract:
Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer from similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user in a single interface. But the question of how, and what, initial metadata to enter into a database remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, we can find some similarity across groups of users regardless of their reasoning. For example, a search on Amazon.com also returns other products, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images, which we have visualised using both traditional methods and the Amazon.com method. We report on the findings of this comparative investigation in a case study setting involving a group of randomly selected participants. We conclude with the recommendation that, in combination, the traditional and averaging methods would enhance current database visualisation, searching, and browsing facilities.
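A small sketch of the Amazon.com-style "averaging" of user navigation described above: counting how often images are viewed in the same browsing session and recommending the most frequent co-occurrences. The session data and identifiers are invented:

```python
# Build pairwise co-view counts from browsing sessions and answer
# "users who viewed this image also viewed ..." queries.
from collections import Counter
from itertools import combinations

sessions = [
    ["img_01", "img_07", "img_03"],
    ["img_07", "img_03"],
    ["img_01", "img_03", "img_09"],
]

co_views = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_views[(a, b)] += 1

def also_viewed(image, top_n=3):
    related = Counter()
    for (a, b), count in co_views.items():
        if image == a:
            related[b] += count
        elif image == b:
            related[a] += count
    return related.most_common(top_n)

print(also_viewed("img_03"))   # [('img_01', 2), ('img_07', 2), ('img_09', 1)]
```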
Abstract:
Many emerging applications benefit from the extraction of geospatial data specified at different resolutions for viewing purposes. The data must also be topologically accurate and up-to-date, as it often represents changing real-world phenomena. Current multiresolution schemes use complex opaque data types, which limit the capacity for in-database object manipulation. By using z-values and B+-trees to support multiresolution retrieval, objects are fragmented in such a way that updates to objects or object parts are executed using standard SQL (Structured Query Language) statements rather than procedural functions. Our approach is compared to a current model that uses complex data types indexed under a 3D (three-dimensional) R-tree, and shows better performance for retrieval over realistic window sizes and data loads. Updates with the R-tree are slower and preclude its use in time-critical applications, whereas, predictably, projecting the problem onto a one-dimensional index allows constant updates using z-values to be implemented more efficiently.
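To illustrate the z-value approach, the sketch below shows a standard Morton (bit-interleaving) encoding, which maps 2D cells to one-dimensional keys that a B+-tree or a plain SQL integer index can range-scan; the bit width, table, and column names are assumptions for illustration, not the paper's schema:

```python
# Morton (z-order) encoding: interleave the bits of x and y so spatially close
# cells receive nearby one-dimensional keys.

def z_value(x, y, bits=16):
    """Interleave the bits of x and y into a single Morton key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits in even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits in odd positions
    return key

print(z_value(5, 3))   # 0b011011 = 27

# Window retrieval then reduces to plain SQL over the indexed key, e.g.:
#   SELECT geom_fragment FROM map_objects
#   WHERE z_key BETWEEN :z_low AND :z_high AND resolution <= :level;
```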