924 resultados para EUREKA (Information retrieval system)


Relevância:

100.00%

Publicador:

Resumo:

The value of knowing about data availability and system accessibility is analyzed through theoretical models of Information Economics. When a user submits a query, it is important for the user to learn whether the system is inaccessible or the data is unavailable, rather than to receive no response at all. In practice, a system can respond in several ways: it can display nothing (e.g., a traffic light that does not operate, a browser that loads indefinitely, a telephone that is never answered); it can display random noise (e.g., a traffic light showing random signals, a browser returning disorderly results, an automated voice message that does not clarify the situation); or it can emit a special signal indicating that the system is not operating (e.g., a blinking amber light indicating that the traffic light is down, a browser reporting that the site is unavailable, a voice message regretting that the service is not available). This article develops a model to assess the value of such information to the user by employing the information structure model prevailing in Information Economics. Examples related to data accessibility in centralized and distributed systems are provided for illustration.
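The comparison of response regimes above can be made concrete with a small numeric sketch. This is a hedged illustration, not the paper's model: the prior, the payoff numbers, and the symmetric-channel signal are all invented for the example. A pure-noise response carries no information (the posterior equals the prior), whereas a reliable "service unavailable" signal lets the user avoid the bad action.

```python
# Hedged sketch: expected value of information for different response regimes.
# The prior, payoffs, and symmetric-channel model are illustrative assumptions.

def expected_value(prior_up, payoffs, signal_quality):
    """Expected payoff when the user acts on a signal that reports the true
    system state with probability `signal_quality` (0.5 = pure noise,
    1.0 = a reliable 'service unavailable' message)."""
    p_up = prior_up
    best = 0.0
    for reading in ("up", "down"):
        # P(reading | state) under a symmetric channel.
        p_read_if_up = signal_quality if reading == "up" else 1 - signal_quality
        p_read_if_down = 1 - signal_quality if reading == "up" else signal_quality
        p_reading = p_up * p_read_if_up + (1 - p_up) * p_read_if_down
        if p_reading == 0:
            continue
        post_up = p_up * p_read_if_up / p_reading  # Bayesian posterior
        # Choose the better of the two actions given the posterior.
        act_query = post_up * payoffs["query_up"] + (1 - post_up) * payoffs["query_down"]
        best += p_reading * max(act_query, payoffs["leave"])
    return best

payoffs = {"query_up": 10.0, "query_down": -5.0, "leave": 0.0}
no_signal = expected_value(0.5, payoffs, 0.5)   # random noise: uninformative
perfect = expected_value(0.5, payoffs, 1.0)     # explicit outage signal
value_of_signal = perfect - no_signal
```

With these illustrative payoffs the explicit outage signal is worth 2.5 payoff units over pure noise, since it lets the user leave whenever the system is down.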

Relevância:

100.00%

Publicador:

Resumo:

In this paper we study some of the characteristics of art painting image color semantics. We analyze the color features of different artists and art movements. The analysis includes exploration of hue, saturation and luminance. We also use quartile analysis to obtain the distribution of the dispersion of defined groups of paintings and to measure the degree of purity for these groups. A special software system, "Art Painting Image Color Semantics" (APICSS), was created for image analysis and retrieval. The obtained results can be used for automatic classification of art paintings in image retrieval systems where indexing is based on color characteristics.
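APICSS itself is not publicly available, so as a hedged stand-in for the hue/saturation/luminance and quartile analysis described above, a minimal pipeline using only the standard library might look like this (the pixel values and the interquartile range as the dispersion measure are illustrative assumptions):

```python
# Hedged sketch of an HSL + quartile analysis pipeline; not the APICSS code.
import colorsys
from statistics import quantiles

def hsl_features(rgb_pixels):
    """Convert RGB pixels (0-255 tuples) to per-channel H, S, L lists."""
    hs, ss, ls = [], [], []
    for r, g, b in rgb_pixels:
        # colorsys works on 0..1 floats and returns (hue, lightness, saturation).
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        hs.append(h)
        ss.append(s)
        ls.append(l)
    return hs, ss, ls

def quartile_dispersion(values):
    """Interquartile range, a simple dispersion measure for a group of values."""
    q1, _, q3 = quantiles(values, n=4)
    return q3 - q1

# Invented sample: two reddish and two bluish pixels.
pixels = [(200, 30, 30), (180, 40, 35), (40, 40, 200), (30, 50, 180)]
hs, ss, ls = hsl_features(pixels)
iqr_luminance = quartile_dispersion(ls)
```

A real analysis would aggregate such per-channel statistics over all pixels of each painting and then compare the resulting distributions across artists or movements.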

Relevância:

100.00%

Publicador:

Resumo:

In this paper, we present an innovative topic segmentation system based on a new informative similarity measure that takes word co-occurrence into account in order to avoid relying on existing linguistic resources such as electronic dictionaries or lexico-semantic databases such as thesauri or ontologies. Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts, and it has been used extensively in information retrieval and text summarization. In particular, our architecture proposes a language-independent topic segmentation system that solves three main problems evidenced by previous research: systems based solely on lexical repetition, which show reliability problems; systems based on lexical cohesion using existing linguistic resources, which are usually available only for dominant languages and consequently do not apply to less favored languages; and systems that require previously harvested training data. For that purpose, we use only statistics on words and word sequences computed from a set of texts. This provides a flexible solution that may narrow the gap between dominant and less favored languages, thus allowing equivalent access to information.

Relevância:

100.00%

Publicador:

Resumo:

With the recent explosion in the complexity and amount of digital multimedia data, there has been a huge impact on the operations of various organizations in distinct areas such as government services, education, medical care, business and entertainment. To satisfy the growing demand for multimedia data management systems, an integrated framework called DIMUSE is proposed and deployed for distributed multimedia applications, offering a full scope of multimedia-related tools and providing appealing experiences for users. This research mainly focuses on video database modeling and retrieval by addressing a set of core challenges. First, a comprehensive multimedia database modeling mechanism called Hierarchical Markov Model Mediator (HMMM) is proposed to model high-dimensional media data, including video objects, low-level visual/audio features, and historical access patterns and frequencies. The associated retrieval and ranking algorithms are designed to support not only general queries but also complicated temporal event pattern queries. Second, system training and learning methodologies are incorporated so that user interests are mined efficiently to improve retrieval performance. Third, video clustering techniques are proposed to continuously increase search speed and accuracy by architecting a more efficient multimedia database structure. A distributed video management and retrieval system is designed and implemented to demonstrate the overall performance. The proposed approach is further customized for a mobile-based video retrieval system to address the perception subjectivity issue by considering individual users' profiles. Moreover, to deal with security and privacy issues in distributed multimedia applications, DIMUSE also incorporates a practical framework called SMARXO, which supports multilevel multimedia security control.
SMARXO efficiently combines role-based access control (RBAC), XML and an object-relational database management system (ORDBMS) to achieve proficient security control. A distributed multimedia management system named DMMManager (Distributed MultiMedia Manager) is developed with the proposed framework DIMUSE to support multimedia capturing, analysis, retrieval, authoring and presentation in one single framework.
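The HMMM model is not reproduced here, but one of its ingredients named above, biasing retrieval with historical access patterns and frequencies, can be sketched. Everything below is a hedged assumption: the blending weight, the distance-to-similarity mapping, and the data layout are invented, not the dissertation's formulation.

```python
# Hedged sketch: blending low-level feature similarity with historical access
# frequencies when ranking videos. Weights and field names are illustrative.
from math import dist

def rank(query_vec, videos, accesses, alpha=0.7):
    """Score = alpha * feature similarity + (1 - alpha) * normalized popularity."""
    total = sum(accesses.values()) or 1
    scored = []
    for vid, vec in videos.items():
        sim = 1.0 / (1.0 + dist(query_vec, vec))   # distance -> similarity
        pop = accesses.get(vid, 0) / total          # historical access share
        scored.append((alpha * sim + (1 - alpha) * pop, vid))
    return [vid for _, vid in sorted(scored, reverse=True)]

videos = {"v1": (0.9, 0.1), "v2": (0.5, 0.5), "v3": (0.1, 0.9)}
accesses = {"v2": 80, "v3": 20}   # v2 is frequently viewed
order = rank((0.8, 0.2), videos, accesses)
```

Here v2 outranks v1 despite being farther from the query in feature space, because its access history pulls it up, the kind of popularity-aware ranking that learning from access patterns enables.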

Relevância:

100.00%

Publicador:

Resumo:

Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task; rather, people constantly collaborate with one another while searching and retrieving information. But the very cause of the popularity of multimedia data, the huge and varied amount of information a single data object can carry, makes its management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alphanumeric data, so trying to manage them with frameworks and rationales designed for primitive alphanumeric data is inefficient. An index structure is the backbone of any database management system, and the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed which seamlessly accommodates, within one single framework, the atypical multidimensional representation and the semantic information carried by different multimedia data. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help to customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and better performance of the proposed framework over current state-of-the-art approaches.
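The dissertation's generalized index is not reproduced here; as a hedged illustration of why multidimensional feature vectors need a spatial index rather than a relational B-tree, here is a minimal k-d tree with nearest-neighbour search (a classic structure used as a stand-in, not the proposed one):

```python
# Minimal k-d tree: a classic multidimensional index, shown as a stand-in
# for the dissertation's generalized structure.
from math import dist

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    if node is None:
        return best
    if best is None or dist(node["point"], target) < dist(best, target):
        best = node["point"]
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if abs(diff) < dist(best, target):   # the far branch may still hold a closer point
        best = nearest(far, target, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
closest = nearest(tree, (9, 2))
```

A real multimedia index must additionally handle high dimensionality and semantic annotations, which is precisely where the proposed generalized structure goes beyond a plain k-d tree.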

Relevância:

100.00%

Publicador:

Resumo:

Over the years, the definition of the term health has broadened. At the same time, the concept of health care has evolved, leading to changes in the approach to delivering health services and hence in their management. In this context, nephrology services currently pursue quality as both a technical and a social need. In view of these innovations and the quest for quality, the general objective was formulated: to develop a quality assessment protocol for the dialysis service of the Onofre Lopes University Hospital. This is an intervention project carried out through action research in four steps. First, a search of the scientific literature identified which quality indicators would apply to a dialysis unit; the following were selected: infection rate at the hemodialysis access site, microbiological control of the water used for hemodialysis, and a user satisfaction index. Next, through critical reflection on the findings of the previous step, three data collection instruments of the interview-form type were drawn up and applied between October and November 2015. The information obtained was complemented by an information retrieval technique. The results were organized in graphs and tables and analyzed using a qualitative, exploratory approach. A reflective analysis of the data was then performed, and the diagnosis of the reality studied was confronted with the literature. The data produced in this study revealed that the Dialysis Unit of HUOL leaves much to be desired, as several weaknesses were identified in its structure.
Given this finding, suggestions for improvement were proposed, as a contribution and to guide future actions; they should be implemented and monitored so that these difficulties are overcome, allowing an appropriate organizational restructuring and resulting in an improved public service. It was concluded that for hemodialysis treatment to achieve positive results, the following are necessary: adequate physical structure and infrastructure; a multidisciplinary team that is specialized, trained and sufficient in number; well-designed processes that give professionals standards to follow, decreasing the chance of error; and a risk management system to detect and control situations that endanger patient safety.

Relevância:

100.00%

Publicador:

Resumo:

Information architecture supports information retrieval by users in the Web environment, and its design should be centered on the information user, favoring usability. The Faculty of Industrial Engineering and Tourism of the Universidad Central "Marta Abreu" de Las Villas lacks a site that supports the dissemination of information to its members. The objectives of the study are: 1) to conduct a user study to identify the information needs of users; 2) to establish information architecture guidelines for the institution, focused on its users; 3) to design the information architecture for the institution; and 4) to evaluate the resulting proposal. To obtain results, methods at the theoretical and empirical levels were used, along with techniques that supported the design and evaluation. The intranet of the Faculty of Industrial Engineering and Tourism was designed, and the proposed design was evaluated to validate the results.

Relevância:

100.00%

Publicador:

Resumo:

The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/), which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. Our component automatically classifies documents into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), but also into the categories of a high-level serious games categorization provisionally developed by RAGE. English and French are already covered by the provided web service, and the framework can be extended to support additional languages.
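The CNA pipeline itself is not reproduced here; as a hedged sketch of the classification step only, documents can be scored against bag-of-words category profiles by cosine similarity. The category names echo top-level ACM CCS branches, but the keyword lists and the whole scoring scheme are invented for illustration:

```python
# Hedged sketch of keyword-profile text categorization; not the ReaderBench
# implementation. Profiles and keywords are invented for the example.
from collections import Counter
from math import sqrt

PROFILES = {
    "Information systems": Counter("database query index retrieval storage".split()),
    "Computing methodologies": Counter("learning neural training model inference".split()),
}

def classify(text):
    doc = Counter(text.lower().split())

    def cos(a, b):
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Assign the category whose keyword profile the document resembles most.
    return max(PROFILES, key=lambda c: cos(doc, PROFILES[c]))

label = classify("We build an index over the database to speed up query retrieval")
```

A production classifier would of course use the full CNA discourse representation and trained category models rather than hand-written keyword lists.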

Relevância:

100.00%

Publicador:

Resumo:

This research addresses practice-related problems from a medico-legal perspective and aims to provide a working tool that helps GPs comply with best-practice protocols. The resulting bag was developed in collaboration with General Practitioners, clinicians and members of the Medical Defence Union. Using proven methods developed within the Healthcare & Patient Safety Lab (e.g. DOME, Ambulance) to establish an evidence-based brief, this research used task, equipment and consumables analysis to determine minimum requirements and preferred layouts for task optimisation. The research established that clinicians require three distinct functions in their workspace: laying out, organisation and information retrieval. Feedback from clinicians indicates that this working tool allows them to access information and equipment wherever they may be, and suggests an improvement over current practice. The research is now in its second year, during which the design of the bag will be refined and tested. Lifestyle and demographic changes such as the ageing population and the increased prevalence of chronic diseases require more consistent standards of primary care, and care that is well coordinated and integrated (Imison et al., 2011). Many guidelines exist relating to general practice and the doctor's bag (NSLMC, 2008; RACGP, 2010; RCGP, 2008; Hiramanek, 2004), yet there is no standard in the UK that regulates the shape and materials of the bag or its contents; doctors may use any sort of vessel to transport their equipment and consumables to a patient's location. Furthermore, treating a patient in their own home, outside an ideal clinical environment, presents its own complications. A looks-like, works-like bag prototype and information system will be used in clinical trials, the results of which will determine the manufacture of a new, standardised bag for clinical treatment used by members of the Medical Defence Union.

Relevância:

100.00%

Publicador:

Resumo:

The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information relevant to their research. In response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged, striving to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale to scan the whole publicly available biomedical literature, extract and aggregate the information found within, and automatically normalize the variability of natural language statements. Among these tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine learning and are trained on narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by end users.
This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing their general credibility. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes and gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as types and directions for these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
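The thesis' actual post-processing model is not reproduced here, but its shape, an unsupervised signal (how often an event pattern recurs across the database) combined with a supervised score, can be sketched. Every feature, weight and threshold below is an invented placeholder, not the thesis' trained model:

```python
# Hedged sketch of a post-processing filter for event databases. The
# recurrence signal is unsupervised; the linear score stands in for a
# trained supervised model. Weights and thresholds are invented.
from collections import Counter

def filter_events(events, weights=(1.5, 0.8), bias=-1.0, threshold=0.0):
    """Keep events whose combined score clears the threshold.
    Each event is (trigger_word, argument_count)."""
    pattern_freq = Counter(trigger for trigger, _ in events)
    total = len(events)
    kept = []
    for trigger, n_args in events:
        recurrence = pattern_freq[trigger] / total          # unsupervised signal
        score = weights[0] * recurrence + weights[1] * min(n_args, 3) + bias
        if score > threshold:
            kept.append((trigger, n_args))
    return kept

events = [("phosphorylates", 2), ("phosphorylates", 2), ("binds", 1),
          ("xyzznoise", 0)]   # last entry mimics a false-positive extraction
clean = filter_events(events)
```

The intuition matches the text: an event pattern that recurs across millions of documents is less likely to be an extraction error than a one-off, argument-less trigger.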

Relevância:

100.00%

Publicador:

Resumo:

While news stories are an important traditional medium to broadcast and consume news, microblogging has recently emerged as a place where people can discuss, disseminate, collect or report information about news. However, the massive information in the microblogosphere makes it hard for readers to keep up with these real-time updates. This is especially a problem when it comes to breaking news, where people are more eager to know "what is happening". Therefore, this dissertation is intended as an exploratory effort to investigate computational methods to augment human effort when monitoring the development of breaking news on a given topic from a microblog stream by extractively summarizing the updates in a timely manner. More specifically, given an interest in a topic, either entered as a query or presented as an initial news report, a microblog temporal summarization system is proposed to filter microblog posts from a stream with three primary concerns: topical relevance, novelty, and salience. Considering the relatively high arrival rate of microblog streams, a cascade framework consisting of three stages is proposed to progressively reduce the quantity of posts. For each step in the cascade, this dissertation studies methods that improve over current baselines. In the relevance filtering stage, query and document expansion techniques are applied to mitigate sparsity and vocabulary mismatch issues. The use of word embedding as a basis for filtering is also explored, using unsupervised and supervised modeling to characterize lexical and semantic similarity. In the novelty filtering stage, several statistical ways of characterizing novelty are investigated and ensemble learning techniques are used to integrate results from these diverse techniques. These results are compared with a baseline clustering approach using both standard and delay-discounted measures.
In the salience filtering stage, because of the real-time prediction requirement, a method of learning verb phrase usage from past relevant news reports is used in conjunction with some standard measures for characterizing writing quality. Following a Cranfield-like evaluation paradigm, this dissertation includes a series of experiments to evaluate the proposed methods for each step, and for the end-to-end system. New microblog novelty and salience judgments are created, building on existing relevance judgments from the TREC Microblog track. The results point to future research directions at the intersection of social media, computational journalism, information retrieval, automatic summarization, and machine learning.
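The three-stage cascade described above can be sketched end to end. Each stage below is a deliberately simple, hedged stand-in for the dissertation's methods: keyword overlap for relevance (instead of query expansion and embeddings), Jaccard distance from previously emitted posts for novelty (instead of the ensemble), and post length as a crude salience proxy; thresholds are invented.

```python
# Hedged sketch of a relevance -> novelty -> salience cascade over a
# microblog stream. Each stage is a simple stand-in for the real methods.

def cascade(posts, query_terms, novelty_min=0.5, salience_min=4):
    emitted, summary = [], []
    for post in posts:
        words = set(post.lower().split())
        if not words & query_terms:                      # stage 1: relevance
            continue
        novel = all(
            1 - len(words & prev) / len(words | prev) >= novelty_min
            for prev in emitted)                          # stage 2: novelty
        if not novel:
            continue
        if len(words) < salience_min:                     # stage 3: salience
            continue
        emitted.append(words)
        summary.append(post)
    return summary

stream = ["earthquake hits the city center",
          "earthquake hits the city center again",     # near-duplicate: dropped
          "lovely weather today",                      # irrelevant: dropped
          "rescue teams arrive after the earthquake"]
updates = cascade(stream, {"earthquake", "rescue"})
```

The cascade order matters for throughput: the cheap relevance test discards most of the stream before the quadratic novelty comparison runs, which is exactly the motivation the abstract gives for the staged design.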

Relevância:

100.00%

Publicador:

Resumo:

At present, in large precast concrete enterprises, the management of precast concrete components is chaotic. Most enterprises rely on labor-intensive manual input, which is time-consuming, laborious and error-prone. Slightly better-equipped enterprises manage components through bar codes or manually printed serial numbers; however, this too is labor-intensive and is limited by the external environment, as serial numbers blur or are lost, causing serious problems for production traceability and quality accountability. Therefore, to support an enterprise's rapid development and meet the needs of the time, achieving automated production management has become a major challenge for modern enterprises. To address inefficiency in production and product traceability, this thesis introduces RFID technology into the production of PHC tubular piles. By designing a production management system for precast concrete components, the enterprise gains control over the entire production process and realizes the informatization of its production management. RFID technology is already widely used in fields such as access control, charging management and logistics. The system adopts passive RFID tags, which are waterproof, shockproof and interference-resistant, making them suitable for the actual working environment. A tag is bound to the precast component's steel cage (the structure of the PHC tubular pile before concrete placement), so that each PHC tubular pile has a unique ID number. The precast component then proceeds through the production procedure: placing the steel cage into the mold, mold clamping, pouring the concrete (feeding), stretching, centrifugalizing, maintenance, mold removal, and welding of splices.
At every step of the procedure, the information on the precast component can be read with an RFID reader. Using a portable smart device connected to the database, users can conveniently check, query and manage the production information. The system can also trace production parameters and the person in charge, realizing full traceability of the information. This system overcomes the disadvantages common among precast component manufacturers, such as inefficiency, proneness to error, time consumption, labor intensity and low information relevance. It helps improve production management efficiency and can produce good economic and social benefits, so it has practical value.
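The traceability record described above can be sketched as a small data model: each tag ID maps to a timestamped log of production steps and the operator in charge. The step names follow the procedure listed in the text, but the class, its API, and the tag/operator identifiers are invented for illustration:

```python
# Hedged sketch of an RFID traceability log for PHC pile production.
# Step names follow the thesis' procedure; the API itself is invented.
from datetime import datetime, timezone

STEPS = ["cage_in_mold", "mold_clamping", "pouring", "stretching",
         "centrifugalizing", "maintenance", "mold_removal", "welding"]

class PileTracker:
    def __init__(self):
        self.log = {}   # tag_id -> list of (step, operator, timestamp)

    def record(self, tag_id, step, operator):
        """Log one production step read from the tag at a workstation."""
        if step not in STEPS:
            raise ValueError(f"unknown step: {step}")
        self.log.setdefault(tag_id, []).append(
            (step, operator, datetime.now(timezone.utc)))

    def trace(self, tag_id):
        """Full production history for one pile, for quality accountability."""
        return [(step, operator) for step, operator, _ in self.log.get(tag_id, [])]

tracker = PileTracker()
tracker.record("E200-001", "cage_in_mold", "operator_7")
tracker.record("E200-001", "pouring", "operator_3")
history = tracker.trace("E200-001")
```

Because the tag ID is fixed to the steel cage before pouring, every later reading appends to the same history, which is what makes the person-in-charge traceable per step.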

Relevância:

100.00%

Publicador:

Resumo:

International audience

Relevância:

100.00%

Publicador:

Resumo:

The volume of data in libraries has grown enormously in recent years, as has the complexity of its sources and information formats, hindering its management and access, especially as support for decision making. Since good library management involves the integration of strategic indicators, implementing a Data Warehouse (DW) that adequately manages such a quantity of information, and its complex mix of data sources, becomes an interesting alternative to consider. This article describes the design and implementation of a decision support system (DSS) based on DW techniques for the library of the Universidad de Cuenca. The study uses a holistic methodology, proposed by Siguenza-Guzman et al. (2014), for the comprehensive evaluation of libraries. This methodology evaluates the collection and the services, incorporating important elements for library management such as service performance, quality control, collection use, and interaction with the user. Based on this analysis, a DW architecture is proposed that integrates, processes and stores the data. Finally, the stored data are analyzed and visualized through online analytical processing (OLAP) tools. Initial implementation tests confirm the viability and effectiveness of the proposed approach by successfully integrating multiple heterogeneous data sources and formats, enabling library directors to generate customized reports and even allowing the transactional processes carried out daily to mature.
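The kind of OLAP-style roll-up such a warehouse enables can be sketched with an in-memory star schema: loan facts joined to a date dimension and aggregated per month. The table and column names are illustrative assumptions, not the Cuenca implementation:

```python
# Hedged sketch of a star-schema roll-up (fact table + date dimension),
# the kind of query an OLAP layer issues against the library DW.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_loans (date_id INTEGER, item_id TEXT, loans INTEGER);
INSERT INTO dim_date VALUES (1, '2024-01'), (2, '2024-02');
INSERT INTO fact_loans VALUES (1, 'bookA', 3), (1, 'bookB', 2), (2, 'bookA', 5);
""")

# Roll up loan counts by month via the date dimension.
rows = conn.execute("""
SELECT d.month, SUM(f.loans)
FROM fact_loans f
JOIN dim_date d ON f.date_id = d.date_id
GROUP BY d.month
ORDER BY d.month
""").fetchall()
```

A real deployment would add further dimensions (collection, user group, service point) so directors can slice the same facts along the indicators the methodology defines.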