952 results for TSDEAI Semantic-Web Twitter Semantic-Search WordNet LSA
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content.

The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated systematically to enable semantic search for images and videos, and can be tailored to problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism enables long-term learning of high-level concepts from training data such as user access frequencies and access patterns of images; (2) in addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user can more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is proposed to detect events in soccer videos while fully exploiting the multi-modality features and object information obtained through video shot/scene detection.

Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications, demonstrated here by their use in traffic video surveillance. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
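A minimal sketch of the bag-of-regions idea behind multiple instance learning for object-based retrieval (illustrative only; the feature dimensionality, the max-based scoring rule, and all names below are assumptions, not the dissertation's actual algorithm):

    import numpy as np

    def bag_score(regions, concept, scale=1.0):
        # Score an image (a "bag" of segmented-region feature vectors) by its
        # best-matching region: an image is relevant if ANY region fits the concept.
        d = np.linalg.norm(regions - concept, axis=1)
        return np.exp(-(d ** 2) / scale).max()

    rng = np.random.default_rng(0)
    concept = rng.normal(size=8)                       # hypothetical concept point in feature space
    img_a = np.vstack([rng.normal(size=8),             # background region
                       concept + 0.05 * rng.normal(size=8)])  # region containing the object
    img_b = rng.normal(size=(3, 8))                    # image with no matching region
    print(bag_score(img_a, concept) > bag_score(img_b, concept))   # True

The max over region scores encodes exactly the multiple-instance assumption: one matching region is enough to make the whole image relevant.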
Abstract:
As the Web evolves unexpectedly fast, information grows explosively. Useful resources become more and more difficult to find because of their dynamic and unstructured characteristics. A vertical search engine is designed and implemented for a specific domain. Instead of processing the vast volume of miscellaneous information distributed across the Web, a vertical search engine targets relevant information in specific domains or topics, providing users with up-to-date information, highly focused insights, and actionable knowledge. As mobile devices become more popular, the nature of search is changing: acquiring information on a mobile device places unique requirements on traditional search engines and will potentially change every feature they used to have. In summary, users strongly expect search engines that can satisfy their individual information needs, adapt to their current situation, and present highly personalized search results.

In my research, the next-generation vertical search engine uses and enriches existing domain information to close the loop of the vertical search engine's system, so that knowledge discovery, actionable information extraction, and user-interest modeling and recommendation mutually facilitate one another. I investigate three problems in which domain taxonomy plays an important role: taxonomy generation using a vertical search engine, actionable information extraction based on domain taxonomy, and the use of ensemble taxonomy to capture users' interests. As the underlying theory, ultrametrics, dendrograms, and hierarchical clustering are discussed in depth. Methods for taxonomy generation based on my research on hierarchical clustering are developed. The related vertical search engine techniques are applied in practice in the disaster management domain; in particular, three disaster information management systems are developed and presented as real use cases of my research.
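A minimal sketch of turning a hierarchical clustering into a nested taxonomy (SciPy's average-linkage dendrogram, whose merge heights form an ultrametric; the term list and the random stand-in embeddings are invented for illustration):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree

    def taxonomy(node, labels):
        # Recursively convert a dendrogram node into a nested taxonomy;
        # the merge height is an ultrametric distance between the leaves below it.
        if node.is_leaf():
            return labels[node.id]
        return {"height": round(node.dist, 3),
                "children": [taxonomy(node.left, labels),
                             taxonomy(node.right, labels)]}

    terms = ["flood", "levee", "hurricane", "shelter", "evacuation"]  # invented domain terms
    X = np.random.default_rng(1).normal(size=(5, 16))   # random stand-ins for term embeddings
    Z = linkage(X, method="average")                    # average-linkage hierarchical clustering
    print(taxonomy(to_tree(Z), terms))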
Abstract:
Increasing the size of the training data in many computer vision tasks has proven very effective. Using large-scale image datasets (e.g., ImageNet) with simple learning techniques (e.g., linear classifiers), one can achieve state-of-the-art performance in object recognition compared to sophisticated learning techniques on smaller image sets. Semantic search on visual data has become very popular. There are billions of images on the internet and the number increases every day. Dealing with large-scale image sets is demanding in itself: they take a significant amount of memory, which makes it impractical to process the images with complex algorithms on single-CPU machines. Finding an efficient image representation can be a key to attacking this problem. But efficiency alone is not enough for image understanding; the representation should also be comprehensive and rich in semantic information. In this proposal we develop an approach to computing binary codes that provide a rich and efficient image representation. We demonstrate several tasks in which binary features can be very effective. We show how binary features can speed up large-scale image classification. We present techniques for learning the binary features from supervised image sets (with different types of semantic supervision: class labels, textual descriptions). We also pose several problems that are central to finding and using efficient image representations.
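A minimal sketch of why binary codes make large-scale search cheap (assuming 64-bit codes; XOR plus popcount gives Hamming distance, so even a linear scan over 100,000 images stays fast and memory-light):

    import numpy as np

    def hamming_search(db_codes, query_code, k=5):
        # XOR then popcount (via unpackbits) = Hamming distance to every code.
        dists = np.unpackbits(db_codes ^ query_code, axis=1).sum(axis=1)
        return np.argsort(dists)[:k]

    rng = np.random.default_rng(0)
    db = rng.integers(0, 256, size=(100_000, 8), dtype=np.uint8)  # 100k images, 64-bit codes
    print(hamming_search(db, db[42])[:3])                          # index 42 ranks first

At 8 bytes per image, the whole database above fits in under 1 MB, which is the point of learning binary rather than floating-point features.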
Abstract:
Some of the Iowa Department of Transportation (Iowa DOT) continuous, steel, welded plate girder bridges have developed web cracking in the negative moment regions at the diaphragm connection plates. The cracks are due to out-of-plane bending of the web near the top flange of the girder. The out-of-plane bending occurs in the "web gap", which is the portion of the girder web between (1) the top of the fillet welds attaching the diaphragm connection plate to the web and (2) the fillet welds attaching the flange to the web. A literature search indicated that four retrofit techniques have been suggested by other researchers to prevent or control this type of cracking. To eliminate the problem in new bridges, AASHTO specifications require a positive attachment between the connection plate and the top (tension) flange. Applying this requirement to existing bridges is expensive and difficult. The Iowa DOT has relied primarily on the hole-drilling technique to prevent crack extension once cracking has occurred; however, the literature indicates that hole-drilling alone may not be entirely effective in preventing crack extension. The objective of this research was to investigate experimentally a method proposed by the Iowa DOT to prevent cracking at the diaphragm/plate girder connection in steel bridges with X-type or K-type diaphragms. The method consists of loosening the bolts at some connections between the diaphragm diagonals and the connection plates. The investigation included selecting and testing five bridges: three with X-type diaphragms and two with K-type diaphragms. During 1996 and 1997, these bridges were instrumented using strain gages and displacement transducers to obtain the response at various locations before and after implementing the method. The bridges were subjected to loaded test trucks traveling in different lanes at speeds varying from crawl speed to 65 mph (104 km/h) to determine the effectiveness of the proposed method. The results of the study show that the effect of out-of-plane loading was confined to widths of approximately 4 in. (100 mm) on either side of the connection plates. Further, they demonstrate that the stresses in gaps with drilled holes were higher than those in gaps without cracks, implying that the hole-drilling technique is not sufficient to prevent crack extension. The behavior of the web gaps in X-type diaphragm bridges was greatly improved by the proposed method, as the stress range and out-of-plane distortion were reduced by at least 42% at the exterior girders. For bridges with K-type diaphragms, a similar trend was obtained; however, the stress range increased in one of the web gaps after implementing the proposed method. Other design aspects (wind, stability of the compression flange, and lateral distribution of loads) must be considered when deciding whether to adopt the proposed method. Considering the results of this investigation, the proposed method can be implemented for X-type diaphragm bridges. Further research is recommended for K-type diaphragm bridges.
Abstract:
Although recovery is often described as the least studied and documented phase of the emergency management cycle, a wide literature describes the characteristics and sub-phases of this process. Previous works do not provide an overall perspective, however, because recovery has rarely been monitored systematically and consistently with advanced technologies such as remote sensing and GIS. Given the key role of remote sensing in response and damage assessment, this thesis aims to verify the appropriateness of such advanced monitoring techniques for detecting recovery advancement over time, with close attention to the main characteristics of the study event: the Hurricane Katrina storm surge. Based on multi-source, multi-sensor, and multi-temporal data, post-Katrina recovery was analysed using both a qualitative and a quantitative approach. The first phase was dedicated to investigating the relation between urban types, damage, and recovery state, with reference to geographical and technological parameters. Damage and recovery scales were proposed to review critical observations on notable surge-induced effects on various typologies of structures, analysed at the per-building level. This wide-ranging investigation allowed a new understanding of the distinctive features of the recovery process. A quantitative analysis was employed to develop methodological procedures suited to recognizing and monitoring the distribution, timing, and characteristics of recovery activities in the study area. Promising results, gained by applying supervised classification algorithms to detect the localization and distribution of blue tarps, showed that this methodology can help the analyst detect and monitor recovery activities in areas affected by medium damage. The study found that Mahalanobis distance was the classifier that provided the most accurate results in localizing blue roofs, with 93.7% of blue roofs classified correctly and a producer accuracy of 70%; it was also the classifier least sensitive to spectral signature alteration. The application of dissimilarity textural classification to satellite imagery demonstrated the suitability of this technique for detecting debris distribution and for monitoring demolition and reconstruction activities in the study area. Linking these geographically extensive techniques with expert per-building interpretation of advanced-technology ground surveys provides a multi-faceted view of the physical recovery process. Remote sensing and GIS technologies combined with an advanced ground-survey approach provide extremely valuable capability for monitoring recovery activities and may constitute a technical basis to guide aid organizations and local governments in recovery management.
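A minimal sketch of Mahalanobis-distance classification of the kind used for blue-tarp mapping (the band values and class statistics below are toy stand-ins, not Katrina imagery):

    import numpy as np

    def mahalanobis_classify(pixels, means, covs):
        # Assign each pixel to the class with the smallest squared Mahalanobis
        # distance d^2 = (x - m)^T C^{-1} (x - m).
        d2 = []
        for m, C in zip(means, covs):
            diff = pixels - m
            d2.append(np.einsum("ij,jk,ik->i", diff, np.linalg.inv(C), diff))
        return np.argmin(np.stack(d2), axis=0)

    rng = np.random.default_rng(0)
    tarp = rng.normal([30, 60, 200], 10, size=(50, 3))    # toy "blue tarp" training spectra
    other = rng.normal([90, 90, 90], 25, size=(50, 3))    # toy background spectra
    means = [tarp.mean(0), other.mean(0)]
    covs = [np.cov(tarp.T), np.cov(other.T)]
    print(mahalanobis_classify(np.array([[35.0, 55.0, 190.0]]), means, covs))  # [0] = tarp

Because each class's covariance normalizes the distance, the classifier tolerates band-wise scaling of the signature, consistent with its reported insensitivity to spectral signature alteration.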
Abstract:
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP.
Abstract:
The Internet has created new opportunities for librarians to present literature search results to clinicians. In order to take full advantage of these opportunities, libraries need to create locally maintained bibliographic databases. A simple method of creating a local bibliographic database and publishing it on the Web is described. The method uses off-the-shelf software and requires minimal programming. A hedge search strategy for outcome studies of clinical process interventions is created, and Ovid is used to search MEDLINE. The search results are saved and imported into EndNote libraries. The citations are modified, exported to a Microsoft Access database, and published on the Web. Clinicians can use a Web browser to search the database. The bibliographic database contains 13,803 MEDLINE citations of outcome studies. Most searches take between four and ten seconds and retrieve between ten and 100 citations. The entire cost of the software is under $900. Locally maintained bibliographic databases can be created easily and inexpensively. They significantly extend the evidence-based health care services that libraries can offer to clinicians.
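A minimal sketch of the kind of query a web front end would run against such a local bibliographic database (sqlite3 stands in for Microsoft Access here; the schema and the record are illustrative assumptions):

    import sqlite3

    # sqlite3 stands in for the Access database; the schema and record are illustrative.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE citations
                  (pmid INTEGER PRIMARY KEY, title TEXT, journal TEXT,
                   year INTEGER, abstract TEXT)""")
    db.execute("INSERT INTO citations VALUES (?, ?, ?, ?, ?)",
               (1, "Example outcome study of a clinical pathway", "Hypothetical J", 2000,
                "Placeholder record standing in for an imported EndNote citation."))

    def search(term):
        # The query a clinician's web form would run against the local database.
        like = f"%{term}%"
        return db.execute("SELECT pmid, title FROM citations "
                          "WHERE title LIKE ? OR abstract LIKE ?", (like, like)).fetchall()

    print(search("pathway"))   # -> [(1, 'Example outcome study of a clinical pathway')]

A simple substring index like this is enough for the sub-ten-second searches the article reports, since the whole collection is only about 14,000 rows.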
Abstract:
In strongly conceptual subjects, such as Soil and Rock Mechanics, students are not aware of this until they decide to study for a midterm test. By then it is too late, which is why there is a high failure rate in this subject within the Degree in Civil Engineering. Taking advantage of the fact that today's students are habitual users of mobile devices, social networks, specifically Twitter, are used to send, regularly and almost daily, tweets containing "píndoles geotècniques" (geotechnical pills), highly conceptual and very short, so that students absorb the most important concepts of the course without realizing it. At the same time, many tweets redirect students to a web page (El tauler geotècnic, the geotechnical notice board) created expressly for the course, where the information is expanded with support material that is mostly audiovisual and therefore better assimilated by students. This website also offers self-assessment tests, links to other websites in the field of ground engineering with additional information, software applications developed by the area's lecturers or freely available, and so on.
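A minimal sketch of how such near-daily conceptual tweets could be automated (assuming the tweepy library and Twitter API v2 credentials; the pill texts and the URL are invented placeholders, not the course's actual content):

    import tweepy  # assumes tweepy >= 4.x and valid Twitter API v2 credentials

    pills = [  # invented examples of short conceptual "pills"
        "Effective stress = total stress - pore water pressure (Terzaghi).",
        "Clays consolidate slowly: low permeability, high compressibility.",
    ]

    client = tweepy.Client(consumer_key="...", consumer_secret="...",
                           access_token="...", access_token_secret="...")

    def post_daily_pill(day_index):
        # A cron job or scheduler would call this once per day, cycling the list
        # and pointing back to the course website (placeholder URL).
        text = pills[day_index % len(pills)] + " More at https://example.org/tauler"
        client.create_tweet(text=text)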
Abstract:
BACKGROUND Extracorporeal membrane oxygenation (ECMO) offers therapeutic options in refractory respiratory and/or cardiac failure. Systemic anticoagulation with heparin is routinely administered. However, in patients with heparin-induced thrombocytopenia or heparin resistance, the direct thrombin inhibitor bivalirudin is a valid option and has been increasingly used for ECMO anticoagulation. We aimed to evaluate its safety and its optimal dosing for ECMO. METHODS A systematic web-based literature search of PubMed and EMBASE was performed via National Health Service Library Evidence and manually, updated to January 30, 2016. RESULTS The search revealed 8 publications relevant to the topic (5 case reports). In total, 58 patients (24 pediatric) were reported (18 received heparin as control groups). Bivalirudin was used with or without a loading dose, followed by infusion at different ranges (lowest, 0.1-0.2 mg/kg/h without a loading dose; highest, 0.5 mg/kg/h after a loading dose). The strategies for monitoring anticoagulation and the optimal targets were dissimilar (activated partial thromboplastin time 45-60 seconds to 42-88 seconds; activated clotting time 180-200 seconds to 200-220 seconds; thromboelastography in 1 study). CONCLUSION A bivalirudin loading dose was not always used; infusion ranges and anticoagulation targets differed. In this systematic review, we discuss the reasons for this variability. Larger studies are needed to establish the optimal approach to the use of bivalirudin for ECMO.
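As a worked example of the reported weight-based ranges (illustrative arithmetic only, for an assumed 70 kg adult; not dosing guidance):

    def infusion_rate_mg_per_h(weight_kg, rate_mg_per_kg_h):
        # Convert a weight-based rate to an hourly dose.
        return weight_kg * rate_mg_per_kg_h

    # the lowest and highest starting rates reported above, for a 70 kg adult
    print(infusion_rate_mg_per_h(70, 0.1))   # 7.0 mg/h
    print(infusion_rate_mg_per_h(70, 0.5))   # 35.0 mg/h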
Abstract:
Exploratory search, a search paradigm based on discovery and learning activities, was long ignored by traditional search engines. Yet it is often exploratory searches that give rise to the most innovative ideas. Recent Semantic Web technologies provide the solutions needed to implement search engines capable of accompanying users engaged in this kind of search. Aemoo, the search engine on which this thesis builds, is an effective example. Starting from Aemoo, and again with the help of Web of Data technologies, this work proposes a methodology that takes into account the singularity of each user's profile in order to guide the user through an exploratory search in a personalized way. The personalization criterion we chose is behavioral, that is, based on the decisions the user makes at each step of the search process. By implementing a prototype, we were able to test the validity of this approach, so that users are no longer alone on the long and winding road that leads to knowledge.
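A minimal sketch of the behavioral personalization criterion (the semantic types and the frequency-based scoring rule are assumptions for illustration, not Aemoo's actual model): candidate next-step concepts of a kind the user has already chosen at earlier steps are ranked higher.

    from collections import Counter

    def rerank(candidates, history):
        # Concepts whose semantic type the user has chosen at earlier steps
        # of the exploration are moved up for the next step.
        type_prefs = Counter(t for _, t in history)
        return sorted(candidates, key=lambda c: type_prefs[c[1]], reverse=True)

    history = [("Bologna", "Place"), ("Turin", "Place")]          # this user keeps picking places
    candidates = [("Umberto Eco", "Person"), ("Milan", "Place")]
    print(rerank(candidates, history))   # [('Milan', 'Place'), ('Umberto Eco', 'Person')]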
Abstract:
The Web is continuously evolving into a collection of ever more data, which creates interest in collecting and merging these data in a meaningful way. Based on such web data, this paper describes the building of an ontology that rests on fuzzy clustering techniques. Through the continual harvesting of folksonomies by web agents, a fully automatic fuzzy grassroots ontology is built. This self-updating ontology can then be used for several practical applications in fields such as web structuring, web search, and web knowledge visualization. A potential application for online reputation analysis, the added value, and possible future studies are discussed in the conclusion.
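A minimal sketch of the fuzzy clustering step on which such a grassroots ontology can rest (a bare-bones fuzzy c-means over toy tag co-occurrence vectors; the data and parameters are invented):

    import numpy as np

    def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
        # Bare-bones fuzzy c-means: returns soft memberships U (n x c) and centers.
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]      # membership-weighted means
            d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
            U = 1.0 / (d ** (2 / (m - 1)))                    # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return U, centers

    # toy co-occurrence vectors for four folksonomy tags
    X = np.array([[1, 0.9, 0], [0.9, 1, 0], [0, 0.1, 1], [0.1, 0, 0.9]])
    U, _ = fuzzy_cmeans(X)
    print(np.round(U, 2))   # each tag belongs to every cluster with a degree

The soft memberships are what make the ontology "fuzzy": a tag is not forced into a single concept but belongs to each candidate concept with a degree.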
Abstract:
The expansion of the Internet has made the task of searching a crucial one. Internet users, however, have to make a great effort to formulate a search query that returns the required results. Many methods have been devised to assist in this task by helping users modify their query to give better results. In this paper we propose an interactive method for query expansion. It is based on the observation that documents often contain terms with high information content, which can summarise their subject matter. We present experimental results which demonstrate that our approach significantly shortens the time required to accomplish a given task through web searches.
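A minimal sketch of picking high-information-content expansion terms from the top results (a TF-IDF-style proxy; the scoring formula and toy documents are assumptions, not the paper's exact method):

    import math
    from collections import Counter

    def expansion_terms(results, query, k=3):
        # Score terms in the top results by a TF-IDF-style weight and offer
        # the highest-scoring ones to the user as candidate expansions.
        n = len(results)
        tf = Counter(w for doc in results for w in doc.split())
        df = Counter(w for doc in results for w in set(doc.split()))
        scores = {w: tf[w] * math.log(1 + n / df[w])
                  for w in tf if w not in query.split()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    docs = ["jaguar speed cat predator prey",
            "jaguar cat habitat rainforest predator",
            "jaguar car engine speed test"]
    print(expansion_terms(docs, "jaguar"))   # -> ['speed', 'cat', 'predator']

In the interactive setting, the user picks which of these suggestions match their intent (the animal or the car), and the refined query is resubmitted.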
Abstract:
Question answering systems that resort to the Semantic Web as a knowledge base can go well beyond the usual matching of words in documents and, preferably, find a precise answer without requiring the user's help to interpret the documents returned. In this paper, the authors introduce a Dialogue Manager that, through analysis of the question and of the type of expected answer, provides accurate answers to questions posed in natural language. The Dialogue Manager not only represents the semantics of the questions but also the structure of the discourse, including the user's intentions and the questions' context, adding the ability to deal with multiple answers and to provide justified answers. The authors' system performance is evaluated by comparison with similar question answering systems. Although the test suite is of modest size, the results obtained are very promising.
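A minimal sketch of the first cue such a Dialogue Manager relies on, mapping question form to expected answer type (the rule set and type inventory are an invented minimum, far simpler than the semantic analysis described above):

    def expected_answer_type(question):
        # Crude question-form to answer-type mapping; a real Dialogue Manager
        # analyses the question semantics, but the first cue looks like this.
        rules = [("who", "Person"), ("where", "Place"), ("when", "Date"),
                 ("how many", "Quantity"), ("what", "Definition/Entity")]
        q = question.lower()
        for prefix, answer_type in rules:
            if q.startswith(prefix):
                return answer_type
        return "Unknown"

    print(expected_answer_type("Where is the Alhambra?"))    # Place
    print(expected_answer_type("How many moons has Mars?"))  # Quantity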
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.