914 results for pacs: information retrieval techniques


Relevance:

100.00%

Publisher:

Abstract:

Since the dawn of humanity, uncovering how the brain processes sound, and consequently music, has been part of the human imagination, and research into this process constitutes one of the broadest fields of study in the sciences. Among the countless attempts to understand the biological processing of sound, humans devised automatic music composition, with the aim of assessing whether quality musical compositions can be produced without emotional input, that is, using only existing musical definitions and structures. This automatic composition procedure, also known as aleatoric or chance music, has been widely explored over the centuries and has been used by some of the great names in music, such as Mozart. Advances in engineering and computing have allowed the methods used for composing aleatoric music to evolve, making cellular automata a viable alternative for determining the sequence of musical notes and other elements used in composing this kind of music. This dissertation proposes an architecture for generating harmonized music from melodic intervals determined by cellular automata, implemented on FPGA reconfigurable hardware. The proposed architecture includes four types of cellular automata, built on Wolfram's one-dimensional neighborhood, the two-dimensional von Neumann neighborhood, the two-dimensional Moore neighborhood, and the three-dimensional von Neumann neighborhood, which can be combined in 16 different ways to generate melodies. The output of the architecture consists of melodies in .mid format, composed using two cellular automata, one to choose the notes and another to choose the instruments to be emulated, according to the MIDI protocol. To this end, the architecture comprises three main units: the frequency divider unit, responsible for synchronizing the tasks executed by the architecture; the cellular automata set unit, responsible for controlling and enabling the cellular automata; and the MIDI machine unit, responsible for organizing the results of each cellular automaton iteration and converting them into the MIDI protocol structure, thus generating the musical product. The proposed architecture is parameterizable, so that the configuration of the data that influence the generated musical product, such as the rule sets for the enabled cellular automata, is left to the user, placing no limit on the combinations that can be realized in the architecture. To validate the functionality and applicability of the proposed architecture, some of the results obtained were presented and analyzed using music information retrieval techniques.
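As a rough illustration of how a cellular automaton can drive note selection, the sketch below evolves a one-dimensional Wolfram automaton in software and maps each generation to a MIDI note number. The rule number, scale and mapping are illustrative assumptions, not the parameters of the FPGA architecture itself.

```python
# Minimal software sketch (not the hardware architecture): a Wolfram
# elementary CA whose generations select notes from a scale.
WIDTH = 16          # cells per generation
RULE = 30           # Wolfram rule number (assumption)
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI note numbers

def step(cells, rule):
    """Advance one generation of an elementary CA with wraparound."""
    out = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        idx = (left << 2) | (center << 1) | right
        out.append((rule >> idx) & 1)
    return out

def melody(generations=8, seed=None):
    cells = seed or [0] * (WIDTH // 2) + [1] + [0] * (WIDTH - WIDTH // 2 - 1)
    notes = []
    for _ in range(generations):
        # One illustrative mapping: the count of live cells picks a scale degree.
        notes.append(C_MAJOR[sum(cells) % len(C_MAJOR)])
        cells = step(cells, RULE)
    return notes

print(melody())   # a list of MIDI note numbers, one per generation
```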

Relevance:

100.00%

Publisher:

Abstract:

Investigators at the Cooperative Oxford Laboratory (COL) diagnose and study crustaceans, mollusks, finfish, and a variety of other marine and estuarine invertebrates to assess animal health. This edition updates the Histological Techniques for Marine Bivalve Mollusks manual by Howard and Smith (1983) with additional chapters on molluscan and crustacean techniques. The new edition is intended to serve as a guide for histological processing of shellfish, principally bivalve mollusks and crustaceans. The techniques included are broadly applicable to histopathological preparation of all marine animals, although initial necropsy is unique to each species. Photographs and illustrations are provided for instruction on necropsy of different species to simplify the processing of tissues. Several of the procedures described are adaptations developed by the COL staff. They represent techniques based on principles established for the histopathologic study of mammalian and other vertebrate tissues, but modified for marine and aquatic invertebrates. Although the manual attempts to provide adequate information on techniques, it is also intended to serve as a useful reference source for those interested in the pathology of marine animals. General references and recommended reading listed in the back of the manual provide histological information on species not addressed in the text.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents some developments in query expansion and document representation of our spoken document retrieval system and shows how various retrieval techniques affect performance for different sets of transcriptions derived from a common speech source. Modifications of the document representation are used, which combine several techniques for query expansion, knowledge-based on one hand and statistics-based on the other. Taken together, these techniques can improve Average Precision by over 19% relative to a system similar to that which we presented at TREC-7. These new experiments have also confirmed that the degradation of Average Precision due to a word error rate (WER) of 25% is quite small (3.7% relative) and can be reduced to almost zero (0.2% relative). The overall improvement of the retrieval system can also be observed for seven different sets of transcriptions from different recognition engines with a WER ranging from 24.8% to 61.5%. We hope to repeat these experiments when larger document collections become available, in order to evaluate the scalability of these techniques.
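For reference, Average Precision for a single query is the mean of the precision values at the ranks where relevant documents are retrieved, normalized by the total number of relevant documents. A minimal sketch, with a hypothetical ranking and relevance judgements:

```python
def average_precision(ranked_docs, relevant):
    """Average Precision for one query: sum of precision at each rank that
    returns a relevant document, divided by the number of relevant documents."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Hypothetical ranked list for one query against a judged relevant set.
print(average_precision(["d3", "d1", "d7", "d2"], relevant={"d1", "d2", "d9"}))
```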

Relevance:

100.00%

Publisher:

Abstract:

Several research studies have recently been initiated to investigate the use of construction site images for automated infrastructure inspection, progress monitoring, etc. In these studies, it is always necessary to extract material regions (concrete or steel) from the images. Existing methods rely on a material's characteristic color/texture ranges for material information retrieval, but they do not sufficiently explain how to find appropriate color/texture ranges. As a result, users have to define them themselves, which is difficult for those without a sufficient image processing background. This paper presents a novel method of identifying concrete material regions using machine learning techniques. Under the method, each construction site image is first divided into regions through image segmentation. Then, the visual features of each region are calculated and classified with a pre-trained classifier. The output value determines whether or not the region is composed of concrete. The method was implemented in C++ and tested over hundreds of construction site images. The results were compared against manual classification to demonstrate the method's validity.
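A minimal sketch of the classification step described above, assuming regions have already been segmented and using mean color plus per-channel standard deviation as a stand-in for the paper's visual features, with an SVM as the (assumed) pre-trained classifier; the synthetic pixel data only makes the example self-contained:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def region_features(region_pixels):
    """Mean RGB color plus per-channel std dev as a crude color/texture descriptor."""
    pixels = np.asarray(region_pixels, dtype=float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Synthetic stand-ins for segmented regions: greyish "concrete" pixels vs.
# colorful "other" pixels (real features would come from site images).
concrete = [rng.normal(128, 10, size=(50, 3)) for _ in range(20)]
other = [rng.uniform(0, 255, size=(50, 3)) for _ in range(20)]

X = np.array([region_features(r) for r in concrete + other])
y = np.array([1] * 20 + [0] * 20)          # 1 = concrete, 0 = other

clf = SVC(kernel="rbf").fit(X, y)

# Each region from a newly segmented site image is classified independently.
new_region = rng.normal(130, 12, size=(50, 3))
print("concrete" if clf.predict([region_features(new_region)])[0] == 1 else "other")
```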

Relevance:

100.00%

Publisher:

Abstract:

The capability to automatically identify shapes, objects and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction and maintenance of construction projects. This capability is a product of technological breakthroughs in the area of image processing that have allowed for the development of a large number of digital imaging applications in all industries. In this paper, an automated, content-based construction site image retrieval method is presented. This method is based on image retrieval techniques, specifically those related to material and object identification, and matches known material samples with material clusters within the image content. The results demonstrate the suitability of this method for construction site image retrieval purposes and reveal the capability of existing image processing technologies to accurately identify a wealth of materials from construction site images.
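One common way to match a known material sample against clusters in the image content is histogram comparison. The sketch below uses normalized RGB histograms and histogram intersection over synthetic pixel data; the feature choice and region names are assumptions, not the paper's exact method.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized joint RGB histogram of a region or material sample."""
    pixels = np.asarray(pixels)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()        # 1.0 = identical distributions

rng = np.random.default_rng(1)
sample = rng.normal(120, 8, (200, 3)).clip(0, 255)       # known material sample
image_clusters = {                                        # hypothetical region names
    "img_017/region_2": rng.normal(122, 9, (300, 3)).clip(0, 255),
    "img_042/region_5": rng.uniform(0, 255, (300, 3)),
}

scores = {name: histogram_intersection(color_histogram(sample), color_histogram(px))
          for name, px in image_clusters.items()}
print(max(scores, key=scores.get), scores)   # best-matching cluster for the sample
```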

Relevance:

100.00%

Publisher:

Abstract:

This book explores the processes for retrieval, classification, and integration of construction images in AEC/FM model based systems. The author describes a combination of techniques from the areas of image and video processing, computer vision, information retrieval, statistics and content-based image and video retrieval that have been integrated into a novel method for the retrieval of related construction site image data from components of a project model. This method has been tested on available construction site images from a variety of sources like past and current building construction and transportation projects and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so as to allow their usage in project management related tasks.

Relevance:

100.00%

Publisher:

Abstract:

The Architecture, Engineering, Construction and Facilities Management (AEC/FM) industry is rapidly becoming a multidisciplinary, multinational and multi-billion dollar economy, involving large numbers of actors working concurrently at different locations and using heterogeneous software and hardware technologies. Since the beginning of the last decade, a great deal of effort has been spent within the field of construction IT in order to integrate data and information from most computer tools used to carry out engineering projects. For this purpose, a number of integration models have been developed, like web-centric systems and construction project modeling, a useful approach in representing construction projects and integrating data from various civil engineering applications. In the modern, distributed and dynamic construction environment it is important to retrieve and exchange information from different sources and in different data formats in order to improve the processes supported by these systems. Previous research demonstrated that a major hurdle to AEC/FM data integration in such systems is the variety of data types and that a significant part of the data is stored in semi-structured or unstructured formats. Therefore, new integrative approaches are needed to handle non-structured data types like images and text files. This research is focused on the integration of construction site images. These images are a significant part of the construction documentation, with thousands stored in the site photograph logs of large scale projects. However, locating and identifying such data needed for important decision making processes is a very hard and time-consuming task, and so far there are no automated methods for associating them with other related objects. Therefore, automated methods for the integration of construction images are important for construction information management. During this research, processes for retrieval, classification, and integration of construction images in AEC/FM model based systems have been explored. Specifically, a combination of techniques from the areas of image and video processing, computer vision, information retrieval, statistics and content-based image and video retrieval have been deployed in order to develop a methodology for the retrieval of related construction site image data from components of a project model. This method has been tested on available construction site images from a variety of sources like past and current building construction and transportation projects and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so as to allow their usage in project management related tasks.
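As a toy illustration of the retrieval step, the sketch below links project model components to site images by the overlap between a component's materials and the materials automatically detected in each image. The data structures, field names and Jaccard scoring are assumptions for illustration, not the dissertation's actual project model.

```python
from dataclasses import dataclass, field

@dataclass
class ModelComponent:
    """Minimal stand-in for a project-model element (names are assumptions)."""
    component_id: str
    materials: set

@dataclass
class SiteImage:
    path: str
    detected_materials: set = field(default_factory=set)   # from content-based classification

def related_images(component, images):
    """Rank images by Jaccard overlap between the component's materials and
    the materials automatically detected in each image."""
    def score(img):
        union = component.materials | img.detected_materials
        return len(component.materials & img.detected_materials) / len(union) if union else 0.0
    return sorted(((score(img), img.path) for img in images), reverse=True)

column = ModelComponent("C-12", {"concrete", "rebar"})
images = [SiteImage("site/034.jpg", {"concrete", "formwork"}),
          SiteImage("site/051.jpg", {"steel", "rebar", "concrete"}),
          SiteImage("site/060.jpg", {"asphalt"})]
print(related_images(column, images))   # images most related to component C-12 first
```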

Relevance:

100.00%

Publisher:

Abstract:

Urquhart, C., Lonsdale, R., Thomas, R., Spink, S., Yeoman, A., Armstrong, C. & Fenton, R. (2003). Uptake and use of electronic information services: trends in UK higher education from the JUSTEIS project. Program, 37(3), 167-180. Sponsorship: JISC

Relevance:

100.00%

Publisher:

Abstract:

To help design an environment in which professionals without legal training can make effective use of public sector legal information on planning and the environment - for Add-Wijzer, a European e-government project - we evaluated their perceptions of usefulness and usability. In concurrent think-aloud usability tests, lawyers and non-lawyers carried out information retrieval tasks on a range of online legal databases. We found that non-lawyers reported twice as many difficulties as those with legal training (p = 0.001), that the number of difficulties and the choice of database affected successful completion, and that the non-lawyers had surprisingly few problems understanding legal terminology. Instead, they had more problems understanding the syntactical structure of legal documents and collections. The results support the constraint attunement hypothesis (CAH) of the effects of expertise on information retrieval, with implications for the design of systems to support the effective understanding and use of information.

Relevance:

100.00%

Publisher:

Abstract:

Many plain text information hiding techniques demand deep semantic processing, and so suffer in reliability. In contrast, syntactic processing is a more mature and reliable technology. Assuming a perfect parser, this paper evaluates a set of automated and reversible syntactic transforms that can hide information in plain text without changing the meaning or style of a document. A large representative collection of newspaper text is fed through a prototype system. In contrast to previous work, the output is subjected to human testing to verify that the text has not been significantly compromised by the information hiding procedure, yielding a success rate of 96% and bandwidth of 0.3 bits per sentence. © 2007 SPIE-IS&T.
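A toy example of a reversible, meaning-preserving transform that hides one bit per applicable sentence. The paper's actual transform set is syntactic and parser-based; the coordination-reordering rule below is only an illustration of the encode/decode idea.

```python
import re

def encode_bit(sentence, bit):
    """Hide one bit by ordering the two conjuncts of a 'X and Y' coordination.

    Toy rule (an assumption, not the paper's transform set): alphabetical
    order of the conjuncts encodes 0, reversed order encodes 1."""
    m = re.search(r"\b(\w+) and (\w+)\b", sentence)
    if m is None:
        return sentence, False                      # transform not applicable
    a, b = sorted([m.group(1), m.group(2)], key=str.lower)
    first, second = (a, b) if bit == 0 else (b, a)
    return sentence[:m.start()] + f"{first} and {second}" + sentence[m.end():], True

def decode_bit(sentence):
    m = re.search(r"\b(\w+) and (\w+)\b", sentence)
    if m is None:
        return None
    return 0 if m.group(1).lower() <= m.group(2).lower() else 1

cover = "The report covers steel and concrete in detail."
stego, applied = encode_bit(cover, 0)
print(stego, decode_bit(stego))   # reordered sentence and the recovered bit
```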

Relevance:

100.00%

Publisher:

Abstract:

Online forums are becoming a popular way of finding useful information on the web. Search over forums for existing discussion threads has so far been limited to keyword-based search, owing to the minimal effort required on the part of the users. However, it is often not possible to capture all the relevant context of a complex query using a small number of keywords. Example-based search, which retrieves similar discussion threads given one exemplary thread, is an alternate approach that can help the user provide richer context and vastly improve forum search results. In this paper, we address the problem of finding similar threads to a given thread. Towards this, we propose a novel methodology to estimate similarity between discussion threads. Our method exploits the thread structure to decompose threads into sets of weighted overlapping components. It then estimates pairwise thread similarities by quantifying how well the information in the threads is mutually contained within each other, using lexical similarities between their underlying components. We compare our proposed methods on real datasets against state-of-the-art thread retrieval mechanisms and show that our techniques outperform the others by large margins on popular retrieval evaluation measures such as NDCG, MAP, Precision@k and MRR. In particular, consistent improvements of up to 10% are observed on all evaluation measures.
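A minimal sketch of the component-based similarity idea, assuming each thread is decomposed into weighted text components (e.g. title, first post, replies) and using bag-of-words cosine as the lexical similarity. The weighting scheme and symmetrization are assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def containment(thread_a, thread_b):
    """How well the information in thread_a is contained in thread_b.

    Each thread is a list of (weight, text) components; every component of A
    is matched to its best counterpart in B and the matches are weight-averaged."""
    bags_b = [Counter(text.lower().split()) for _, text in thread_b]
    total = sum(w for w, _ in thread_a)
    score = sum(w * max(cosine(Counter(text.lower().split()), bb) for bb in bags_b)
                for w, text in thread_a)
    return score / total if total else 0.0

def thread_similarity(a, b):
    """Symmetric similarity: average of the two containment directions."""
    return 0.5 * (containment(a, b) + containment(b, a))

a = [(2.0, "Laptop will not boot after update"), (1.0, "Screen stays black, fans spin")]
b = [(2.0, "Black screen on boot following BIOS update"), (1.0, "Try resetting CMOS")]
print(thread_similarity(a, b))
```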