954 results for semi-automatic indexing
Abstract:
This work presents a semi-automatic segmentation algorithm for abdominal aortic aneurysms (AAA) based on Active Shape Models (ASM) and texture models. The texture information is provided by a set of four 3D magnetic resonance (MR) images, composed of axial slices of the abdomen, in which the lumen, wall, and intraluminal thrombus (ILT) are visible. Because of the reduced number of images in the MRI training set, an ASM and a custom texture model based on border intensity statistics are constructed. For the same reason, the shape is characterized from a set of 35 computed tomography angiography (CTA) images, so that shape variations are better represented. For evaluation, leave-one-out experiments were performed over the four-image MRI set.
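The shape half of such an ASM is a point distribution model: PCA over aligned landmark vectors, with mode coefficients clamped so that generated shapes stay plausible. A minimal sketch of that step, assuming pre-aligned landmarks (function names and the toy data are illustrative, not the authors' code):

import numpy as np

def build_shape_model(shapes, var_fraction=0.98):
    # shapes: (n_samples, n_landmarks * dim) array of pre-aligned landmarks.
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # principal shape modes
    var = s ** 2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_fraction)) + 1
    return mean, Vt[:k], var[:k]

def plausible_shape(mean, modes, variances, b):
    # Clamp mode coefficients to +/-3 standard deviations, the usual ASM bound.
    b = np.clip(b, -3 * np.sqrt(variances), 3 * np.sqrt(variances))
    return mean + b @ modes

# Toy example: 10 noisy "shapes", each with 5 two-dimensional landmarks.
rng = np.random.default_rng(0)
shapes = rng.normal(0.0, 1.0, (10, 10))
mean, modes, var = build_shape_model(shapes)
print(plausible_shape(mean, modes, var, np.zeros(len(var))).shape)  # (10,)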
Abstract:
"Issued October 1980."
Abstract:
Over the last decade, the rapid growth and adoption of the World Wide Web has further exacerbated user needs for efficient mechanisms for information and knowledge location, selection, and retrieval. How to gather useful and meaningful information from the Web has become challenging to users. The capture of user information needs is key to delivering users' desired information, and user profiles can help to capture information needs. However, effectively acquiring user profiles is difficult. It is argued that if user background knowledge can be specified by ontologies, more accurate user profiles can be acquired and thus information needs can be captured effectively. Web users implicitly possess concept models that are obtained from their experience and education, and use these concept models in information gathering. Prior to this work, much research attempted to use ontologies to specify user background knowledge and user concept models. However, these works have a drawback in that they cannot move beyond the subsumption of super- and sub-class structure to emphasise specific semantic relations in a single computational model. This has also been a challenge for years in the knowledge engineering community. Thus, using ontologies to represent user concept models and to acquire user profiles remains an unsolved problem in personalised Web information gathering and knowledge engineering. In this thesis, an ontology learning and mining model is proposed to acquire user profiles for personalised Web information gathering. The proposed computational model emphasises the specific is-a and part-of semantic relations in one computational model. World knowledge and users' Local Instance Repositories are used to discover and specify user background knowledge. From a world knowledge base, personalised ontologies are constructed by adopting automatic or semi-automatic techniques to extract user interest concepts, focusing on user information needs. A multidimensional ontology mining method, Specificity and Exhaustivity, is also introduced in this thesis for analysing the user background knowledge discovered and specified in user personalised ontologies. The ontology learning and mining model is evaluated by comparing it with human-based and state-of-the-art computational models in experiments, using a large, standard data set. The experimental results are promising. The proposed ontology learning and mining model helps to develop a better understanding of user profile acquisition, thus providing better design of personalised Web information gathering systems. The contributions are increasingly significant, given both the rapid explosion of Web information in recent years and today's accessibility to the Internet and the full-text world.
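The core modelling idea, a single concept graph carrying both is-a and part-of relations, can be sketched with a toy data structure (a hypothetical illustration, not the thesis's implementation):

from collections import defaultdict

class PersonalizedOntology:
    """Toy concept graph with typed edges, showing how is-a and part-of
    relations can coexist in one model (names are illustrative)."""
    def __init__(self):
        self.edges = defaultdict(list)  # child -> [(relation, parent)]

    def add(self, child, relation, parent):
        assert relation in ("is-a", "part-of")
        self.edges[child].append((relation, parent))

    def related(self, concept, relation):
        # All concepts reachable from `concept` via edges of one type.
        seen, stack = set(), [concept]
        while stack:
            for rel, parent in self.edges[stack.pop()]:
                if rel == relation and parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

onto = PersonalizedOntology()
onto.add("sloop", "is-a", "sailboat")
onto.add("sailboat", "is-a", "boat")
onto.add("mast", "part-of", "sailboat")
print(onto.related("sloop", "is-a"))  # {'sailboat', 'boat'}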
Abstract:
Semi-automatic segmentation of still images has vast and varied practical applications. Recently, the "GrabCut" approach successfully built upon earlier approaches based on colour and gradient information to address the problem of efficiently extracting a foreground object from a complex environment. In this paper, we extend the GrabCut algorithm by applying an unsupervised algorithm for modelling the Gaussian mixtures that define the foreground and background in the segmentation algorithm. We show examples where this optimisation of the GrabCut framework leads to further improvements in performance.
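The extension replaces GrabCut's fixed-size colour mixtures with unsupervised mixture modelling. A rough sketch of the two ingredients using OpenCV and scikit-learn; note that OpenCV's built-in grabCut fixes its mixture size internally, so the variational GMM below only illustrates how a component count can be chosen from data, and the file name and box are placeholders:

import numpy as np
import cv2
from sklearn.mixture import BayesianGaussianMixture

img = cv2.imread("scene.jpg")             # hypothetical input image
mask = np.zeros(img.shape[:2], np.uint8)
rect = (50, 50, 300, 200)                 # user-drawn box around the object

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)

# Unsupervised choice of mixture size: a variational GMM prunes unused
# components instead of fixing K = 5 as plain GrabCut does.
gmm = BayesianGaussianMixture(n_components=10, weight_concentration_prior=1e-2)
gmm.fit(img[fg].astype(float))
print("effective components:", np.sum(gmm.weights_ > 1e-2))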
Abstract:
Han Zhang; Burdet, E.; Poo, A.N.; Hutmacher, D.W., "Microassembly Fabrication of Tissue Engineering Scaffolds With Customized Design," IEEE Transactions on Automation Science and Engineering, vol. 5, no. 3, pp. 446-456, July 2008. DOI: 10.1109/TASE.2008.917011. This paper presents a novel technique to fabricate scaffold/cell constructs for tissue engineering by robotic assembly of microscopic building blocks (of volume 0.5 × 0.5 × 0.2 mm³ and 60 µm thickness). In this way, it becomes possible to build scaffolds with freedom in the design of architecture, surface morphology, and chemistry. Biocompatible microparts with complex 3-D shapes were first designed and mass produced using MEMS techniques. Semi-automatic assembly was then realized using a robotic workstation with four degrees of freedom integrating a dedicated microgripper and two optical microscopes. Coarse movement of the gripper is determined by pattern matching in the microscope images, while the operator controls fine positioning and accurate insertion of the microparts. Successful microassembly was demonstrated using SU-8 and acrylic resin microparts. Taking advantage of part distortion and adhesion forces, which dominate at the micro level, the parts cleave together after assembly. In contrast to many current scaffold fabrication techniques, no heat, pressure, electrical effect, or toxic chemical reaction is involved, a critical condition for creating scaffolds with biological agents.
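Coarse visual positioning of this kind is commonly done with normalized cross-correlation template matching; a generic OpenCV sketch, with placeholder file names, offered as an illustration rather than the authors' system:

import cv2

# Locate the gripper (or a micropart) in a microscope frame by normalized
# cross-correlation template matching.
frame = cv2.imread("microscope_view.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("gripper_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best, _, top_left = cv2.minMaxLoc(scores)  # peak score and its location
h, w = template.shape
centre = (top_left[0] + w // 2, top_left[1] + h // 2)
print(f"match score {best:.2f} at pixel {centre}")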
Abstract:
Sound tagging has been studied for years. Among all sound types, music, speech, and environmental sound are the three most active research areas. This survey aims to provide an overview of the state-of-the-art development in these areas. We discuss the meaning of tagging in the different sound areas at the outset, and introduce some examples of sound tagging applications to illustrate the significance of this research. Typical tagging techniques include manual, automatic, and semi-automatic approaches. After reviewing work in music, speech, and environmental sound tagging, we compare the three areas and state the research progress to date. Research gaps are identified for each area, and the common features of and distinctions between the three areas are discussed as well. Published datasets, tools used by researchers, and evaluation measures frequently applied in the analysis are listed. Finally, we summarise the worldwide distribution of countries engaged in sound tagging research.
Abstract:
This paper demonstrates how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in ABC algorithms to make them more efficient. In this paper, the parametric bootstrap approximation is used to form the initial importance distribution for the sequential Monte Carlo and the ABC importance and rejection sampling algorithms. The new approach is illustrated through a simulation study of the univariate g-and-k quantile distribution, and is used to infer parameter values of a stochastic model describing expanding melanoma cell colonies.
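The pilot-run-then-proposal idea can be sketched for a toy model with a cheap simulator (a normal location model standing in for the g-and-k distribution; thresholds and sample sizes below are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy simulator standing in for an intractable model.
    return rng.normal(theta, 1.0, n)

s_obs = simulate(2.0).mean()  # observed summary statistic

# Pilot ABC rejection run: keep the 1% of prior draws whose simulated
# summary lands closest to the observed one.
theta = rng.uniform(-10.0, 10.0, 20_000)
dist = np.abs(np.array([simulate(t).mean() for t in theta]) - s_obs)
pilot = theta[dist < np.quantile(dist, 0.01)]

# Fit a simple parametric approximation to the pilot posterior and reuse
# it, slightly over-dispersed, as an importance/proposal distribution.
mu, sd = pilot.mean(), pilot.std()
proposal = rng.normal(mu, 1.5 * sd, 5_000)
print(f"pilot posterior: mean {mu:.2f}, sd {sd:.2f}")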
Abstract:
The methodology of extracting information from texts has been widely described in the current literature. However, it has been developed mainly for fields other than terminology science, and the research has been oriented towards English. Therefore, there are no satisfactory language-independent methods for extracting terminological information from texts. The aim of the present study is to form the basis for further improvement of methods for the extraction of terminological information. A further aim is to determine differences in term extraction between subject groups with or without knowledge of the special field in question. The study is based on the theory of terminology and takes a mainly qualitative approach. The research material consists of electronically readable specialized texts in the subject domain of maritime safety: textbooks, conference papers, research reports, and articles from professional journals in Finnish and in Russian. The thesis first deals with certain term extraction methods: manual term identification and semi-automatic term extraction, the latter carried out using three commercial computer programs. The results of term extraction were compared, and the recall and precision of the methods were evaluated. The latter part of the study is dedicated to the identification of concept relations. Certain linguistic expressions, which some researchers call knowledge probes, were applied to identify concept relations. The results of the present thesis suggest that special-field knowledge is an advantage in manual term identification. However, in the candidate term lists the variation between subject groups was not as marked as that between individual subjects. The term extraction software tested here produces candidate term lists which can be useful, but only after some manual work; the work therefore emphasizes the need to develop term extraction software further. Furthermore, the analyses indicate that a certain number of terms were extracted by all the subjects and all the software; these we call core terms. As the result of the experiment on linguistic expressions which signal concept relations, a proposal of Finnish and Russian knowledge probes in the field of maritime safety was made. The main finding was that it would be useful to combine the use of knowledge probes with semi-automatic term extraction, since knowledge probes usually occur in the vicinity of terms.
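The knowledge-probe idea, searching for fixed lexical patterns that signal concept relations near candidate terms, can be sketched with hypothetical English probes standing in for the Finnish and Russian ones:

import re

# Hypothetical English stand-ins for the knowledge probes discussed in the
# thesis; each pattern signals one concept relation.
PROBES = {
    "is-a":    re.compile(r"(\w[\w ]*?) is a kind of (\w[\w ]*)"),
    "part-of": re.compile(r"(\w[\w ]*?) is a part of (\w[\w ]*)"),
}

def find_relations(text):
    # Terms occurring around probe matches are good candidate terms.
    hits = []
    for relation, pattern in PROBES.items():
        for m in pattern.finditer(text):
            hits.append((m.group(1).strip(), relation, m.group(2).strip()))
    return hits

sample = "A life raft is a kind of survival craft. The hull is a part of the ship."
print(find_relations(sample))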
Abstract:
Objective: Vast amounts of injury narratives are collected daily, are available electronically in real time, and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges, and future directions of machine learning as applied to injury surveillance. Methods: This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. Results: The range of applications and the utility of narrative text have increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semi-automatic classification of injury narratives which are accurate, efficient, and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value and reduced the need for human coding to less than one-third of cases in one large occupational injury database. Conclusion: The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of 'big injury narrative data' opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing, and timely surveillance to inform future injury prevention policy and practice.
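A human-machine approach of this kind typically auto-codes only confident predictions and refers the rest to human coders. A minimal sketch with scikit-learn, under assumed features and an assumed confidence threshold (the data and threshold are illustrative, not the paper's):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; real systems train on thousands of
# manually coded narratives.
narratives = ["worker fell from ladder while painting",
              "slipped on wet floor in kitchen and fell",
              "cut finger on sharp blade",
              "fell down stairs carrying boxes"]
codes = ["fall", "fall", "cut", "fall"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(narratives, codes)

# Human-machine split: auto-code confident predictions, refer the rest.
THRESHOLD = 0.7  # assumed operating point; tuned on held-out data in practice
for text in ["tripped over a cable and fell", "laceration from box cutter"]:
    proba = model.predict_proba([text])[0]
    label = model.classes_[proba.argmax()]
    action = "auto-code" if proba.max() >= THRESHOLD else "refer to human coder"
    print(f"{text!r}: {label} ({proba.max():.2f}) -> {action}")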
Abstract:
This paper describes a semi-automatic tool for the annotation of multi-script text from natural scene images. To our knowledge, this is the first tool that deals with multi-script text of arbitrary orientation. The procedure involves manual seed selection followed by a region-growing process to segment each word present in the image. The threshold for region growing can be varied by the user so as to ensure pixel-accurate character segmentation. The text present in the image is tagged word by word. A virtual keyboard interface has also been designed for entering the ground truth in ten Indic scripts, besides English. The keyboard interface can easily be generated for any script, thereby expanding the scope of the toolkit. Optionally, each segmented word can further be labelled into its constituent characters/symbols. Polygonal masks are used to split or merge the segmented words into valid characters/symbols. The ground truth is represented by a pixel-level segmented image and a '.txt' file that contains information about the number of words in the image, word bounding boxes, script, and ground-truth Unicode. The toolkit, developed using MATLAB, can be used to generate ground truth and annotation for any generic document image. Thus, it is useful for researchers in the document image processing community for evaluating the performance of document analysis and recognition techniques. The multi-script annotation toolkit (MAST) is available for free download.
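The seeded, threshold-controlled region growing at the heart of the tool can be sketched in a few lines (a simplified stand-in for the MATLAB implementation, with 4-connectivity and an intensity criterion assumed):

import numpy as np
from collections import deque

def region_grow(gray, seed, threshold):
    # Grow a region from `seed` over 4-connected pixels whose intensity
    # stays within `threshold` of the seed value.
    h, w = gray.shape
    seed_val = float(gray[seed])
    mask = np.zeros((h, w), bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(gray[ny, nx]) - seed_val) <= threshold):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.zeros((20, 20), np.uint8)
img[5:15, 5:15] = 200                               # a bright "word" blob
print(region_grow(img, (10, 10), threshold=10).sum())  # 100 pixels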
Abstract:
Imaging flow cytometry is an emerging technology that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy. It allows high-throughput imaging of cells with good spatial resolution while they are in flow. This paper proposes a general framework for the processing/classification of cells imaged using an imaging flow cytometer. Each cell is localized by finding an accurate cell contour. Then, features reflecting cell size, circularity, and complexity are extracted for classification using an SVM. Unlike conventional iterative, semi-automatic segmentation algorithms such as active contours, we propose a non-iterative, fully automatic, graph-based cell localization. To evaluate the performance of the proposed framework, we successfully classified unstained, label-free leukaemia cell lines MOLT, K562, and HL60 from video streams captured using a custom-fabricated, cost-effective, microfluidics-based imaging flow cytometer. The proposed system is a significant step towards building a cost-effective cell analysis platform that would facilitate affordable mass screening camps looking at cellular morphology for disease diagnosis. Lay description: In this article, we propose a novel framework for processing the raw data generated by microfluidics-based imaging flow cytometers. Microfluidics microscopy, or microfluidics-based imaging flow cytometry (mIFC), is a recent microscopy paradigm that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy, allowing cells to be imaged while they are in flow. In comparison to conventional slide-based imaging systems, mIFC is a nascent technology enabling high-throughput imaging of cells and has yet to take the form of a clinical diagnostic tool. The proposed framework processes the raw data generated by mIFC systems. It incorporates several steps: pre-processing of the raw video frames to enhance the contents of the cell, localizing the cell by a novel, fully automatic, non-iterative, graph-based algorithm, extraction of different quantitative morphological parameters, and subsequent classification of cells. To evaluate the performance of the proposed framework, we successfully classified unstained, label-free leukaemia cell lines MOLT, K562, and HL60 from video streams captured using a cost-effective, microfluidics-based imaging flow cytometer. The HL60, K562, and MOLT cell lines were obtained from ATCC (American Type Culture Collection) and separately cultured in the lab; each culture therefore contains cells from its own category alone and thereby provides the ground truth. Each cell is localized by finding a closed cell contour: a directed, weighted graph is defined from the Canny edge image of the cell such that the closed contour lies along the shortest weighted path surrounding the centroid of the cell, from a starting point on a good curve segment to an immediate endpoint. Once the cell is localized, morphological features reflecting the size, shape, and complexity of the cells are extracted and used to develop a support vector machine based classification system. We could classify the cell lines with good accuracy, and the results were quite consistent across different cross-validation experiments.
We hope that imaging flow cytometers equipped with the proposed image-processing framework will enable cost-effective, automated, and reliable disease screening in overloaded facilities that cannot afford to hire skilled personnel in large numbers. Such platforms could facilitate screening camps in low-income countries, thereby transforming current health care paradigms by enabling rapid, automated diagnosis of diseases like cancer.
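The classification stage reduces each localized contour to size/shape features and feeds them to an SVM. A sketch under assumed features (area, perimeter, and circularity = 4πA/P²) with made-up measurements, not the study's data:

import numpy as np
from sklearn.svm import SVC

def shape_features(area, perimeter):
    # Size and circularity features of a localized cell contour; the exact
    # feature set used in the paper is an assumption here.
    circularity = 4 * np.pi * area / perimeter ** 2
    return [area, perimeter, circularity]

# Hypothetical contour measurements for cells of two classes.
X = np.array([shape_features(a, p) for a, p in
              [(120, 40), (130, 42), (300, 75), (310, 78)]])
y = ["K562", "K562", "MOLT", "MOLT"]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([shape_features(125, 41)]))  # -> ['K562']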
Abstract:
Master's and Doctoral Programme in Advanced Computer Systems, Informatika Fakultatea - Faculty of Informatics
Abstract:
Suspended sediments are one of the main factors affecting the quality of aquatic systems worldwide; they influence the geomorphic processes that shape the landscape and can indicate erosion and soil-loss problems in the contributing watershed. Their spatial and temporal monitoring is fundamental to the environmental management of coastal areas. In this context, the basic hypothesis of this research is that the spatial and temporal pattern of coastal sediment plumes, associated with the river's hydrological regime, can be characterized from medium-spatial-resolution orbital images. To test it, the mouth of the Paraíba do Sul river was chosen as the study area for defining and testing the methodology, and the main objective was to qualitatively map this river's coastal plume from Landsat 5 and CBERS-2 images acquired between 1985 and 2007. The dates evaluated were carefully defined through three analysis strategies, totalling fifty images. A literature review and an evaluation of the spectral response of the feature of interest in the selected images were the main steps in defining the methodology. The plumes were then identified, mapped, and extracted; their spatial and temporal characteristics were subsequently analysed in geographic information systems and evaluated together with historical discharge data. The results indicate that the red band provided the best internal discrimination of the plume and was therefore used as the basis for the analyses in this work. Except for the atmospheric-correction procedure, the proposed methodology uses simple digital image processing techniques, based on the integration of semi-automatic techniques and visual analysis. The evaluation of the sediment pattern and of the qualitative thematic maps of suspended-sediment concentration shows a strong contrast between scenarios representative of the river's flood and drought seasons. Spatial analyses of plume behaviour further contribute to a better knowledge of the geographic space, supporting a wide range of environmental planning and management activities.
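The qualitative concentration mapping amounts to binning red-band reflectance over water into turbidity classes; a minimal NumPy sketch with made-up break values (the study's actual thresholds came from its visual and spectral analysis):

import numpy as np

def plume_classes(red_band, breaks=(0.05, 0.10, 0.20)):
    # Qualitative suspended-sediment classes from red-band reflectance
    # over water: 0 = clear ... 3 = very turbid (break values illustrative).
    return np.digitize(red_band, breaks)

# Synthetic 4x4 reflectance patch standing in for a Landsat 5 red band.
red = np.array([[0.02, 0.04, 0.08, 0.12],
                [0.03, 0.07, 0.15, 0.25],
                [0.04, 0.09, 0.18, 0.30],
                [0.05, 0.11, 0.22, 0.35]])
print(plume_classes(red))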
Abstract:
The problem motivating this study is the lack of semantics in Web search mechanisms. To address it, the W3 consortium has been developing technologies aimed at building a Semantic Web, among them domain ontologies. In this context, the general goal of this dissertation is to discuss how semantics can be brought to searches in Web news aggregators. The specific goal is to present an application that uses semi-automatic classification of news, combining the search technologies of the information retrieval field with domain ontologies. The proposed system is a Web application capable of searching for news about a specific domain in information portals. It uses the Google Maps V1 API for the georeferenced location of the news whenever this information is available. To show the feasibility of the proposal, an example was developed based on an ontology for the domain of rainfall and its consequences. The results obtained by this new ontology-based feed are stored in a database and made available for query via the Web. The expectation is that the proposed feed yields more relevant results than an ordinary feed. The results obtained by combining technologies sponsored by the W3 consortium (XML, RSS, and ontologies) with Web page search tools were satisfactory for the intended purpose. Ontologies prove to be multi-purpose tools, and their analytical value in Web searches can be extended with computational applications suited to each case. As in the example presented in this dissertation, other concepts present in its aftermath were attached to the word rain; this highlighted the link between the rain event and the consequences it causes, an operation that was only possible through a delimited slice of the formal knowledge involved.
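A hypothetical sketch of the ontology-guided filtering step, with a toy concept set standing in for the rainfall ontology and news items assumed already fetched from an RSS feed:

# Concept names expanding the query term "rain" (from a domain ontology).
CONCEPTS = {"rain", "flood", "landslide", "overflow"}

items = [
    {"title": "Heavy rain causes landslide in Petropolis", "lat": -22.5, "lon": -43.2},
    {"title": "Stock market closes higher", "lat": None, "lon": None},
]

def relevant(item):
    # Keep an item if any ontology concept occurs in its title.
    words = set(item["title"].lower().split())
    return bool(words & CONCEPTS)

for item in filter(relevant, items):
    # Geo-referenced items could be pushed to a map layer (the study used
    # the Google Maps API); here we just print them.
    print(item["title"], item["lat"], item["lon"])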
Abstract:
The aim of this work was to carry out a quantitative and qualitative morphological study of the mandibular symphysis (SM) region through the construction of three-dimensional (3D) models, and to evaluate its degree of association with different facial-pattern classifications. Sixty-one dry human skulls of young adults with normal occlusion, aged 18 to 45 years and with complete dentition, were evaluated. Cone-beam computed tomography (CBCT) scans of all skulls were obtained in a standardized way. The facial pattern was determined by anthropometric and cephalometric methods. Using the anthropometric criterion, based on the facial index (IF), the facial pattern was classified as euryprosopic (≤84.9), mesoprosopic (85.0 - 89.9), or leptoprosopic (≥90.0). By the cephalometric criterion, the mandibular plane angle (FMA) classified the facial pattern as short (≤21.0), medium (21.1 - 29.0), or long (≥29.1), and the facial height index (IAF) classified the face as hypodivergent (≥0.750), normal (0.749 - 0.650), or hyperdivergent (≤0.649). The 3D models of the SM region were built with the ITK-SNAP software. The teeth present in this region (lower incisors, canines, and premolars) were separated from the model by a semi-automatic segmentation technique followed by manual refinement. 3D models containing only bone tissue were then obtained, allowing the measurement of bone volume in mm³ (VOL) and of radiographic density as the mean voxel intensity (Mvox). In Geomagic Studio 10, the 3D models were anatomically superimposed by best fit to establish a standardized midline cutting plane. For each symphysis, the height (Alt) and width (Larg) were measured and the height-to-width ratio (PAL) was calculated. Alveolar defects were assessed directly on the mandible, yielding the mean of all alveolar bone heights (AltOss) and the mean size of the dehiscences present (Medef). The intraclass correlation coefficient (ICC), with values between 0.923 and 0.994, indicated high reproducibility and reliability of the measured variables. Differences between the groups defined by the facial-pattern classifications (IF, FMA, and IAF) were evaluated by one-way ANOVA followed by Tukey's post-hoc test. The degree of association between the facial pattern and the variables VOL, Mvox, PAL, Alt, Larg, AltOss, and Medef was evaluated by Pearson's correlation coefficient with a t test for r. The results indicated no difference or association between the volume, radiographic density, or presence of alveolar defects of the SM and the facial pattern as determined by IF, FMA, and IAF. There was a tendency towards longer symphyses in individuals with long faces, but width showed no association with the facial pattern. These results suggest that the classifications used to determine the facial pattern do not satisfactorily represent the 3D character of the human face and are not associated with SM morphology.
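The statistical pipeline maps directly onto SciPy; a sketch of the one-way ANOVA and Pearson correlation steps with synthetic numbers, not the study's measurements (the Tukey post-hoc step is omitted for brevity):

import numpy as np
from scipy import stats

# Illustrative symphysis heights (mm) for three facial-pattern groups.
short_face = np.array([28.1, 27.5, 29.0, 28.4])
medium_face = np.array([29.2, 30.1, 29.8, 30.5])
long_face = np.array([31.0, 31.8, 30.9, 32.2])

# One-way ANOVA across the facial-pattern groups.
f, p = stats.f_oneway(short_face, medium_face, long_face)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Pearson correlation between a cephalometric measure (here, FMA angles,
# made up to match the 12 subjects) and symphysis height.
fma = np.array([18, 20, 25, 27, 30, 33, 35, 38, 40, 42, 44, 46])
height = np.concatenate([short_face, medium_face, long_face])
r, p_r = stats.pearsonr(fma, height)
print(f"Pearson: r = {r:.2f}, p = {p_r:.4f}")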