927 results for IMAGE PROCESSING COMPUTER-ASSISTED
Abstract:
in RoboCup 2007: Robot Soccer World Cup XI
Abstract:
Oceans - San Diego, 2013
Abstract:
The hardness test, and more specifically the Vickers microhardness test, is among the most widely used mechanical tests in industry, teaching, and product research and development within materials science. In most cases, this test is used to characterize materials or to control the manufacturing quality of metallic materials. It is relatively simple and quick to perform, and its results are comparable and relatable to other physical quantities among material properties. However, as a test method in which human intervention plays an important role, since the indentation produced by mechanical penetration is measured through an optical system, it exhibits some resulting weaknesses: dependence on the technicians' training and visual acuity, and visual fatigue that affects results over the course of a working shift. These phenomena affect the repeatability and reproducibility of the test results. CINFU owns a Vickers microhardness tester whose operation depends on a trained technician and therefore presents all the weaknesses mentioned above, which made it eligible for the study and application of an alternative solution. This dissertation thus presents the development of an alternative to the conventional optical method for measuring Vickers microhardness.
Using National Instruments LabVIEW together with its computer vision tools (NI Vision), the program first asks the technician to select the camera coupled to the microhardness tester for digital image acquisition and to choose the test method (test force). The program then processes the image (applying filters to remove background noise from the original image); next, the operator indicates the region of interest (ROI), after which the vertices of the indentation and the lengths of the resulting diagonals are identified automatically, concluding, once these are accepted, with the calculation of the resulting microhardness. The results were validated using certified hardness reference blocks (CRMs), with satisfactory outcomes and a high level of accuracy in the measurements. Finally, an Excel spreadsheet was developed to determine the uncertainty associated with Vickers microhardness measurements. The two possible methodologies, the conventional optical method and the computer vision tools, were then compared, and the proposed solution produced good results.
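The final step described above, computing the hardness value from the two measured diagonals, follows the standard Vickers formula. A minimal Python sketch (not the dissertation's LabVIEW/NI Vision code; function name and sample values are illustrative):

```python
import math

def vickers_hardness(force_kgf, d1_mm, d2_mm):
    """Standard Vickers hardness: HV = 2 F sin(136 deg / 2) / d^2,
    with the test force F in kgf and d the mean of the two measured
    indentation diagonals in mm."""
    d = (d1_mm + d2_mm) / 2.0  # mean diagonal
    return 2.0 * force_kgf * math.sin(math.radians(136.0 / 2.0)) / d ** 2

# Illustrative values: an HV 0.3 test (0.3 kgf) with ~52.7 um diagonals
# yields a hardness of roughly 200 HV.
hv = vickers_hardness(0.3, 0.0527, 0.0527)
```

The measurement-side work of the dissertation lies in obtaining the diagonal lengths d1 and d2 reliably from the indentation image; once they are known, the hardness itself is this one closed-form expression.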
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
In this thesis a semi-automated cell analysis system based on image processing is described. To achieve this, an image processing algorithm was studied in order to segment cells in a semi-automatic way. The main goal of this analysis is to increase the performance of the cell image segmentation process without significantly affecting the results. Even though a totally manual system can produce the best results, it has the disadvantage of being slow and repetitive when a large number of images needs to be processed. An active contour algorithm was tested on a sequence of images taken with a microscope. This algorithm, more commonly known as snakes, allows the user to define an initial region enclosing the cell. The algorithm then runs iteratively, making the contours of the initial region converge to the cell boundaries. From the final contour, it was possible to extract region properties and produce statistical data. These data indicate that the algorithm produces results similar to a purely manual system, but at a faster rate. On the other hand, it is slower than a fully automatic approach, but it allows the user to adjust the contour, making it more versatile and tolerant to image variations.
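Once the snake has converged, the "extract region properties" step reduces each cell to per-region statistics. A hedged stdlib-only sketch of that step, assuming the final contour is available as a closed polygon of (x, y) vertices (the helper name is hypothetical, not from the thesis):

```python
import math

def region_properties(contour):
    """Area (shoelace formula) and perimeter of a closed contour given
    as a list of (x, y) vertices -- the kind of per-cell statistics the
    converged snake makes available."""
    n = len(contour)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perimeter

# Unit square as a toy contour: area 1.0, perimeter 4.0.
a, p = region_properties([(0, 0), (1, 0), (1, 1), (0, 1)])
```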
Abstract:
This research aims to advance blink detection in the context of work activity. Rather than patients having to attend a clinic, blinking videos can be acquired in a work environment and then automatically analyzed. Accordingly, this paper presents a methodology for the automatic detection of eye blinks in consumer videos acquired with low-cost web cameras. The methodology detects the face and eyes of the recorded person, then analyzes low-level features of the eye region to build a quantitative feature vector. Finally, this vector is classified into one of the two categories considered (open and closed eyes) using machine learning algorithms. The effectiveness of the proposed methodology was demonstrated, as it provides unbiased results with classification errors under 5%.
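The final classification step maps each eye-region feature vector to "open" or "closed". The abstract does not name the classifier, so as an illustrative stand-in here is a nearest-centroid rule in plain Python (feature values and centroid names are invented for the example):

```python
def classify_eye(feature_vec, centroids):
    """Assign an eye-region feature vector to the class whose training
    centroid is nearest (squared Euclidean distance) -- a minimal
    stand-in for the paper's machine learning classifier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feature_vec, centroids[label]))

# Toy 2-D features (e.g. openness and gradient-energy proxies) with
# centroids learned from labeled training frames.
centroids = {"open": [0.8, 0.6], "closed": [0.1, 0.2]}
label = classify_eye([0.7, 0.5], centroids)
```

Any two-class learner (SVM, k-NN, etc.) slots into the same position; the pipeline's structure (detect face, detect eyes, featurize, classify) is unchanged.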
Abstract:
Doctoral thesis (Doctoral Program in Biomedical Engineering)
Abstract:
OBJECTIVE: To assess the effect of the inhibition of the angiotensin-converting enzyme on the collagen matrix (CM) of the heart of newborn spontaneously hypertensive rats (SHR) during embryonic development. METHODS: The study comprised the following 2 groups of SHR (n=5 each): treated group - rats conceived from SHR females treated with enalapril maleate (15 mg/kg/day) during gestation; and nontreated group - offspring of nontreated females. The newborns were euthanized within the first 24 hours after birth and their hearts were removed and processed for histological study. Three fields per animal were considered for computer-assisted digital analysis and determination of the volume densities (Vv) of the nuclei and CM. The images were segmented with the aid of Image Pro Plus® 4.5.029 software (Media Cybernetics). RESULTS: No difference was observed between the treated and nontreated groups in regard to body mass, cardiac mass, and the relation between cardiac and body mass. A significant reduction in the Vv[matrix] and a concomitant increase in the Vv[nuclei] were observed in the treated group as compared with those in the nontreated group. CONCLUSION: The treatment with enalapril of hypertensive rats during pregnancy alters the collagen content and structure of the myocardium of newborns.
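After segmentation, the volume density Vv of a phase reduces to the fraction of segmented pixels in each field. A minimal sketch of that computation, assuming the segmented field is available as a binary mask (the function name and toy mask are illustrative, not from the study):

```python
def volume_density(mask):
    """Volume density Vv of a phase (e.g. collagen matrix or nuclei) as
    the fraction of positive pixels in a binary mask of one field --
    the quantity estimated after segmenting each histological image."""
    total = sum(len(row) for row in mask)
    hits = sum(sum(row) for row in mask)
    return hits / total

# Toy 2x4 field: 3 of 8 pixels belong to the phase, so Vv = 0.375.
mask = [[0, 1, 1, 0],
        [0, 1, 0, 0]]
vv = volume_density(mask)
```

In the study this is averaged over the three fields per animal before comparing groups.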
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. 
Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
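The a posteriori (Bayesian) adjustment these tools perform combines a population prior on a pharmacokinetic parameter with the patient's measured concentration. As a heavily simplified, hypothetical illustration (a one-compartment steady-state infusion where Css = dose rate / CL, a log-normal prior on clearance, proportional residual error, and a grid search in place of proper optimization; none of this is taken from any of the reviewed programs):

```python
import math

def map_clearance(c_obs, dose_rate, cl_pop, omega, sigma):
    """MAP estimate of an individual clearance CL by grid search:
    balances a log-normal population prior (mean cl_pop, sd omega on
    the log scale) against the likelihood of the observed steady-state
    concentration c_obs under Css = dose_rate / CL with proportional
    error sigma."""
    best_cl, best_lp = cl_pop, -math.inf
    for i in range(1, 400):
        cl = cl_pop * i / 100.0  # grid from 0.01x to 4x the prior mean
        pred = dose_rate / cl
        lp = (-(math.log(cl / cl_pop) ** 2) / (2 * omega ** 2)  # prior
              - ((c_obs - pred) / (sigma * pred)) ** 2 / 2)     # likelihood
        if lp > best_lp:
            best_cl, best_lp = cl, lp
    return best_cl

# A measured 15 mg/L on a 60 mg/h infusion points to CL near 4 L/h;
# the prior pulls the estimate slightly toward the 5 L/h population value.
cl_hat = map_clearance(15.0, 60.0, 5.0, 0.3, 0.1)
```

The individualized estimate then drives the dose proposal (e.g. the dose rate needed to reach a target Css), which is the step the reviewed software automates across full multi-compartment models.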
Abstract:
The work presented provides an overview of wireless video capsule endoscopy and of the inspection of intestinal contraction sequences with state-of-the-art computer vision technologies. After a preliminary review of the required medical background, the computer vision application is presented in these terms. In essence, this work provides an exhaustive selection, description and evaluation of a set of image processing methods for motion analysis in sequences of images acquired with an endoscopic capsule. Finally, a software application is presented for configuring and using an experimental environment quickly and easily.
Abstract:
In recent years, multi-atlas fusion methods have gained significant attention in medical image segmentation. In this paper, we propose a general Markov Random Field (MRF) based framework that can perform edge-preserving smoothing of the labels at the time of fusing the labels itself. More specifically, we formulate the label fusion problem with MRF-based neighborhood priors as an energy minimization problem containing a unary data term and a pairwise smoothness term. We present how existing fusion methods like majority voting, global weighted voting and local weighted voting can be reframed to profit from the proposed framework, generating more accurate segmentations as well as more contiguous segmentations by getting rid of holes and islands. The proposed framework is evaluated for segmenting lymph nodes in 3D head and neck CT images. A comparison of various fusion algorithms is also presented.
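The energy the paper minimizes has exactly two ingredients: a per-site unary cost (how strongly the atlas votes support each label) and a pairwise Potts smoothness cost on neighboring labels. A tiny 1-D sketch, with brute-force minimization standing in for the paper's optimizer (the cost values are invented):

```python
from itertools import product

def mrf_fuse(unary, lam):
    """Exhaustive minimization of a 1-D Potts-style MRF energy:
    E(L) = sum_i unary[i][L_i] + lam * sum_i [L_i != L_{i+1}].
    unary[i][l] is the data cost of label l at site i (e.g. derived
    from weighted-vote support); lam weights the smoothness term."""
    n, labels = len(unary), range(len(unary[0]))
    best, best_e = None, float("inf")
    for L in product(labels, repeat=n):
        e = sum(unary[i][L[i]] for i in range(n))
        e += lam * sum(L[i] != L[i + 1] for i in range(n - 1))
        if e < best_e:
            best, best_e = L, e
    return list(best)

# A lone dissenting middle site (its votes mildly favor label 0) is
# smoothed over when its two neighbors firmly agree on label 1 --
# the "holes and islands" removal effect, in miniature.
unary = [[1.0, 0.0], [0.0, 0.6], [1.0, 0.0]]  # sites x labels {0, 1}
fused = mrf_fuse(unary, lam=0.5)
```

With lam = 0 this degenerates to per-site voting; real 3-D label maps require graph-cut or message-passing optimizers rather than enumeration.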
Abstract:
The Matic project aims to create a computer-assisted learning environment suited to working on any academic content and personalized for each student, following the guidelines set by the teacher, who will be able to monitor the student's academic progress.
Abstract:
A thorough study of graphical interfaces intended for the industrial sector. It analyzes the most frequent user profiles in this sector (their characteristics and their needs), presents and describes several design guidelines and graphical elements that meet a set of predefined requirements, assembles an example by presenting a series of screens (whose operation is explained and justified) and, finally, proposes a method for validating the design, a method that may entail modifications to the initial design.
Abstract:
We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual words representation, in all cases using the authors' own data sets and testing protocols. We also investigate the gain in adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
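The discriminative step operates on the low-dimensional topic distribution P(z|d) of each image rather than the raw word histogram. A stdlib-only sketch of the k-NN variant on that representation (the topic vectors and class names below are invented toy data, not the paper's learned topics):

```python
import math

def knn_classify(z, train, k=1):
    """k-nearest-neighbor classification in topic space: train is a list
    of (topic_vector, label) pairs; the query vector z (the image's
    P(z|d) from pLSA) is matched by cosine similarity and the top-k
    neighbors vote on the label."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    ranked = sorted(train, key=lambda tv: cos(z, tv[0]), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy 3-topic vectors for three training scenes; the query's topic mix
# is dominated by the first topic, matching the "coast" exemplar.
train = [([0.9, 0.1, 0.0], "coast"),
         ([0.1, 0.8, 0.1], "forest"),
         ([0.0, 0.2, 0.8], "city")]
label = knn_classify([0.7, 0.2, 0.1], train)
```

Swapping the k-NN rule for an SVM on the same topic vectors gives the paper's other classifier variant without changing the representation.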
Abstract:
This paper describes a method to extract the most relevant contours of an image. The presented method integrates the information of the local contours from the chromatic components H, S and I, taking into account a criterion of coherence of the local contour orientation values obtained from each of these components. The process is based on parametrizing the local contours pixel by pixel (magnitude and orientation values) from the H, S and I images. This is carried out individually for each chromatic component. If the dispersion of the obtained orientation values is high, that chromatic component loses relevance. A final processing stage integrates the extracted contours of the three chromatic components, generating the so-called integrated contours image.
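The relevance criterion above hinges on how scattered a component's local orientations are. One standard way to quantify this, shown here as a hedged sketch (the paper's exact dispersion measure is not given in the abstract), is the circular dispersion 1 - R, doubling the angles so that a contour orientation theta and theta + pi count as the same direction:

```python
import math

def orientation_dispersion(angles):
    """Circular dispersion (1 minus the mean resultant length R) of a
    set of local contour orientations, in radians, from one chromatic
    component. Angles are doubled because orientations are axial:
    theta and theta + pi describe the same contour direction.
    Near 0 = coherent orientations; near 1 = scattered."""
    c = sum(math.cos(2 * a) for a in angles) / len(angles)
    s = sum(math.sin(2 * a) for a in angles) / len(angles)
    return 1.0 - math.hypot(c, s)

coherent = orientation_dispersion([0.50, 0.52, 0.49, 0.51])  # tight cluster
scattered = orientation_dispersion([0.0, 1.2, 2.4, 0.7])     # spread out
```

A component whose dispersion is high would be down-weighted when the three contour maps are merged into the integrated contours image.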