10 results for Machine Vision and Image Processing
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Although hydrophobicity is a difficult parameter to determine in the field, it has been pointed out as a good option for monitoring the aging of polymeric outdoor insulators. For this purpose, digital image processing of photographs of wet insulators is currently the main technique. However, important challenges remain to be overcome: images taken under uncontrolled illumination conditions can interfere with the analysis, and there are no standard surfaces with different levels of hydrophobicity. In this paper, the photographic samples were digitally filtered to reduce the influence of illumination, and hydrophobic surface samples were prepared by wetting silicone surfaces with water-alcohol solutions. Furthermore, no previous studies were found that quantify and relate these properties through a mathematical function that could be used in the field by electrical utilities. Based on these considerations, high-quality images of numerous hydrophobic surfaces were obtained, and three image processing methodologies, the fractal dimension and two Haralick texture descriptors (entropy and homogeneity), combined with several digital filters, were compared. The entropy Haralick descriptor combined with the White Top-Hat filter gave the best hydrophobicity classification.
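To make the winning combination concrete, here is a minimal sketch, not the authors' code, of the kind of pipeline the abstract describes: a white top-hat filter to suppress slow illumination gradients, followed by Haralick entropy and homogeneity computed from a gray-level co-occurrence matrix with scikit-image. The file name and footprint size are illustrative.

import numpy as np
from skimage import io, img_as_ubyte
from skimage.color import rgb2gray
from skimage.morphology import white_tophat, disk
from skimage.feature import graycomatrix, graycoprops

def haralick_after_tophat(path, radius=15):
    gray = img_as_ubyte(rgb2gray(io.imread(path)))  # assumes an RGB photo
    # White top-hat keeps bright details smaller than the footprint,
    # discarding the slowly varying illumination component.
    filtered = white_tophat(gray, disk(radius))
    glcm = graycomatrix(filtered, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    p = glcm  # already normalized per (distance, angle) plane
    entropy = -(p * np.log2(p + 1e-12)).sum(axis=(0, 1)).mean()
    return entropy, homogeneity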
Abstract:
Metalinguistic skill is the ability to reflect upon language as an object of thought. Among metalinguistic skills, two seem to be associated with reading and spelling: morphological awareness and phonological awareness. Phonological awareness is the ability to reflect upon the phonemes that compose words, and morphological awareness is the ability to reflect upon the morphemes that compose words. The latter seems particularly important for reading comprehension and contextual reading, since these require syntactic and semantic information beyond phonological information. This study investigates, with a longitudinal design, the relation between those abilities and contextual reading as measured by the Cloze test. The first part of the study explores the relationship between morphological awareness tasks and Cloze scores through simple correlations; in the second part, the specificity of that relationship is examined using multiple regressions. The results give some support to the hypothesis that morphological awareness contributes to contextual reading in Brazilian Portuguese independently of phonological awareness.
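The "independent contribution" question maps onto a standard hierarchical regression. Below is a minimal sketch, with hypothetical file and column names, of how such a test could be run with statsmodels: fit a reduced model with phonological awareness only, then a full model adding morphological awareness, and inspect the added coefficient and the gain in R-squared.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reading_scores.csv")  # hypothetical data file

reduced = smf.ols("cloze ~ phonological", data=df).fit()
full = smf.ols("cloze ~ phonological + morphological", data=df).fit()

# An independent contribution shows up as a significant coefficient for
# 'morphological' and an increase in R-squared over the reduced model.
print(full.params["morphological"], full.pvalues["morphological"])
print(full.rsquared - reduced.rsquared)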
Abstract:
Listeria monocytogenes is a pathogen capable of adhering to many surfaces and forming biofilms, which may explain its persistence in food processing environments. This study aimed to genetically characterise L. monocytogenes isolates obtained from bovine carcasses and beef processing facilities and to evaluate their adhesion abilities. DNA from 29 L. monocytogenes isolates was subjected to enzymatic restriction digestion (AscI and ApaI), and two clusters were identified for serotypes 4b and 1/2a, with similarities of 48% and 68%, respectively. The adhesion ability of the isolates was tested considering inoculum concentration, culture media, carbohydrate source, NaCl concentration, incubation temperature, and pH. Each isolate was tested at 10⁸ CFU mL⁻¹ and classified according to its adhesion ability as weak (8 isolates), moderate (17), or strong (4). The isolates showed higher adhesion capability in non-diluted culture media, media at pH 7.0, incubation at 25 °C and 37 °C, and media with NaCl at 5% and 7%. No relevant differences were observed in adhesion ability with respect to the carbohydrate source. The results indicated a wide diversity of PFGE profiles among persistent L. monocytogenes isolates, without relation to their adhesion characteristics. It was also observed that stress conditions did not enhance the adhesion profile of the isolates.
Abstract:
Assuming that textbooks give literary expression to the cultural and ideological values of a nation or group, we propose an analysis of the chemistry textbooks used in Brazilian universities throughout the twentieth century. We analyzed iconographic and textual aspects of 31 textbooks that had significant diffusion in Brazilian universities in that period. From the iconographic analysis, nine categories of images were proposed: (1) laboratory and experimentation, (2) industry and production, (3) graphs and diagrams, (4) illustrations related to daily life, (5) models, (6) illustrations related to the history of science, (7) pictures or diagrams of animal, vegetable or mineral samples, (8) analogies and (9) concepts of physics. The distribution of images among the categories showed different emphases in the presentation of chemical content, reflecting commitments to different conceptions of chemistry over the period: chemistry was presented as an experimental science in the early twentieth century, the emphasis shifted to the principles of chemistry from the 1950s, and the period culminated in a chemistry of undeniable technological influence. The results showed that reflection not only on the history of science but also on the history of science education may be useful for the improvement of science education.
Abstract:
Bisphosphonate-related osteonecrosis of the jaws (BRONJ) is characterized as bone exposed in the jaws for more than 8 weeks in patients with a current or previous history of bisphosphonate (BP) therapy and no history of radiotherapy in the head and neck. We report a case series of 7 patients with BRONJ and analyze the variations of clinical and imaging signs, correlating them with the presence or absence of bone exposure. Among the patients, 6 were women and 1 was a man, aged 42–79 years. Five of the patients were using zoledronic acid and the other 2 alendronate. The duration of BP use varied from 3 to 13 years. In 5 patients, tooth extraction was the event that triggered the lesions. Panoramic radiographs and computed tomography (CT) scans were evaluated by a radiologist blinded to the cases. In 3 of the cases there were persistent unremodeled extraction sockets even several months after tooth extraction, consistent with CT findings that also showed areas of osteosclerosis and osteolysis. Patients were treated according to the recommendations of the AAOMS, with surgical debridement and antibiotic coverage with amoxicillin in the symptomatic patients. The follow-up of these patients ranged from 8 to 34 months, with a good response to treatment. The imaging findings in this case series were not specific and showed no differences among the stages of BRONJ (AAOMS, 2009). The imaging features were similar whether or not exposed bone was present.
Abstract:
Recently there has been considerable interest in dynamic textures, due to the explosive growth of multimedia databases. Dynamic texture also appears in a wide range of videos, which makes it very important in applications that model physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static, or spatial, textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. These feature vectors are then clustered by the well-known k-means algorithm. Although k-means has shown interesting results, it only guarantees convergence to a local minimum, which affects the final segmentation. To mitigate this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of the proposed approach compared with state-of-the-art segmentation methods.
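As an illustration of the clustering stage only, here is a minimal sketch, not the paper's implementation: one feature vector per pixel, clustered with scikit-learn's k-means under two initialization schemes, keeping the run with the lowest inertia. extract_features is a hypothetical stand-in for the deterministic partially self-avoiding walks computed on the three orthogonal planes.

import numpy as np
from sklearn.cluster import KMeans

def segment(features, n_segments=3):
    """features: (n_pixels, n_features) array, one row per pixel."""
    best = None
    for init in ("k-means++", "random"):
        km = KMeans(n_clusters=n_segments, init=init, n_init=10,
                    random_state=0).fit(features)
        if best is None or km.inertia_ < best.inertia_:
            best = km
    return best.labels_

# e.g.: labels = segment(extract_features(video)).reshape(height, width)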
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the ‖F_P‖_q problem for q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce this). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the algorithms' actual running time (as opposed to the provable worst-case scenario), as well as the influence of the choice of seeds on the output.
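A small toy illustration of the w → w^q reduction described above (my own example, not code from the paper): minimizing ‖F_P‖_q with weights w is the ‖F_P‖_1 min-cut/max-flow problem with weights w^q, and as q grows the optimal cut moves toward a ‖F_P‖_∞-style solution that avoids the heaviest boundary edge. The graph below has one heavy edge (weight 5) guarding s and three light edges (weight 2) guarding the hub a.

import networkx as nx

edges = [("s", "a", 5.0),
         ("a", "b1", 2.0), ("b1", "t", 9.0),
         ("a", "b2", 2.0), ("b2", "t", 9.0),
         ("a", "b3", 2.0), ("b3", "t", 9.0)]

for q in (1, 2, 8):
    G = nx.DiGraph()
    for u, v, w in edges:
        G.add_edge(u, v, capacity=w ** q)  # the w -> w^q substitution
        G.add_edge(v, u, capacity=w ** q)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    print(q, sorted(source_side))

# q = 1 cuts the single heavy edge (sum 5 < 2+2+2); larger q prefers the
# three light edges, whose maximum weight (2) is the smaller bottleneck.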
Abstract:
We examined achromatic contrast discrimination in 18 asymptomatic carriers of the 11778 mutation of Leber's hereditary optic neuropathy (LHON); 18 age-matched controls were also tested. To evaluate magnocellular (MC) and parvocellular (PC) contrast discrimination, we used a version of Pokorny and Smith's (1997) pulsed-pedestal and steady-pedestal paradigms (PPP/SPP), thought to be detected via the PC and MC pathways, respectively. A luminance pedestal (four 1° × 1° squares) was presented on a 12 cd/m² surround. The luminance of one of the squares (the trial square, TS) was randomly incremented for either 17 or 133 ms. Observers had to detect the TS, in a forced-choice task, at each duration, for three pedestal levels: 7, 12, and 19 cd/m². In the SPP, the pedestal was fixed and the TS was modulated. In the PPP, all four pedestal squares pulsed for 17 or 133 ms, and the TS was simultaneously incremented or decremented. We found that the contrast discrimination thresholds of LHON carriers were significantly higher than those of controls at the highest luminance of both paradigms, implying impaired contrast processing with no evidence of differential sensitivity losses between the two systems. Carriers' thresholds showed significantly longer temporal integration than controls' in the SPP, consistent with slowed MC responses. The SPP and PPP paradigms can identify contrast and temporal processing deficits in asymptomatic LHON carriers, and thus provide an additional tool for early detection and characterization of the disease.
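To make the difference between the two paradigms concrete, here is a schematic sketch of the luminance time courses they imply; the pedestal and increment values are placeholders taken from the ranges above, and this illustrates the design rather than reproducing the study's stimulus code.

import numpy as np

SURROUND = 12.0  # cd/m^2, the steady surround luminance

def steady_pedestal(pedestal, delta, t_ms, trial_ms=17):
    """Pedestal squares shown continuously; only the trial square steps."""
    trial_square = np.full(t_ms.shape, pedestal)
    trial_square[(t_ms >= 0) & (t_ms < trial_ms)] += delta
    other_squares = np.full(t_ms.shape, pedestal)
    return trial_square, other_squares

def pulsed_pedestal(pedestal, delta, t_ms, trial_ms=17):
    """All four squares rest at the surround level and pulse together."""
    in_trial = (t_ms >= 0) & (t_ms < trial_ms)
    other_squares = np.where(in_trial, pedestal, SURROUND)
    trial_square = np.where(in_trial, pedestal + delta, SURROUND)
    return trial_square, other_squares

t = np.arange(-100, 200)  # ms relative to trial onset
ts, others = pulsed_pedestal(pedestal=19.0, delta=2.0, t_ms=t, trial_ms=133)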
Abstract:
Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties that arise while the video is being captured, such as illumination changes, distracting events (e.g., elements moving in the background), and camera shake. This paper presents a survey of segmentation methods for background substitution applications, describes the main concepts, and identifies events that may cause errors. Our analysis shows that the most robust methods rely on specific devices (multiple cameras, or sensors that generate depth maps) to aid the process. To achieve comparable results with conventional devices (monocular video cameras), most current research relies on energy minimization frameworks, in which temporal and spatial information is probabilistically combined with color and contrast information.
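For a sense of the monocular baseline such methods improve upon, here is a minimal sketch, illustrative rather than drawn from the survey, of background substitution with OpenCV's learned per-pixel background model; real systems refine this mask with the energy minimization schemes discussed above. The video and image file names are placeholders.

import cv2
import numpy as np

cap = cv2.VideoCapture("webcam.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
new_bg = cv2.imread("beach.jpg")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # 255 = foreground, 127 = shadow
    mask = cv2.medianBlur(mask, 5)      # cheap spatial smoothing
    fg = (mask == 255)[..., None]
    bg = cv2.resize(new_bg, (frame.shape[1], frame.shape[0]))
    composite = np.where(fg, frame, bg)
    cv2.imshow("composite", composite)
    if cv2.waitKey(1) == 27:            # Esc quits
        break
cap.release()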
Abstract:
OBJECTIVE: To evaluate tools for the fusion of images generated by computed tomography and by structural and functional magnetic resonance imaging. METHODS: Structural and functional magnetic resonance images were acquired in a 3-Tesla scanner while a volunteer, who had previously undergone cranial computed tomography, performed motor and somatosensory tasks. Image data were analyzed with different programs, and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. No single computational tool contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.
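As an illustration of one step such a pipeline typically involves, here is a minimal sketch, with placeholder file names, that registers a functional volume to the structural scan by calling FSL's flirt tool from Python; it assumes flirt is on the PATH and is not code from the study.

import subprocess

# Rigid-body registration of the functional volume to the structural
# scan; the output matrix can be reused to bring other maps (e.g. the
# CT-derived images) into the same space.
subprocess.run([
    "flirt",
    "-in", "func.nii.gz",        # placeholder input volume
    "-ref", "struct.nii.gz",     # placeholder reference volume
    "-out", "func_in_struct.nii.gz",
    "-omat", "func2struct.mat",
    "-dof", "6",                 # 6 degrees of freedom = rigid body
], check=True)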