877 results for Segmentation, Targeting and Positioning


Relevance:

40.00%

Publisher:

Abstract:

Human intestinal parasites constitute a problem in most tropical countries, causing death or physical and mental disorders. Their diagnosis usually relies on the visual analysis of microscopy images, with error rates that may range from moderate to high. The problem has been addressed via computational image analysis, but only for a few species and for images free of fecal impurities. In routine practice, fecal impurities are a real challenge for automatic image analysis. We have circumvented this problem with a method that can segment and classify, from bright-field microscopy images with fecal impurities, the 15 most common species of protozoan cysts, helminth eggs, and larvae in Brazil. Our approach exploits ellipse matching and the image foresting transform for image segmentation, multiple object descriptors and their optimum combination by genetic programming for object representation, and the optimum-path forest classifier for object recognition. The results indicate that our method is a promising approach toward the full automation of enteroparasitosis diagnosis. © 2012 IEEE.
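
The candidate-detection stage lends itself to a short illustration. The sketch below shows only the ellipse-matching idea, used to pick roughly elliptical objects (cysts, eggs) out of the impurity clutter; OpenCV, the Otsu threshold, and the area and fit-quality cutoffs are our illustrative assumptions, and the paper's image foresting transform, genetic-programming descriptor combination, and optimum-path forest stages are not reproduced here.

```python
import cv2
import numpy as np

def candidate_objects(gray, min_area=200, max_ellipse_error=0.15):
    """Return contours whose shape is close to their best-fit ellipse."""
    # Assumed preprocessing: parasites are darker than the bright-field
    # background, hence the inverted Otsu threshold.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        if len(c) < 5 or cv2.contourArea(c) < min_area:
            continue  # fitEllipse needs >= 5 points; drop tiny impurities
        ellipse = cv2.fitEllipse(c)
        (_, (a, b), _) = ellipse  # center, full axis lengths, angle
        ellipse_area = np.pi * (a / 2) * (b / 2)
        # Cheap fit-quality measure: how much the contour's area deviates
        # from the area of its best-fit ellipse.
        error = abs(ellipse_area - cv2.contourArea(c)) / ellipse_area
        if error < max_ellipse_error:
            candidates.append((c, ellipse))
    return candidates
```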

Relevance:

40.00%

Publisher:

Abstract:

Includes bibliography.

Relevance:

40.00%

Publisher:

Abstract:

The document evaluates the presence of segmentation in the Argentine labour market. The analysis centres on a comparison of the earnings of formal and informal workers, using two different approaches to the definition of informality. The existence of a formal premium is tested using dynamic data and semiparametric techniques. The period analysed is 1996-2006, covering all surveyed urban areas. Our results support the segmentation hypothesis for the Argentine urban labour market: workers with similar probabilities of entering and exiting sectors obtain different earnings.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a comparative analysis of the results produced by two techniques for detecting and segmenting moving bodies captured in an image sequence, namely: 1) a technique based on the temporal average of the values of each pixel recorded over N consecutive frames, and 2) a technique based on the historical values associated with pixels recorded across different frames of the sequence.
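
A minimal sketch of the two techniques, in their usual textbook formulations, may make the comparison concrete; the exact update rules, thresholds, and decay used in the paper are assumptions here.

```python
import numpy as np

def temporal_average_mask(frames, current, threshold=25):
    """Technique 1: the background model is the per-pixel mean of N
    consecutive frames; pixels far from it are declared moving."""
    background = np.mean(frames, axis=0)          # (N, H, W) -> (H, W)
    return np.abs(current.astype(float) - background) > threshold

def update_motion_history(prev_mhi, motion_mask, decay=1.0, tau=255.0):
    """Technique 2: a per-pixel history of recent motion. Pixels that just
    moved are set to tau; all others decay toward zero, so the stored value
    encodes how recently each pixel changed across the frame sequence."""
    return np.where(motion_mask, tau, np.maximum(prev_mhi - decay, 0.0))
```

In a loop over the sequence, the mask from the first technique can feed the history update of the second, and a segmentation follows by thresholding either map.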

Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

Background: This study analyzed the positioning of the head, trunk, and upper extremities during gait in children with visual impairment. Methods: A total of 11 children participated in this study: 6 with blindness and 5 with low vision. The kinematics of the positioning of the head, trunk, shoulders, and elbows was analyzed for each participant during the four phases of the gait cycle: foot strike, support, toe-off, and swing. Results: There were significant differences between children with blindness and children with low vision in the positioning of the trunk in the sagittal plane during the foot-strike, support, and swing phases. Conclusions: The analysis identified postural alterations of the head, trunk, shoulder, and elbow during the children's gait, highlighting the relevance of appropriate stimulation at an early age in orientation and mobility programs, as well as the essential presence of professionals who work with movement.

Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

Research on image processing has shown that combining segmentation methods can lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as the final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance-learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark, our goal is to compare the results obtained by this method with previous work in order to validate its performance.
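
A rough sketch of this pipeline with scikit-image and scikit-learn follows. The compact-watershed marker count, the Gaussian similarity on mean gray levels, and sigma are illustrative stand-ins for the paper's hierarchical criteria and learned, redistributed weights; sklearn's SpectralClustering on the precomputed affinity plays the role of the NCut step.

```python
import numpy as np
from skimage import color, filters, segmentation
from sklearn.cluster import SpectralClustering

def watershed_ncut(rgb_image, n_segments=8, sigma=0.1):
    gray = color.rgb2gray(rgb_image)
    # Over-segment with a compact watershed on the gradient image, then
    # relabel the regions 0..n-1.
    labels = segmentation.watershed(filters.sobel(gray), markers=250,
                                    compactness=0.001)
    labels = np.unique(labels, return_inverse=True)[1].reshape(labels.shape)
    n = labels.max() + 1
    means = np.array([gray[labels == i].mean() for i in range(n)])
    # Region-adjacency graph: regions that touch horizontally or
    # vertically share an edge.
    adj = np.zeros((n, n), dtype=bool)
    adj[labels[:-1, :], labels[1:, :]] = adj[labels[1:, :], labels[:-1, :]] = True
    adj[labels[:, :-1], labels[:, 1:]] = adj[labels[:, 1:], labels[:, :-1]] = True
    # Gaussian similarity on mean intensities, restricted to graph edges.
    similarity = np.exp(-(means[:, None] - means[None, :]) ** 2 / sigma ** 2)
    similarity *= adj
    clusters = SpectralClustering(n_clusters=n_segments,
                                  affinity='precomputed').fit_predict(similarity)
    return clusters[labels]  # map region clusters back to pixels
```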

Relevance:

40.00%

Publisher:

Abstract:

Lipid nanoemulsions (LDE) may be used as carriers of paclitaxel (PTX) and etoposide (ETP) to decrease the toxicity and increase the therapeutic action of those drugs. The current study investigates combined chemotherapy with PTX and ETP associated with LDE. Four groups of 10-20 B16F10 melanoma-bearing mice were treated with LDE-PTX and LDE-ETP in combination (LDE-PTX + ETP), commercial PTX and ETP in combination (PTX + ETP), LDE-PTX alone, and LDE-ETP alone. PTX and ETP doses were 9 μmol/kg, administered in three intraperitoneal injections on three alternate days. Two control groups of mice were treated with saline solution or LDE alone. Tumor growth, presence of metastases, cell-cycle distribution, blood cell counts, and histological data were analyzed. The toxicity of all treatments was evaluated in mice without tumors. Tumor growth inhibition was similarly strong in all treatment groups. However, there was a greater reduction in the number of animals bearing metastases in the LDE-PTX + ETP group (30%) than in the PTX + ETP group (82%, p < 0.05). Reduced cellular density, fewer blood vessels, and an increase in collagen fibers in tumor tissues were observed in the LDE-PTX + ETP group but not in the PTX + ETP group, and in both groups melanoma-related anemia and thrombocytosis were reduced. Flow cytometric analysis suggested that LDE-PTX + ETP exhibited greater selectivity for neoplastic cells than PTX + ETP, showing arrest (65%) in the G2/M phase of the cell cycle (p < 0.001). Toxicity, manifested by weight loss and myelosuppression, was markedly milder in the LDE-PTX + ETP group than in the PTX + ETP group. The LDE-PTX + ETP combined drug-targeting therapy showed markedly superior anti-cancer properties and reduced toxicity compared to PTX + ETP.

Relevance:

40.00%

Publisher:

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
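
The reduction stated above can be written out explicitly (notation as reconstructed in the abstract; bd(P) denotes the set of boundary edges of the object P):

```latex
\[
  \|F_P\|_q^q \;=\; \sum_{e \in \mathrm{bd}(P)} w(e)^q
  \;=\; \bigl\|F_P^{(q)}\bigr\|_1,
  \qquad \text{where } F_P^{(q)}(e) := w(e)^q .
\]
% Since x -> x^{1/q} is monotone, minimizing \|F_P\|_q over objects P is the
% same as minimizing \|F_P^{(q)}\|_1, so a single 1-norm solver (GC_sum)
% run with weights w^q covers every finite q.
```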

Relevance:

40.00%

Publisher:

Abstract:

Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties that may occur while the video is being captured, such as illumination changes, distracting events (e.g., elements moving in the background), and camera shake. This paper presents a survey of segmentation methods for background-substitution applications, describes the main concepts, and identifies events that may cause errors. Our analysis shows that robust methods rely on specific devices (multiple cameras or sensors that generate depth maps) to aid the process. To achieve the same results using conventional devices (monocular video cameras), most current research relies on energy-minimization frameworks, in which temporal and spatial information are probabilistically combined with color and contrast information.
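
In this literature the energy typically takes the contrast-sensitive form below; this is the standard formulation rather than one quoted from the survey itself, with α_i the foreground/background label of pixel i, I_i its color, and N the set of neighboring pixel pairs:

```latex
\[
  E(\alpha) \;=\;
  \sum_{i} \bigl[\, U_{\mathrm{color}}(\alpha_i)
                 + U_{\mathrm{motion}}(\alpha_i) \,\bigr]
  \;+\; \lambda \sum_{(i,j)\in\mathcal{N}}
        [\alpha_i \neq \alpha_j]\, e^{-\beta \lVert I_i - I_j \rVert^2}.
\]
% The unary terms score each label against probabilistic color and temporal
% (motion) models; the pairwise term penalizes label changes except across
% strong color edges. For binary labels, this energy is minimized exactly
% by a min-cut/max-flow computation.
```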

Relevance:

40.00%

Publisher:

Abstract:

The aim of this study was to evaluate the physicochemical properties of the pulp of four avocado varieties (Avocado, Guatemala, Dickinson, and Butter Pear) and to identify which has the greatest potential for oil extraction. Fresh avocado pulp was characterized by determining its moisture, protein, fat, ash, carbohydrate, and energy contents. The carotenoid and chlorophyll contents were determined by an organic-solvent extraction method. The results showed significant differences in fruit composition among the varieties. The striking feature common to all varieties, however, is a high lipid content; Avocado and Dickinson are the most suitable varieties for oil extraction, taking into account the moisture and lipid contents of the pulp. Moreover, Dickinson proved the most affected by the parameters evaluated in terms of overall quality. Chlorophyll and carotenoids, both fat-soluble pigments, showed a negative correlation with lipid content, which may be related to their function in the fruit. The Avocado and Dickinson varieties are thus an alternative for oil extraction with great commercial potential, avoiding waste and increasing farmers' income.

Relevance:

40.00%

Publisher:

Abstract:

Bromodomains are epigenetic reader domains that have recently become popular targets. In contrast to BET bromodomains, which have proven druggable, bromodomains from other regions of the phylogenetic tree have shallower pockets. We describe successful targeting of the challenging BAZ2B bromodomain using biophysical fragment screening and structure-based optimization of high ligand-efficiency fragments into a novel series of low-micromolar inhibitors. Our results provide attractive leads for development of BAZ2B chemical probes and indicate the whole family may be tractable.

Relevance:

40.00%

Publisher:

Abstract:

Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. Dynamic textures are moving textures in which the concept of self-similarity, familiar from static textures, is extended to the spatio-temporal domain. In this paper, we propose a novel approach to dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed in three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering, and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.
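
A minimal sketch of one such walk on a single plane of the video may help (the method runs walks on the xy, xt, and yt planes); the memory size mu, the intensity-based movement rule, and the crude loop test below are simplifications of the published procedure.

```python
import numpy as np

def tourist_walk(plane, start, mu=2, max_steps=500):
    """One deterministic, partially self-avoiding walk over a 2D plane.

    The walker repeatedly moves to the 8-neighbor with the most similar
    intensity among those not visited in its last `mu` steps.
    """
    h, w = plane.shape
    path = [start]
    while len(path) < max_steps:
        y, x = path[-1]
        recent = set(path[-mu:])  # memory: last mu positions are forbidden
        neighbors = [(y + dy, x + dx)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy or dx)
                     and 0 <= y + dy < h and 0 <= x + dx < w
                     and (y + dy, x + dx) not in recent]
        if not neighbors:
            break  # the walker is trapped
        # Deterministic rule: move to the neighbor of closest intensity.
        nxt = min(neighbors,
                  key=lambda p: abs(int(plane[p]) - int(plane[y, x])))
        if nxt in path[:-mu]:
            break  # crude attractor test: the walk has looped back
        path.append(nxt)
    return path  # transient and attractor lengths are derived from paths
```

Histograms of the transient and attractor lengths over many start pixels, gathered from the three planes, would then form the combined appearance-and-motion signature.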

Relevance:

40.00%

Publisher:

Abstract:

This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear, and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language that basically adopts an XHTML syntax and is able to capture a posteriori only the content of a digital document. It is compared with other languages and proposals in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user advantages, workflow improvements, and impact on the overall quality of the output. In particular, they cover heterogeneous content-management processes: from web editing to collaboration (IsaWiki and WikiFactory), and from e-learning (IsaLearning) to professional printing (IsaPress).