14 results for foreground background segmentation
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties of problem situations (e.g., illumination changes, distracting events such as elements moving in the background, and camera shake) that may occur while the video is being captured. This paper presents a survey of segmentation methods for background substitution applications, describes the main concepts and identifies events that may cause errors. Our analysis shows that robust methods rely on specific devices (multiple cameras or sensors that generate depth maps) to aid the process. To achieve the same results using conventional devices (monocular video cameras), most current research relies on energy minimization frameworks, in which temporal and spatial information is probabilistically combined with color and contrast information.
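As a toy illustration of the color cue such frameworks combine (a minimal sketch only; the function name and threshold are illustrative and not from the survey, and real systems add contrast and temporal terms minimized jointly, e.g. with graph cuts):

```python
import numpy as np

def segment_foreground(frame, background, tau=30.0):
    """Label pixels whose color deviates from a static background model.

    This is only the per-pixel 'color' term; energy-minimization
    frameworks combine it with contrast and temporal terms.
    """
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    # Sum the absolute difference over color channels, then threshold.
    return diff.sum(axis=-1) > tau

# Toy 2x2 RGB frame: one pixel differs strongly from the background.
bg = np.zeros((2, 2, 3))
fr = bg.copy()
fr[0, 0] = (120, 120, 120)   # hypothetical "foreground" pixel
mask = segment_foreground(fr, bg)
```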
Abstract:
Abstract Background Atherosclerosis causes millions of deaths annually, yielding billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained through segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and usefulness. Method An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards made by experts. The following accuracy was obtained: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions In conclusion, by segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation.
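The Otsu step can be illustrated with a plain-NumPy version of the classic Otsu threshold (a generic sketch, not the paper's adapted variant; the toy intensities are illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)     # 0/0 at the histogram ends
    return int(np.argmax(sigma_b2))

# Bimodal toy "image": dark lumen (~20) against bright tissue (~200).
img = np.concatenate([np.full(500, 20), np.full(500, 200)])
t = otsu_threshold(img)
binary = img > t
```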
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
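The image foresting transform underlying such methods can be sketched with the classic f_max path cost (a Dijkstra variant); note this is a generic IFT on an abstract graph, not the paper's novel riverbed connectivity function:

```python
import heapq

def ift_maxarc(n_nodes, edges, seeds):
    """Image foresting transform with the f_max path cost:
    cost(path) = maximum arc weight along the path.
    `edges` maps node -> list of (neighbor, weight); `seeds` start at cost 0.
    """
    INF = float("inf")
    cost = {v: INF for v in range(n_nodes)}
    heap = []
    for s in seeds:
        cost[s] = 0
        heapq.heappush(heap, (0, s))
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue                      # stale queue entry
        for v, w in edges.get(u, []):
            new = max(c, w)               # f_max instead of additive cost
            if new < cost[v]:
                cost[v] = new
                heapq.heappush(heap, (new, v))
    return cost

# 4-node toy graph with symmetric arcs.
g = {0: [(1, 5), (3, 9)], 1: [(0, 5), (2, 2)],
     2: [(1, 2), (3, 1)], 3: [(0, 9), (2, 1)]}
costs = ift_maxarc(4, g, seeds=[0])
```

Note that node 3 is reached at cost 5 through 0-1-2-3 rather than cost 9 through the direct arc, the hallmark of the max-arc connectivity.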
Abstract:
We show that a single imperfect fluid can be used as a source to obtain a mass-varying black hole in an expanding universe. This approach generalizes the well-known McVittie spacetime, by allowing the mass to vary thanks to a novel mechanism based on the presence of a temperature gradient. This fully dynamical solution, which does not require phantom fields or fine-tuning, is a step forward in a new direction in the study of systems whose local gravitational attraction is coupled to the expansion of the universe. We present a simple but instructive example for the mass function and briefly discuss the structure of the apparent horizons and the past singularity.
Abstract:
We study the interaction between dark sectors by considering the momentum transfer caused by dark matter scattering elastically within the dark energy fluid. Describing dark scattering in analogy to Thomson scattering, which couples baryons and photons, we examine the impact of dark scattering on CMB observations. Performing a global fit with the latest observational data, we find that for a dark energy equation of state w < -1, the CMB gives tight constraints on dark matter-dark energy elastic scattering. Assuming a dark matter particle of proton mass, we derive an elastic scattering cross section of σ_D < 3.295 × 10⁻¹⁰ σ_T, where σ_T is the Thomson scattering cross section. For w > -1, however, the constraints are poor. For w = -1, σ_D can formally take any value.
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ_∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ‖F_P‖_q, q ∈ [1,∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1,∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the algorithms' actual running time (as opposed to the provable worst-case scenario), as well as the influence of the choice of seeds on the output.
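The observation that minimizing ‖F_P‖_q is the same as minimizing ‖F_P‖_1 under weights w^q, and that large q approaches the max-norm solution, can be checked by brute force on a toy graph (an illustrative example under assumed edge weights, not the GC algorithms themselves):

```python
from itertools import combinations

# Tiny undirected graph; source s = 0, sink t = 3 (t is never in a cut set).
edges = {(0, 1): 5, (1, 2): 3, (1, 3): 3, (2, 3): 3}
s = 0

def boundary(part):
    """Weights of edges crossing the cut (part vs its complement)."""
    return [w for (u, v), w in edges.items() if (u in part) != (v in part)]

def best_cut(q):
    """Brute-force minimizer of sum of w**q over boundary edges."""
    cuts = [set([s]) | set(c) for r in range(3)
            for c in combinations([1, 2], r)]
    return min(cuts, key=lambda p: sum(w ** q for w in boundary(p)))

cut1 = best_cut(1)         # classic min-cut, i.e. the ell_1 minimizer
cut_hi = best_cut(8)       # large q approaches the ell_inf (GC(max)) solution
maxw_hi = max(boundary(cut_hi))
```

Here the ℓ_1 minimizer cuts the single weight-5 edge, while raising weights to the 8th power steers the minimum toward the cut whose largest boundary weight is 3, as the ℓ_∞ criterion demands.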
Abstract:
Latin American countries are in a privileged position to tackle the environmental crisis by producing a new framework of relations and interdependencies: a biocivilization. It is inspired by the ideas of Gourou and Sachs, founded on centralities other than those of the "global market", and fed by "sources" other than high-carbon ones, embodied in examples like those provided by the Amazons, which teach us how the interaction between cultural and natural elements can produce the main source of biodiversity on the planet and its invaluable environmental services. Countries that share such conditions can establish (inter)dependencies guided by references and values different from those that have presided over the hegemonic order. For that, however, paths that lead only to "cooperation" in the construction of "common markets" will have to be replaced by others, e.g., dialogues between the Organization of the Amazon Cooperation Treaty and the Pan-Amazonian Social Forum.
Abstract:
The effect of event background fluctuations on charged particle jet reconstruction in Pb-Pb collisions at √s_NN = 2.76 TeV has been measured with the ALICE experiment. The main sources of non-statistical fluctuations are characterized based purely on experimental data with an unbiased method, as well as by using single high-p_T particles and simulated jets embedded into real Pb-Pb events and reconstructed with the anti-k_T jet finder. The influence of a low transverse momentum cut-off on particles used in the jet reconstruction is quantified by varying the minimum track p_T between 0.15 GeV/c and 2 GeV/c. For embedded jets reconstructed from charged particles with p_T > 0.15 GeV/c, the uncertainty in the reconstructed jet transverse momentum due to the heavy-ion background is measured to be 11.3 GeV/c (standard deviation) for the 10% most central Pb-Pb collisions, slightly larger than the value of 11.0 GeV/c measured using the unbiased method. For a higher particle transverse momentum threshold of 2 GeV/c, which will generate a stronger bias towards hard fragmentation in the jet finding process, the standard deviation of the fluctuations in the reconstructed jet transverse momentum is reduced to 4.8-5.0 GeV/c for the 10% most central events. A non-Gaussian tail of the momentum uncertainty is observed and its impact on the reconstructed jet spectrum is evaluated for varying particle momentum thresholds, by folding the measured fluctuations with steeply falling spectra.
Abstract:
We present the first numerical implementation of the minimal Landau background gauge for Yang-Mills theory on the lattice. Our approach is a simple generalization of the usual minimal Landau gauge and is formulated for the general SU(N) gauge group. We also report on preliminary tests of the method in the four-dimensional SU(2) case, using different background fields. Our tests show that the convergence of the numerical minimization process is comparable to the case of a null background. The uniqueness of the minimizing functional employed is briefly discussed.
Abstract:
Abstract Background Recent studies have raised controversy regarding the association between cesarean section and later obesity in the offspring. The purpose of this study was to assess the association of cesarean section with increased body mass index (BMI) and obesity in school children from two Brazilian cities with distinct socioeconomic backgrounds. Methods Two birth cohorts, respectively born in 1994 in Ribeirao Preto, a wealthy city in the Southeast, and in 1997/98 in Sao Luis, a less wealthy city in the Northeast of Brazil, were evaluated. After birth, 2,846 mother-newborn pairs were evaluated in Ribeirao Preto and 2,542 in Sao Luis. In 2004/05, 790 children aged 10/11 years were randomly reassessed in Ribeirao Preto and 673 aged 7/9 years in Sao Luis. Information on type of delivery, maternal and child characteristics, socioeconomic position and anthropometric measurements was collected after birth and at school age. Obesity was defined as BMI ≥ 95th percentile at school age. Results The obesity rate was 13.0% in Ribeirao Preto and 2.1% in Sao Luis. Cesarean section was associated with obesity and remained significant after adjustment only in Ribeirao Preto [OR = 1.74 (95% CI: 1.04; 2.92)]. The association between cesarean section and BMI remained significant after adjustment for maternal schooling, maternal smoking during pregnancy, duration of breastfeeding, gender, birth weight and gestational age, type of school and, only in Sao Luis, pre-pregnancy maternal weight. In Ribeirao Preto, children born by cesarean section had a BMI 0.31 kg/m² (95% CI: 0.11; 0.51) higher than those born by vaginal delivery. In Sao Luis, the BMI of children born by cesarean section was 0.28 kg/m² higher (95% CI: 0.08; 0.49) than that of those born by vaginal delivery. Conclusion A positive association between cesarean section and increased BMI z-score was demonstrated in areas with different socioeconomic status in a middle-income country.
Abstract:
OBJECTIVE: To propose an automatic brain tumor segmentation system. METHODS: The system used texture characteristics as its main source of information for segmentation. RESULTS: The mean correct match between the segmented areas and the ground truth was 94%. CONCLUSION: The final results showed that the proposed system was able to find and delimit tumor areas without requiring any user interaction.
Abstract:
The parenchymal distribution of the splenic artery was studied in order to obtain an anatomical basis for partial splenectomy. Thirty-two spleens were studied: 26 spleens of healthy horses weighing 320 to 450 kg, aged 3 to 12 years, and 6 spleens of fetuses obtained from a slaughterhouse. The spleens were submitted to arteriography and scintigraphy so that their vascular pattern could be examined and compared to the external aspect of the organ, aiming to establish anatomo-surgical segments. All radiographs were photographed with a digital camera, and the digital images were submitted to a measuring system for comparative analysis of the areas of the dorsal and ventral anatomo-surgical segments. Anatomical investigations into the angioarchitecture of the equine spleen showed a paucivascular area, which coincides with a thinner external area, allowing the organ to be divided into two anatomo-surgical segments of approximately 50% of the organ each.
Abstract:
Recently there has been considerable interest in dynamic textures due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important in applications concerned with modeling physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static or spatial textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. Then, these feature vectors are clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only guarantees convergence to a local minimum, which affects the final segmentation result. To overcome this drawback, we compare six initialization methods for k-means. The experimental results have demonstrated the effectiveness of our proposed approach compared to state-of-the-art segmentation methods.
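A minimal sketch of the clustering stage (generic Lloyd's k-means with a farthest-point seeding, one of several initializations one might compare; the walk-based feature extraction is omitted and the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, centers, iters=20):
    """Lloyd's algorithm from a given initialization; returns centers, labels."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(len(centers))])
    return centers, labels

def maximin_init(X, k):
    """Farthest-point seeding: each new center maximizes the distance
    to the centers chosen so far, spreading the initialization apart."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])
    return np.array(centers)

# Two well-separated synthetic "feature vector" clusters.
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
centers, labels = kmeans(X, maximin_init(X, 2))
```

A purely random initialization could place both seeds in the same cluster and stall in a poor local minimum, which is exactly the sensitivity the comparison of initialization methods addresses.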
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity for static textures is extended to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed in three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.