911 results for foreground background segmentation
Abstract:
This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out using a publicly available synthetic database with 408,000 hand images on different backgrounds, comparing performance in terms of accuracy and computational cost with two competitive segmentation methods in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.
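The Gaussian multiscale aggregation algorithm itself is not reproduced here. The sketch below only illustrates the generic multiscale idea behind such methods: segment at a coarse Gaussian-pyramid level, where fine background texture is smoothed away, then propagate the mask back to full resolution. The function name, pyramid depth and use of Otsu thresholding are illustrative assumptions, not the authors' method.

```python
import cv2

def multiscale_hand_mask(gray, levels=3):
    """Hedged sketch of a multiscale foreground mask for a grayscale hand image."""
    # Build a Gaussian pyramid; each level halves the resolution and
    # smooths away fine background texture (carpet, grass, stones).
    pyramid = [gray]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # Segment the coarsest level with Otsu thresholding (illustrative choice).
    _, mask = cv2.threshold(pyramid[-1], 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Propagate the coarse decision back to full resolution.
    for level in reversed(pyramid[:-1]):
        mask = cv2.resize(mask, (level.shape[1], level.shape[0]),
                          interpolation=cv2.INTER_NEAREST)
    return mask

# Usage: mask = multiscale_hand_mask(cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE))
```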
Abstract:
In recent years, the computer vision community has shown great interest in depth-based applications thanks to the performance and flexibility of the new generation of RGB-D imagery. In this paper, we present an efficient background subtraction algorithm based on the fusion of multiple region-based classifiers that processes depth and color data provided by RGB-D cameras. Foreground objects are detected by combining a region-based foreground prediction (based on depth data) with different background models (based on a Mixture of Gaussians algorithm) providing color and depth descriptions of the scene at pixel and region level. The information given by these modules is fused in a mixture-of-experts fashion to improve the foreground detection accuracy. The main contributions of the paper are the region-based models of both background and foreground, built from the depth and color data. The results obtained on different database sequences demonstrate that the proposed approach leads to higher detection accuracy with respect to existing state-of-the-art techniques.
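The authors' region-based mixture-of-experts scheme is not reproduced here; the sketch below only shows the underlying idea of combining per-pixel Mixture-of-Gaussians background models for color and depth into one foreground mask, using OpenCV's MOG2 implementation. The fusion weights, area threshold and normalisation of depth to 8 bits are assumptions for illustration.

```python
import cv2
import numpy as np

# One MOG2 background model per modality (color and depth).
color_bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
depth_bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def foreground_mask(color_frame, depth_frame, w_color=0.5, w_depth=0.5):
    """Fuse color- and depth-based per-pixel classifiers into a single mask."""
    fg_color = color_bg.apply(color_frame)   # 0 = background, 127 = shadow, 255 = foreground
    fg_depth = depth_bg.apply(depth_frame)

    # Weighted per-pixel fusion of the two classifiers.
    fused = w_color * (fg_color == 255) + w_depth * (fg_depth == 255)
    mask = (fused >= 0.5).astype(np.uint8) * 255

    # Crude region-level cleanup: drop small connected components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 200:
            mask[labels == i] = 0
    return mask
```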
Abstract:
Most cosmologists now believe that we live in an evolving universe that has been expanding and cooling since its origin about 15 billion years ago. Strong evidence for this standard cosmological model comes from studies of the cosmic microwave background radiation (CMBR), the remnant heat from the initial fireball. The CMBR spectrum is blackbody, as predicted from the hot Big Bang model before the discovery of the remnant radiation in 1964. In 1992 the Cosmic Background Explorer (COBE) satellite finally detected the anisotropy of the radiation—fingerprints left by tiny temperature fluctuations in the initial bang. Careful design of the COBE satellite, and a bit of luck, allowed the 30 μK fluctuations in the CMBR temperature (2.73 K) to be pulled out of instrument noise and spurious foreground emissions. Further advances in detector technology and experiment design are allowing current CMBR experiments to search for predicted features in the anisotropy power spectrum at angular scales of 1° and smaller. If they exist, these features were formed at an important epoch in the evolution of the universe—the decoupling of matter and radiation at a temperature of about 4,000 K and a time about 300,000 years after the bang. CMBR anisotropy measurements probe directly some detailed physics of the early universe. Also, parameters of the cosmological model can be measured because the anisotropy power spectrum depends on constituent densities and the horizon scale at a known cosmological epoch. As sophisticated experiments on the ground and on balloons pursue these measurements, two CMBR anisotropy satellite missions are being prepared for launch early in the next century.
Abstract:
Theories of image segmentation suggest that the human visual system may use two distinct processes to segregate figure from background: a local process that uses local feature contrasts to mark borders of coherent regions and a global process that groups similar features over a larger spatial scale. We performed psychophysical experiments to determine whether and to what extent the global similarity process contributes to image segmentation by motion and color. Our results show that for color, as well as for motion, segmentation occurs first by an integrative process on a coarse spatial scale, demonstrating that for both modalities the global process is faster than one based on local feature contrasts. Segmentation by motion builds up over time, whereas segmentation by color does not, indicating a fundamental difference between the modalities. Our data suggest that segmentation by motion proceeds first via a cooperative linking over space of local motion signals, generating almost immediate perceptual coherence even of physically incoherent signals. This global segmentation process occurs faster than the detection of absolute motion, providing further evidence for the existence of two motion processes with distinct dynamic properties.
Abstract:
BACKGROUND AND PURPOSE In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are commonly needed to improve segmentation results. These tools must therefore provide faster corrections with fewer interactions, and a user-independent solution, to reduce the time frame between image acquisition and diagnosis. METHODS We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions, enabling intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated for the task of lumbar muscle and knee joint segmentation from MR images. RESULTS Experimental results show that full segmentation corrections could be performed within an average correction time of 5.5±3.3 minutes and with an average of 56.5±33.1 user interactions, while maintaining the quality of the final segmentation with an average Dice coefficient of 0.92±0.02 for both anatomies. In addition, for users with different levels of expertise, our method reduces the correction time from 38±19.2 to 6.4±4.3 minutes and the number of interactions from 339±157.1 to 67.7±39.6.
Abstract:
The perception of an object as a single entity within a visual scene requires that its features are bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from the synchronized high frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low frequency (alpha, beta) activity in object processing. The target stimulus (i.e. object) was a small patch of a concentric grating of 3c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged--through rotation about its own centre--as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40-55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation. The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support of the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention.
Abstract:
Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). These quantities are used for estimating the radiation dose for a therapy and evaluating the progression of a disease, and also serve as prognostic indicators for predicting outcome. PET images have low resolution and high noise and are affected by the partial volume effect (PVE). Manually segmenting each tumor is very cumbersome and very hard to reproduce. To solve this problem I developed an algorithm, called the iterative deconvolution thresholding segmentation (IDTS) algorithm; the algorithm segments the tumor, measures the FV, corrects for the PVE and calculates the mAC. The algorithm corrects for the PVE without the need to estimate the camera's point spread function (PSF) and does not require optimization for a specific camera. My algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and with different acquisition times (1-5 min). The algorithm was applied to ten clinical datasets, where the results were compared with manual segmentation and with fixed-percentage thresholding methods called T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average error in FV and mAC calculation was 30% and -35% for the 0.5 ml tumor. The average error in FV and mAC calculation was ~5% for the 16 ml tumor. The overall FV error was ~10% for heterogeneous tumors in the physical and simulated phantom data. The FV and mAC errors for clinical images compared to manual segmentation were around -17% and 15%, respectively. In summary, my algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF. The algorithm can also improve dose estimation and treatment planning.
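The full IDTS algorithm (deconvolution, iterative thresholding and PVE correction) is not reproduced here. The sketch below only illustrates the fixed-percentage thresholding baseline described in the text (T50/T60) and a simplified iterative threshold as a stand-in for the iterative step; the voxel volume, tolerance and function names are assumptions, and no deconvolution or PVE correction is performed.

```python
import numpy as np

def fixed_threshold_volume(pet, fraction=0.5, voxel_ml=0.02):
    """T50/T60 baseline: threshold at a fixed fraction of the maximum intensity."""
    mask = pet >= fraction * pet.max()       # fraction=0.5 -> T50, 0.6 -> T60
    fv = mask.sum() * voxel_ml               # functional volume (ml)
    mac = pet[mask].mean()                   # mean activity concentration
    return fv, mac

def iterative_threshold_volume(pet, voxel_ml=0.02, tol=1e-3, max_iter=50):
    """Simplified iterative threshold (illustrative only, not the published IDTS)."""
    thr = 0.5 * pet.max()
    for _ in range(max_iter):
        mask = pet >= thr
        fg, bg = pet[mask].mean(), pet[~mask].mean()
        new_thr = 0.5 * (fg + bg)            # midway between foreground and background
        if abs(new_thr - thr) < tol * thr:
            break
        thr = new_thr
    mask = pet >= thr
    return mask.sum() * voxel_ml, pet[mask].mean()
```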
Abstract:
Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can make a distinction between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require significant accuracy. The background traffic nevertheless has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing its two unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms other existing traffic models in that it can correctly capture the overall TCP behavior and achieve a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable they are, existing models mainly focus on how to generate traffic on a single link, which cannot be extended easily to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
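The dissertation's RTCP fluid model is not reproduced here. As an illustration of what a rate-based (analytical) TCP model computes, the sketch below uses the standard Mathis et al. square-root approximation to estimate a background flow's steady-state sending rate from its RTT and loss probability; the parameter values and function name are assumptions.

```python
import math

def tcp_rate_bps(mss_bytes=1460, rtt_s=0.05, loss_prob=0.01):
    """Steady-state TCP throughput estimate (bits per second) from the
    Mathis square-root formula: rate ~ (MSS / RTT) * sqrt(3 / (2 * p))."""
    if loss_prob <= 0:
        raise ValueError("loss probability must be positive")
    rate_bytes_per_s = (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_prob)
    return 8 * rate_bytes_per_s

# Example: a background flow with 50 ms RTT and 1% loss sends at roughly
# tcp_rate_bps() / 1e6 Mbit/s, without simulating individual packets.
```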
Abstract:
Background: The rapid progress currently being made in genomic science has created interest in potential clinical applications; however, formal translational research has been limited thus far. Studies of population genetics have demonstrated substantial variation in allele frequencies and haplotype structure at loci of medical relevance, and the genetic background of patient cohorts may often be complex. Methods and Findings: To describe the heterogeneity in an unselected clinical sample, we used the Affymetrix 6.0 gene array chip to genotype self-identified European Americans (N = 326), African Americans (N = 324) and Hispanics (N = 327) from the medical practice of Mount Sinai Medical Center in Manhattan, NY. Additional data from US minority groups and Brazil were used for external comparison. Substantial variation in ancestral origin was observed for both African Americans and Hispanics; data from the latter group overlapped with both Mexican Americans and Brazilians in the external data sets. A pooled analysis of the African Americans and Hispanics from NY demonstrated a broad continuum of ancestral origin, making classification by race/ethnicity uninformative. Selected loci harboring variants associated with medical traits and drug response confirmed substantial within- and between-group heterogeneity. Conclusion: As a consequence of these complementary levels of heterogeneity, group labels offered no guidance at the individual level. These findings demonstrate the complexity involved in clinical translation of the results from genome-wide association studies and suggest that in the genomic era conventional racial/ethnic labels are of little value.
Abstract:
AIM: To evaluate the effects of meal size and of three segmentations on the intragastric distribution of the meal and on gastric motility, by scintigraphy. METHODS: Twelve healthy volunteers were randomly assessed, twice, by scintigraphy. The test meal consisted of 60 or 180 mL of yogurt labeled with 64 MBq (99m)Tc-tin colloid. Anterior and posterior dynamic frames were acquired simultaneously for 18 min and all data were analyzed in MatLab. Three proximal-distal segmentations using regions of interest were adopted for both meals. RESULTS: The intragastric distribution of the meal between the proximal and distal compartments was strongly influenced by the way in which the stomach was divided, showing greater proximal retention after the 180 mL meal. An important finding was that both dominant frequencies (1 and 3 cpm) were recorded simultaneously in the proximal and distal stomach; however, the power ratio of these dominant frequencies varied with the segmentation adopted and was independent of meal size. CONCLUSION: It was possible to simultaneously evaluate the static intragastric distribution and phasic contractility from the same recording using our scintigraphic approach.
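The authors' MatLab analysis is not reproduced here. The sketch below only illustrates how the power at the two dominant gastric frequencies (1 and 3 cycles per minute) could be extracted from a region-of-interest time-activity curve with an FFT; the 1 Hz sampling rate, detrending and band half-width are assumptions.

```python
import numpy as np

def power_ratio(counts, fs_hz=1.0):
    """Ratio of spectral power at 1 cpm to power at 3 cpm for a ROI
    time-activity curve sampled at fs_hz (illustrative sketch)."""
    counts = np.asarray(counts, dtype=float)
    detrended = counts - counts.mean()
    spectrum = np.abs(np.fft.rfft(detrended)) ** 2
    freqs_cpm = np.fft.rfftfreq(len(detrended), d=1.0 / fs_hz) * 60.0

    def band_power(f_cpm, half_width=0.25):
        band = (freqs_cpm >= f_cpm - half_width) & (freqs_cpm <= f_cpm + half_width)
        return spectrum[band].sum()

    return band_power(1.0) / band_power(3.0)

# Usage: power_ratio(roi_counts) for an 18-min curve sampled once per second.
```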
Abstract:
Marfan syndrome (MFS) is an autosomal dominant disease of connective tissue caused by mutations in the fibrillin-1 encoding gene FBN1. Patients present cardiovascular, ocular and skeletal manifestations, and although fully penetrant, MFS is characterized by a wide clinical variability both within and between families. Here we describe a new mouse model of MFS that recapitulates the clinical heterogeneity of the syndrome in humans. Heterozygotes for the mutant Fbn1 allele mgΔloxPneo, carrying the same internal deletion of exons 19-24 as the mgΔ mouse model, present defective microfibrillar deposition, emphysema, deterioration of the aortic wall and kyphosis. However, the onset of clinical phenotypes is earlier in the 129/Sv than in the C57BL/6 background, indicating the existence of genetic modifiers of MFS between these two mouse strains. In addition, we characterized a wide clinical variability within the 129/Sv congenic heterozygotes, suggesting the involvement of epigenetic factors in disease severity. Finally, we show a strong negative correlation between overall levels of Fbn1 expression and the severity of the phenotypes, corroborating the suggested protective role of normal fibrillin-1 in MFS pathogenesis and supporting the development of therapies based on increasing Fbn1 expression.
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than about 0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in completeness and the gain of stars from an increase in stellar density. For a spatial resolution of about 0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than about 0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m-M)₀ ≈ 0.02 mag and ΔE(B-V) ≈ 0.01 mag for each individual field over the LMC. The propagation of these errors into SFR(t) implies systematic errors below 30%. This level of accuracy in SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
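The VMC pipeline itself is not reproduced here. The sketch below only illustrates the core of the classical CMD-fitting step: bin observed and synthetic stars into Hess diagrams and compare them with a chi-squared-like statistic. The construction of the model CMDs, completeness corrections and the MW foreground model are omitted, and the bin edges are assumptions.

```python
import numpy as np

def hess_diagram(colour, mag, colour_edges, mag_edges):
    """Bin stars into a Hess diagram (2-D histogram in colour and magnitude)."""
    hess, _, _ = np.histogram2d(colour, mag, bins=[colour_edges, mag_edges])
    return hess

def chi2_like(obs_hess, model_hess):
    """Poisson-style chi-squared-like comparison of observed and model counts,
    ignoring cells where the model predicts no stars."""
    valid = model_hess > 0
    return np.sum((obs_hess[valid] - model_hess[valid]) ** 2 / model_hess[valid])

# Usage sketch: evaluate chi2_like for model CMDs built from trial SFR(t) and
# age-metallicity relations, and keep the combination that minimises it.
```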
Abstract:
We have obtained nonperturbative one-loop expressions for the mean energy-momentum tensor and current density of Dirac's field in a constant electric-like background. One of the goals of this calculation is to give a consistent description of backreaction in such a theory. Two cases of initial states are considered: the vacuum state and the thermal equilibrium state. First, we perform calculations for the vacuum initial state. In the obtained expressions, we separate the contributions due to particle creation and vacuum polarization. The latter contributions are related to the Heisenberg-Euler Lagrangian. Then, we study the case of the thermal initial state. Here, we separate the contributions due to particle creation, vacuum polarization, and the work of the external field on the particles in the initial state. All these contributions are studied in detail, in different regimes of weak and strong fields and of low and high temperatures. The obtained results allow us to establish restrictions on the electric field and its duration under which QED with a strong constant electric field is consistent. Under such restrictions, one can neglect the backreaction of particles created by the electric field. Some of the obtained results generalize the Heisenberg-Euler calculations of the energy density to the case of arbitrarily strong electric fields.
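The paper's nonperturbative one-loop expressions are not reproduced here. As a point of reference only, the standard Schwinger result for the vacuum pair-creation probability per unit volume and time of spinor QED in a constant electric field E (natural units, hbar = c = 1) is

```latex
w \;=\; \frac{(eE)^{2}}{4\pi^{3}}
        \sum_{n=1}^{\infty}\frac{1}{n^{2}}
        \exp\!\left(-\frac{n\pi m^{2}}{eE}\right),
```

showing the exponential suppression of particle creation for fields well below the critical field E_c = m^2/e.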