85 results for Segmentation results
in University of Queensland eSpace - Australia
Abstract:
Obstructive sleep apnea (OSA) is a highly prevalent disease in which the upper airways collapse during sleep, leading to serious consequences. The gold standard of diagnosis, called polysomnography (PSG), requires a full-night hospital stay connected to over ten channels of measurements requiring physical contact with sensors. PSG is inconvenient, expensive and unsuited for community screening. Snoring is the earliest symptom of OSA, but its potential in clinical diagnosis is not yet fully recognized. Diagnostic systems intent on using snore-related sounds (SRS) face the tough problem of how to define a snore. In this paper, we present a working definition of a snore, and propose algorithms to segment SRS into classes of pure breathing, silence and voiced/unvoiced snores. We propose a novel feature termed the 'intra-snore-pitch-jump' (ISPJ) to diagnose OSA. Working on clinical data, we show that ISPJ delivers OSA detection sensitivities of 86-100% while holding specificity at 50-80%. These numbers indicate that snore sounds and the ISPJ have the potential to be good candidates for a take-home device for OSA screening. Snore sounds have the significant advantage that they can be conveniently acquired with low-cost non-contact equipment. The segmentation results presented in this paper have been derived using data from eight patients as the training set and another eight patients as the testing set. ISPJ-based OSA detection results have been derived using training data from 16 subjects and testing data from 29 subjects.
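The abstract names the ISPJ feature without defining it. As an illustration only, a feature of that flavour can be sketched as the fraction of large frame-to-frame pitch jumps inside one snore episode; the formulation below (the function name, the ratio test, and the threshold) is a hypothetical reconstruction, not the paper's actual definition:

```python
import numpy as np

def intra_snore_pitch_jump(pitch_track, jump_threshold=1.5):
    """Illustrative sketch of an ISPJ-style feature: the fraction of
    frame-to-frame pitch ratios within a single snore episode that
    exceed a jump threshold.  Hypothetical formulation; the paper's
    exact definition may differ.

    pitch_track: per-frame pitch estimates in Hz, with 0 marking
    unvoiced frames.
    """
    pitch = np.asarray(pitch_track, dtype=float)
    p = pitch[pitch > 0]                  # keep voiced frames only
    if len(p) < 2:
        return 0.0
    # ratio of each consecutive pair, always >= 1
    ratios = np.maximum(p[1:], p[:-1]) / np.minimum(p[1:], p[:-1])
    return float(np.mean(ratios > jump_threshold))
```

A steady pitch track yields 0.0, while an abrupt jump (e.g. 100 Hz to 250 Hz between frames) raises the score; an OSA screen would then threshold this per-snore statistic.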
Abstract:
Given the importance of syllables in the development of reading, spelling, and phonological awareness, information is needed about how children syllabify spoken words. To what extent is syllabification affected by knowledge of spelling, to what extent by phonology, and which phonological factors are influential? In Experiment 1, six- and seven-year-old children did not show effects of spelling on oral syllabification, performing similarly on words such as habit and rabbit. Spelling influenced the syllabification of older children and adults, with the results suggesting that knowledge of spelling must be well entrenched before it begins to affect oral syllabification. Experiment 2 revealed influences of phonological factors on syllabification that were similar across age groups. Young children, like older children and adults, showed differences between words with short and long vowels (e.g., lemon vs. demon) and words with sonorant and obstruent intervocalic consonants (e.g., melon vs. wagon). (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
Texture segmentation is the crucial initial step for texture-based image retrieval, and texture itself is one of the main difficulties a segmentation method faces. Many image segmentation algorithms either cannot handle texture properly or cannot obtain, during segmentation, texture features that can be used for retrieval. This paper describes an automatic texture segmentation algorithm based on a set of features derived from the wavelet domain, which are effective in describing texture for retrieval purposes. Simulation results show that the proposed algorithm can efficiently capture the textured regions in arbitrary images, with the features of each region extracted as well. The features of each textured region can be used directly to index an image database in applications such as texture-based image retrieval.
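The abstract does not specify which wavelet-domain features are used. A common, minimal choice of that kind is per-block subband energies from a one-level Haar decomposition; the sketch below is a simplified stand-in for the paper's (unspecified) feature set:

```python
import numpy as np

def haar_subband_energies(block):
    """One-level Haar wavelet decomposition of an image block with even
    dimensions, returning the mean energy of the LL, LH, HL and HH
    subbands.  A simplified illustration of wavelet-domain texture
    features; the paper's actual feature set may differ."""
    b = np.asarray(block, dtype=float)
    a = (b[0::2, :] + b[1::2, :]) / 2.0   # vertical average
    d = (b[0::2, :] - b[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # smooth in both directions
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal edges
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical edges
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return [float(np.mean(s ** 2)) for s in (ll, lh, hl, hh)]
```

For a uniform block the detail energies vanish and only LL carries energy, so the high-frequency subbands act as texture indicators that can double as retrieval features for each segmented region.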
Abstract:
Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. Their primary drawback is sensitivity to initialisation: accurate and robust results often require initialisation close to the true object in the image, and automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are a thin elastic material that covers the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterizes the progression of osteoarthritis. The state of the art in cartilage segmentation is 2D semi-automated algorithms, which require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that allows us to automatically provide a good initialisation for deformable models of the patella cartilage, by utilising the strong spatial relationship of the cartilage to the underlying bone.
Abstract:
This study examined the test performance of distortion product otoacoustic emissions (DPOAEs) when used as a screening tool in the school setting. A total of 1003 children (mean age 6.2 years, SD = 0.4) were tested with pure-tone screening, tympanometry, and DPOAE assessment. Optimal DPOAE test performance was determined in comparison with pure-tone screening results using clinical decision analysis. The results showed hit rates of 0.86, 0.89, and 0.90, and false alarm rates of 0.52, 0.19, and 0.22 for criterion signal-to-noise ratio (SNR) values of 4, 5, and 11 dB at 1.1, 1.9, and 3.8 kHz, respectively. DPOAE test performance was compromised at 1.1 kHz. In view of the different test performance characteristics across frequencies, the use of a fixed SNR as a pass criterion for all frequencies in DPOAE assessments is not recommended. When compared to pure-tone screening plus tympanometry results, the DPOAEs showed deterioration in test performance, suggesting that the use of DPOAEs alone might miss children with subtle middle ear dysfunction. However, when the results of a test protocol incorporating both DPOAEs and tympanometry were compared with the gold standard of pure-tone screening plus tympanometry, test performance was enhanced. In view of its high performance, a protocol that includes both DPOAEs and tympanometry holds promise as a useful tool in the hearing screening of schoolchildren, including difficult-to-test children.
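The frequency-specific SNR criteria reported above (4, 5 and 11 dB at 1.1, 1.9 and 3.8 kHz) can be turned into a screening decision rule. The combination logic below (pass only if all DPOAE criteria are met and tympanometry is normal) is an assumption for illustration, not the study's exact protocol:

```python
# Frequency-specific DPOAE SNR criteria from the study (kHz -> dB).
SNR_CRITERIA = {1.1: 4.0, 1.9: 5.0, 3.8: 11.0}

def screening_result(dpoae_snr, tymp_pass):
    """Sketch of a combined DPOAE + tympanometry screening decision:
    pass only if every measured SNR meets its frequency-specific
    criterion AND tympanometry is normal.  The combination rule is an
    assumption, not the study's published protocol.

    dpoae_snr: dict mapping test frequency (kHz) to measured SNR (dB).
    tymp_pass: True if tympanometry was normal.
    """
    dpoae_pass = all(dpoae_snr[f] >= c for f, c in SNR_CRITERIA.items())
    return "pass" if (dpoae_pass and tymp_pass) else "refer"
```

Using a per-frequency criterion rather than one fixed SNR reflects the abstract's recommendation, since a single threshold performed poorly at 1.1 kHz.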
Abstract:
The XSophe-Sophe-XeprView® computer simulation software suite enables scientists to easily determine spin Hamiltonian parameters from isotropic, randomly oriented and single crystal continuous wave electron paramagnetic resonance (CW EPR) spectra from radicals and isolated paramagnetic metal ion centers or clusters found in metalloproteins, chemical systems and materials science. XSophe provides an X-windows graphical user interface to the Sophe programme and allows: creation of multiple input files, local and remote execution of Sophe, and display of the sophelog (output from Sophe) and input parameters/files. Sophe is a sophisticated computer simulation software programme employing a number of innovative technologies including: the Sydney OPera HousE (SOPHE) partition and interpolation schemes, a field segmentation algorithm, the mosaic misorientation linewidth model, parallelization and spectral optimisation. In conjunction with the SOPHE partition scheme and the field segmentation algorithm, the SOPHE interpolation scheme and the mosaic misorientation linewidth model greatly increase the speed of simulations for most spin systems. Employing brute force matrix diagonalization in the simulation of an EPR spectrum from a high spin Cr(III) complex with the spin Hamiltonian parameters g_e = 2.00, D = 0.10 cm⁻¹, E/D = 0.25, A_x = 120.0, A_y = 120.0, A_z = 240.0 × 10⁻⁴ cm⁻¹ requires a SOPHE grid size of N = 400 (to produce a good signal to noise ratio) and takes 229.47 s. In contrast, the use of either the SOPHE interpolation scheme or the mosaic misorientation linewidth model requires a SOPHE grid size of only N = 18 and takes 44.08 and 0.79 s, respectively. Results from Sophe are transferred via the Common Object Request Broker Architecture (CORBA) to XSophe and subsequently to XeprView®, where the simulated CW EPR spectra (1D and 2D) can be compared to the experimental spectra.
Energy level diagrams, transition roadmaps and transition surfaces aid the interpretation of complicated randomly oriented CW EPR spectra and can be viewed with a web browser and an OpenInventor scene graph viewer.
Abstract:
Many images consist of two or more 'phases', where a phase is a collection of homogeneous zones. For example, the phases may represent the presence of different sulphides in an ore sample. Frequently, these phases exhibit very little structure, though all connected components of a given phase may be similar in some sense. As a consequence, random set models are commonly used to model such images; the Boolean model and models derived from it are often chosen. An alternative approach is to use the excursion sets of random fields to model each phase. In this paper, the properties of excursion sets will first be discussed in terms of modelling binary images. Ways of extending these models to multi-phase images will then be explored. A desirable feature of any model is that it can be fitted to data reasonably well. Different methods for fitting random set models based on excursion sets will be presented and some of the difficulties with these methods will be discussed.
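An excursion set of a random field Z is simply the region {x : Z(x) >= u} for a threshold u, which yields a binary (two-phase) image. A minimal sketch, using crude moving-average smoothing of Gaussian noise in place of a properly specified covariance model:

```python
import numpy as np

def excursion_set(shape, threshold, smooth=3, seed=0):
    """Binary image as the excursion set {x : Z(x) >= threshold} of a
    smoothed Gaussian noise field Z.  A toy illustration of modelling
    one phase of a binary image; a real fit would use a parametric
    covariance model rather than ad-hoc smoothing."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(shape)
    # moving-average smoothing along each axis introduces the spatial
    # correlation that makes the excursion set form connected blobs
    kernel = np.ones(smooth) / smooth
    for axis in (0, 1):
        z = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, z)
    return z >= threshold
```

Raising the threshold shrinks the phase and breaks it into smaller components, which is the kind of behaviour one would match against observed phase images when fitting such a model.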
Abstract:
The nature of an experiment involving 204 residents is outlined and the results are reported and analysed. Two consecutive surveys of the respondents provide data about their stated knowledge of 23 wildlife species present in tropical Australia, most of which occur exclusively there. In addition, these surveys provide data about the willingness of respondents to pay for the conservation of those species belonging to three taxa: reptiles, mammals, and birds. Thus it is possible to compare the respondents' stated knowledge of the species with their willingness to pay for their conservation, and to draw relevant inferences from this. From the initial survey, interesting relationships can be observed between these variables (knowledge and willingness to pay). The second survey was completed after the respondents' knowledge of the species was experimentally increased and became more balanced. This is shown to result in increased dispersion (greater discrimination) in willingness to contribute to the conservation of the different species in the set of wildlife species considered. Both theoretical and policy conclusions are drawn from the results.
Abstract:
Reviews the ecological status of the mahogany glider and describes its distribution, habitat and abundance, life history and threats to it. Three serial surveys of Brisbane residents provide data on the knowledge of respondents about the mahogany glider. The results provide information about the attitudes of respondents to the mahogany glider, to its conservation and to relevant public policies, and about variations in these factors as the participants' knowledge of the mahogany glider alters. Similarly, data are provided and analysed on respondents' willingness to pay to conserve the mahogany glider. Population viability analysis is applied to estimate the habitat area required for a minimum viable population of the mahogany glider to ensure at least a 95% probability of its survival for 100 years. Places are identified in Queensland where the requisite minimum area of critical habitat can be conserved. Using the survey results as a basis, the likely willingness of groups of Australians to pay for the conservation of the mahogany glider is estimated, and consequently their willingness to pay for the minimum required area of its habitat. Methods for estimating the cost of protecting this habitat are outlined. Australia-wide benefits seem to exceed the costs. Establishing a national park containing the minimum viable population of the mahogany glider is an appealing management option. This would also be beneficial in conserving other endangered wildlife species; therefore, economic benefits additional to those estimated on account of the mahogany glider itself can be obtained.
Abstract:
Previous studies have shown that multiple birth children (MBC) are prone to early phonological difficulties and later literacy problems. However, to date, there has been no systematic long-term follow-up of MBC with phonological difficulties in the preschool years to determine whether these difficulties predict later literacy problems. In this study, 20 MBC whose early speech and language skills had been previously documented were compared to normative data and 20 singleton controls on tasks assessing phonological processing and literacy. The major findings indicated that MBC performed significantly more poorly on some tasks of phonological processing than singleton controls did. Further, the early phonological skills of MBC (i.e., the number of inappropriate phonological processes used) correlated with poor performance on visual rhyme recognition, word repetition, and phoneme detection tasks 5 years later. There was no significant relationship between early biological factors (birth weight and gestation period) and performance on the phonological processing and literacy-related subtests. These results support the hypothesis that MBC's early speech and language difficulties are not merely a transient phase of development, but a real disorder, with consequences for later academic achievement.
Abstract:
The task of segmenting cell nuclei from cytoplasm in conventional Papanicolaou (Pap) stained cervical cell images is a classical image analysis problem which may prove to be crucial to the development of successful systems which automate the analysis of Pap smears for detection of cancer of the cervix. Although simple thresholding techniques will extract the nucleus in some cases, accurate unsupervised segmentation of very large image databases is elusive. Conventional active contour models as introduced by Kass, Witkin and Terzopoulos (1988) offer a number of advantages in this application, but suffer from the well-known drawbacks of initialisation and minimisation. Here we show that a Viterbi search-based dual active contour algorithm is able to overcome many of these problems and achieve over 99% accurate segmentation on a database of 20 130 Pap stained cell images. (C) 1998 Elsevier Science B.V. All rights reserved.
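The dual active contour in the paper casts contour placement as a shortest-path problem solvable by Viterbi search. As a toy analogue only (not the paper's dual-contour algorithm), a single contour in polar coordinates can be found by dynamic programming over a cost grid with a smoothness penalty between adjacent angles:

```python
import numpy as np

def viterbi_contour(cost, smooth_penalty=1.0):
    """Viterbi-style dynamic programming over a polar cost grid.
    cost[i, r] is the edge cost (low = good) of placing the contour at
    radius index r for angle i; radius changes between adjacent angles
    pay smooth_penalty per index.  A toy analogue of the search in the
    paper, not its dual active contour algorithm."""
    n, m = cost.shape
    radii = np.arange(m)
    total = cost[0].copy()                # best cost ending at each radius
    back = np.zeros((n, m), dtype=int)    # backpointers for path recovery
    for i in range(1, n):
        # trans[r_new, r_old]: cost of arriving at r_new from r_old
        trans = total[None, :] + smooth_penalty * np.abs(
            radii[:, None] - radii[None, :])
        back[i] = np.argmin(trans, axis=1)
        total = cost[i] + np.min(trans, axis=1)
    path = [int(np.argmin(total))]
    for i in range(n - 1, 0, -1):         # trace the optimal path back
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

Because the search is global over the whole grid, it avoids the local minima that plague gradient-descent contour evolution, which is the property the paper exploits to segment nuclei without careful initialisation.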
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
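The heuristic algorithms compared above are typically greedy set-cover procedures. A minimal sketch of one such "greedy richness" heuristic (the study's exact algorithms may differ): repeatedly pick the site covering the most still-unrepresented features, so the selected-site count can exceed the optimum, which is the suboptimality the abstract measures.

```python
def greedy_reserve_selection(sites, features):
    """Greedy richness heuristic for reserve selection: repeatedly
    choose the site that covers the most still-unrepresented features.
    Illustrative sketch of the class of heuristics compared in the
    study, not a reimplementation of its algorithms.

    sites: dict mapping site name to the set of features it contains.
    features: set of features, each to be represented at least once.
    """
    unmet = set(features)
    chosen = []
    while unmet:
        best = max(sites, key=lambda s: len(sites[s] & unmet))
        if not sites[best] & unmet:
            raise ValueError("some features cannot be covered")
        chosen.append(best)
        unmet -= sites[best]
    return chosen
```

On nested data (each site's features contained in a larger site's) the greedy choice is nearly optimal, consistent with the finding that higher nestedness increases efficiency.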