411 results for image classification
Abstract:
The intervertebral disc withstands large compressive loads (up to nine times bodyweight in humans) while providing flexibility to the spinal column. At a microstructural level, the outer sheath of the disc (the annulus fibrosus) comprises 12–20 annular layers of alternately crisscrossed collagen fibres embedded in a soft ground matrix. The centre of the disc (the nucleus pulposus) consists of a hydrated gel rich in proteoglycans. The disc is the largest avascular structure in the body and is of much interest biomechanically due to the high societal burden of disc degeneration and back pain. Although the disc has been well characterized at the whole joint scale, it is not clear how the disc tissue microstructure confers its overall mechanical properties. In particular, there have been conflicting reports regarding the level of attachment between adjacent lamellae in the annulus, and the importance of these interfaces to the overall integrity of the disc is unknown. We used a polarized light micrograph of the bovine tail disc in transverse cross-section to develop an image-based finite element model incorporating sliding and separation between layers of the annulus, and subjected the model to axial compressive loading. Validation experiments were also performed on four bovine caudal discs. Interlamellar shear resistance had a strong effect on disc compressive stiffness, with a 40% drop in stiffness when the interface shear resistance was changed from fully bonded to freely sliding. By contrast, interlamellar cohesion had no appreciable effect on overall disc mechanics. We conclude that shear resistance between lamellae confers disc mechanical resistance to compression, and degradation of the interlamellar interface structure may be a precursor to macroscopic disc degeneration.
Abstract:
J.W. Lindt’s Colonial man and Aborigine image from the GRAFTON ALBUM: “On chemistry and optics all does not depend, art must with these in triple union blend” (text from J.W. Lindt’s photographic backing card). In this paper, I follow an argument that Lindt held a position in his particular colonial environment where he was simultaneously both an insider and an outsider, and that such a position may be considered a prerequisite for stimulating exchange. A study of the transition of J.W. Lindt in Grafton, N.S.W. in the 1860s from a traveller to a migrant and subsequently to a professional photographer, as well as Lindt’s photographic career, which evolved through strategic action and technical approaches to photography, bears witness to his cultural relativity. One untitled photograph from this period of work constructs a unique commentary on Australian colonial life that illustrates a non-hegemonic position, particularly as it was included in one of the first albums of photographs of Aborigines that Lindt gifted to an illustrious person (in this case the Mayor of Grafton). As in his other studio constructions, props and backdrops were arranged and sitters were positioned with care, but this photograph is the only one in the album that includes a non-Aborigine in a relationship to an Aborigine. An analysis of the props, the technical details of the album and the image suggests a reconciliatory aspect that thwarts the predominant attitudes towards Aborigines in the area at that time.
Abstract:
This paper presents a validation study on the application of a novel interslice interpolation technique for musculoskeletal structure segmentation of articulated joints and muscles in human magnetic resonance imaging data. The interpolation technique is based on morphological shape-based interpolation combined with intensity-based voxel classification. Shape-based interpolation in the absence of the original intensity image has been investigated intensively. However, in some applications of medical image analysis, the intensity image of the slice to be interpolated is available. For example, when manual segmentation is conducted on selected slices, the segmentation of the unselected slices can be obtained by interpolation. We propose a two-step interpolation method that utilizes both the shape information in the manual segmentation and the local intensity information in the image. The method was tested on segmentations of knee, hip and shoulder joint bones and hamstring muscles. The results were compared with two existing interpolation methods. Based on the calculated Dice similarity coefficient and normalized error rate, the proposed method outperformed the other two methods.
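As a minimal sketch of the two-step idea, assuming binary masks on two neighbouring annotated slices and the intensity image of the slice in between: the function names, the narrow-band width and the percentile-based intensity test below are illustrative choices, not the paper's method.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: negative inside the object, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def interpolate_slice(mask_below, mask_above, intensity, blend=0.5):
    # Step 1: shape-based interpolation -- average the signed distance
    # maps of the two neighbouring manual segmentations.
    sdf = blend * signed_distance(mask_below) + (1 - blend) * signed_distance(mask_above)
    shape_mask = sdf <= 0

    # Step 2: intensity-based refinement -- near the interpolated boundary,
    # keep only voxels whose intensity matches the object's intensity range
    # (a simple stand-in for the paper's voxel classification step).
    band = np.abs(sdf) < 2.0  # narrow band around the boundary, in voxels
    obj_vals = intensity[shape_mask & ~band]
    lo, hi = np.percentile(obj_vals, [5, 95])
    refined = shape_mask.copy()
    refined[band] = (intensity[band] >= lo) & (intensity[band] <= hi)
    return refined

# Tiny demo: interpolate between two circular masks of different radii.
yy, xx = np.mgrid[:64, :64]
below = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
above = (yy - 32) ** 2 + (xx - 32) ** 2 < 16 ** 2
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 13 ** 2, 1.0, 0.0)
mid = interpolate_slice(below, above, img)
```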
Abstract:
Purpose This study evaluated the impact of daily and weekly image-guided radiotherapy protocols in reducing setup errors and setting appropriate margins in head and neck cancer patients. Materials and methods Interfraction and systematic shifts for a hypothetical day 1–3 plus weekly imaging protocol were extrapolated from daily imaging data from 31 patients (964 cone beam computed tomography (CBCT) scans). In addition, residual setup errors were calculated by taking the average shifts in each direction for each patient based on the first three shifts, and were presumed to represent systematic setup error. The clinical target volume (CTV) to planning target volume (PTV) margins were calculated using the van Herk formula and analysed for each protocol. Results The mean interfraction shifts for daily imaging were 0·8, 0·3 and 0·5 mm in the S-I (superior-inferior), L-R (left-right) and A-P (anterior-posterior) directions, respectively. By contrast, the mean shifts for day 1–3 plus weekly imaging were 0·9, 1·8 and 0·5 mm in the S-I, L-R and A-P directions, respectively. The mean day 1–3 residual shifts were 1·5, 2·1 and 0·7 mm in the S-I, L-R and A-P directions, respectively. No significant difference was found in the mean setup error between the daily and the hypothetical day 1–3 plus weekly protocol. However, the calculated CTV to PTV margins for the daily interfraction imaging data were 1·6, 3·8 and 1·4 mm in the S-I, L-R and A-P directions, respectively, whereas the hypothetical day 1–3 plus weekly protocol resulted in CTV–PTV margins of 5, 4·2 and 5 mm. Conclusions The results of this study show that a daily CBCT protocol reduces setup errors and allows setup margin reduction in head and neck radiotherapy compared to a weekly imaging protocol.
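The margin recipe cited above is, in its widely used form (van Herk et al., 2000), applied per axis as:

```latex
% M is the CTV-to-PTV margin along one axis, \Sigma the standard deviation
% of the systematic setup errors across patients, and \sigma the standard
% deviation of the random (day-to-day) setup errors.
M_{\mathrm{CTV \to PTV}} = 2.5\,\Sigma + 0.7\,\sigma
```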
Abstract:
While the popularity of destination image research has increased exponentially in the literature, relatively little has been published about perceptions of South American destinations held by international consumers. The purpose of this paper is to report the findings of a research project that aimed to identify baseline market perceptions of Brazil, Argentina and Chile amongst Australian residents, at the time of the emergence of this long haul market. Of interest was the extent to which Australians differentiate the three distinct countries versus perceiving the continent as a gestalt. These baseline perceptions enable the effectiveness of future marketing communications in Australia by the three national tourism offices to be monitored over time. Importance-Performance Analysis (IPA) is used as a practical analytical tool to guide decision makers. In terms of operationalising destination image, a key research finding was the very high proportion of participants using the ‘Don’t know’ (DK) option for each destination performance scale item. This finding has practical implications for destination marketers, as well as for researchers engaged in destination image research in long haul and/or emerging markets.
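For readers unfamiliar with IPA, the following is a minimal sketch of the classic quadrant assignment it produces, using grand-mean cut-offs; the attribute names and ratings are hypothetical, not taken from the study.

```python
import numpy as np

attributes = ["scenery", "safety", "value for money", "nightlife"]
importance = np.array([4.5, 4.8, 4.2, 2.9])   # hypothetical means (1-5 scale)
performance = np.array([4.1, 3.2, 3.9, 3.8])  # hypothetical means (1-5 scale)

# Grand means split the IPA grid into its four classic quadrants.
imp_cut, perf_cut = importance.mean(), performance.mean()
quadrants = {
    (True, True): "Keep up the good work",
    (True, False): "Concentrate here",
    (False, True): "Possible overkill",
    (False, False): "Low priority",
}
for name, imp, perf in zip(attributes, importance, performance):
    print(name, "->", quadrants[(imp >= imp_cut, perf >= perf_cut)])
```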
Abstract:
Being able to accurately predict the risk of falling is crucial in patients with Parkinson’s disease (PD), owing to the unfavorable effects of falls, which can lower quality of life as well as directly impact survival. Three methods are considered for predicting falls: decision trees (DT), Bayesian networks (BN), and support vector machines (SVM). Data from a 1-year prospective study conducted at IHBI, Australia, on 51 people with PD are used. Data processing was conducted using the rpart and e1071 packages in R for DT and SVM, respectively, and Bayes Server 5.5 for the BN. The results show that BN and SVM produce consistently higher accuracy across the 12 monthly evaluation time points (average sensitivity and specificity > 92%) than DT (average sensitivity 88%, average specificity 72%). DT is sensitive to imbalanced data and therefore needs adjustment for the misclassification cost. However, DT provides a straightforward, interpretable result and is thus appealing for helping to identify important items related to falls and to generate fallers’ profiles.
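The study used rpart and e1071 in R plus Bayes Server for the BN; below is an analogous scikit-learn sketch of the DT/SVM comparison with sensitivity and specificity scoring. The feature matrix and faller labels are placeholders, and class weighting stands in for the misclassification-cost adjustment noted above.

```python
import numpy as np
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 10))    # placeholder: 51 patients, 10 predictors
y = rng.integers(0, 2, size=51)  # placeholder: faller (1) / non-faller (0)

# Sensitivity is recall on fallers; specificity is recall on non-fallers.
scoring = {"sensitivity": make_scorer(recall_score, pos_label=1),
           "specificity": make_scorer(recall_score, pos_label=0)}

models = {
    # class_weight="balanced" plays the role of the misclassification-cost
    # adjustment recommended for the decision tree.
    "DT": DecisionTreeClassifier(class_weight="balanced", random_state=0),
    "SVM": SVC(kernel="rbf", class_weight="balanced"),
}
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(name, cv["test_sensitivity"].mean(), cv["test_specificity"].mean())
```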
Abstract:
Objective Death certificates provide an invaluable source for cancer mortality statistics; however, this value can only be realised if accurate, quantitative data can be extracted from certificates – an aim hampered by both the volume and the variable nature of certificates written in natural language. This paper proposes an automatic classification system for identifying cancer-related causes of death from death certificates. Methods Detailed features, including terms, n-grams and SNOMED CT concepts, were extracted from a collection of 447,336 death certificates. These features were used to train Support Vector Machine classifiers (one classifier for each cancer type). The classifiers were deployed in a cascaded architecture: the first level identified the presence of cancer (i.e., binary cancer/no-cancer) and the second level identified the type of cancer (according to the ICD-10 classification system). A held-out test set was used to evaluate the effectiveness of the classifiers according to precision, recall and F-measure. In addition, a detailed feature analysis was performed to reveal the characteristics of a successful cancer classification model. Results The system was highly effective at identifying cancer as the underlying cause of death (F-measure 0.94). The system was also effective at determining the type of cancer for common cancers (F-measure 0.7). Rare cancers, for which there was little training data, were difficult to classify accurately (F-measure 0.12). Factors influencing performance were the amount of training data and certain ambiguous cancers (e.g., those in the stomach region). The feature analysis revealed that a combination of features was important for cancer type classification, with SNOMED CT concept and oncology-specific morphology features proving the most valuable. Conclusion The system proposed in this study provides automatic identification and characterisation of cancers from large collections of free-text death certificates. This allows organisations such as Cancer Registries to monitor and report on cancer mortality in a timely and accurate manner. In addition, the methods and findings are generally applicable beyond cancer classification and to other sources of medical text besides death certificates.
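A minimal scikit-learn sketch of such a two-level cascade follows, with TF-IDF term and n-gram features standing in for the paper's full feature set (SNOMED CT concepts omitted); the toy certificates and labels are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "metastatic adenocarcinoma of the stomach",
    "squamous cell carcinoma of the lung",
    "ischaemic heart disease",
    "cerebrovascular accident",
]
is_cancer = [1, 1, 0, 0]
icd10 = ["C16", "C34"]  # types for the cancer cases, in order

# Level 1: binary cancer / no-cancer.
level1 = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
level1.fit(texts, is_cancer)

# Level 2: cancer type, trained on cancer cases only and applied to
# certificates that level 1 flags as cancer.
cancer_texts = [t for t, c in zip(texts, is_cancer) if c]
level2 = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
level2.fit(cancer_texts, icd10)

flagged = [t for t in texts if level1.predict([t])[0] == 1]
print(level2.predict(flagged))
```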
Abstract:
In this presentation, I reflect upon the global landscape surrounding the governance and classification of media content, at a time of rapid change in media platforms and services for content production and distribution, and of contested cultural and social norms. I discuss the tensions and contradictions arising in the relationship between national, regional and global dimensions of media content distribution, as well as the changing relationships between state and non-state actors. These tensions are explored through consideration of issues such as: recent debates over film censorship; the review of the National Classification Scheme conducted by the Australian Law Reform Commission; online controversies such as the future of the Reddit social media site; and videos posted online by the militant group ISIS.
Abstract:
Background The purpose of this presentation is to outline the relevance of categorising load regime data to assess the functional output and usage of the prosthesis of lower limb amputees. The objectives are:
• To highlight the need for categorisation of activities of daily living,
• To present a categorisation of the load regime applied on the residuum,
• To present some descriptors of the four types of activity that could be detected,
• To provide an example of the results for one case.
Methods The load applied on the osseointegrated fixation of one transfemoral amputee was recorded using a portable kinetic system for 5 hours. The load applied on the residuum was divided into four types of activity corresponding to inactivity, stationary loading, localized locomotion and directional locomotion, as detailed in previous publications. Results The periods of directional locomotion, localized locomotion, and stationary loading occurred during 44%, 34%, and 22% of the recording time and accounted for 51%, 38%, and 12% of the duration of the periods of activity, respectively. The absolute maximum force during directional locomotion, localized locomotion, and stationary loading was 19%, 15%, and 8% of body weight on the anteroposterior axis, 20%, 19%, and 12% on the mediolateral axis, and 121%, 106%, and 99% on the long axis, respectively. A total of 2,783 gait cycles were recorded. Discussion Approximately 10% more gait cycles and 50% more of the total impulse were identified than with conventional analyses. The proposed categorisation and apparatus have the potential to complement conventional instruments, particularly for difficult cases.
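As a minimal sketch of the descriptors reported above (share of recording time and share of total impulse per activity type), assuming the load signal has already been categorised; the sampling rate, forces and labels below are placeholders, and the actual categorisation rules are in the earlier publications the abstract cites.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 200.0  # Hz, placeholder sampling rate of the kinetic system
force = np.abs(rng.normal(600.0, 150.0, size=6000))  # N, long-axis load
categories = ["inactivity", "stationary loading",
              "localized locomotion", "directional locomotion"]
labels = rng.choice(categories, size=6000)  # placeholder per-sample labels

total_impulse = force.sum() / fs  # N*s, rectangular integration
for cat in categories:
    sel = labels == cat
    share_time = sel.mean()
    share_impulse = force[sel].sum() / fs / total_impulse
    print(f"{cat}: {share_time:.0%} of recording, "
          f"{share_impulse:.0%} of total impulse")
```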
Abstract:
This paper reports a rare investigation of stopover destination image. Although the topic of destination image has been one of the most popular in the tourism literature since the 1970s, there has been a lack of research attention in relation to the context of stopover destinations for long haul international travellers. The purpose of this study was to identify attributes deemed salient to Australian consumers when considering stopover destinations for long haul travel to the United Kingdom and Europe. Underpinned by Personal Construct Theory (PCT), the study used the Repertory Test to identify 21 salient attributes, which could be used in the development of a survey instrument to measure the attractiveness of a competitive set of stopover destinations. While the list of attributes shared some commonality with general studies of destination image reported in the literature, the elicitation of a relatively large number of stopover context specific attributes highlights the potential benefit of engaging with consumers in qualitative research, such as using the Repertory Test, during the questionnaire development stage.
Abstract:
Environmental changes have put great pressure on biological systems, leading to the rapid decline of biodiversity. To monitor this change and protect biodiversity, animal vocalizations have been widely explored with the aid of acoustic sensors deployed in the field. Consequently, large volumes of acoustic data are collected. However, traditional manual methods that require ecologists to physically visit sites to collect biodiversity data are both costly and time consuming. Therefore, it is essential to develop new semi-automated and automated methods to identify species in automated audio recordings. In this study, a novel feature extraction method based on wavelet packet decomposition is proposed for frog call classification. After syllable segmentation, each syllable of a frog's advertisement call is represented by a spectral peak track, from which track duration, dominant frequency and oscillation rate are calculated. Then, a k-means clustering algorithm is applied to the dominant frequencies, and the centroids of the clustering results are used to generate the frequency scale for wavelet packet decomposition (WPD). Next, a new feature set named adaptive frequency scaled wavelet packet decomposition sub-band cepstral coefficients is extracted by performing WPD on the windowed frog calls. Furthermore, the statistics of all feature vectors over each windowed signal are calculated to produce the final feature set. Finally, two well-known classifiers, a k-nearest neighbour classifier and a support vector machine classifier, are used for classification. In our experiments, we use two different datasets from Queensland, Australia (18 frog species from commercial recordings and 8 frog species from James Cook University field recordings). The weighted classification accuracy with our proposed method is 99.5% and 97.4% for the 18 and 8 frog species, respectively, which outperforms all other comparable methods.
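A minimal sketch of two steps from this pipeline, assuming scikit-learn and PyWavelets: k-means clustering of syllable dominant frequencies, then wavelet packet decomposition of a windowed call. The cepstral step is reduced here to log sub-band energies plus a DCT as a stand-in, and the frequencies and signal are synthetic.

```python
import numpy as np
import pywt
from scipy.fftpack import dct
from sklearn.cluster import KMeans

# K-means on dominant frequencies; the sorted centroids would drive the
# adaptive frequency scale for the WPD.
dominant_freqs = np.array([[1200.], [1250.], [2600.], [2650.], [4100.]])  # Hz
centroids = KMeans(n_clusters=3, n_init=10).fit(dominant_freqs).cluster_centers_
print(np.sort(centroids.ravel()))

# WPD of a synthetic windowed "call", then log sub-band energies + DCT.
fs = 16000
t = np.arange(0, 0.2, 1 / fs)
call = np.sin(2 * np.pi * 2600 * t)
wp = pywt.WaveletPacket(data=call, wavelet="db4", mode="symmetric", maxlevel=5)
energies = [np.sum(node.data ** 2) for node in wp.get_level(5, order="freq")]
coeffs = dct(np.log(np.asarray(energies) + 1e-12), norm="ortho")[:12]
```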
Abstract:
In this paper we investigate the effectiveness of class-specific sparse codes in the context of discriminative action classification. The bag-of-words representation is widely used in activity recognition to encode features, and although it yields state-of-the-art performance with several feature descriptors, it still suffers from large quantization errors that reduce overall performance. Recently proposed sparse representation methods have been shown to effectively represent features as a linear combination of an overcomplete dictionary by minimizing the reconstruction error. In contrast to most sparse representation methods, which focus on Sparse-Reconstruction based Classification (SRC), this paper focuses on discriminative classification using an SVM, constructing class-specific sparse codes for motion and appearance separately. Experimental results demonstrate that separate motion- and appearance-specific sparse coefficients provide a more effective and discriminative representation for each class than a single set of class-specific sparse coefficients.
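A minimal sketch of class-specific sparse coding with scikit-learn's SparseCoder: one dictionary per class, descriptors encoded against each, and the concatenated codes fed to a linear SVM. The dictionaries and descriptors below are random placeholders; the paper learns separate motion and appearance dictionaries.

```python
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_classes, n_atoms, dim = 3, 20, 64
dicts = [rng.normal(size=(n_atoms, dim)) for _ in range(n_classes)]
dicts = [D / np.linalg.norm(D, axis=1, keepdims=True) for D in dicts]

def encode(features):
    # Encode each descriptor against every class dictionary via OMP and
    # concatenate the resulting class-specific sparse codes.
    codes = [SparseCoder(dictionary=D, transform_algorithm="omp",
                         transform_n_nonzero_coefs=5).transform(features)
             for D in dicts]
    return np.hstack(codes)

X = rng.normal(size=(30, dim))           # placeholder descriptors
y = rng.integers(0, n_classes, size=30)  # placeholder action labels
clf = LinearSVC().fit(encode(X), y)      # discriminative classification
```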
Abstract:
Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for the detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built, containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10⁶, or by post-processing of the images.
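A minimal sketch of the band-selection idea: choosing a handful of spectral bands (features) that best separate "healthy" spots from a (pre-)sign class, here via scikit-learn's sequential forward selection with a simple classifier. The spectra and labels are synthetic placeholders, and this selector is a stand-in for the paper's statistical analysis.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_spots, n_bands = 227, 100
spectra = rng.normal(size=(n_spots, n_bands))  # placeholder reflectance
labels = rng.integers(0, 2, size=n_spots)      # healthy vs (pre-)sign

selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=5,  # within the 3-7 range reported above
    direction="forward", cv=5)
selector.fit(spectra, labels)
print(np.flatnonzero(selector.get_support()))  # indices of selected bands
```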
Abstract:
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAVs), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground-based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
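A minimal sketch of the detection stage of such a pipeline: thresholding warm regions in a thermal frame and extracting candidate animal blobs with OpenCV connected components. Tracking and species classification are omitted, and the frame, threshold and minimum area are placeholders.

```python
import cv2
import numpy as np

frame = np.random.default_rng(0).integers(0, 255, (480, 640), np.uint8)
_, hot = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)  # warm pixels
hot = cv2.morphologyEx(hot, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

n, _, stats, centroids = cv2.connectedComponentsWithStats(hot)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 20:  # ignore small specks
        x, y = centroids[i]
        print(f"candidate at ({x:.0f}, {y:.0f}), "
              f"area {stats[i, cv2.CC_STAT_AREA]} px")
```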
Abstract:
An application that translates raw thermal melt curve data into more easily assimilated knowledge is described. This program, called ‘Meltdown’, performs a number of data remediation steps before classifying melt curves and estimating melting temperatures. The final output is a report that summarizes the results of a differential scanning fluorimetry experiment. Meltdown uses a Bayesian classification scheme, enabling reproducible identification of various trends commonly found in DSF datasets. The goal of Meltdown is not to replace human analysis of the raw data, but to provide a sensible interpretation of the data to make this useful experimental technique accessible to naïve users, as well as providing a starting point for detailed analyses by more experienced users.
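A minimal sketch of one Meltdown-style step: estimating a melting temperature from a melt curve as the peak of the smoothed first derivative of fluorescence with respect to temperature. The curve below is a synthetic sigmoid; the remediation and Bayesian classification steps of the actual program are omitted.

```python
import numpy as np
from scipy.signal import savgol_filter

temps = np.arange(25.0, 95.0, 0.5)  # degrees C
true_tm = 62.0
fluor = 1.0 / (1.0 + np.exp(-(temps - true_tm) / 1.5))  # synthetic melt curve

# Smoothed first derivative; its maximum marks the melting transition.
dF = savgol_filter(fluor, window_length=11, polyorder=3, deriv=1)
print("estimated Tm:", temps[np.argmax(dF)])  # ~62 C
```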