Abstract:
Accurate road lane information is crucial for advanced vehicle navigation and safety applications. With the increasing availability of very high resolution (VHR) imagery of astonishing quality from digital airborne sources, automatically extracting road details from aerial images would greatly facilitate data acquisition and significantly reduce the cost of data collection and updates. In this paper, we propose an effective approach to detecting road lanes from aerial images using image analysis procedures. The algorithm starts by constructing the Digital Surface Model (DSM) and true orthophotos from the stereo images. Next, a maximum likelihood clustering algorithm is used to separate roads from other ground objects. After the road surface has been detected, the road traffic and lane lines are further extracted using texture enhancement and morphological operations. Finally, the generated road network is evaluated to test the performance of the proposed approach, using datasets provided by the Queensland Department of Main Roads. The experimental results demonstrate the effectiveness of our approach.
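As a rough illustration of the final lane-marking step described above, the sketch below applies top-hat texture enhancement followed by morphological opening and closing to an orthophoto restricted to a detected road mask. The function name, kernel sizes and Otsu thresholding are illustrative assumptions, not the authors' exact procedure.

```python
import cv2

def extract_lane_markings(ortho_gray, road_mask):
    """Hypothetical sketch: enhance bright lane markings on a detected road
    surface (8-bit grayscale orthophoto) and clean them up morphologically."""
    # Consider only pixels classified as road surface.
    road = cv2.bitwise_and(ortho_gray, ortho_gray, mask=road_mask)

    # Top-hat filtering emphasises thin bright structures (painted lines)
    # against the darker pavement.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    tophat = cv2.morphologyEx(road, cv2.MORPH_TOPHAT, kernel)

    # Otsu threshold to a binary lane-marking mask.
    _, lines = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Opening removes isolated noise; closing bridges gaps in broken line segments.
    small = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    lines = cv2.morphologyEx(lines, cv2.MORPH_OPEN, small)
    lines = cv2.morphologyEx(lines, cv2.MORPH_CLOSE, small, iterations=2)
    return lines
```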
Abstract:
Appearance-based mapping and localisation is especially challenging when the separate processes of mapping and localisation occur at different times of day. The problem is exacerbated in the outdoors, where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic, local-feature-based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.
Abstract:
Objective: To examine the sources of coding discrepancy for injury morbidity data and explore the implications of these sources for injury surveillance. Method: An on-site medical record review and recoding study was conducted for 4373 injury-related hospital admissions across Australia. Codes from the original dataset were compared to the recoded data to explore the reliability of coded data and the sources of discrepancy. Results: The most common reason for differences in coding overall was assigning the case to a different external cause category, with 8.5% of cases assigned to a different category. Differences in the specificity of codes assigned within a category accounted for 7.8% of coder differences. Differences in intent assignment accounted for 3.7% of the differences in code assignment. Conclusions: In a situation where 8 percent of cases are misclassified by major category, the setting of injury targets on the basis of extent of burden is a somewhat blunt instrument. Monitoring the effect of prevention programs aimed at reducing risk factors is not possible in datasets with this level of misclassification error in injury cause subcategories. Future research is needed to build the evidence base around the quality and utility of the ICD classification system and its application to injury surveillance in the hospital environment.
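For illustration only, the hedged sketch below computes the kind of comparison reported above: the percentage of admissions recoded to a different external-cause category, plus a chance-corrected agreement statistic. The function name and example labels are hypothetical and are not the study's analysis code.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def coding_discrepancy_summary(original, recoded):
    """Hypothetical sketch: quantify disagreement between originally coded
    and recoded external-cause categories for the same admissions."""
    df = pd.DataFrame({"original": original, "recoded": recoded})
    percent_discrepant = (df["original"] != df["recoded"]).mean() * 100
    kappa = cohen_kappa_score(df["original"], df["recoded"])  # chance-corrected agreement
    return {"percent_discrepant": round(percent_discrepant, 1),
            "kappa": round(kappa, 2)}

# Example with invented category labels:
# coding_discrepancy_summary(["fall", "transport", "poisoning"],
#                            ["fall", "fall", "poisoning"])
```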
Abstract:
The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background prior to selecting the highest-ranking examples as a refined background dataset. Further, the characteristics of the refined dataset were analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
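A minimal sketch of the selection step described above, assuming each candidate impostor example already has a suitability score from the ranking criterion; the function and variable names are hypothetical.

```python
import numpy as np

def refine_background(candidate_vectors, suitability_scores, keep_n):
    """Hypothetical sketch of data-driven background refinement: rank candidate
    impostor examples by a suitability score and keep the highest-ranking ones
    as the refined SVM background dataset."""
    order = np.argsort(suitability_scores)[::-1]   # most suitable first
    selected = order[:keep_n]
    return candidate_vectors[selected], selected

# candidate_vectors: (n_candidates, dim) array of impostor feature vectors
# suitability_scores: one score per candidate from the ranking criterion
```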
Abstract:
This study assesses the recently proposed data-driven background dataset refinement technique for speaker verification using SVM feature sets other than the GMM supervector features for which it was originally designed. The performance improvements brought about in each trialled SVM configuration demonstrate the versatility of background dataset refinement. This work also extends the originally proposed technique to exploit support vector coefficients as an impostor suitability metric in the data-driven selection process. Using support vector coefficients improved the performance of the refined datasets in the evaluation of unseen data. Further, attempts are made to exploit the differences in impostor example suitability measures from varying feature spaces to provide added robustness.
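One plausible reading of the support-vector-coefficient metric is sketched below with scikit-learn: train a target-versus-background SVM and score each background example by the magnitude of its dual coefficient. This is an assumption about the setup, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def coefficient_suitability(target_vectors, background_vectors):
    """Hypothetical sketch: score each background (impostor) example by the
    magnitude of its support vector coefficient in a target-vs-background SVM.
    Non-support-vector examples receive a score of zero."""
    X = np.vstack([target_vectors, background_vectors])
    y = np.concatenate([np.ones(len(target_vectors)),
                        -np.ones(len(background_vectors))])
    svm = SVC(kernel="linear").fit(X, y)

    scores = np.zeros(len(background_vectors))
    for sv_index, alpha in zip(svm.support_, np.abs(svm.dual_coef_[0])):
        if sv_index >= len(target_vectors):            # a background example
            scores[sv_index - len(target_vectors)] = alpha
    return scores                                      # higher = more informative
```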
Abstract:
Background It remains unclear whether it is possible to develop an epidemic forecasting model for the transmission of dengue fever in Queensland, Australia. Objectives To examine the potential impact of El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia, and explore the possibility of developing a forecast model of dengue fever. Methods Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. The numbers of dengue fever cases notified and the numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and relevant population data were obtained from the Australian Bureau of Statistics. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the data file into two datasets: the data from January 1993 to December 2003 were used to construct a model and those from January 2004 to December 2005 were used to validate it. Results A decrease in the average SOI (ie, warmer conditions) during the preceding 3–12 months was significantly associated with an increase in the monthly numbers of postcode areas with dengue fever cases (β=−0.038; p = 0.019). Predicted values from the SARIMA model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
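A hedged sketch of the modelling setup described above, using statsmodels' SARIMAX with SOI as an exogenous regressor and the same train/validation split; the model order shown is illustrative, not the order identified in the study.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_validate(cases, soi):
    """Hypothetical sketch: fit a seasonal ARIMA model with SOI as an exogenous
    regressor on 1993-2003 and validate on 2004-2005. `cases` and `soi` are
    assumed to be pandas Series with a monthly DatetimeIndex."""
    y_train, y_test = cases["1993":"2003"], cases["2004":"2005"]
    x_train, x_test = soi["1993":"2003"], soi["2004":"2005"]

    # (1,0,1)x(1,0,1,12) is an illustrative order, not the study's.
    model = SARIMAX(y_train, exog=x_train,
                    order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    forecast = model.forecast(steps=len(y_test), exog=x_test)

    # Root-mean-square percentage error on the validation period
    # (assumes no zero-valued months in y_test).
    rmspe = np.sqrt(np.mean(((y_test - forecast) / y_test) ** 2)) * 100
    return forecast, rmspe
```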
Abstract:
Background Older adults may find it problematic to attend hospital appointments because of the difficulty associated with travelling to, within and from a hospital facility for the purpose of a face-to-face assessment. This study aims to investigate equivalence between telephone and face-to-face administration of the Frenchay Activities Index (FAI) and the EuroQol-5D (EQ-5D) generic health-related quality of life instrument amongst an older adult population. Methods Patients aged >65 (n = 53) who had been discharged to the community following an acute hospital admission underwent telephone administration of the FAI and EQ-5D instruments seven days prior to attending a hospital outpatient appointment, where they completed a face-to-face administration of these instruments. Results Overall, 40 subjects' datasets were complete for both assessments and were included in the analysis. The FAI items had high levels of agreement between the two modes of administration (item kappas ranged from 0.73 to 1.00), as did the EQ-5D (item kappas ranged from 0.67 to 0.83). For the FAI, EQ-5D VAS and EQ-5D utility score, intraclass correlation coefficients were 0.94, 0.58 and 0.82 respectively, with paired t-tests indicating no significant systematic difference (p = 0.100, p = 0.690 and p = 0.290 respectively). Conclusion Telephone administration of the FAI and EQ-5D instruments provides comparable results to face-to-face administration amongst older adults deemed to have basic cognitive functioning intact, indicating that this is a suitable alternative approach for collection of this information.
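For illustration only, the sketch below computes the two kinds of statistics reported above for paired telephone and face-to-face responses: item-level kappa and a paired t-test on summary scores. It is not the study's analysis code, and the argument names are hypothetical.

```python
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

def mode_agreement(phone_item, face_item, phone_score, face_score):
    """Hypothetical sketch of the agreement analysis: kappa between telephone
    and face-to-face responses on a single item, plus a paired t-test for
    systematic differences in a summary score (e.g. FAI total, EQ-5D utility)."""
    kappa = cohen_kappa_score(phone_item, face_item)
    t_stat, p_value = ttest_rel(phone_score, face_score)
    return kappa, t_stat, p_value
```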
Abstract:
Queensland University of Technology (QUT) is a multidisciplinary university in Brisbane, Queensland, Australia, with 40,000 students and 1,700 researchers. Notable eResearch infrastructure includes the QUT ePrints repository, the Microsoft QUT Research Centre, the OAK (Open Access to Knowledge) Law Project, Cambia and leading research institutes.

The Australian Government, via the Australian National Data Service (ANDS), is funding institutions to identify and describe their research datasets, to develop and populate data repositories and collaborative infrastructure, and to seed the Australian Research Data Commons. QUT is currently broadening its range of research support services, including those supporting the management of research data, in recognition of the value of these datasets as products of the research process and in order to maximize the potential for reuse. QUT is integrating Library and High Performance Computing (HPC) services to achieve its research support goals.

The Library and HPC released an online survey, using Key Survey, to 1,700 researchers in September 2009. A comprehensive range of eResearch practices and skills was presented for response, grouped into the areas of scholarly communication and open access publishing, collaborative technologies, data collection and management, and computation and visualization tools. Researchers were asked to rate their skill level on each practice. 254 responses were received over two weeks. Eight focus groups were also held with 35 higher degree research (HDR) students and staff to provide additional qualitative feedback. A similar survey was released to 100 support staff, and 73 responses were received.

Preliminary results from the researcher survey and focus groups indicate a gap between current eResearch practices and the potential for researchers to engage in eResearch practices. Researchers are more likely to seek advice from their peers than from support staff. HDR students are more positive about eResearch practices and are more willing to learn new ways of conducting research. An account of the survey methodology, the results obtained, and proposed strategies to embed eResearch practices and skills across and within the research disciplines will be provided.
Abstract:
A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object with known geometry, which enables assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07 ± 0.52 mm standard deviation (SD) for edge distances and −0.647 ± 0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the same size as the scan resolution.
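The reported deviation statistics can be reproduced from paired measurements as in the small sketch below; the numbers in the usage comment are invented for illustration.

```python
import numpy as np

def deviation_stats(measured_mm, reference_mm):
    """Hypothetical sketch: summarise the deviation of distances measured on the
    reconstructed 3D control object from their known reference values."""
    deviation = np.asarray(measured_mm) - np.asarray(reference_mm)
    return deviation.mean(), deviation.std(ddof=1)   # mean error and SD, in mm

# e.g. deviation_stats([49.1, 48.7, 49.4], [50.0, 50.0, 50.0])
#      -> roughly (-0.93, 0.35)
```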
Abstract:
Today’s evolving networks are experiencing a large number of different attacks, ranging from system break-ins and infection by automatic attack tools such as worms, viruses and Trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether or not those addresses are legitimately allocated. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces and to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection technique for identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
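A minimal sketch of a sliding-window, non-parametric CUSUM detector with a dynamic threshold, in the spirit of the technique described above; parameter names and values are assumptions, not those of the dissertation.

```python
import numpy as np
from collections import deque

def cusum_detect(samples, window=200, drift=0.5, scale=5.0):
    """Hypothetical sketch: the test statistic accumulates positive deviations
    from a baseline estimated over a sliding window, and an alarm is raised
    when it exceeds a dynamic threshold derived from the window's variability."""
    history = deque(maxlen=window)
    statistic, alarms = 0.0, []

    for t, x in enumerate(samples):
        if len(history) == window:
            baseline = np.mean(history)
            spread = np.std(history) + 1e-9
            # Accumulate deviations above the baseline, minus an allowance (drift).
            statistic = max(0.0, statistic + (x - baseline) - drift * spread)
            # Dynamic threshold: a multiple of the recent variability.
            if statistic > scale * spread:
                alarms.append(t)
                statistic = 0.0          # reset once a change has been flagged
        history.append(x)

    return alarms
```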
Abstract:
This report explains the objectives, datasets and evaluation criteria of both the clustering and classification tasks set in the INEX 2009 XML Mining track. The report also describes the approaches and results obtained by the different participants.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution; this could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69); for the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, model complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MRs of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, compared with the use of risk factors alone, is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
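As a rough scikit-learn analogue of the evaluation pipeline described above (filter-type feature selection, a decision tree classifier, and cross-validated Kappa and AUC), the sketch below is illustrative only; the Weka methods actually used in the study (Cfs, J48, SMO, NNge) are not reproduced exactly.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import cohen_kappa_score, make_scorer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def evaluate_cvd_model(X, y, n_features=10, folds=10):
    """Hypothetical sketch: filter-type feature selection followed by a decision
    tree, scored with Cohen's kappa, AUC and accuracy under n-fold CV."""
    model = make_pipeline(SelectKBest(f_classif, k=n_features),
                          DecisionTreeClassifier(max_depth=4))
    scores = cross_validate(model, X, y, cv=folds,
                            scoring={"kappa": make_scorer(cohen_kappa_score),
                                     "auc": "roc_auc",
                                     "accuracy": "accuracy"})
    return {name: np.mean(vals)
            for name, vals in scores.items() if name.startswith("test_")}
```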
Abstract:
Presentation describing a project in data-intensive research in the humanities, measuring the activity of publicly available data in social networks such as the blogosphere, Twitter, Flickr and YouTube.
Abstract:
Nature Refuges encompass the second largest extent of protected area estate in Queensland. Major problems exist in the data capture, map presentation, data quality and integrity of these boundaries. The spatial accuracy of the Nature Refuge administrative boundaries directly influences the ability to preserve valuable ecosystems by challenging negative environmental impacts on these properties. This research supports the Nature Refuge Program's efforts to secure Queensland's natural and cultural values on private land by utilising GIS and its advanced functionalities. The research design organises and enters Queensland's Nature Refuge boundaries into a spatial environment. Survey-quality data collection techniques such as the Global Positioning System (GPS) are investigated to capture Nature Refuge boundary information. Using the concepts of map communication, GIS cartography is utilised for the protected area plan design. New spatial datasets are generated, facilitating effective investigative data analysis. The geodatabase model developed by this study adds rich GIS behaviour, providing the capability to store, query and manipulate geographic information. It provides the ability to leverage data relationships and to enforce topological integrity, creating savings in customisation and gains in productivity. The final phase of the research design incorporates the advanced functions of ArcGIS; these functions facilitate the building of spatial system models. The geodatabase and process models developed by this research can be easily modified, and the data relating to mining can be replaced by other negative environmental impacts affecting the Nature Refuges. Results of the research are presented as graphs and maps, providing visual evidence supporting the usefulness of GIS as a means for capturing, visualising and enhancing the spatial quality and integrity of Nature Refuge boundaries.
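A minimal sketch, assuming the boundaries are available as a vector file readable by GeoPandas, of the kind of spatial-integrity check the geodatabase approach supports: flagging invalid geometries and overlapping refuge boundaries. The file path and workflow are assumptions, not the project's actual ArcGIS implementation.

```python
import geopandas as gpd

def check_refuge_boundaries(boundary_file):
    """Hypothetical sketch of a basic spatial-integrity check on Nature Refuge
    boundary polygons: flag invalid geometries and overlapping boundaries."""
    refuges = gpd.read_file(boundary_file)

    # Geometries with self-intersections or other validity problems.
    invalid = refuges[~refuges.geometry.is_valid]

    # Pairs of refuge boundaries whose interiors overlap one another.
    overlaps = gpd.sjoin(refuges, refuges, predicate="overlaps")

    return invalid, overlaps
```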