924 results for User Evaluation
Abstract:
The use of appropriate features to characterize an output class or object is critical for all classification problems. This paper evaluates the capability of several spectral and texture features for object-based vegetation classification at the species level using airborne high-resolution multispectral imagery. Image objects, the basic classification units, were generated through image segmentation. Statistical moments extracted from the original spectral bands and from vegetation index images are used as feature descriptors for image objects (i.e. tree crowns). Several state-of-the-art texture descriptors, such as the Gray-Level Co-Occurrence Matrix (GLCM), Local Binary Patterns (LBP) and its extensions, are also extracted for comparison. A Support Vector Machine (SVM) is employed for classification in the object-feature space. The experimental results showed that incorporating spectral vegetation indices improves classification accuracy over using the original spectral bands alone, with moments of the Ratio Vegetation Index yielding the highest average classification accuracy in our experiment. The experiments also indicate that the spectral moment features outperform, or are at least comparable with, the state-of-the-art texture descriptors in terms of classification accuracy.
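As a rough illustration of the pipeline this abstract describes, the sketch below computes per-object spectral moment features and trains an SVM. It is a minimal sketch assuming NumPy, SciPy and scikit-learn; the band values, object sizes and two-class structure are synthetic inventions for illustration, not the paper's data.

```python
import numpy as np
from scipy.stats import skew
from sklearn.svm import SVC

def object_moments(pixels):
    """Statistical moments (mean, std, skewness) per band for the
    pixels belonging to one image object (e.g. a tree crown)."""
    return np.concatenate([pixels.mean(axis=0),
                           pixels.std(axis=0),
                           skew(pixels, axis=0)])

rng = np.random.default_rng(0)
# two synthetic "species" with different mean reflectance in 4 bands
objects_a = [rng.normal(0.3, 0.05, size=(50, 4)) for _ in range(20)]
objects_b = [rng.normal(0.6, 0.05, size=(50, 4)) for _ in range(20)]
X = np.array([object_moments(p) for p in objects_a + objects_b])
y = np.array([0] * 20 + [1] * 20)

# a vegetation-index band (e.g. an NIR/red ratio image) could be
# appended to `pixels` before computing moments, in the same spirit
clf = SVC(kernel="rbf").fit(X, y)
```

Classification then happens in this object-feature space, one feature vector per segmented crown rather than per pixel.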
Abstract:
The Australian National Data Service (ANDS) was established in 2008 and aims to: influence national policy in the area of data management in the Australian research community; inform best practice for the curation of data; and transform the disparate collections of research data around Australia into a cohesive collection of research resources. One high-profile ANDS activity is to establish the population of Research Data Australia, a set of web pages describing data collections produced by or relevant to Australian researchers. It is designed to promote the visibility of research data collections in search engines, in order to encourage their re-use. As part of activities associated with ANDS, an increasing number of Australian universities are choosing to implement VIVO, not as a platform to profile information about researchers, but as a 'metadata store' platform to profile information about institutional research data sets, both locally and as part of a national data commons. To date, the University of Melbourne, Griffith University, the Queensland University of Technology, and the University of Western Australia have all chosen to implement VIVO, with interest from other universities growing.
Abstract:
Purpose: The purpose of the paper is to develop a framework for evaluating the accessibility of knowledge-based cities. ----- ----- Design/methodology/approach: The approach identifies common mistakes and problems in accessibility assessment for knowledge cities. ----- ----- Originality/value: Accessibility plays a key role in transport sustainability and underpins the crucial links between transport and sustainability goals such as air quality, environmental resource consumption and social equity. In knowledge cities, accessibility has significant effects on quality of life and social equity by improving the mobility of people and goods. Accessibility also influences patterns of growth and economic health by providing access to land. Accessibility is not only one of the components of knowledge cities but also affects the other elements of knowledge cities, directly or indirectly. ----- ----- Practical implications: The outcomes of the application will be helpful for developing particular methodologies for evaluating knowledge cities. In other words, this methodology attempts to develop an assessment procedure for examining the accessibility of knowledge-based cities.
Abstract:
Griffith University is developing a digital repository system using HarvestRoad Hive software to better meet the needs of academics and students using institutional learning and teaching, course readings, and institutional intellectual capital systems. Issues with current operations and systems are discussed in terms of user behaviour. New repository systems are being designed in such a way that they address current service and user behaviour issues by closely aligning systems with user needs. By developing attractive online services, Griffith is working to change current user behaviour to achieve strategic priorities in the sharing and reuse of learning objects, improved selection and use of digitised course readings, the development of ePrint and eScience services, and the management of a research portfolio service.
Abstract:
OBJECTIVE: To examine whether some drivers with hemianopia or quadrantanopia display on-road driving skills comparable to those of drivers with normal visual fields. ---------- METHOD: An occupational therapist evaluated 22 people with hemianopia, 8 with quadrantanopia, and 30 with normal vision for driving skills during naturalistic driving, using six rating scales. ---------- RESULTS: Of drivers with normal vision, >90% drove flawlessly or with only minor errors. Although drivers with hemianopia were more likely to receive poorer ratings for all skills, 59.1%–81.8% performed with no or minor errors. A skill commonly problematic for them was lane keeping (40.9%). Of the 8 drivers with quadrantanopia, 7 (87.5%) exhibited no or minor errors. ---------- CONCLUSION: This study of people with hemianopia or quadrantanopia without lateral spatial neglect highlights the need to provide individual opportunities for on-road driving evaluation under natural traffic conditions if a person is motivated to return to driving after brain injury.
Abstract:
We describe research into the identification of anomalous events and event patterns as manifested in computer system logs. Prototype software has been developed with the capability to identify anomalous events based on usage patterns or user profiles, and to alert administrators when such events are identified. To reduce the number of false-positive alerts, we have investigated different user profile training techniques and introduce the use of abstractions to group together related applications. Our results suggest that the number of false alerts generated is significantly reduced when a growing time window is used for user profile training and when abstraction into groups of applications is used.
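A toy version of growing-window profiling with application abstraction might look like the following. The application-to-group mapping, the frequency threshold and the warm-up length are all invented for illustration; the abstract does not describe the prototype at this level of detail.

```python
from collections import Counter

# hypothetical abstraction: concrete applications -> application groups
APP_GROUPS = {"vim": "editor", "emacs": "editor",
              "gcc": "compiler", "clang": "compiler",
              "nc": "network"}

def anomalies(events, threshold=0.05, warmup=5):
    """Flag events whose application group is rare in the profile
    built from all earlier events (a growing time window)."""
    profile, flagged = Counter(), []
    for app in events:
        group = APP_GROUPS.get(app, "other")
        total = sum(profile.values())
        if total >= warmup and profile[group] / total < threshold:
            flagged.append(app)
        profile[group] += 1  # the window only ever grows
    return flagged

history = ["vim", "gcc", "emacs", "gcc", "vim", "clang", "nc"]
print(anomalies(history))  # only "nc": its group was never seen before
```

Because "clang" falls into the already-familiar "compiler" group, abstraction suppresses the false alert a per-application profile would raise for it.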
Abstract:
Expenditure on R&D in China's construction industry has been relatively low in comparison with many developed countries for a number of years – a situation considered a major barrier to the industry's competitiveness in general and to satisfactory development across the 31 regions involved. A major problem is the lack of a sufficiently sophisticated method for objectively evaluating R&D activity in what are quite complex circumstances, given the size and regional differences that exist in this part of the world. A regional construction R&D evaluation system (RCRES) is presented, aimed at rectifying this situation. It is based on 12 indicators drawn from the Chinese Government's R&D Inventory of Resources in consultation with a small group of experts in the field, reduced by factor analysis into three groups. From this, the required evaluation is obtained by a simple formula. Examination of the results provides a ranking of the R&D performance of each of the 31 regions, indicating a general disproportion between coastal and inland regions and highlighting regions receiving special emphasis or currently lacking development. An understanding of this is vital for the future of China's construction industry.
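In spirit, a "simple formula" over factor-grouped indicators could look like the sketch below. The grouping of the 12 indicators, the factor weights and the data are all invented placeholders; the paper derives its groups from factor analysis of real inventory data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_indicators = 31, 12
data = rng.random((n_regions, n_indicators))  # placeholder indicator values

# hypothetical assignment of the 12 indicators to three factor groups
groups = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
weights = np.array([0.5, 0.3, 0.2])  # illustrative factor weights

z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize indicators
factor_scores = np.column_stack([z[:, g].mean(axis=1) for g in groups])
overall = factor_scores @ weights  # one composite R&D score per region

ranking = np.argsort(-overall)  # region indices, strongest R&D first
```

The ranking list is then read off `overall`, exactly the kind of coastal-versus-inland comparison the abstract reports.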
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, remain worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as follows. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction, as it captures the common visual properties of objects. Image segmentation is often used as the first step in detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist more frequently with which scene type. The scene configuration is represented in a probabilistic graphical model, and probabilistic inference is employed to calculate the scene type given an annotated image.
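The contextual disambiguation step in phase 3 can be illustrated with a toy example: the classifier's candidate labels are re-scored against co-occurrence knowledge held in the ontology. The co-occurrence counts and label names below are invented for illustration, not taken from the thesis.

```python
# hypothetical co-occurrence counts mined from annotated images;
# keys are alphabetically sorted label pairs
COOCCUR = {("sea", "sky"): 50, ("sky", "tree"): 40,
           ("car", "sea"): 1, ("car", "tree"): 5}

def disambiguate(candidates, context_labels):
    """Pick the candidate label that co-occurs most often with the
    labels already assigned to neighbouring objects."""
    def score(label):
        return sum(COOCCUR.get(tuple(sorted((label, c))), 0)
                   for c in context_labels)
    return max(candidates, key=score)

# the object classifier is unsure between "sea" and "road",
# but a neighbouring object was already labelled "sky"
print(disambiguate(["sea", "road"], ["sky"]))  # -> "sea"
```

The same co-occurrence knowledge, expressed as a probabilistic graphical model, drives the scene-type inference in phase 4.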
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
Abstract:
This paper assesses the capacity of high-frequency ultrasonic waves to detect changes in the proteoglycan (PG) content of articular cartilage. Fifty cartilage-on-bone samples were exposed to ultrasonic waves via an ultrasound transducer at a frequency of 20 MHz. Histology and ImageJ processing were conducted to determine the PG content of the specimens. The ratios of the signals reflected from the surface and from the osteochondral junction (OCJ) were determined from the experimental data. The initial results show an inconsistency in the capacity of ultrasound to distinguish samples with severe proteoglycan loss (i.e. >90% PG loss) from normal intact samples. This lack of clear distinction was also observed for samples with less than 60% depletion, while there is a clear differentiation between normal intact samples and those with 55-70% PG loss.
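The surface/OCJ reflection ratio can be computed from an A-scan roughly as follows. This is a minimal synthetic sketch: the sampling rate, echo arrival times and amplitudes are invented, and the paper's actual signal-processing chain is not described in the abstract.

```python
import numpy as np

fs = 100e6  # sampling rate (Hz), assumed
t = np.arange(0, 20e-6, 1 / fs)

def echo(t, t0, amp, f=20e6):
    """A gated 20 MHz tone burst arriving at time t0 (0.5 us long)."""
    window = (t >= t0) & (t < t0 + 0.5e-6)
    return amp * np.sin(2 * np.pi * f * t) * window

# synthetic A-scan: surface echo followed by the OCJ echo
signal = echo(t, 2e-6, 1.0) + echo(t, 8e-6, 0.4)

def peak_amp(sig, t, start, stop):
    sel = (t >= start) & (t < stop)
    return np.abs(sig[sel]).max()

surface = peak_amp(signal, t, 1e-6, 4e-6)   # surface reflection
ocj = peak_amp(signal, t, 6e-6, 10e-6)      # osteochondral junction
ratio = ocj / surface  # reflection ratio used to grade PG content
```

In the study, this ratio is what is compared against the histologically determined PG loss.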
Abstract:
The health of tollbooth workers is seriously threatened by long-term exposure to air polluted by vehicle exhausts. Using traffic data collected at a toll plaza, vehicle movements were simulated by a system dynamics model under different traffic volumes and toll collection procedures, allowing the average travel time of vehicles to be calculated. A three-dimensional Computational Fluid Dynamics (CFD) model with a k–ε turbulence model was then used to simulate pollutant dispersion at the toll plaza for the different traffic volumes and toll collection procedures. It was shown that pollutant concentration around tollbooths increases as traffic volume increases. Whether traffic volume is low or high (1500 vehicles/h or 2500 vehicles/h), pollutant concentration decreases if electronic toll collection (ETC) is adopted. In addition, pollutant concentration around tollbooths decreases as the proportion of ETC-equipped vehicles increases. However, if the proportion of ETC-equipped vehicles is very low and the traffic volume is not heavy, then pollutant concentration increases as the number of ETC lanes increases.
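The travel-time half of the study can be caricatured with a tiny queueing simulation: ETC's shorter service time reduces the time vehicles spend idling at the booth, which is what lowers exhaust emissions near it. The arrival rate, lane count and service times below are assumptions for illustration only, not the paper's calibrated system dynamics model.

```python
import random

def avg_time_in_system(arrival_rate, service_time, sim_time=3600, seed=0):
    """Single-lane queue with Poisson arrivals and a fixed service time;
    returns the mean time (s) a vehicle spends queueing and being served."""
    rng = random.Random(seed)
    t, booth_free_at, times = 0.0, 0.0, []
    while t < sim_time:
        t += rng.expovariate(arrival_rate)  # next vehicle arrives
        start = max(t, booth_free_at)       # wait if the booth is busy
        booth_free_at = start + service_time
        times.append(booth_free_at - t)
    return sum(times) / len(times)

per_lane_rate = 1500 / 4 / 3600  # 1500 veh/h over 4 lanes (assumed)
manual = avg_time_in_system(per_lane_rate, service_time=8)  # manual booth
etc = avg_time_in_system(per_lane_rate, service_time=2)     # ETC lane
```

The shorter the time in system, the less exhaust is emitted in the immediate vicinity of the booth, which is the mechanism behind the abstract's ETC findings.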
Abstract:
Objective: To identify agreement levels between conventional longitudinal evaluation of change (post–pre) and patient-perceived change (post–then test) in health-related quality of life. Design: A prospective cohort investigation with two assessment points (baseline and six-month follow-up) was implemented. Setting: Community rehabilitation setting. Subjects: Frail older adults accessing community-based rehabilitation services. Intervention: Nil as part of this investigation. Main measures: Conventional longitudinal change in health-related quality of life was considered the difference between standard EQ-5D assessments completed at baseline and follow-up. To evaluate patient-perceived change, a 'then test' was also completed at the follow-up assessment. This required participants to report (from their current perspective) how they believe their health-related quality of life was at baseline (using the EQ-5D). Patient-perceived change was considered the difference between the 'then test' and the standard follow-up EQ-5D assessments. Results: The mean (SD) age of participants was 78.8 (7.3) years. Of the 70 participants, 62 (89%) data sets were complete and included in the analysis. Agreement between conventional (post–pre) and patient-perceived (post–then test) change was low to moderate (EQ-5D utility intraclass correlation coefficient (ICC) = 0.41, EQ-5D visual analogue scale (VAS) ICC = 0.21). Neither approach inferred greater change than the other (utility P = 0.925, VAS P = 0.506). Mean (95% confidence interval (CI)) conventional change in EQ-5D utility and VAS was 0.140 (0.045, 0.236) and 8.8 (3.3, 14.3) respectively, while patient-perceived change was 0.147 (0.055, 0.238) and 6.4 (1.7, 11.1) respectively. Conclusions: Substantial disagreement exists within individuals between conventional longitudinal evaluation of change in health-related quality of life and patient-perceived change (as measured using a then test).
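The agreement analysis (an ICC between post–pre and post–then-test change scores) can be reproduced in outline on simulated data. The ICC(2,1) formula below is the standard Shrout–Fleiss two-way random, absolute-agreement, single-measures form; the data are synthetic, not the study's.

```python
import numpy as np

def icc_2_1(x, y):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measures, for two 'raters' (here, the two change estimates)."""
    data = np.column_stack([x, y])
    n, k = data.shape
    mean_r, mean_c, gm = data.mean(axis=1), data.mean(axis=0), data.mean()
    ss_r = k * ((mean_r - gm) ** 2).sum()           # between subjects
    ss_c = n * ((mean_c - gm) ** 2).sum()           # between methods
    ss_e = ((data - mean_r[:, None] - mean_c[None, :] + gm) ** 2).sum()
    ms_r, ms_c = ss_r / (n - 1), ss_c / (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(2)
baseline = rng.normal(0.5, 0.2, 62)                 # true EQ-5D utility
followup = baseline + rng.normal(0.14, 0.2, 62)     # genuine improvement
then_test = baseline + rng.normal(0.0, 0.25, 62)    # noisily recalled baseline

conventional = followup - baseline   # post - pre
perceived = followup - then_test     # post - then test
```

Recall noise in the then test alone is enough to drag the ICC well below 1, mirroring the low-to-moderate agreement the study reports.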
Abstract:
In recent years, several scientific Workflow Management Systems (WfMSs) have been developed with the aim of automating large-scale scientific experiments. Many offerings exist, but none has yet been promoted as an accepted standard. In this paper we propose a pattern-based evaluation of three of the most widely used scientific WfMSs: Kepler, Taverna and Triana. The aim is to compare them with traditional business WfMSs, emphasizing the strengths and deficiencies of both kinds of systems. Moreover, a set of new patterns is defined from the analysis of the three considered systems.
Abstract:
Special collections, because of the issues associated with conservation and use (a feature they share with archives), tend to be the most digitized areas in libraries. The Nineteenth Century Schoolbooks collection comprises rarely held nineteenth-century schoolbooks painstakingly collected over a lifetime of work by Prof. John A. Nietz and donated, 9,000 volumes strong, to the Hillman Library at the University of Pittsburgh in 1958; it has since grown to 15,000 volumes. About 140 of these texts are completely digitized and showcased on a publicly accessible website through the University of Pittsburgh's Library, along with a searchable bibliography of the entire collection, which has expanded awareness of the collection and its user base beyond the academic community. The URL for the website is http://digital.library.pitt.edu/nietz/. The collection is a rich resource for researchers studying the intellectual, educational, and textbook publishing history of the United States. In this study, we examined several existing records collected by the Digital Research Library at the University of Pittsburgh in order to determine the identity and searching behaviors of the users of this collection. The records examined include: 1) the results of a 3-month user survey; 2) user access statistics, including search queries, for a period of one year beginning a year after the digitized collection became publicly available in 2001; and 3) e-mail input received by the website over the 4 years from 2000 to 2004. The results of the study demonstrate the differences in online retrieval strategies used by academic researchers and historians, archivists, avocationists, and the general public, and the importance of facilitating the discovery of digitized special collections through the use of electronic finding aids and an interactive interface with detailed metadata.
Abstract:
This chapter sets out the debates about the changing role of audiences in relation to user-created content as they appear in New Media and Cultural Studies. The discussion moves beyond the simple dichotomies between active producers and passive audiences, and draws on empirical evidence, in order to examine those practices that are most ordinary and widespread. Building on the knowledge of television’s role in facilitating public life, and the everyday, affective practices through which it is experienced and used, I focus on the way in which YouTube operates as a site of community, creativity and cultural citizenship; and as an archive of popular cultural memory.