827 results for 3D user interface
Abstract:
Because of the conservation and use issues they share with archives, special collections tend to be the most heavily digitized areas of libraries. The Nineteenth Century Schoolbooks collection comprises rarely held nineteenth-century schoolbooks painstakingly assembled over a lifetime of work by Prof. John A. Nietz and donated to the Hillman Library at the University of Pittsburgh in 1958; the original 9,000 volumes have since grown to 15,000. About 140 of these texts are fully digitized and showcased on a publicly accessible website hosted by the University of Pittsburgh's Library, along with a searchable bibliography of the entire collection, which has expanded awareness of the collection and its user base beyond the academic community. The URL for the website is http://digital.library.pitt.edu/nietz/. The collection is a rich resource for researchers studying the intellectual, educational, and textbook publishing history of the United States. In this study, we examined several existing records collected by the Digital Research Library at the University of Pittsburgh in order to determine the identity and searching behaviors of the users of this collection. The records examined include: 1) the results of a 3-month user survey, 2) user access statistics, including search queries, for a period of one year, a year after the digitized collection became publicly available in 2001, and 3) e-mail input received by the website over the 4 years from 2000 to 2004. The results of the study demonstrate the differences in online retrieval strategies used by academic researchers and historians, archivists, avocationists, and the general public, and the importance of facilitating the discovery of digitized special collections through the use of electronic finding aids and an interactive interface with detailed metadata.
Abstract:
Since the availability of 3D full body scanners and the associated software systems for operating on large point clouds, 3D anthropometry has been marketed as a breakthrough and milestone in ergonomic design. The assumptions made by the representatives of the 3D paradigm need to be critically reviewed, though. 3D anthropometry has advantages as well as shortfalls, which need to be carefully considered. While it is apparent that the measurement of a full body point cloud allows for easier storage of raw data and improves quality control, the difficulties in calculating standardized measurements from the point cloud are widely underestimated. Early studies that used 3D point clouds to derive anthropometric dimensions showed unacceptable deviations from the standardized results measured manually. While 3D human point clouds provide a valuable tool for replicating specific individuals for further virtual studies, or for personalizing garments, their use in ergonomic design must be critically assessed. Ergonomic volumetric problems are defined by their two-dimensional boundaries or one-dimensional sections; a 1D/2D approach is therefore sufficient to solve an ergonomic design problem. As a consequence, all modern 3D human manikins are defined by the underlying anthropometric girths (2D) and lengths/widths (1D), which can be measured efficiently using manual techniques. Traditionally, ergonomists have taken a statistical approach to design for generalized percentiles of the population rather than for a single user. The underlying method is based on the distribution functions of meaningful one- and two-dimensional anthropometric variables. Compared to these variables, the distribution of human volume has no ergonomic relevance. On the other hand, if volume is seen as a two-dimensional integral or distribution function of length and girth, the calculation of combined percentiles, a common ergonomic requirement, is undefined. Consequently, we suggest critically reviewing the cost and use of 3D anthropometry, and we recommend making proper use of the widely available one- and two-dimensional anthropometric data in ergonomic design.
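The point about combined percentiles can be made concrete with a small numerical illustration. The sketch below is not taken from the paper; it assumes hypothetical bivariate normal data for stature and chest girth (invented means, standard deviations, and correlation) and shows that applying the 5th to 95th percentile band independently to each dimension accommodates noticeably less than 90% of the simulated population, which is one reason a single "combined percentile" is not a well-defined design target.

```python
# Illustrative sketch only: why univariate percentiles do not combine into a
# single "combined percentile". All numbers below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
mean = [1750.0, 980.0]                    # assumed means: stature, chest girth (mm)
sd = [70.0, 80.0]                         # assumed standard deviations (mm)
rho = 0.5                                 # assumed correlation between the two measures
cov = [[sd[0] ** 2, rho * sd[0] * sd[1]],
       [rho * sd[0] * sd[1], sd[1] ** 2]]
sample = rng.multivariate_normal(mean, cov, size=100_000)

# 5th-95th percentile band for each dimension taken separately (90% each).
lo, hi = np.percentile(sample, [5, 95], axis=0)

# Fraction of the simulated population that falls inside BOTH bands at once.
inside_both = np.all((sample >= lo) & (sample <= hi), axis=1)
print(f"accommodated by both bands: {inside_both.mean():.1%}")  # noticeably < 90%
```

In this configuration the joint coverage falls several percentage points short of 90%, and it shrinks further as more correlated dimensions are added, which is consistent with the paper's argument that combined percentiles cannot simply be read off the univariate distributions.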
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-under age group, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are often not trained to perform. Therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One of the artefacts observed is the step artefact, believed to result from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was addressed using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare 1.5T images with 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an aligning method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multi-threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding value for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracy of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm compared with 0.18 mm for the CT-based models; the difference was not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies caused by the poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
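As a rough illustration of the multilevel thresholding idea described above, the sketch below applies a separate intensity threshold to the proximal, diaphyseal and distal regions of an image volume. It is not the thesis implementation: the region boundaries and threshold values are invented, and in the study they would come from the threshold selection method rather than being hard-coded.

```python
# Hypothetical sketch of region-wise multilevel thresholding for a long bone.
# Region boundaries and threshold values are invented for the example.
import numpy as np

def segment_multilevel(volume: np.ndarray,
                       region_bounds: tuple[int, int],
                       thresholds: tuple[float, float, float]) -> np.ndarray:
    """Return a binary bone mask for a 3D image volume (slices along axis 0).

    region_bounds: slice indices separating the proximal, diaphyseal and distal regions.
    thresholds:    one intensity threshold per region.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    b1, b2 = region_bounds
    regions = [(slice(0, b1), thresholds[0]),      # proximal
               (slice(b1, b2), thresholds[1]),     # diaphyseal
               (slice(b2, None), thresholds[2])]   # distal
    for region, level in regions:
        mask[region] = volume[region] >= level
    return mask

# Demo with a synthetic volume; real use would load the CT or MRI slices instead.
volume = np.random.default_rng(1).normal(100.0, 30.0, size=(300, 256, 256))
bone_mask = segment_multilevel(volume, region_bounds=(80, 220),
                               thresholds=(160.0, 180.0, 150.0))
print(bone_mask.sum(), "voxels labelled as bone")
```

A surface model would then be extracted from the mask (for example via marching cubes) before geometric comparison with the reference standard.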
Abstract:
With the goal of improving the academic performance of primary and secondary students in Malaysia by 2020, the Malaysian Ministry of Education has made a significant investment in developing the Smart School Project. The aim of this project is to introduce interactive courseware into primary and secondary schools across Malaysia. As has been the case around the world, interactive courseware is regarded as a tool to motivate students to learn meaningfully and to enhance learning experiences. Through an initial pilot phase, the Malaysian government commissioned the development of interactive courseware by a number of developers and has rolled this courseware out to selected schools over the past 12 years. However, Ministry reports and several independent researchers have concluded that its uptake has been limited and that much of the courseware has not been used effectively in schools. This has been attributed to weaknesses in the interface design of the courseware, which, it has been argued, fails to accommodate the needs of students and teachers. Taking the Smart School Project's science courseware as a sample, this research project investigated the extent, nature, and reasons for the problems that have arisen. In particular, it focused on examining the quality and effectiveness of the interface design in facilitating interaction and supporting learning experiences. The analysis was conducted empirically, by first comparing the interface design principles, characteristics and components of the existing courseware against best practice, as described in the international literature, as well as against the government guidelines provided to the developers. An ethnographic study was then undertaken to observe how the courseware is used and received in the classroom, and to investigate the stakeholders' (school principal's, teachers' and students') perceptions of its usability and effectiveness. Finally, to understand how issues may have arisen, the development process was reviewed and compared to development methods recommended in the literature, as well as to the guidelines provided to the developers. The outcomes of the project include an empirical evaluation of the quality of the interface design of the Smart School Project's science courseware; the identification of other issues that have affected its uptake; an evaluation of the development process; and, arising from these, an extended set of principles to guide the design and development of future Smart School Project courseware to ensure that it accommodates the various stakeholders' needs.
Abstract:
This paper aims to inform design strategies for smart space technology to enhance libraries as environments for co-working and informal social learning. The focus is on understanding user motivations, behaviour, and activities in the library when there is no programmed agenda. The study analyses data gathered over five months of ethnographic research at 'The Edge', a bookless library space at the State Library of Queensland in Brisbane, Australia, that is explicitly dedicated to co-working, social learning, peer collaboration, and creativity around digital culture and technology. The results present five personas that embody people's main usage patterns as well as their motivations, attitudes, and perceived barriers to social learning. It appears that most users work individually or within pre-organised groups, but usually do not make new connections with co-present, unacquainted users. Based on the personas, four hybrid design dimensions are suggested to improve the library as a social interface for shared learning encounters across physical and digital spaces. The findings offer actionable knowledge for managers, decision makers, and designers of technology-enhanced library spaces and similar collaboration and co-working spaces.
Abstract:
This paper reports on the implementation of a non-invasive electroencephalography-based brain-computer interface (BCI) to control functions of a car in a driving simulator. The system comprises a Cleveland Medical Devices BioRadio 150 physiological signal recorder, a MATLAB-based BCI and an OKTAL SCANeR advanced driving experience simulator. The system uses steady-state visual evoked potentials (SSVEPs) as the BCI paradigm, elicited by frequency-modulated high-power LEDs and recorded with an Oz-Fz electrode placement with Fz as ground. A three-class online brain-computer interface was developed and interfaced with the driving simulator to control functions of the car, including acceleration and steering. The findings are mainly exploratory but provide an indication of the feasibility and challenges of brain-controlled on-road cars in the future, as well as a safe, simulated BCI driving environment to use as a foundation for research into overcoming these challenges.
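For readers unfamiliar with the SSVEP paradigm, the core detection step can be sketched in a few lines. The code below is an illustration only, not the system described here: the sampling rate, flicker frequencies and window length are assumed values, and a real implementation would add filtering, artefact rejection and a decision threshold before mapping the three classes to car functions such as steering and acceleration.

```python
# Illustrative sketch: a minimal FFT-based SSVEP classifier for a three-class BCI.
# Stimulus frequencies, sampling rate and window length are assumed, not the paper's.
import numpy as np

FS = 256                          # assumed EEG sampling rate (Hz)
STIM_FREQS = [8.0, 10.0, 12.0]    # assumed LED flicker frequencies (Hz)

def classify_ssvep(eeg_window: np.ndarray, fs: int = FS) -> int:
    """Return the index of the stimulus frequency with the highest spectral power.

    eeg_window: 1D array of Oz-Fz samples covering a few seconds of data.
    """
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    powers = []
    for f in STIM_FREQS:
        # Sum power in narrow bands around the fundamental and second harmonic.
        band = (np.abs(freqs - f) < 0.3) | (np.abs(freqs - 2 * f) < 0.3)
        powers.append(spectrum[band].sum())
    return int(np.argmax(powers))

# Synthetic 4-second window dominated by a 10 Hz component: class 1 expected.
t = np.arange(4 * FS) / FS
demo = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)
print("predicted class:", classify_ssvep(demo))
```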
Abstract:
We demonstrated for the first time by large-scale ab initio calculations that a graphene/titania interface in the ground electronic state forms a charge-transfer complex due to the large difference between the work functions of graphene and titania, leading to substantial hole doping in graphene. Interestingly, electrons in the upper valence band can be directly excited from graphene to the conduction band, that is, the 3d orbitals of titania, under visible light irradiation. This should yield well-separated electron-hole pairs, with potentially high photocatalytic or photovoltaic performance in hybrid graphene and titania nanocomposites. Experimental wavelength-dependent photocurrent generation of the graphene/titania photoanode demonstrated a noticeable visible-light response and clearly verified our ab initio prediction.
Abstract:
This paper presents a comparative study evaluating the usability of a tag-based interface alongside the current 'conventional' interface in the Australian mobile banking context. The tag-based interface is based on user-assigned tags for banking resources, with support for different types of customization; the conventional interface is based on standard HTML objects such as select boxes, lists and tables, with limited customization. A total of 20 banking users evaluated both interfaces on a set of tasks and completed a post-test usability questionnaire. Efficiency, effectiveness, and user satisfaction were considered in evaluating the usability of the interfaces. The results of the evaluation show improved usability, in terms of user satisfaction, with the tag-based interface compared to the conventional interface. This outcome is more apparent among participants without prior mobile banking experience. There is therefore potential for the tag-based interface to improve user satisfaction with mobile banking and to positively affect the adoption and acceptance of mobile banking, particularly in Australia.
Abstract:
Video presented as part of the ACIS 2009 conference in Melbourne, Australia. This video outlines a collaborative BPMN editing system developed by Stephen West, an IT Research Masters student at QUT, Brisbane, Australia. The editor uses a number of tools to facilitate collaborative process modelling, including a presentation wall for viewing text descriptions of business processes and a tile-based BPMN editor. We will post a video soon focussing on the multi-user capabilities of this editor. For more details see www.bpmve.org.
Abstract:
Modelling business processes for analysis or redesign usually requires the collaboration of many stakeholders. These stakeholders may be spread across locations or even companies, making co-located collaboration costly and difficult to organize. Modern process modelling technologies support remote collaboration but lack support for visual cues used in co-located collaboration. Previously we presented a prototype 3D virtual world process modelling tool that supports a number of visual cues to facilitate remote collaborative process model creation and validation. However, the added complexity of having to navigate a virtual environment and using an avatar for communication made the tool difficult to use for novice users. We now present an evolved version of the technology that addresses these issues by providing natural user interfaces for non-verbal communication, navigation and model manipulation.
Abstract:
CubIT is a multi-user, large-scale presentation and collaboration framework installed at the Queensland University of Technology's (QUT) Cube facility, an interactive facility made up of 48 multi-touch screens and very large projected display screens. The CubIT system allows users to upload, interact with and share their own content on the Cube's display surfaces. This paper outlines the collaborative features of CubIT, which are implemented via three user interfaces: a large-screen multi-touch interface, a mobile phone and tablet application, and a web-based content management system. Each of these applications plays a different role and supports different interaction mechanisms, together enabling a wide range of collaborative features including multi-user shared workspaces, drag-and-drop upload and sharing between users, session management, and dynamic state control between different parts of the system.
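As a purely hypothetical sketch of the kind of shared-workspace state propagation described above, the snippet below broadcasts an upload from one client interface to the others in a session. The class names and API are invented for illustration and do not reflect CubIT's actual implementation.

```python
# Hypothetical sketch (not CubIT's code): a minimal shared-workspace model in
# which content uploaded from any client interface (multi-touch wall, mobile
# app, web CMS) is pushed to the other interfaces in the same session.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContentItem:
    owner: str        # user who uploaded the item
    name: str         # e.g. a file or media title
    payload: bytes    # the shared content itself

@dataclass
class SharedWorkspace:
    session_id: str
    items: list[ContentItem] = field(default_factory=list)
    subscribers: list[Callable[[ContentItem], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[ContentItem], None]) -> None:
        """Register a client interface to be notified of new content."""
        self.subscribers.append(callback)

    def upload(self, item: ContentItem) -> None:
        """Add content to the workspace and push it to every connected interface."""
        self.items.append(item)
        for notify in self.subscribers:
            notify(item)

# Example: a wall display and a web CMS both react to an upload from a mobile app.
ws = SharedWorkspace(session_id="demo-session")
ws.subscribe(lambda item: print(f"[wall] showing {item.name} from {item.owner}"))
ws.subscribe(lambda item: print(f"[cms] indexing {item.name}"))
ws.upload(ContentItem(owner="alice", name="slides.pdf", payload=b"..."))
```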
Abstract:
The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this project involved an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry. Working with data provided by Hoodlum Entertainment and Foxtel Marketing, the outcome of the study was a prototype for a custom data visualisation tool that allowed access, manipulation and presentation of user engagement data, both historic and predictive. The prototyped interfaces demonstrate how the visualisation tool would collect and organise data specific to multiplatform projects by aggregating data across a number of platform reporting tools. Such a tool is designed to encompass not only platforms developed by the transmedia producer but also sites developed by fans. The visualisation tool accounts for multiplatform experience projects whose top level comprises people, platforms and content. People include characters, actors, audience, distributors and creators. Platforms include television, Facebook and other relevant social networks, literature, cinema and other media that might be included in the multiplatform experience. Content refers to discrete media texts employed within a platform, such as a tweet, a YouTube video, a Facebook post, an email, or a television episode. Core content is produced by the experience's creators to advance the narrative, while complementary content generated by audience members offers further contributions to the experience. Equally important is the timing with which the components of the experience are introduced and how they interact with and impact upon each other. Being able to combine, filter and sort these elements in multiple ways allows us to better understand the value of certain components of a project. It also offers insights into the relationship between the timing of the release of components and the user activity associated with them, which further highlights the efficacy (or, indeed, failure) of assets as catalysts for engagement. In collaboration with Hoodlum we developed a number of design scenarios experimenting with the ways in which data can be visualised and manipulated to tell a more refined story about the value of user engagement with certain project components and activities. This experimentation will serve as the basis for future research.
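A minimal, hypothetical sketch of the people/platforms/content model described above is shown below; the field names, example events and aggregation function are invented for illustration and are not the prototype's actual data schema.

```python
# Hypothetical sketch (not Hoodlum's tool): engagement events organised by
# people, platforms and content, with a simple per-platform aggregation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EngagementEvent:
    person: str          # e.g. an audience member, character account or creator
    platform: str        # e.g. "television", "facebook", "youtube"
    content_id: str      # a discrete media text: tweet, episode, post, ...
    is_core: bool        # core (creator-produced) vs complementary (fan-generated)
    timestamp: datetime  # when the interaction happened

def engagement_by_platform(events: list[EngagementEvent],
                           core_only: bool = False) -> dict[str, int]:
    """Count engagement events per platform, optionally restricted to core content."""
    counts: dict[str, int] = {}
    for e in events:
        if core_only and not e.is_core:
            continue
        counts[e.platform] = counts.get(e.platform, 0) + 1
    return counts

events = [
    EngagementEvent("fan_01", "facebook", "post_17", True, datetime(2013, 5, 1, 20, 5)),
    EngagementEvent("fan_02", "youtube", "fan_clip_3", False, datetime(2013, 5, 1, 21, 30)),
]
print(engagement_by_platform(events))                   # all content
print(engagement_by_platform(events, core_only=True))   # creator-produced only
```

Filtering by timestamp in the same way would support the kind of release-timing versus activity comparisons the abstract describes.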
Abstract:
Aims: The Medical Imaging Training Immersive Environment (MITIE) system is a recently developed virtual reality (VR) platform that allows students to practice a range of medical imaging techniques. The aim of this pilot study was to gather user feedback about the educational value of the application and to inform future pedagogical development. This presentation explores the use of this technology for skills training and the blurring of boundaries between academic learning and clinical skills training.
Background: MITIE is a 3D VR environment that allows students to manipulate a patient and radiographic equipment in order to produce a VR-generated image for comparison with a gold standard. As with VR initiatives in other health disciplines (1-6), the software mimics clinical practice as much as possible and uses 3D technology to enhance immersion and realism. The software was developed by the Medical Imaging Course Team at a provider University with funding from a Health Workforce Australia "Simulated Learning Environments" grant.
Methods: Over 80 students undertaking the Bachelor of Medical Imaging Course were randomised to receive practical experience with either MITIE or radiographic equipment in the medical radiation laboratory. Student feedback about the educational value of the software was collected, and performance on an assessed setup was measured for both groups for comparison. Ethical approval for the project was provided by the university ethics panel.
Results: This presentation provides a qualitative analysis of student perceptions relating to satisfaction, usability and educational value, as well as comparative quantitative performance data. Students reported high levels of satisfaction, and both feedback and assessment results confirmed the application's significance as a pre-clinical training tool. A clear theme emerged that MITIE could be a useful learning tool that students could access to consolidate their clinical learning, either during their academic timetables or during their clinical placement.
Conclusion: Student feedback and performance data indicate that MITIE has a valuable role to play in clinical skills training for medical imaging students, in both the academic and the clinical environment. Future work will establish a framework for an appropriate supporting pedagogy that can cross the boundary between the two environments. This project was made possible by funding provided by Health Workforce Australia.
Abstract:
This thesis developed a method for real-time and handheld 3D temperature mapping using a combination of off-the-shelf devices and efficient computer algorithms. It contributes a new sensing and data processing framework to the science of 3D thermography, unlocking its potential for application areas such as building energy auditing and industrial monitoring. New techniques for the precise calibration of multi-sensor configurations were developed, along with several algorithms that ensure both accurate and comprehensive surface temperature estimates can be made for rich 3D models as they are generated by a non-expert user.
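As a hedged illustration of the basic 3D thermography step (assigning a temperature to each point of a 3D model from a calibrated thermal camera), the sketch below projects 3D points into a thermal image and samples the temperature at the resulting pixels. The calibration values, image and points are invented for the example, and the thesis's actual framework involves considerably more, including multi-sensor calibration, real-time fusion and accuracy handling.

```python
# Hypothetical sketch (not the thesis implementation): look up a temperature for
# each 3D point by projecting it into a calibrated thermal camera image.
import numpy as np

K = np.array([[400.0, 0.0, 160.0],     # assumed thermal camera intrinsics
              [0.0, 400.0, 120.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # assumed rotation: world -> camera frame
t = np.array([0.0, 0.0, 0.0])          # assumed translation (metres)

def map_temperatures(points: np.ndarray, thermal_img: np.ndarray) -> np.ndarray:
    """Return a temperature per 3D point (NaN where the point is not visible).

    points:      (N, 3) array of world coordinates.
    thermal_img: (H, W) array of temperatures in degrees Celsius.
    """
    cam = (R @ points.T).T + t                  # world -> camera frame
    uvw = (K @ cam.T).T                         # camera frame -> homogeneous pixels
    u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
    h, w = thermal_img.shape
    temps = np.full(len(points), np.nan)
    visible = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temps[visible] = thermal_img[v[visible], u[visible]]
    return temps

# Example: three points in front of the camera and a synthetic 240x320 image.
pts = np.array([[0.0, 0.0, 2.0], [0.1, 0.05, 2.5], [5.0, 5.0, 2.0]])
img = np.full((240, 320), 21.5)  # a uniform 21.5 degC scene for the demo
print(map_temperatures(pts, img))  # last point projects outside the image -> NaN
```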