Abstract:
Gesture interfaces are an attractive avenue for human-computer interaction, given the range of expression that people are able to engage when gesturing. Consequently, there is a long-running stream of research into gesture as a means of interaction in the field of human-computer interaction. However, most of this research has focussed on the technical challenges of detecting and responding to people’s movements, or on exploring the interaction possibilities opened up by technical developments. There has been relatively little research on how to actually design gesture interfaces, or on the kinds of understandings of gesture that might be most useful to gesture interface designers. Running parallel to research in gesture interfaces, there is a body of research into human gesture, which would seem a useful source from which to draw knowledge that could inform gesture interface design. However, there is a gap between the ways that ‘gesture’ is conceived of in gesture interface research compared with gesture research. In this dissertation, I explore this gap and reflect on the appropriateness of existing research into human gesturing for the needs of gesture interface design. Through a participatory design process, I designed, prototyped and evaluated a gesture interface for the work of the dental examination. Against this grounding experience, I undertook an analysis of the work of the dental examination, with particular focus on the roles that gestures play in the work, in order to compare and discuss existing gesture research. I take the work of the gesture researcher McNeill as a point of focus, because he is widely cited within the gesture interface research literature. I show that although McNeill’s research into human gesture can be applied to some important aspects of the gestures of dentistry, there remains a range of gestures that McNeill’s work does not deal with directly, yet which play an important role in the work and could usefully be responded to with gesture interface technologies. I discuss some other strands of gesture research, which are less widely cited within gesture interface research, but which offer a broader conception of gesture that would be useful for gesture interface design. Ultimately, I argue that the gap in conceptions of gesture between gesture interface research and gesture research is an outcome of the different interests that each community brings to bear on the research. What gesture interface research requires is attention to the problems of designing gesture interfaces for authentic contexts of use, and assessment of existing theory in light of this.
Abstract:
miRDeep and its variants are widely used to quantify known and novel microRNA (miRNA) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modelled on miRDeep but improves the precision of detecting novel miRNAs by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphic interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or the binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface, which shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to the confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are coded purely in Java. This application can be executed on a normal personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star
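The tool itself is Java-coded, but the core quantification idea, counting aligned small-RNA reads that fall within a pre-miRNA hairpin locus, can be sketched briefly. The following Python/pysam fragment is an illustrative sketch only, not miRDeep*'s actual implementation; the file name, miRNA name and coordinates are hypothetical.

```python
# Illustrative sketch only: miRDeep* itself is a Java application. This
# fragment just shows the general idea of quantifying miRNA expression
# by counting aligned reads over a pre-miRNA hairpin locus. It assumes
# a coordinate-sorted, indexed BAM file; the file name, miRNA name and
# coordinates below are hypothetical examples.
import pysam

def count_hairpin_reads(bam_path, chrom, start, end):
    """Count mapped reads overlapping one pre-miRNA hairpin locus."""
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        return sum(1 for read in bam.fetch(chrom, start, end)
                   if not read.is_unmapped)

# Hypothetical hairpin locus for illustration:
n = count_hairpin_reads("small_rna.bam", "chr13", 50623109, 50623206)
print(f"example hairpin: {n} reads")
```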
Abstract:
Student performance on examinations is influenced by the level of difficulty of the questions. It therefore seems reasonable to propose that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with the assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
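As an illustration of how such a six-measure scheme might be represented programmatically, the sketch below encodes one question's classification as a simple record. The field names and ordinal scales are assumptions for illustration, not the authors' published instrument.

```python
# Minimal sketch of a record for the six complexity measures described
# above. The field names and the ordinal scales are assumptions for
# illustration; they are not the authors' published instrument.
from dataclasses import dataclass

@dataclass
class QuestionComplexity:
    external_domain_refs: int   # 1 (none) .. 3 (heavy reliance)
    explicitness: int           # 1 (fully explicit) .. 3 (implicit)
    linguistic_complexity: int  # 1 (plain) .. 3 (dense)
    conceptual_complexity: int  # 1 (single concept) .. 3 (many)
    code_length: int            # lines of code in question/answer
    bloom_level: int            # 1 (knowledge) .. 6 (evaluation)

    def overall(self) -> float:
        """Naive aggregate of the ordinal measures (illustrative only;
        code_length is on a different scale and is excluded)."""
        return (self.external_domain_refs + self.explicitness +
                self.linguistic_complexity + self.conceptual_complexity +
                self.bloom_level) / 5

# Hypothetical example question:
q = QuestionComplexity(1, 2, 1, 3, 12, 4)
print(q.overall())  # 2.2
```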
Abstract:
Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The average classification accuracy of the participants on each of the three Neo-Piagetian stages was 85%, 71% and 78% respectively. Participants also rated their agreement with the expert classifications, and indicated high agreement (91%, 83% and 91% across the three Neo-Piagetian stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions before and after the tutorial was 29% and 75% respectively. Our key contribution is the demonstration of the feasibility of the Neo-Piagetian approach to classifying assessment materials, by demonstrating that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer.
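The abstract does not reproduce the model equations; for orientation, a generic LDLA-style mixed linear model takes the form below, in which the trait is decomposed into a QTL component at the tested position and a polygenic background. This is a sketch of the standard formulation, not necessarily the authors' exact parameterisation.

```latex
% Generic LDLA-style mixed linear model (illustrative form):
%   y: phenotypes; b: fixed effects; q: random QTL effects at the
%   tested position; u: random polygenic effects; e: residuals.
\[
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}_q \mathbf{q}
           + \mathbf{Z}_u \mathbf{u} + \mathbf{e},
\qquad
\mathbf{q} \sim N(\mathbf{0}, \mathbf{G}_q \sigma_q^2),\;
\mathbf{u} \sim N(\mathbf{0}, \mathbf{A} \sigma_u^2),\;
\mathbf{e} \sim N(\mathbf{0}, \mathbf{I} \sigma_e^2)
\]
% G_q: identity-by-descent matrix at the tested position, built from
% linkage-disequilibrium (population history) plus linkage information;
% A: pedigree-based numerator relationship matrix.
```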
Abstract:
Modified montmorillonite (MMT) was prepared at different surfactant (HDTMA) loadings through ion exchange. The conformational arrangement of the loaded surfactants within the interlayer space of MMT was obtained by computational modelling. The conformational change of the surfactant molecules enhances the visual understanding of the results obtained from characterization methods such as XRD and surface analysis of the organoclays. Batch experiments were carried out for the adsorption of p-chlorophenol (PCP), and different conditions (pH and temperature) were used in order to determine the optimum sorption. For comparison purposes, the experiments were repeated under the same conditions for p-nitrophenol (PNP). Langmuir and Freundlich equations were applied to the adsorption isotherms of PCP and PNP. The Freundlich isotherm model was found to be the best fit for both of the phenolic compounds, suggesting that multilayer adsorption is involved in the adsorption process. In particular, the binding affinity value of PNP was higher than that of PCP, which is attributable to their relative hydrophobicities. The adsorption of the phenolic compounds by organoclays intercalated with highly loaded surfactants was markedly improved, possibly because the intercalated surfactant molecules within the interlayer space contribute to the partition phases, which results in greater adsorption of the organic pollutants.
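For reference, the standard textbook forms of the two isotherms fitted in the study are given below; the parameter values fitted in the study are not reproduced here.

```latex
% Standard forms of the two isotherm equations applied in the study:
\[
\text{Langmuir:}\quad q_e = \frac{q_m K_L C_e}{1 + K_L C_e}
\qquad\qquad
\text{Freundlich:}\quad q_e = K_F C_e^{1/n}
\]
% q_e: amount adsorbed at equilibrium; C_e: equilibrium concentration;
% q_m: monolayer capacity; K_L, K_F, n: fitted constants. Unlike the
% Langmuir model, the Freundlich model does not assume monolayer
% saturation, which is consistent with the multilayer adsorption
% suggested by the better Freundlich fit reported above.
```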
Abstract:
This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
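As a rough illustration of why local temporal interactions permit linear scaling, the sketch below gives a Viterbi-style dynamic program over per-frame candidate states: when the cost couples only adjacent frames, one forward pass suffices. The discretization into candidate states and the cost functions are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative Viterbi-style dynamic program: when the temporal prior
# couples only adjacent frames (local filter interactions), the optimal
# state sequence is found in time linear in the number of frames T.
# The discrete candidate states and costs are illustrative assumptions.
import numpy as np

def viterbi_1d(unary, pairwise):
    """unary: (T, K) per-frame costs over K candidate states;
    pairwise: (K, K) smoothness cost between consecutive frames."""
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):                 # single pass: O(T * K^2)
        total = cost[:, None] + pairwise  # total[i, j]: come from i to j
        back[t] = np.argmin(total, axis=0)
        cost = total.min(axis=0) + unary[t]
    # backtrack the globally optimal sequence of candidate indices
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy usage: 100 frames, 8 candidate poses per frame
rng = np.random.default_rng(0)
states = viterbi_1d(rng.random((100, 8)), rng.random((8, 8)))
```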
Abstract:
The adoption, within the Australian state-based educational system, of a national school curriculum that includes a pre-Year 1 Foundation Year has raised questions about the purpose of this year of early education. A document analysis was undertaken across three Australian states, examining three constructions of the pre-Year 1 class and the tensions arising from varied perspectives. Tensions have emerged over state-based adaptations of the national curriculum, scripted pedagogies for change management, differing ideological perspectives and the positioning of stakeholders. The results indicate that since 2012 there has been a shift in constructions of the pre-Year 1 class towards school-based ideologies, especially in Queensland. Accordingly, the positioning of children, parents and teachers has also changed. These results resonate with previous international indications of the ‘schooling’ of early education. The experiences of Australian early adopters of the curriculum offer insights for other jurisdictions in Australia and internationally, and raise questions about future development in early years education.
Abstract:
With a monolayer honeycomb lattice of sp2-hybridized carbon atoms, graphene has demonstrated exceptional electrical, mechanical and thermal properties. One of its promising applications is to create graphene-polymer nanocomposites with tailored mechanical and physical properties. In general, the mechanical properties of the graphene nanofiller as well as the graphene-polymer interface govern the overall mechanical performance of graphene-polymer nanocomposites. However, the strengthening and toughening mechanisms in these novel nanocomposites have not been well understood. In this work, the deformation and failure of the graphene sheet and the graphene-polymer interface were investigated using molecular dynamics (MD) simulations. The effect of structural defects on the mechanical properties of graphene and the graphene-polymer interface was investigated as well. The results showed that structural defects in graphene (e.g. Stone-Wales defects and multi-vacancy defects) can significantly degrade the fracture strength of graphene, but the nanocomposite may still make full use of the remaining strength of the defective graphene, maintaining the interfacial strength and the overall mechanical performance of graphene-polymer nanocomposites.
Abstract:
Trauma education, both formal and informal, is an essential component of professional development for both nursing and medical staff working in the Emergency Department. Ideally, this education will be multidisciplinary, so that the day-to-day aspects of emergency care, such as teamwork and crew resource management, are maintained.
Abstract:
User interfaces for source code editing are a crucial component in any software development environment, and in many editors visual annotations (overlaid on the textual source code) are used to provide important contextual information to the programmer. This paper focuses on the real-time programming activity of ‘cyberphysical’ programming, and considers the type of visual annotations which may be helpful in this programming context.
Abstract:
Balancing the competing interests of autonomy and protection of individuals is an escalating challenge confronting an ageing Australian society. Legal and medical professionals are increasingly being asked to determine whether individuals are legally competent/capable of making their own testamentary and substitute decisions, that is, financial and/or personal/health care decisions. No consistent and transparent competency/capacity assessment paradigm currently exists in Australia. Consequently, assessments are currently being undertaken on an ad hoc basis, which is concerning as Australia’s population ages and issues of competency/capacity increase. The absence of nationally accepted competency/capacity assessment guidelines and supporting principles results in legal and medical professionals involved with competency/capacity assessment implementing individual processes tailored to their own abilities. Legal and medical approaches differ both between and within the professions. The terminology used also varies. The legal practitioner is concerned with whether the individual has the legal ability to make the decision. A medical practitioner assesses fluctuations in physical and mental abilities. The problem is that the terms competency and capacity are used interchangeably, resulting in confusion about what is actually being assessed. The terminological and methodological differences subsequently create miscommunication and misunderstanding between the professions. Consequently, it is not necessarily a simple solution for a legal professional to seek the opinion of a medical practitioner when assessing testamentary and/or substitute decision-making competency/capacity. This research investigates the effects of the current inadequate testamentary and substitute decision-making assessment paradigm and whether there is a more satisfactory approach. This exploration is undertaken within a framework of therapeutic jurisprudence, which promotes principles fundamentally important in this context. Empirical research has been undertaken, first, to explore the effects of the current process with practising legal and medical professionals; and second, to determine whether miscommunication and misunderstanding actually exist between the professions, such that they give rise to a tense relationship that is not conducive to satisfactory competency/capacity assessments. The necessity of reviewing the adequacy of the existing competency/capacity assessment methodology in the testamentary and substitute decision-making domain is demonstrated, and recommendations are made for the development of a more suitable process.
Abstract:
Background: Standard operating procedures state that police officers should not drive while interacting with their mobile data terminal (MDT), which provides in-vehicle information essential to police work. Such interactions do, however, occur in practice and represent a potential source of driver distraction. The MDT comprises visual output with manual input via touch screen and keyboard. This study investigated the potential for alternative input and output methods to mitigate driver distraction, with a specific focus on eye movements. Method: Nineteen experienced drivers of police vehicles (one female) from the NSW Police Force completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence plate search using an emulated MDT. Three different interface methods were examined: Visual-Manual, Visual-Voice, and Audio-Voice (“Visual” and “Audio” = output modality; “Manual” and “Voice” = input modality). During each drive, eye movements were recorded using FaceLAB™ (Seeing Machines Ltd, Canberra, ACT), and gaze direction and glances on the MDT were assessed. Results: The Visual-Manual and Visual-Voice interfaces resulted in significantly more glances towards the MDT than Audio-Voice or Baseline. For longer-duration glances (>2s and 1-2s), the Visual-Manual interface resulted in significantly more fixations than Baseline or Audio-Voice. Short-duration glances (<1s) were significantly more frequent for both Visual-Voice and Visual-Manual compared with Baseline and Audio-Voice. There were no significant differences between Baseline and Audio-Voice. Conclusion: An Audio-Voice interface has the greatest potential to decrease visual distraction to police drivers. However, it is acknowledged that an audio output may have limitations for information presentation compared with visual output. The Visual-Voice interface offers an environment where the capacity to present information is sustained, whilst distraction to the driver is reduced (compared with Visual-Manual) by enabling adaptation of fixation behaviour.
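As an illustration of the glance-duration banding used in the analysis above (<1s, 1-2s, >2s), a minimal sketch is given below; the input format (a list of glance durations in seconds for one drive) is an assumption.

```python
# Minimal sketch of binning MDT glance durations into the three bands
# reported above (<1s, 1-2s, >2s). The input format is an assumption;
# the example durations are hypothetical.
from collections import Counter

def bin_glances(durations_s):
    """Classify each glance at the MDT into a duration band."""
    def band(d):
        if d < 1.0:
            return "<1s"
        elif d <= 2.0:
            return "1-2s"
        return ">2s"
    return Counter(band(d) for d in durations_s)

# Hypothetical glance durations for one drive:
print(bin_glances([0.4, 0.8, 1.3, 2.6, 0.9, 1.9]))
# Counter({'<1s': 3, '1-2s': 2, '>2s': 1})
```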
Abstract:
Environmental degradation has become increasingly aggressive in recent years due to rapid urban development and other land use pressures. This chapter looks at BioCondition, a newly developed vegetation assessment framework from the Queensland Department of Environment and Resource Management (DERM), and how mobile technology can assist beginners in conducting the survey. Even though BioCondition is designed to be simple, it is still fairly inaccessible to beginners due to its complex, time-consuming, and repetitive nature. A Windows Phone mobile application, the BioCondition Assessment Tool, was developed to provide on-site guidance to beginners and to document the assessment process for future revision and comparison. The application was tested in an experiment at Samford Conservation Park with 12 ecology students from Queensland University of Technology.
Abstract:
Police in-vehicle systems include a mobile data terminal (MDT) with visual output and manual input via touch screen and keyboard. This study investigated the potential for voice-based input and output modalities to reduce the subjective workload of police officers while driving. Nineteen experienced drivers of police vehicles (one female) from the New South Wales (NSW) Police completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence number search using an emulated MDT. Three different interface output-input modalities were examined: Visual-Manual, Visual-Voice, and Audio-Voice. Following each drive, participants rated their subjective workload using the NASA Raw Task Load Index (RTLX) and completed questions on acceptability. A questionnaire on interface preferences was completed by participants at the end of their session. Engaging in secondary tasks while driving significantly increased subjective workload. The Visual-Manual interface resulted in higher time demand than either of the voice-based interfaces and greater physical demand than the Audio-Voice interface. The Visual-Voice and Audio-Voice interfaces were rated easier to use and more useful than the Visual-Manual interface, although they were not significantly different from each other. These findings largely echoed those derived from the analysis of the objective driving performance data. It is acknowledged that under standard procedures, officers should not drive while performing tasks concurrently with certain in-vehicle policing systems; however, in practice this sometimes occurs. Taking action now to develop voice-based technology for police in-vehicle systems has the potential to enable safer and more efficient vehicle-based police work.
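The Raw TLX score is conventionally computed as the unweighted mean of the six NASA-TLX subscale ratings, omitting the full TLX's pairwise weighting step; a minimal sketch follows, with hypothetical example ratings.

```python
# The NASA Raw Task Load Index (RTLX) is conventionally the unweighted
# mean of the six NASA-TLX subscale ratings (0-100 each); the weighted
# pairwise-comparison step of the full TLX is omitted. The example
# ratings below are hypothetical.
def rtlx(mental, physical, temporal, performance, effort, frustration):
    """Unweighted mean of the six NASA-TLX subscale ratings."""
    return (mental + physical + temporal +
            performance + effort + frustration) / 6

# Hypothetical ratings for one participant after one drive:
print(rtlx(70, 45, 80, 40, 65, 55))  # ~59.2
```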