289 results for graphical user interface
Abstract:
The intersection of the Social Web and the Semantic Web has put folksonomy in the spotlight for its potential to overcome the knowledge acquisition bottleneck and to provide insight into the "wisdom of the crowds". Folksonomy, which emerges from collaborative tagging activities, offers insight into users' understanding of Web resources, which can be useful for searching and organizing purposes. However, collaborative tagging vocabularies pose challenges: since tags are freely chosen by users, they may exhibit synonymy and polysemy problems. To overcome these challenges and realize the potential of folksonomy as emergent semantics, we propose to consolidate the diverse vocabulary into consolidated entities and concepts. We propose to extract a tag ontology through an ontology learning process to represent the semantics of a tagging community. This paper presents a novel approach to learning the ontology based on the widely used lexical database WordNet. We present personalization strategies to disambiguate the semantics of tags by combining the opinions of WordNet lexicographers with users' tagging behavior. We provide empirical evaluations by using the semantic information contained in the ontology in a tag recommendation experiment. The results show that using the semantic relationships in the ontology improves the accuracy of the tag recommender.
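The disambiguation idea can be sketched with a toy, self-contained stand-in for WordNet (the `MINI_WORDNET` dictionary, the sense names and the lexicographer categories below are illustrative inventions, not the paper's actual data or algorithm): each candidate sense of an ambiguous tag is scored by how many co-occurring tags share its lexicographer category, so the tagging context votes on the intended meaning.

```python
# Toy sense disambiguation: choose the sense of an ambiguous tag whose
# lexicographer category best matches the senses of co-occurring tags.
# MINI_WORDNET is a tiny hand-made stand-in for the real WordNet database.
MINI_WORDNET = {
    "apple":  [("apple.n.01", "noun.food"), ("apple.n.02", "noun.plant")],
    "orange": [("orange.n.01", "noun.food")],
    "juice":  [("juice.n.01", "noun.food")],
    "tree":   [("tree.n.01", "noun.plant")],
}

def disambiguate(tag, cooccurring):
    # Score each candidate sense by counting co-occurring tags that have
    # at least one sense in the same lexicographer category.
    best, best_score = None, -1
    for sense, category in MINI_WORDNET[tag]:
        score = sum(
            any(cat == category for _, cat in MINI_WORDNET.get(other, []))
            for other in cooccurring
        )
        if score > best_score:
            best, best_score = sense, score
    return best

print(disambiguate("apple", ["orange", "juice"]))  # apple.n.01 (food sense)
print(disambiguate("apple", ["tree"]))             # apple.n.02 (plant sense)
```

In a real system the stand-in dictionary would be replaced by WordNet lookups (e.g. via NLTK), and the vote could be weighted by the users' tagging behavior as the paper proposes.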
Abstract:
Road traffic crashes have emerged as a major health problem around the world. Road crash fatalities and injuries have been reduced significantly in developed countries, but they are still an issue in low- and middle-income countries. The World Health Organization (WHO, 2009) estimates that the death toll from road crashes in low- and middle-income nations is more than 1 million people per year, or about 90% of the global road toll, even though these countries account for only 48% of the world's vehicles. Furthermore, it is estimated that approximately 265,000 people die every year in road crashes in South Asian countries, and Pakistan stands out with approximately 41,494 deaths per year. Pakistan has the highest rate of fatalities per 100,000 population in the region, and its road crash fatality rate of 25.3 per 100,000 population is more than three times that of Australia. High numbers of road crashes not only cause pain and suffering to the population at large, but are also a serious drain on the country's economy, which Pakistan can ill afford. Most studies identify human factors as the main set of contributing factors to road crashes, well ahead of road environment and vehicle factors. In developing countries especially, attention and resources are required in order to improve things such as vehicle roadworthiness and poor road infrastructure. However, attention to human factors is also critical. Human factors which contribute to crashes include high-risk behaviours like speeding and drink driving, and neglect of protective behaviours such as helmet wearing and seat belt wearing. Much research has been devoted to the attitudes, beliefs and perceptions which contribute to these behaviours and omissions, in order to develop interventions aimed at increasing safer road use behaviours and thereby reducing crashes.
However, less progress has been made in addressing human factors contributing to crashes in developing countries as compared to the many improvements in road environments and vehicle standards, and this is especially true of fatalistic beliefs and behaviours. This is a significant omission, since in different cultures in developing countries there are strong worldviews in which predestination persists as a central idea, i.e. that one's life (and death) and other events have been mapped out and are predetermined. Fatalism refers to a particular way in which people regard the events that occur in their lives, usually expressed as a belief that an individual does not have personal control over circumstances and that their lives are determined through a divine or powerful external agency (Hazen & Ehiri, 2006). These views are at odds with the dominant themes of modern health promotion movements, and present significant challenges for health advocates who aim to avert road crashes and diminish their consequences. The limited literature on fatalism reveals that it is not a simple concept, with religion, culture, superstition, experience, education and degree of perceived control of one's life all being implicated in accounts of fatalism. One distinction in the literature that seems promising is the distinction between empirical and theological fatalism, although there are areas of uncertainty about how well-defined the distinction between these types of fatalism is. Research into road safety in Pakistan is scarce, as is the case for other South Asian countries. From the review of the literature conducted, it is clear that the descriptions given of the different belief systems in developing countries including Pakistan are not entirely helpful for health promotion purposes and that further research is warranted on the influence of fatalism, superstition and other related beliefs in road safety. 
Based on the information available, a conceptual framework is developed as a means of structuring and focusing the research and analysis. The framework is focused on the influence of fatalism, superstition, religion and culture on beliefs about crashes and road user behaviour. Accordingly, this research aims to provide an understanding of the operation of fatalism and related beliefs in Pakistan to assist in the development and implementation of effective and culturally appropriate interventions. The research examines the influence of fatalism, superstition, religious and cultural beliefs on risky road use in Pakistan and is guided by three research questions: 1. What are the perceptions of road crash causation in Pakistan, in particular the role of fatalism, superstition, religious and cultural beliefs? 2. How do fatalism, superstition, and religious and cultural beliefs influence road user behaviour in Pakistan? 3. Do fatalism, superstition, and religious and cultural beliefs work as obstacles to road safety interventions in Pakistan? To address these questions, a qualitative research methodology was developed. The research focused on gathering data through individual in-depth interviewing using a semi-structured interview format. A sample of 30 participants was interviewed in Pakistan in the cities of Lahore, Rawalpindi and Islamabad. The participants included policy makers (with responsibility for traffic law), experienced police officers, religious orators, professional drivers (truck, bus and taxi) and general drivers selected through a combination of purposive, criterion and snowball sampling. The transcripts were translated from Urdu and analysed using a thematic analysis approach guided by the conceptual framework.
The findings were divided into four areas: attribution of crash causation to fatalism; attribution of road crashes to beliefs about superstition and malicious acts; beliefs about road crash causation linked to popular concepts of religion; and implications for behaviour, safety and enforcement. Fatalism was almost universally evident, and expressed in a number of ways. Fate was used to rationalise fatal crashes using the argument that the people killed were destined to die that day, one way or another. Related to this was the sense of either not being fully in control of the vehicle, or not needing to take safety precautions, because crashes were predestined anyway. A variety of superstitious-based crash attributions and coping methods to deal with road crashes were also found, such as belief in the role of the evil eye in contributing to road crashes and the use of black magic by rivals or enemies as a crash cause. There were also beliefs related to popular conceptions of religion, such as the role of crashes as a test of life or a source of martyrdom. However, superstitions did not appear to be an alternative to religious beliefs. Fate appeared as the 'default attribution' for a crash when all other explanations failed to account for the incident. This pervasive belief was utilised to justify risky road use behaviour and to resist messages about preventive measures. There was a strong religious underpinning to the statement of fatalistic beliefs (this reflects popular conceptions of Islam rather than scholarly interpretations), but also an overlap with superstitious and other culturally and religious-based beliefs which have longer-standing roots in Pakistani culture. A particular issue which is explored in more detail is the way in which these beliefs and their interpretation within Pakistani society contributed to poor police reporting of crashes. 
The pervasive nature of fatalistic beliefs in Pakistan affects road user behaviour by supporting continued risk-taking behaviour on the road, and by interfering with public health messages about behaviours which would reduce the risk of traffic crashes. The widespread influence of these beliefs on the ways that people respond to traffic crashes and the death of family members contributes to low crash reporting rates and to a system which appears difficult to change. Fate also appeared to be a major contributing factor to non-reporting of road crashes. There also appeared to be a relationship between police enforcement and (lack of) awareness of road rules. It also appears likely that beliefs can influence police work, especially in the case of road crash investigation and the development of strategies. It is anticipated that the findings could be used as a blueprint for the design of interventions aimed at influencing broad-spectrum health attitudes and practices among communities where fatalism is prevalent. The findings have also identified aspects of beliefs that have complex social implications when designing and piloting driver intervention strategies. By understanding attitudes and behaviours related to fatalism, superstition and other related concepts, it should be possible to improve the education of general road users, such that they are less likely to attribute road crashes to chance, fate, or superstition. This study also underscores the importance of understanding this issue among the higher echelons of society (e.g., policy makers, senior police officers), as their role is vital in dispelling road users' misconceptions about the risks of road crashes. The promotion of an evidence-based, scientific approach to road user behaviour and road safety is recommended, along with improved professional education for police and policy makers.
Abstract:
This article investigates the ethnographic methodological question of how the researcher observes objectively while being part of the problem they are observing. It uses a case study of ABC Pool to argue for a cooperative approach that combines the role of the ethnographer with that of a community manager who assists in constructing a true representation of the researched environment. By using reflexivity as a research tool, the ethnographer engages in a process of self-checking their personal presumptions and prejudices, and of strengthening the constructed representation of the researched environment. This article also suggests combining management and expertise research from the social sciences with ethnography, to understand and engage with the research field participants more intimately - which, ultimately, assists in gathering and analysing richer qualitative data.
Abstract:
Our paper approaches Twitter through the lens of "platform politics" (Gillespie, 2010), focusing in particular on controversies around user data access, ownership, and control. We characterise different actors in the Twitter data ecosystem: private and institutional end users of Twitter, commercial data resellers such as Gnip and DataSift, data scientists, and finally Twitter, Inc. itself; and describe their conflicting interests. We furthermore study Twitter's Terms of Service and application programming interface (API) as material instantiations of regulatory instruments used by the platform provider, and argue for greater promotion of data rights and literacy to strengthen the position of end users.
Abstract:
Gesture interfaces are an attractive avenue for human-computer interaction, given the range of expression that people are able to engage when gesturing. Consequently, there is a long-running stream of research into gesture as a means of interaction in the field of human-computer interaction. However, most of this research has focussed on the technical challenges of detecting and responding to people's movements, or on exploring the interaction possibilities opened up by technical developments. There has been relatively little research on how to actually design gesture interfaces, or on the kinds of understandings of gesture that might be most useful to gesture interface designers. Running parallel to research in gesture interfaces, there is a body of research into human gesture, which would seem a useful source from which to draw knowledge that could inform gesture interface design. However, there is a gap between the ways that 'gesture' is conceived of in gesture interface research compared to gesture research. In this dissertation, I explore this gap and reflect on the appropriateness of existing research into human gesturing for the needs of gesture interface design. Through a participatory design process, I designed, prototyped and evaluated a gesture interface for the work of the dental examination. Against this grounding experience, I undertook an analysis of the work of the dental examination, with particular focus on the roles that gestures play in the work, in order to compare and discuss existing gesture research. I take the work of the gesture researcher McNeill as a point of focus, because he is widely cited within the gesture interface research literature. I show that although McNeill's research into human gesture can be applied to some important aspects of the gestures of dentistry, there remains a range of gestures that McNeill's work does not deal with directly, yet which play an important role in the work and could usefully be responded to with gesture interface technologies.
I discuss some other strands of gesture research, which are less widely cited within gesture interface research, but offer a broader conception of gesture that would be useful for gesture interface design. Ultimately, I argue that the gap in conceptions of gesture between gesture interface research and gesture research is an outcome of the different interests that each community brings to bear on the research. What gesture interface research requires is attention to the problems of designing gesture interfaces for authentic contexts of use, and assessment of existing theory in light of this.
Abstract:
This paper describes the use of property graphs for mapping data between AEC software tools, which are not linked by common data formats and/or other interoperability measures. The intention of introducing this in practice, education and research is to facilitate the use of diverse, non-integrated design and analysis applications by a variety of users who need to create customised digital workflows, including those who are not expert programmers. Data model types are examined by way of supporting the choice of directed, attributed, multi-relational graphs for such data transformation tasks. A brief exemplar design scenario is also presented to illustrate the concepts and methods proposed, and conclusions are drawn regarding the feasibility of this approach and directions for further research.
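The chosen data model - a directed, attributed, multi-relational graph - can be sketched in a few lines. This is a minimal illustration only; the `PropertyGraph` class, the node identifiers and the design-to-analysis mapping below are hypothetical, not drawn from the paper:

```python
from collections import defaultdict

class PropertyGraph:
    """Minimal directed, attributed, multi-relational graph sketch."""
    def __init__(self):
        self.nodes = {}                 # node id -> attribute dict
        self.edges = defaultdict(list)  # source id -> [(label, target, attrs)]

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, source, label, target, **attrs):
        # Multi-relational: several labelled edges may link the same pair.
        self.edges[source].append((label, target, attrs))

    def neighbours(self, source, label):
        return [t for (l, t, _) in self.edges[source] if l == label]

# Hypothetical mapping task: a wall element from a design tool linked to
# its representation in a separate analysis tool, with the mapping rule
# recorded as an edge attribute.
g = PropertyGraph()
g.add_node("wall-1", tool="design", height_mm=2700)
g.add_node("surface-7", tool="analysis", u_value=0.3)
g.add_edge("wall-1", "maps_to", "surface-7", rule="wall->surface")

print(g.neighbours("wall-1", "maps_to"))  # ['surface-7']
```

A graph database such as Neo4j provides the same model at production scale; the point of the sketch is that node attributes carry tool-specific data while labelled edges carry the cross-tool mapping.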
Abstract:
Phenomenography is a research approach devised to allow the investigation of varying ways in which people experience aspects of their world. Whilst growing attention is being paid to interpretative research in LIS, it is not always clear how the outcomes of such research can be used in practice. This article explores the potential contribution of phenomenography in advancing the application of phenomenological and hermeneutic frameworks to LIS theory, research and practice. In phenomenography we find a research tool which, in revealing variation, uncovers everyday understandings of phenomena and provides outcomes which are readily applicable to professional practice. The outcomes may be used in human-computer interface design, enhancement, implementation and training; in the design and evaluation of services; and in education and training for both end users and information professionals. A proposed research territory for phenomenography in LIS includes investigating qualitative variation in the experienced meaning of: 1) information and its role in society; 2) LIS concepts and principles; 3) LIS processes; and 4) LIS elements.
Abstract:
The rapid growth of services available on the Internet and exploited through ever globalizing business networks poses new challenges for service interoperability. New services, from consumer “apps”, enterprise suites, platform and infrastructure resources, are vying for demand with quickly evolving and overlapping capabilities, and shorter cycles of extending service access from user interfaces to software interfaces. Services, drawn from a wider global setting, are subject to greater change and heterogeneity, demanding new requirements for structural and behavioral interface adaptation. In this paper, we analyze service interoperability scenarios in global business networks, and propose new patterns for service interactions, above those proposed over the last 10 years through the development of Web service standards and process choreography languages. By contrast, we reduce assumptions of design-time knowledge required to adapt services, giving way to run-time mismatch resolutions, extend the focus from bilateral to multilateral messaging interactions, and propose declarative ways in which services and interactions take part in long-running conversations via the explicit use of state.
Abstract:
This paper describes an architecture for robotic telepresence and teleoperation based on the well-known tools ROS and Skype. We discuss how Skype can be used as a framework for robotic communication and can be integrated into a ROS/Linux framework to allow a remote user not only to interact with people near the robot, but also to view maps, sensory data and robot pose, and to issue commands to the robot's navigation stack. This allows the remote user to exploit the robot's autonomy, providing a much more convenient navigation interface than simple remote joysticking.
Abstract:
We introduce a lightweight biometric solution for user authentication over networks using online handwritten signatures. The algorithm proposed is based on a modified Hausdorff distance and has favorable characteristics such as low computational cost and minimal training requirements. Furthermore, we investigate an information theoretic model for capacity and performance analysis for biometric authentication which brings additional theoretical insights to the problem. A fully functional proof-of-concept prototype that relies on commonly available off-the-shelf hardware is developed as a client-server system that supports Web services. Initial experimental results show that the algorithm performs well despite its low computational requirements and is resilient against over-the-shoulder attacks.
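The modified Hausdorff distance at the core of such an algorithm can be sketched as below. This follows the common Dubuisson-Jain formulation (directed mean-of-minima distances, combined with a max); the paper's specific modification may differ, and the pen-sample point sets are invented toy data:

```python
import math

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between two point sets (Dubuisson-Jain form)."""
    def directed(src, dst):
        # Mean, over points in src, of the distance to the closest point in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return max(directed(a, b), directed(b, a))

# Toy signatures as sequences of (x, y) pen samples.
sig_enrolled = [(0, 0), (1, 1), (2, 2)]
sig_genuine  = [(0, 0), (1, 1), (2, 2.1)]   # small deviation from enrolled
sig_forgery  = [(0, 2), (1, 0), (3, 3)]     # larger deviation

# A genuine attempt should score closer to the enrolled template than a forgery.
print(modified_hausdorff(sig_enrolled, sig_genuine) <
      modified_hausdorff(sig_enrolled, sig_forgery))  # True
```

The low computational cost claimed in the abstract is visible here: the distance needs only pairwise point distances, no training beyond storing the enrolled template.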
Abstract:
Many older people have difficulties using modern consumer products due to increased product complexity, both in terms of functionality and interface design. Previous research has shown that older people have more difficulty in using complex devices intuitively than younger people. Furthermore, increased life expectancy and a falling birth rate have been catalysts for changes in world demographics over the past two decades. This trend also suggests a proportional increase of older people in the workforce. This realisation has led to research on the effective use of technology by older populations, in an effort to engage them more productively and to assist them in leading independent lives. Ironically, not enough attention has been paid to the development of interaction design strategies that would actually enable older users to better exploit new technologies. Previous research suggests that if products are designed to reflect people's prior knowledge, they will appear intuitive to use. Since intuitive interfaces utilise domain-specific prior knowledge of users, they require minimal learning for effective interaction. However, older people are very diverse in their capabilities and domain-specific prior knowledge. In addition, ageing also slows down the process of acquiring new knowledge. Keeping these suggestions and limitations in view, the aim of this study was to investigate possible approaches to developing interfaces that facilitate intuitive use by older people. In this quest to develop intuitive interfaces for older people, two experiments were conducted that systematically investigated redundancy (the use of both text and icons) in interface design, complexity of interface structure (nested versus flat), and personal user factors such as cognitive abilities, perceived self-efficacy and technology anxiety. All of these factors could interfere with intuitive use.
The results from the first experiment suggest that, contrary to what was hypothesised, older people (65+ years) completed the tasks on the text-only interface design faster than on the redundant interface design. The outcome of the second experiment showed that, as expected, older people took more time on a nested interface. However, they did not make significantly more errors compared with younger age groups. Contrary to what was expected, older age groups also did better under anxious conditions. The findings of this study also suggest that older age groups are more heterogeneous in their capabilities, and that their intuitive use of contemporary technological devices is mediated more by domain-specific technology prior knowledge and by their cognitive abilities than by chronological age. This makes it extremely difficult to develop product interfaces that are entirely intuitive to use. However, keeping the cognitive limitations of older people in view when interfaces are developed, and using simple text-based interfaces with a flat interface structure, would help them intuitively learn and use complex technological products successfully during early encounters with a product. These findings indicate that it might be more pragmatic if interfaces are designed for intuitive learning rather than for intuitive use. Based on this research and the existing literature, a model for adaptable interface design as a strategy for developing intuitively learnable product interfaces was proposed. An adaptable interface can initially use a simple text-only interface to help older users learn and successfully use the new system. Over time, this can be progressively changed to a symbols-based nested interface for more efficient and intuitive use.
Abstract:
We present a virtual test bed for network security evaluation in mid-scale telecommunication networks. Migration from simulation scenarios towards the test bed is supported, enabling researchers to evaluate experiments in a more realistic environment. We provide a comprehensive interface to manage, run and evaluate experiments. On the basis of a concrete example, we show how the proposed test bed can be utilised.
Abstract:
The IEEE Wireless LAN standard has been a true success story by enabling convenient, efficient and low-cost access to broadband networks for both private and professional use. However, the increasing density and uncoordinated operation of wireless access points, combined with constantly growing traffic demands have started hurting the users' quality of experience. On the other hand, the emerging ubiquity of wireless access has placed it at the center of attention for network attacks, which not only raises users' concerns on security but also indirectly affects connection quality due to proactive measures against security attacks. In this work, we introduce an integrated solution to congestion avoidance and attack mitigation problems through cooperation among wireless access points. The proposed solution implements a Partially Observable Markov Decision Process (POMDP) as an intelligent distributed control system. By successfully differentiating resource hampering attacks from overload cases, the control system takes an appropriate action in each detected anomaly case without disturbing the quality of service for end users. The proposed solution is fully implemented on a small-scale testbed, on which we present our observations and demonstrate the effectiveness of the system to detect and alleviate both attack and congestion situations.
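The core of such a POMDP controller is a Bayesian belief update over hidden network states; the action choice then depends on the belief, which is how attack and overload cases can be treated differently. The sketch below illustrates only the belief-update step. The state names, observation features and likelihood numbers are invented for illustration, and a full POMDP would also include a transition model, actions and a policy:

```python
# Toy belief update over hidden network states, as in a POMDP controller.
STATES = ["normal", "congestion", "attack"]

# P(observation | state) for observed features; numbers are illustrative only.
OBS_LIKELIHOOD = {
    "high_load":        {"normal": 0.10, "congestion": 0.70, "attack": 0.60},
    "malformed_frames": {"normal": 0.01, "congestion": 0.05, "attack": 0.50},
}

def update_belief(belief, observation):
    # Bayes rule: b'(s) is proportional to P(o | s) * b(s), then normalise.
    unnorm = {s: OBS_LIKELIHOOD[observation][s] * belief[s] for s in STATES}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

belief = {s: 1 / 3 for s in STATES}          # start uninformed
belief = update_belief(belief, "high_load")  # ambiguous: congestion or attack?
belief = update_belief(belief, "malformed_frames")  # now attack-specific evidence
print(max(belief, key=belief.get))  # attack
```

The example shows the differentiation the abstract describes: high channel load alone is ambiguous, but a second, attack-specific observation shifts the belief decisively, so the controller can react without penalising ordinary overload.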
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement of expensive devices and the risk of stolen bio-templates. Moreover, in existing approaches authentication is usually performed only once, at the start of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising solution for hardening the authentication process, adding a further category: iii-2) how someone behaves. In recent years, various keystroke-dynamics behaviour-based approaches have been published that are able to authenticate humans based on their typing behaviour. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include scalability, high response times and high error rates. The aim of this work is the development of behaviour-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments, in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
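A free-text keystroke-dynamics check of the kind described can be sketched as a comparison of per-digraph mean inter-key latencies between an enrolled profile and a live session sample. The digraphs, timings and threshold below are invented toy values; real systems use richer features and per-user tuned thresholds:

```python
# Toy free-text keystroke verification: compare average inter-key latencies
# (per digraph, i.e. two-letter key sequence) between an enrolled profile
# and a live session sample.
from statistics import mean

def profile(latencies):
    """latencies: {digraph: [ms, ms, ...]} -> {digraph: mean ms}"""
    return {d: mean(v) for d, v in latencies.items()}

def distance(ref, sample):
    # Mean absolute latency difference over digraphs present in both profiles;
    # free text means we can only compare whatever digraphs happen to overlap.
    shared = ref.keys() & sample.keys()
    if not shared:
        return float("inf")
    return mean(abs(ref[d] - sample[d]) for d in shared)

enrolled  = profile({"th": [110, 120], "he": [95, 105], "in": [130, 128]})
same_user = profile({"th": [115], "he": [100]})
impostor  = profile({"th": [190], "he": [160]})

THRESHOLD = 20  # ms; in practice tuned per user to balance error rates
print(distance(enrolled, same_user) < THRESHOLD)  # True  -> accept
print(distance(enrolled, impostor) < THRESHOLD)   # False -> reject
```

Because the comparison runs on whatever the user happens to type, it can repeat continuously in the background, which is the transparent, continuous verification the abstract targets.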
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have proven the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.