911 results for pacs: geography and cartography computing
Unpacking user relations in an emerging ubiquitous computing environment: introducing the bystander
Abstract:
The move towards technological ubiquity is allowing a more idiosyncratic and dynamic working environment to emerge that may result in the restructuring of information communication technologies, and changes in their use through different user groups' actions. Taking a ‘practice’ lens to human agency, we explore the evolving roles of, and relationships between, these user groups and their appropriation of emergent technologies by drawing upon Lamb and Kling's social actor framework. To illustrate our argument, we draw upon a study of a UK Fire Brigade that has introduced a variety of technologies in an attempt to move towards embracing mobile and ubiquitous computing. Our analysis of the enactment of such technologies reveals that Bystanders, a group yet to be taken as the central unit of analysis in information systems research, or considered in practice, are emerging as important actors. The research implications of our work relate to the need to further consider Bystanders in deployments other than those that are mobile and ubiquitous. For practice, we suggest that Bystanders require consideration in the systems development life cycle, particularly in terms of design and education in processes of use.
Abstract:
This thesis developed and evaluated strategies for social and ubiquitous computing designs that can enhance connected learning and networking opportunities for users in coworking spaces. Based on a social and a technical design intervention deployed at the State Library of Queensland, the research findings illustrate the potential of combining social, spatial and digital affordances in order to nourish peer-to-peer learning, creativity, inspiration, and innovation. The study proposes a hybrid notion of placemaking as a new way of thinking about the design of coworking and interactive learning spaces.
Abstract:
Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, weddings, etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV or movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, they may be more ‘realistic’ than lab-based data currently used by most researchers. Or are they? One way of testing this is to compare feature distributions and FER performance. This paper describes a database that has been collected from television broadcasts and the World Wide Web containing a range of environmental and facial variations expected in real conditions and uses it to answer this question. A fully automatic system that uses a fusion-based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
Abstract:
This is the first volume in a book series examining how organizations in the creative industries respond to disruptive change and how they themselves generate business innovations. The aspiration of this book series is to understand some of the common forces behind the disruptions occurring in so many creative industries today and to identify the most promising strategies and responses by organizations to create new value propositions, business models and business practices that can enable these industry participants to cope with and eventually thrive as their industries and sectors are transformed. The chapters included in the volume examine the processes of disruption and transformation due to the technology of the Internet, social forces driven by social media, the development of new portable digital devices with greater capabilities and smaller size, the decreasing costs of new information, and the creation of new business models and forms of intellectual property ownership rights for a digitized industry. The context for this volume is the publishing industries, understood as the industries for the publishing of fiction and non-fiction books, academic literature, consumer as well as trade magazines, and daily newspapers. This volume includes chapters by an internationally diverse array of media scholars whose chapters provide insights into these phenomena in Eastern Europe, Finland, France, Germany, Norway, Portugal, Russia, and the United States, using different methodological frameworks including, but not limited to, surveys, in-depth interviews and multiple-case studies. One gap that this book series seeks to fill is that between the study of business innovation and disruption by innovation scholars largely based in business school settings and similar studies by scholarly experts from non-business school disciplines, including the broader social sciences (e.g. sociology, political science, economic geography) and creative industry based professional school disciplines (e.g. architecture, communications, design, film making, journalism, media studies, performing arts, photography and television). Future volumes of this book series will examine disruption and business innovation in the film, video and photography sectors (volume two), the music sector (volume three) and interactive entertainment (volume four), with subsequent volumes focusing on the most relevant developments in creative industry business innovation and disruption that emerge.
Abstract:
This study investigated the impact of digital networked social interactions on the design of public urban spaces. Urban informatics, social media, ubiquitous computing, and mobile technology offer great potential to urban planning, as tools of communication, community engagement, and placemaking. The study considers the function of public spaces as 'third places,' that is, social places that are familiar, comfortable, and meaningful for everyday life outside the home and work. Libraries were chosen as the study's focus. The study produced findings and insights at the intersection of urban planning (place), cultural geography and urban sociology (people), and information communication technology (technology) – the triad of urban informatics.
Abstract:
"Geography education is indispensable to the development of responsible and active citizens in the present and future world" is one of the main statements in the International Charter on Geographical Education. This charter was edited in 1992 by Haubrich, chair of the Commission on Geographical Education of the International Geographical Union (IGU). Twenty years later this statement is still true. Geography educators all over the world are looking for ways to talk with young people about their image of their world and to help them develop their knowledge, skills and ideas about the complex world we live in. However, different ideas exist about what geography we should learn and teach, and how. The Commission on Geographical Education of the International Geographical Union works to improve the quality and position of geography education worldwide by promoting the dissemination of good practices and research results in the field of geography education.
Abstract:
There is no doubt that place branding is a powerful and ubiquitous practice deployed around the globe. Parallel to its acceptance and development as a distinct discipline is an understanding that place branding as responsible practice offers the means to achieve widespread economic, social and cultural benefits. Drawing on work around place and identity in cultural geography and cultural studies, this paper engages critically with this vision. Specifically, it challenges the widely-held assumption that the relationship between place branding and place identity is fundamentally reflective, arguing instead that this relationship is inherently generative. This shift in perspective, explored in relation to current responsible place branding practice, is central to the realisation of place branding as a force for good.
Abstract:
In this paper we describe the preliminary results of a field study which evaluated the use of MiniOrb, a system that employs ambient and tangible interaction mechanisms to allow inhabitants of office environments to report on subjectively perceived office comfort levels. The purpose of this study was to explore the role of ubiquitous computing in the individual control of indoor climate and, specifically, to answer the question of the extent to which ambient and tangible interaction mechanisms are suited to capturing individual comfort preferences in a non-obtrusive manner. We outline the preliminary results of an in-situ trial of the system.
Abstract:
It has been called “the world’s worst recorded natural disaster” and “the largest earthquake in 40 years,” galvanizing the largest global relief effort in history. For those of us involved in the discipline and/or the practice of communications, we realized that it presented a unique case study from a number of perspectives. Both the media and the public became so enraptured and enmeshed in the story of the tsunami of December 26, 2004, bringing to the fore a piece of geography and a people too rarely considered prior to the tragedy, that we felt compelled to examine the phenomenon. The overwhelming significance of this volume comes from its being a combination of both academic scholars and development practitioners in the field. Its poignancy is underscored by their wide-ranging perspectives, with 21 chapters representing some 14 different countries. Their realities provide not only credibility but also an unprecedented sensitivity to communication issues. Our approach here considers Tsunami 2004 from five communication perspectives: 1) interpersonal/intercultural; 2) mass media; 3) telecommunications; 4) ethics, philanthropy, and development communication; and 5) personal testimonies and observations. You will learn even more here about the theory and practice of disaster/crisis communication.
Abstract:
The generation of a correlation matrix for a set of genomic sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. Each sequence may be millions of bases long and there may be thousands of such sequences that we wish to compare, so not all sequences may fit into main memory at the same time. Each sequence needs to be compared with every other sequence, so we will generally need to page some sequences in and out more than once. In order to minimize execution time we need to minimize this I/O. This paper develops an approach for faster and scalable computing of large correlation matrices through the maximal exploitation of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different bioinformatics problems with different correlation matrix sizes. The significant performance improvement of the approach over previous work is demonstrated through benchmark examples.
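The paging idea described in this abstract can be illustrated with a minimal sketch: sequences are loaded in blocks that fit in memory, and each pair of blocks is compared while both are resident, so every block is read from disk far fewer times than a naive pairwise loop would require. All names here (`load_block`, `similarity`) are illustrative assumptions, not the paper's actual interfaces.

```python
def blocked_correlation(num_seqs, block_size, load_block, similarity):
    """Fill an all-pairs similarity matrix while loading each block of
    sequences from external storage as few times as possible.

    load_block(a, b) returns sequences a..b-1; similarity(x, y) is any
    symmetric pairwise measure. Both are caller-supplied assumptions.
    """
    matrix = [[0.0] * num_seqs for _ in range(num_seqs)]
    starts = range(0, num_seqs, block_size)
    for i0 in starts:
        block_i = load_block(i0, min(i0 + block_size, num_seqs))
        for j0 in starts:
            if j0 < i0:
                continue  # matrix is symmetric: compute the upper triangle only
            block_j = block_i if j0 == i0 else load_block(j0, min(j0 + block_size, num_seqs))
            for i, seq_i in enumerate(block_i, start=i0):
                for j, seq_j in enumerate(block_j, start=j0):
                    if j >= i:
                        matrix[i][j] = matrix[j][i] = similarity(seq_i, seq_j)
    return matrix
```

With B blocks, each block is loaded O(B) times instead of O(N) times for individual sequences, which is the kind of I/O reduction the paper targets; the real system additionally tunes block sizes to the available memory.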
Abstract:
Airborne particulate pollution is considered to be one of the major harmful emissions produced by vehicle engines, as it has been directly linked to serious health problems. Passengers spend long periods at bus stations and may be exposed to high concentrations of pollution. Particle pollution at two bus stations in Brisbane, Australia was monitored. The two bus stations had markedly different site geometry and surroundings, with one situated in a street canyon and the other elevated above ground level. The same flow of traffic operated through both stations. Real-time measurements of ultrafine particle concentration, size distribution and meteorological conditions were carried out on the platform continuously over several days. The results showed that the particle number concentrations were significantly different at the two stations, suggesting that the layout of site geometry and surroundings was a dominant determining factor, through the injection of fresh air into the station platforms and the rates of dilution.
Abstract:
For robots operating in outdoor environments, a number of factors, including weather, time of day, rough terrain, high speeds, and hardware limitations, lead to image blur and underexposure that make performing vision-based simultaneous localization and mapping with current techniques infeasible, especially on smaller platforms and low-cost hardware. In this paper, we present novel visual place-recognition and odometry techniques that address the challenges posed by low lighting, perceptual change, and low-cost cameras. Our primary contribution is a novel two-step algorithm that combines fast low-resolution whole-image matching with a higher-resolution patch-verification step, as well as image saliency methods that simultaneously improve performance and decrease computing time. The algorithms are demonstrated using consumer cameras mounted on a small vehicle in a mixed urban and vegetated environment and a car traversing highway and suburban streets, at different times of day and night and in various weather conditions. The algorithms achieve reliable mapping over the course of a day, both when incrementally incorporating new visual scenes from different times of day into an existing map, and when using a static map comprising visual scenes captured at only one point in time. Using the two-step place-recognition process, we demonstrate for the first time single-image, error-free place recognition at recall rates above 50% across a day-night dataset without prior training or utilization of image sequences. This place-recognition performance enables topologically correct mapping across day-night cycles.
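The two-step idea in this abstract, cheap whole-image matching to shortlist candidates, then higher-resolution verification, can be sketched as follows. This is not the authors' code: the downsampling size, shortlist length, central-patch verification, and sum-of-absolute-differences scoring are all assumptions made for illustration.

```python
import numpy as np

def downsample(img, size=8):
    """Box-average a square grayscale image down to size x size."""
    h, w = img.shape
    return img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def two_step_match(query, database, shortlist=3):
    """Return the index of the database frame best matching `query`.

    Step 1: rank all frames by low-resolution whole-image distance (cheap).
    Step 2: re-score only the top candidates on a full-resolution patch.
    """
    q_small = downsample(query)
    dists = [np.abs(downsample(db) - q_small).mean() for db in database]
    candidates = np.argsort(dists)[:shortlist]
    # Verify candidates on a central high-resolution patch.
    h, w = query.shape
    patch = (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))
    scores = [np.abs(database[i][patch] - query[patch]).mean() for i in candidates]
    return int(candidates[int(np.argmin(scores))])
```

The design point is that the expensive comparison runs only on a constant-size shortlist, so total cost stays close to the low-resolution pass; the paper's actual pipeline adds saliency weighting and operates across day-night appearance change, which this toy sketch does not model.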
Abstract:
Representation of facial expressions using continuous dimensions has been shown to be inherently more expressive and psychologically meaningful than using categorized emotions, and thus has gained increasing attention over recent years. Many sub-problems have arisen in this new field that remain only partially understood. A comparison of the regression performance of different texture and geometric features, and investigation of the correlations between continuous dimensional axes and basic categorized emotions, are two of these. This paper presents empirical studies addressing these problems, and it reports results from an evaluation of different methods for detecting spontaneous facial expressions within the arousal-valence dimensional space (AV). The evaluation compares the performance of texture features (SIFT, Gabor, LBP) against geometric features (FAP-based distances), and the fusion of the two. It also compares the prediction of arousal and valence, obtained using the best fusion method, to the corresponding ground truths. Spatial distribution, shift, similarity, and correlation are considered for the six basic categorized emotions (i.e. anger, disgust, fear, happiness, sadness, surprise). Using the NVIE database, results show that the fusion of LBP and FAP features performs the best. The results from the NVIE and FEEDTUM databases reveal novel findings about the correlations of the arousal and valence dimensions to each of the six basic emotion categories.
Abstract:
Distributed computation and storage have been widely used for processing big data sets. For many big data problems, with the size of data growing rapidly, the distribution of computing tasks and related data can greatly affect the performance of the computing system. In this paper, a distributed computing framework is presented for high-performance computing of All-to-All Comparison Problems. A data distribution strategy is embedded in the framework for reduced storage space and balanced computing load. Experiments are conducted to demonstrate the effectiveness of the developed approach. They have shown that about 88% of the ideal performance capacity has been achieved on multiple machines using the approach presented in this paper.
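The core constraint in an all-to-all comparison, as described above, is that every unordered pair of items must be compared exactly once, while each worker should hold as little data and do as equal a share of work as possible. A minimal sketch of one naive distribution strategy (round-robin over pairs, with per-worker storage derived from the assigned pairs) is below; the paper's actual strategy is more sophisticated, and all names here are assumptions.

```python
from itertools import combinations

def distribute_pairs(num_items, num_workers):
    """Assign all unordered item pairs to workers round-robin.

    Returns (tasks, storage): tasks[w] is the list of (i, j) pairs
    worker w compares; storage[w] is the set of item ids worker w
    must hold locally to perform its comparisons.
    """
    tasks = [[] for _ in range(num_workers)]
    for k, pair in enumerate(combinations(range(num_items), 2)):
        tasks[k % num_workers].append(pair)
    # A worker only stores the items that appear in its own pairs.
    storage = [{i for pair in t for i in pair} for t in tasks]
    return tasks, storage
```

Round-robin balances the comparison load almost perfectly, but, as the storage sets show, it tends to replicate many items on every worker; a data-aware strategy of the kind the paper embeds in its framework assigns pairs so that each worker's storage set stays small, trading a little load balance for much less replication.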
Abstract:
This collection explores male sex work from an array of perspectives and disciplines. It aims to help enrich the ways in which we view both male sex work as a field of commerce and male sex workers themselves. Leading contributors examine the field both historically and cross-culturally from fields including public health, sociology, psychology, social services, history, filmography, economics, mental health, criminal justice, geography, and migration studies. Synthesizing introductions by the editors help the reader understand the implications of the findings and conclusions for scholars, practitioners, students, and members of the interested/concerned public.