883 results for pacs: information technology applications
Abstract:
Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open-source platform Android, which was presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, although third parties are currently expected to develop only Java applications. In this work, we present the benefits of using native applications in Android. Android includes a fully functional Linux, and using it for heavy computational tasks when developing applications can bring a substantial performance increase. We present how to develop native applications and software components, as well as how to let Linux applications and components communicate with Java programs. Additionally, we present performance measurements of native and Java applications executing identical tasks. The results show that native C applications can be up to 30 times as fast as an identical algorithm running in the Dalvik VM, and that Java applications can achieve a speed-up of up to 10 times by utilizing the Java Native Interface (JNI).
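The abstract contains no code; as a minimal desktop-style sketch of the Java side of the JNI bridge it describes, assuming a hypothetical native library "nativemath" (built from C) exporting a Fibonacci routine:

```java
// Sketch of calling a native C routine from Java via JNI.
// The library name "nativemath" and the method "fibonacci" are
// hypothetical; on Android the C side would be built with the NDK.
public class NativeMath {
    static {
        // Loads libnativemath.so at class-initialization time.
        System.loadLibrary("nativemath");
    }

    // Declared in Java, implemented in C under the JNI symbol
    // Java_NativeMath_fibonacci(JNIEnv*, jobject, jint).
    public native int fibonacci(int n);

    public static void main(String[] args) {
        // The heavy computation runs in native code; only the call
        // and the result cross the JNI boundary.
        int result = new NativeMath().fibonacci(30);
        System.out.println("fib(30) = " + result);
    }
}
```

Keeping the JNI boundary coarse, as here (one call per task rather than per step), is what lets the native side deliver the kind of speed-up the abstract reports.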
Abstract:
This report looks at opportunities arising from what is either already available or starting to take off in Information and Communication Technology (ICT). ICT encompasses the entire system of information, communication, processes and knowledge within an organisation, and how technology can be implemented to serve the information and communication needs of people and organisations. An ICT system involves a combination of work practices, information, people and a range of technologies and applications organised to make the business or organisation fully functional and efficient, and to accomplish its goals. Our focus is on vocational, work-based education in New Zealand. It is not about eLearning, although we briefly touch on the topic. We provide a background on vocational education in New Zealand, cover what we consider to be the key trends affecting work-based vocational education and training (VET), and offer practical suggestions for leveraging better value from ICT initiatives across the main activities of an Industry Training Organisation (ITO). We use a learning value chain approach to demonstrate the main functions ITOs engage in, and also use this approach as the basis for developing and prioritising an ICT strategy. Much of what we consider in this report is applicable to the wider tertiary education sector as it relates to life-long learning. We consider ICT as an enabler that: a) connects education businesses (of all types, including tertiary education institutions) to learners, their career decisions and their learning, and b) enables those same businesses to run more efficiently. We suggest that these two sets of activities be considered as interconnected parts of the same education or training business ICT strategy.
Abstract:
Qualitative research methods are widely accepted in Information Systems, and multiple approaches have been successfully used in IS qualitative studies over the years. These approaches include narrative analysis, discourse analysis, grounded theory, case study, ethnography and phenomenological analysis. Guided by critical, interpretive and positivist epistemologies (Myers 1997), qualitative methods are continuously growing in importance in our research community. In this special issue, we adopt Van Maanen's (1979: 520) definition of qualitative research as an umbrella term covering an “array of interpretive techniques that can describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world”. In the call for papers, we stated that the aim of the special issue was to provide a forum within which we can present and debate the significant number of issues, results and questions arising from the pluralistic approach to qualitative research in Information Systems. We recognise both the potential and the challenges that qualitative approaches offer for accessing the different layers and dimensions of a complex and constructed social reality (Orlikowski, 1993). The special issue is also a response to the need to showcase the current state of the art in IS qualitative research and to highlight advances and issues encountered in the process of continuous learning, including questions about its ontology, epistemological tenets, theoretical contributions and practical applications.
Abstract:
Cooperative systems, by multiplying information sources along the road, offer considerable potential to improve the safety of road users, especially drivers. However, developing cooperative ITS applications requires additional resources compared to non-cooperative applications, which is both time-consuming and expensive. In this paper, we present a simulation architecture aimed at prototyping cooperative ITS applications in an accurate, detailed, close-to-reality environment. The architecture is designed to be modular and generalist, and can be used to simulate any type of cooperative system (CS) application, including augmented perception. We then discuss the results of two applications deployed with our architecture, using a common freeway emergency braking scenario. The first application is Emergency Electronic Brake Light (EEBL); we discuss its safety improvements in terms of the number and severity of crashes. The second application compares the performance of cooperative risk assessment using an augmented map against a non-cooperative approach based on local perception only. Our results show a systematic improvement in forward warning time for most vehicles in the string when using the augmented-map-based risk assessment.
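The abstract does not state how forward warning is triggered; one plausible simplified reading (our assumption, not the paper's algorithm) is that a following vehicle warns when its time-to-collision with a braking leader drops below a threshold, with the V2V brake message simply arriving earlier than local perception would allow:

```java
// Simplified sketch of an EEBL-style forward warning check.
// The kinematics and the 2.0 s threshold are illustrative assumptions,
// not values taken from the paper.
public class EeblWarning {
    /** Time to collision, in seconds, for a follower closing on a leader. */
    static double timeToCollision(double gapMeters,
                                  double followerSpeed, double leaderSpeed) {
        double closingSpeed = followerSpeed - leaderSpeed;
        // No warning needed if the follower is not closing the gap.
        return closingSpeed > 0 ? gapMeters / closingSpeed
                                : Double.POSITIVE_INFINITY;
    }

    public static void main(String[] args) {
        double gap = 40.0;      // metres to the braking vehicle ahead
        double follower = 30.0; // follower speed, m/s
        double leader = 20.0;   // leader speed after braking, m/s
        boolean warn = timeToCollision(gap, follower, leader) < 2.0;
        System.out.println("EEBL warning: " + warn);
    }
}
```

The augmented map's benefit, on this reading, is that the leader's reduced speed is known from the broadcast message the instant braking starts, rather than only once local sensors detect the deceleration, which moves the same threshold check earlier in time.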
Abstract:
This project was a step forward in developing and evaluating a novel mathematical model that can deduce the meaning of words based on their use in language. The model can be applied to a wide range of natural language applications, including the information-seeking process most of us undertake on a daily basis.
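The abstract does not name the model; as a hedged illustration of the general idea only (deducing word meaning from usage), a toy distributional sketch compares words by the contexts in which they co-occur:

```java
import java.util.*;

// Toy distributional-semantics sketch: words used in similar contexts
// receive similar co-occurrence vectors. Purely illustrative; the
// abstract does not specify the project's actual model.
public class Distributional {
    public static void main(String[] args) {
        String[] corpus = {
            "the cat drinks milk", "the dog drinks water",
            "the cat chases the dog"
        };
        // word -> (context word -> co-occurrence count within a sentence)
        Map<String, Map<String, Integer>> vectors = new HashMap<>();
        for (String sentence : corpus) {
            String[] words = sentence.split(" ");
            for (String w : words)
                for (String c : words)
                    if (!w.equals(c))
                        vectors.computeIfAbsent(w, k -> new HashMap<>())
                               .merge(c, 1, Integer::sum);
        }
        System.out.printf("sim(cat, dog) = %.2f%n",
                cosine(vectors.get("cat"), vectors.get("dog")));
    }

    // Cosine similarity between two sparse count vectors.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            Integer bv = b.get(e.getKey());
            if (bv != null) dot += e.getValue() * bv;
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```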
Abstract:
Social media tools are often the result of innovations in Information Technology, developed by IT professionals and innovators. Nevertheless, how IT professionals, many of whom are responsible for designing and building social media technologies, themselves use or experience social media for professional purposes has not been investigated. This study uses Information Grounds Theory (Pettigrew, 1998) as a framework to study IT professionals’ experience of using social media for professional purposes. Information grounds facilitate the opportunistic discovery of information within social settings created temporarily at a place where people gather for a specific purpose (e.g., doctors’ waiting rooms, office tea rooms), but whose social atmosphere stimulates spontaneous sharing of information (Pettigrew, 1999). This study proposes that social media has the qualities of a rich information ground: people participate from separate “places” in cyberspace in a synchronous manner, in real time, making it almost as dynamic and unplanned as a physical information ground. There is limited research on how social media platforms are perceived as a “place” (a place to go to, a place to gather, or a place to be seen in) comparable to physical spaces, and there is no empirical study on how IT professionals use or “experience” social media. The data for this study is being collected through a study of IT professionals who currently use Twitter. A digital ethnography approach is being taken, wherein the researcher “follows” the participants online and observes their behaviours and interactions on social media. Next, a sub-set of participants will be interviewed on their experiences with and within social media, and on how social media compares with traditional information grounds, information communication, and collaborative environments. An Evolved Grounded Theory (Glaser, 1992) approach will be used to analyse the tweet data and interviews and to map the findings against Information Grounds Theory. Findings from this study will provide a foundational understanding of IT professionals’ experiences within social media, and can help both professionals and researchers understand this fast-evolving method of communication.
Abstract:
Modernized GPS and GLONASS, together with the new GNSS systems BeiDou and Galileo, offer code and phase ranging signals on three or more carriers. Traditionally, dual-frequency code and/or phase GPS measurements are linearly combined to eliminate ionospheric delay effects in various positioning and analysis tasks. This treatment has limitations in processing signals at three or more frequencies from more than one system, and can hardly be adapted to cope with the booming variety of receivers tracking a broad range of signals. In this contribution, a generalized positioning model that is independent of the navigation system and of the number of carriers is proposed, suitable for both single- and multi-site data processing. For the synchronization of different signals, uncalibrated signal delays (USD) are defined in a general way to compensate for the signal-specific offsets in code and phase observations, respectively. In addition, ionospheric delays are included in the parameterization with careful consideration. Based on an analysis of the algebraic structures, this generalized positioning model is further refined with a set of proper constraints to regularize the datum deficiency of the observation equation system. With this new model, uncalibrated signal delays and ionospheric delays are derived for both GPS and BeiDou with a large data set. Numerical results demonstrate that, with a limited number of stations, the uncalibrated code delays (UCD) are determined to a precision of about 0.1 ns for GPS and 0.4 ns for BeiDou signals, while the uncalibrated phase delays (UPD) for L1 and L2 are generated for GPS with a consistency of about 0.3 cycles, using 37 stations evenly distributed in China. Additional experiments concerning the performance of this novel model in point positioning with mixed frequencies from mixed constellations are analyzed, in which the USD parameters are fixed to our generated values. The results are evaluated in terms of both positioning accuracy and convergence time.
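For reference, the traditional dual-frequency ionosphere-free code combination that this model is contrasted against (a standard textbook form, not reproduced from the paper) is

\[
P_{\mathrm{IF}} \;=\; \frac{f_1^{2}\,P_1 \;-\; f_2^{2}\,P_2}{f_1^{2}-f_2^{2}},
\]

where \(P_1, P_2\) are code observations on carriers with frequencies \(f_1, f_2\). Since the first-order ionospheric delay scales as \(1/f^{2}\), this combination cancels it, but it is hard-wired to exactly two frequencies of one system; that rigidity is the limitation the generalized model with explicit ionospheric and USD parameters is designed to remove.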
Abstract:
With the increasing popularity and adoption of building information modeling (BIM), the amount of digital information available about a building is overwhelming. However, enormous challenges remain in identifying the meaningful and required information within a complex BIM model to support a particular construction management (CM) task. Detailed specifications of the information required by different construction domains, together with expressive and easy-to-use BIM reasoning mechanisms, are seen as important means of addressing these challenges. This paper analyzes some of the characteristics and requirements of component-specific construction knowledge in relation to current work practice and BIM-based applications. It is argued that domain ontologies and information extraction approaches, such as queries, could bring much-needed support for knowledge sharing and the integration of information between design, construction and facility management.
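The abstract mentions queries as an extraction mechanism without giving one; as a minimal hypothetical sketch of the idea (the component model and filter criteria below are invented for illustration and do not follow IFC or any real BIM schema):

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch: extracting the components relevant to one CM
// task from a larger BIM model. The Component record and its fields
// are invented; a real system would query an IFC-based model.
public class BimQuery {
    record Component(String id, String type, String material, int floor) {}

    public static void main(String[] args) {
        List<Component> model = List.of(
            new Component("W-01", "Wall", "Concrete", 1),
            new Component("W-02", "Wall", "Brick", 2),
            new Component("C-01", "Column", "Concrete", 1));

        // "All concrete components on floor 1" -- e.g. for a pour schedule.
        List<Component> result = model.stream()
                .filter(c -> c.material().equals("Concrete") && c.floor() == 1)
                .collect(Collectors.toList());
        result.forEach(c -> System.out.println(c.id() + " " + c.type()));
    }
}
```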
Abstract:
This paper is a work in progress that examines current consumer engagement with eHealth information through smartphones or tablets. We focus on three activity types: seeking, posting and ‘other’ engagement activity, and compare two age groups, 25-40s and 40-55s. Findings show that around 30% of the younger age group engage with government and other health providers’ websites, receive eHealth emails, and read other people’s comments about health-related issues in online discussion groups, websites or blogs. Approximately 20% engage with government and other health providers’ social media and watch or listen to audio or video podcasts. For the older age group, the most active engagement with eHealth information is in the seeking category, through government or other health websites (approximately 15%), with less than 10% for social media sites; their posting activity is below 5%. For other activities, less than 15% of the older age group engage through receiving emails and reading blogs, less than 10% watch or listen to podcasts, and their online consulting activity is below 7%. We note that scores are low for both groups in terms of engaging with eHealth information through Twitter.
Abstract:
Whilst alcohol is a common feature of many social gatherings, there are numerous immediate and long-term health and social harms associated with its abuse. Alcohol consumption is the world’s third largest risk factor for disease and disability, with almost 4% of all deaths worldwide attributed to alcohol. Not surprisingly, alcohol use and binge drinking by young people are of particular concern, with Australian data reporting that 39% of young people (18-19 years) admitted to drinking at least weekly, and 32% drank to levels that put them at risk of alcohol-related harm. The growing market penetration and connectivity of smartphones may be an opportunity for innovation in promoting health-related self-management of substance use. However, little is known about how best to harness and optimise this technology for health-related intervention and behaviour change. This paper explores the utility and interface of smartphone technology as a health intervention tool to monitor and moderate alcohol use. A review of the psychological health applications of this technology is presented, along with the findings of a series of focus groups, surveys and behavioural field trials of several drink-monitoring applications. Qualitative and quantitative data are presented on the perceptions, preferences and utility of the design, usability and functionality of smartphone apps to monitor and moderate alcohol use. How these findings have shaped the development and evolution of the OnTrack app is specifically discussed, along with future directions and applications of this technology in health intervention, prevention and promotion.
Abstract:
The representation of business process models has been a continuing research topic for many years. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. In addition, very few of these representations have undergone a thorough analysis or design process with reference to psychological theories of data and process visualization. This dearth of visualization research, we believe, has led to problems with BPM uptake in some organizations, as the representations can be difficult for stakeholders to understand, and it thus remains an open research question for the BPM community. Furthermore, business analysts and process modeling experts themselves need visual representations that can assist with key BPM life cycle tasks in the process of generating optimal solutions. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This is a timely topic, with research relevant to this workshop emerging in a number of places around the globe. This is the second TAProViz workshop run at BPM. The intention this year is to consolidate the results of last year's successful workshop by further developing this important topic and identifying the key research topics of interest to the BPM visualization community.
Abstract:
Community support agencies routinely employ a web presence to provide information on their services. While this online information provision helps to increase an agency’s reach, this paper argues that it can be further extended by mapping relationships between services and by facilitating two-way communication and collaboration with local communities. We argue that emergent technologies, such as locative media and networking tools, can assist in harnessing this social capital. However, new applications must be designed in ways that both persuade and support community members to contribute information and support others in need. An analysis of the online presence of community service agencies and social benefit applications is presented against Fogg’s Behaviour Model. From this evaluation, design principles are proposed for developing new locative, collaborative online applications for social benefit.
Abstract:
Person re-identification is the task of recognising an individual, after a first observation, at different locations and later times across a network of cameras. Traditionally, this task has been performed by extracting appearance features of an individual and matching these features to the previous observation. However, identifying an individual based solely on appearance can be ambiguous, particularly when people wear similar clothing (e.g., people dressed in uniforms in sporting and school settings). The task is made more difficult when the resolution of the input image is small, as is typically the case in multi-camera networks. To circumvent these issues, we need to use other contextual cues. In this paper, we use "group" information as a contextual feature to aid the re-identification of a person, motivated by the fact that people generally move together as a collective group. To encode group context, we learn a linear mapping function that assigns each person to a "role" or position within the group structure. We then combine the appearance and group-context cues using a weighted summation. We demonstrate how this improves the performance of person re-identification in a sports environment over appearance-based features alone.
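The abstract names the fusion step explicitly (a weighted summation of the two cues); a minimal sketch of that step, where the weight and the distance values are placeholders rather than the paper's learned quantities:

```java
// Sketch of the score-fusion step described in the abstract: a weighted
// sum of an appearance distance and a group-context distance. The
// weight alpha is a placeholder; the paper presumably tunes or learns it.
public class ReIdFusion {
    static double fused(double appearanceDist, double groupContextDist,
                        double alpha) {
        return alpha * appearanceDist + (1.0 - alpha) * groupContextDist;
    }

    public static void main(String[] args) {
        // Lower distance = better match between two observations.
        double candidateA = fused(0.40, 0.10, 0.7); // similar group role
        double candidateB = fused(0.35, 0.80, 0.7); // similar clothing only
        System.out.println(candidateA < candidateB
                ? "match: candidate A" : "match: candidate B");
    }
}
```

With group context included, candidate A wins despite a slightly worse appearance score, which is exactly the ambiguity-breaking behaviour the abstract motivates for uniformed groups.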
Abstract:
A fear of imminent information overload predates the World Wide Web by decades, yet that fear has never abated. Worse, as the World Wide Web today takes the lion’s share of the information we deal with, both in amount and in time spent gathering it, the situation has only become more precarious. This chapter analyses new issues in information overload that have emerged with the advent of the Web, which emphasizes written communication, defined in this context as the exchange of ideas expressed informally, often casually, as in verbal language. The chapter focuses on three ways to mitigate these issues. First, it helps us, the users, to be more specific in what we ask for. Second, it helps us amend our request when we don't get what we think we asked for. And third, since only we, the human users, can judge whether the information received is what we want, it makes retrieval techniques more effective by basing them on how humans structure information. This chapter reports on extensive experiments we conducted in all three areas. First, to let users be more specific in describing an information need, they were allowed to express themselves in an unrestricted conversational style; this way, they could convey their information need as if they were talking to a fellow human, instead of using the two or three words typically supplied to a search engine. Second, users were provided with effective ways to zoom in on the desired information once potentially relevant information became available. Third, a variety of experiments focused on the search engine itself as the mediator between the request and the delivery of information. All examples that are explained in detail have actually been implemented. The results of our experiments demonstrate how a human-centered approach can reduce information overload in an area that grows in importance with each passing day. By actually having built these applications, I present an operational, not just aspirational, approach.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users’ information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. In reality, however, users’ interests can be diverse, and the documents in a collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent the multiple topics in a collection of documents, and it has been widely utilized in fields such as machine learning and information retrieval; its effectiveness in information filtering, however, has not been well explored. Patterns are generally thought to be more discriminative than single terms for describing documents, yet the enormous number of discovered patterns hinders their effective and efficient use in real applications, so selecting the most discriminative and representative patterns from the huge number discovered becomes crucial. To deal with the above limitations and problems, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate the relevance of a document to the user’s information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
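The MPBTM estimation itself is in the paper; as a loose sketch of the final filtering step in the spirit of the abstract (score a document by the topic patterns it fully matches, then threshold), with all patterns, weights and the threshold invented for illustration:

```java
import java.util.*;

// Loose sketch of pattern-based relevance filtering: a document is
// scored by the discriminative patterns it fully contains. All values
// here are invented; see the paper for the actual MPBTM estimation.
public class PatternFilter {
    public static void main(String[] args) {
        // topic pattern (a set of co-occurring terms) -> weight
        Map<Set<String>, Double> patterns = Map.of(
            Set.of("interest", "rate"), 0.9,
            Set.of("oil", "price", "opec"), 0.7);

        Set<String> doc = Set.of("the", "interest", "rate", "rose");
        double score = 0;
        for (Map.Entry<Set<String>, Double> p : patterns.entrySet())
            if (doc.containsAll(p.getKey()))  // pattern fully matched
                score += p.getValue();

        System.out.println(score > 0.5 ? "relevant" : "filtered out");
    }
}
```

Requiring the whole pattern to match, rather than scoring individual terms, is what makes patterns more discriminative than single terms, which is the property the abstract builds on.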