726 results for Additional somatosensory information
Abstract:
In-place digital augmentation enhances the experience of physical spaces through digital technologies that are directly accessible within those spaces. This can take many forms, e.g., location-aware applications running on individuals' portable devices, such as smartphones, or large static devices, such as public displays, which are located within the augmented space and accessible to everyone. The hypothesis of this study is that in-place digital augmentation, in the context of civic participation, where citizens collaboratively aim to make their community or city a better place, offers significant new benefits, because it allows access to services or information that are currently inaccessible to urban dwellers where and when they are needed: in place. This paper describes our work in progress deploying a public screen to promote civic issues in public, urban spaces and to encourage public feedback and discourse via mobile phones.
Abstract:
Effective information and knowledge management (IKM) is critical to corporate success, yet its actual establishment and management is not fully understood. We identify ten organizational elements that need to be addressed to ensure the effective implementation and maintenance of information and knowledge management within organizations. We define these elements and provide key characterizations. We then discuss a case study that describes the implementation of an information system (designed to support IKM) in a medical supplies organization. We apply the framework of organizational elements in our analysis to uncover the enablers and barriers in this system implementation project. Our analysis suggests that taking the ten organizational elements into consideration when implementing information systems will assist practitioners in managing information and knowledge processes more effectively and efficiently. We discuss implications for future research.
Abstract:
Do commencing students possess the level of information literacy (IL) knowledge and skills they need to succeed at university? What impact does embedding IL within the engineering and design curriculum have? This paper reports on the self-perception versus the reality of IL knowledge and skills across a large cohort of first-year built environment and engineering students. Acting on the findings of this evaluation, the authors (a team of academic librarians) developed an intensive IL skills program which was integrated into a faculty-wide unit. Perceptions, knowledge and skills were re-evaluated at the end of the semester to determine if embedded IL education made a difference. Findings reveal that both the perception and reality of IL skills were significantly and measurably improved.
Abstract:
Purpose – The purpose of this paper is to examine the use of bid information, including both price and non-price factors, in predicting a bidder's performance. Design/methodology/approach – The practice of the industry was first reviewed. Data on bid evaluation and performance records of the successful bids were then obtained from the Hong Kong Housing Department, the largest housing provider in Hong Kong. This was followed by the development of a radial basis function (RBF) neural network based performance prediction model. Findings – It is found that public clients are more conscientious and include non-price factors in their bid evaluation equations. The input variables are drawn from information available at the time of the bid, and the output variable is the project performance score recorded during work in progress achieved by the successful bidder. It was found that past project performance score is the most sensitive input variable in predicting future performance. Research limitations/implications – The paper shows the inadequacy of using price alone as the bid award criterion. The need for systematic performance evaluation is also highlighted, as this information is highly instrumental for subsequent bid evaluations. The caveat for this study is that the prediction model was developed based on data obtained from a single source. Originality/value – The value of the paper is in the use of an RBF neural network as the prediction tool, because it can model non-linear functions. This capability avoids tedious "trial and error" in deciding the number of hidden layers to be used in the network model. Keywords: Hong Kong, Construction industry, Neural nets, Modelling, Bid offer spreads. Paper type: Research paper
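The RBF prediction idea described in the abstract can be sketched roughly as follows. The bid attributes, toy data values, and Gaussian width are invented for illustration; this is not the study's actual model, only a minimal RBF network fitted by linear least squares:

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian activations of each input against each centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    """Solve the linear output weights by least squares."""
    Phi = rbf_features(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    return rbf_features(X, centers, width) @ w

# Toy training data: [normalised bid price, past performance score]
X = np.array([[0.2, 0.9], [0.5, 0.7], [0.8, 0.4], [0.3, 0.8]])
y = np.array([0.85, 0.70, 0.45, 0.80])  # observed project scores

centers = X.copy()  # one centre per training point
w = fit_rbf(X, y, centers, width=0.5)
print(predict_rbf(np.array([[0.4, 0.75]]), centers, 0.5, w))
```

Using one centre per training point makes the fit an exact interpolation; a real model would choose fewer centres (e.g. by clustering), which is the step the paper argues is less tedious than tuning hidden layers.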
Abstract:
The explosive growth of the World Wide Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users' personal preferences. Recommender systems have been an active research area for more than a decade, and many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors that affects recommendation quality is the amount of information resources available to the recommender. The main feature of recommender systems is their ability to make personalised recommendations for different individuals. However, many ecommerce sites find it difficult to obtain sufficient knowledge about their users; hence, the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem. Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can provide only limited improvements to the overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that when there is a sufficient information and knowledge base to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources.
Two possible strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders: • The first strategy suggests that information resources can be enriched by considering other information or data facets. Specifically, a taxonomy-based recommender, the Hybrid Taxonomy Recommender (HTR), is presented in this thesis. HTR exploits the relationship between users' taxonomic preferences and item preferences, derived from the combination of widely available product taxonomic information and existing user rating data, and then utilises this taxonomic-preference-to-item-preference relation to generate high quality recommendations. • The second strategy suggests that information resources can be enriched simply by obtaining information resources from other parties. In this thesis, a distributed recommender framework, the Ecommerce-oriented Distributed Recommender System (EDRS), is proposed. The proposed EDRS allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality. Based on the results of the experiments conducted in this thesis, the proposed systems and techniques achieve substantial improvements in both making quality recommendations and alleviating the cold-start problem.
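As a rough illustration of the first strategy, the sketch below infers category-level preferences from a user's item ratings and uses them to score an unrated item. The item names, categories, and simple averaging scheme are hypothetical stand-ins, not HTR's actual algorithm:

```python
from collections import defaultdict

def taxonomy_preferences(ratings, item_categories):
    """Average a user's ratings over each category their rated items fall in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for item, r in ratings.items():
        for cat in item_categories[item]:
            totals[cat] += r
            counts[cat] += 1
    return {c: totals[c] / counts[c] for c in totals}

def score_item(item, prefs, item_categories, default=0.0):
    """Score an unrated item by the user's preference for its categories."""
    known = [prefs[c] for c in item_categories[item] if c in prefs]
    return sum(known) / len(known) if known else default

item_categories = {
    "novel_a": ["fiction"],
    "novel_b": ["fiction"],
    "cookbook": ["non-fiction", "food"],
}
prefs = taxonomy_preferences({"novel_a": 5.0, "cookbook": 2.0}, item_categories)
print(score_item("novel_b", prefs, item_categories))  # scored via "fiction"
```

The point of the enrichment: even though "novel_b" has no ratings at all (a cold-start item), the taxonomy still lets the system rank it via the user's inferred "fiction" preference.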
Abstract:
An examination of Information Security (IS) and Information Security Management (ISM) research in Saudi Arabia has shown the need for more rigorous studies focusing on the implementation and adoption processes involved in IS culture and practices. Overall, there is a lack of academic and professional literature about ISM, and more specifically IS culture, in Saudi Arabia. Therefore, the overall aim of this paper is to identify issues and factors that assist the implementation and adoption of IS culture and practices within the Saudi environment. The goal of this paper is to identify the important conditions for creating an information security culture in Saudi Arabian organizations. We plan to use the resulting framework to investigate whether a security culture has emerged in practice in Saudi Arabian organizations.
Abstract:
Understanding the complex, dynamic and uncertain characteristics of organisational employees who perform authorised or unauthorised information security activities is deemed to be a very important and challenging task. This paper presents a conceptual framework for classifying and organising the characteristics of organisational subjects involved in these information security practices. Our framework expands the traditional Human Behaviour and the Social Environment perspectives used in social work by identifying how knowledge, skills and individual preferences work to influence individual and group practices with respect to information security management. The classification of concepts and characteristics in the framework arises from a review of recent literature and is underpinned by theoretical models that explain these concepts and characteristics. Further, based upon an exploratory study of three case organisations in Saudi Arabia involving extensive interviews with senior managers, department managers, IT managers, information security officers, and IT staff, this article describes observed information security practices and identifies several factors which appear to be particularly important in influencing information security behaviour. These factors include values associated with national and organisational culture and how they manifest in practice, and activities related to information security management.
Abstract:
Traffic congestion is a growing problem with high financial, social and personal costs. These costs include psychological and physiological stress, aggression and fatigue caused by lengthy delays, and an increased likelihood of road crashes. Reliable and accurate traffic information is essential for the development of traffic control and management strategies. Traffic information is mostly gathered from in-road vehicle detectors such as induction loops. The Traffic Message Channel (TMC) is a popular service that wirelessly sends traffic information to drivers. Traffic probes have been used in many cities to increase traffic information accuracy. A simulation to estimate the number of probe vehicles required to increase the accuracy of traffic information in Brisbane is proposed. A meso-level traffic simulator has been developed to facilitate the identification of the optimal number of probe vehicles required to achieve an acceptable level of traffic reporting accuracy. Our approach to determining the optimal number of probe vehicles required to meet quality-of-service requirements is to run simulations with varying numbers of traffic probes. The simulated traffic represents Brisbane's typical morning traffic. The road maps used in the simulation are Brisbane's TMC maps, complete with speed limits and traffic lights. Experimental results show that the optimal number of probe vehicles required to provide a useful supplement to TMC (induction loop) data lies between 0.5% and 2.5% of vehicles on the road. With fewer than 0.25% probes, little additional information is provided, while above 5%, additional probes have only a negligible effect on accuracy. Our findings are consistent with ongoing research on traffic probes, and show the effectiveness of using probe vehicles to supplement induction loops for accurate and timely traffic information.
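The probe-penetration question can be illustrated with a toy Monte Carlo estimate of how link-speed error behaves as the probe fraction grows. The vehicle counts, speeds, and noise model below are invented and far simpler than the meso-level simulator the paper describes; this is only a sketch of the experimental design of sweeping the probe fraction:

```python
import random

def estimate_error(n_vehicles, probe_fraction, true_speed, noise=5.0, seed=1):
    """Absolute error of the mean probe-reported speed on one road link."""
    rng = random.Random(seed)
    probes = max(1, int(n_vehicles * probe_fraction))
    reports = [true_speed + rng.gauss(0, noise) for _ in range(probes)]
    return abs(sum(reports) / len(reports) - true_speed)

# Sweep probe fractions, as the simulation runs in the study do.
for frac in (0.0025, 0.005, 0.025, 0.05):
    err = estimate_error(n_vehicles=10_000, probe_fraction=frac, true_speed=60.0)
    print(f"{frac:.2%} probes -> speed error {err:.2f} km/h")
```

Even this crude model shows the diminishing-returns shape behind the paper's finding: the error of a mean estimate shrinks roughly with the square root of the number of probe reports, so gains flatten quickly beyond a few percent penetration.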
Abstract:
Clearly identifying the boundary between positive and negative document streams is a significant challenge. Several attempts have used negative feedback to address it; however, there are two issues in using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to determine which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
Abstract:
Over the years, it has often been hypothesised that negative feedback should be very useful for substantially improving the performance of information filtering systems; however, no very effective models have emerged to support this hypothesis. This paper proposes an effective model that uses negative relevance feedback, based on a pattern mining approach, to improve extracted features. This study focuses on two main issues in using negative relevance feedback: the selection of constructive negative examples to reduce the space of negative examples, and the revision of existing features based on the selected negative examples. The former selects some offender documents, where offender documents are negative documents that are most likely to be classified in the positive group. The latter groups the extracted features into three categories: the positive specific category, the general category and the negative specific category, to simplify updating the weights. An iterative algorithm is also proposed to implement this approach on the RCV1 data collection, and substantial experiments show that the proposed approach achieves encouraging performance.
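A minimal sketch of the three-way feature grouping, assuming simple set membership is enough to decide a term's category. The boost/penalty weights are illustrative stand-ins for the paper's actual revision strategies:

```python
def group_terms(positive_docs, offender_docs):
    """Partition terms by where they occur: positive docs, offenders, or both."""
    pos_terms = set().union(*positive_docs)
    neg_terms = set().union(*offender_docs)
    return {
        "positive_specific": pos_terms - neg_terms,
        "general": pos_terms & neg_terms,
        "negative_specific": neg_terms - pos_terms,
    }

def revise_weight(term, weight, groups, boost=1.5, penalty=0.5):
    if term in groups["positive_specific"]:
        return weight * boost      # strengthen discriminative terms
    if term in groups["negative_specific"]:
        return weight * penalty    # suppress misleading terms
    return weight                  # leave general terms unchanged

groups = group_terms(
    positive_docs=[{"mining", "pattern", "data"}],
    offender_docs=[{"data", "privacy"}],
)
print(revise_weight("pattern", 1.0, groups))  # boosted: positive specific
```

The value of restricting the negative side to offender documents is visible even here: only terms from near-miss negatives ("privacy") get penalised, so easy negatives cannot drag down genuinely useful features.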
Abstract:
This qualitative study views international students as information-using learners, through an information literacy lens. Focusing on the experiences of 25 international students at two Australian universities, the study investigates how international students use online information resources to learn, and identifies associated information literacy learning needs. An expanded critical incident approach provided the methodological framework for the study. Building on critical incident technique, this approach integrated a variety of concepts and research strategies. The investigation centred on real-life critical incidents experienced by the international students whilst using online resources for assignment purposes. Data collection involved semi-structured interviews and an observed online resource-using task. Inductive data analysis and interpretation enabled the creation of a multifaceted word picture of international students using online resources and a set of critical findings about their information literacy learning needs. The study's key findings reveal: • the complexity of the international students' experience of using online information resources to learn, which involves an interplay of their interactions with online resources, their affective and reflective responses to using them, and the cultural and linguistic dimensions of their information use. • the array of strengths as well as challenges that the international students experience in their information use and learning. • an apparent information literacy imbalance between the international students' more developed information skills and less developed critical and strategic approaches to using information. • the need for enhanced information literacy education that responds to international students' identified information literacy needs.
Responding to the findings, the study proposes an inclusive informed learning approach to support reflective information use and inclusive information literacy learning in culturally diverse higher education environments.
Abstract:
This paper investigates self-Googling through the monitoring of users' search engine activities and adds to the few quantitative studies on this topic already in existence. We explore this phenomenon by answering the following questions: To what extent is self-Googling visible in the usage of search engines? Is any significant difference measurable between queries related to self-Googling and generic search queries? To what extent do self-Googling search requests match the selected personalised Web pages? To address these questions we explore the theory of narcissism in order to help define self-Googling, and present the results from a 14-month online experiment using Google search engine usage data.
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches also have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimise information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model for threshold setting have been developed using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy; the most likely relevant documents are assigned higher scores by the ranking function. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
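The two-stage idea can be sketched as a cheap threshold filter followed by ranking of the survivors. The scoring functions below are illustrative stand-ins for the rough-set thresholding and pattern-based ranking actually used in the thesis:

```python
def topic_score(doc_terms, profile):
    """Stage 1: sum of profile weights for terms present in the document."""
    return sum(profile.get(t, 0.0) for t in doc_terms)

def rank_survivors(docs, profile, threshold):
    """Filter out likely-irrelevant documents, then rank what remains."""
    scored = [(d, topic_score(set(d.split()), profile)) for d in docs]
    survivors = [(d, s) for d, s in scored if s >= threshold]
    # Stage 2 would re-score survivors with pattern-based evidence; here we
    # just sort by the stage-1 score to keep the sketch self-contained.
    return sorted(survivors, key=lambda p: p[1], reverse=True)

profile = {"filtering": 0.9, "pattern": 0.8, "cooking": 0.0}
docs = ["pattern mining for filtering", "cooking with garlic"]
print(rank_survivors(docs, profile, threshold=0.5))
```

The efficiency argument is visible in the structure: the expensive stage-2 computation only ever runs on the (small) survivor list, not the full stream.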
Abstract:
In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information", or NGMI, which is used to segment Chinese documents into n-character words or phrases, using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort required to prepare and maintain manually segmented Chinese text for training purposes, and to manually maintain ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words; NGMI extends this to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
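The underlying mutual-information criterion can be sketched as follows: estimate character and character-pair frequencies from a corpus, then place a word boundary wherever the pointwise mutual information between adjacent characters falls below a threshold. This toy version handles only the 2-character case that NGMI generalises, and the Latin-letter corpus and threshold are illustrative:

```python
import math
from collections import Counter

def build_stats(corpus):
    """Count single characters and adjacent character pairs."""
    chars, pairs = Counter(), Counter()
    for text in corpus:
        chars.update(text)
        pairs.update(text[i:i + 2] for i in range(len(text) - 1))
    return chars, pairs

def pmi(a, b, chars, pairs, total):
    """Pointwise mutual information of the adjacent pair (a, b)."""
    p_ab = pairs[a + b] / max(total - 1, 1)
    p_a, p_b = chars[a] / total, chars[b] / total
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

def segment(text, chars, pairs, total, threshold=0.0):
    words, current = [], text[0]
    for a, b in zip(text, text[1:]):
        if pmi(a, b, chars, pairs, total) >= threshold:
            current += b           # strong association: same word
        else:
            words.append(current)  # weak association: word boundary
            current = b
    words.append(current)
    return words

corpus = ["abab", "abab", "cdcd"]  # "ab" and "cd" behave like words
chars, pairs = build_stats(corpus)
total = sum(chars.values())
print(segment("abcd", chars, pairs, total))  # -> ['ab', 'cd']
```

Because "a" is always followed by "b" in the corpus (and "c" by "d") while "b" is never followed by "c", the PMI dips exactly at the true word boundary, which is the intuition NGMI builds on for longer n-grams.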
Abstract:
Objective: To summarise the extent to which narrative text fields in administrative health data are used to gather information about the event resulting in presentation to a health care provider for treatment of an injury, and to highlight best practice approaches to conducting narrative text interrogation for injury surveillance purposes.
Design: Systematic review.
Data sources: Electronic databases searched included CINAHL, Google Scholar, Medline, Proquest, PubMed and PubMed Central. Snowballing strategies were employed by searching the bibliographies of retrieved references to identify relevant associated articles.
Selection criteria: Papers were selected if the study used a health-related database and if the study objectives were to (a) use text fields to identify injury cases or to extract additional information on injury circumstances not available from coded data, (b) use text fields to assess the accuracy of coded data fields for injury-related cases, or (c) describe methods/approaches for extracting injury information from text fields.
Methods: The papers identified through the search were independently screened by two authors for inclusion, resulting in 41 papers selected for review. Due to heterogeneity between studies, meta-analysis was not performed.
Results: The majority of papers reviewed focused on describing injury epidemiology trends using coded data and text fields to supplement coded data (28 papers), with these studies demonstrating the value of text data for providing more specific information beyond what had been coded, to enable case selection or provide circumstantial information. Caveats were expressed regarding the consistency and completeness of recording of text information, resulting in underestimates when using these data. Four coding validation papers were reviewed, with these studies showing the utility of text data for validating and checking the accuracy of coded data. Seven studies (9 papers) described methods for interrogating injury text fields for systematic extraction of information, with a combination of manual and semi-automated methods used to refine and develop algorithms for extraction and classification of coded data from text. Quality assurance approaches to assessing the robustness of the text extraction methods were discussed in only 8 of the epidemiology papers and 1 of the coding validation papers. All of the text interrogation methodology papers described systematic approaches to ensuring the quality of the approach.
Conclusions: Manual review and coding approaches, text search methods, and statistical tools have been utilised to extract data from narrative text and translate it into useable, detailed injury event information. These techniques can be, and have been, applied to administrative datasets to identify specific injury types and add value to previously coded injury datasets. Only a few studies thoroughly described the methods used for text mining, and fewer than half of the studies reviewed used or described quality assurance methods for ensuring the robustness of the approach. New techniques utilising semi-automated computerised approaches and Bayesian/clustering statistical methods offer the potential to further develop and standardise the analysis of narrative text for injury surveillance.
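The kind of keyword-based text search used by many of the reviewed studies can be sketched as a set of regular expressions applied to the narrative field. The mechanism categories and patterns below are invented for illustration, not drawn from any of the reviewed papers:

```python
import re

# Hypothetical injury-mechanism patterns for flagging cases in free text.
MECHANISM_PATTERNS = {
    "fall": re.compile(r"\b(fell|fall|slipped|tripped)\b", re.IGNORECASE),
    "burn": re.compile(r"\b(burn|burnt|scald)\w*\b", re.IGNORECASE),
    "dog bite": re.compile(r"\bdog\b.*\bbit", re.IGNORECASE),
}

def classify_narrative(text):
    """Return every mechanism whose pattern matches the narrative."""
    return [m for m, pat in MECHANISM_PATTERNS.items() if pat.search(text)]

print(classify_narrative("Pt slipped on wet floor and fell down stairs"))
print(classify_narrative("Scalded by hot water from kettle"))
```

A pattern list like this makes the quality-assurance concern concrete: abbreviations, misspellings, and inconsistent recording in narratives mean such rules need validation against manually coded cases, which is exactly what fewer than half the reviewed studies reported doing.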