460 results for information systems applications
Abstract:
Substantial research efforts have been expended to deal with the complexity of concurrent systems that is inherent to their analysis, e.g., works that tackle the well-known state space explosion problem. Approaches differ in the classes of properties that they are able to suitably check and this is largely a result of the way they balance the trade-off between analysis time and space employed to describe a concurrent system. One interesting class of properties is concerned with behavioral characteristics. These properties are conveniently expressed in terms of computations, or runs, in concurrent systems. This article introduces the theory of untanglings that exploits a particular representation of a collection of runs in a concurrent system. It is shown that a representative untangling of a bounded concurrent system can be constructed that captures all and only the behavior of the system. Representative untanglings strike a unique balance between time and space, yet provide a single model for the convenient extraction of various behavioral properties. Performance measurements in terms of construction time and size of representative untanglings with respect to the original specifications of concurrent systems, conducted on a collection of models from practice, confirm the scalability of the approach. Finally, this article demonstrates practical benefits of using representative untanglings when checking various behavioral properties of concurrent systems.
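The run-based view of behavior described above can be made concrete with a toy sketch. The snippet below is a hypothetical illustration only: it enumerates explicit runs of a tiny bounded transition system and checks a behavioral property over them, rather than constructing an untangling as the article does; the system and all names are invented.

```python
from collections import deque

def runs(transitions, initial, max_len):
    """Enumerate all runs (action sequences) of a bounded transition
    system up to length max_len, via breadth-first exploration."""
    result = []
    queue = deque([(initial, ())])
    while queue:
        state, trace = queue.popleft()
        result.append(trace)
        if len(trace) < max_len:
            for action, target in transitions.get(state, []):
                queue.append((target, trace + (action,)))
    return result

# Toy system: s0 --a--> s1 --b--> s0 (a bounded two-state cycle).
ts = {"s0": [("a", "s1")], "s1": [("b", "s0")]}
all_runs = runs(ts, "s0", 4)

# Behavioral property, checked directly on the runs:
# every occurrence of b is immediately preceded by a.
ok = all(all(t[i - 1] == "a" for i, x in enumerate(t) if x == "b")
         for t in all_runs)
```

The point of the sketch is the trade-off the abstract describes: an explicit collection of runs makes behavioral checks like `ok` trivial to express, at the cost of the space needed to store the runs.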
Abstract:
What are the information practices of teen content creators? In the United States, over two thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience that information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual art work, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens' digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities that occur in the categories of gathering, thinking and creating.
The experiences of information and information actions intersect in the information practices, which are situated within the specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another and this is represented in a graphic model and explanation.
Abstract:
This paper presents a graph-based method to weight medical concepts in documents for the purposes of information retrieval. Medical concepts are extracted from free-text documents using a state-of-the-art technique that maps n-grams to concepts from the SNOMED CT medical ontology. In our graph-based concept representation, concepts are vertices in a graph built from a document; edges represent associations between concepts. This representation naturally captures dependencies between concepts, an important requirement for interpreting medical text, and a feature lacking in bag-of-words representations. We apply existing graph-based term weighting methods to weight medical concepts. Using concepts rather than terms addresses vocabulary mismatch and encapsulates terms belonging to a single medical entity into a single concept. In addition, we extend previous graph-based approaches by injecting domain knowledge that estimates the importance of a concept within the global medical domain. Retrieval experiments on the TREC Medical Records collection show our method outperforms both term and concept baselines. More generally, this work provides a means of integrating background knowledge contained in medical ontologies into data-driven information retrieval approaches.
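The general idea of graph-based concept weighting, though not the authors' actual pipeline, can be sketched as follows: build a co-occurrence graph over the concepts in a document and score vertices with a plain PageRank-style power iteration. The concept sequence and window size below are invented for illustration.

```python
from collections import defaultdict

def concept_graph(concepts, window=2):
    """Build an undirected co-occurrence graph: concepts appearing
    within `window` positions of each other are linked."""
    edges = defaultdict(set)
    for i, c in enumerate(concepts):
        for d in concepts[i + 1 : i + 1 + window]:
            if d != c:
                edges[c].add(d)
                edges[d].add(c)
    return edges

def pagerank(edges, damping=0.85, iters=50):
    """Plain power-iteration PageRank over the concept graph."""
    nodes = list(edges)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        rank = {
            n: (1 - damping) / len(nodes)
               + damping * sum(rank[m] / len(edges[m]) for m in edges[n])
            for n in nodes
        }
    return rank

# Hypothetical sequence of extracted medical concepts from one document.
doc = ["heart", "failure", "hypertension", "heart", "ischemia"]
weights = pagerank(concept_graph(doc))
```

Well-connected concepts (here, "heart") receive higher weight than peripheral ones, which is the behavior bag-of-words frequency weighting cannot capture.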
Abstract:
To protect health information security, cryptography plays an important role in establishing confidentiality, authentication, integrity and non-repudiation. Keys used for encryption/decryption and digital signing must be managed in a safe, secure, effective and efficient fashion. The certificate-based Public Key Infrastructure (PKI) scheme may seem to be a common way to support information security; however, there is still a lack of successful large-scale certificate-based PKI deployments in the world. In addressing the limitations of the certificate-based PKI scheme, this paper proposes a non-certificate-based key management scheme for a national e-health implementation. The proposed scheme eliminates certificate management and complex certificate validation procedures while still maintaining security. It is also believed that this study will create a new dimension in the provision of security for the protection of health information in a national e-health environment.
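To give a feel for certificate-free key management in general, and emphatically not the paper's actual scheme, the toy below shows a trusted key generation centre deriving each user's key directly from a master secret and the user's identity, so no certificate is needed to bind identity to key. This is an illustration only and not production cryptography; all names are invented.

```python
import hashlib
import hmac
import os

class KeyGenerationCentre:
    """Toy certificate-free key management: a trusted centre derives each
    user's key from a master secret and the user's identity string, so the
    identity-to-key binding needs no certificate. Illustration only."""

    def __init__(self):
        # Master secret held by the centre (e.g. a national e-health authority).
        self.master = os.urandom(32)

    def user_key(self, identity: str) -> bytes:
        # Deterministic per-identity key derivation via HMAC-SHA256.
        return hmac.new(self.master, identity.encode(), hashlib.sha256).digest()

kgc = KeyGenerationCentre()
k1 = kgc.user_key("dr.smith@hospital.example")
k2 = kgc.user_key("dr.smith@hospital.example")
```

Because derivation is deterministic, the same identity always yields the same key, while distinct identities yield independent keys; there is no certificate to issue, distribute, validate, or revoke in this toy model.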
Abstract:
Phenomenography is a research approach devised to allow the investigation of the varying ways in which people experience aspects of their world. Whilst growing attention is being paid to interpretative research in LIS, it is not always clear how the outcomes of such research can be used in practice. This article explores the potential contribution of phenomenography in advancing the application of phenomenological and hermeneutic frameworks to LIS theory, research and practice. In phenomenography we find a research tool which, in revealing variation, uncovers everyday understandings of phenomena and provides outcomes which are readily applicable to professional practice. The outcomes may be used in human-computer interface design, enhancement, implementation and training, in the design and evaluation of services, and in education and training for both end users and information professionals. A proposed research territory for phenomenography in LIS includes investigating qualitative variation in the experienced meaning of: 1) information and its role in society; 2) LIS concepts and principles; 3) LIS processes; and 4) LIS elements.
Abstract:
Business Process Management (BPM) is the art and science of how work should be performed in an organization in order to ensure consistent outputs and to take advantage of improvement opportunities, e.g. reducing costs, execution times or error rates. Importantly, BPM is not about improving the way individual activities are performed, but rather about managing entire chains of events, activities and decisions that ultimately produce added value for an organization and its customers. This textbook encompasses the entire BPM lifecycle, from process identification to process monitoring, covering along the way process modelling, analysis, redesign and automation. Concepts, methods and tools from business management, computer science and industrial engineering are blended into one comprehensive and inter-disciplinary approach. The presentation is illustrated using the BPMN industry standard defined by the Object Management Group and widely endorsed by practitioners and vendors worldwide. In addition to explaining the relevant conceptual background, the book provides dozens of examples, more than 100 hands-on exercises – many with solutions – as well as numerous suggestions for further reading. The textbook is the result of many years of combined teaching experience of the authors, both at the undergraduate and graduate levels as well as in the context of professional training. Students and professionals from both business management and computer science will benefit from the step-by-step style of the textbook and its focus on fundamental concepts and proven methods. 
Lecturers will appreciate the class-tested format and the additional teaching material available on the accompanying website fundamentals-of-bpm.org.
Abstract:
The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
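The flavour of a 2D topological query over a building model can be sketched as follows. This is a hypothetical Python illustration of a "contains" predicate over bounding-box footprints, not the authors' ifcXML/XQuery implementation; the room and element names are invented.

```python
from typing import NamedTuple

class Box(NamedTuple):
    """2D axis-aligned footprint of a building element or space (metres)."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def contains(outer: Box, inner: Box) -> bool:
    """Topological 'contains' predicate: inner lies fully within outer."""
    return (outer.xmin <= inner.xmin and outer.ymin <= inner.ymin
            and inner.xmax <= outer.xmax and inner.ymax <= outer.ymax)

def elements_in_space(space: Box, elements: dict) -> list:
    """Answer the construction query 'which elements fall inside this space?'."""
    return sorted(name for name, box in elements.items()
                  if contains(space, box))

# Toy model: one room and two elements; the duct crosses the room boundary.
room = Box(0, 0, 10, 8)
elems = {"column-1": Box(1, 1, 2, 2), "duct-7": Box(9, 7, 12, 9)}
```

Queries like this make explicit the spatial relationships that are only implicit in a BIM's raw geometry, which is the gap the paper's approach addresses at full scale.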
Abstract:
Due to the development of XML and other data models such as OWL and RDF, sharing data is an increasingly common task, since these data models allow simple syntactic translation of data between applications. However, in order for data to be shared semantically, there must be a way to ensure that concepts are the same. One approach is to employ commonly used schemas, called standard schemas, which help guarantee that syntactically identical objects have semantically similar meanings. As a result of the spread of data sharing, there has been widespread adoption of standard schemas in a broad range of disciplines and for a wide variety of applications within a very short period of time. However, standard schemas are still in their infancy and have not yet matured or been thoroughly evaluated. It is imperative that the data management research community take a closer look at how well these standard schemas have fared in real-world applications to identify not only their advantages, but also the operational challenges that real users face. In this paper, we both examine the usability of standard schemas in a comparison that spans multiple disciplines, and describe our first step toward resolving some of these issues in our Semantic Modeling System. We evaluate our Semantic Modeling System through a careful case study of the use of standard schemas in architecture, engineering, and construction, which we conducted with domain experts. We discuss how our Semantic Modeling System can help address the broader problem, and also discuss a number of challenges that still remain.
Abstract:
The convergence of pervasive technologies, techno-centric customers, the emergence of digitized channels, and an overabundance of user-friendly retail applications is having a profound impact on the retail experience, leading to the advent of 'everywhere retailing'. The rapid uptake of complementary digital assets and smart mobile applications is revolutionizing the relationship of retailers with their customers and suppliers. Retail firms are increasingly investing substantial resources in dynamic Customer Relationship Management systems (D-CRM / U-CRM) to better engage with customers and to sense and respond quickly (firm agility) to their demands. However, unlike traditional CRM systems, engagement with U-CRM systems requires that firms be hypersensitive to volatile customer needs and wants. Following the notion of firm agility, this study attempts to develop a framework to understand such unforeseen benefits and issues of U-CRM. This research-in-progress paper reports an a-priori framework comprising 62 U-CRM benefits derived through an archival analysis of the literature.
Abstract:
A global, online quantitative study among 300 consumers of digital technology products found that the most reliable information sources were friends, family, or word of mouth (WOM) from someone they knew, followed by expert product reviews and product reviews written by other consumers. The most unreliable information sources were advertising or infomercials, automated recommendations based on purchasing patterns, and retailers. While very few consumers evaluated products online, rating products and joining online discussions were more frequent activities. The most popular social media websites for reviews were Facebook, Twitter, Amazon and eBay, indicating the importance of WOM in social networks and in online media spaces that feature product reviews, as WOM is the most persuasive form of information in both online and offline social networks. These results suggest that 'social customers' must be considered an integral part of a marketing strategy.
Abstract:
This paper presents the findings from the first phase of a larger study into the information literacy of website designers. Using a phenomenographic approach, it maps the variation in experiencing the phenomenon of information literacy from the viewpoint of website designers. The current results reveal important insights into the lived experience of this group of professionals. Analysis of the data has identified five different ways in which website designers experience information literacy: problem-solving, using best practices, using a knowledge base, building a successful website, and being part of a learning community of practice. As there is presently relatively little research in the area of workplace information literacy, this study provides important additional insights into our understanding of information literacy in the workplace, especially in the specific context of website design. Such understandings are of value to library and information professionals working with web professionals either within or beyond libraries. These understandings may also enable information professionals to take a more proactive role in the website design industry. Finally, the obtained knowledge will contribute to the education of both website-design and library and information science (LIS) students.
Abstract:
Measures of semantic similarity between medical concepts are central to a number of techniques in medical informatics, including query expansion in medical information retrieval. Previous work has mainly considered thesaurus-based path measures of semantic similarity and has not compared different corpus-driven approaches in depth. We evaluate the effectiveness of eight common corpus-driven measures in capturing semantic relatedness and compare these against human-judged concept pairs assessed by medical professionals. Our results show that certain corpus-driven measures correlate strongly (approximately 0.8) with human judgements. An important finding is that performance was significantly affected by the choice of corpus used to prime the measure, i.e., the corpus used as the evidence from which corpus-driven similarities are drawn. This paper provides guidelines for the implementation of semantic similarity measures for medical informatics and concludes with implications for medical information retrieval.
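One simple corpus-driven measure of the kind compared in such studies, though not necessarily one of the paper's eight, is cosine similarity between co-occurrence context vectors. The sketch below is a toy illustration; the three-sentence "corpus" and the window size are invented.

```python
import math
from collections import Counter

def context_vector(corpus, term, window=2):
    """Corpus-driven representation of a term: counts of words that
    co-occur with it within +/- `window` positions."""
    vec = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == term:
                for c in sent[max(0, i - window): i + window + 1]:
                    if c != term:
                        vec[c] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy priming corpus; related terms share contexts, unrelated ones do not.
corpus = [
    "patient with myocardial infarction treated".split(),
    "patient with heart attack treated".split(),
    "weather report for tuesday".split(),
]
sim = cosine(context_vector(corpus, "infarction"),
             context_vector(corpus, "attack"))
```

The sketch also makes the paper's key finding tangible: the similarity obtained depends entirely on the priming corpus, since the context vectors are drawn from it.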
Abstract:
A technologically innovative study was undertaken across two suburbs in Brisbane, Australia, to assess socioeconomic differences in women's use of the local environment for work, recreation, and physical activity. Mothers from high and low socioeconomic suburbs were instructed to continue with their usual daily routines, and to use mobile phone applications (Facebook Places, Twitter, and Foursquare) to 'check in' at each location and destination they reached during a one-week period. These smartphone applications track travel logistics via built-in geographical information systems (GIS), which record participants' points of latitude and longitude at each destination they reach. Location data were downloaded to Google Earth and Excel for analysis. Women provided additional qualitative data via text regarding the reasons for and social contexts of their travel. We analysed 2183 'check-ins' for 54 women in this pilot study to gain quantitative, qualitative, and spatial data on human-environment interactions. Data were gathered on distances travelled, mode of transport, reason for travel, and social context of travel, and categorised in terms of physical activity type: walking, running, sports, gym, cycling, or playing in the park. We found that the women in both suburbs had similar daily routines, with the exception of physical activity. We identified 15% of 'check-ins' in the lower socioeconomic group as qualifying for the physical activity category, compared with 23% in the higher socioeconomic group. This was explained by more daily walking for transport (1.7 km vs 0.2 km) and less car travel each week (28.km vs 48.4 km) in the higher socioeconomic suburb. We gained insights regarding the socio-cultural influences on these differences via additional qualitative data. 
We discuss the benefits and limitations of using new technologies and Google Earth with implications for informing future physical and social aspects of urban design, and health promotion in socioeconomically diverse cities.
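The core computations behind such a check-in analysis can be sketched in a few lines: great-circle distance between recorded latitude/longitude points, and the share of check-ins falling in the physical-activity category. This is an illustrative sketch, not the study's actual analysis code; the sample check-in list is invented.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points,
    such as consecutive smartphone check-ins."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Categories counted as physical activity in the abstract above.
ACTIVE = {"walking", "running", "sports", "gym", "cycling", "park"}

def activity_share(checkin_categories):
    """Fraction of check-ins whose category counts as physical activity."""
    return sum(c in ACTIVE for c in checkin_categories) / len(checkin_categories)

week = ["car", "walking", "car", "gym", "car", "park", "car", "car"]
share = activity_share(week)
```

Distances per trip and the activity share per participant are exactly the quantities (e.g. the 15% vs 23% comparison) reported in the study.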
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
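The recognition step, matching probe frames against per-identity Gaussian local models by fusing point-to-model distances, can be illustrated with a toy sketch. For simplicity this uses diagonal covariances and 2D "features"; the actual method learns full per-model covariances via heteroscedastic PLDA, and all data below is invented.

```python
import math

def mahalanobis_diag(x, mean, var):
    """Point-to-model distance from a feature vector to a Gaussian local
    model with diagonal covariance. Heteroscedastic: each local model
    carries its own variances."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def classify(probe_frames, gallery):
    """Match probe frames against each identity's local Gaussian models,
    fusing (here: averaging) per-frame distances to the nearest model."""
    def score(models):
        return sum(min(mahalanobis_diag(f, mean, var) for mean, var in models)
                   for f in probe_frames) / len(probe_frames)
    return min(gallery, key=lambda name: score(gallery[name]))

# Toy gallery: one local model (mean, variances) per identity.
gallery = {
    "alice": [((0.0, 0.0), (1.0, 1.0))],
    "bob":   [((5.0, 5.0), (1.0, 1.0))],
}
probe = [(0.2, -0.1), (0.1, 0.3)]
```

Averaging per-frame distances is only one possible fusion rule; the point of the sketch is that each identity is a collection of distributions, not a single point, and the probe is scored against the nearest one.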
Abstract:
The tertiary sector is an important employer and its growth is well above average. The Texo project's aim is to support this development by making services tradable. The composition of new or value-added services is a cornerstone of the proposed architecture. It is, however, intended to cater for build-time. Yet, at run-time, unforeseen exceptions may occur and users' requirements may change. Varying circumstances require immediate sensemaking of the situation's context and call for prompt extensions of existing services. Lightweight composition technology provided by the RoofTop project enables domain experts to create simple widget-like applications, also termed enterprise mashups, without extensive methodological skills. In this way RoofTop can assist and extend the idea of service delivery through the Texo platform and is a further step towards a next-generation internet of services.