847 results for Information interfaces and presentation: Miscellaneous.
Abstract:
The research question of this thesis was how knowledge can be managed with information systems. Information systems can support but not replace knowledge management. Systems can mainly store epistemic organisational knowledge included in content, and process data and information. Certain value can be achieved by adding communication technology to systems; communication itself, however, cannot be managed. A new layer between communication and manageable information was named knowformation. Knowledge management literature was surveyed, together with species of information from philosophy, physics, communication theory, and information systems science. Positivism, post-positivism, and critical theory were studied, but knowformation in extended organisational memory seemed to be socially constructed. A memory management model of an extended enterprise (M3.exe) and the knowformation concept were the findings from iterative case studies covering data, information, and knowledge management systems. The cases varied from groups towards the extended organisation. Systems were investigated, and administrators, users (knowledge workers), and managers were interviewed. The model building required alternative sets of data, information, and knowledge instead of the traditional pyramid, and the explicit-tacit dichotomy was also reconsidered. As human knowledge is the final aim of all data and information in the systems, the distinction between management of information and management of people was harmonised. Information systems were classified as the core of organisational memory. The content of the systems lies in practice between communication and presentation. First, the epistemic criterion of knowledge is required neither in the knowledge management literature nor of the content of the systems. Second, systems deal mostly with containers, whereas the knowledge management literature deals with applied knowledge. The construction of reality based on system content and communication also supports the knowformation concept. Knowformation belongs to the memory management model of an extended enterprise (M3.exe), which is divided into horizontal and vertical key dimensions. Vertically, processes deal with content that can be managed, whereas communication can only be supported, mainly by infrastructure. Horizontally, the right-hand side of the model contains systems and the left-hand side content, and the two should be independent of each other. A strategy based on the model was defined.
Abstract:
A prototype presentation system base is described. It offers mechanisms, tools, and ready-made parts for building user interfaces. A general user interface model underlies the base, organized around the concept of a presentation: a visible text or graphic for conveying information. The base and model emphasize domain independence and style independence, to apply to the widest possible range of interfaces. The primitive presentation system model treats the interface as a system of processes maintaining a semantic relation between an application data base and a presentation data base, the symbolic screen description containing presentations. A presenter continually updates the presentation data base from the application data base. The user manipulates presentations with a presentation editor. A recognizer translates the user's presentation manipulations into application data base commands. The primitive presentation system can be extended to model more complex systems by attaching additional presentation systems. In order to illustrate the model's generality and descriptive capabilities, extended model structures for several existing user interfaces are discussed. The base provides support for building the application and presentation data bases, linked together into a single, uniform network, including descriptions of classes of objects as well as the objects themselves. The base provides an initial presentation data base network, graphics to continually display it, and editing functions. A variety of tools and mechanisms help create and control presenters and recognizers. To demonstrate the base's utility, three interfaces to an operating system were constructed, embodying different styles: icons, menu, and graphical annotation.
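The primitive presentation system cycle described above (presenter, presentation editor, recognizer) can be pictured with a minimal sketch. The class and method names below are hypothetical illustrations, not the paper's actual system; the sketch only shows a presenter refreshing the presentation data base from the application data base and a recognizer turning an edited presentation back into an application command.

```python
# Minimal sketch of the primitive presentation system model described above.
# All names are hypothetical illustrations, not the paper's actual API.

class ApplicationDB:
    """Application data base: the domain objects being presented."""
    def __init__(self):
        self.objects = {}          # name -> value

    def apply_command(self, name, value):
        self.objects[name] = value

class PresentationDB:
    """Symbolic screen description: presentations keyed by object name."""
    def __init__(self):
        self.presentations = {}    # name -> visible text

class Presenter:
    """Continually updates the presentation data base from the application data base."""
    def update(self, app_db, pres_db):
        for name, value in app_db.objects.items():
            pres_db.presentations[name] = f"{name}: {value}"

class Recognizer:
    """Translates the user's presentation manipulations into application commands."""
    def recognize(self, edited_text):
        name, _, value = edited_text.partition(": ")
        return name, value

# One round trip: present, let the user edit a presentation, recognize the edit.
app, pres = ApplicationDB(), PresentationDB()
app.apply_command("status", "idle")
Presenter().update(app, pres)                         # application -> presentation
name, value = Recognizer().recognize("status: busy")  # user edit -> command
app.apply_command(name, value)                        # command applied to application
```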
Abstract:
The use of laser-accelerated protons as a particle probe for the detection of electric fields in plasmas has led in recent years to a wealth of novel information regarding the ultrafast plasma dynamics following high intensity laser-matter interactions. The high spatial quality and short duration of these beams have been essential to this purpose. We will discuss some of the most recent results obtained with this diagnostic at the Rutherford Appleton Laboratory (UK) and at LULI - Ecole Polytechnique (France), also applied to conditions of interest to conventional Inertial Confinement Fusion. In particular, the technique has been used to measure electric fields responsible for proton acceleration from solid targets irradiated with ps pulses, magnetic fields formed by ns pulse irradiation of solid targets, and electric fields associated with the ponderomotive channelling of ps laser pulses in under-dense plasmas.
Abstract:
A computer code has been developed to simulate and study the evolution of ion charge states inside the trap region of an electron beam ion trap. In addition to atomic physics phenomena previously included in similar codes, such as electron impact ionization, radiative recombination, and charge exchange, several aspects of the relevant physics, such as dielectronic recombination, ionization heating, and ion cloud expansion, have been included for the first time in the model. The code was developed using object-oriented concepts with database support, making it readable, accurate, and well organized. The simulation results show good agreement with various experiments and give useful information for the selection of operating conditions and for experiment design.
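As a hedged illustration of the charge-state balance such a simulation integrates (not the actual code described, which uses detailed rate coefficients and additional processes such as dielectronic recombination, ionization heating, and ion cloud expansion), the sketch below steps a simple rate equation dN_q/dt = I_{q-1}N_{q-1} - (I_q + R_q)N_q + R_{q+1}N_{q+1} with made-up constant ionization and recombination rates.

```python
import numpy as np

# Illustrative charge-state balance only; the rates are made up, and the code
# described in the abstract includes many more physical processes.

Z = 6                                                      # highest charge state tracked
I = np.array([50.0, 40.0, 30.0, 20.0, 10.0, 5.0, 0.0])     # ionization rate q -> q+1 (1/s)
R = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])          # recombination rate q -> q-1 (1/s)

def step(N, dt):
    dN = np.zeros_like(N)
    for q in range(Z + 1):
        dN[q] -= (I[q] + R[q]) * N[q]            # losses from state q
        if q > 0:
            dN[q] += I[q - 1] * N[q - 1]         # ionization feeding from q-1
        if q < Z:
            dN[q] += R[q + 1] * N[q + 1]         # recombination feeding from q+1
    return N + dt * dN

N = np.zeros(Z + 1)
N[0] = 1.0                                       # start fully neutral
for _ in range(20000):
    N = step(N, 1e-4)                            # explicit Euler, small time step
print(np.round(N / N.sum(), 3))                  # approximate charge-state fractions
```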
Abstract:
Much interest now focuses on the use of the contingent valuation method (CVM) to assess non-use value of environmental goods. The paper reviews recent literature and highlights particular problems of information provision and respondent knowledge, comprehension and cognition. These must be dealt with by economists in designing CVM surveys for eliciting non-use values. Cognitive questionnaire design methods are presented which invoke concepts from psychology and tools from cognitive survey design (focus groups and verbal reports) to reduce a complex environmental good into a meaningful commodity that can be valued by respondents in a contingent market. This process is illustrated with examples from the authors' own research valuing alternative afforestation programmes. -Authors
Abstract:
The Virtual Lightbox for Museums and Archives (VLMA) is a tool for collecting and reusing, in a structured fashion, the online contents of museums and archive datasets. It is not restricted to datasets with visual components, although VLMA includes a lightbox service that enables comparison and manipulation of visual information. With VLMA, one can browse and search collections, construct personal collections, annotate them, export these collections to XML or Impress (Open Office) presentation format, and share collections with other VLMA users. VLMA was piloted as an e-Learning tool as part of JISC’s e-Learning focus in its first phase (2004-2005), and in its second phase (2005-2006) it has incorporated new partner collections while improving and expanding interfaces and services. This paper concerns its development as a research and teaching tool, especially for teachers using museum collections, and discusses the recent development of VLMA.
Abstract:
This paper uses Shannon's information theory to give a quantitative definition of information flow in systems that transform inputs to outputs. For deterministic systems, the definition is shown to specialise to a simpler form when the information source and the known inputs jointly determine the outputs. For this special case, the definition is related to the classical security condition of non-interference and an equivalence is established between non-interference and independence of random variables. Quantitative information flow for deterministic systems is then presented in relational form. With this presentation, it is shown how relational parametricity can be used to derive upper and lower bounds on information flows through families of functions defined in the second order lambda calculus.
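As a rough, hedged illustration of the kind of quantity a Shannon-based definition of information flow measures (not the paper's exact formalism), the sketch below computes, for a small deterministic program with a uniformly distributed 4-bit secret and a fixed known input, the entropy of the observable output, which for a deterministic program equals the information the output carries about the secret.

```python
from collections import Counter
from math import log2

# Illustrative sketch only: how much a deterministic program's output reveals
# about a uniform 4-bit secret, in the spirit of Shannon-based quantitative
# information flow. The program and inputs are invented for the example.

def program(secret, known):
    return secret & known          # output observed by the attacker

def leakage(known, secrets=range(16)):
    # For a deterministic program and a uniform secret, the mutual information
    # between secret and output equals the entropy of the output distribution.
    outputs = Counter(program(s, known) for s in secrets)
    total = sum(outputs.values())
    return -sum((n / total) * log2(n / total) for n in outputs.values())

print(leakage(known=0b0000))   # 0.0 bits: output is constant, nothing flows
print(leakage(known=0b0011))   # 2.0 bits: two secret bits are revealed
print(leakage(known=0b1111))   # 4.0 bits: the whole secret flows to the output
```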
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud, or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The Integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model, and outline the roadmap for future development.
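The separation of system metadata from science metadata, with elements common to all resources plus type-specific extensions, can be sketched as follows; the class and field names are illustrative assumptions and not HydroShare's actual Resource Data Model schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of a resource data model separating system metadata from
# science metadata; names and fields are assumptions, not HydroShare's schema.

@dataclass
class SystemMetadata:
    resource_id: str
    owner: str
    created: str                 # ISO 8601 timestamp
    sharing_status: str = "private"

@dataclass
class ScienceMetadata:
    title: str
    abstract: str
    keywords: List[str] = field(default_factory=list)
    extra: Dict[str, str] = field(default_factory=dict)   # type-specific elements

@dataclass
class Resource:
    system: SystemMetadata
    science: ScienceMetadata
    resource_type: str           # e.g. "TimeSeries", "Model", "ModelInstance"

ts = Resource(
    system=SystemMetadata("abc123", "jdoe", "2014-05-01T00:00:00Z"),
    science=ScienceMetadata("Logan River discharge", "Hourly discharge series",
                            ["streamflow"], {"variable": "discharge", "units": "m^3/s"}),
    resource_type="TimeSeries",
)
```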
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the access arrangement a researcher prefers. By decoupling the data model from data persistence, it becomes much easier to use, for instance, relational databases interchangeably to provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series for SWIFT efficiently.
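The decoupling of the configuration data model from on-disk persistence can be sketched as below; the class names, JSON layout, and tab-separated layout are hypothetical, intended only to show how a JSON backend and a legacy text backend can be swapped behind one interface.

```python
import json
from abc import ABC, abstractmethod

# Hypothetical sketch of decoupling a model-configuration data model from its
# on-disk persistence, so that JSON, tab-separated text, or a database backend
# can be swapped without touching the configuration objects themselves.

class Config:
    def __init__(self, subareas):
        self.subareas = subareas              # e.g. {"subarea_1": {"area_km2": 12.5}}

class Persistence(ABC):
    @abstractmethod
    def save(self, config, path): ...
    @abstractmethod
    def load(self, path): ...

class JsonPersistence(Persistence):
    def save(self, config, path):
        with open(path, "w") as f:
            json.dump(config.subareas, f, indent=2)
    def load(self, path):
        with open(path) as f:
            return Config(json.load(f))

class TsvPersistence(Persistence):
    """Legacy-style tab-separated format: one subarea per line."""
    def save(self, config, path):
        with open(path, "w") as f:
            for name, attrs in config.subareas.items():
                f.write(f"{name}\t{attrs['area_km2']}\n")
    def load(self, path):
        subareas = {}
        with open(path) as f:
            for line in f:
                name, area = line.rstrip("\n").split("\t")
                subareas[name] = {"area_km2": float(area)}
        return Config(subareas)

backend: Persistence = JsonPersistence()        # or TsvPersistence() for legacy files
backend.save(Config({"subarea_1": {"area_km2": 12.5}}), "catchment.json")
```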
Abstract:
Recently, two international standards organizations, ISO and OGC, have carried out standardization work for GIS. Current standardization work for providing interoperability among GIS databases focuses on the design of open interfaces, but it has not considered procedures and methods for designing river geospatial data. As a result, river geospatial data has its own model, and when the data are shared through open interfaces among heterogeneous GIS databases, differences between models result in the loss of information. In this study, a plan was suggested both to respond to these changes in the information environment and to provide a future Smart River-based river information service, by understanding the current state of the river geospatial data model and by improving and redesigning the database. Primary and foreign keys, which distinguish attribute information and entity linkages, were therefore redefined to increase usability. The database structure for attribute information and the entity relationship diagram were redefined to redesign the linkages among tables from the perspective of a river standard database. In addition, this study sought to expand the current supplier-oriented operating system into a demand-oriented operating system by establishing efficient management of river-related information and a utilization system capable of adapting to changes in the river management paradigm.
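The kind of primary/foreign-key linkage between river entities and their attribute tables can be sketched as follows; the table and column names are illustrative assumptions, not the study's actual river standard database schema.

```python
import sqlite3

# Illustrative sketch of linking river entities to attribute tables through
# primary and foreign keys; table and column names are assumptions, not the
# study's actual river standard database schema.

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE river_reach (
    reach_id   TEXT PRIMARY KEY,      -- entity identifier shared across tables
    river_name TEXT NOT NULL,
    geometry   BLOB                   -- geospatial feature (e.g. WKB polyline)
);
CREATE TABLE reach_attributes (
    attr_id    INTEGER PRIMARY KEY,
    reach_id   TEXT NOT NULL REFERENCES river_reach(reach_id),
    attr_name  TEXT NOT NULL,         -- e.g. bed slope, design flood level
    attr_value REAL
);
""")
conn.execute("INSERT INTO river_reach VALUES ('R001', 'Han River', NULL)")
conn.execute("INSERT INTO reach_attributes VALUES (1, 'R001', 'bed_slope', 0.0004)")

# A join through the foreign key recovers attribute information per entity.
for row in conn.execute("""
    SELECT r.river_name, a.attr_name, a.attr_value
    FROM river_reach r JOIN reach_attributes a ON a.reach_id = r.reach_id
"""):
    print(row)
```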
Abstract:
The ability of public health practitioners (PHPs) to work efficiently and effectively is negatively impacted by their lack of knowledge of the broad range of evidence-based practice information resources and tools that can be utilized to guide them in their development of health policies and programs. This project, a three-hour continuing education hands-on workshop with supporting resources, was designed to increase knowledge and skills related to these resources. The workshop was presented as a pre-conference continuing education program for the Texas Public Health Association (TPHA) 2008 Annual Conference. Topics included: identification of evidence-based practice resources to aid in the development of policies and programs; identification of sources of publicly available data; utilization of data for community assessments; and accessing and searching the literature through a collection of databases available to all citizens of Texas. Supplemental resources included a blog that served as a gateway to the resources explored during the presentation, a community assessment workbook that incorporates both Healthy People 2010 objectives and links to reliable sources of data, and handouts providing additional instruction on the use of the resources covered during the workshop. Before- and after-workshop surveys based on Kirkpatrick's 4-level model of evaluation and the Theory of Planned Behavior were administered. Of the questions related to the trainer, the workshop, and the usefulness of the workshop, participants gave "Good" to "Excellent" responses to all but one question. Confidence levels overall increased by a statistically significant amount; measurements of attitude, social norms, and control showed no significant differences before and after the workshop. Lastly, participants indicated they were likely to use resources shown during the workshop within one to three months on average. The workshop and creation of supplemental resources served as a pilot for a funded project that will be continued with the development and delivery of four 4-week-long webinar-based training sessions to be completed by December 2008.
Abstract:
This paper describes a novel architecture to introduce automatic annotation and processing of semantic sensor data within context-aware applications. Based on well-known statechart technologies, and represented using the W3C SCXML language combined with Semantic Web technologies, our architecture is able to provide enriched, higher-level semantic representations of the user's context. This capability to detect and model relevant user situations allows seamless modeling of the actual interaction situation, which can be integrated during the design of multimodal user interfaces (also based on SCXML) so that they can be adequately adapted. The final result of this contribution can therefore be described as a flexible context-aware SCXML-based architecture, suitable both for designing a wide range of multimodal context-aware user interfaces and for implementing the automatic enrichment of sensor data, making it available to the entire Semantic Sensor Web.
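As a rough illustration of the underlying pattern, lifting raw sensor readings into higher-level context states with a state machine, the sketch below uses a plain transition table instead of the SCXML and Semantic Web machinery the paper actually employs; the states, events, and thresholds are invented for the example.

```python
# Rough illustration of lifting raw sensor data into higher-level context
# states with a small state machine; states, events, and thresholds are
# invented and do not reflect the paper's SCXML models.

TRANSITIONS = {
    ("idle",       "motion_detected"): "present",
    ("present",    "no_motion_5min"):  "idle",
    ("present",    "noise_high"):      "in_meeting",
    ("in_meeting", "noise_low"):       "present",
}

def sensor_event(reading):
    """Map a raw sensor reading (sensor id, value) to a symbolic event."""
    sensor, value = reading
    if sensor == "pir" and value > 0:
        return "motion_detected"
    if sensor == "mic":
        return "noise_high" if value > 60 else "noise_low"
    return "no_motion_5min"

state = "idle"
for reading in [("pir", 1), ("mic", 72), ("mic", 40)]:
    event = sensor_event(reading)
    state = TRANSITIONS.get((state, event), state)   # stay put on unknown events
    print(reading, "->", event, "->", state)
```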
Abstract:
"Conf-651131, General, miscellaneous, and progress reports (TID-4500)."
Abstract:
The fast development and wide application of digital methods, combined with broadened access to the Internet and falling computing costs, have created intense interest in electronic presentation of, and access to, cultural and scientific heritage resources. Information technologies have offered cultural institutions new opportunities for the presentation of their holdings, which are now made accessible not only to specialists but also to citizens and interested parties worldwide. The paper presents an overview of the Bulgarian experience in the field of digital preservation and access, and of on-going work on the project “Knowledge Transfer for the Digitisation of Scientific and Cultural Heritage to Bulgaria” (MTKD-CT-2004-509754), supported by the Marie Curie programme of FP6 of the EC.
Abstract:
The concept of knowledge is central to solving the various problems of data mining and pattern recognition in finite spaces of Boolean or multi-valued attributes. A special form of knowledge representation, called implicative regularities, is proposed for use in two powerful tools of modern logic: inductive inference and deductive inference. The first is used for extracting knowledge from the data; the second is applied when the knowledge is used to calculate the values of the goal attribute. A set of efficient algorithms dealing with Boolean functions and finite predicates represented by logical vectors and matrices was developed for this purpose.
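A hedged sketch of what an implicative regularity amounts to on a Boolean data matrix is given below; the attributes and the candidate rule are invented for the example. The inductive step checks that an implication holds on every row of the data, and the deductive step then uses the regularity to infer the value of the goal attribute for a new object.

```python
# Illustrative sketch of an implicative regularity over Boolean attributes;
# the attributes and the rule are invented, not taken from the paper.

# Rows of a Boolean data matrix: (a, b, goal)
data = [
    (1, 1, 1),
    (1, 0, 0),
    (0, 1, 0),
    (1, 1, 1),
]

def holds(rule, rows):
    """Inductive check: the implication premise -> conclusion holds on every row."""
    premise, conclusion = rule
    return all(conclusion(r) for r in rows if premise(r))

# Candidate regularity: a AND b  ->  goal
rule = (lambda r: r[0] == 1 and r[1] == 1,   # premise over the known attributes
        lambda r: r[2] == 1)                 # conclusion over the goal attribute

print(holds(rule, data))                     # True: regularity extracted from the data

# Deductive use: for a new object with a=1, b=1 and an unknown goal value,
# the regularity lets us infer goal=1.
new_object = (1, 1, None)
if rule[0](new_object):
    print("inferred goal value:", 1)
```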