976 results for Information Requirements: Data Availability
Abstract:
In the Department of Health, Social Services and Public Safety (DHSSPS), Information and Statistics, and Research, are viewed as policy areas in their own right rather than as support functions for other policies. This paper presents, in synoptic form, an overview of the availability, quality and deficits of the information required for the DHSSPS and the HPSS to meet their statutory requirements, as known to the Information and Analysis Unit (IAU) in the Department.
Abstract:
Auditors: Arthur Anderson, 1996 ; Geo S. Olive & Co., 1997 ; Olive, 1998-
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
Fasciolosis is a disease of importance for both veterinary and public health. For the first time, georeferenced prevalence data of Fasciola hepatica in bovines were collected and mapped for the Brazilian territory, and data availability was discussed. Bovine fasciolosis in Brazil is monitored at the Federal, State and Municipal levels, and to improve monitoring it is essential to combine the data collected at these three levels into one dataset. Data were collected for 1032 municipalities where livers were condemned by the Federal Inspection Service (MAPA/SIF) because of the presence of F. hepatica. The information was distributed over 11 states: Espírito Santo, Goiás, Minas Gerais, Mato Grosso do Sul, Mato Grosso, Pará, Paraná, Rio de Janeiro, Rio Grande do Sul, Santa Catarina and São Paulo. The highest prevalence of fasciolosis was observed in the southern states, with disease clusters along the coast of Paraná and Santa Catarina and in Rio Grande do Sul. Temporal variation in prevalence was also observed. The observed prevalence and the kriged prevalence maps presented in this paper can assist both animal and human health workers in estimating the risk of infection in their state or municipality.
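The mapping step described above (interpolating municipality-level prevalence into a continuous, kriged surface) can be sketched with standard geostatistical tooling. The snippet below is a minimal illustration only: the coordinates and prevalence values are placeholders rather than the study's data, and the choice of the pykrige package is an assumption, not something stated in the abstract.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical municipality centroids (longitude, latitude) and observed prevalence
lon = np.array([-51.2, -49.3, -48.5, -53.8, -46.6])
lat = np.array([-30.0, -25.4, -27.6, -29.7, -23.5])
prev = np.array([0.12, 0.08, 0.10, 0.15, 0.01])

# Fit an ordinary-kriging model with a spherical variogram
ok = OrdinaryKriging(lon, lat, prev, variogram_model="spherical")

# Interpolate onto a regular grid covering the (assumed) study area
grid_lon = np.linspace(-55.0, -45.0, 100)
grid_lat = np.linspace(-32.0, -22.0, 100)
prev_map, variance = ok.execute("grid", grid_lon, grid_lat)
# prev_map is the kriged prevalence surface; variance is the kriging uncertainty
```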
Abstract:
We discuss the phenomenon of system tailoring in the context of data from an observational study of anaesthesia. We found that anaesthetists tailor their monitoring equipment so that the auditory alarms are more informative. However, tailoring by anaesthetists in the operating theatre was infrequent, even though the flexibility to tailor exists on many of the patient monitoring systems used in the study. We present an influence diagram to explain how alarm tailoring can increase situation awareness in the operating theatre and why factors inhibiting tailoring prevent its widespread use. Extending the influence diagram, we discuss ways that more informative displays could achieve the results sought by anaesthetists when they tailor their alarm systems. In particular, we argue that we should improve our designs rather than simply provide more flexible tailoring systems, because users often find tailoring a complex task. We conclude that properly designed auditory displays may benefit anaesthetists in achieving greater patient situation awareness, and that designers should consider carefully how factors promoting and inhibiting tailoring will affect end-users' likelihood of conducting tailoring.
Abstract:
One of the major changes introduced in the GPLv3 and LGPLv3 is the clause preventing "tivoisation". Richard Stallman defines tivoisation as "the practice of designing hardware so that a modified version cannot function properly". In this presentation we will go through the reasons why the Installation Information requirement was introduced, how you can comply with this requirement, and why you may want to think about it now rather than later.
Abstract:
The study of information requirements for e-business systems reveals that such applications require a broad range of information types at varying levels of detail, granularity and formats of presentation. The provision of relevant information affects how efficiently e-business systems can support business goals and processes. This paper presents an approach for determining information requirements for e-business systems (DIRES) which allows the user to describe the core business processes, whose specification maps onto a business activity space. It further aids the configuration of information requirements into an information space. A case study of a logistics company in China demonstrates the use of the DIRES techniques and assesses the validity of the research.
Abstract:
The quality of information provision considerably influences knowledge construction driven by individual users' needs. In the design of information systems for e-learning, personal information requirements should be incorporated to determine a selection of suitable learning content, an instructive sequencing of that content, and its effective presentation. This is considered an important part of instructional design for a personalised information package. The current research reveals that there is a lack of means by which individual users' information requirements can be effectively incorporated to support personal knowledge construction. This paper presents a method which enables the articulation of users' requirements, grounded in established learning theories and requirements engineering paradigms. The user's information requirements can be systematically encapsulated in a user profile (i.e. a user requirements space) and further transformed into instructional design specifications (i.e. an information space). These two spaces allow the discovery of information requirement patterns for self-maintaining and self-adapting personalisation that enhance experience in the knowledge construction process.
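As an illustration of mapping a "user requirements space" onto an "information space", the sketch below encodes a hypothetical user profile and a simple prerequisite-driven selection and sequencing of learning content. All class and field names are invented for illustration; they are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class UserProfile:                    # the "user requirements space" (hypothetical)
    prior_knowledge: List[str]        # topics already mastered
    goal: str                         # target competence
    preferred_format: str             # e.g. "video" or "text"


@dataclass
class LearningUnit:                   # a candidate piece of learning content
    topic: str
    prerequisites: List[str]
    fmt: str


def select_and_sequence(profile: UserProfile,
                        units: List[LearningUnit]) -> List[LearningUnit]:
    """Map a user profile onto an ordered selection of learning content:
    keep units in the preferred format and sequence them so that
    prerequisites are satisfied before a unit is presented."""
    known = set(profile.prior_knowledge)
    pending = [u for u in units if u.fmt == profile.preferred_format]
    ordered: List[LearningUnit] = []
    while pending:
        ready = [u for u in pending if set(u.prerequisites) <= known]
        if not ready:                 # remaining units have unmet prerequisites
            break
        for u in ready:
            ordered.append(u)
            known.add(u.topic)
            pending.remove(u)
    return ordered
```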
Abstract:
Ensemble-based data assimilation is rapidly proving itself as a computationally-efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy that aims to assimilate only the observational components that matter most and that can be used with both stochastic and deterministic ensemble filters is presented. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias in the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains that may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model using both in situ and remote sounding observations are discussed.
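The general idea of assimilating only the observation-space components that the ensemble can constrain can be illustrated with a toy stochastic ensemble Kalman filter update. This is a hedged sketch of one plausible implementation, assuming a linear observation operator H and selection by truncating the singular-value decomposition of the ensemble in observation space; it is not the authors' algorithm.

```python
import numpy as np

def enkf_update_with_selection(X, y, H, R, keep_frac=0.99, rng=None):
    """One stochastic EnKF analysis step that assimilates only the
    observation-space directions the ensemble can actually constrain.

    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    keep_frac : fraction of ensemble-explained observation variance to keep
    """
    rng = np.random.default_rng() if rng is None else rng
    n_state, n_ens = X.shape

    # Ensemble mean and normalised perturbations
    xm = X.mean(axis=1, keepdims=True)
    A = (X - xm) / np.sqrt(n_ens - 1)

    # Ensemble perturbations mapped into observation space
    S = H @ A                                  # (n_obs, n_ens)

    # Left singular vectors span the obs-space directions the ensemble
    # can constrain; singular values measure the signal in each direction.
    U, s, _ = np.linalg.svd(S, full_matrices=False)

    # --- data selection: retain only the leading directions ---
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, keep_frac)) + 1
    Uk = U[:, :k]                              # (n_obs, k) retained directions

    # Project observations, ensemble and R into the reduced space
    y_r = Uk.T @ y
    S_r = Uk.T @ S                             # (k, n_ens)
    R_r = Uk.T @ R @ Uk                        # (k, k)

    # Kalman gain computed entirely in the reduced observation space
    K = A @ S_r.T @ np.linalg.inv(S_r @ S_r.T + R_r)

    # Perturbed-observation update of each ensemble member
    y_pert = y_r[:, None] + np.linalg.cholesky(R_r) @ rng.standard_normal((k, n_ens))
    return X + K @ (y_pert - Uk.T @ (H @ X))

# Toy usage: 3-variable state, 40-member ensemble, 2 observations
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 40))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Xa = enkf_update_with_selection(X, np.array([0.5, -0.2]), H, 0.1 * np.eye(2), rng=rng)
```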
Abstract:
Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which both techniques are combined simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for the analysis of data cubes (data from single-field observations, containing two spatial and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) with eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this information is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not previously known. Furthermore, we show that it is displaced from the centre of its stellar bulge.
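The PCA step described in the abstract (treating each spaxel's spectrum as a sample, extracting mutually orthogonal eigenvectors ordered by decreasing variance, and projecting the data back onto them to obtain tomograms) can be sketched as follows. Details such as mean-spectrum subtraction and the use of an SVD are standard PCA choices assumed for this sketch, not specifics given in the abstract.

```python
import numpy as np

def pca_tomography(cube, n_components=5):
    """PCA of a data cube with axes (ny, nx, n_lambda): each spaxel's
    spectrum is one sample.  Returns the eigenvectors (eigenspectra),
    their spatial projections (tomograms) and the explained variances."""
    ny, nx, nl = cube.shape
    X = cube.reshape(ny * nx, nl)               # one spectrum per row

    # Subtract the mean spectrum so PCA captures correlated variations
    Xc = X - X.mean(axis=0)

    # SVD: rows of Vt are orthonormal eigenspectra, ordered by variance
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    eigenspectra = Vt[:n_components]                    # (k, n_lambda)
    variance = s[:n_components] ** 2 / (ny * nx - 1)    # decreasing variance

    # Tomograms: projection of every spaxel onto each eigenspectrum
    tomograms = (Xc @ eigenspectra.T).reshape(ny, nx, n_components)
    return eigenspectra, tomograms, variance

# Example with a small synthetic cube (30 x 30 spaxels, 200 spectral channels)
cube = np.random.rand(30, 30, 200)
eigenspectra, tomograms, variance = pca_tomography(cube)
```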
Abstract:
In this paper we describe Fénix, a data model for exchanging information between Natural Language Processing applications. The proposed format is intended to be flexible enough to cover both current and future data structures employed in the field of Computational Linguistics. The Fénix architecture is divided into four separate layers: conceptual, logical, persistence and physical. This division provides a simple interface that abstracts users from low-level implementation details, such as the programming languages and data storage employed, allowing them to focus on the concepts and processes to be modelled. The Fénix architecture is accompanied by a set of programming libraries to facilitate the access and manipulation of the structures created in this framework. We also show how this architecture has already been successfully applied in different research projects.
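As an illustration of the layered separation the abstract describes, the sketch below lets the user manipulate conceptual objects while persistence and physical storage details stay behind an abstract interface. The class names and the in-memory backend are hypothetical stand-ins, not the actual Fénix API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Dict, List


# Conceptual layer: linguistic objects the user works with directly
@dataclass
class Annotation:
    start: int
    end: int
    label: str


@dataclass
class Document:
    text: str
    annotations: List[Annotation] = field(default_factory=list)


# Persistence layer: abstract interface so storage and physical details
# (files, databases, serialisation formats) are hidden from the user
class Store(ABC):
    @abstractmethod
    def save(self, doc_id: str, doc: Document) -> None: ...

    @abstractmethod
    def load(self, doc_id: str) -> Document: ...


# One interchangeable physical backend (in-memory, for illustration only)
class InMemoryStore(Store):
    def __init__(self) -> None:
        self._docs: Dict[str, Document] = {}

    def save(self, doc_id: str, doc: Document) -> None:
        self._docs[doc_id] = doc

    def load(self, doc_id: str) -> Document:
        return self._docs[doc_id]


store: Store = InMemoryStore()
doc = Document("Fénix is a data model.", [Annotation(0, 5, "PROPN")])
store.save("doc-1", doc)
print(store.load("doc-1").annotations)
```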