879 results for eresearch and data management
Abstract:
The purpose of this study was to compare the impact of traditional psychiatric services with that of traditional services plus case management on the functioning of people with schizophrenia. Traditional services were defined as routine clinic services consisting of medication follow-along, psychotherapy, and support services. Case management consisted of activities involved in linking, planning, and monitoring services for the outpatient client who has schizophrenia. The target population was adults with schizophrenia who had been receiving outpatient clinic services for a minimum of six months. Structured interviews were conducted using standardized scales (e.g., Quality of Life, Self-Efficacy, and Brief Symptom Inventory) with 78 outpatient client volunteers from two sites: Nova Scotia (Canada) and Texas (USA). The researcher tested for differences in psychiatric symptomatology, recidivism, and quality of life between persons with schizophrenia receiving traditional psychiatric services in Nova Scotia and those receiving traditional plus case management services in Texas. Data were collected from the structured interviews and medical records review forms. Types of services were blocked into low and high levels of intensity (frequency × minutes) and compared to determine the relative contribution of each. Finally, the role of clients' self-efficacy was tested as an intervening variable. Although the findings did not support the hypotheses in the direction anticipated, there were some interesting and useful results. Clients at the Nova Scotia site who received low levels of services were hospitalized less than those at the Texas site. The more psychotic a patient was, the higher their involvement in medication follow-along and the more monitoring they received. The more psychotherapy clients received, the lower their reported satisfaction with social relationships. Of particular interest is the role that self-efficacy played in improved client outcomes. Although self-efficacy scores were related to improved functioning, the mechanism for this still needs to be clarified through subsequent research.
Abstract:
Data Management Plans are now more comprehensive and complex than in the past. Libraries around the nation are putting together tools to help researchers write plans that conform to the new requirements. This session will look at some of these tools.
Abstract:
Work on distributed data management commenced shortly after the introduction of the relational model in the mid-1970s. The 1970s and 1980s were very active periods for the development of distributed relational database technology, and claims were made that within the following ten years centralized databases would be an "antique curiosity" and most organizations would move toward distributed database managers [1]. That prediction has certainly come true, and all commercial DBMSs today are distributed.
Abstract:
The electrical power distribution and commercialization scenario is evolving worldwide, and electricity companies, faced with the challenge of new information requirements, are demanding IT solutions for the smart monitoring of power networks. Two main challenges arise from the data management and smart monitoring of power networks: real-time data acquisition and big data processing over short time periods. We present a solution in the form of a system architecture that addresses real-time issues and has the capacity for big data management.
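As an illustration of the real-time acquisition side of such an architecture, the Python sketch below ingests meter readings and maintains a sliding one-minute aggregate per meter. The reading format, window size, and class names are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch: windowed aggregation of smart-meter readings.
# Reading format, window size, and names are illustrative assumptions.
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class Reading:
    meter_id: str
    timestamp: float   # seconds since epoch
    kw: float          # instantaneous load in kilowatts

WINDOW_SECONDS = 60.0  # aggregate over one-minute windows

class WindowAggregator:
    """Keeps a sliding window of readings per meter and reports mean load."""
    def __init__(self, window: float = WINDOW_SECONDS):
        self.window = window
        self.buffers = defaultdict(deque)

    def ingest(self, r: Reading) -> float:
        buf = self.buffers[r.meter_id]
        buf.append(r)
        # Evict readings that have fallen out of the window.
        while buf and r.timestamp - buf[0].timestamp > self.window:
            buf.popleft()
        return sum(x.kw for x in buf) / len(buf)

agg = WindowAggregator()
print(agg.ingest(Reading("m1", 0.0, 3.2)))    # 3.2
print(agg.ingest(Reading("m1", 30.0, 4.0)))   # mean of both readings: 3.6
print(agg.ingest(Reading("m1", 120.0, 2.5)))  # earlier readings evicted: 2.5
```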
Abstract:
This paper reports results derived from a mixed methods study in which 13 hotel managers were initially interviewed, followed by a quantitative study of 355 additional managers. Data were analysed using partial least squares path modelling. The research question concerned the relationship between quality management, environmental management, and the competitive advantage sought by hotels. The results indicate that quality management and environmental management permit the improvement of competitive advantage in terms of both costs and differentiation. Moreover, hotels implementing quality programmes find fewer obstacles in implementing environmental management.
Abstract:
Includes bibliographical references.
Abstract:
NTIS: PB81-929403.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another. Arbitrary, though often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including sampling sites, variable selection, model selection, model implementation, internal model assessment, model prediction assessments, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation, and scale assessments of model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including: provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly-studied areas.
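To make the integration step concrete, here is a minimal Python sketch of how per-community model outputs could be combined into a composite map. The random probability surfaces stand in for the paper's 28 fitted statistical models; all names, shapes, and the argmax combination rule are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: combining per-community probability surfaces into
# a composite vegetation map. Random surfaces stand in for fitted models.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, n_communities = 4, 5, 3

# Assumed stand-ins for the fitted community models: one probability
# surface per vegetation community over the same grid.
prob_surfaces = rng.random((n_communities, n_rows, n_cols))

# Normalise so the community probabilities sum to 1 in each cell,
# preserving the transitional gradients between communities.
prob_surfaces /= prob_surfaces.sum(axis=0, keepdims=True)

# The composite map assigns each cell the most probable community;
# keeping the probabilities as well retains the gradient information.
composite = prob_surfaces.argmax(axis=0)   # community index per cell
confidence = prob_surfaces.max(axis=0)     # how sharp each assignment is

print(composite)
print(np.round(confidence, 2))
```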
Abstract:
Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data which may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now a need, more than ever, to provide users with the fastest and least expensive query capabilities, especially since an estimated 80% of data stored in corporate databases has a geographical component. However, not every application requires the same high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by preaggregating data at the expense of data accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
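As a toy illustration of the preaggregation idea (not the paper's algorithm), the Python sketch below builds one coarser resolution level of a spatial value grid by block averaging and measures the worst-case error of answering cell queries from it. Block size and the error metric are illustrative assumptions.

```python
# Hypothetical sketch: one pre-aggregated resolution level of a spatial
# value grid, built by block averaging, with its approximation error.
import numpy as np

def coarsen(grid: np.ndarray, block: int) -> np.ndarray:
    """Average non-overlapping block x block tiles into single cells."""
    rows, cols = grid.shape
    assert rows % block == 0 and cols % block == 0
    return grid.reshape(rows // block, block,
                        cols // block, block).mean(axis=(1, 3))

def max_error(grid: np.ndarray, coarse: np.ndarray, block: int) -> float:
    """Worst-case absolute error of answering cell queries from the coarse level."""
    approx = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    return float(np.abs(grid - approx).max())

rng = np.random.default_rng(1)
grid = rng.random((8, 8))          # fine-resolution value function
coarse = coarsen(grid, block=2)    # one pre-aggregated resolution level
print(coarse.shape)                # (4, 4)
print(max_error(grid, coarse, 2))  # predictable error bound for this level
```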