973 results for Collection management (Libraries)
Abstract:
Aims: Patient management following elective cranial surgery varies between neurosurgical institutions. Early routine postoperative cranial computed tomography (CT) is often performed while keeping patients sedated and ventilated for several hours. We hypothesized that fast track management without routine CT scanning, i.e., early extubation within one hour to allow neurological monitoring, is safe and does not increase the rate of return to the OR compared with published data. Methods: We prospectively screened 1118 patients with cranial procedures performed at our department over a period of two years. 420 patients older than 18 years undergoing elective brain surgery with no history of prior cranial surgery were included. Routine neurosurgical practice as performed at our department was not altered for this observational study. Fast track management was attempted in all cases; extubated and awake patients were monitored further. CT scanning within 48 hours after surgery was not performed except for unexpected neurological deterioration. This study was registered at ClinicalTrials.gov (NCT01987648). Results: 420 elective craniotomies were performed for 310 supra- and 110 infratentorial lesions. 398 patients (94.8%) were extubated within 1 hour, 21 (5%) within 6 hours, and 1 patient (0.2%) was extubated 9 hours after surgery. Emergency CT within 48 hours was performed for 36 patients (8.6%; 26 supra- and 10 infratentorial cases) due to unexpected neurological worsening. Of these 36 patients, 5 had to return to the OR (hemorrhage in 3 cases, swelling in 2). The return-to-OR rate across all included cases was 1.2%, which compares favorably with the 1-4% quoted in the current literature. No patient returned to the OR without prior CT imaging. Of the 398 patients extubated within one hour, 2 (0.5%) returned to the OR. Patients who could not be extubated within the first hour had a higher risk of returning to the OR (3 of 22, i.e., 14%). Overall 30-day mortality was 0.2% (1 patient). Conclusions: Early extubation, with CT imaging performed only for patients with unexpected neurological worsening after elective craniotomy, is safe and increases neither patient mortality nor the return-to-OR rate. With this fast track approach, early postoperative cranial CT for detection of postoperative complications in the absence of an unexpected neurological finding is not justified. Acknowledgments: The authors thank Nicole Söll, study nurse, Department of Neurosurgery, Bern University Hospital, Switzerland, for crucial support in data collection and managing the database.
Abstract:
Antimicrobial drugs may be used to treat diarrheal illness in companion animals, and it is important to monitor antimicrobial use to better understand trends and patterns in antimicrobial resistance. There is no monitoring of antimicrobial use in companion animals in Canada. To explore how electronic medical records could contribute to the ongoing, systematic collection of antimicrobial use data in companion animals, anonymized electronic medical records were extracted from 12 participating companion animal practices and warehoused at the University of Calgary. We used the pre-diagnostic clinical features of diarrhea as the case definition in this study. Using text-mining technologies, cases of diarrhea were described by each of the following variables: diagnostic laboratory tests performed, etiological diagnosis, and antimicrobial therapies. The ability of the text miner to accurately describe the cases for each of these variables was evaluated. It could not reliably classify cases in terms of diagnostic tests or etiological diagnosis; a manual review of a random sample of 500 diarrhea cases determined that 88/500 (17.6%) of the target cases underwent diagnostic testing, of which 36/88 (40.9%) had an etiological diagnosis. Text mining, compared with a human reviewer, could accurately identify cases that had been treated with antimicrobials, with high sensitivity (92%; 95% confidence interval, 88.1%-95.4%) and specificity (85%; 95% confidence interval, 80.2%-89.1%). Overall, 7400/15,928 (46.5%) of pets presenting with diarrhea were treated with antimicrobials. Some temporal trends and patterns of antimicrobial use are described. The results of this study suggest that informatics and electronic medical records could be useful for monitoring trends in antimicrobial use.
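For context on the evaluation above, here is a minimal sketch of how sensitivity, specificity, and their normal-approximation confidence intervals are computed from a classifier-versus-reviewer confusion matrix. The counts are hypothetical placeholders chosen to reproduce the reported point estimates, not the study's data:

    import math

    def proportion_ci(successes, total, z=1.96):
        # Wald (normal-approximation) 95% confidence interval for a proportion
        p = successes / total
        se = math.sqrt(p * (1 - p) / total)
        return p, max(0.0, p - z * se), min(1.0, p + z * se)

    # Hypothetical counts: text miner vs. human reviewer on antimicrobial treatment
    tp, fn = 230, 20   # treated cases the miner caught / missed
    tn, fp = 255, 45   # untreated cases correctly passed / wrongly flagged

    sens, s_lo, s_hi = proportion_ci(tp, tp + fn)
    spec, c_lo, c_hi = proportion_ci(tn, tn + fp)
    print(f"sensitivity {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
    print(f"specificity {spec:.1%} (95% CI {c_lo:.1%}-{c_hi:.1%})")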
Abstract:
OBJECTIVE To determine the success of medical management of presumptive cervical disk herniation in dogs and the variables associated with treatment outcome. DESIGN Retrospective case series. ANIMALS Dogs (n=88) with presumptive cervical disk herniation. METHODS Dogs with presumptive cervical and thoracolumbar disk herniation were identified from medical records at 2 clinics, and clients were mailed a questionnaire about the success of therapy, clinical recurrence of signs, and quality of life (QOL) as interpreted by the owner. Signalment, duration and degree of neurologic dysfunction, and medication administration were determined from medical records. RESULTS Ninety-seven percent of dogs (84/87) with complete information were described as ambulatory at initial evaluation. Successful treatment was reported for 48.9% of dogs, with 33% having recurrence of clinical signs and 18.1% classified as therapeutic failures. Bivariable logistic regression showed that non-steroidal anti-inflammatory drug (NSAID) administration was associated with success (P=.035; odds ratio [OR]=2.52). Duration of cage rest and glucocorticoid administration were not significantly associated with success or QOL. Dogs with less severe neurologic dysfunction were more likely to have a successful outcome (OR=2.56), but this association was not significant (P=.051). CONCLUSIONS Medical management can lead to an acceptable outcome in many dogs with presumptive cervical disk herniation. Based on these data, NSAIDs should be considered as part of the therapeutic regimen. Cage rest duration and glucocorticoid administration do not appear to benefit these dogs, but this finding should be interpreted cautiously because of the retrospective data collection and the use of client-completed questionnaires for follow-up. CLINICAL RELEVANCE These results provide insight into the success of medical management for presumptive cervical disk herniation in dogs and may allow refinement of treatment protocols.
Abstract:
OBJECTIVE To determine the success of medical management of presumptive thoracolumbar disk herniation in dogs and the variables associated with treatment outcome. STUDY DESIGN Retrospective case series. ANIMALS Dogs (n=223) with presumptive thoracolumbar disk herniation. METHODS Medical records from 2 clinics were used to identify affected dogs, and owners were mailed a questionnaire about the success of therapy, recurrence of clinical signs, and quality of life (QOL) as interpreted by the owner. Signalment, duration and degree of neurologic dysfunction, and medication administration were determined from medical records. RESULTS Eighty-three percent of dogs (185/223) were ambulatory at initial evaluation. Successful treatment was reported for 54.7% of dogs, with 30.9% having recurrence of clinical signs and 14.4% classified as therapeutic failures. In bivariable logistic regression, glucocorticoid administration was negatively associated with success (P=.008; odds ratio [OR]=.48) and QOL scores (P=.004; OR=.48). The duration of cage rest was not significantly associated with success or QOL. Nonambulatory dogs were more likely to have lower QOL scores (P=.01; OR=2.34). CONCLUSIONS Medical management can lead to an acceptable outcome in many dogs with presumptive thoracolumbar disk herniation. Cage rest duration does not seem to affect outcome, and glucocorticoids may negatively affect success and QOL. The conclusions in this report should be interpreted cautiously because of the retrospective data collection and the use of client-completed questionnaires for follow-up. CLINICAL RELEVANCE These results provide insight into the success of medical management for presumptive thoracolumbar disk herniation in dogs and may allow refinement of treatment protocols.
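Both disk-herniation abstracts above rely on bivariable logistic regression: each predictor is tested one at a time against a binary success outcome, and the odds ratio is the exponentiated coefficient. A minimal sketch with simulated data follows; the variable name and effect size are illustrative assumptions, and the output will not match the studies' reported ORs:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 223
    nsaid = rng.integers(0, 2, n)  # 1 = NSAID administered (simulated)

    # Simulate a success outcome loosely favoring NSAID use (illustrative only)
    p = 1 / (1 + np.exp(-(-0.5 + 0.9 * nsaid)))
    success = rng.binomial(1, p)

    # Bivariable model: one predictor plus an intercept
    X = sm.add_constant(nsaid.astype(float))
    fit = sm.Logit(success, X).fit(disp=False)
    print(f"odds ratio: {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.3f}")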
Abstract:
Purpose – The purpose of this paper is to describe the tools and strategies employed by C/W MARS to develop and implement the Digital Treasures digital repository. Design/methodology/approach – This paper outlines the planning and the subsequent technical issues that arise when implementing a digitization project on the scale of a large, multi-type, automated library network. Workflow solutions addressed include synchronous online metadata record submissions from multiple library sources and the delivery of collection-level use statistics to participating library administrators. The importance of standards-based descriptive metadata and the role of project collaboration are also discussed. Findings – The Digital Treasures repository was fully implemented within six months of initial planning. Demonstrable, statistically quantified online discovery and access of digital objects helped persuade libraries that were unsure of the staffing costs and benefits of joining the repository. Originality/value – This case study may serve as an example of initial planning, workflow, and implementation strategies for new repositories in both general and library consortium environments. Keywords – Digital repositories, Library networks, Data management. Paper type – Case study
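As a hedged illustration of the "standards-based descriptive metadata" such repositories typically require, here is a minimal unqualified Dublin Core record sketched in Python; the field values are invented for the sketch, not taken from Digital Treasures:

    # Minimal unqualified Dublin Core record for a digitized object
    # (illustrative values only; not an actual Digital Treasures record)
    record = {
        "dc:title": "Main Street street scene, ca. 1910",
        "dc:creator": "Unknown photographer",
        "dc:date": "1910",
        "dc:type": "Image",
        "dc:format": "image/jpeg",
        "dc:rights": "Contact the contributing library for reuse terms.",
    }

    for element, value in record.items():
        print(f"{element}: {value}")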
Abstract:
There have been three medical malpractice insurance "crises" in the United States over roughly the past three decades (Poisson, 2004, p. 759-760). Each crisis is characterized by a number of common features, including rapidly increasing medical malpractice insurance premiums, cancellation of existing insurance policies, and a decreased willingness of insurers to offer or renew medical malpractice insurance policies (Poisson, 2004, p. 759-760). Given the recurrent "crises," many sources argue that medical malpractice insurance coverage has become too expensive a commodity, one that many physicians simply cannot afford (U.S. Department of Health and Human Services [HHS], 2002, p. 1-2; Physician Insurers Association of America [PIAA], 2003, p. 1; Jackiw, 2004, p. 506; Glassman, 2004, p. 417; Padget, 2003, p. 216). The prohibitively high cost of medical liability insurance is said to limit the geographical areas and medical specializations in which physicians are willing to practice. As a result, the high cost of medical liability insurance is ultimately said to affect whether people have access to health care services. In an effort to control the medical liability insurance crises, and to preserve or restore people's access to health care, every state in the United States has passed "at least some laws designed to reduce medical malpractice premium rates" (GAO, 2003, p. 5-6). More recently, however, the United States has witnessed a push to implement federal reform of the medical malpractice tort system. Accordingly, this project focuses on federal medical malpractice tort reform. It was designed to investigate the following specific question: Do the federal medical malpractice tort reform bills that passed in the House of Representatives between 1995 and 2005 differ with respect to their principal features? To answer this question, the text of the bills, law review articles, and reports from government and private agencies were analyzed, and a matrix was compiled to concisely summarize the principal features of the proposed federal medical malpractice tort reform bills. Insight gleaned from this investigation and matrix compilation informs discussion of the potential ramifications of enacting federal medical malpractice tort reform legislation.
Abstract:
Inefficiencies in the management of healthcare waste can give rise to undesirable health effects, such as transmission of infections and environmental pollution, within and beyond the health facilities generating these wastes. Factors such as the prevalence of diseases, conflicts, and the efflux of intellectual capacity make low income countries more susceptible to these adverse health effects. The purpose of this systematic review was to describe the effectiveness of interventions geared toward better managing the generation, collection, transport, treatment, and disposal of medical waste, as they have been applied in low and middle income countries. Using a systematic search strategy and evaluation of study quality, this review examined the literature for published studies on healthcare waste management interventions carried out in developing countries, specifically low and lower middle income countries, from 2000 to the present. From an initially identified set of 829 studies, only three ultimately met all inclusion, exclusion, and quality criteria. A multi-component intervention conducted in the Syrian Arab Republic in 2007 aimed to improve waste segregation practice in a hospital setting; it increased the use of segregation boxes and reduced rates of sharps injury among staff. Another study, conducted in 2008, trained medical students as monitors of waste segregation practice in an Indian teaching hospital; practice improved in wards and laboratories but not in the intensive care units. The third study, performed in 2008 in China, modified the components of a medical waste incinerator to improve efficiency and reduce stack emissions; emitted gaseous pollutants, except polychlorodibenzofurans (PCDFs), were below US EPA permissible exposure limits, while heavy metal residues in the fly ash remained unchanged. Owing to the paucity of well-designed studies, there is insufficient evidence in the literature to draw conclusions about the effectiveness of interventions in low income settings. There is suggestive but insufficient evidence that multi-component interventions aimed at improving waste segregation through behavior modification, provision of segregation tools, and training of monitors are effective in low income settings.
Abstract:
Electronic waste is a fairly new and largely unknown phenomenon, and governments have only recently acknowledged it as a threat to the environment and public health. In attempting to mitigate the hazards associated with this rapidly growing toxic waste stream, governments at all levels have started to implement e-waste management programs. The legislation enacted to create these programs is based on extended producer responsibility (EPR) policy. EPR shifts the burden of final disposal of e-waste from the consumer or the municipal solid waste system to the manufacturer of electronic equipment. Applying an EPR policy is intended to send signals up the production chain to the manufacturer; the desired outcome is to change methods of production in order to reduce production inputs and outputs, with the ultimate goal of changing product design. This thesis performs a policy analysis of current e-waste policies at the federal and state levels of government, focusing specifically on Texas e-waste policies. The Texas e-waste law, known as HB 2714 or the Texas Computer TakeBack Law, requires manufacturers to provide individual consumers with a free and convenient method for returning their used computers to manufacturers. The law is based on individual producer responsibility and shared responsibility among consumers, retailers, recyclers, and the TCEQ. Using a set of evaluation criteria created by the Organisation for Economic Co-operation and Development, the Texas e-waste law was examined to determine its effectiveness at reducing the threat of e-waste in Texas, and recommendations were made for the legislature to incorporate into HB 2714. The results of the policy analysis show that HB 2714 is a poorly constructed law that does not produce the results seen in other states with EPR policies. The TakeBack Law does little to change the collection methods of manufacturers and even less to change their production habits. If the e-waste problem is to be taken seriously, HB 2714 must be amended to reflect the changes proposed in this thesis.
Abstract:
The study aim was to determine whether using automated side loader (ASL) trucks in higher proportions than other types of trucks for residential waste collection results in lower injury rates (from all causes). The primary hypothesis was that the risk of injury was lower for workers who work with ASL trucks than for those who work with other types of trucks used in residential waste collection. To test this hypothesis, data were collected from one of the nation's largest companies in the solid waste management industry. Different local operating units (i.e., facilities) in the company used different types of trucks to varying degrees, which created a special opportunity to examine refuse collection injuries and illnesses and the risk-reduction potential of ASL trucks. The study design was ecological and analyzed end-of-year data provided by the company for calendar year 2007, during which 345 facilities provided residential services; each facility represented one observation. The dependent variable, injury and illness rate, was defined as a facility's total case incidence rate (TCIR) recorded in accordance with federal OSHA requirements for 2007; the TCIR is the rate of total recordable injury and illness cases per 100 full-time workers. The independent variable, percent of ASL trucks, was calculated by dividing the number of ASL trucks by the total number of residential trucks at each facility. Multiple linear regression models were estimated for the impact of the percent of ASL trucks on TCIR per facility. Adjusted analyses included three covariates: median number of hours worked per week for residential workers; median number of months of work experience for residential workers; and median age of residential workers. All analyses were performed with the statistical software Stata IC (version 11.0). The analyses included three approaches to classifying the exposure, percent of ASL trucks. The first approach included two levels: (1) 0% and (2) >0 - <100%. The second included three levels: (1) 0%, (2) ≥1 - <100%, and (3) 100%. The third included six levels, to improve detection of a dose-response relationship: (1) 0%, (2) 1 to <25%, (3) 25 to <50%, (4) 50 to <75%, (5) 75 to <100%, and (6) 100%. None of the relationships between the injury and illness rate and the percent ASL truck exposure levels was statistically significant (p<0.05), even after adjustment for all three covariates. In summary, the present study suggests some risk-reduction effect of ASL trucks, but the effect was not statistically significant. The covariates demonstrated a varied yet more modest impact on the injury and illness rate, and again none of these relationships was statistically significant (p<0.05). As an ecological study, the present study has the limitations inherent in such designs and warrants replication in an individual-level cohort design; stronger conclusions are not warranted.
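A minimal sketch, using made-up facility records rather than the company's data, of how the exposure variable and its six-level classification could be constructed before fitting the adjusted regression models:

    import pandas as pd

    def exposure_level(pct_asl):
        # Six-level exposure classification from the abstract's third approach
        if pct_asl == 0:   return "0%"
        if pct_asl < 25:   return "1 to <25%"
        if pct_asl < 50:   return "25 to <50%"
        if pct_asl < 75:   return "50 to <75%"
        if pct_asl < 100:  return "75 to <100%"
        return "100%"

    # Hypothetical facility records (the study analyzed 345 facilities)
    df = pd.DataFrame({
        "asl_trucks":   [0, 2, 4, 6, 9, 10],
        "total_trucks": [10, 10, 10, 10, 10, 10],
        "tcir":         [5.0, 4.6, 4.1, 3.9, 3.5, 3.2],  # cases per 100 FTE
    })
    df["pct_asl"] = 100 * df["asl_trucks"] / df["total_trucks"]
    df["exposure"] = df["pct_asl"].map(exposure_level)
    print(df.groupby("exposure")["tcir"].mean())

The adjusted models would then regress TCIR on these exposure indicators plus the three covariates.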
Abstract:
Problems due to the lack of data standardization and data management have led to work inefficiencies for the staff working with the vision data for the Lifetime Surveillance of Astronaut Health. Data have been collected over 50 years in a variety of manners and then entered into a software system. The lack of communication between the electronic health record (EHR) form designer, epidemiologists, and optometrists has led to some confusion about the capability of the EHR system and how its forms can be designed to fit the needs of all relevant parties. EHR form customizations or form redesigns were found to be critical for using NASA's EHR system in the way most beneficial to its patients, optometrists, and epidemiologists. In order to implement a protocol, the data being collected were examined to find the differences in data collection methods, and changes were implemented through the establishment of a process improvement team (PIT). Based on the findings of the PIT, suggestions have been made to improve the current EHR system. If these suggestions are implemented correctly, they will not only improve the efficiency of the staff at NASA and its contractors but also set guidelines for changes to other forms, such as the vision exam forms. Because NASA is at the forefront of such research and health surveillance, this management change could drastically improve the collection and adaptability of the EHR. Accurate data collection from this 50+ year study is ongoing and will help current and future generations understand the implications of space flight for human health. It is imperative that this vast amount of information be documented correctly.
Abstract:
Data Management Plans are now more comprehensive and complex than in the past. Libraries around the nation are putting together tools to help researchers write plans that conform to the new requirements. This session will look at some of these tools.
Abstract:
This data set contains a time series of plant height measurements (vegetative and reproductive) from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In addition, data on species-specific plant heights for the main experiment are available from 2002. In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall herbs, and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16, and 60 species) and functional richness (1, 2, 3, and 4 functional groups). Plots were maintained by bi-annual weeding and mowing. 1. Plant height was generally recorded twice a year, just before biomass harvest (during peak standing biomass in late May and in late August). Methodologies of measuring height have varied somewhat over the years: in earlier years the stretched plant height was measured, while in later years the standing height without stretching the plant was measured. Vegetative height was measured either as the height of the highest leaf or as the length of the main axis of non-flowering plants. Regenerative height was measured either as the height of the highest flower on a plant or as the height of the main axis of flowering plants. Sampled plants were either randomly selected in the core area of plots or selected along transects at defined distances; for details refer to the description of individual years. Starting in 2006, the plots of the management experiment, which altered mowing frequency and fertilized subplots (see further details in the general description of the Jena Experiment), were also sampled. 2. Species-specific plant height was recorded twice in 2002: in late July (vegetative height) and just before biomass harvest during peak standing biomass in late August (vegetative and regenerative height). For each plot and each sown species in the species pool, 3 plant individuals (if present) from the central area of the plot were randomly selected and used to measure vegetative height (non-flowering individuals) and regenerative height (flowering individuals) as stretched height. Provided are the means over the three measurements per plant species per plot.
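As an illustration of the aggregation described in point 2 above, the per-plot species means could be derived along these lines; the column names and values are assumptions made for the sketch, not the data set's actual schema:

    import pandas as pd

    # Hypothetical long-format records: one row per measured individual
    records = pd.DataFrame({
        "plot":    ["A1"] * 3 + ["A2"] * 3,
        "species": ["Festuca rubra"] * 3 + ["Trifolium repens"] * 3,
        "veg_height_cm": [24.0, 27.5, 25.0, 11.0, 12.5, 10.5],
    })

    # Mean over the (up to) three individuals per species per plot
    means = (records.groupby(["plot", "species"], as_index=False)
                    ["veg_height_cm"].mean())
    print(means)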
Abstract:
Once the library was accepted as part of the cycle of knowledge creation, organization, and dissemination, the concept of the library changed from that of a closed entity to a dynamic system in constant interaction with its environment. The library thus came to be recognized as a social institution rather than a mere collection of documents, and from then on it was perceived as an entity to which management principles could be applied. Since that time, various management tools have been used for decision-making in libraries. Among these tools, control charts, used to measure the stability of a process over time, are of great importance in statistical process control. They have been widely applied in statistical quality control, beginning in industry; today their application has extended to a wide variety of disciplines, including service companies and administrative units. Control charts are presented here as an important management tool applied to technical processes, allowing their evaluation and the monitoring of their performance through the use of indicators and other diagnostic data.
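To make the control chart idea concrete, here is a minimal sketch of an individuals (X) chart, a common Shewhart variant, applied to an invented series of weekly average processing times for a library technical process; the metric and the numbers are illustrative assumptions:

    import statistics

    # Hypothetical weekly average cataloging time per title, in minutes
    values = [18.2, 17.5, 19.1, 18.8, 17.9, 18.4, 23.5, 18.0, 17.7, 18.3]

    center = statistics.fmean(values)
    # Average moving range between consecutive observations
    mr_bar = statistics.fmean(abs(b - a) for a, b in zip(values, values[1:]))
    # Standard individuals-chart limits: center +/- 2.66 * average moving range
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar

    print(f"center {center:.2f}, LCL {lcl:.2f}, UCL {ucl:.2f}")
    for week, x in enumerate(values, start=1):
        note = "  <- out of control" if not (lcl <= x <= ucl) else ""
        print(f"week {week:2d}: {x:5.1f}{note}")

A point outside the limits (week 7 in this invented series) signals special-cause variation and would prompt investigation of the process rather than being treated as routine fluctuation.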