769 results for "reporters and reporting"
Abstract:
BACKGROUND: In Brazil, little is known about adverse reactions during donation or about the donor characteristics that may be associated with such events. Donors are offered snacks and fluids before donating and are required to consume a light meal after donation. For these reasons the frequency of reactions may differ from that observed in other countries. STUDY DESIGN AND METHODS: A cross-sectional study was conducted of eligible whole blood donors at three large blood centers in Brazil between July 2007 and December 2009. Vasovagal reactions (VVRs) were recorded along with donor demographic and biometric data. Reactions were defined as any presyncopal or syncopal event during the donation process. Multivariable logistic regression was performed to identify predictors of VVRs. RESULTS: Of 724,861 donor presentations, 16,129 (2.2%) VVRs were recorded. Rates varied substantially between the three centers: 53, 290, and 381 per 10,000 donations in Recife, Sao Paulo, and Belo Horizonte, respectively. Although the reaction rates varied, the donor characteristics associated with VVRs were similar (younger age [18-29 years], replacement donors, first-time donors, low estimated blood volume [EBV]). In multivariable analysis controlling for differences between the donor populations in each city, younger age, first-time donor status, and lower EBV were the factors most strongly associated with reactions. CONCLUSION: Factors associated with VVRs in other locations are also evident in Brazil. The difference in VVR rates between the three centers might be due to different procedures for identifying and reporting the reactions. Potential interventions to reduce the risk of reactions in Brazil should be considered.
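In outline, a multivariable analysis of this kind can be reproduced with standard statistical tooling. The following is a minimal sketch in Python with statsmodels, under the assumption of a donor-level table; every column name (reaction, age_group, first_time, donor_type, ebv, center) is a hypothetical stand-in, not the study's actual dataset.

```python
# Sketch: multivariable logistic regression for predictors of vasovagal
# reactions (VVRs), in the spirit of the analysis described above.
# All column names and the file are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

donors = pd.read_csv("donations.csv")  # hypothetical: one row per donation

# reaction = 1 if any presyncopal/syncopal event occurred, else 0.
# C(...) expands categorical predictors into dummies; including C(center)
# controls for differences between the donor populations at each site.
fit = smf.logit(
    "reaction ~ C(age_group) + C(first_time) + C(donor_type) + ebv + C(center)",
    data=donors,
).fit()

print(fit.summary())       # coefficients on the log-odds scale
print(np.exp(fit.params))  # exponentiated coefficients: odds ratios
                           # (the intercept term is the baseline odds)
```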
Abstract:
The use of nonstandardized and inadequately validated outcome measures in atopic eczema trials is a major obstacle to practising evidence-based dermatology. The Harmonising Outcome Measures for Eczema (HOME) initiative is an international multiprofessional group dedicated to atopic eczema outcomes research. In June 2011, the HOME initiative conducted a consensus study involving 43 individuals from 10 countries, representing different stakeholders (patients, clinicians, methodologists, pharmaceutical industry) to determine core outcome domains for atopic eczema trials, to define quality criteria for atopic eczema outcome measures and to prioritize topics for atopic eczema outcomes research. Delegates were given evidence-based information, followed by structured group discussion and anonymous consensus voting. Consensus was achieved to include clinical signs, symptoms, long-term control of flares and quality of life into the core set of outcome domains for atopic eczema trials. The HOME initiative strongly recommends including and reporting these core outcome domains as primary or secondary endpoints in all future atopic eczema trials. Measures of these core outcome domains need to be valid, sensitive to change and feasible. Prioritized topics of the HOME initiative are the identification/development of the most appropriate instruments for the four core outcome domains. HOME is open to anyone with an interest in atopic eczema outcomes research.
Abstract:
Hospital radiological workflows are currently completing a transition from analog to digital technology. Now that digital X-ray detection technologies have matured, hospitals are exploiting the natural turnover of devices to replace conventional screen-film systems with digital ones. The transition is complex and involves not just the replacement of equipment but also new arrangements for image transmission, display (and reporting), and storage. This work focuses on the characterization of 2D digital detectors with regard to specific clinical applications; the system features linked to image quality are analyzed to assess clinical performance, conversion efficiency, and the minimum dose necessary to obtain an acceptable image. The first section overviews digital detector technologies, focusing on recent and promising technological developments. The second section describes the characterization methods considered in this thesis, categorized as physical, psychophysical, and clinical; the relevant theory, models, and procedures are described as well. The third section contains a set of characterizations performed on new equipment representing some of the most advanced technology available to date. The fourth section deals with procedures and schemes employed for quality assurance programs.
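The abstract does not name its physical metrics, but the standard figure of merit for a detector's conversion efficiency is the detective quantum efficiency (DQE), computed from the modulation transfer function (MTF), the normalized noise power spectrum (NNPS), and the incident photon fluence. A minimal sketch, assuming MTF and NNPS have already been measured on a common spatial-frequency axis; all input values below are illustrative, not measurements:

```python
# Sketch: detective quantum efficiency (DQE) from measured MTF and NNPS.
# DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the incident photon fluence
# (photons/mm^2) and NNPS the noise power spectrum normalized by the
# squared mean signal (units mm^2). Inputs here are illustrative.
import numpy as np

def dqe(mtf: np.ndarray, nnps: np.ndarray, fluence: float) -> np.ndarray:
    """DQE on the common spatial-frequency axis of mtf and nnps."""
    return mtf ** 2 / (fluence * nnps)

freq = np.linspace(0.05, 3.0, 60)   # cycles/mm
mtf = np.exp(-freq / 2.5)           # hypothetical measured MTF
nnps = 4e-6 * (1 + 0.2 * freq)      # hypothetical measured NNPS, mm^2
q = 3.0e5                           # hypothetical fluence, photons/mm^2

print(dqe(mtf, nnps, q)[:5])        # DQE at the lowest frequencies
```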
Abstract:
Therapeutic drug monitoring (TDM) comprises the measurement of drug concentrations in blood and relates the results to the patient's clinical presentation, on the assumption that blood concentrations correlate better with drug effect than the dose does. This also applies to antidepressants. Prerequisites for guiding therapy by TDM are the availability of valid laboratory assays and the correct application of the procedure in the clinic. The aim of this work was to analyse and improve the use of TDM in the treatment of depression. In a first step, a high-performance liquid chromatography (HPLC) method with column switching and spectrophotometric detection was established for the newly approved antidepressant duloxetine and applied to patients for TDM. Analysis of 280 patient samples showed that duloxetine concentrations of 60 to 120 ng/ml were associated with good clinical response and a low risk of adverse effects. Regarding its interaction potential, duloxetine proved to be a weak inhibitor of the cytochrome P450 (CYP) isoenzyme 2D6 compared with other antidepressants, with no indication of clinical relevance. In a second step, a method was to be developed with which as many different antidepressants as possible, including their metabolites, could be measured. An HPLC method with ultraviolet (UV) detection was therefore developed that allowed the quantitative analysis of ten antidepressants and two antipsychotics within 25 minutes, with sufficient precision and accuracy (both above 85%) and sensitivity. Column switching permitted the automated analysis of blood plasma or serum; interfering matrix components were removed on a pre-column without prior sample preparation. This cost- and time-effective procedure was a clear improvement for handling samples in routine laboratory work and thus for the TDM of antidepressants. Analysis of the clinical use of TDM identified a number of application errors. An attempt was therefore made to improve the clinical application of TDM of antidepressants by switching from largely manual documentation to electronic processing, and this work examined the effect of that intervention. A laboratory information system was introduced that handled the entire process electronically, from sample receipt to reporting of the results to the wards, and the use of TDM was compared before and after the changeover. The changeover was well accepted by the treating physicians. The laboratory system allowed cumulative retrieval of results and displayed each patient's treatment course, including previous hospital stays. However, implementing the system had only a minor effect on the quality of TDM use. Many requests remained erroneous before and after its introduction; for example, measurements were frequently requested before steady state had been reached. The speed of sample processing was unchanged compared with the previous manual workflow, as was the analytical quality in terms of accuracy and precision. Dosage recommendations issued for the requested substances were frequently ignored. The mean latency with which a dose was adjusted after a laboratory result had been reported was, however, shortened. Overall, this work contributes to improving the therapeutic drug monitoring of antidepressants; in clinical practice, however, interventions remain necessary to minimise application errors in the TDM of antidepressants.
Abstract:
In recent years food waste has gained growing importance in the international political and academic debate, in the context of the sustainability of production and consumption models, the efficient use of resources, and waste management. In the coming years the Member States of the European Union will be called upon to adopt specific food waste prevention strategies within a common frame of reference. That frame is the one taking shape in the European research project FUSIONS (FP7), which in 2014 produced a definitional framework for "food waste" intended to harmonise the different quantification methodologies adopted by the member states. Against this background, and with a view to drafting a National Food Waste Prevention Plan for Italy, this work applies the FUSIONS definitional framework for the first time to analyse the available data and identify the main flows at the different stages of the supply chain, and conducts an extensive consultation of stakeholders (and of the literature) to identify possible prevention measures and priorities for action. The results highlight, among other things, the need to establish and promote uniform quantification and reporting measures at the national level; the importance of stakeholder involvement in a national food waste prevention campaign; the need to guarantee adequate funding for the planning and implementation of prevention measures by local authorities, together with national coordination of regional programming; the need to harmonise and simplify the regulatory framework (fiscal, health and hygiene, procedural) governing the donation of food surpluses; and the urgency of investigating food waste in greater depth through sectoral studies of the downstream stages of the supply chain.
Abstract:
A free and plural press is a founding element of any democratic system and is essential to the creation of an informed public opinion capable of exercising scrutiny of, and pressure on, the ruling classes. Since their creation, newspapers have established themselves as a major source of information for the public. The second half of the twentieth century, moreover, saw technological innovations that brought great changes to the role of the printed press as a vehicle for transmitting news. From the spread of television to the digital revolution of the 1990s and 2000s, the speed at which information is created and transmitted has increased exponentially, the costs of producing and acquiring news have collapsed, and an enormous amount of data, capable of yielding a great deal of information about the ideas and content proposed by different authors over time, is now available to readers and researchers. Yet even though the digital revolution has greatly reduced the material costs of periodicals, producing news entails other expenses and therefore takes place in a market context, subject to the logic of supply and demand. This work analyses the role of demand, and of readers' imperfect rationality, in the news market, starting from the assumption that differences in consumers' opinions push outlets to adjust their supply of content to meet market demand, in order to test the applicability of the model used (Mullainathan and Shleifer, 2005) to the Italian context. To this end, the behaviour of several national newspapers was analysed during two events that deeply engaged Italian public opinion: the migration flows from the southern shore of the Mediterranean in October 2013 and the 2009 H1N1 influenza epidemic.
Abstract:
Background: There is concern that non-inferiority trials might be deliberately designed to conceal that a new treatment is less effective than a standard treatment. To test this hypothesis we performed a meta-analysis of non-inferiority trials to assess the average effect of experimental treatments compared with standard treatments. Methods: One hundred and seventy non-inferiority treatment trials published in 121 core clinical journals were included. The trials were identified through a search of PubMed (1991 to 20 February 2009). The combined relative risk (RR) from meta-analysis comparing experimental with standard treatments was the main outcome measure. Results: The 170 trials contributed a total of 175 independent comparisons of experimental with standard treatments. The combined RR for all 175 comparisons was 0.994 [95% confidence interval (CI) 0.978–1.010] using a random-effects model and 1.002 (95% CI 0.996–1.008) using a fixed-effects model. Of the 175 comparisons, the experimental treatment was considered non-inferior in 130 (74%). The combined RR for these 130 comparisons was 0.995 (95% CI 0.983–1.006); the point estimate favoured the experimental treatment in 58% (n = 76) and the standard treatment in 42% (n = 54). The median non-inferiority margin (RR) pre-specified by trialists was 1.31 [inter-quartile range (IQR) 1.18–1.59]. Conclusion: In this meta-analysis of non-inferiority trials the average RR comparing experimental with standard treatments was close to 1. The experimental treatments that gain a verdict of non-inferiority in published trials do not appear to be systematically less effective than the standard treatments. Importantly, publication bias and bias in the design and reporting of the studies cannot be ruled out and may have skewed the results in favour of the experimental treatments. Further studies are required to examine the importance of such bias.
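Combined relative risks of this kind are obtained by inverse-variance pooling of log risk ratios across trials. A minimal sketch of the fixed-effect version follows (a random-effects model adds a between-study variance term, e.g. DerSimonian-Laird); the event counts are hypothetical, not taken from the included trials:

```python
# Sketch: fixed-effect inverse-variance pooling of log relative risks,
# the kind of computation behind the combined RRs quoted above.
# Event counts below are hypothetical, not the trials' data.
import numpy as np

# (events_experimental, n_experimental, events_standard, n_standard)
trials = [(30, 200, 32, 200), (12, 150, 15, 150), (45, 400, 44, 400)]

log_rr, weights = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / var)

pooled = np.average(log_rr, weights=weights)
se = 1 / np.sqrt(np.sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Pooled RR {np.exp(pooled):.3f} "
      f"(95% CI {np.exp(lo):.3f}-{np.exp(hi):.3f})")
```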
Abstract:
Surveillance of wildlife health in Europe remains informal, and the reporting of wildlife diseases is not yet coordinated among countries. At a meeting in Brussels on 15 October 2009, delegates from 25 countries provided an overview of the current status of wildlife health surveillance in Europe. This showed that every year in Europe over 18,000 wild animals are examined as part of general surveillance programmes and over 50,000 wild animals are examined in the course of targeted surveillance. The participants at the Brussels meeting agreed to set up a European network for wildlife health surveillance. The goals of this network, which was established in February 2010, are to improve procedures for the rapid exchange of information, harmonise procedures for the investigation and diagnosis of wildlife diseases, share relevant expertise, and provide training opportunities for wildlife health surveillance.
Abstract:
Puppa G, Senore C, Sheahan K, Vieth M, Lugli A, Zlobec I, Pecori S, Wang L M, Langner C, Mitomi H, Nakamura T, Watanabe M, Ueno H, Chasle J, Conley S A, Herlin P, Lauwers G Y & Risio M (2012) Histopathology. Diagnostic reproducibility of tumour budding in colorectal cancer: a multicentre, multinational study using virtual microscopy.
Aims: Despite the established prognostic relevance of tumour budding in colorectal cancer, the reproducibility of the methods reported for its assessment has not yet been determined, limiting its use and reporting in routine pathology practice. Methods and results: A morphometric system within telepathology was devised to evaluate the reproducibility of the various methods published for the assessment of tumour budding in colorectal cancer. Five methods were selected to evaluate the diagnostic reproducibility among 10 investigators, using haematoxylin and eosin (H&E)- and AE1-3 cytokeratin-immunostained whole-slide digital scans from 50 pT1-pT4 colorectal cancers. The overall interobserver agreement was fair for all methods, and increased to moderate for pT1 cancers. The intraobserver agreement was also fair for all methods and moderate for pT1 cancers. Agreement depended on the participants' experience with tumour budding reporting and on performance time. Cytokeratin immunohistochemistry detected a higher percentage of tumour budding-positive cases with all methods compared with H&E-stained slides, but did not influence agreement levels. Conclusions: An overall fair level of diagnostic agreement for tumour budding in colorectal cancer was demonstrated, which was significantly higher in early cancer and among experienced gastrointestinal pathologists. Cytokeratin immunostaining facilitated detection of budding cancer cells, but did not result in improved interobserver agreement.
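Interobserver agreement of the "fair"/"moderate" kind reported above is conventionally quantified with kappa statistics. A minimal sketch of pairwise agreement using Cohen's kappa follows (with 10 raters, a multi-rater statistic such as Fleiss' kappa would typically be used); the ratings below are hypothetical:

```python
# Sketch: pairwise inter-observer agreement via Cohen's kappa, the kind
# of statistic behind "fair"/"moderate" agreement labels above.
# Ratings are hypothetical (1 = budding-positive, 0 = negative).
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = {  # observer -> calls on the same set of slides
    "obs1": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "obs2": [1, 0, 0, 1, 0, 1, 0, 1, 1, 1],
    "obs3": [0, 0, 1, 1, 0, 1, 0, 0, 1, 0],
}

for a, b in combinations(ratings, 2):
    kappa = cohen_kappa_score(ratings[a], ratings[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
# Conventional bands: <0.21 slight, 0.21-0.40 fair, 0.41-0.60 moderate.
```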
Abstract:
A marker that is strongly associated with outcome (or disease) is often assumed to be effective for classifying individuals according to their current or future outcome. However, for this to be true, the associated odds ratio must be of a magnitude rarely seen in epidemiological studies. An illustration of the relationship between odds ratios and receiver operating characteristic (ROC) curves shows, for example, that a marker with an odds ratio as high as 3 is in fact a very poor classification tool. If a marker identifies 10 percent of controls as positive (false positives) and has an odds ratio of 3, then it will only correctly identify 25 percent of cases as positive (true positives). Moreover, the authors illustrate that a single measure of association such as an odds ratio does not meaningfully describe a marker’s ability to classify subjects. Appropriate statistical methods for assessing and reporting the classification power of a marker are described. The serious pitfalls of using more traditional methods based on parameters in logistic regression models are illustrated.
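The 25 percent figure follows directly from the definition of the odds ratio: at a fixed false-positive rate, the true-positive rate satisfies TPR/(1-TPR) = OR × FPR/(1-FPR). A quick sketch checking the worked example:

```python
# Sketch: recovering the true-positive rate from an odds ratio and a
# false-positive rate, reproducing the 25% figure in the abstract.
def tpr_from_or(odds_ratio: float, fpr: float) -> float:
    """Solve TPR/(1-TPR) = OR * FPR/(1-FPR) for TPR."""
    odds = odds_ratio * fpr / (1 - fpr)
    return odds / (1 + odds)

print(tpr_from_or(3, 0.10))   # 0.25: only 25% of cases flagged
print(tpr_from_or(36, 0.10))  # an OR of ~36 is needed to flag 80% of cases
```

The second line illustrates the abstract's point: odds ratios of the magnitude needed for useful classification (here, roughly 36) are rarely seen in epidemiological studies.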
Abstract:
OBJECTIVE: To review the accuracy of electrocardiography in screening for left ventricular hypertrophy in patients with hypertension. DESIGN: Systematic review of studies of the test accuracy of six electrocardiographic indexes: the Sokolow-Lyon index, Cornell voltage index, Cornell product index, Gubner index, and Romhilt-Estes scores with thresholds for a positive test of ≥4 points or ≥5 points. DATA SOURCES: Electronic databases ((Pre-)Medline, Embase), reference lists of relevant studies and previous reviews, and experts. STUDY SELECTION: Two reviewers scrutinised abstracts and examined potentially eligible studies. Studies comparing an electrocardiographic index with echocardiography in hypertensive patients and reporting sufficient data were included. DATA EXTRACTION: Data on study populations, echocardiographic criteria, and the methodological quality of studies were extracted. DATA SYNTHESIS: Negative likelihood ratios, which indicate to what extent the posterior odds of left ventricular hypertrophy are reduced by a negative test, were calculated. RESULTS: 21 studies and data on 5608 patients were analysed. The median prevalence of left ventricular hypertrophy was 33% (interquartile range 23-41%) in primary care settings (10 studies) and 65% (37-81%) in secondary care settings (11 studies). The median negative likelihood ratio was similar across electrocardiographic indexes, ranging from 0.85 (range 0.34-1.03) for the Romhilt-Estes score (threshold ≥4 points) to 0.91 (0.70-1.01) for the Gubner index. Using the Romhilt-Estes score in primary care, a negative electrocardiogram result would reduce the typical pre-test probability from 33% to 31%. In secondary care the typical pre-test probability of 65% would be reduced to 63%. CONCLUSION: Electrocardiographic criteria should not be used to rule out left ventricular hypertrophy in patients with hypertension.
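The pre- to post-test shifts quoted above follow from Bayes' theorem on the odds scale: post-test odds equal pre-test odds multiplied by the likelihood ratio. A minimal sketch checking the order of magnitude (the abstract's exact 31% and 63% figures rest on its own pooled estimates):

```python
# Sketch: post-test probability from pre-test probability and a negative
# likelihood ratio (LR-), per Bayes' theorem on the odds scale.
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Post-test odds = pre-test odds * LR; convert back to probability."""
    odds = pre_test_prob / (1 - pre_test_prob) * lr
    return odds / (1 + odds)

# Median LR- of 0.85 (Romhilt-Estes, threshold >= 4 points):
print(post_test_probability(0.33, 0.85))  # ~0.30 in primary care
print(post_test_probability(0.65, 0.85))  # ~0.61 in secondary care
# Either way, a negative ECG barely moves the probability, which is the
# basis for the conclusion that it cannot rule out hypertrophy.
```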
Abstract:
OBJECTIVE: To assess the methodology of meta-analyses published in leading general and specialist medical journals over a 10-year period. STUDY DESIGN AND SETTING: Volumes 1993-2002 of four general medicine journals and four specialist journals were searched by hand for meta-analyses including at least five controlled trials. Characteristics were assessed using a standardized questionnaire. RESULTS: A total of 272 meta-analyses, which included a median of 11 trials (range 5-195), were assessed. Most (81%) were published in general medicine journals. The median (range) number of databases searched increased from 1 (1-9) in 1993/1994 to 3.5 (1-21) in 2001/2002, P<0.0001. The proportion of meta-analyses including searches by hand (10% in 1993/1994, 25% in 2001/2002, P=0.005), searches of the grey literature (29%, 51%, P=0.010 by chi-square test), and of trial registers (10%, 32%, P=0.025) also increased. Assessments of the quality of trials also became more common (45%, 70%, P=0.008), including whether allocation of patients to treatment groups had been concealed (24%, 60%, P=0.001). The methodological and reporting quality was consistently higher in general medicine compared to specialist journals. CONCLUSION: Many meta-analyses published in leading journals have important methodological limitations. The situation has improved in recent years but considerable room for further improvements remains.
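The P-values for change over time reported above are comparisons of two proportions. A minimal sketch with statsmodels, using hypothetical counts; the review's actual per-period denominators are not reproduced here:

```python
# Sketch: two-sample test of proportions, the kind of comparison behind
# P-values such as "10% in 1993/1994 vs 25% in 2001/2002, P=0.005".
# Counts are hypothetical, chosen only to illustrate the computation.
from statsmodels.stats.proportion import proportions_ztest

hand_searched = [4, 15]  # meta-analyses including hand searches, per period
totals = [40, 60]        # meta-analyses assessed in each period

stat, pvalue = proportions_ztest(hand_searched, totals)
print(f"z = {stat:.2f}, P = {pvalue:.3f}")
```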