925 results for SYSTEMATIC-ERROR CORRECTION
Abstract:
The aim of this study was to investigate the influence of image resolution on the photogrammetric measurement of the rearfoot static angle. The study design was that of a reliability study. We evaluated 19 healthy young adults (11 females and 8 males). The photographs were taken at 1536 pixels in the greatest dimension, resized into four different resolutions (1200, 768, 600 and 384 pixels) and analyzed by three equally trained examiners on a 96-pixel-per-inch (ppi) screen. An experienced physiotherapist marked the anatomical landmarks of the rearfoot static angle on two occasions within a 1-week interval, and the three examiners then marked the angles on the digital pictures. The systematic error and the smallest detectable difference were calculated from the angle values across image resolutions and times of evaluation. Different resolutions were compared by analysis of variance. Inter- and intra-examiner reliability was calculated by intra-class correlation coefficients (ICC). The rearfoot static angles obtained by the examiners did not differ between resolutions (P > 0.05); however, the higher the image resolution, the better the inter-examiner reliability. The intra-examiner reliability (within a 1-week interval) was considered unacceptable for all image resolutions (ICC range: 0.08-0.52). A whole-body image of an adult with a minimum size of 768 pixels analyzed on a 96-ppi screen can provide very good inter-examiner reliability for photogrammetric measurement of the rearfoot static angle (ICC range: 0.85-0.92), although the intra-examiner reliability within each resolution was not acceptable. Therefore, this method is not a proper tool for follow-up evaluations of patients within a therapeutic protocol.
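The reliability figures above rest on intra-class correlation coefficients. As a minimal sketch of how an inter-examiner ICC can be computed from a subjects-by-raters matrix, the following implements a two-way random-effects, single-measure ICC(2,1); the function name and the choice of this particular ICC form are illustrative assumptions, since the abstract does not state which variant was used.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1).
    ratings: (n_subjects, k_raters) matrix of angle measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ms_rows = ss_rows / (n - 1)                            # between subjects
    ms_cols = ss_cols / (k - 1)                            # between raters
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With perfectly agreeing raters the statistic is 1; systematic disagreement between raters pulls it down.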
Abstract:
Study I: Real Wage Determination in the Swedish Engineering Industry. This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect of rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect due to positive changes in the participation rates in ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of being a participant in an ALMP increases the utility for workers of not being employed in the industry, which in turn could increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect due to positive changes in the unemployment rate. Study II: Intersectoral Wage Linkages in Sweden. The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2.
Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability of wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability of wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence of a wage-leading role in line with the assumptions of the Scandinavian model for any of the sectors. Study III: Wage and Price Determination in the Private Sector in Sweden. The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the “Imperfect competition model of inflation”, which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as a result of a “battle of mark-ups” between trade unions and firms. The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two.
Thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent. Furthermore, an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement during 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The “offensive” devaluation of the Swedish krona by 16 per cent in 1982:4, as well as the later float and substantial depreciation of the krona, affected the determination of import prices.
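The error-correction logic running through these studies can be illustrated, in a much simplified form, by the two-step Engle-Granger procedure on synthetic data: estimate a long-run (cointegrating) relation between wages and prices by OLS, then regress wage changes on the lagged error-correction term. This is a hedged sketch with made-up data, not the Johansen maximum likelihood procedure the studies actually apply; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
# common stochastic trend: (log) prices follow a random walk
p = np.cumsum(rng.normal(size=T))
# wages track prices one-for-one in the long run, plus stationary noise
w = p + rng.normal(scale=0.5, size=T)

# step 1: long-run relation w_t = a + b * p_t, estimated by OLS
X = np.column_stack([np.ones(T), p])
a, b = np.linalg.lstsq(X, w, rcond=None)[0]
ect = w - (a + b * p)            # error-correction term (residual)

# step 2: short-run dynamics, regress delta w_t on the lagged ECT
dw = np.diff(w)
Z = np.column_stack([np.ones(T - 1), ect[:-1]])
gamma = np.linalg.lstsq(Z, dw, rcond=None)[0]
adjustment = gamma[1]            # negative: wages correct back to the relation
```

A negative adjustment coefficient is the hallmark of cointegration: deviations from the long-run wage-price relation are gradually eliminated.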
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by a majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques: the first one, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both of the above techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At first, the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced.
The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual picks. Probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of the cross-correlation is that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long processing time, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry if so suggested by the results at an early stage.
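The waveform alignment described above can be sketched in a few lines: the relative delay between two similar waveforms is taken from the peak of their cross-correlation, refined to sub-sample precision by parabolic interpolation around that peak, which plays the role of the signal interpolation mentioned for the local-scale analysis. This is a generic illustration, not the thesis code; the function name is an assumption.

```python
import numpy as np

def delay(a, b):
    """Lag (in samples) by which waveform b trails waveform a,
    from the cross-correlation peak with parabolic sub-sample refinement."""
    c = np.correlate(b, a, mode="full")
    k = int(np.argmax(c))
    lag = float(k - (len(a) - 1))        # integer-sample lag
    if 0 < k < len(c) - 1:               # refine with a parabola through 3 points
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            lag += 0.5 * (y0 - y2) / denom
    return lag
```

On a pulse shifted by a known number of samples, the estimate recovers the shift; with noisy field data the parabolic step is what buys resolution below the sampling interval.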
Abstract:
In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of the results. Many standards are available in every area of noise measurement, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard related to the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate to a quality system, are technically competent and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations of the results. The standard requires appropriate methods of analysis to be used for estimating uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurement according to specific ISO standards and European Directives, the evaluation of measurement uncertainty is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because results are affected by systematic errors and standard deviations that depend on the number of microphones arranged on the surface, their spatial position and the complexity of the sound field.
A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
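For context, the quantity such measurements aim at is the sound power level, obtained from an energy average of the sound pressure levels over the microphone array plus a term for the area of the enveloping surface. The following is a minimal sketch of that basic relation, omitting the environmental and background-noise corrections that the standard prescribes; the function and argument names are illustrative.

```python
import math

def sound_power_level(lp_db, surface_m2):
    """ISO 3744-style sketch: surface-averaged SPL -> sound power level.
    lp_db: SPL readings at the array microphones (dB);
    surface_m2: area of the enveloping measurement surface (S0 = 1 m^2)."""
    # energy (mean-square) average, not an arithmetic average of dB values
    mean_sq = sum(10.0 ** (lp / 10.0) for lp in lp_db) / len(lp_db)
    lp_bar = 10.0 * math.log10(mean_sq)
    return lp_bar + 10.0 * math.log10(surface_m2)
```

In a perfectly uniform field the result is independent of the number of microphones; the systematic error and scatter discussed above arise precisely because real fields are not uniform over the surface.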
Abstract:
The three-spectrometer facility at the Mainz Institute for Nuclear Physics was extended by an additional spectrometer, which is distinguished by its short overall length and is therefore called the Short-Orbit Spectrometer (SOS). At the nominal distance of the SOS from the target (66 cm), the particles to be detected travel a mean path length of 165 cm between the reaction point and the detector. For pion production near threshold, this raises the survival probability of charged pions with a momentum of 100 MeV/c from 15% to 73% compared with the large spectrometers. Consequently, the systematic error ("muon contamination"), for instance in the planned measurement of the weak form factors G_A(Q²) and G_P(Q²), is reduced significantly. The focus of this thesis is the drift chamber of the SOS. Its low mass per unit area (0.03% X_0), chosen to reduce small-angle scattering, is optimized for the detection of low-energy pions. Owing to the novel geometry of the detector, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient method for calibrating the drift-distance/drift-time relation, which is represented by cubic splines, was implemented. The resolution of the tracking detector is 76 µm for the position coordinate and 0.23° for the angular coordinate in the dispersive plane (most probable error), and correspondingly 110 µm and 0.29° in the non-dispersive plane. To trace the detector coordinates back to the reaction point, the inverse transfer matrix of the spectrometer was determined. For this purpose, electrons quasi-elastically scattered off protons in the ¹²C nucleus were used, whose initial angles were defined by a hole collimator. This yields experimental values for the mean angular resolution at the target of sigma_phi = 1.3 mrad and sigma_theta = 10.6 mrad.
Since the momentum calibration of the SOS can only be carried out by means of quasi-elastic scattering (a two-arm experiment), the contribution of the proton arm to the width of the missing-mass peak must be estimated in a Monte Carlo simulation and deconvolved. For now it can only be estimated that the momentum resolution is certainly better than 1%.
Abstract:
This thesis presents the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam, a muon neutrino beam produced at CERN. The OPERA detector observes muon neutrinos 730 km away from the source. Previous measurements of the neutrino velocity have been performed by other experiments. Since the OPERA experiment aims at the direct observation of muon neutrino oscillations into tau neutrinos, a higher-energy beam is employed. This characteristic, together with the higher number of interactions in the detector, allows for a measurement with a much smaller statistical uncertainty. Moreover, a much more sophisticated timing system (composed of cesium clocks and GPS receivers operating in “common view mode”) and a fast waveform digitizer (installed at CERN and able to measure the internal time structure of the proton pulses used for the CNGS beam) allow for a new measurement with a smaller systematic error. Theoretical models of Lorentz-violating effects can be investigated by neutrino velocity measurements with terrestrial beams. The analysis was carried out with a blind method in order to guarantee the internal consistency and the soundness of each calibration measurement. The measurement is the most precise one performed with a terrestrial neutrino beam: the statistical accuracy achieved by the OPERA measurement is about 10 ns and the systematic error is about 20 ns.
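The quoted nanosecond uncertainties can be put in perspective with a little arithmetic: over the 730 km baseline, a timing offset maps almost linearly into a relative velocity deviation (v - c)/c. The numbers below are rounded from the abstract; the function name is illustrative.

```python
c = 299_792_458.0          # speed of light, m/s
L = 730_000.0              # CERN -> detector baseline, m (rounded)
tof_light = L / c          # light time of flight, ~2.44 ms

def rel_velocity_deviation(dt_early_s):
    """(v - c)/c for a neutrino arriving dt_early_s seconds before light."""
    v = L / (tof_light - dt_early_s)
    return (v - c) / c
```

A 10 ns offset corresponds to (v - c)/c of about 4e-6, which is why nanosecond-level clock synchronisation and proton-pulse timing dominate the error budget.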
Abstract:
This thesis is focused on the study of techniques that allow reliable transmission of multimedia content in streaming and broadcasting applications, targeting in particular video content. The design of efficient error-control mechanisms to enhance the reliability of video transmission systems has been addressed, considering cross-layer and multi-layer/multi-dimensional channel coding techniques to cope with bit errors as well as packet erasures. Mechanisms for unequal time interleaving have been designed as a viable solution to reduce the impact of errors and erasures by acting on the time diversity of the data flow, thus enhancing robustness against correlated channel impairments. In order to account for the nature of the factors which affect the physical-layer channel in the evaluation of FEC scheme performance, an ad-hoc error-event model has been devised. In addition, the impact of error correction/protection techniques on the quality perceived by the consumers of video services has been studied, along with techniques for objective/subjective quality evaluation. The applicability and value of the proposed techniques have been tested by considering practical constraints and requirements of real system implementations.
Abstract:
In this work, extensive laser-spectroscopic studies were carried out with the aim of an improved understanding of the highly complex spectra of the lanthanides and actinides. One focus was the determination of first ionization potentials of these elements that were previously unknown or known only with unsatisfactory accuracy. Three different experimental methods were employed for this purpose. The determination of the ionization potential from Rydberg series was first successfully tested on the examples of iron, manganese and cobalt, with measured values of IP_Fe = 63737.6 ± 0.2_stat ± 0.1_syst cm−1, IP_Mn = 59959.6 ± 0.4 cm−1 and IP_Co = 63564.77 ± 0.12 cm−1, respectively. The existing literature values were confirmed in these cases, and for iron and cobalt the accuracy was improved by a factor of about three and eight, respectively. In the case of the lanthanides and actinides, however, the complexity of the spectra is so high that Rydberg series can hardly, if at all, be identified among a multitude of other states of arbitrary configuration. To determine the ionization potential nonetheless, delayed pulsed-field ionization as well as the isolated-core-excitation (ICE) method were tested on the example of dysprosium. From the Rydberg series identified in this way, values of IP_field = 47899 ± 3 cm−1 and IP_ICE = 47900.4 ± 1.4 cm−1 were determined. As a complementary approach, which relies on a spectrum as rich as possible near the ionization potential, the saddle-point method was additionally employed with success. The ionization potential of dysprosium was thereby determined to be IP_Dy = 47901.8 ± 0.3 cm−1; using samarium, whose ionization potential is known from the literature with the highest accuracy, it could be confirmed that the systematic errors that occur are smaller than 1 cm−1.
The previously very poorly known ionization potential of praseodymium was finally measured to be IP_Pr = 44120.0 ± 0.6 cm−1. This corrects the previous literature value upwards by about 50 cm−1 while improving the accuracy by two orders of magnitude. From the systematics of the lanthanide ionization potentials, the ionization potential of radioactive promethium could then be predicted as IP_Pm = 44985 ± 140 cm−1. Finally, laser resonance ionization of the element protactinium was demonstrated and its ionization potential determined experimentally for the first time. A value of 49000 ± 110 cm−1 could be inferred from the characteristic behaviour of different excitation schemes. This value lies about 1500 cm−1 above the previous literature value, and theoretical predictions also deviate strongly. Both deviations can be understood very well by considering the systematics of the ionization potentials across the actinide series.
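Determining an ionization potential from a Rydberg series amounts to fitting E_n = IP − R/(n − δ)² to the measured level energies. The sketch below does this on synthetic data with a simple grid search over the quantum defect δ, solving for IP by least squares at each trial value; the function name and the grid-search strategy are illustrative assumptions, not the analysis actually used in the thesis.

```python
import numpy as np

RYD = 109737.316  # Rydberg constant (infinite nuclear mass), cm^-1

def fit_ionization_potential(n, e_levels):
    """Fit E_n = IP - RYD/(n - delta)^2 to a Rydberg series.
    Scans the quantum defect delta on a grid; at each trial value the
    best-fitting IP is simply the mean of e_levels + RYD/(n - delta)^2."""
    best = None
    for delta in np.linspace(0.0, 0.99, 991):
        ip = float(np.mean(e_levels + RYD / (n - delta) ** 2))
        resid = e_levels - (ip - RYD / (n - delta) ** 2)
        sse = float(np.sum(resid ** 2))
        if best is None or sse < best[0]:
            best = (sse, ip, delta)
    return best[1], best[2]      # (IP in cm^-1, quantum defect)
```

On a clean synthetic series the fit recovers the series limit essentially exactly; with real line lists the hard part is identifying which lines belong to the series at all, which is exactly the difficulty described above for the lanthanides and actinides.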
Abstract:
This thesis presents an analysis for the search for Supersymmetry with the ATLAS detector at the LHC. The final state with one lepton, several coloured particles and large missing transverse energy was chosen. Particular emphasis was placed on the optimization of the requirements for lepton identification; this optimization proved particularly useful in combination with multi-lepton selections. The systematic error associated with higher-order QCD diagrams in Monte Carlo production is given particular focus. Methods to verify and correct the energy measurement of hadronic showers are developed, and methods for the identification and removal of mismeasurements caused by the detector are applied in the single-muon and four-jet environment. A new detector simulation system is shown to provide good prospects for future fast Monte Carlo production. The analysis was performed for $35 pb^{-1}$ and no significant deviation from the Standard Model is seen. Exclusion limits are set for minimal Supergravity, extending previous limits set by the Tevatron and LEP.
Abstract:
Plasmonic nanoparticles exhibit strong light scattering efficiency due to the oscillations of their conductive electrons (plasmon), which are excited by light. For rod-shaped nanoparticles, the resonance position is highly tunable by the aspect ratio (length/width) and the sensitivity to changes in the refractive index in the local environment depends on their diameter, hence their volume. Therefore, rod-shaped nanoparticles are highly suitable as plasmonic sensors. Within this thesis, I study the formation of gold nanorods and nanorods from a gold-copper alloy using a combination of small-angle X-ray scattering and optical extinction spectroscopy. The latter represents one of the first metal alloy nanoparticle synthesis protocols for producing rod-shaped single crystalline gold-copper (AuxCu(1-x)) alloyed nanoparticles. I find that both length and width independently follow an exponential growth behavior with different time constants, which intrinsically leads to a switch between positive and negative aspect ratio growth during the course of the synthesis. In a parameter study, I find linear relations for the rate constants as a function of the [HAuCl4]/[CTAB] ratio and the [HAuCl4]/[seed] ratio. Furthermore, I find a correlation of the final aspect ratio with the ratio of the rate constants for length and width growth for different [AgNO3]/[HAuCl4] ratios. I identify ascorbic acid as the yield-limiting species in the reaction by the use of spectroscopic monitoring and TEM. Finally, I present the use of plasmonic nanorods that absorb light at 1064 nm as contrast agents for photoacoustic imaging (BMBF project Polysound). In the physics part, I present my automated dark-field microscope that is capable of collecting spectra in the range of 450 nm to 1750 nm. I show the characteristics of that setup for spectra acquisition in the UV-VIS range and how I use this information to simulate measurements.
I show the major noise sources of the measurements and ways to reduce the noise, and how the combination of setup characteristics and simulations of sensitivity and sensing volume can be used to select appropriate gold rods for single unlabeled protein detection. Using my setup, I show how to estimate the size of gold nanorods directly from the plasmon linewidth measured from optical single-particle spectra. Then, I use this information to reduce the distribution (between particles) of the measured plasmonic sensitivity S by 30% by correcting for the systematic error introduced by the variation in particle size. I investigate the single-particle scattering of bowtie structures, i.e. structures consisting of two (mostly) equilateral triangles pointing one tip at each other. I simulate the spectra of the structures considering the oblique illumination angle in my setup, which leads to additional plasmon modes in the spectra. The simulations agree well with the measurements from a qualitative point of view.
Abstract:
Digital Volume Correlation (DVC) is a full-field, non-invasive measurement technique that allows displacements and strains to be measured inside a bone structure. By comparing images of the unloaded and loaded specimen, obtained with computed tomography systems, one can obtain the displacement map for each direction and, by differentiation, the strain map for each strain component. The aim of this thesis work is the validation of DVC, through the determination of the systematic error (accuracy) and the random error (precision), so as to assess the reliability of the instrumentation. The evaluation is carried out on porcine vertebra specimens, augmented and not, both at the organ level and at the tissue level.
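In DVC validation studies, accuracy and precision are commonly quantified from strain fields measured between repeated scans of an unloaded specimen, where the true strain is zero: the systematic error is the mean measured strain and the random error its scatter. The following is a hedged sketch of that bookkeeping, following a common convention in the DVC literature rather than necessarily the exact metrics of this thesis; names are illustrative.

```python
import numpy as np

def accuracy_precision(strain_fields):
    """strain_fields: (n_repeats, n_points) strains measured between
    repeated scans of an unloaded specimen (true strain = 0 everywhere).
    Returns (systematic error, random error): the average of the
    per-repeat mean strains, and the average per-repeat scatter."""
    per_repeat_mean = strain_fields.mean(axis=1)
    per_repeat_std = strain_fields.std(axis=1, ddof=1)
    return float(per_repeat_mean.mean()), float(per_repeat_std.mean())
```

Feeding in synthetic zero-strain fields with a known bias and scatter recovers those two numbers, which is exactly the check one wants before trusting DVC strain maps on loaded specimens.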
Abstract:
Hemianopic patients make a systematic error in line bisection, showing a contra-lesional bias towards their blind side, which is the opposite of that in hemineglect patients. This error has been attributed variously to the visual field defect, to long-term strategic adaptation, or to independent effects of damage to extrastriate cortex. To determine if hemianopic bisection error can occur without the latter two factors, we studied line bisection in healthy subjects with simulated homonymous hemianopia using a gaze-contingent display, with different line-lengths, and with or without markers at both ends of the lines. Simulated homonymous hemianopia did induce a contra-lesional bisection error and this was associated with increased fixations towards the blind field. This error was found with end-marked lines and was greater with very long lines. In a second experiment we showed that eccentric fixation alone produces a similar bisection error and eliminates the effect of line-end markers. We conclude that a homonymous hemianopic field defect alone is sufficient to induce both a contra-lesional line bisection error and previously described alterations in fixation distribution, and does not require long-term adaptation or extrastriate damage.
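For reference, the quantity measured in such line-bisection experiments is the signed deviation of the marked point from the true midpoint, usually normalised by line length. A minimal sketch, in which the sign convention (negative = deviation toward the left end) and the function name are illustrative assumptions rather than the paper's exact metric:

```python
def bisection_error_pct(marked_x, left_x, right_x):
    """Signed bisection error as a percentage of line length.
    Negative values denote deviation toward the left line end."""
    midpoint = (left_x + right_x) / 2.0
    return 100.0 * (marked_x - midpoint) / (right_x - left_x)
```

A mark at 45 on a line spanning 0 to 100 gives -5%, i.e. a leftward (contra-lesional, for a simulated right-sided field loss) bias of the kind described above.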
Abstract:
Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed the CONSORT (Consolidated Standards of Reporting Trials) statement to improve the quality of reporting of RCTs. It was first published in 1996 and updated in 2001. The statement consists of a checklist and flow diagram that authors can use for reporting an RCT. Many leading medical journals and major international editorial groups have endorsed the CONSORT statement. The statement facilitates critical appraisal and interpretation of RCTs. During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001 alongside the 2001 version of the CONSORT statement. After an expert meeting in January 2007, the CONSORT statement has been further revised and is published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanation and elaboration document, intended to enhance the use, understanding, and dissemination of the CONSORT statement, has also been extensively revised.
It presents the meaning and rationale for each new and updated checklist item providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included. The CONSORT 2010 Statement, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials.