929 results for Probabilistic Error Correction
Abstract:
In this work, a new mathematical correction approach for overcoming spectral and transport interferences was proposed. The approach was applied to eliminate the spectral interference caused by PO molecules at the 217.0005 nm Pb line and the transport interference caused by variations in phosphoric acid concentration. Correction may be necessary at 217.0005 nm to account for the contribution of PO, since $A_{\text{total}}^{217.0005\,\text{nm}} = A_{\text{Pb}}^{217.0005\,\text{nm}} + A_{\text{PO}}^{217.0005\,\text{nm}}$. This may be easily done by measuring another PO wavelength (e.g. 217.0458 nm) and calculating the relative contribution of the PO absorbance ($A_{\text{PO}}$) to the total absorbance ($A_{\text{total}}$) at 217.0005 nm: $A_{\text{Pb}}^{217.0005\,\text{nm}} = A_{\text{total}}^{217.0005\,\text{nm}} - A_{\text{PO}}^{217.0005\,\text{nm}} = A_{\text{total}}^{217.0005\,\text{nm}} - k\,A_{\text{PO}}^{217.0458\,\text{nm}}$. The correction factor $k$ is calculated from the slopes of calibration curves built for phosphorus (P) standard solutions measured at 217.0005 and 217.0458 nm, i.e. $k = \text{slope}_{217.0005\,\text{nm}} / \text{slope}_{217.0458\,\text{nm}}$. For a wavelength-integrated absorbance of 3 pixels and a sample aspiration rate of 5.0 mL min⁻¹, analytical curves in the 0.1-1.0 mg L⁻¹ Pb range with linearity better than 0.9990 were consistently obtained. Calibration curves for P at 217.0005 and 217.0458 nm with linearity better than 0.998 were obtained. Relative standard deviations (RSD) of measurements (n = 12) were in the 1.4-4.3% and 2.0-6.0% ranges without and with the mathematical correction approach, respectively. The limit of detection calculated for the analytical line at 217.0005 nm was 10 µg L⁻¹ Pb. Recoveries for Pb spikes were in the 97.5-100% and 105-230% intervals with and without the mathematical correction approach, respectively.
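As a minimal illustration of the correction formula above, the following sketch computes $k$ from the two calibration slopes and applies it to a measured pair of absorbances. All numeric values are made up for the example and are not from the paper.

```python
# Sketch of the PO interference correction: k from calibration slopes,
# then A_Pb(217.0005) = A_total(217.0005) - k * A_PO(217.0458).

def po_correction_factor(slope_217_0005: float, slope_217_0458: float) -> float:
    """k = slope(217.0005 nm) / slope(217.0458 nm) from P calibration curves."""
    return slope_217_0005 / slope_217_0458

def corrected_pb_absorbance(a_total_217_0005: float,
                            a_po_217_0458: float,
                            k: float) -> float:
    """Subtract the PO contribution from the total absorbance at the Pb line."""
    return a_total_217_0005 - k * a_po_217_0458

# Hypothetical slopes and absorbances, for illustration only:
k = po_correction_factor(slope_217_0005=0.042, slope_217_0458=0.105)
a_pb = corrected_pb_absorbance(a_total_217_0005=0.350, a_po_217_0458=0.200, k=k)
print(f"k = {k:.3f}, corrected Pb absorbance = {a_pb:.3f}")
```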
Abstract:
This master's thesis examines the uncertainties of the level 2 probabilistic risk analysis of the Loviisa nuclear power plant. Level 2 risk studies investigate nuclear power plant accidents in which part of the reactor's radioactive material is released into the environment. The main result of these studies is the annual frequency of a large release, which is principally a statistical expected value based on actual plant history. The credibility of this expected value can be improved by taking into account the most significant uncertainties involved in the calculation. Uncertainties in the calculation arise, among other things, from severe reactor accident phenomena, safety system equipment, human actions, and undefined parts of the reliability model. The thesis describes how uncertainty analyses are integrated into the probabilistic risk analyses of the Loviisa nuclear power plant. This is implemented with the auxiliary programs PRALA and PRATU, developed in this thesis, which allow uncertainty parameters derived from plant history to be added to the reliability data of the risk analyses. In addition, as a calculation example, the thesis computes a confidence interval describing the variation of the annual large release frequency of the Loviisa plant. This example is based mostly on conservative uncertainty estimates, not on actual statistical uncertainties. Based on the results of the calculation example, the large release frequency of Loviisa has a wide range of variation; an error factor of 8.4 was obtained with the current uncertainty parameters. The confidence interval of the large release frequency can, however, be narrowed in the future by using uncertainty parameters based on actual plant history.
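The thesis's PRALA/PRATU tools are not described in detail here, but the general idea of propagating parameter uncertainties to a confidence interval for the large release frequency can be sketched with a Monte Carlo simulation. The sketch below assumes lognormal uncertainties and the conventional lognormal definition of the error factor (p95/p50); the sequence frequencies are hypothetical.

```python
# Monte Carlo propagation of lognormal parameter uncertainties to a
# large-release-frequency (LRF) confidence interval; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical release sequences: (median frequency per year, error factor)
sequences = [(1e-6, 3.0), (5e-7, 10.0), (2e-6, 5.0)]

lrf = np.zeros(N)
for median, ef in sequences:
    # For a lognormal distribution, EF = p95/p50 = exp(1.645 * sigma)
    sigma = np.log(ef) / 1.645
    lrf += rng.lognormal(mean=np.log(median), sigma=sigma, size=N)

p05, p50, p95 = np.percentile(lrf, [5, 50, 95])
print(f"LRF 90% interval: [{p05:.2e}, {p95:.2e}] per year; "
      f"error factor = {p95/p50:.1f}")
```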
Abstract:
This work examines the failure tolerance analysis requirements of the new YVL guides and develops a method by which fulfilment of the requirements can be examined using probabilistic risk analysis (PRA). The work covers the most important parts of a risk analysis, the importance measures obtained as its results, and their fields of application. The importance measures are also used as initial values of the developed method. To guarantee the safety of a nuclear power plant, the systems performing the most important safety functions must be able to carry out their task even if any single component of a system is inoperable and any other component affecting the safety function is simultaneously out of service for repair or maintenance. This requires that, to guarantee failure tolerance, the most important safety functions are assured, wherever possible, by systems based on the redundancy and diversity principles, and that these systems are mutually independent. With the developed method and a new added-value measure for failure tolerance, the dependencies between systems can be identified and the fulfilment of the required safety factors examined.
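The importance measures used as inputs to the method are not defined in the abstract; as a hedged illustration, the sketch below computes two standard PRA importance measures (Fussell-Vesely and Risk Achievement Worth) for a toy two-train system, not the thesis's plant model.

```python
# Toy fault tree: top event = (train A fails AND train B fails) OR
# common-cause failure C. Probabilities are hypothetical.
def top_probability(pA: float, pB: float, pC: float) -> float:
    # P((A and B) or C), assuming independence
    p_ab = pA * pB
    return p_ab + pC - p_ab * pC

pA, pB, pC = 1e-2, 1e-2, 1e-4
base = top_probability(pA, pB, pC)

# Fussell-Vesely: fractional risk contribution of event A
fv_A = (base - top_probability(0.0, pB, pC)) / base
# Risk Achievement Worth: risk with A assumed failed, relative to base
raw_A = top_probability(1.0, pB, pC) / base
print(f"base = {base:.2e}, FV(A) = {fv_A:.3f}, RAW(A) = {raw_A:.1f}")
```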
Abstract:
Thermal and air conditions inside animal facilities change during the day due to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed over the facility area must be monitored. This work suggests that the time variation of environmental variables of interest for animal production, monitored within an animal facility, can be modeled accurately from discrete-time records. The aim of this study was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations are independent of the time spent during the measurement. The proposed method brings values recorded with time delays close to those expected at the exact moment of interest, as if the data had been measured simultaneously at all spatially distributed points. The correction model was validated for the air temperature variable, and the values corrected by the method did not differ, by Tukey's test at 5% significance, from the actual values recorded by data loggers.
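The paper's exact formulation is not reproduced in the abstract; the sketch below shows one plausible way to implement such a correction, shifting sequentially collected readings to a common reference instant using the trend recorded by a fixed data logger. Function names and values are illustrative assumptions.

```python
# Shift readings taken during a measurement sweep to a common time t0,
# using a continuously logging reference sensor to estimate the drift.
import numpy as np

def correct_to_reference_time(values, read_times, ref_times, ref_values, t0):
    """corrected_i = value_i + (ref(t0) - ref(t_i)), ref() linearly interpolated."""
    ref_at_read = np.interp(read_times, ref_times, ref_values)
    ref_at_t0 = np.interp(t0, ref_times, ref_values)
    return np.asarray(values, float) + (ref_at_t0 - ref_at_read)

# Hypothetical example: temperatures read over a 12-minute sweep,
# corrected back to the start of the sweep (t0 = 0 min).
ref_times = np.array([0.0, 5.0, 10.0, 15.0])        # min
ref_values = np.array([24.0, 24.6, 25.3, 25.9])     # deg C at fixed logger
read_times = np.array([0.0, 4.0, 8.0, 12.0])
values = np.array([24.0, 24.9, 25.5, 26.2])
print(correct_to_reference_time(values, read_times, ref_times, ref_values, t0=0.0))
```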
Abstract:
This study aimed to describe the probabilistic structure of the annual series of extreme daily rainfall (Preabs) available from the weather station of Ubatuba, State of São Paulo, Brazil (1935-2009), using the generalized extreme value (GEV) distribution. The autocorrelation function, the Mann-Kendall test, and wavelet analysis were used to evaluate the presence of serial correlation, trends, and periodic components. Considering the results obtained with these three statistical methods, it was possible to assume that this temporal series is free from persistence, trends, and periodic components. Based on quantitative and qualitative goodness-of-fit tests, it was found that the GEV can be used to quantify the probabilities of the Preabs data. The best results were obtained when the parameters of the GEV were estimated by the method of maximum likelihood. The method of L-moments also showed satisfactory results.
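Fitting a GEV to annual maxima by maximum likelihood is readily done with SciPy, as sketched below. The Ubatuba series itself is not reproduced; the data here are synthetic, and the 100-year return level is shown only to illustrate how the fitted distribution is used.

```python
# Maximum-likelihood GEV fit to (synthetic) annual maximum daily rainfall.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max_mm = genextreme.rvs(c=-0.1, loc=80.0, scale=20.0, size=75,
                               random_state=rng)

shape, loc, scale = genextreme.fit(annual_max_mm)
# Rainfall depth with 1% annual exceedance probability (100-year return level)
x100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"shape={shape:.3f}, loc={loc:.1f} mm, scale={scale:.1f} mm, "
      f"100-yr level = {x100:.1f} mm")
```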
Abstract:
Traumatic diaphragmatic hernia is defined as a laceration of the diaphragm with herniation of abdominal viscera into the thorax. It is usually asymptomatic, except in cases of obstruction, strangulation, necrosis, or perforation of the herniated viscera. It is classified as acute, latent, or chronic, according to the evolutive period. In the latent phase, symptoms are indefinite, and radiological signs suggestive of thoracic diseases are frequent and can induce diagnostic error, leading to inadequate treatment. This article presents a case of chronic traumatic diaphragmatic hernia complicated by a gastropleurocutaneous fistula due to inadequate thoracic drainage. Considering that this is a chronic condition with an unquestionable surgical indication, owing to the risk of complications, a detailed diagnostic investigation is essential, aiming both to avoid untimely or inadequate therapeutic conduct and to reduce the morbidity and mortality of the condition. Recently, the videolaparoscopic approach has proved more precise than other diagnostic methods, through direct visualization of the diaphragmatic laceration, allowing its correction by immediate suture.
Abstract:
This master's thesis presents a probabilistic external hazards risk analysis of the interim spent nuclear fuel storage, based on pool storage, located at the Olkiluoto nuclear power plant. Probabilistic risk analysis (PRA) is a widely used risk identification and assessment approach at nuclear power plants. The purpose of the work was to produce a completely new external hazards PRA, since no comparable risk assessments in this research area had previously been carried out in Finland. A further motivation for the risk assessment was the heightened role of external hazards in the safety of interim spent fuel storage following natural disasters around the world. The structure of the PRA was based on a methodology created at the beginning of the study. The analysis rests on the identification of possible external hazards, excluding intentional human-induced damage. Based on their occurrence frequencies and damage potential, the identified external hazards were either screened out using the screening criteria defined in the study or analyzed in more detail. The results show that data on very rare external hazards are incomplete. Most of these very rare external hazards have never occurred, and probably never will occur, in the area affecting Olkiluoto or even in Finland. For example, the roles of lightning strikes and oil exposure, and their effects on the availability of various components, are known only with considerable uncertainty. The results of the study can be considered significant as a whole, because they indicate the external hazards whose effects should be studied in more detail. More detailed knowledge of very rare external hazards would refine the estimates of initiating event frequencies.
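The thesis's actual screening criteria and hazard list are not given in the abstract; as a hedged illustration of frequency-based screening, the sketch below retains a hazard for detailed analysis only if its estimated frequency of damaging occurrence exceeds a cutoff. All names and numbers are hypothetical.

```python
# Frequency-based screening of external hazards; illustrative values only.
SCREENING_CUTOFF = 1e-7  # damaging occurrences per year; hypothetical

# (hazard, occurrence frequency per year, conditional probability of damage)
hazards = [
    ("lightning strike", 1e-2, 1e-4),
    ("external flooding", 1e-4, 1e-2),
    ("meteorite impact", 1e-9, 1.0),
]

for name, freq, p_damage in hazards:
    damaging_freq = freq * p_damage
    verdict = "analyze further" if damaging_freq > SCREENING_CUTOFF else "screen out"
    print(f"{name}: {damaging_freq:.1e} /a -> {verdict}")
```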
Abstract:
Objective: to assess the impact of the admission shift on in-hospital mortality of trauma patients who underwent surgery. Methods: a retrospective observational cohort study from November 2011 to March 2012, with data collected through electronic medical records. The following variables were statistically analyzed: age, gender, city of origin, marital status, risk classification at admission (based on the Manchester Protocol), degree of contamination, time/shift of admission, day of admission, and hospital outcome. Results: during the study period, 563 injured patients underwent surgery, with a mean age of 35.5 years (±20.7); 422 (75%) were male, 276 (49.9%) were admitted during the night shift, and 205 (36.4%) on weekends. Patients admitted at night and on weekends had higher mortality [19 (6.9%) vs. 6 (2.2%), p=0.014, and 11 (5.4%) vs. 14 (3.9%), p=0.014, respectively]. In the multivariate analysis, independent predictors of mortality were night admission (OR 3.15), red risk classification (OR 4.87), and age (OR 1.17). Conclusion: patients admitted during the night shift and on weekends were more severe cases and presented a higher mortality rate. Night-shift admission was an independent predictor of surgical mortality in trauma patients, along with red risk classification and age.
Abstract:
Objective: To analyze the performance of two surgical meshes of different compositions during the healing process of abdominal wall defects in rats. Methods: thirty-three adult Wistar rats were anesthetized and subjected to removal of a 1.5 cm x 2 cm area of the anterior abdominal wall, sparing the skin; in 17 animals the defect was corrected by edge-to-edge suture of a polypropylene + poliglecaprone mesh (Group U - UltraproTM); in 16 animals the defect was corrected with a polypropylene + polydioxanone + cellulose mesh (Group P - ProceedTM). Each group was divided into two subgroups according to the moment of euthanasia (seven or 28 days after the operation). The parameters analyzed were macroscopic (adherence), microscopic (quantification of mature and immature collagen), and tensiometric (maximum tension and maximum rupture strength). Results: there was an increase in type I collagen in the ProceedTM group from seven to 28 days (p = 0.047). There was also an increase in rupture tension in both groups between the two periods. The ProceedTM mesh showed lower rupture tension and tissue deformity at seven days, becoming equal to the other mesh at day 28. Conclusion: the meshes give similar final results, and further studies with larger numbers of animals must be carried out for better assessment.
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit-Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two global optimization methods, the differential evolution (DE) algorithm and the Markov Chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate the real experimental validations. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy, and robustness of the calibration algorithms.
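The identification step can be illustrated with a much simpler system than the 10-DOF hybrid robot. The hedged sketch below uses differential evolution to recover link-length errors of a 2-link planar arm from simulated end-effector measurements; it is not the thesis's DH/POE error model, and all values are made up.

```python
# Parameter-error identification by differential evolution on a toy arm.
import numpy as np
from scipy.optimize import differential_evolution

L_NOMINAL = np.array([0.50, 0.40])      # nominal link lengths, m
L_TRUE = L_NOMINAL + [0.003, -0.002]    # "real" robot with unknown errors

def fk(lengths, q):
    """Planar forward kinematics for joint angles q = (q1, q2)."""
    x = lengths[0] * np.cos(q[0]) + lengths[1] * np.cos(q[0] + q[1])
    y = lengths[0] * np.sin(q[0]) + lengths[1] * np.sin(q[0] + q[1])
    return np.array([x, y])

rng = np.random.default_rng(2)
poses = rng.uniform(-np.pi, np.pi, size=(30, 2))     # measurement configurations
measured = np.array([fk(L_TRUE, q) for q in poses])  # simulated measurements

def cost(delta):
    """Sum of squared pose residuals for candidate parameter errors."""
    pred = np.array([fk(L_NOMINAL + delta, q) for q in poses])
    return np.sum((pred - measured) ** 2)

result = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 2, seed=3)
print("identified length errors (m):", result.x)  # ~ [0.003, -0.002]
```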
Abstract:
Purpose: To evaluate the precision of both two- and three-dimensional ultrasonography in determining the vertebral lesion level (the first open vertebra) in patients with spina bifida. Methods: This was a prospective longitudinal study comprising fetuses with open spina bifida who were treated in the fetal medicine division of the department of obstetrics of Hospital das Clínicas of the Universidade de São Paulo between 2004 and 2013. The vertebral lesion level was established using both two- and three-dimensional ultrasonography in 50 fetuses (two examiners for each method). The lesion level in the neonatal period was established by radiological assessment of the spine. All pregnancies were followed in our hospital prenatally, and delivery was scheduled to allow immediate postnatal surgical correction. Results: Two-dimensional sonography precisely estimated the spina bifida level in 53% of the cases. The estimate error was within one vertebra in 80% of the cases, within two vertebrae in 89%, and within three vertebrae in 100%, showing good interobserver agreement. Three-dimensional ultrasonography precisely estimated the lesion level in 50% of the cases. The estimate error was within one vertebra in 82% of the cases, within two vertebrae in 90%, and within three vertebrae in 100%, also showing good interobserver agreement. Whenever an estimate error was observed, both two- and three-dimensional ultrasonography tended to underestimate the true lesion level (55.3% and 62% of the cases, respectively). Conclusions: No relevant difference in diagnostic performance was observed between two- and three-dimensional ultrasonography. The use of three-dimensional ultrasonography showed no additional benefit in diagnosing the lesion level in fetuses with spina bifida. Errors in both methods showed a tendency to underestimate the lesion level.
Abstract:
This study investigated the surface hardening of steels via experimental tests using a multi-kilowatt fiber laser as the laser source. The influence of laser power and laser power density on the hardening effect was investigated, and microhardness analysis of various laser-hardened steels was performed. A thermodynamic model was developed to evaluate the thermal process of the surface treatment of a wide thin steel plate with a Gaussian laser beam. The effect of laser linear oscillation hardening (LLOS) of steel was examined. An as-rolled ferritic-pearlitic steel and a tempered martensitic steel with 0.37 wt% C content were hardened under various laser power levels and laser power densities. The optimum power density that produced the maximum hardness was found to depend on the laser power, and the effect of laser power density on the produced hardness was revealed. The surface hardness, hardened depth, and required laser power density were compared between the samples. The fiber laser was briefly compared with a high-power diode laser in hardening medium-carbon steel. Microhardness (HV0.01) tests were performed on seven different laser-hardened steels, including rolled steel, quenched and tempered steel, soft-annealed alloyed steel, and conventionally through-hardened steel with different carbon and alloy contents. The surface hardness and hardened depth were compared among the samples. The effect of grain size on the surface hardness of ferritic-pearlitic steel and pearlitic-cementite steel was evaluated. In-grain indentation was used to measure the hardness of the pearlitic and cementite structures; the macrohardness of the base material was found to be related to the microhardness of the softer phase structure. The measured microhardness values were compared with the conventional macrohardness (HV5) results. The thermodynamic model was used to calculate the temperature cycle, the Ac1 and Ac3 boundaries, the homogenization time, and the cooling rate. The equations were solved numerically with an error of less than $10^{-8}$. The temperature distributions for various thicknesses were compared under different laser traverse speeds. The lag of the Ac1 and Ac3 boundaries was verified by experiments on six different steels. The calculated thermal cycle and hardened depth were compared with measured data, and correction coefficients were applied to the model for AISI 4340 steel. AISI 4340 steel was hardened by laser linear oscillation hardening (LLOS). Equations were derived to calculate the overlapped width of adjacent tracks and the number of overlapped scans in the center of the scanned track. The effect of oscillation frequency on the hardened depth was investigated by microscopic evaluation and hardness measurement. The homogeneity of hardness and hardened depth under different processing parameters was investigated, and the hardness profiles were compared with the results obtained with conventional single-track hardening. LLOS proved to be well suited for surface hardening over a relatively large rectangular area with considerable hardened depth. Compared with conventional single-track scanning, LLOS produced notably smaller hardened depths, while at 40 and 100 Hz LLOS resulted in higher hardness within a depth of about 0.6 mm.
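The thesis's thermodynamic model is not reproduced in the abstract. As a hedged, much-simplified illustration of the kind of calculation involved, the sketch below estimates the surface temperature rise of a semi-infinite steel body under a constant absorbed flux using the classical 1-D solution $T(0,t) = 2 q \sqrt{\alpha t / \pi} / k$; material properties and beam parameters are typical assumed values.

```python
# 1-D surface temperature rise under constant absorbed laser flux.
import numpy as np

ALPHA = 8e-6   # thermal diffusivity of steel, m^2/s (typical assumed value)
K = 35.0       # thermal conductivity, W/(m K) (typical assumed value)

def surface_temperature_rise(q_abs: float, t: float) -> float:
    """Surface temperature rise [K] after heating time t [s]
    with absorbed flux q_abs [W/m^2]."""
    return 2.0 * q_abs * np.sqrt(ALPHA * t / np.pi) / K

# Hypothetical example: 3 kW absorbed over a 5 mm-radius spot, 0.1 s dwell
spot_area = np.pi * (5e-3) ** 2
q = 3000.0 / spot_area
print(f"surface temperature rise ~ {surface_temperature_rise(q, 0.1):.0f} K")
```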
Abstract:
Models of intermolecular interaction are used widely in biology; the analysis of contacts between proteins and drug discovery are typical fields of application for such models. A model describing such molecular interactions can be formulated from biophysical theory, which tends to result in an extremely heavy computational burden even for simple applications. An alternative way to formulate models is to exploit the large databases of structural measurements made, for example, by X-ray diffraction. When empirical measurement data are used directly, a statistical model makes it possible to take the uncertainty and inexactness of the data adequately into account, while keeping the computational burden at a more reasonable level than that of quantum mechanical methods, which in principle should give the optimal results. In this thesis, a 3D model based on Bayesian statistics was developed for the numerical study of intermolecular interactions. The purpose of the model is to produce predictions of which molecular structures, or what kind of structures, are preferred in a given context, i.e. are more probable within the framework of an interaction. The model was tested in molecular environments essential to its intended use: a small molecule at its binding site in a protein, and the interface between two proteins in a complex. The numerical results obtained correspond well to experimental results previously reported in the literature, such as qualitative binding affinities and chemical knowledge of the spatial bond-forming abilities of certain amino acids. The thesis also includes preliminary tests of the statistical approach for modeling the essential structural adaptability of molecules. In practice, the developed model is intended as one component of a more comprehensive analysis method, such as a pharmacophore model.
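The thesis's 3D model is not specified in the abstract; as a loose, hedged illustration of the underlying Bayesian idea, the sketch below computes a Dirichlet-multinomial posterior over contact types from hypothetical database counts, giving a probability for which interaction type is preferred.

```python
# Dirichlet-multinomial posterior over contact-type preferences;
# counts and categories are hypothetical, for illustration only.
import numpy as np

contact_types = ["hydrogen bond", "salt bridge", "hydrophobic"]
observed_counts = np.array([120, 35, 240])   # hypothetical database counts
prior = np.ones(3)                           # uniform Dirichlet prior

posterior = prior + observed_counts
posterior_mean = posterior / posterior.sum()
for name, p in zip(contact_types, posterior_mean):
    print(f"P({name}) ~ {p:.2f}")
```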
Abstract:
This article deals with a contour error controller (CEC) applied to a high-speed biaxial table. The CEC operates simultaneously with the table's axis controllers, assisting them. In the early stages of the investigation, it was observed that its main problem is imprecision when tracking non-linear contours at high speeds. The objectives of this work are to show that this problem is caused by the inexactness of the contour error mathematical model and to propose modifications to it. An additional term is included, resulting in a more accurate value of the contour error and enabling the use of this type of motion controller at higher feedrates. The response results from simulated and experimental tests are compared with those of a common PID controller and of the non-corrected CEC in order to analyse the effectiveness of this controller on the system. The main conclusions are that the proposed contour error mathematical model is simple, accurate, and almost insensitive to the feedrate, and that a 20:1 reduction of the integral absolute contour error is possible.
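The article's exact additional term is not given in the abstract. As a hedged sketch, the code below implements a common form of improved contour error estimate for curved paths: the usual first-order (linear) estimate plus a second-order correction based on the path's radius of curvature, which is what degrades at high feedrates on non-linear contours.

```python
# Contour error estimate with a curvature correction term; illustrative.
import numpy as np

def contour_error_estimate(ex: float, ey: float, theta: float, rho: float) -> float:
    """ex, ey: axis tracking errors; theta: path tangent angle [rad];
    rho: radius of curvature [m]."""
    eps_linear = -ex * np.sin(theta) + ey * np.cos(theta)   # first-order term
    e_tangential = ex * np.cos(theta) + ey * np.sin(theta)
    return eps_linear + e_tangential ** 2 / (2.0 * rho)     # curvature correction

# Hypothetical numbers: 30 um / 40 um axis errors on a 50 mm-radius arc
eps = contour_error_estimate(30e-6, 40e-6, np.pi / 4, 0.05)
print(f"estimated contour error = {eps * 1e6:.2f} um")
```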
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition, and measurement. This study deals with non-response, attrition, and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1-5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism, and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions, and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
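To illustrate the IPCW idea in a survival setting, the sketch below computes a weighted Kaplan-Meier curve, where each subject carries an inverse-probability-of-censoring weight. In practice the weights would come from a model of the censoring (attrition) process, as in the study; the spells and weights here are hypothetical.

```python
# IPCW-weighted Kaplan-Meier estimator; weights and data are illustrative.
import numpy as np

def weighted_km(durations, events, weights):
    """Kaplan-Meier survival curve with per-subject weights.

    durations: spell lengths; events: 1 = event observed, 0 = censored;
    weights: inverse probability of remaining uncensored.
    """
    durations = np.asarray(durations, float)
    events = np.asarray(events)
    weights = np.asarray(weights, float)
    times = np.unique(durations[events == 1])
    surv, curve = 1.0, []
    for t in times:
        at_risk = weights[durations >= t].sum()          # weighted risk set
        d = weights[(durations == t) & (events == 1)].sum()  # weighted events
        surv *= 1.0 - d / at_risk
        curve.append((float(t), surv))
    return curve

# Hypothetical unemployment spells (months), event flags, and IPC weights
print(weighted_km([3, 5, 5, 8, 12], [1, 1, 0, 1, 0], [1.0, 1.2, 1.1, 1.4, 1.0]))
```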