989 results for standard on auditing
The hematology laboratory in blood doping (BD): 2014 update on the athlete biological passport (ABP)
Abstract:
Introduction: Blood doping (BD) is the use of erythropoietic stimulating agents (ESAs) and/or transfusion to increase aerobic performance in athletes. Direct toxicological techniques are insufficient to unmask sophisticated doping protocols. The hematological module of the ABP (World Anti-Doping Agency) combines decision-support technology with expert assessment to detect the hematological effects of BD indirectly. Methods: The ABP module is based on blood parameters measured under strict pre-analytical and analytical rules for collection, storage and transport at 2-12°C, with internal and external QC. Accuracy, reproducibility and interlaboratory harmonization fulfill forensic standards. Blood samples are collected in competition and out of competition. Primary parameters for longitudinal monitoring are hemoglobin (HGB), reticulocyte percentage (RET), and the OFF score, an indicator of suppressed erythropoiesis, calculated as HGB (g/L) - 60·√(RET%). Statistical modelling predicts individual expected limits by probabilistic inference. Secondary parameters are RBC, HCT, MCHC, MCH, MCV, RDW and IRF. ABP profiles flagged as atypical are reviewed by experts in hematology, pharmacology, sports medicine or physiology, and classified as: normal; suspect (to target); likely due to BD; or likely due to pathology. Results: Thousands of athletes worldwide are currently monitored. Since 2010, at least 35 athletes have been sanctioned and others are being prosecuted solely on the basis of an abnormal ABP, with a 240% increase in positivity to direct tests for ESAs thanks to improved targeting of suspicious athletes (WADA data). Specific doping scenarios have been identified by the experts (Table and Figure). Figure. Typical HGB and RET profiles in two highly suspicious athletes. A. Sample 2: simultaneous increases in HGB and RET (likely ESA stimulation) in a male. B. Samples 3, 6 and 7: "OFF" picture, with high HGB and low RET, in a female. Sample 10: normal HGB and increased RET (ESA or blood withdrawal). Conclusions: The ABP is a powerful tool for indirect doping detection, based on the recognition of specific, unphysiological changes triggered by blood doping. The effect of factors of heterogeneity, such as sex and altitude, must also be considered. Schumacher YO, et al. Drug Test Anal 2012, 4:846-853. Sottas PE, et al. Clin Chem 2011, 57:969-976.
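As a worked illustration of the OFF score formula quoted above (an illustrative sketch only, not part of the original abstract; the function name and example values are hypothetical):

def off_score(hgb_g_per_l, ret_percent):
    # OFF score as defined above: HGB in g/L minus 60 times the square root of RET%
    return hgb_g_per_l - 60 * (ret_percent ** 0.5)

# Illustrative values (not taken from the abstract): HGB = 165 g/L, RET = 0.4%
# gives OFF = 165 - 60 * 0.632... ≈ 127
print(off_score(165, 0.4))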
Abstract:
One of the standard tools used to understand the processes shaping trait evolution along the branches of a phylogenetic tree is the reconstruction of ancestral states (Pagel 1999). The purpose is to estimate the values of the trait of interest for every internal node of a phylogenetic tree based on the trait values of the extant species, a topology and, depending on the method used, branch lengths and a model of trait evolution (Ronquist 2004). This approach has been used in a variety of contexts such as biogeography (e.g., Nepokroeff et al. 2003, Blackburn 2008), ecological niche evolution (e.g., Smith and Beaulieu 2009, Evans et al. 2009) and metabolic pathway evolution (e.g., Gabaldón 2003, Christin et al. 2008). Investigations of the factors affecting the accuracy with which ancestral character states can be reconstructed have focused in particular on the choice of statistical framework (Ekman et al. 2008) and the selection of the best model of evolution (Cunningham et al. 1998, Mooers et al. 1999). However, other potential biases affecting these methods, such as the effect of tree shape (Mooers 2004), taxon sampling (Salisbury and Kim 2001) and the reconstruction of traits involved in species diversification (Goldberg and Igić 2008), have also received specific attention. Most of these studies conclude that ancestral character state reconstruction is still not perfect and that further developments are necessary to improve its accuracy (e.g., Christin et al. 2010). Here, we examine how different estimations of branch lengths affect the accuracy of ancestral character state reconstruction. In particular, we test the effect of using time-calibrated versus molecular branch lengths and provide guidelines for selecting the most appropriate branch lengths to reconstruct the ancestral state of a trait.
Abstract:
BACKGROUND: Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. OBJECTIVE: To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. MATERIALS AND METHODS: Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI(vol) 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. RESULTS: The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. CONCLUSION: Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone.
Abstract:
Repeated antimalarial treatment for febrile episodes and self-treatment are common in malaria-endemic areas. The intake of antimalarials prior to participating in an in vivo study may alter treatment outcome and affect the interpretation of both efficacy and safety outcomes. We report the findings from baseline plasma sampling of malaria patients prior to inclusion into an in vivo study in Tanzania and discuss the implications of residual concentrations of antimalarials in this setting. In an in vivo study conducted in a rural area of Tanzania in 2008, baseline plasma samples from patients reporting no antimalarial intake within the last 28 days were screened for the presence of 14 antimalarials (parent drugs or metabolites) using liquid chromatography-tandem mass spectrometry. Among the 148 patients enrolled, 110 (74.3%) had at least one antimalarial in their plasma: 80 (54.1%) had lumefantrine above the lower limit of calibration (LLC = 4 ng/mL), 7 (4.7%) desbutyl-lumefantrine (4 ng/mL), 77 (52.0%) sulfadoxine (0.5 ng/mL), 15 (10.1%) pyrimethamine (0.5 ng/mL), 16 (10.8%) quinine (2.5 ng/mL), and none had chloroquine (2.5 ng/mL). The proportion of patients with detectable antimalarial drug levels prior to enrollment into the study is worrying. Indeed, artemether-lumefantrine was supposed to be available only at government health facilities. Although sulfadoxine-pyrimethamine is only recommended for intermittent preventive treatment in pregnancy (IPTp), it was still widely used in public and private health facilities and sold in drug shops. Self-reporting of previous drug intake is unreliable, and thus screening for the presence of antimalarial drug levels should be considered in future in vivo studies to allow for accurate assessment of treatment outcome. Furthermore, persisting sub-therapeutic drug levels of antimalarials in a population could promote the spread of drug resistance. Knowledge of drug pressure in a given population is important for monitoring the implementation of standard treatment policy.
Abstract:
Purpose: To examine the relationship of functional measurements with structural measures. Methods: 146 eyes of 83 test subjects underwent Heidelberg Retina Tomograph III (HRT III) imaging (disc area < 2.43, mphsd < 40) and perimetry testing with Octopus (SAP; Dynamic), Pulsar (PP; TOP) and Moorfields MDT (ESTA). Glaucoma was defined as progressive structural or functional loss (20 eyes). Perimetry test points were grouped into 6 sectors based on the estimated optic nerve head angle into which the associated nerve fiber bundle enters (Garway-Heath map). Perimetry summary measures (PSM) (MD SAP / MD PP / PTD MDT) were calculated for each sector from the average total deviation of each measured threshold from normal. We calculated the 95% significance level of the sectorial PSM from the respective normative data. We calculated the percentage agreement with group 1 (G1), healthy on HRT and within normal perimetric limits, and group 2 (G2), abnormal on HRT and outside normal perimetric limits. We also examined the relationship of PSM and rim area (RA) in those sectors classified as abnormal by the Moorfields Regression Analysis (MRA) of HRT. Results: The mean age was 65 years (range 37-89). The global sensitivity versus specificity of each instrument in detecting glaucomatous eyes was: MDT 80% vs. 88%, SAP 80% vs. 80%, PP 70% vs. 89% and HRT 80% vs. 79%. The highest percentage agreements of HRT with PSM (G1, G2, sector respectively) were MDT (89%, 57%, nasal superior), SAP (83%, 74%, temporal superior) and PP (74%, 63%, nasal superior). Globally, percentage agreement (G1, G2 respectively) was MDT (92%, 28%), SAP (87%, 40%) and PP (77%, 49%). Linear regression showed no significant global trend associating RA and PSM. However, sectorally, the supero-nasal sector showed a statistically significant (p < 0.001) trend with each instrument; the associated r² coefficients were MDT 0.38, SAP 0.56 and PP 0.39. Conclusions: There were no significant differences in global sensitivity or specificity between instruments. Structure-function relationships varied significantly between instruments and were consistently strongest supero-nasally. Further studies are required to investigate these relationships in detail.
Abstract:
Optimum experimental designs depend on the design criterion, the model and the design region. The talk will consider the design of experiments for regression models in which there is a single response with the explanatory variables lying in a simplex. One example is experiments on various compositions of glass, such as those considered by Martin, Bursnall, and Stillman (2001). Because of the highly symmetric nature of the simplex, the class of models that are of interest, typically Scheffé polynomials (Scheffé 1958), are rather different from those of standard regression analysis. The optimum designs are also rather different, inheriting a high degree of symmetry from the models. In the talk I hope to discuss a variety of models for such experiments. I will then discuss constrained mixture experiments, when not all of the simplex is available for experimentation. Other important aspects include mixture experiments with extra non-mixture factors and the blocking of mixture experiments. Much of the material is in Chapter 16 of Atkinson, Donev, and Tobias (2007). If time and my research allow, I hope to finish with a few comments on design when the responses, rather than the explanatory variables, lie in a simplex. References: Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press. Martin, R. J., M. C. Bursnall, and E. C. Stillman (2001). Further results on optimal and efficient designs for constrained mixture experiments. In A. C. Atkinson, B. Bogacka, and A. Zhigljavsky (Eds.), Optimal Design 2000, pp. 225-239. Dordrecht: Kluwer. Scheffé, H. (1958). Experiments with mixtures. Journal of the Royal Statistical Society, Ser. B 20, 344-360.
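As background to the Scheffé polynomials mentioned above, the second-order canonical form for q mixture components is the textbook formulation below (general reference material, not quoted from the talk):

E(y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j, \qquad \sum_{i=1}^{q} x_i = 1, \quad x_i \ge 0.

Because the components sum to one, the intercept and pure quadratic terms of an ordinary polynomial are absorbed into these terms, which is why both the models and the corresponding optimum designs differ from those of standard regression.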
Abstract:
Age-adjusted incidence rates (World standard) of invasive cervical cancer in the Swiss canton of Vaud decreased from 17.7/100,000 in 1968-70 to 9.9/100,000 in 1983-85. The decline was substantial at young and middle ages, but no appreciable trend was observed in women over 70. This is consistent with available interview-based information on the pattern of cervical screening in the Swiss population. Although there was no organised screening programme in Switzerland, over 80% of women aged 20-44 and 65% of those aged 45-64 reported one or more screening smears over the previous 3 years, compared with only 22% of women aged 65 or over. In the last calendar period, there was an apparent increase in the incidence of invasive cervical cancer (from 2.5 to 6.1/100,000) in women aged 25-29. Although based on small absolute numbers, this is in agreement with incidence and mortality data from other countries, and may therefore confirm a change in risk factor exposure in younger women.
Abstract:
In this paper we describe the existence of financial illusion in public accounting and comment on its effects for the future sustainability of local public services. We relate these features to the lack of incentives amongst public managers to improve financial reporting and thus the management of public assets. Financial illusion pays off for politicians and managers since it allows for larger increases in public expenditure and for managerial slack, both of which are arguments in their utility functions. This preference is strengthened by the short time perspective of politically appointed public managers. Both factors run against public accountability. This hypothesis is tested for Spain using a unique sample. We take data from around forty Catalan local authorities with populations above 20,000 for the financial years 1993-98. We build this database from the Catalan Auditing Office reports in such a way that it can be linked to other local social and economic variables in order to test our assumptions. The results confirm a statistical relationship between the financial illusion index (FI, as constructed in the paper) and higher current expenditure. This is reflected in important overruns and increasing delays in paying suppliers, as well as in greater difficulties in financing capital expenditure. Mechanisms for FI creation have to do, among other factors, with delays in paying suppliers (and thereafter higher future financial costs per unit of service), inadequate provision for bad debts and a lack of appropriate capital funding either for replacement or for new equipment. For this reason, it is crucial to monitor the way in which capital transfers are accounted for in local public balance sheets. As a result, for most of the municipalities we analyse, the funds for guaranteeing the continuity and sustainability of public service provision are today at risk. Given the managerial incentives present in public institutions, we conclude that the public regulation recently enforced to assure better information systems in local public management may not be enough to change the current state of affairs.
Abstract:
We explore a view of the crisis as a shock to investor sentiment that led to the collapse of a bubble or pyramid scheme in financial markets. We embed this view in a standard model of the financial accelerator and explore its empirical and policy implications. In particular, we show how the model can account for: (i) a gradual and protracted expansionary phase followed by a sudden and sharp recession; (ii) the connection (or lack of connection!) between financial and real economic activity; and (iii) a fast and strong transmission of shocks across countries. We also use the model to explore the role of fiscal policy.
Abstract:
Researchers have used stylized facts on asset prices and trading volume in stock markets (in particular, the mean reversion of asset returns and the correlations between trading volume, price changes and price levels) to support theories where agents are not rational expected utility maximizers. This paper shows that this empirical evidence is in fact consistent with a standard infinite horizon, perfect information, expected utility economy where some agents face leverage constraints similar to those found in today's financial markets. In addition, and in sharp contrast to the theories above, we explain some qualitative differences that are observed in the price-volume relation on stock and on futures markets. We consider a continuous-time economy where agents maximize the integral of their discounted utility from consumption under both budget and leverage constraints. Building on the work by Vila and Zariphopoulou (1997), we find a closed-form solution, up to a negative constant, for the equilibrium prices and demands in the region of the state space where the constraint is non-binding. We show that, at the equilibrium, stock holdings volatility as well as its ratio to stock price volatility are increasing functions of the stock price, and we interpret this finding in terms of the price-volume relation.
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
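For reference, the estimators discussed above can be written as follows (a standard textbook formulation; the sample-point form of the variable-bandwidth estimate is one common choice and not necessarily the exact one analysed in the paper):

f_{n,h}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left( \frac{x - X_i}{h} \right) \quad \text{(fixed bandwidth } h\text{)},

f_{n}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h(X_i)} K\left( \frac{x - X_i}{h(X_i)} \right) \quad \text{(bandwidth depending on the location)}.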
Abstract:
Equivalence classes of normal form games are defined using the geometry of correspondences of standard equilibrium concepts like correlated, Nash, and robust equilibrium, or risk dominance and rationalizability. The resulting equivalence classes are fully characterized and compared across different equilibrium concepts for 2 x 2 games. It is argued that the procedure can lead to broad and game-theoretically meaningful distinctions between games as well as to alternative ways of viewing and testing equilibrium concepts. Larger games are also briefly considered.
Abstract:
Obesity has become a major worldwide challenge to public health, owing to an interaction between the Western 'obesogenic' environment and a strong genetic contribution. Recent extensive genome-wide association studies (GWASs) have identified numerous single nucleotide polymorphisms associated with obesity, but these loci together account for only a small fraction of the known heritable component. Thus, the 'common disease, common variant' hypothesis is increasingly coming under challenge. Here we report a highly penetrant form of obesity, initially observed in 31 subjects who were heterozygous for deletions of at least 593 kilobases at 16p11.2 and whose ascertainment included cognitive deficits. Nineteen similar deletions were identified from GWAS data in 16,053 individuals from eight European cohorts. These deletions were absent from healthy non-obese controls and accounted for 0.7% of our morbid obesity cases (body mass index (BMI) ≥ 40 kg m⁻² or BMI standard deviation score ≥ 4; P = 6.4 × 10⁻⁸, odds ratio 43.0), demonstrating the potential importance in common disease of rare variants with strong effects. This highlights a promising strategy for identifying missing heritability in obesity and other complex traits: cohorts with extreme phenotypes are likely to be enriched for rare variants, thereby improving power for their discovery. Subsequent analysis of the loci so identified may well reveal additional rare variants that further contribute to the missing heritability, as recently reported for SIM1 (ref. 3). The most productive approach may therefore be to combine the 'power of the extreme' in small, well-phenotyped cohorts, with targeted follow-up in case-control and population cohorts.
Abstract:
BACKGROUND: Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. Large amounts of the victim's epithelial cells contaminate the sperm present on swabs, however, and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim's DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim's fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim's fraction, and then digest the residual victim's DNA with a nuclease. METHODS: The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and evidence collected from sexual assault cases. RESULTS: For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern of Erase providing superior profiles. CONCLUSIONS: In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods.