986 results for random breath tests
Abstract:
Technical or contaminated ethanol products are sometimes ingested, either accidentally or deliberately. Typical misused products are black-market liquor and automotive products, e.g., windshield washer fluids. In addition to less toxic solvents, these liquids may contain highly toxic methanol. Even lethal solvent poisoning often produces only non-specific symptoms at an early stage. The present series of studies was carried out to develop a method for breath-based diagnosis of solvent intoxication, to speed up a diagnostic procedure conventionally based on blood tests. Especially in the case of methanol ingestion, the analysis method must be sufficiently sensitive and accurate to detect even small amounts of methanol in a mixture of ethanol and other, less toxic components. In addition to the studies on the FT-IR method, the Dräger 7110 evidential breath analyzer was examined to determine its ability to reveal a coexisting toxic solvent. An industrial Fourier transform infrared (FT-IR) analyzer was modified for breath testing: the sample cell fittings were widened and the cell size reduced in order to obtain an alveolar sample directly from a single exhalation. The performance and feasibility of the Gasmet FT-IR analyzer were tested in clinical settings and in the laboratory. Human breath screening studies were carried out with healthy volunteers, inebriated homeless men, emergency room patients and methanol-intoxicated patients. A number of the breath analysis results were compared with blood test results in order to approximate the blood-breath relationship. In the laboratory experiments, the analytical performance of the Gasmet FT-IR analyzer and the Dräger 7110 evidential breath analyzer was evaluated by means of artificial samples resembling exhaled breath. The investigations demonstrated that a successful breath ethanol analysis by the Dräger 7110 evidential breath analyzer could exclude any significant methanol intoxication.
In contrast, the device did not detect very high levels of acetone, 1-propanol and 2-propanol in simulated breath: the Dräger 7110 evidential breath ethanol analyzer was not equipped to identify the interfering component. According to the studies, the Gasmet FT-IR analyzer was adequately sensitive, selective and accurate for solvent intoxication diagnostics. In addition to diagnostics, the fast breath solvent analysis proved feasible for monitoring ethanol and methanol concentrations during haemodialysis treatment. Because of the simplicity of the sampling and analysis procedure, non-laboratory personnel, such as police officers or social workers, could also operate the analyzer for screening purposes.
Abstract:
A series of biodegradable polyurethanes (PUs) is synthesized from copolymer diols prepared from L-lactide and epsilon-caprolactone (CL), 2,4-toluene diisocyanate, and 1,4-butanediol. Their thermal and mechanical properties are characterized via FTIR, DSC, and tensile tests. Their glass-transition temperatures (Tg) are in the range of 28-53 degrees C. They have high modulus, tensile strength, and elongation at break. With increasing CL content, the PUs change from semicrystalline to completely amorphous. Thermal mechanical analysis is used to determine their shape-memory properties. When they are deformed and fixed at appropriate temperatures, their shape recovery is almost complete for a tensile elongation of 150% or a two-fold compression. By changing the CL content and the hard-to-soft segment ratio, their Tg, and hence their shape-recovery temperature, can be adjusted. Therefore, they may find wide application.
Abstract:
OBJECTIVE: The diagnosis of Alzheimer's disease (AD) remains difficult. Lack of diagnostic certainty or possible distress related to a positive result from diagnostic testing could limit the application of new testing technologies. The objective of this paper is to quantify respondents' preferences for obtaining AD diagnostic tests and to estimate the perceived value of AD test information. METHODS: Discrete-choice experiment and contingent-valuation questions were administered to respondents in Germany and the United Kingdom. Choice data were analyzed by using random-parameters logit. A probit model characterized respondents who were not willing to take a test. RESULTS: Most respondents indicated a positive value for AD diagnostic test information. Respondents who indicated an interest in testing preferred brain imaging without the use of radioactive markers. German respondents had relatively lower money-equivalent values for test features compared with respondents in the United Kingdom. CONCLUSIONS: Respondents preferred less invasive diagnostic procedures and tests with higher accuracy and expressed a willingness to pay up to €700 to receive a less invasive test with the highest accuracy.
Abstract:
A series of ultra-lightweight digital true random number generators (TRNGs) is presented. These TRNGs are based on the observation that, when a circuit switches from a metastable state to a bi-stable state, the resulting state may be random. Four such circuits with low hardware cost are presented: one uses an XOR gate; one uses a lookup table; one uses a multiplexer and an inverter; and one uses four transistors. The three TRNGs based on the first three circuits are implemented on a field-programmable gate array and successfully pass the DIEHARD RNG tests and the National Institute of Standards and Technology (NIST) RNG tests. To the best of the authors' knowledge, the proposed TRNG designs are the most lightweight among existing TRNGs.
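As a concrete illustration of the kind of statistical validation mentioned above, the first test of the NIST suite — the frequency (monobit) test — can be sketched in a few lines. This is a minimal stand-alone sketch of one suite test, not the authors' FPGA evaluation harness:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the null
    hypothesis that the bit stream is balanced between 0s and 1s."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # map 1 -> +1, 0 -> -1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))     # large p-value: looks balanced

# A constant stream fails decisively; a perfectly balanced stream passes.
assert monobit_test([1] * 1000) < 1e-6
assert monobit_test([0, 1] * 500) > 0.99
```

Note that a balanced stream is necessary but far from sufficient for randomness — the alternating stream above passes this test yet fails the runs test — which is why the full suites comprise many complementary tests.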
Abstract:
This paper examines the finite sample properties of three testing regimes for the null hypothesis of a panel unit root against stationary alternatives in the presence of cross-sectional correlation. The regimes of Bai and Ng (2004), Moon and Perron (2004) and Pesaran (2007) are assessed in the presence of multiple factors and in other non-standard situations. The behaviour of some information criteria used to determine the number of factors in a panel is examined, and new information criteria with improved properties in small-N panels are proposed. An application to the efficient markets hypothesis is also provided. The null hypothesis of a panel random walk is not rejected by any of the tests, supporting the efficient markets hypothesis in the financial services sector of the Australian Stock Exchange.
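The unit-root idea these panel regimes build on can be illustrated on a single series. The sketch below (simulated data, a bare Dickey-Fuller-style regression with no panel structure or factor corrections) shows why regressing the first difference on the lagged level separates a random walk from a stationary alternative:

```python
import random

def df_slope(y):
    """OLS slope of dy_t on the demeaned lag y_{t-1}: near 0 under a unit
    root, close to phi - 1 for a stationary AR(1) (the Dickey-Fuller idea)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    lag = y[:-1]
    m = sum(lag) / len(lag)
    num = sum((x - m) * d for x, d in zip(lag, dy))
    den = sum((x - m) ** 2 for x in lag)
    return num / den

random.seed(0)
rw, ar = [0.0], [0.0]
for _ in range(5000):
    e = random.gauss(0, 1)
    rw.append(rw[-1] + e)          # random walk: unit root
    ar.append(0.5 * ar[-1] + e)    # stationary AR(1) with phi = 0.5

assert df_slope(ar) < -0.3   # roughly phi - 1 = -0.5
assert df_slope(rw) > -0.05  # essentially zero
```

An actual test also needs the non-standard Dickey-Fuller critical values for this slope's t-statistic; the panel regimes discussed above extend the idea across units while correcting for common factors.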
Abstract:
This article applies the panel stationarity test with a break proposed by Hadri and Rao (2008) to examine whether 14 macroeconomic variables of OECD countries are best represented as random walks or as stationary fluctuations around a deterministic trend. In contrast to previous studies, based essentially on visual inspection of the break type or on simply applying the most general break model, we use a model selection procedure based on BIC. We do this for each time series, so that heterogeneous break models are allowed for in the panel. Our results suggest, overwhelmingly, that if we account for a structural break and cross-sectional dependence, and choose the break models to be congruent with the data, then the null of stationarity cannot be rejected for any of the 14 macroeconomic variables examined in this article. This is in sharp contrast with the results obtained by Hurlin (2004), using the same data but a different methodology.
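BIC-based selection among candidate break models can be sketched in a few lines; the residual sums of squares and parameter counts below are invented purely for illustration and are not from the article:

```python
import math

def bic(rss, k, n):
    """Gaussian BIC (up to an additive constant) from the residual sum of
    squares rss, parameter count k and sample size n; smaller is better."""
    return n * math.log(rss / n) + k * math.log(n)

n = 100
# Hypothetical fits of three break models to one series:
# (residual sum of squares, number of estimated parameters)
models = {
    "no break":             (120.0, 2),
    "break in level":       (80.0, 3),
    "break in level+trend": (79.0, 4),
}
best = min(models, key=lambda name: bic(*models[name], n))
assert best == "break in level"   # the extra trend-break parameter is not worth it
```

Choosing the break model per series this way, rather than imposing the most general model everywhere, is what makes the panel's break specification congruent with the data.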
Abstract:
True random number generation is crucial in hardware security applications. Proposed is a voltage-controlled true random number generator that is inherently field-programmable. Having more than one configuration state increases the available entropy of the randomness source and lends itself to more compact, low-power architectures. The design is evaluated through electrical characterisation and statistically through industry-standard randomness tests. To the best of the author's knowledge, it is one of the most efficient designs to date with respect to hardware design metrics.
Abstract:
We report general practitioners' views on Helicobacter pylori-associated dyspepsia and their use of screening tests in the community. The use of office serology tests for screening is of concern, as independent validation in specialist units has been disappointing.
Abstract:
Time series models with conditionally heteroskedastic variances have become virtually indispensable for modelling time series in the context of financial data. In many applications, checking for the existence of a relationship between two time series is an important issue. In this thesis, we generalize, in several directions and in a multivariate framework, the procedure developed by Cheung and Ng (1996) designed to examine causality in variance in the case of two univariate series. Building on the work of El Himdi and Roy (1997) and Duchesne (2004), we propose a test based on the cross-correlation matrices of the squared standardized residuals and of the cross-products of these residuals. Under the null hypothesis of no causality in variance, we establish that the test statistics converge in distribution to chi-squared random variables. In a second approach, we define, as in Ling and Li (1997), a transformation of the residuals for each vector residual series. The test statistics are built from the cross-correlations of these transformed residuals. In both approaches, test statistics for individual lags are proposed, as well as portmanteau-type tests. This methodology is also used to determine the direction of causality in variance. Simulation results show that the proposed tests have satisfactory empirical properties. An application with real data is also presented to illustrate the methods.
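A univariate sketch of a portmanteau statistic built from cross-correlations of squared standardized residuals, in the spirit of Cheung and Ng (1996), might look as follows; the simulated Gaussian series stand in for standardized residuals, and the multivariate matrix machinery of the thesis is omitted:

```python
import math
import random

def cross_corr_sq(u, v, lag):
    """Lag-k cross-correlation between the centred squared series u^2, v^2."""
    a = [x * x for x in u]
    b = [x * x for x in v]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    a = [x - ma for x in a]
    b = [x - mb for x in b]
    num = sum(a[t] * b[t - lag] for t in range(lag, len(a)))
    den = math.sqrt(sum(x * x for x in a) * sum(x * x for x in b))
    return num / den

def portmanteau(u, v, max_lag):
    """n * sum of squared cross-correlations over lags 1..max_lag: roughly
    chi-squared with max_lag df under the null of no causality in variance."""
    return len(u) * sum(cross_corr_sq(u, v, k) ** 2
                        for k in range(1, max_lag + 1))

random.seed(1)
u = [random.gauss(0, 1) for _ in range(2000)]   # stand-ins for residuals
v = [random.gauss(0, 1) for _ in range(2000)]
stat = portmanteau(u, v, 5)   # compare with the chi2(5) 5% critical value, 11.07
assert abs(cross_corr_sq(u, u, 0) - 1.0) < 1e-9  # sanity: perfect self-correlation
assert 0.0 <= stat < 60.0     # independent noise stays far below this bound
```

Lagging v's squares behind u's, versus the reverse, is what lets the same statistic probe the direction of causality in variance.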
Abstract:
The Scholastic Aptitude Test (SAT) is a standardized test frequently used to assess the knowledge acquired during secondary education by students seeking admission to higher education in the United States. This publication provides the information and strategies needed to maximize the score on the SAT history test. It teaches readers to think like the test writers and to practice with the material that will appear on the exam, in order to study more effectively. It reviews the historical periods that will appear on the test and provides, with detailed explanations, techniques for applying the knowledge learned to solving tricky specific questions. It includes four one-hour practice tests with multiple-choice questions: two on United States history from the ratification of the Constitution to the present, and two on world history.
Abstract:
The Scholastic Aptitude Test (SAT) is a standardized test frequently used to assess the knowledge acquired during secondary education by students seeking admission to higher education in the United States. This publication provides the information and strategies needed to maximize the score on the SAT mathematics test. It teaches readers to think like the test writers and to practice with the material that will appear on the exam, in order to study more effectively. It reviews the main concepts that will appear on the test, from basic algebra to geometry, trigonometry and statistics, and provides, with detailed explanations, strategies for applying the knowledge learned to solving tricky specific questions. It includes four one-hour practice tests of fifty multiple-choice questions each.
Abstract:
This thesis proposes a methodology for the probabilistic simulation of matrix failure in carbon-fibre-reinforced composite materials, based on the analysis of the random distribution of the fibres. The first chapters review the state of the art on the mathematical modelling of random materials, the computation of effective properties and transverse failure criteria in composite materials. The first step in the proposed methodology is the determination of the minimum size of a Statistical Representative Volume Element (SRVE). This determination is carried out by analysing the fibre volume fraction, the effective elastic properties, the Hill condition, the statistics of the stress and strain components, the probability density function and the statistical inter-fibre distance functions of microstructure models of different sizes. Once this minimum size has been determined, a periodic model and a random model are compared in order to assess the magnitude of the differences observed between them. A methodology is also defined for the statistical analysis of the fibre distribution in the composite, based on digital images of the cross-section. This analysis is applied to four different materials. Finally, a two-scale computational method is proposed for simulating the transverse failure of unidirectional plies, which yields probability density functions for the mechanical variables. Some applications and possibilities of this method are described, and the simulation results are compared with experimental values.
Abstract:
When missing data occur in studies designed to compare the accuracy of diagnostic tests, a common, though naive, practice is to base the comparison of sensitivity, specificity, as well as of positive and negative predictive values on some subset of the data that fits into methods implemented in standard statistical packages. Such methods are usually valid only under the strong missing completely at random (MCAR) assumption and may generate biased and less precise estimates. We review some models that use the dependence structure of the completely observed cases to incorporate the information of the partially categorized observations into the analysis and show how they may be fitted via a two-stage hybrid process involving maximum likelihood in the first stage and weighted least squares in the second. We indicate how computational subroutines written in R may be used to fit the proposed models and illustrate the different analysis strategies with observational data collected to compare the accuracy of three distinct non-invasive diagnostic methods for endometriosis. The results indicate that even when the MCAR assumption is plausible, the naive partial analyses should be avoided.
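The naive complete-case computation of sensitivity and specificity that the abstract warns about can be made concrete with toy data (the counts below are invented). Records with a missing test result are simply dropped, which is only valid under MCAR:

```python
def sens_spec(records):
    """Sensitivity and specificity from (test_positive, diseased) pairs,
    silently dropping records whose test result is missing (None) --
    the naive complete-case analysis, valid only under MCAR."""
    tp = sum(1 for t, d in records if t is True and d)
    fn = sum(1 for t, d in records if t is False and d)
    tn = sum(1 for t, d in records if t is False and not d)
    fp = sum(1 for t, d in records if t is True and not d)
    return tp / (tp + fn), tn / (tn + fp)

# Invented counts: 5 diseased subjects with a missing test result are dropped.
data = ([(True, True)] * 8 + [(False, True)] * 2 +
        [(False, False)] * 9 + [(True, False)] * 1 +
        [(None, True)] * 5)
sens, spec = sens_spec(data)
assert abs(sens - 0.8) < 1e-9   # 8 / (8 + 2)
assert abs(spec - 0.9) < 1e-9   # 9 / (9 + 1)
```

If, say, hard-to-test diseased subjects are the ones with missing results, the dropped records are informative and these complete-case estimates are biased — which is precisely why the models reviewed in the abstract incorporate the partially categorized observations.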
Abstract:
We consider a random tree and introduce a metric on the space of trees in order to define the "mean tree" as the tree minimizing the average distance to the random tree. When the resulting metric space is compact, we have laws of large numbers and central limit theorems for sequences of independent, identically distributed random trees. As an application, we propose tests to check whether two samples of random trees have the same law.
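For a finite sample, the mean-tree idea can be approximated by a medoid: the sample tree minimizing the average distance to the sample. The toy metric below (symmetric difference of edge sets) is an assumption chosen for illustration, not the metric of the abstract:

```python
def tree_dist(t1, t2):
    """Toy metric: size of the symmetric difference of the edge sets."""
    return len(set(t1) ^ set(t2))

def mean_tree(sample):
    """Medoid: the sample tree minimizing the average distance to the
    sample, a finite-sample stand-in for the abstract's mean tree."""
    return min(sample, key=lambda t: sum(tree_dist(t, s) for s in sample))

# Trees on nodes {0, 1, 2, 3} encoded as edge lists.
star  = [(0, 1), (0, 2), (0, 3)]
path  = [(0, 1), (1, 2), (2, 3)]
mixed = [(0, 1), (0, 2), (2, 3)]
assert mean_tree([star, mixed, mixed, path]) == mixed
```

A two-sample test in this spirit would compare the medoids (or the distributions of distances to them) of the two samples.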
Abstract:
Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples. (C) 2008 Elsevier B.V. All rights reserved.
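The practical consequence of the mixture result for a single null variance component is that the naive chi2(1) p-value is halved: the LRT is asymptotically a 50:50 mixture of a point mass at zero and a chi2(1). A small sketch, using only the standard-library error function:

```python
import math

def chi2_1_sf(x):
    """Survival function P(X > x) for a chi-squared variable with 1 df."""
    return math.erfc(math.sqrt(x / 2.0))

def lrt_pvalue_one_variance(lrt):
    """p-value for testing one null variance component: the LRT is
    asymptotically a 50:50 mixture of chi2(0) (point mass at 0) and
    chi2(1), so p = 0.5 * P(chi2_1 > LRT) for a positive statistic."""
    return 0.5 * chi2_1_sf(lrt) if lrt > 0 else 1.0

# At the chi2(1) 90th percentile (about 2.706) the naive p-value is 0.10,
# while the mixture p-value is 0.05 -- the naive chi2(1) test is conservative.
assert abs(chi2_1_sf(2.706) - 0.10) < 0.005
assert abs(lrt_pvalue_one_variance(2.706) - 0.05) < 0.0025
```

For two variance components the mixture involves chi2(1) and chi2(2) terms with weights that depend on the model, so the simple halving above no longer applies.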