145 results for Material testing
Abstract:
INTRODUCTION: Quantitative sensory testing (QST) is widely used in human research to investigate the integrity of sensory function in patients with pain of neuropathic origin or of other causes such as low back pain. The reliability of QST has been evaluated on both sides of the face, hands and feet, as well as on the trunk (Th3-L3). In order to apply these tests to other body parts, such as the lower lumbar spine, it is important first to establish their reliability in healthy individuals. The aim of this study was to investigate the intra-rater reliability of thermal QST in healthy adults at two sites within the L5 dermatome of the lumbar spine and lower extremity. METHODS: Test-retest reliability of thermal QST was determined at the L5 level of the lumbar spine and in the same dermatome on the lower extremity in 30 healthy persons under 40 years of age. Results were analyzed using descriptive statistics and the intraclass correlation coefficient (ICC). Values were compared with normative data using Z-transformation. RESULTS: Mean intraindividual differences were small for cold and warm detection thresholds but larger for pain thresholds. ICC values showed excellent reliability for warm detection and heat pain thresholds, good-to-excellent reliability for the cold pain threshold, and fair-to-excellent reliability for the cold detection threshold. The 95% confidence intervals of the ICCs were wide. CONCLUSION: In healthy adults, thermal QST on the lumbar spine and lower extremity demonstrated fair-to-excellent test-retest reliability.
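As an aside on how test-retest reliability of this kind is usually quantified, the short Python sketch below computes a two-way random-effects, absolute-agreement ICC(2,1). It is not the authors' code; the data values, the choice of ICC form, and the function name are illustrative assumptions only.

import numpy as np

def icc_2_1(x):
    """Two-way random effects, absolute agreement, single-measurement ICC(2,1).
    x: array of shape (n_subjects, k_sessions)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)          # per-subject means
    col_means = x.mean(axis=0)          # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical warm detection thresholds (degrees C) for 5 subjects, 2 sessions
wdt = np.array([[34.5, 34.8],
                [35.2, 35.0],
                [36.1, 36.4],
                [33.9, 34.1],
                [35.7, 35.5]])
print(f"ICC(2,1) = {icc_2_1(wdt):.2f}")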
Abstract:
OBJECTIVE: The purpose of the present study was to submit the same materials that were tested in the round-robin wear test of 2002/2003 to the Alabama wear method. METHODS: Nine restorative materials, namely seven composites (belleGlass, Chromasit, Estenia, Heliomolar, SureFil, Targis, Tetric Ceram), an amalgam (Amalcap), and a ceramic (IPS Empress), were submitted to the Alabama wear method for localized and generalized wear. The test centre did not know which brands it was testing. Both volumetric and vertical loss were determined with an optical sensor. After completion of the wear test, the raw data were sent to IVOCLAR for further analysis. The statistical analysis included logarithmic transformation of the data, calculation of the relative rank of each material within each test centre, measures of agreement between methods, the discrimination power and coefficient of variation of each method, and measures of the consistency and global performance of each material. RESULTS: The relative ranks of the materials varied tremendously between the test centres. When all materials were taken into account and the test methods compared with each other, only ACTA agreed reasonably well with two other methods, i.e. OHSU and ZURICH. On the other hand, MUNICH did not agree with the other methods at all. The ZURICH method showed the lowest discrimination power, whereas the ACTA, IVOCLAR and ALABAMA (localized) methods showed the highest. Material-wise, the best global performance was achieved by the leucite-reinforced ceramic Empress, which was clearly ahead of belleGlass, SureFil and Estenia. In contrast, Heliomolar, Tetric Ceram and especially Chromasit demonstrated poor global performance. The best consistency was achieved by SureFil, Tetric Ceram and Chromasit, whereas the consistency of Amalcap and Heliomolar was poor. When the laboratory data were compared with clinical data, significant agreement was found for the IVOCLAR and ALABAMA generalized wear methods. SIGNIFICANCE: As the different wear simulator settings measure different wear mechanisms, it seems reasonable to combine at least two different wear settings to assess the wear resistance of a new material.
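For illustration of the statistical steps named above (logarithmic transformation, within-centre relative ranks, and per-method coefficient of variation), a minimal Python sketch with invented wear values might look as follows; the numbers and the choice of centres shown are placeholders, not the round-robin data.

import numpy as np
import pandas as pd

# rows: materials, columns: test centres; values: volumetric loss (mm^3), made up
wear = pd.DataFrame(
    {"ACTA": [0.12, 0.45, 0.30], "IVOCLAR": [0.10, 0.50, 0.28], "ALABAMA": [0.15, 0.40, 0.35]},
    index=["Empress", "Chromasit", "Heliomolar"],
)
log_wear = np.log(wear)        # logarithmic transformation of the raw data
ranks = log_wear.rank(axis=0)  # relative rank of each material within each centre
cv = wear.std() / wear.mean()  # coefficient of variation per method
print(ranks, cv, sep="\n\n")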
Abstract:
The aim of this study was to determine whether responses of myocardial blood flow (MBF) to cold pressor testing (CPT), measured noninvasively with PET, correlate with an established and validated index of flow-dependent coronary vasomotion obtained by quantitative angiography. METHODS: Fifty-six patients (57 ± 6 y; 16 with hypertension, 10 with hypercholesterolemia, 8 smokers, and 22 without coronary risk factors) with normal coronary angiograms were studied. Biplanar end-diastolic images of a selected proximal segment of the left anterior descending artery (LAD) (n = 27) or left circumflex artery (LCx) (n = 29) were evaluated with quantitative coronary angiography to determine the CPT-induced changes in epicardial luminal area (LA, mm²). Within 20 d of coronary angiography, MBF in the LAD, LCx, and right coronary artery territories was measured with ¹³N-ammonia and PET at baseline and during CPT. RESULTS: CPT induced comparable percent changes in the rate × pressure product on both study days (%ΔRPP, 37% ± 13% and 40% ± 17%; P = not significant [NS]). For the entire study group, the epicardial LA decreased from 5.07 ± 1.02 to 4.88 ± 1.04 mm² (ΔLA, −0.20 ± 0.89 mm²), or by −2.19% ± 17%, while MBF in the corresponding epicardial vessel segment increased from 0.76 ± 0.16 to 1.03 ± 0.33 mL·min⁻¹·g⁻¹ (ΔMBF, 0.27 ± 0.25 mL·min⁻¹·g⁻¹), or 36% ± 31% (P ≤ 0.0001). However, in normal controls without coronary risk factors (n = 22), the epicardial LA increased from 5.01 ± 1.07 to 5.88 ± 0.89 mm² (19.06% ± 8.9%) and MBF increased from 0.77 ± 0.16 to 1.34 ± 0.34 mL·min⁻¹·g⁻¹ (74.08% ± 23.5%) during CPT, whereas patients with coronary risk factors (n = 34) showed a decrease in epicardial LA from 5.13 ± 1.48 to 4.24 ± 1.12 mm² (−15.94% ± 12.2%) and a diminished MBF increase (from 0.76 ± 0.20 to 0.83 ± 0.25 mL·min⁻¹·g⁻¹, or 10.91% ± 19.8%) compared with controls (P < 0.0001 for both), despite comparable changes in the RPP (P = NS). In addition, there was a significant correlation (r = 0.87; P ≤ 0.0001) between CPT-related percent changes in LA on quantitative angiography and in MBF as measured with PET. CONCLUSION: The observed close correlation between an angiographically established parameter of flow-dependent and, most likely, endothelium-mediated coronary vasomotion and PET-measured MBF further supports the validity and value of MBF responses to CPT as a noninvasively available index of coronary circulatory function.
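A minimal sketch of the kind of analysis reported in the RESULTS (percent changes during CPT and their correlation across patients), written in Python; the values are invented for illustration and only the analysis pattern, not the data, is taken from the abstract.

import numpy as np

la_base = np.array([5.0, 5.2, 4.8, 5.5])       # baseline luminal area, mm^2 (hypothetical)
la_cpt = np.array([5.9, 4.5, 4.2, 6.1])        # luminal area during CPT
mbf_base = np.array([0.78, 0.75, 0.70, 0.80])  # baseline MBF, mL/min/g (hypothetical)
mbf_cpt = np.array([1.35, 0.82, 0.79, 1.40])   # MBF during CPT

pct_la = 100 * (la_cpt - la_base) / la_base
pct_mbf = 100 * (mbf_cpt - mbf_base) / mbf_base
r = np.corrcoef(pct_la, pct_mbf)[0, 1]         # the study reports r = 0.87 for this comparison
print(pct_la, pct_mbf, f"r = {r:.2f}", sep="\n")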
Abstract:
An ammonium chloride procedure was used to prepare a bacterial pellet from positive blood cultures, which was used for direct inoculation of VITEK 2 cards. Correct identification reached 99% for Enterobacteriaceae and 74% for staphylococci. For antibiotic susceptibility testing, very major and major errors were 0.1 and 0.3% for Enterobacteriaceae, and 0.7 and 0.1% for staphylococci, respectively. Thus, bacterial pellets prepared with ammonium chloride allow direct inoculation of VITEK cards with excellent accuracy for Enterobacteriaceae and a lower accuracy for staphylococci.
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to the risk assessment of ENMs, which encompass the key parameters for characterising ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn: Owing to the high batch-to-batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use), or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data for the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques should be employed to determine a given metric of ENMs. The first great challenge is to prioritise the metrics that are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonisation should be initiated and that protocols should be exchanged. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
PURPOSE: To select and propose a set of knowledge, attitudes, and skills essential for the care of adolescents; to encourage the development of multidisciplinary adolescent health networks; and to set up training programs in as many European countries as possible. METHODS: The curriculum was developed by 16 physicians from 11 European countries with various professional specializations. In line with modern guidelines in medical education, it is a modular, flexible instrument which covers the main teaching areas in the field, such as basic skills (e.g. setting, rights and confidentiality, gender and cultural issues) as well as specific themes (e.g. sexual and reproductive health, eating disorders, chronic conditions). It consists of 17 thematic modules, each containing detailed objectives, learning approaches, examples, and evaluation methods. RESULTS: Two international one-week summer schools were used to assess the feasibility and appropriateness of the curriculum. The overall evaluation was good, with most items scoring above three on a four-point Likert scale. However, it pointed to several aspects (of both process and content) that will need to be refined in the future, such as an increase in interactive sessions (role playing) and a better mix of clinical and public health issues.
Abstract:
Carbon isotope ratio (CIR) analysis has been routinely and successfully applied in doping control analysis for many years to uncover the misuse of endogenous steroids such as testosterone. Over the years, several challenges and limitations of this approach became apparent, e.g., the influence of inadequate chromatographic separation on CIR values or the emergence of steroid preparations with CIRs identical to those of endogenous steroids. While the latter has been addressed recently by the implementation of hydrogen isotope ratios (HIR), an improved sample preparation for CIR that avoids co-eluting compounds is presented herein, together with newly established reference values for those endogenous steroids relevant for doping controls. From the fraction of glucuronidated steroids, 5β-pregnane-3α,20α-diol, 5α-androst-16-en-3α-ol, 3α-hydroxy-5β-androstane-11,17-dione, 3α-hydroxy-5α-androstan-17-one (ANDRO), 3α-hydroxy-5β-androstan-17-one (ETIO), 3β-hydroxy-androst-5-en-17-one (DHEA), 5α- and 5β-androstane-3α,17β-diol (5aDIOL and 5bDIOL), 17β-hydroxy-androst-4-en-3-one and 17α-hydroxy-androst-4-en-3-one were included. In addition, sulfate conjugates of ANDRO, ETIO, DHEA, 3β-hydroxy-5α-androstan-17-one plus 17α- and androst-5-ene-3β,17β-diol were considered and analyzed after acidic solvolysis. The results obtained for the reference population encompassing n = 67 males and females confirmed earlier findings regarding factors influencing endogenous CIR. Variations in sample preparation influenced CIR measurements especially for 5aDIOL and 5bDIOL, the most valuable steroidal analytes for the detection of testosterone misuse. Earlier investigations on the HIR of the same reference population enabled the evaluation of combined measurements of CIR and HIR and their usefulness for both steroid metabolism studies and doping control analysis. The combination of both stable isotopes would allow for lower reference limits providing the same statistical power and certainty to distinguish between the endogenous or exogenous origin of a urinary steroid.
Abstract:
Recently, the pharmaceutical industry developed a new class of therapeutics called selective androgen receptor modulators (SARMs) to replace the synthetic anabolic drugs used in medical treatments. Since the beginning of anti-doping testing in sports in the 1970s, steroids have been the most frequently detected drugs, used mainly for their anabolic properties. The major advantage of SARMs is their reduced androgenic activity, which is the main source of side effects following administration of anabolic agents. In 2010, the Swiss laboratory for doping analyses reported the first case of SARM abuse during in-competition testing. The analytical steps leading to this finding are described in this paper. Screening and confirmation results were obtained by liquid chromatography tandem mass spectrometry (LC-MS/MS). Additional information regarding the metabolism of the SARM S-4 was obtained by ultra-high-pressure liquid chromatography coupled to a quadrupole time-of-flight mass spectrometer (UHPLC-QTOF-MS).
Abstract:
There is a need for more efficient methods giving insight into the complex mechanisms of neurotoxicity. Testing strategies including in vitro methods have been proposed to comply with this requirement. With the present study we aimed to develop a novel in vitro approach which mimics in vivo complexity, detects neurotoxicity comprehensively, and provides mechanistic insight. For this purpose we combined rat primary re-aggregating brain cell cultures with a mass spectrometry (MS)-based metabolomics approach. For the proof of principle, we treated developing re-aggregating brain cell cultures for 48 h with the neurotoxicant methyl mercury chloride (0.1-100 µM) and the brain stimulant caffeine (1-100 µM) and acquired cellular metabolic profiles. To detect toxicant-induced metabolic alterations, the profiles were analysed using commercial software which revealed patterns in the multi-parametric dataset by principal component analysis (PCA) and identified the most significantly altered metabolites. PCA revealed concentration-dependent cluster formation for methyl mercury chloride (0.1-1 µM) and treatment-dependent cluster formation for caffeine (1-100 µM) at sub-cytotoxic concentrations. The metabolites responsible for the concentration-dependent alterations following methyl mercury chloride treatment were identified using MS/MS fragmentation analysis: gamma-aminobutyric acid, choline, glutamine, creatine and spermine. Their respective mass ion intensities demonstrated metabolic alterations in line with the literature and suggest that these metabolites could be biomarkers for mechanisms of neurotoxicity or neuroprotection. In addition, we evaluated whether the approach could identify neurotoxic potential by testing, at sub-cytotoxic concentrations, eight compounds which have target organ toxicity in the liver, kidney or brain. PCA revealed cluster formation largely dependent on target organ toxicity, indicating potential for the development of a neurotoxicity prediction model. Given these results, a validation study would be useful to determine the reliability, relevance and applicability of this approach to neurotoxicity screening. Thus, for the first time we show the benefits and utility of in vitro metabolomics for comprehensively detecting neurotoxicity and discovering new biomarkers.
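The unsupervised pattern-finding step described above can be sketched in Python with scikit-learn; this is a generic illustration using simulated intensities, not the commercial software or the data used in the study.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows: culture samples (first 6 control, last 6 treated); columns: metabolite ion intensities
intensities = rng.lognormal(mean=2.0, sigma=0.3, size=(12, 50))
intensities[6:, :5] *= 1.8  # pretend a treatment shifts the first five metabolites

# standardize, then project onto the first two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(intensities))
print(scores[:6].mean(axis=0), scores[6:].mean(axis=0))  # cluster centres on PC1/PC2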
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
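As a concrete illustration of the distinction discussed above, the following Python sketch (not from the article; the test and sample sizes are arbitrary choices) simulates many two-sample t-tests when the null hypothesis is in fact true: roughly 5% of the p values fall below 0.05, showing that the threshold controls the long-run Type I error rate rather than giving the probability that the null hypothesis is true.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims, false_rejections = 0.05, 10_000, 0

for _ in range(n_sims):
    a = rng.normal(loc=0.0, scale=1.0, size=30)  # both groups drawn from the
    b = rng.normal(loc=0.0, scale=1.0, size=30)  # same population: null is true
    _, p = stats.ttest_ind(a, b)
    false_rejections += p < alpha

# Close to alpha: the threshold fixes the long-run rate of false rejections,
# it does not give the probability that the null hypothesis is true.
print(false_rejections / n_sims)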
Abstract:
Isotopic analyses of bulk carbonates are considered a useful tool for palaeoclimatic reconstruction, assuming that calcite precipitation occurs at oxygen isotope equilibrium with local water and that detrital carbonate input is absent or insignificant. We present results from Lake Neuchatel (western Switzerland) that demonstrate equilibrium precipitation of calcite, except during high-productivity periods, and the presence of detrital and resuspended calcite. The mineralogy, geochemistry and stable isotope values of Lake Neuchatel trap sediments and of suspended matter in adjacent rivers were studied. The mineralogy of suspended matter in the major inflowing rivers documents an important contribution of detrital carbonates, predominantly calcite with minor amounts of dolomite and ankerite. Using mineralogical data, the quantity of allochthonous calcite can be estimated by comparing the ratio (ankerite + dolomite)/(calcite + ankerite + dolomite) in the inflowing rivers and in the traps. Material taken from sediment traps shows an evolution from practically pure endogenic calcite in summer (10-20% detrital material) to higher percentages of detrital material in winter (up to 20-40%). Reflecting these mineralogical variations, δ¹³C and δ¹⁸O values of calcite from sediment traps are more negative in summer than in winter. Since no significant variations in the isotopic composition of lake water were detected over one year, the factors controlling the oxygen isotopic composition of calcite in sediment traps are the precipitation temperature and the percentage of resuspended and detrital calcite. Samples taken close to the river inflow generally have higher delta values than the others, confirming the detrital influence. SEM and isotopic studies on different size fractions (<2, 2-6, 6-20, 20-60, >60 µm) of winter and summer samples allowed resuspension to be recognised and new endogenic calcite to be separated from detrital calcite. The >60 and <2 µm fractions have the highest percentage of detritus; the 2-6 and 6-20 µm fractions are typical of the new endogenic calcite in summer, as given by calculations assuming isotopic equilibrium with local water. In winter these fractions show values similar to those in summer, indicating resuspension. Using the isotopic composition of sediment trap material and of the different size fractions, as well as the isotopic composition of lake water, water temperature measurements and mineralogy, we re-evaluated the potential of bulk carbonate for palaeoclimatic reconstruction in the presence of detrital and resuspended calcite. This re-evaluation leads to the following conclusions: (1) the endogenic signal can be amplified by applying a particle-size separation, once the size of the endogenic calcite is known from SEM study; (2) resuspended calcite does not alter the endogenic signal, but it lowers the time resolution; (3) detrital input decreases with increasing distance from the source and modifies the isotopic signal only when very abundant; (4) the influence of detrital calcite on the bulk sediment isotopic composition can be calculated.
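One simple way to turn the mineralogical ratio above into an estimate of the detrital calcite fraction is a two-end-member mixing calculation; the Python sketch below is an assumption for illustration (the authors' exact calculation may differ), with invented ratio values, and it assumes that ankerite and dolomite are purely detrital and that the river suspension defines the detrital end-member.

def detrital_calcite_fraction(ad_trap, ad_river):
    """ad_trap: (ankerite+dolomite)/(calcite+ankerite+dolomite) in trap material.
    ad_river: same ratio in river suspended matter (assumed detrital end-member).
    Returns the estimated fraction of trap carbonate that is detrital calcite."""
    detrital_total = ad_trap / ad_river  # total detrital carbonate fraction in the trap
    return detrital_total - ad_trap      # subtract the ankerite + dolomite part itself

# Hypothetical numbers: a winter trap sample compared with river suspension
print(round(detrital_calcite_fraction(ad_trap=0.08, ad_river=0.25), 2))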
Abstract:
Histological subtyping and grading by malignancy are the cornerstones of the World Health Organization (WHO) classification of tumors of the central nervous system. They are intended to provide clinicians with guidance as to the expected course of disease and the choices of treatment to be made. Nonetheless, patients with histologically identical tumors may have very different outcomes, notably among astrocytic and oligodendroglial gliomas of WHO grades II and III. In gliomas of adulthood, 3 molecular markers have undergone extensive study in recent years: 1p/19q chromosomal codeletion, O⁶-methylguanine methyltransferase (MGMT) promoter methylation, and mutations of isocitrate dehydrogenase (IDH) 1 and 2. However, the assessment of these molecular markers has so far not been implemented in clinical routine because of the lack of therapeutic implications. In fact, these markers were considered to be prognostic irrespective of whether patients were receiving radiotherapy (RT), chemotherapy, or both (1p/19q, IDH1/2), or of limited value because testing is too complex and no chemotherapy alternative to temozolomide was available (MGMT). In 2012, this situation changed: long-term follow-up of the Radiation Therapy Oncology Group 9402 and European Organisation for Research and Treatment of Cancer 26951 trials demonstrated an overall survival benefit from the addition of procarbazine/CCNU/vincristine chemotherapy to RT, confined to patients with anaplastic oligodendroglial tumors with (vs without) 1p/19q codeletion. Furthermore, in elderly glioblastoma patients, the NOA-08 and Nordic trials of RT alone versus temozolomide alone demonstrated a profound impact of MGMT promoter methylation on outcome by therapy and thus established MGMT as a predictive biomarker in this patient population. These recent results call for the routine implementation of 1p/19q and MGMT testing, at least in subpopulations of malignant glioma patients, and represent an encouraging step toward the development of personalized therapeutic approaches in neuro-oncology.