971 results for Immunologic Tests -- methods
Abstract:
Environmental decay in porous masonry materials, such as brick and mortar, is a widespread problem concerning both new and historic masonry structures. The decay mechanisms are quite complex, depending on several interconnected parameters and on the interaction with the specific micro-climate. Materials undergo both aesthetic and substantial changes in character; yet, while many studies have been carried out, the mechanical aspect remains largely understudied, even though it bears real importance from the structural viewpoint. A quantitative assessment of masonry material degradation, and of how it affects the load-bearing capacity of masonry structures, appears to be missing. The research work carried out, limiting the attention to brick masonry, addresses this issue through an experimental laboratory approach via different integrated testing procedures, both non-destructive and mechanical, together with monitoring methods. Attention was focused on the transport of moisture and salts and on the damaging effects caused by the crystallization of two different salts, sodium chloride and sodium sulphate. Many series of masonry specimens, very different in size and purpose, were used to track the damage process from its onset and to monitor its evolution over a number of years. At the same time, suitable testing techniques (non-destructive, minimally invasive, analytical, and monitoring-based) were validated for these purposes. The specimens were exposed to different aggressive agents (in terms of type of salt, brine concentration, artificial vs. open-air natural ageing, …), tested by different means (qualitative vs. quantitative, non-destructive vs. mechanical testing, point-wise vs. wide areas, …), and had different sizes (1-, 2-, 3-header thick walls, full-scale walls vs. small-size specimens, brick columns and triplets vs. small walls, masonry specimens vs. single units of brick and mortar prisms, …). Different advanced testing methods and novel monitoring techniques were applied in an integrated, holistic approach for the quantitative assessment of the masonry health state.
Abstract:
The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows in oceans, rivers and the atmosphere. Mass and momentum are conserved. We distinguish two characteristic speeds: the advection speed, i.e. the speed of mass transport, and the speed of gravity waves, i.e. the speed of the surface waves that carry energy and momentum. The Froude number is a dimensionless characteristic number, given by the ratio of the reference advection speed to the reference gravity wave speed. For the applications mentioned above it is typically very small, e.g. 0.01. Time-explicit finite volume methods are the methods most often used for the numerical computation of hyperbolic balance laws. Hence the CFL stability condition must be respected, and the time increment is roughly proportional to the Froude number. For small Froude numbers, say smaller than 0.2, this leads to a high computational cost. Moreover, the numerical solutions are dissipative. It is well known that the solutions of the SWE converge to the solutions of the lake equations (the zero-Froude-number SWE) as the Froude number tends to zero, provided suitable conditions are fulfilled. In this limit process the equations change their type from hyperbolic to hyperbolic-elliptic. Furthermore, at small Froude numbers the order of convergence may degrade or the numerical method may break down. In particular, incorrect asymptotic behaviour (with respect to the Froude number) has been observed for time-explicit methods, which could cause these effects.

Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. We want numerical methods for balance laws to preserve certain equilibrium states exactly; otherwise spurious flows can be generated by the scheme. The approximation of the source term is therefore essential. Numerical methods that preserve equilibrium states are called well-balanced.

In this thesis we split the SWE into a stiff linear part and a non-stiff part in order to overcome the severe time-step restriction imposed by the CFL condition. The stiff part is approximated implicitly, the non-stiff part explicitly. To this end we use IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization is performed with the finite volume method. The stiff part is approximated by means of finite differences or in a genuinely multidimensional manner. For the multidimensional approximation we use approximate evolution operators that take all infinitely many directions of information propagation into account. The explicit terms are approximated with standard numerical fluxes. We thus obtain a stability condition analogous to that of a purely advective flow, i.e. the admissible time increment grows by a factor equal to the reciprocal of the Froude number. The methods derived in this thesis are asymptotic preserving and well-balanced. The asymptotic-preserving property ensures that the numerical solution exhibits the "correct" asymptotic behaviour with respect to small Froude numbers. We present first- and second-order methods. Numerical results confirm the order of convergence, as well as stability, well-balancedness and asymptotic preservation. In particular, for some of the schemes we observe that the order of convergence is almost independent of the Froude number.
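To make the splitting idea concrete, here is a minimal Python sketch of a single first-order IMEX step (implicit-explicit Euler) on a toy split system; the operator, time step and drag term are invented for illustration and are not the thesis's shallow water discretization.

```python
# Minimal sketch of one first-order IMEX step (implicit-explicit Euler) for a
# split system u' = f(u) + L u: the stiff linear operator L (fast gravity
# waves) is treated implicitly, the non-stiff term f explicitly. Toy problem
# only; the thesis combines this idea with finite-volume discretizations of
# the shallow water equations.
import numpy as np

def imex_euler_step(u, dt, L, f):
    """Solve (I - dt*L) u_new = u + dt*f(u) for one time step."""
    rhs = u + dt * f(u)                                   # explicit, non-stiff
    return np.linalg.solve(np.eye(u.size) - dt * L, rhs)  # implicit, stiff

# Stiff linear coupling mimicking fast waves (eigenvalues +/- 10i) plus a
# mild nonlinear drag term; both are invented for illustration.
L = np.array([[0.0, 10.0], [-10.0, 0.0]])
f = lambda u: -0.1 * u * np.abs(u)

u, dt = np.array([1.0, 0.0]), 0.1   # dt is not limited by the fast waves
for _ in range(100):
    u = imex_euler_step(u, dt, L, f)
print(u)                            # solution stays bounded
```

Because the fast-wave operator is inverted rather than stepped explicitly, the admissible time step is set by the slow (advective) dynamics alone, which is exactly the gain described above.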
Abstract:
This thesis aims to assess similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification that rest on quite different physical bases. One of them is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centred on the MW water vapour absorption band. In the first stage, their cloud detection capability was tested by comparing the Cloud Masks they produced. These showed good agreement between the two methods, although some critical situations stand out: the MWCC, in effect, fails to reveal clouds which according to SAFNWC are fractional, cirrus, very low or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy by both software packages were compared. The overall observed tendency of the MWCC method is an overestimation of the lower cloud classes. Conversely, the higher the cloud top, the more often the MWCC misses cloud portions that are detected by the SAFNWC tool. The same picture emerges from a series of tests carried out using the cloud top height information to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods intend to provide the same kind of information, in reality they return quite different details on the same atmospheric column. The SAFNWC retrieval, being very sensitive to the cloud top temperature, returns the actual level reached by the cloud top. The MWCC, by exploiting the penetration capability of microwaves, provides information about levels located more deeply within the atmospheric column.
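As an illustration of the first inter-comparison stage, the following Python sketch computes pixel-wise agreement between two binary cloud masks; the arrays and names are hypothetical placeholders for the SAFNWC and MWCC outputs, not the thesis code.

```python
# Illustrative sketch: pixel-wise agreement between two binary cloud masks,
# as in the first inter-comparison stage. The masks below are random
# placeholders standing in for the SAFNWC and MWCC products.
import numpy as np

def mask_agreement(mask_a, mask_b):
    """Return overall agreement and the two disagreement counts."""
    both_cloudy   = np.sum(mask_a & mask_b)
    both_clear    = np.sum(~mask_a & ~mask_b)
    only_a_cloudy = np.sum(mask_a & ~mask_b)   # e.g. SAFNWC cloudy, MWCC clear
    only_b_cloudy = np.sum(~mask_a & mask_b)
    return {
        "agreement": (both_cloudy + both_clear) / mask_a.size,
        "only_a_cloudy": int(only_a_cloudy),
        "only_b_cloudy": int(only_b_cloudy),
    }

rng = np.random.default_rng(0)
safnwc_mask = rng.random((100, 100)) > 0.5    # placeholder masks
mwcc_mask   = rng.random((100, 100)) > 0.5
print(mask_agreement(safnwc_mask, mwcc_mask))
```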
Abstract:
Revision hip arthroplasty is a surgical procedure consisting of the reconstruction of the hip joint through the replacement of a damaged hip prosthesis. Several factors may give rise to the failure of the artificial device: aseptic loosening, infection and dislocation represent the principal causes of failure worldwide. The main effect is the development of bone defects in the region closest to the prosthesis, which weaken the bone structure available for the biological fixation of the new artificial hip. For this reason, bone reconstruction is necessary before the surgical revision operation. This work stems from the need to test the effects of bone reconstruction for particular bone defects in the acetabulum after hip prosthesis revision. In order to perform biomechanical in vitro tests on hip prostheses implanted in human pelves or hemipelves, a practical definition of a reference frame for this kind of bone specimen is required. The aim of the current study is to create a repeatable protocol to align hemipelvic samples in the testing machine, relying on a reference system based on anatomical landmarks of the human pelvis. In chapter 1, a general overview of the human pelvic bone is presented: anatomy, bone structure, loads and the principal devices for hip joint replacement. The purpose of chapter 2 is to identify the most common causes of revision hip arthroplasty, analysing data from the most reliable orthopaedic registries in the world. Chapter 3 presents an overview of the most widely used classifications for acetabular bone defects and fractures and the most common techniques for acetabular and bone reconstruction. After a critical review of the scientific literature about reference frames for the human pelvis, in chapter 4 the definition of a new reference frame is proposed. Based on this reference frame, the alignment protocol for the human hemipelvis is presented, together with the statistical analysis that confirms the good repeatability of the method.
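As a sketch of what a landmark-based reference frame involves, the following Python snippet builds an orthonormal frame from three pelvic landmarks. The landmark choice (the two anterior superior iliac spines and a mid pubic point) is a common convention assumed here for illustration only; it is not the specific frame proposed in chapter 4.

```python
# Minimal sketch (illustrative landmark convention, not the thesis protocol):
# constructing an orthonormal anatomical reference frame from three landmarks.
import numpy as np

def reference_frame(asis_right, asis_left, pubic_mid):
    """Return origin and a 3x3 rotation matrix (rows = frame axes)."""
    origin = (asis_right + asis_left) / 2.0
    x = asis_right - asis_left                 # medio-lateral axis
    x /= np.linalg.norm(x)
    tmp = pubic_mid - origin                   # roughly antero-inferior
    z = np.cross(x, tmp)                       # normal to the landmark plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                         # completes right-handed frame
    return origin, np.vstack([x, y, z])

origin, R = reference_frame(np.array([110.0, 0.0, 0.0]),
                            np.array([-110.0, 0.0, 0.0]),
                            np.array([0.0, -60.0, -40.0]))
print(origin, R, sep="\n")   # R @ (p - origin) expresses p in the new frame
```

Once such a frame is defined from palpable landmarks, aligning a hemipelvis in the testing machine reduces to reproducing a fixed rotation and translation, which is what makes the protocol repeatable across specimens.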
Abstract:
It is well known that the early initiation of a specific anti-infective therapy is crucial to reduce mortality in severe infection. Culture-based procedures for identifying pathogens are the diagnostic gold standard in such diseases. However, these methods yield results after 24 to 48 hours at the earliest. Therefore, severe infections such as sepsis need to be treated with an empirical antimicrobial therapy, which is ineffective in an unknown fraction of these patients. Today's microbiological point-of-care tests are pathogen-specific and therefore not appropriate for an infection with a variety of possible pathogens. Molecular nucleic acid diagnostics such as the polymerase chain reaction (PCR) allow the identification of pathogens and resistances. These methods are used routinely to speed up the analysis of positive blood cultures. The newest PCR-based system allows the identification of the 25 most frequent sepsis pathogens in parallel, without previous culture, in less than 6 hours. Thereby, such systems might shorten the time of possibly insufficient anti-infective therapy. However, these extensive tools are not suitable as point-of-care diagnostics. Miniaturization and automation of nucleic acid based methods are still pending, as is an increase in the number of pathogens and resistance genes detectable by these methods. It is assumed that molecular PCR techniques will have an increasing impact on microbiological diagnostics in the future.
Abstract:
This study evaluated the operator variability of different finishing and polishing techniques. After placing 120 composite restorations (Tetric EvoCeram) in plexiglass molds, the surface of the specimens was roughened in a standardized manner. Twelve operators with different experience levels polished the specimens using the following finishing/polishing procedures: method 1 (40 μm diamond [40D], 15 μm diamond [15D], 42 μm silicon carbide polisher [42S], 6 μm silicon carbide polisher [6S] and Occlubrush [O]); method 2 (40D, 42S, 6S and O); method 3 (40D, 42S, 6S and PoGo); method 4 (40D, 42S and PoGo) and method 5 (40D, 42S and O). The mean surface roughness (Ra) was measured with a profilometer. Differences between the methods were analyzed with non-parametric ANOVA and pairwise Wilcoxon signed rank tests (α = 0.05). All the restorations were qualitatively assessed using SEM. Methods 3 and 4 showed the best polishing results and method 5 the poorest. Method 5 was also the most dependent on the skills of the operator. Except for method 5, all of the tested procedures reached a clinically acceptable surface polish of Ra ≤ 0.2 μm. Polishing procedures can therefore be simplified without increasing variability between operators and without jeopardizing polishing results.
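For readers unfamiliar with the statistics used here, the following Python sketch runs a Friedman test and pairwise Wilcoxon signed-rank tests on fabricated Ra data shaped like the study's design (12 operators, 5 methods); the numbers are invented for illustration.

```python
# Illustrative sketch with hypothetical data (not the study's measurements):
# non-parametric repeated-measures comparison of five polishing methods
# across twelve operators.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# rows = 12 operators, columns = methods 1..5 (mean Ra in micrometres)
ra = rng.normal(loc=[0.15, 0.16, 0.12, 0.13, 0.25], scale=0.03, size=(12, 5))

# Friedman test: is there any difference among the five methods?
chi2, p = stats.friedmanchisquare(*ra.T)
print(f"Friedman chi2={chi2:.2f}, p={p:.4f}")

# Pairwise Wilcoxon signed-rank tests (alpha correction omitted for brevity)
for i in range(5):
    for j in range(i + 1, 5):
        _, p_ij = stats.wilcoxon(ra[:, i], ra[:, j])
        print(f"method {i+1} vs method {j+1}: p={p_ij:.4f}")
```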
Abstract:
The first part of this review examined ISO approval requirements and in vitro testing. In the second part, non-standardized test methods for composite materials are presented and discussed. Physical tests are primarily described. Analyses of surface gloss and alterations, as well as aging simulations of dental materials, are presented. Again, the importance of laboratory tests in determining clinical outcomes is evaluated. Differences in the measurement protocols of the various testing institutes, and how these differences can influence the results, are also discussed. Because there is no standardization of test protocols, the values determined by different institutes cannot be directly compared. However, the ranking of the tested materials should be the same if a valid protocol is applied by different institutes. The modulus of elasticity, the expansion after water sorption, and the polishability of the material are all clinically relevant, whereas factors measured by other test protocols may have no clinical correlation. The handling properties of the materials are highly dependent on operators' preferences. Therefore, no standard values can be given.
Abstract:
Plutonium is present in the environment as a consequence of atmospheric nuclear tests, nuclear weapons production and industrial releases over the past 50 years. To study temporal trends, a high-resolution Pu record was obtained by analyzing 52 discrete samples of an alpine firn/ice core from Colle Gnifetti (Monte Rosa, 4450 m a.s.l.), dating from 1945 to 1990. The 239Pu signal was recorded directly, without decontamination or preconcentration steps, using an Inductively Coupled Plasma - Sector Field Mass Spectrometer (ICP-SFMS) equipped with a high-efficiency sample introduction system, thus requiring much less sample preparation than previously reported methods. The 239Pu profile reflects the three main periods of atmospheric nuclear weapons testing: the earliest peak lasted from 1954/55 to 1958 and was caused by the first testing period, reaching a maximum in 1958. Despite a temporary halt of testing in 1959/60, the Pu concentration decreased only by half with respect to the 1958 peak, due to long atmospheric residence times. In 1961/62 Pu concentrations rapidly increased, reaching a maximum in 1963 that was about 40% more intense than the 1958 peak. After the signing of the Limited Test Ban Treaty between the USA and the USSR in 1963, Pu deposition decreased very sharply, reaching a minimum in 1967. The third period (1967-1975) is characterized by irregular Pu concentrations with smaller peaks (about 20-30% of the 1963 maximum) which might be related to the deposition of Saharan dust contaminated by the French nuclear tests of the 1960s. The data presented are in very good agreement with Pu profiles previously obtained from the Col du Dome ice core (by multi-collector ICP-MS) and the Belukha ice core (by Accelerator Mass Spectrometry, AMS). Although a semi-quantitative method was employed here, the results are quantitatively comparable to previously published results.
Abstract:
The aim of this in vitro study was to assess the agreement among four techniques used as gold standards for the validation of methods for occlusal caries detection. Sixty-five human permanent molars were selected, and one site on each occlusal surface was chosen as the test site. The teeth were cut and prepared according to each technique: stereomicroscopy without coloring (1), dye enhancement with rhodamine B (2) and with fuchsine/acetic light green (3), and semi-quantitative microradiography (4). Digital photographs of each prepared tooth were assessed by three examiners for caries extension. Weighted kappa, as well as Friedman's test with multiple comparisons, was used to compare all techniques and verify statistically significant differences. Kappa values varied from 0.62 to 0.78, the latter being found between the two dye enhancement methods. Friedman's test showed a statistically significant difference (P < 0.001), and multiple comparisons identified differences among all techniques except between the two dye enhancement methods (rhodamine B and fuchsine/acetic light green). Cross-tabulation showed that stereomicroscopy overscored the lesions, whereas the two dye enhancement methods agreed well with each other. The outcome of caries diagnostic tests may therefore be influenced by the validation method applied. Dye enhancement methods seem to be reliable as gold standard methods.
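A minimal Python sketch of the weighted kappa computation follows, on hypothetical ordinal scores rather than the study's data.

```python
# Illustrative sketch: weighted kappa between two techniques' ordinal
# caries-extension scores for the same test sites (scores are fabricated).
from sklearn.metrics import cohen_kappa_score

# Ordinal extension scores (e.g. 0 = sound ... 4 = deep dentine lesion)
stereo    = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]   # stereomicroscopy
rhodamine = [0, 1, 1, 2, 3, 3, 1, 0, 2, 3]   # rhodamine B dye enhancement

# Linear weights penalize disagreements in proportion to their distance,
# which is why weighted kappa suits ordinal scales like caries extension.
kappa = cohen_kappa_score(stereo, rhodamine, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```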
Abstract:
OBJECTIVE: To compare regimens consisting of either efavirenz or nevirapine plus two or more nucleoside reverse transcriptase inhibitors (NRTIs) among HIV-infected, antiretroviral-naive, AIDS-free individuals with respect to clinical, immunologic, and virologic outcomes. DESIGN: Prospective studies of HIV-infected individuals in Europe and the US included in the HIV-CAUSAL Collaboration. METHODS: Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started an NRTI, efavirenz or nevirapine, classified as following one or both types of regimens at baseline, and censored when they started an ineligible drug or at 6 months if their regimen was not yet complete. We estimated the 'intention-to-treat' effect of nevirapine versus efavirenz regimens on clinical, immunologic, and virologic outcomes. Our models included baseline covariates and adjusted for potential bias introduced by censoring via inverse probability weighting. RESULTS: A total of 15 336 individuals initiated an efavirenz regimen (274 deaths, 774 AIDS-defining illnesses) and 8129 individuals initiated a nevirapine regimen (203 deaths, 441 AIDS-defining illnesses). The intention-to-treat hazard ratios [95% confidence interval (CI)] for nevirapine versus efavirenz regimens were 1.59 (1.27, 1.98) for death and 1.28 (1.09, 1.50) for AIDS-defining illness. Individuals on nevirapine regimens experienced a 12-month increase in CD4 cell count that was smaller by 11.49 cells/μl, and were 52% more likely to have virologic failure at 12 months, than those on efavirenz regimens. CONCLUSIONS: Our intention-to-treat estimates are consistent with lower mortality, a lower incidence of AIDS-defining illness, a larger 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for efavirenz compared with nevirapine.
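The following Python sketch shows the general shape of an inverse-probability-weighted Cox analysis using the lifelines library on fabricated data; the column names, weights and effect are hypothetical stand-ins, not the HIV-CAUSAL variables or results.

```python
# Minimal sketch of an IPW Cox regression (lifelines) on a fabricated toy
# data frame; all columns and values are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "time":       rng.exponential(24, n),    # months of follow-up
    "event":      rng.integers(0, 2, n),     # 1 = death/AIDS event observed
    "nevirapine": rng.integers(0, 2, n),     # 1 = nevirapine regimen
    "ipw":        rng.uniform(0.5, 2.0, n),  # stabilized censoring weights
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="ipw", robust=True)      # robust SEs for a weighted fit
cph.print_summary()  # exp(coef) of 'nevirapine' is the weighted hazard ratio
```

In the real analysis the weights come from a model of censoring given covariates, so that individuals who remain under follow-up stand in for those who were censored.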
Abstract:
The application of luminescence dating to young volcanic sediments was first investigated over three decades ago, but it was only with the technical innovations of the last decade that such analyses became viable. While current analytical procedures show promise for dating late Quaternary volcanic events, most efforts have been aimed at unconsolidated volcanic tephra. Investigations into the direct dating of lava flows or of non-heated volcaniclastics such as phreatic explosion layers, however, remain scarce. These volcanic deposits occur commonly and represent important chrono- and volcanostratigraphic markers. Their age determination is therefore of great importance in volcanological, tectonic, geomorphological and climate studies. In this article, we propose the use of phreatic explosion deposits and xenolithic inclusions in lava flows as target materials for luminescence dating applications. The main focus is on the crucial criterion of whether such materials are likely to experience complete luminescence signal resetting during the volcanic event to be dated. This is argued on the basis of findings from the existing literature, model calculations and laboratory tests.
Abstract:
BACKGROUND: We sought to characterize the impact that hepatitis C virus (HCV) infection has on CD4 cells during the first 48 weeks of antiretroviral therapy (ART) in previously ART-naive human immunodeficiency virus (HIV)-infected patients. METHODS: The HIV/AIDS Drug Treatment Programme at the British Columbia Centre for Excellence in HIV/AIDS distributes all ART in this Canadian province. Eligible individuals were those whose first-ever ART included 2 nucleoside reverse transcriptase inhibitors and either a protease inhibitor or a nonnucleoside reverse transcriptase inhibitor and who had a documented positive result for HCV antibody testing. Outcomes were binary events (time to an increase of ≥75 CD4 cells/mm³ or an increase of ≥10% in the percentage of CD4 cells in the total T cell population [CD4 cell fraction]) and continuous repeated measures. Statistical analyses used parametric and nonparametric methods, including multivariate mixed-effects linear regression analysis and Cox proportional hazards analysis. RESULTS: Of 1186 eligible patients, 606 (51%) were positive and 580 (49%) were negative for HCV antibodies. HCV antibody-positive patients were slower to have an absolute (P < .001) and a fraction (P = .02) CD4 cell event. In adjusted Cox proportional hazards analysis (controlling for age, sex, baseline absolute CD4 cell count, baseline pVL, type of ART initiated, AIDS diagnosis at baseline, adherence to the ART regimen, and number of CD4 cell measurements), HCV antibody-positive patients were less likely to have an absolute CD4 cell event (adjusted hazard ratio [AHR], 0.84 [95% CI, 0.72-0.98]) and somewhat less likely to have a CD4 cell fraction event (AHR, 0.89 [95% CI, 0.70-1.14]) than HCV antibody-negative patients. In multivariate mixed-effects linear regression analysis, HCV antibody-negative patients had increases of an average of 75 cells in the absolute CD4 cell count and 4.4% in the CD4 cell fraction, compared with 20 cells and 1.1% in HCV antibody-positive patients, during the first 48 weeks of ART, after adjustment for time-updated pVL, number of CD4 cell measurements, and other factors. CONCLUSION: HCV antibody-positive HIV-infected patients may have an altered immunologic response to ART.
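As a sketch of the repeated-measures part of such an analysis, the following Python snippet fits a mixed-effects linear model with a per-patient random intercept using statsmodels; the data, variable names and effect sizes are fabricated, and the real analysis adjusted for many more covariates.

```python
# Minimal sketch: mixed-effects linear regression of repeated CD4 counts with
# a random intercept per patient (statsmodels), on fabricated toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_pat, n_visits = 200, 5
pid  = np.repeat(np.arange(n_pat), n_visits)
week = np.tile(np.linspace(0, 48, n_visits), n_pat)
hcv  = np.repeat(rng.integers(0, 2, n_pat), n_visits)   # 1 = HCV Ab-positive
# Toy effect: CD4 rises more slowly in HCV-positive patients
cd4 = 350 + (1.5 - 1.1 * hcv) * week + rng.normal(0, 40, n_pat * n_visits)

df = pd.DataFrame({"pid": pid, "week": week, "hcv": hcv, "cd4": cd4})

# Random intercept per patient; the hcv:week term captures the slower slope
model = smf.mixedlm("cd4 ~ week + hcv + hcv:week", df, groups=df["pid"])
print(model.fit().summary())
```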
Abstract:
There are numerous statistical methods for quantitative trait linkage analysis in human studies. An ideal method would have high power to detect genetic loci contributing to the trait, would be robust to non-normality in the phenotype distribution, would be appropriate for general pedigrees, would allow the incorporation of environmental covariates, and would be appropriate in the presence of selective sampling. We recently described a general framework for quantitative trait linkage analysis, based on generalized estimating equations, of which many current methods are special cases. This procedure is appropriate for general pedigrees and easily accommodates environmental covariates. In this paper, we use computer simulations to investigate the power and robustness of a variety of linkage test statistics built upon our general framework. We also propose two novel test statistics that take account of higher moments of the phenotype distribution in order to accommodate non-normality. These new linkage tests are shown to have high power and to be robust to non-normality. While we have not yet examined the performance of our procedures in the context of selective sampling via computer simulations, the proposed tests satisfy all of the other qualities of an ideal quantitative trait linkage analysis method.
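The following Python sketch illustrates only the simulation strategy (estimating power and type I error under normal and skewed phenotype distributions); it uses a placeholder t-test and a toy genotype indicator, not the paper's GEE-based linkage statistics or pedigree structures.

```python
# Illustrative Monte Carlo sketch: power and type I error of a placeholder
# association test under normal vs. skewed phenotypes. Everything here is a
# simplified stand-in for the paper's simulation study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def simulate_power(effect, skewed, n=200, reps=2000, alpha=0.05):
    """Fraction of replicates in which the test detects the locus effect."""
    hits = 0
    for _ in range(reps):
        genotype = rng.integers(0, 2, n)               # toy biallelic indicator
        noise = (rng.exponential(1, n) - 1) if skewed else rng.normal(0, 1, n)
        phenotype = effect * genotype + noise
        _, p = stats.ttest_ind(phenotype[genotype == 1],
                               phenotype[genotype == 0])
        hits += p < alpha
    return hits / reps

for skewed in (False, True):
    label = "skewed" if skewed else "normal"
    print(f"{label}: type I error = {simulate_power(0.0, skewed):.3f}, "
          f"power = {simulate_power(0.4, skewed):.3f}")
```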
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for heterogeneity across studies through study-specific random effects. These methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease and the sensitivities and/or specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies have demonstrated the importance of carefully selecting appropriate random effects for the estimation of diagnostic accuracy measurements in this scenario.
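To show what study-specific random effects look like in such a model, here is a minimal PyMC sketch of a hierarchical model for one test's sensitivity across studies, on fabricated counts; it is a simplified stand-in for the paper's joint MSI/MUT model, which additionally handles partial testing and the lack of a gold standard.

```python
# Minimal sketch: Bayesian hierarchical model for test sensitivity across
# studies, with a study-specific random effect on the logit scale (PyMC).
# Counts below are hypothetical.
import numpy as np
import pymc as pm

# Hypothetical per-study data: true positives out of diseased subjects tested
tp = np.array([45, 30, 60, 22, 51])
n_diseased = np.array([50, 40, 70, 30, 60])

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.5)            # mean logit sensitivity
    sigma = pm.HalfNormal("sigma", 1.0)       # between-study heterogeneity
    logit_sens = pm.Normal("logit_sens", mu, sigma, shape=len(tp))
    sens = pm.Deterministic("sens", pm.math.sigmoid(logit_sens))
    pm.Binomial("tp_obs", n=n_diseased, p=sens, observed=tp)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior mean sensitivity per study, pooled toward the overall mean
print(idata.posterior["sens"].mean(dim=("chain", "draw")).values)
```

The partial pooling induced by the shared mu and sigma is exactly what lets heterogeneous studies inform one another without forcing a single common sensitivity.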