91 results for door-to-needle time
Abstract:
QUESTIONS UNDER STUDY: Our aim was to identify the barriers young men face in consulting a health professional when they experience sexual dysfunction, and where, if anywhere, they turn for answers. METHODS: We conducted an exploratory qualitative study including 12 young men aged 16-20 years, seen in two focus groups. Discussions were triggered by vignettes about sexual dysfunction. RESULTS: Young men preferred not to talk about sexual dysfunction with anyone and to solve such problems alone, as the subject is considered intimate and embarrassing and can negatively impact their masculinity. Confidentiality appeared to be the most important criterion for disclosing an intimate subject to a health professional. Participants raised the problem of males' access to services and the lack of a reason to consult. Two criteria for addressing a problem were whether it was long-lasting and whether it was considered physical. The Internet was unanimously considered an initial means of solving a problem, one that could guide them to a face-to-face consultation if necessary. CONCLUSIONS: The results suggest that Internet-based tools should be developed as an easy-access door to sexual health services for young men. Wherever they consult and for whatever problem, sexual health must be on the agenda.
Abstract:
Background: Atazanavir boosted with ritonavir (ATV/r) and efavirenz (EFV) are both recommended as first-line therapies for HIV-infected patients. We compared the two therapies for virologic efficacy and immune recovery. Methods: We included all treatment-naïve patients in the Swiss HIV Cohort Study starting therapy after May 2003 with either ATV/r or EFV and a backbone of tenofovir and either emtricitabine or lamivudine. We used Cox models to assess time to virologic failure and repeated-measures models to assess the change in CD4 cell counts over time. All models were fit as marginal structural models using both point-of-treatment and censoring weights. Intent-to-treat and various as-treated analyses were carried out; in the latter, patients were censored at their last recorded measurement if they changed therapy or were no longer adherent to therapy. Results: Patients starting EFV (n = 1,097) and ATV/r (n = 384) were followed for a median of 35 and 37 months, respectively. During follow-up, 51% of patients on EFV and 33% of patients on ATV/r remained adherent and made no change to their first-line therapy. Although intent-to-treat analyses suggest virologic failure was more likely with ATV/r, there was no evidence of this disadvantage in patients who adhered to first-line therapy. Patients starting ATV/r had a greater increase in CD4 cell count during the first year of therapy, but this advantage disappeared after one year. Conclusions: In this observational study, there was no good evidence of any intrinsic advantage for one therapy over the other, consistent with earlier clinical trials. Differences between therapies may arise in a clinical setting because of differences in adherence to therapy.
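The marginal structural models mentioned above rely on inverse-probability weighting. As a minimal illustrative sketch (the function name and the propensities are hypothetical, not the study's actual weighting scheme), a stabilized inverse-probability-of-treatment weight can be computed as:

```python
def stabilized_weight(treated, p_marginal, p_conditional):
    """Stabilized inverse-probability-of-treatment weight.

    treated        -- 1 if the patient received the treatment, else 0
    p_marginal     -- P(treatment), ignoring covariates
    p_conditional  -- P(treatment | covariates), e.g. from a logistic model
    """
    if treated:
        return p_marginal / p_conditional
    return (1 - p_marginal) / (1 - p_conditional)

# Hypothetical example: 30% treated overall, but this patient's
# covariates gave a 60% predicted chance of treatment.
w = stabilized_weight(1, 0.30, 0.60)  # -> 0.5
```

Patients whose treatment was more likely than average given their covariates are down-weighted, and vice versa; censoring weights are built analogously from the probability of remaining uncensored.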
Abstract:
BACKGROUND: Socioeconomic status is thought to have a significant influence on stroke incidence, risk factors and outcome. Its influence on acute stroke severity, stroke mechanisms, and acute recanalisation treatment is less well known. METHODS: Over a 4-year period, all ischaemic stroke patients admitted within 24 h were entered prospectively in a stroke registry. Data included insurance status, demographics, risk factors, time to hospital arrival, initial stroke severity (NIHSS), etiology, use of acute treatments, and short-term outcome (modified Rankin Scale, mRS). Privately insured patients (PI) were compared with patients with basic insurance (BI). RESULTS: Of 1062 consecutive acute ischaemic stroke patients, 203 had PI and 859 had BI; there were 585 men and 477 women. Both populations were similar in age, cardiovascular risk factors and preventive medications. The onset-to-admission time, thrombolysis rate, and stroke etiology according to the TOAST classification did not differ between PI and BI. Mean NIHSS at admission was significantly higher for BI. Good outcome (mRS ≤ 2) at 7 days and at 3 months was more frequent in PI than in BI. CONCLUSION: We found better outcomes and lower stroke severity on admission in patients with higher socioeconomic status in an acute stroke population. The reason for milder strokes in patients of higher socioeconomic status in a universal health care system remains to be explained.
Abstract:
The urinary steroid profile is constituted by anabolic androgenic steroids, including testosterone and its relatives, which are extensively metabolized into phase II sulfated or glucuronidated steroids. Liquid chromatography coupled to mass spectrometry (LC-MS) enables the direct analysis of conjugated steroids, which can be used as urinary markers of exogenous steroid administration in doping analysis, without hydrolysis of the conjugated moiety. In this study, a sensitive and selective ultra-high-pressure liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS) method was developed to quantify the major urinary metabolites simultaneously after testosterone intake. Sample preparation of the urine (1 mL) was performed by solid-phase extraction on Oasis HLB sorbent using a 96-well plate format. The conjugated steroids were analyzed by UHPLC-QTOF-MS(E) with a single gradient elution of 36 min (including re-equilibration time) in negative electrospray ionization mode. MS(E) analysis involved parallel alternating acquisitions of low- and high-collision-energy functions. The method was validated and applied to samples collected from a clinical study performed with a group of healthy human volunteers who had taken testosterone, which were compared with samples from a placebo group. Quantitative results were also compared with GC-MS and LC-MS/MS measurements, and the data correlated well. The acquisition of full mass spectra over the entire mass range with QTOF mass analyzers promises the opportunity to extend the steroid profile to a larger number of conjugated steroids.
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. The governing equations of these methods are the same over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers depth. Unfortunately, the methods suffer a significant loss of resolution with depth owing to the diffusive nature of electromagnetic fields. Therefore, estimates of subsurface models obtained with these methods should incorporate a priori information to better constrain the models, and should provide appropriate measures of model uncertainty. During my thesis, I developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. 
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model, obtained prior to the time-lapse experiment, is needed. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume because of the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are larger than is physically plausible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. 
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
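Schematically, the deterministic time-lapse objective described in the first part of the abstract can be written as follows (a generic sketch with illustrative notation, not the thesis's exact formulation):

```latex
\min_{\Delta\mathbf{m}} \;
\left\| \mathbf{W}_d \!\left( \Delta\mathbf{d}
  - \left[ f(\mathbf{m}_0 + \Delta\mathbf{m}) - f(\mathbf{m}_0) \right] \right) \right\|_2^2
\;+\; \lambda \left\| \mathbf{W}_m \, \Delta\mathbf{m} \right\|_p^p ,
\qquad p < 2, \quad
\text{s.t.}\;\; \mathbf{l} \le \Delta\mathbf{m} \le \mathbf{u},
```

where $\Delta\mathbf{d}$ is the difference between the two data vectors, $f(\cdot)$ is the forward operator, $\mathbf{W}_d$ and $\mathbf{W}_m$ are data- and model-weighting operators, the bound constraints $\mathbf{l}, \mathbf{u}$ (enforced via Lagrange multipliers, as described above) encode the expected range of conductivity changes, and $p < 2$ promotes the sharp transitions between changing and unchanging regions mentioned in the abstract.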
Abstract:
BACKGROUND: The aim of this study was to explore the predictive value of longitudinal self-reported adherence data on viral rebound. METHODS: Individuals in the Swiss HIV Cohort Study on combined antiretroviral therapy (cART) with RNA <50 copies/ml over the previous 3 months and who were interviewed about adherence at least once prior to 1 March 2007 were eligible. Adherence was defined in terms of missed doses of cART (0, 1, 2 or >2) in the previous 28 days. Viral rebound was defined as RNA >500 copies/ml. Cox regression models with time-independent and -dependent covariates were used to evaluate time to viral rebound. RESULTS: A total of 2,664 individuals and 15,530 visits were included. Across all visits, non-adherence was reported as follows: 1 missed dose, 14.7%; 2 missed doses, 5.1%; >2 missed doses, 3.8%; taking <95% of doses, 4.5%; and missing ≥2 consecutive doses, 3.2%. In total, 308 (11.6%) patients experienced viral rebound. After controlling for confounding variables, self-reported non-adherence remained significantly associated with the rate of occurrence of viral rebound (compared with zero missed doses: 1 dose, hazard ratio [HR] 1.03, 95% confidence interval [CI] 0.72-1.48; 2 doses, HR 2.17, 95% CI 1.46-3.25; >2 doses, HR 3.66, 95% CI 2.50-5.34). Several variables significantly associated with an increased risk of viral rebound irrespective of adherence were identified: being on a protease inhibitor or triple nucleoside regimen (compared with a non-nucleoside reverse transcriptase inhibitor regimen), >5 previous cART regimens, seeing a less-experienced physician, taking co-medication, and a shorter time virally suppressed. CONCLUSIONS: A simple self-report adherence questionnaire, repeatedly administered, provides a sensitive measure of non-adherence that predicts viral rebound.
Abstract:
The Late Triassic and Jurassic platform and the oceanic complexes in Evvoia, Greece, share a complementary plate-tectonic evolution. Shallow marine carbonate deposition responded to changing rates of subsidence and uplift, whilst the adjacent ocean underwent spreading, and then convergence, collision and finally obduction over the platform complex. Late Triassic ocean spreading correlated with platform subsidence and the formation of a long-persisting peritidal passive-margin platform. Incipient drowning occurred from the Sinemurian to the late Middle Jurassic. This subsidence correlated with intra-oceanic subduction and plate convergence that led to supra-subduction calc-alkaline magmatism and the formation of a primitive volcanic arc. During the Middle Jurassic, plate collision caused arc uplift above the carbonate compensation depth (CCD) in the oceanic realm, and related thrust-faulting, on the platform, led to sub-aerial exposures. Patch-reefs developed there during the Late Oxfordian to Kimmeridgian. Advanced oceanic nappe-loading caused platform drowning below the CCD during the Tithonian, which is documented by intercalations of reefal turbidites with non-carbonate radiolarites. Radiolarites and bypass-turbidites, consisting of siliciclastic greywacke, terminate the platform succession beneath the emplaced oceanic nappe during late Tithonian to Valanginian time.
Abstract:
BACKGROUND AND PURPOSE: Onset-to-reperfusion time (ORT) has recently emerged as an essential prognostic factor in acute ischemic stroke therapy. Although favorable outcome is associated with reduced ORT, it remains unclear whether intracranial bleeding depends on ORT. We therefore sought to determine whether ORT influenced the risk and volume of intracerebral hemorrhage (ICH) after combined intravenous and intra-arterial therapy. METHODS: Based on our prospective registry, we included 157 consecutive acute ischemic stroke patients successfully recanalized with combined intravenous and intra-arterial therapy between April 2007 and October 2011. Primary outcome was any ICH within 24 hours posttreatment. Secondary outcomes included occurrence of symptomatic ICH (sICH) and ICH volume measured with the ABC/2. RESULTS: Any ICH occurred in 26% of the study sample (n=33). sICH occurred in 5.5% (n=7). Median ICH volume was 0.8 mL. ORT was increased in patients with ICH (median=260 minutes; interquartile range=230-306) compared with patients without ICH (median=226 minutes; interquartile range=200-281; P=0.008). In the setting of sICH, ORT reached a median of 300 minutes (interquartile range=276-401; P=0.004). The difference remained significant after adjustment for potential confounding factors (adjusted P=0.045 for ICH; adjusted P=0.002 for sICH). There was no correlation between ICH volume and ORT (r=0.16; P=0.33). CONCLUSIONS: ORT influences the rate but not the volume of ICH and appears to be a critical predictor of symptomatic hemorrhage after successful combined intravenous and intra-arterial therapy. To minimize the risk of bleeding, revascularization should be achieved within 4.5 hours of stroke onset.
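The ABC/2 method cited above estimates hemorrhage volume from three orthogonal diameters measured on CT. A minimal sketch of the calculation (function and variable names are mine, not the registry's):

```python
def abc2_volume(a_cm, b_cm, c_cm):
    """Estimate intracerebral hemorrhage volume (mL) with the ABC/2 method.

    a_cm -- largest hemorrhage diameter on the axial slice showing it (cm)
    b_cm -- diameter perpendicular to A on the same slice (cm)
    c_cm -- vertical (craniocaudal) extent of the hemorrhage (cm)

    1 cm^3 == 1 mL, so the result is reported in millilitres.
    """
    return (a_cm * b_cm * c_cm) / 2

# Illustrative measurements chosen to reproduce the median volume above.
print(abc2_volume(2.0, 1.0, 0.8))  # -> 0.8
```

The formula approximates the lesion as an ellipsoid (4/3·π·abc with the semi-axes a/2, b/2, c/2, and π ≈ 3), which is why the three full diameters are simply multiplied and halved.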
Abstract:
BACKGROUND: Little information is available on resistance to anti-malarial drugs in the Solomon Islands (SI). The analysis of single nucleotide polymorphisms (SNPs) in drug resistance associated parasite genes is a potential alternative to classical time- and resource-consuming in vivo studies to monitor drug resistance. Mutations in pfmdr1 and pfcrt were shown to indicate chloroquine (CQ) resistance, mutations in pfdhfr and pfdhps indicate sulphadoxine-pyrimethamine (SP) resistance, and mutations in pfATPase6 indicate resistance to artemisinin derivatives. METHODS: The relationship between the rate of treatment failure among 25 symptomatic Plasmodium falciparum-infected patients presenting at the clinic and the pattern of resistance-associated SNPs in P. falciparum infecting 76 asymptomatic individuals from the surrounding population was investigated. The study was conducted in the SI in 2004. Patients presenting at a local clinic with microscopically confirmed P. falciparum malaria were recruited and treated with CQ+SP. Rates of treatment failure were estimated during a 28-day follow-up period. In parallel, a DNA microarray technology was used to analyse mutations associated with CQ, SP, and artemisinin derivative resistance among samples from the asymptomatic community. Mutation and haplotype frequencies were determined, as well as the multiplicity of infection. RESULTS: The in vivo study showed an efficacy of 88% for CQ+SP to treat P. falciparum infections. DNA microarray analyses indicated a low diversity in the parasite population with one major haplotype present in 98.7% of the cases. It was composed of fixed mutations at position 86 in pfmdr1, positions 72, 75, 76, 220, 326 and 356 in pfcrt, and positions 59 and 108 in pfdhfr. No mutation was observed in pfdhps or in pfATPase6. The mean multiplicity of infection was 1.39. CONCLUSION: This work provides the first insight into drug resistance markers of P. falciparum in the SI. 
The obtained results indicated the presence of a very homogenous P. falciparum population circulating in the community. Although CQ+SP could still clear most infections, seven fixed mutations associated with CQ resistance and two fixed mutations related to SP resistance were observed. Whether the absence of mutations in pfATPase6 indicates the efficacy of artemisinin derivatives remains to be proven.
Abstract:
This short perspective explores some of the ways in which new genomic methodologies impact the study of endocrine signaling. Emphasis is placed on the impact of studying species that are not molecular biology models. This opens the door to applying knowledge of molecular endocrinology in areas of biology as distant as conservation biology, as well as enriching endocrinology with information from biodiversity and natural variation.
Abstract:
OBJECTIVE: To develop and validate a simple, integer-based score to predict functional outcome in acute ischemic stroke (AIS) using variables readily available after emergency room admission. METHODS: Logistic regression was performed in the derivation cohort of previously independent patients with AIS (Acute Stroke Registry and Analysis of Lausanne [ASTRAL]) to identify predictors of unfavorable outcome (3-month modified Rankin Scale score >2). An integer-based point-scoring system for each covariate of the fitted multivariate model was generated by their β-coefficients; the overall score was calculated as the sum of the weighted scores. The model was validated internally using a 2-fold cross-validation technique and externally in 2 independent cohorts (Athens and Vienna Stroke Registries). RESULTS: Age (A), severity of stroke (S) measured by admission NIH Stroke Scale score, stroke onset to admission time (T), range of visual fields (R), acute glucose (A), and level of consciousness (L) were identified as independent predictors of unfavorable outcome in 1,645 patients in ASTRAL. Their β-coefficients were multiplied by 4 and rounded to the closest integer to generate the score. The area under the receiver operating characteristic curve (AUC) of the score in the ASTRAL cohort was 0.850. The score was well calibrated in the derivation (p = 0.43) and validation cohorts (0.22 [Athens, n = 1,659] and 0.49 [Vienna, n = 653]). AUCs were 0.937 (Athens), 0.771 (Vienna), and 0.902 (when pooled). An ASTRAL score of 31 indicates a 50% likelihood of unfavorable outcome. CONCLUSIONS: The ASTRAL score is a simple integer-based score to predict functional outcome using 6 readily available items at hospital admission. It performed well in double external validation and may be a useful tool for clinical practice and stroke research.
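The score construction described above (each β-coefficient multiplied by 4 and rounded to the closest integer, then summed across the patient's covariates) can be sketched as follows; the coefficients shown are hypothetical placeholders, not the published ASTRAL weights:

```python
def integer_points(beta):
    """Convert a regression beta-coefficient into integer points,
    following the rule in the abstract: multiply by 4 and round."""
    return round(beta * 4)

def total_score(betas, values):
    """Sum the weighted points for one patient.

    betas  -- per-unit beta-coefficients of the fitted model (hypothetical here)
    values -- the patient's covariate values, in the same order
    """
    return sum(integer_points(b) * v for b, v in zip(betas, values))

# Hypothetical coefficients and covariates, for illustration only.
betas = [0.27, 0.51, 1.1]
patient = [1, 2, 1]
print(total_score(betas, patient))  # -> 1*1 + 2*2 + 4*1 = 9
```

Freezing the weights as small integers sacrifices a little discrimination relative to the full logistic model but makes the score computable at the bedside without software.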
Abstract:
The end-Permian mass extinction greatly affected the sedimentary record, and the sedimentary response was not limited to the Permian-Triassic boundary interval: the transformation extended to sedimentation throughout the entire Early Triassic. Calcimicrobialites play an important role throughout this time interval, and at least four main events of anomalous carbonate deposition can be recognized. A post-extinction calcimicrobial unit occurs above the extensive Permian skeletal carbonate platform exposed in the Taurus Mountains (southern Turkey), in southern Armenia, and in north-west, north and central Iran along the Zagros Mountains. The calcimicrobial unit formed during the flooding of the platform that took place during the earliest Triassic. A similar calcimicrobialite formed during late Griesbachian to Dienerian time atop the shallow Permian skeletal carbonate platform widely exposed in south China. A third event occurred during the Early Olenekian on the first Mesozoic isolated pelagic plateau (Baid seamount, Oman Mountains); here the change in carbonate sedimentation is reflected in the occurrence of thrombolites and carbonate seafloor fans. Near the end of Early Triassic time, unusual carbonate deposition is recorded both on an isolated pelagic plateau of the Western Tethys (Hallstatt limestone of Dobrogea, Romania) and on the eastern Panthalassa margin of the western United States, where the event is represented by stromatolites and thrombolites in the Virgin Limestone of the Moenkopi Formation and by seafloor fans in the middle and upper members of the Union Wash Formation. These episodes of anomalous carbonate deposition illustrate a fundamental change in sedimentation in the aftermath of the end-Permian mass extinction.
Abstract:
OBJECTIVES: To evaluate the performance of the INTERMED questionnaire score, alone or combined with other criteria, in predicting return to work after a multidisciplinary rehabilitation program in patients with non-specific chronic low back pain. METHODS: The INTERMED questionnaire is a biopsychosocial assessment and clinical classification tool that separates heterogeneous populations into subgroups according to case complexity. We studied 88 patients with chronic low back pain who followed an intensive multidisciplinary rehabilitation program on an outpatient basis. Before the program, we recorded the INTERMED score, radiological abnormalities, subjective pain severity, and sick leave duration. Associations between these variables and return to full-time work within 3 months after the end of the program were evaluated using one-sided Fisher tests and univariate logistic regression followed by multivariate logistic regression. RESULTS: The univariate analysis showed a significant association between the INTERMED score and return to work (P<0.001; odds ratio, 0.90; 95% confidence interval, 0.86-0.96). In the multivariate analysis, prediction was best when the INTERMED score and sick leave duration were used in combination (P=0.03; odds ratio, 0.48; 95% confidence interval, 0.25-0.93). CONCLUSION: The INTERMED questionnaire is useful for evaluating patients with chronic low back pain. It could be used to improve the selection of patients for intensive multidisciplinary programs, thereby improving the quality of care, while reducing healthcare costs.
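Odds ratios and confidence intervals like those reported above derive from a logistic-regression coefficient and its standard error. A generic sketch (the numbers fed in are invented for illustration; they are not the paper's fitted values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio with a Wald confidence interval.

    beta -- logistic-regression coefficient
    se   -- its standard error
    z    -- normal quantile (1.96 for a two-sided 95% interval)

    Returns (odds ratio, lower bound, upper bound).
    """
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and standard error, for illustration only.
or_, lo, hi = odds_ratio_ci(-0.105, 0.028)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A negative coefficient yields an odds ratio below 1, which is how an INTERMED score like the one above can be read as lowering the odds of return to work per additional point.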
Abstract:
Palinspastic reconstructions provide the ideal framework for many geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher its past. Since learning that the continents move, geologists have tried to retrace their evolution through the ages. If Wegener's original idea was revolutionary at the beginning of the last century, we have known since the early 1960s that continents do not "drift" aimlessly in the middle of the oceans but belong to a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, for historical as well as technical reasons, this idea has still not received sufficient echo within the reconstruction community. Nevertheless, we are convinced that, by applying certain methods and principles, it is possible to escape the traditional "Wegenerian" approach and finally move towards true plate tectonics. The main aim of the present work is to present, in all necessary detail, our tools and methods. Starting from the palaeomagnetic and palaeogeographic data classically used for reconstructions, we developed a new methodology that places the tectonic plates and their kinematics at the heart of the problem. Using continental assemblies (also called "key assemblies") as anchor points distributed over the whole span of our study (from the Eocene back to the Cambrian), we develop geodynamic scenarios leading from one assembly to the next, from the past towards the present. Between two stages, the lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolised by synthetic isochrons) to the continents. Except during collisions, the plates are moved as single rigid entities.
Through the ages, the only evolving elements are the plate boundaries. They are preserved through time, follow a consistent geodynamic evolution, and always form an interconnected network in space. This approach, called "dynamic plate boundaries", integrates multiple factors, including plate buoyancy, spreading rates at ridges, subsidence curves, stratigraphic and palaeobiogeographic data, as well as major tectonic and magmatic events. The method thus offers good control on plate kinematics and provides severe constraints on the model. This multi-source approach requires efficient data organisation and management. Before this study began, the mass of necessary data had become an obstacle difficult to surmount. GIS (Geographic Information Systems) and geodatabases are computer tools specifically devoted to the management, storage and analysis of spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS, we converted this mass of scattered data into valuable geodynamic information that is easily accessible for the creation of reconstructions. At the same time, with specially developed tools, we both facilitated the reconstruction work (automated tasks) and improved the model by greatly strengthening the kinematic control through the creation of plate velocity models. On the basis of the 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. With this unique dataset, we can now address major issues of modern geology, such as the study of sea-level variations and climate change.
We began by tackling another major (and not definitively solved!) problem of modern tectonics: the mechanisms driving plate motions. We observed that, throughout the Earth's history, the rotation poles of the plates (which describe plate motions across the Earth's surface) tend to be distributed along a band running from the North Pacific through northern South America, the Central Atlantic, North Africa and Central Asia up to Japan. Essentially, this distribution means that the plates tend to escape this median plane. Barring a methodological bias we may have failed to identify, we interpreted this phenomenon as reflecting the secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model, and we took particular care to reconstruct it in great detail. In this model, the oceanic crust is preserved from one reconstruction to the next; the crustal material is symbolised by synthetic isochrons whose ages are known. We also reconstructed the margins (active or passive), the mid-ocean ridges and the intra-oceanic subduction zones. Using this very detailed oceanic dataset, we were able to develop unique 3-D bathymetric models offering far greater precision than previous ones.
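The plate velocity models mentioned above rest on the standard rigid-plate relation v = ω × r, where ω is the angular-velocity (Euler) vector of the plate and r the position vector of a surface point. A minimal Python sketch of that relation, using a hypothetical pole location and rotation rate rather than values from this work:

```python
import math

# Sketch of the rigid-plate kinematic relation v = omega x r.
# The pole position and rotation rate below are hypothetical
# illustration values, not results from the reconstructions.

EARTH_RADIUS_KM = 6371.0

def to_cartesian(lat_deg, lon_deg, radius=1.0):
    """Position vector from geographic coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius * math.cos(lat) * math.cos(lon),
            radius * math.cos(lat) * math.sin(lon),
            radius * math.sin(lat))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plate_speed_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, lat, lon):
    """Surface speed |omega x r| at a point, in mm/yr."""
    omega_hat = to_cartesian(pole_lat, pole_lon)          # unit Euler axis
    r = to_cartesian(lat, lon, EARTH_RADIUS_KM)           # point on sphere
    w = math.radians(rate_deg_per_myr)                    # rad per Myr
    v = cross(tuple(w * c for c in omega_hat), r)         # km per Myr
    return math.sqrt(sum(c * c for c in v))               # km/Myr == mm/yr

# A point 90 degrees away from the Euler pole moves fastest:
print(plate_speed_mm_per_yr(90.0, 0.0, 1.0, 0.0, 0.0))   # ~111.2 mm/yr
```

Note the convenient unit identity that 1 km/Myr equals 1 mm/yr; a rotation rate of 1°/Myr about a pole 90° away therefore yields roughly 111 mm/yr, while the speed vanishes at the pole itself.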
Resumo:
A photonic system has been developed that enables sensitive quantitative determination of reactive oxygen species (ROS), mainly hydrogen peroxide (H2O2), in aerosol samples such as airborne nanoparticles and exhaled air from patients. The detection principle relies on the amplification of absorbance under multiple-scattering conditions due to optical path lengthening [1] and [2]. In this study, a cellulose membrane acting as a random medium inside the glass optical cell considerably improved the sensitivity of detection based on the colorimetric FOX assay (Fe(II)/xylenol orange). Despite the loss of assay volume (the cellulose occupies 75% of the cell volume), the limit of detection is improved by one order of magnitude, reaching 9 nM (H2O2 equivalents). Spectral analysis is performed automatically every 5 to 15 s, enabling real-time ROS measurements. Moreover, the elution of the air sample into the collection chamber via a micro-diffuser (impinger) enables quantitative determination of ROS contained in, or generated from, airborne samples. As a proof of concept, the photonic ROS detection system was used to determine both ROS generated by traffic pollution and ROS contained in exhaled breath as biomarkers of lung inflammation.
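The path-lengthening principle can be made concrete with the Beer-Lambert law A = ε·l·c: for a fixed minimum measurable absorbance, multiplying the effective optical path l divides the lowest detectable concentration by the same factor. The sketch below assumes hypothetical values for the molar absorptivity and the absorbance floor, not the actual FOX-assay parameters.

```python
# Sketch of the detection principle: under the Beer-Lambert law
# A = epsilon * l * c, multiple scattering in a random medium
# lengthens the effective optical path l, so the same minimum
# detectable absorbance corresponds to a lower concentration.
# epsilon and A_MIN are hypothetical illustration values.

EPSILON = 1.5e4   # molar absorptivity, L mol^-1 cm^-1 (hypothetical)
A_MIN = 0.001     # smallest reliably measurable absorbance (hypothetical)

def detection_limit_nM(path_cm: float) -> float:
    """Lowest concentration giving A_MIN at the given optical path, in nM."""
    return A_MIN / (EPSILON * path_cm) * 1e9

base = detection_limit_nM(1.0)        # 1 cm cell, no scattering
amplified = detection_limit_nM(10.0)  # ~10x effective path lengthening
print(base, amplified)
```

With these assumed numbers, a tenfold path lengthening lowers the detection limit by exactly one order of magnitude, mirroring the improvement reported for the cellulose-filled cell.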