928 results for standard deviation


Relevance: 60.00%

Abstract:

In recent years, numerous studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. However, most of these studies focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are not negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails for the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, but also to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, i.e. they provide a general assessment of mixture risk. Such an approach highlights the most problematic substances, that is, those contributing most to the toxicity of the mixture; in our case, essentially four pesticides. The study also shows that all substances, even in trace amounts, contribute to the effect of the mixture. This observation has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. However, the proposed approach also carries an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the assessment factors employed. The second part of the work examined the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for an assessment of the ecosystem as a whole. Their use should therefore proceed species by species, which is rarely done owing to the lack of available ecotoxicological data. The aim was thus to compare, using randomly generated values, the risk calculated by a rigorous species-by-species method with the risk calculated in the usual way, in which the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditional approach; however, this work identified cases in which the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that micropollutant cocktails may have had on communities in situ, using a two-step approach. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined. Over the period studied, from 2004 to 2009, the toxicity due to herbicides decreased from 4% of species affected to less than 1%. The next question was whether this decrease in toxicity had an impact on the development of certain species within the algal community.
To this end, statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time. Interestingly, some of these species had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in ecosystem functioning. These models nevertheless have limits and underlying assumptions that must be kept in mind when they are applied. - For several years, the risks that organic micropollutants pose to the aquatic environment have been of great concern to scientists and to society. Numerous studies have demonstrated the toxic effects these chemicals can have on the species of our lakes and rivers when exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of individual substances, i.e. considered separately; the same currently holds in European regulatory procedures for the environmental risk assessment of a substance. Yet organisms are exposed every day to thousands of substances in mixture, and the effects of these "cocktails" are not negligible. The ecological risk posed by these mixtures must therefore be assessed in the most appropriate and reliable way possible. In the first part of this thesis, we examined the methods currently being considered for integration into European legislation for assessing mixture risk in the aquatic environment. These methods are based on the concentration addition model, using either the predicted no-effect concentrations (PNEC) of the substances, or the effect concentrations (EC50) on certain species of a trophic level combined with assessment factors. We applied these methods to two specific cases, Lake Geneva and the Rhône in Switzerland, and discussed the results. These first assessment tiers showed that the mixture risk for these case studies quickly exceeds a critical threshold, an exceedance generally driven by two or three main substances. The proposed procedures therefore make it possible to identify the most problematic substances, for which management measures, such as reducing their input into the aquatic environment, should be considered. However, we also found that the risk associated with these mixtures is not negligible even without these main substances: the accumulation of substances, even in trace amounts, reaches a critical threshold, which is more challenging in terms of risk management. In addition, we highlighted a lack of reliability in these procedures, which can lead to contradictory risk results.
This is linked to the inconsistency of the assessment factors used in the different methods. In the second part of the thesis, we studied the reliability of more advanced methods for predicting the effect of mixtures on the communities living in the aquatic system. These methods combine the concentration addition (CA) or response addition (RA) model with species sensitivity distribution (SSD) curves. Mixture models were in fact developed and validated for application species by species, not on several species aggregated simultaneously in SSD curves. We therefore proposed a more rigorous procedure for mixture risk assessment: first apply the CA or RA model to each species separately, and then, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this method is not applicable in most cases, because it requires data that are generally unavailable. We consequently compared, using randomly generated values, the risk calculated by this more rigorous method with the risk calculated in the traditional way, in order to characterise the robustness of the approach that applies mixture models directly to SSD curves. Our results showed that applying CA directly to SSDs can underestimate the mixture concentration affecting 5% or 50% of species, particularly when the substances show a large standard deviation in their species sensitivity distribution. Applying the RA model can lead to over- or underestimation, depending mainly on the slope of the dose-response curves of the individual species composing the SSDs; the underestimation with RA becomes potentially important when the ratio between the EC50 and the EC10 of the species dose-response curves is smaller than 100. For most substances, however, real ecotoxicity data are such that the mixture risk calculated by applying the models directly to SSDs remains consistent and, if anything, slightly overestimates the risk. These results thus validate the traditional approach. Nevertheless, this source of error should be kept in mind when assessing mixture risk with the traditional method, particularly when the SSDs show a data distribution outside the limits determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. Here we used data from the long-term monitoring of a large European lake, Lake Geneva, which offered the opportunity to assess the extent to which the predicted toxicity of herbicide mixtures explained changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To this end, we determined the mixture toxicity over several years of 14 herbicides regularly detected in the lake, using the CA and RA models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009.
Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant part of the variation in the composition of the phytoplankton community, even after removing the effect of all other covariables. Moreover, some of the species shown to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time had displayed similar behaviour in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. In conclusion, various methods exist for predicting the risk of micropollutant mixtures to aquatic species, and this risk can play a role in ecosystem functioning. These models nevertheless have limits and underlying assumptions that must be considered when they are applied, before their results are used for environmental risk management. - For several years now, scientists as well as society have been concerned about the risk organic micropollutants may pose to aquatic environments. Indeed, numerous studies have shown the toxic effects these substances may induce in organisms living in our lakes and rivers, especially when exposed to acute or chronic concentrations. However, most studies have focused on the toxicity of single compounds, i.e. considered individually, and the same holds in the current European regulations for the environmental risk assessment of these substances. Yet aquatic organisms are typically exposed every day, and simultaneously, to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, and the outcomes of these applications are discussed. These first-tier assessments showed that the mixture risk for the studied cases rapidly exceeded the critical value, an exceedance generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible, even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting the mixture effect on communities in the aquatic system was investigated.
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution (SSD) curves. Indeed, mixture effect predictions were shown to be consistent only when the mixture models are applied to a single species, not to several species simultaneously aggregated into SSDs. Hence, a more rigorous procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and then, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets, to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that using CA directly on SSDs might underestimate the mixture concentration affecting 5% or 50% of species, especially when substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures, one has to keep in mind this source of error in the classical methodology, especially when SSDs present a data distribution outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a European great lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains the changes in the composition of the phytoplankton community, next to other classical limnology parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species shown to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviour in mesocosm studies. It could be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses.
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
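
The two mixture models at the heart of this work can be stated in a few lines. Below is a minimal Python sketch of concentration addition and response addition applied directly to log-normal SSDs, the shortcut whose bias the thesis quantifies; all concentrations, EC50s and SSD slopes are invented for illustration and are not the Lake Geneva data.

```python
# Sketch: concentration addition (CA) vs response addition (RA) applied
# directly to log-normal species sensitivity distributions (SSDs).
# Illustrative values only -- not the thesis data.
import numpy as np
from scipy import stats

# Hypothetical mixture: exposure concentration, SSD median (EC50) and
# SSD slope (log10 standard deviation) for each substance.
conc  = np.array([0.10, 0.05, 0.02])   # exposure, ug/L
ec50  = np.array([10.0, 2.0, 5.0])     # SSD median, ug/L
sigma = np.array([0.7, 0.7, 0.7])      # SSD slope, log10 units

# CA on the SSD: sum toxic units, then read a pooled "average" SSD.
toxic_units = conc / ec50
msPAF_ca = stats.norm.cdf(np.log10(toxic_units.sum()),
                          loc=0.0, scale=sigma.mean())

# RA on the SSDs: combine per-substance potentially affected fractions
# assuming independent modes of action.
paf = stats.norm.cdf(np.log10(conc / ec50), loc=0.0, scale=sigma)
msPAF_ra = 1.0 - np.prod(1.0 - paf)

print(f"CA on SSD: {msPAF_ca:.3%} of species affected")
print(f"RA on SSD: {msPAF_ra:.3%} of species affected")
```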

Relevance: 60.00%

Abstract:

PURPOSE: Neurophysiological monitoring aims to improve the safety of pedicle screw placement, but few quantitative studies assess specificity and sensitivity. In this study, screw placement within the pedicle was measured (post-op CT scan, horizontal and vertical distance from the screw edge to the surface of the pedicle) and correlated with intraoperative neurophysiological stimulation thresholds. METHODS: A single surgeon placed 68 thoracic and 136 lumbar screws in 30 consecutive patients during instrumented fusion under EMG control. The female to male ratio was 1.6 and the average age was 61.3 years (SD 17.7). Radiological measurements, blinded to stimulation threshold, were done on reformatted CT reconstructions using OsiriX software. A standard deviation of the screw position of 2.8 mm was determined from pilot measurements, and a 1 mm screw-pedicle edge distance was considered the difference of interest (standardised difference of 0.35), giving the study a power of 75% (significance level 0.05). RESULTS: Correct placement and stimulation thresholds above 10 mA were found in 71% of screws. Twenty-two percent of screws breached the cortex; 80% of these had stimulation thresholds above 10 mA (sensitivity 20%, specificity 90%). True prediction of correct screw position was more frequent for lumbar than for thoracic screws. CONCLUSION: A screw stimulation threshold of >10 mA does not indicate correct pedicle screw placement. A hypothesised gradual decrease of screw stimulation thresholds was not observed as screw placement approached the nerve root. Aside from a robust threshold of 2 mA indicating direct contact with nervous tissue, a secondary threshold appears to depend on patients' pathology and surgical conditions.
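
The power statement can be reproduced approximately with a normal approximation: the effect size is 1 mm / 2.8 mm ≈ 0.36. The sketch below assumes a two-sided one-sample comparison and an effective sample size of 55 screws, which the abstract does not state; it illustrates the arithmetic, not the authors' exact calculation.

```python
# Sketch: the abstract's power figure reverse-engineered with a normal
# approximation. The effective sample size is an assumption.
from scipy.stats import norm

sd_position = 2.8      # mm, SD of screw position (pilot data)
difference  = 1.0      # mm, screw-pedicle distance of interest
alpha       = 0.05

d = difference / sd_position            # standardised difference
z_crit = norm.ppf(1 - alpha / 2)        # two-sided critical value

def power_one_sample(n: int) -> float:
    """Two-sided one-sample test power, normal approximation."""
    return 1 - norm.cdf(z_crit - d * n ** 0.5)

print(f"standardised difference: {d:.2f}")             # ~0.36
print(f"power at n = 55: {power_one_sample(55):.0%}")  # ~75%
```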

Relevance: 60.00%

Abstract:

ABSTRACT: INTRODUCTION: Lipoprotein-associated phospholipase A2 (Lp-PLA2) is a circulating enzyme with pro-inflammatory and oxidative activities associated with cardiovascular disease and ischemic stroke. While high plasma Lp-PLA2 activity was reported as a risk factor for dementia in the Rotterdam study, no association between Lp-PLA2 mass and dementia or Alzheimer's disease (AD) was detected in the Framingham study. The objectives of the current study were to explore the relationship of plasma Lp-PLA2 activity with cognitive diagnoses (AD, amnestic mild cognitive impairment (aMCI), and cognitively healthy subjects), cardiovascular markers, cerebrospinal fluid (CSF) markers of AD, and apolipoprotein E (APOE) genotype. METHODS: Subjects with mild AD (n = 78) and aMCI (n = 59) were recruited from the Memory Clinic, University Hospital, Basel, Switzerland; cognitively healthy subjects (n = 66) were recruited from the community. Subjects underwent standardised medical, neurological, neuropsychological, imaging, genetic, blood and CSF evaluation. Differences in Lp-PLA2 activity between the cognitive diagnosis groups were tested with ANOVA and in multiple linear regression models with adjustment for covariates. Associations between Lp-PLA2 and markers of cardiovascular disease and AD were explored with Spearman's correlation coefficients. RESULTS: There was no significant difference in plasma Lp-PLA2 activity between AD (197.1 (standard deviation, SD 38.4) nmol/min/ml) and controls (195.4 (SD 41.9)). Gender, statin use and low-density lipoprotein cholesterol (LDL) were independently associated with Lp-PLA2 activity in multiple regression models. Lp-PLA2 activity was correlated with LDL and inversely correlated with high-density lipoprotein (HDL). AD subjects with APOE-ε4 had higher Lp-PLA2 activity (207.9 (SD 41.2)) than AD subjects lacking APOE-ε4 (181.6 (SD 26.0), P = 0.003) although this was attenuated by adjustment for LDL (P = 0.09). No strong correlations were detected for Lp-PLA2 activity and CSF markers of AD. CONCLUSION: Plasma Lp-PLA2 was not associated with a diagnosis of AD or aMCI in this cross-sectional study. The main clinical correlates of Lp-PLA2 activity in AD, aMCI and cognitively healthy subjects were variables associated with lipid metabolism.
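
The analysis pattern described in the methods (group comparison with covariate adjustment, plus rank correlations) is straightforward to express in code. A sketch on synthetic data follows; the column names and values are placeholders, and the model is a simplified stand-in for the study's full covariate set.

```python
# Sketch: group comparison with covariate adjustment and Spearman
# correlation, on synthetic data. Columns are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lp_pla2": rng.normal(196, 40, 200),               # nmol/min/ml
    "group":   rng.choice(["AD", "aMCI", "control"], 200),
    "ldl":     rng.normal(3.0, 0.8, 200),              # mmol/L
    "statin":  rng.integers(0, 2, 200),
})

# Diagnosis-group comparison adjusted for covariates (one-way ANOVA is
# the special case with no covariates).
model = smf.ols("lp_pla2 ~ C(group) + ldl + statin", data=df).fit()
print(model.summary().tables[1])

# Exploratory association, as in the study's correlation analysis.
rho, p = stats.spearmanr(df["lp_pla2"], df["ldl"])
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```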

Relevance: 60.00%

Abstract:

The high molecular weight and low concentration of brain glycogen render its noninvasive quantification challenging. Therefore, the gain in quantification precision of localized 13C MR from 9.4 to 14.1 T was investigated. The signal-to-noise ratio increased by 66%, slightly offset by a T1 increase from 332 ± 15 to 521 ± 34 ms. Isotopic enrichment after long-term 13C administration was comparable (≈40%), as was the nominal linewidth of glycogen C1 (≈50 Hz). Among the factors contributing to the 66% observed increase in signal-to-noise ratio, the longer T1 relaxation time reduced the effective signal-to-noise ratio by only 10% at a repetition time of 1 s. The signal-to-noise ratio increase, together with the larger spectral dispersion at 14.1 T, resulted in a better defined baseline, which allowed more accurate fitting. Quantified glycogen concentrations were 5.8 ± 0.9 mM at 9.4 T and 6.0 ± 0.4 mM at 14.1 T; the decreased standard deviation demonstrates the compounded effect of increased magnetization and improved baseline on the precision of glycogen quantification.
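
The 10% T1 penalty can be checked directly: assuming simple 90° excitation, steady-state signal scales with the saturation factor 1 − exp(−TR/T1). A short worked check (the excitation scheme is an assumption; the actual acquisition may differ):

```python
# Worked check of the T1 claim above: with repeated 90-degree pulses at
# TR = 1 s, the longer T1 at 14.1 T costs only ~10% of effective SNR.
import math

def saturation_factor(tr_s: float, t1_s: float) -> float:
    """Steady-state longitudinal recovery for repeated 90-deg pulses."""
    return 1.0 - math.exp(-tr_s / t1_s)

tr = 1.0
f_9p4  = saturation_factor(tr, 0.332)   # T1 at 9.4 T
f_14p1 = saturation_factor(tr, 0.521)   # T1 at 14.1 T

print(f"relative signal loss: {1 - f_14p1 / f_9p4:.1%}")  # ~10%
```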

Relevance: 60.00%

Abstract:

Variable advisory speed limit (VASL) systems can be effective at both urban and rural work zones, at both uncongested and congested sites. At uncongested urban work zones, average speeds with VASL were lower than without VASL, but the standard deviation of speeds with VASL was higher; this increase may be due to the advisory nature of VASL. Speed limit compliance with VASL was about eight times greater than without VASL. At the congested sites, VASL was effective in making drivers slow down gradually as they approached the work zone, reducing sudden changes in speed. In terms of mobility, the use of VASL resulted in a decrease in average queue length, throughput, and number of stops, and an increase in travel time. Several surrogate safety measures also demonstrated the benefits of VASL in congested work zones. VASL deployments in rural work zones resulted in reductions in mean speed, speed variance, and 85th percentile speeds downstream of the VASL sign. The study makes the following recommendations based on the case studies investigated: 1. The use of VASL is recommended for uncongested work zones to achieve better speed compliance and lower speeds; greater enforcement of regulatory speed limits could help to decrease the standard deviation of speeds. 2. The use of VASL to complement static speed limits in rural work zones is beneficial even if the VASL only displays the static speed limits: it leads to safer traffic conditions by encouraging traffic to slow down gradually and by reminding traffic of the reduced speed limit. A well-designed VASL algorithm, like the P5 algorithm developed in this study, can significantly improve mobility and safety conditions in congested work zones; the use of simulation is recommended for optimizing VASL algorithms before field deployment.
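
The speed-based measures cited throughout (mean, standard deviation, 85th percentile, compliance) reduce to a few lines of code. A sketch on hypothetical spot speeds, with an assumed advisory limit:

```python
# Sketch: the surrogate speed measures used above, computed from a
# hypothetical sample of spot speeds (mph). Values are illustrative.
import numpy as np

speeds = np.array([52.0, 48.5, 55.0, 45.0, 50.5, 47.0, 58.0, 44.5])
advisory_limit = 50.0

print(f"mean speed:      {speeds.mean():.1f} mph")
print(f"speed SD:        {speeds.std(ddof=1):.1f} mph")
print(f"85th percentile: {np.percentile(speeds, 85):.1f} mph")
print(f"compliance:      {(speeds <= advisory_limit).mean():.0%}")
```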

Relevance: 60.00%

Abstract:

Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are wireless warning lights that flash in sequence to clearly delineate the taper at work zones. Their effectiveness was investigated using controlled field studies: traffic parameters were collected at the same field site with and without the deployment of sequential lights. Three surrogate performance measures were used to determine the impact of sequential lights on safety: the speeds of approaching vehicles, the number of late taper merges, and the locations where vehicles merged into the open lane from the closed lane. In addition, an economic analysis was conducted to monetize the benefits and costs of deploying sequential lights at nighttime work zones. The results of this study indicate that sequential warning lights had a net positive effect in reducing the speeds of approaching vehicles, enhancing driver compliance, and preventing late taper merges by passenger cars, trucks and vehicles at rural work zones. Statistically significant decreases of 2.21 mph in mean speed and 1 mph in 85th percentile speed resulted with sequential lights. The shift of the cumulative speed distributions to the left (i.e. a speed decrease) was also found to be statistically significant using the Mann-Whitney and Kolmogorov-Smirnov tests, but a statistically significant increase of 0.91 mph in the speed standard deviation also resulted with sequential lights. With sequential lights, the percentage of vehicles that merged early increased from 53.49% to 65.36%. A benefit-cost ratio of around 5 or 10 resulted from this analysis of Missouri nighttime work zones and historical crash data; the two different benefit-cost ratios reflect two different ways of computing labor costs.
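
Both distribution-shift tests named above are available in scipy. The sketch below applies them to simulated speed samples with roughly the reported shift in mean and spread; the data are illustrative, not the Missouri measurements.

```python
# Sketch: Mann-Whitney and Kolmogorov-Smirnov tests on two simulated
# speed samples (with vs without sequential lights).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
without_lights = rng.normal(52.0, 5.0, 300)   # mph
with_lights    = rng.normal(49.8, 5.9, 300)   # mph

u_stat, u_p = stats.mannwhitneyu(with_lights, without_lights,
                                 alternative="less")
ks_stat, ks_p = stats.ks_2samp(with_lights, without_lights)

print(f"Mann-Whitney: U = {u_stat:.0f}, p = {u_p:.4f}")
print(f"Kolmogorov-Smirnov: D = {ks_stat:.3f}, p = {ks_p:.4f}")
```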

Relevance: 60.00%

Abstract:

BACKGROUND: We aimed to determine the smallest changes in health-related quality of life (HRQoL) scores in the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (QLQ-C30) and Brain Cancer Module (QLQ-BN20) that could be considered clinically meaningful in brain cancer patients. MATERIALS AND METHODS: World Health Organisation performance status (PS) and the mini-mental state examination (MMSE) were used as clinical anchors for related subscales to determine the minimal clinically important differences (MCIDs) in HRQoL change scores (range 0-100) in the QLQ-C30 and QLQ-BN20. A threshold of 0.2 standard deviation (SD) (a small effect) was used to exclude anchor-based MCID estimates considered too small to inform interpretation. RESULTS: Based on PS, our findings support the following integer estimates of the MCID for improvement and deterioration, respectively: physical (6, 9), role (14, 12), and cognitive functioning (8, 8); global health status (7, 4*), fatigue (12, 9), and motor dysfunction (4*, 5). Anchoring with the MMSE, the cognitive functioning MCID estimates for improvement and deterioration were (11, 2*), and those for communication deficit were (9, 7). Estimates marked with asterisks were <0.2 SD and were excluded from our MCID range of 5-14. CONCLUSION: These estimates can help clinicians evaluate changes in HRQoL over time, assess the value of a health care intervention, and can be useful in determining sample sizes when designing future clinical trials.
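
The 0.2 SD exclusion rule can be written as a small filter. In this sketch the SD of the change scores and the candidate estimates are invented; only the screening logic mirrors the text.

```python
# Sketch: the 0.2-SD screen described above. Anchor-based MCID
# estimates below 0.2 * SD of the change scores are discarded as
# smaller than a "small" effect. Values are illustrative.
def screen_mcid(estimates: dict, sd_change: float) -> dict:
    threshold = 0.2 * sd_change
    return {scale: est for scale, est in estimates.items()
            if abs(est) >= threshold}

# Hypothetical anchor-based estimates (HRQoL points, 0-100 scale).
improvement = {"physical": 6, "role": 14, "global_health": 7,
               "motor_dysfunction": 4}
print(screen_mcid(improvement, sd_change=25.0))  # threshold = 5 points
```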

Relevance: 60.00%

Abstract:

We present state-of-the-art dual-wavelength digital holographic microscopy (DHM) measurements on a calibrated 8.9 nm high chromium thin step sample and demonstrate sub-nanometer axial accuracy. By using a modified DHM reference calibrated hologram (RCH) reconstruction method, a temporal averaging procedure, and a specific dual-wavelength DHM arrangement, it is shown that specimen topography can be measured with an accuracy, defined as the axial standard deviation, reduced to at least 0.9 nm. Indeed, for the first time to the best of our knowledge, it is reported that averaging each of the two wavefronts recorded with real-time dual-wavelength DHM can provide up to 30% spatial noise reduction for the given configuration. Moreover, the presented experimental configuration achieves a temporal stability below 0.8 nm, thus paving the way to the Ångström range for dual-wavelength DHM.
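
Two ingredients of the method lend themselves to a short sketch: the synthetic wavelength of a dual-wavelength setup, and temporal averaging of repeated wavefronts. The wavelengths, noise level and frame count below are assumptions; note that with purely independent noise, averaging gains far more than the 30% reported, since real DHM noise maps are partly correlated.

```python
# Sketch: synthetic wavelength and temporal averaging in dual-
# wavelength DHM. All numeric values are assumptions.
import numpy as np

lam1, lam2 = 680e-9, 760e-9                  # assumed laser lines (m)
lam_synth = lam1 * lam2 / abs(lam1 - lam2)   # synthetic wavelength
print(f"synthetic wavelength: {lam_synth * 1e6:.2f} um")

rng = np.random.default_rng(2)
step = 8.9e-9                                # nominal step height (m)
n_frames = 30
# Repeated height maps of a flat step with additive spatial noise:
frames = step + rng.normal(0.0, 2.0e-9, (n_frames, 256, 256))
single = frames[0]
averaged = frames.mean(axis=0)

print(f"axial SD, single frame: {single.std() * 1e9:.2f} nm")
print(f"axial SD, averaged:     {averaged.std() * 1e9:.2f} nm")
```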

Relevance: 60.00%

Abstract:

This article evaluates the results of portal vein (PV) stent placement in patients with malignant extrinsic lesions stenosing or obstructing the PV and causing symptomatic PV hypertension (PVHT). Fourteen patients with bile duct cancer (n = 7), pancreatic adenocarcinoma (n = 4), or another cancer (n = 3) underwent percutaneous transhepatic portal venous stent placement because of gastroesophageal or jejunal varices (n = 9), ascites (n = 7), and/or thrombocytopenia (n = 2). Concurrent tumoral obstruction of the main bile duct was treated via the transhepatic route in the same session in four patients. Changes in portal venous pressure, complications, stent patency, and survival were evaluated. The mean ± standard deviation (SD) portal venous pressure gradient decreased significantly immediately after stent placement, from 11.2 ± 4.6 mmHg to 1.1 ± 1.0 mmHg (P < 0.00001). Three patients had minor complications, and one developed a liver abscess. During a mean ± SD follow-up of 134.4 ± 123.3 days, portal stents remained patent in 11 patients (78.6%); stent occlusion occurred in 3 patients, 2 of whom had undergone previous major hepatectomy. After stent placement, PVHT symptoms were relieved in four (57.1%) of seven patients who died (mean survival, 97 ± 71.2 days), and in six (85.7%) of seven patients still alive at the end of follow-up (mean follow-up, 171.7 ± 153.5 days). Stent placement in the PV is feasible and relatively safe, and it helped to relieve PVHT symptoms in a single session.

Relevance: 60.00%

Abstract:

Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rock-slope characterization and monitoring. Landslide and rockfall movements can be detected by comparing sequential scans. One of the most pressing challenges in natural hazards is the combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude; it consisted of a known displacement of three objects relative to a stable surface. The results show that millimetric changes cannot be detected by analysing the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
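
Nearest Neighbour averaging is simple to sketch: each point's displacement is replaced by the mean over its k nearest neighbours. The point cloud, noise level and k below are invented; with k = 25 neighbours and independent noise, the error drops by about a factor of 5, of the order of the factor 6 reported.

```python
# Sketch: NN averaging of a noisy displacement field, as used above.
# Data and parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.uniform(0, 10, (5000, 2))           # scan grid coords (m)
true_disp = 0.002                                # 2 mm displacement
disp = true_disp + rng.normal(0, 0.005, 5000)    # 5 mm noise (1-sigma)

tree = cKDTree(points)
_, idx = tree.query(points, k=25)                # 25 nearest points
disp_nn = disp[idx].mean(axis=1)                 # neighbourhood mean

print(f"raw error (1-sigma):   {disp.std() * 1e3:.2f} mm")
print(f"NN-averaged (1-sigma): {disp_nn.std() * 1e3:.2f} mm")
```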

Relevance: 60.00%

Abstract:

The role of busulfan (Bu) metabolites in the adverse events seen during hematopoietic stem cell transplantation and in drug interactions has not been explored, and the lack of established analytical methods limits our understanding in this area. The present work describes a novel gas chromatography-tandem mass spectrometric assay for the analysis of sulfolane (Su) in the plasma of patients receiving high-dose Bu. Su and Bu were extracted from a single 100 μL plasma sample by liquid-liquid extraction. Bu was separately derivatized with the fluorinated agent 2,3,5,6-tetrafluorothiophenol. Mass spectrometric detection of the analytes was performed in selected reaction monitoring mode on a triple quadrupole instrument after electron impact ionization. Bu and Su were analyzed with separate chromatographic programs, lasting 5 min each. The assay for Su was found to be linear in the concentration range of 20-400 ng/mL. The method has satisfactory sensitivity (lower limit of quantification, 20 ng/mL) and precision (relative standard deviation less than 15%) for all the concentrations tested, with good trueness (100 ± 5%). The method was applied to measure Su in pediatric patients, with samples collected 4 h after dose 1 (n = 46), before dose 7 (n = 56), and after dose 9 (n = 54) of Bu infusions. Su (mean ± SD) was detectable in patients' plasma 4 h after dose 1, and higher levels were observed after dose 9 (249.9 ± 123.4 ng/mL). This method may be used in clinical studies investigating the role of Su in the adverse events and drug interactions associated with Bu therapy.
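
The precision and trueness criteria quoted above come down to two formulas: RSD = SD/mean and trueness = mean/nominal. A sketch with invented replicate values:

```python
# Sketch: relative standard deviation (precision) and trueness from
# replicate measurements of a spiked QC sample. Values illustrative.
import numpy as np

nominal = 100.0                                  # spiked Su, ng/mL
replicates = np.array([97.5, 102.1, 99.3, 104.0, 96.8, 101.2])

rsd = replicates.std(ddof=1) / replicates.mean() * 100
trueness = replicates.mean() / nominal * 100

print(f"RSD:      {rsd:.1f} %   (acceptance: < 15 %)")
print(f"trueness: {trueness:.1f} %   (acceptance: 100 +/- 5 %)")
```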

Relevance: 60.00%

Abstract:

Purpose: To examine the efficacy and safety of Baerveldt shunt (BS) implantation compared to combined phacoemulsification and Baerveldt shunt implantation (PBS). This study was designed to detect a difference in IOP reduction of 20% (~4 mmHg) between groups with 90% power. Methods: Sixty patients with medically uncontrolled glaucoma prospectively underwent either BS implantation with phacoemulsification (group PBS; n=30) or BS implantation alone (group BS; n=30, pseudophakic eyes only). Groups were matched for age, glaucoma subtype and length of follow-up. Pre- and post-operative measures recorded included patient demographics, visual acuity (VA), IOP, number of glaucoma medications (GMs) and all complications. Success was defined as IOP ≤ 21 mmHg and a 20% reduction in IOP from baseline, with or without GMs. Results: The mean age of the PBS and BS groups was 61 vs 62 years respectively (p=0.72*). There were no significant differences in preoperative baseline characteristics, PBS vs BS: mean IOP = 25.5 mmHg (standard deviation (SD) ±10.3 mmHg) vs 26.1 mmHg (SD ±10.6 mmHg), p=0.81*; mean GMs = 3.0 (SD ±1.1) vs 3.1 (SD ±1.0), p=0.83*; mean VA = 0.3 vs 0.3, p=0.89*. At year one there were no significant differences between groups in post-operative IOP, GMs or VA: mean IOP = 14.1 mmHg (SD ±5.4 mmHg) vs 11.5 mmHg (SD ±4.2 mmHg), p=0.12*; mean GMs = 1.6 (SD ±1.4) vs 1.1 (SD ±1.1), p=0.23*; mean VA = 0.5 vs 0.4, p=0.46*. Complication rates were similar between the two groups (7% vs 14%). The success rate was lower in eyes with PBS (71%) than with BS (88%), but this did not reach statistical significance (p=0.95, log-rank test). (* two-sample t-test) Conclusions: There were no significant differences at year one in success or complication rates between the PBS and BS groups, suggesting that simultaneous phacoemulsification does not have a marked (>4 mmHg) effect on tube function. IOP reduction and success were lower in the PBS group; a larger sample (n=120) would be required to investigate whether there is a 10% difference in IOP reduction between groups, though it is unclear whether this would be a clinically significant difference justifying separate surgeries.

Relevance: 60.00%

Abstract:

There is evidence across several species for genetic control of phenotypic variation of complex traits, such that the variance among phenotypes is genotype dependent. Understanding genetic control of variability is important in evolutionary biology, agricultural selection programmes and human medicine, yet for complex traits, no individual genetic variants associated with variance, as opposed to the mean, have been identified. Here we perform a meta-analysis of genome-wide association studies of phenotypic variation using ∼170,000 samples on height and body mass index (BMI) in human populations. We report evidence that the single nucleotide polymorphism (SNP) rs7202116 at the FTO gene locus, which is known to be associated with obesity (as measured by mean BMI for each rs7202116 genotype), is also associated with phenotypic variability. We show that the results are not due to scale effects or other artefacts, and find no other experiment-wise significant evidence for effects on variability, either at loci other than FTO for BMI or at any locus for height. The difference in variance for BMI among individuals with opposite homozygous genotypes at the FTO locus is approximately 7%, corresponding to a difference of ∼0.5 kilograms in the standard deviation of weight. Our results indicate that genetic variants can be discovered that are associated with variability, and that between-person variability in obesity can partly be explained by the genotype at the FTO locus. The results are consistent with reported FTO-by-environment interactions for BMI, possibly mediated by DNA methylation. Our BMI results for other SNPs and our height results for all SNPs suggest that most genetic variants, including those that influence mean height or mean BMI, are not associated with phenotypic variance, or that their effects on variability are too small to detect even with sample sizes greater than 100,000.
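
A common way to test whether phenotypic variance differs by genotype is Levene's test. The sketch below applies it to simulated BMI-like data carrying roughly the ~7% variance difference quoted above; it is a generic stand-in, not the meta-analysis method actually used in the study.

```python
# Sketch: variance-heterogeneity test across genotype groups using
# Levene's test on simulated data. Not the study's exact method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# BMI-like phenotype for three genotype groups at a biallelic SNP,
# with the variance of one homozygote inflated by ~7%.
aa = rng.normal(26.5, 4.0, 2000)                  # reference homozygote
ag = rng.normal(27.0, 4.0 * 1.035, 4000)          # heterozygote
gg = rng.normal(27.5, 4.0 * np.sqrt(1.07), 2000)  # variance +7%

w_stat, p = stats.levene(aa, ag, gg, center="median")
print(f"Levene W = {w_stat:.2f}, p = {p:.3g}")
```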

Relevance: 60.00%

Abstract:

The objective of this paper was to describe the radiation and energy balance during the lettuce (Lactuca sativa L. cv. Verônica) crop cycle inside a polyethylene greenhouse. The radiation and energy balances were measured inside a tunnel greenhouse with a polyethylene cover (100 μm) and in an external area, both of 35 m². Global, reflected and net radiation, soil heat flux and air temperature (dry and humid) were measured during the crop cycle. A datalogger operating at 1 Hz and storing 5-minute averages was used. The global (K↓) and reflected (K↑) radiation showed that the average transmissivity to global radiation (K↓in/K↓ex) was almost constant, near 79.59%, while the average ratio of reflected radiation (K↑in/K↑ex) was 69.21%, with a standard deviation of 8.47%. The normalized curves of short-wave net radiation relative to global radiation (K*/K↓) in both environments were almost constant at the beginning of the cycle, and the ratio decreased in the final stage of the culture. The normalized ratio (Rn/K↓) was larger in the external area, by about 12%, once the crop covered the soil surface. The average long-wave radiation balance (L*) was larger outside, by about 50%. The energy balance, estimated in terms of vertical fluxes, showed that in the external area, on average, 83.07% of total net radiation was converted into latent heat of evaporation (LE), 18% into soil heat flux (G), and 9.96% into sensible heat (H), while inside the greenhouse 58.71% of total net radiation was converted into LE, 42.68% into H, and 28.79% into G.
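
The bookkeeping behind these percentages is the surface energy balance Rn = LE + H + G, together with the cover transmissivity K↓in/K↓ex. A sketch with placeholder flux values (the LE fraction below is assumed, not the measured series):

```python
# Sketch: cover transmissivity and energy-balance partitioning,
# Rn = LE + H + G. All flux values are placeholders.
k_down_in, k_down_ex = 398.0, 500.0        # global radiation, W/m2
print(f"cover transmissivity: {k_down_in / k_down_ex:.1%}")  # ~80%

rn, g = 320.0, 55.0                        # net radiation, soil flux
le = 0.65 * rn                             # assumed latent heat flux
h = rn - le - g                            # residual sensible heat

for name, flux in [("LE", le), ("H", h), ("G", g)]:
    print(f"{name}/Rn = {flux / rn:.1%}")
```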

Relevance: 60.00%

Abstract:

INTRODUCTION: Respiratory therapy is a keystone of the treatment of cystic fibrosis (CF) lung disease, but it is time consuming. OBJECTIVES: We aimed to assess the total time spent on respiratory therapy, including chest physiotherapy (CPT) and physical activity (PA) as well as inhalation therapy (IT) and maintenance of materials (MM), in order to rationalise and optimise treatment. METHODS: A cross-sectional prospective study in a paediatric CF cohort. A questionnaire was developed to record the time spent on respiratory care over 3 months. All CF patients aged 6 to 16 years were enrolled (the exclusion criterion was lung transplantation). RESULTS: Of the 40 enrolled patients, 22 participated (13 boys and 9 girls), with a mean age of 11 years. The patients spent approximately 19.46 h per week on therapy (standard deviation ± 7.53, range 8.00-35.25 h): CPT (30.58%), IT (15.11%), PA (50%) and MM (4.32%), with no statistically significant difference between sexes. CONCLUSION: In our cohort, CF patients spent an average of nearly 20 h a week on respiratory therapy, ranging from 8 h to almost 36 h a week, with PA consuming almost half of the time. Physicians have to take the burden of treatment into consideration in order to optimise therapy.