950 results for Lead-time reduction
Abstract:
We have previously reported that ingesting a meal immediately after exercise increases skeletal muscle accretion and reduces adipose tissue accumulation in rats subjected to a 10-week resistance exercise program. We hypothesized that a possible increase in the resting metabolic rate (RMR), resulting from the larger skeletal muscle mass, might be responsible for the lower adipose deposition. Therefore, the effect of the timing of a protein supplement after resistance exercise on body composition and the RMR was investigated in 17 slightly overweight men. The subjects participated in a 12-week weight reduction program consisting of mild energy restriction (17% reduction in energy intake) and light resistance exercise using a pair of dumbbells (3-5 kg). The subjects were assigned to two groups. Group S ingested a protein supplement (10 g protein, 7 g carbohydrate, 3.3 g fat, and one-third of the recommended daily allowance (RDA) of vitamins and minerals) immediately after exercise; Group C did not ingest the supplement. Daily intake of both energy and protein was equal between the two groups, and the protein intake met the RDA. After 12 weeks, body weight, skinfold thickness, waist and hip girth, and percentage body fat had decreased significantly in both groups, with no significant differences between the groups. Fat-free mass decreased significantly in C, whereas its decrease in S was not significant. The RMR and post-meal total energy output increased significantly in S, while these variables did not change in C. In addition, urinary nitrogen excretion tended to increase in C but not in S. These results suggest that the RMR increase observed in S may be associated with an increase in body protein synthesis.
Abstract:
Time-resolved imaging is carried out to study the dynamics of the laser-induced forward transfer of an aqueous solution at different laser fluences. The transfer mechanisms are elucidated and directly correlated with the material deposited under the analyzed irradiation conditions. It is found that there exists a fluence range in which regular and well-defined droplets are deposited. In this case, absorption of the laser pulse energy results in the formation of a plasma whose expansion gives rise to a cavitation bubble in the liquid. After the further expansion and collapse of the bubble, a long and uniform jet develops, which advances at a constant velocity until it reaches the receptor substrate. At lower fluences, in contrast, no material is deposited: although a jet can also be generated, it recoils before reaching the substrate. At higher fluences, splashing is observed on the receptor substrate due to the bursting of the cavitation bubble. Finally, a discussion of the possible mechanisms that lead to such singular dynamics is also provided.
Abstract:
Electrical stimulation is a new way to treat digestive disorders such as constipation, and colonic propulsive activity can be triggered by battery-operated devices. This study aimed to demonstrate the effect of direct electrical colonic stimulation on mean transit time in a chronic porcine model; the impact of the stimulation and of the implanted material on the colonic wall was also assessed. Three pairs of electrodes were implanted into the caecal wall of 12 anaesthetized pigs. A reference colonic transit time was determined with radiopaque markers for each pig before implantation; the measurement was repeated 4 weeks after implantation with sham stimulation and 5 weeks after implantation with electrical stimulation. Aboral sequential trains of 1-ms pulse width (10 V; 120 Hz) were applied twice daily for 6 days, using an external battery-operated stimulator. For each course of markers, a mean value was computed from the transit times obtained for each individual pig. Microscopic examination of the caecum was routinely performed after animal sacrifice. A reduction of mean transit time was observed after electrical stimulation (19 +/- 13 h; mean +/- SD) when compared with the reference value (34 +/- 7 h; P = 0.045) and with the mean transit time after sham stimulation (36 +/- 9 h; P = 0.035). Histological examination revealed minimal chronic inflammation around the electrodes. Colonic transit time measured in a chronic porcine model is thus reduced by direct sequential electrical stimulation, and only minimal tissue lesion is elicited by the stimulation or the implanted material. Electrical colonic stimulation could be a promising approach to treat specific disorders of the large bowel.
Abstract:
In recent years, numerous studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. However, most of these studies focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are far from negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails for the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, i.e. approaches allowing a general evaluation of mixture risk. Such an approach highlights the most problematic substances, i.e. those contributing most to the toxicity of the mixture; in our case, essentially four pesticides. The study also shows that all substances, even at trace levels, contribute to the effect of the mixture. This observation has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. However, the proposed approach also presents an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the assessment factors employed. The second part of the study addressed the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for an assessment of the ecosystem as a whole. Their use should therefore proceed via a species-by-species calculation, which is rarely done owing to the lack of available ecotoxicological data. The goal was therefore to compare, using randomly generated values, the risk calculated by a rigorous species-by-species method with the risk calculated in the classical way, where the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditionally used approach; however, this work identified certain cases where the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that micropollutant cocktails may have had on communities in situ. To do so, a two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, from 2004 to 2009, the toxicity due to herbicides decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community.
To answer it, statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time. Interestingly, some of these species had already shown similar behavior in mesocosm studies. In conclusion, this work shows that robust models exist to predict the risk of micropollutant mixtures for aquatic species, and that they can be used to explain the role of these substances in ecosystem functioning. These models nevertheless have limits and underlying assumptions that must be considered when they are applied.
- For several years now, scientists as well as society have been concerned about the risk that organic micropollutants may pose to aquatic systems. Indeed, several studies have shown the toxic effects these substances may induce in organisms living in our lakes and rivers, especially when they are exposed to acute or chronic concentrations. However, most studies have focused on the toxicity of single compounds, i.e. compounds considered individually. The same applies to the current European regulations concerning the environmental risk assessment procedures for these substances. Yet aquatic organisms are typically exposed every day, and simultaneously, to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the mixture model of concentration addition and the use of predicted no effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-tier assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value, generally because of two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their input into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible even without considering these main substances: it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities in aquatic systems was investigated.
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution curves (SSD). Indeed, mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, and not to several species aggregated simultaneously into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: to apply the CA or RA models first to each species separately and, in a second step, to combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially created datasets in order to characterize the robustness of the traditional approach of applying the models directly to species sensitivity distributions. The results showed that the use of CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when the substances present a large standard deviation in their species sensitivity distributions. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data, the mixture risk calculated by the methodology applying mixture models directly to SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep in mind this source of error, especially when the SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, the long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, next to other classical limnological parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were revealed to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behavior in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses.
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
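For reference, the two mixture models named in this abstract are conventionally written as follows (a notational sketch using the standard ecotoxicological formulation, not equations taken from the thesis itself). Concentration addition (CA): a mixture produces effect x when its toxic units sum to one,

    \sum_{i=1}^{n} \frac{c_i}{\mathrm{EC}_{x,i}} = 1,

where c_i is the concentration of component i in the mixture and EC_{x,i} is the concentration of component i alone that produces effect x. Response addition (RA, independent action): the mixture effect is the complement of the joint probability of no effect,

    E_{\mathrm{mix}} = 1 - \prod_{i=1}^{n} \bigl(1 - E_i(c_i)\bigr),

where E_i(c_i) is the fractional effect of component i at concentration c_i.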
Abstract:
Increasing attention has recently been given to sweet sorghum as a renewable raw material for ethanol production, mainly because its cultivation can be fully mechanized. However, the intensive use of agricultural machinery causes soil structural degradation, especially when performed under inadequate soil moisture conditions. The aims of this study were to evaluate the physical quality of a Latossolo Vermelho Distroférrico (Oxisol) under compaction and the effect of its components on sweet sorghum yield for second-crop sowing in the Brazilian Cerrado (Brazilian tropical savanna). The experiment was conducted in a randomized block design, in a split-plot arrangement, with four replications. Five levels of soil compaction were tested through the passing of a tractor at the following traffic intensities: 0 (absence of additional compaction), 1, 2, 7, and 15 passes over the same spot. The subplots consisted of three different sowing times of sweet sorghum during the off-season of 2013 (20/01, 17/02, and 16/03). Soil physical quality was measured through the least limiting water range (LLWR) and soil water limitation; crop yield and technological parameters were also measured. Monitoring of soil water content indicated a reduction in the frequency with which the soil water content remained within the limits of the LLWR (Fwithin) as agricultural traffic increased (T0 = T1 = T2 > T7 > T15), and crop yield was directly associated with soil water content. The crop sown in January had higher industrial quality; however, stalk yield was reduced when bulk density exceeded 1.26 Mg m-3, with a maximum yield of 50 Mg ha-1 at this sowing time. Cultivation of sweet sorghum as a second crop is a promising alternative, but care should be taken when cultivating under conditions of pronounced climatic risk, due to low stalk yield.
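For readers unfamiliar with the index: the LLWR is conventionally defined (the standard soil-physics formulation, not a detail reported in this abstract) as the water-content window bounded by the more restrictive upper and lower limits,

    \mathrm{LLWR} = \min(\theta_{\mathrm{FC}}, \theta_{\mathrm{AFP}}) - \max(\theta_{\mathrm{WP}}, \theta_{\mathrm{PR}}),

where \theta_{\mathrm{FC}} is the water content at field capacity, \theta_{\mathrm{AFP}} the content above which air-filled porosity becomes limiting (commonly set at 10%), \theta_{\mathrm{WP}} the permanent wilting point, and \theta_{\mathrm{PR}} the content at which penetration resistance reaches a limiting value (commonly 2 MPa). Fwithin then tracks how often the measured water content stays inside this window.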
Abstract:
The usual assumption when considering investment grants is that grant payments are automatic once investments are undertaken. However, evidence from case studies shows that there can be a time lag before funds are received by the granted firms. In this paper, the effects of delays in grant payments on the optimal investment policy of the firm are analyzed. It is shown how these delays lead not only to a higher financing cost but also to an effective reduction in the investment grant rate, and, in some cases, how the benefits from investment grants can be canceled out by interactions with tax effects.
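As a minimal illustration of the mechanism (my own sketch under a continuous-discounting assumption, not the paper's model): if a grant covers a fraction s of an investment but is paid with delay \tau while the firm discounts at rate r, the present value of the grant corresponds to an effective rate

    s_{\mathrm{eff}} = s\, e^{-r\tau} < s.

For example, with s = 20%, r = 8% per year and a two-year payment delay, s_eff = 0.20 e^{-0.16} ≈ 17%; erosion of this kind is what, combined with tax effects, can in some cases cancel the benefit of the grant altogether.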
Abstract:
STUDY OBJECTIVE: Acute pain is the most frequent complaint in emergency department (ED) admissions, but its management is often neglected, placing patients at risk of oligoanalgesia. We evaluated the effect of implementing guidelines for pain management in ED patients with pain at admission or at any time during their ED stay. METHODS: This prospective pre-post intervention cohort study included data collection both before and after guideline implementation. Consecutive adult patients admitted with acute pain from any cause, or with pain at any time after admission, were enrolled. The quality of pain management was evaluated from the ED medical records using a standardized collection form, and its impact on patients was recorded with a questionnaire at discharge. RESULTS: Two hundred forty-nine and 192 patients were included during the pre- and postintervention periods, respectively. Pain was documented in 61% and 76% of nurse and physician notes, respectively, versus 78% and 85% after the intervention (difference 17%/9%; 95% confidence interval [CI] 8% to 26%/2% to 17%, respectively). Administration of analgesia increased from 40% to 63% (difference 23%; 95% CI 13% to 32%) and of morphine from 10% to 27% (difference 17%; 95% CI 10% to 24%). Mean doses of intravenous morphine increased from 2.4 mg (95% CI 1.9 to 2.9 mg) to 4.6 mg (95% CI 3.9 to 5.3 mg); administration of nonsteroidal antiinflammatory drugs and acetaminophen increased as well. The reduction in visual analogue scale score was greater after the intervention (2.9 cm; 95% CI 2.5 to 3.3 cm) than before (2.1 cm; 95% CI 1.7 to 2.4 cm), which was associated with improved patient satisfaction. CONCLUSION: An education program and guideline implementation for pain management led to improved pain management, analgesia, and patient satisfaction in the ED.
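For readers checking the reported intervals, a two-proportion normal approximation (my reconstruction; the article's exact method is not stated in the abstract) reproduces the analgesia result:

    \Delta = 0.63 - 0.40 = 0.23, \qquad SE = \sqrt{\tfrac{0.40 \cdot 0.60}{249} + \tfrac{0.63 \cdot 0.37}{192}} \approx 0.047,

giving 0.23 ± 1.96 × 0.047 ≈ [0.14, 0.32], in line with the reported 23% (95% CI 13% to 32%) up to rounding.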
Abstract:
The present study was designed to explore the thermogenic effect of thyroid hormone administration and the resulting changes in nitrogen homeostasis. Normal male volunteers (n = 7) received thyroxine for 6 weeks. The first 3-week period served to suppress endogenous thyroid secretion (180 micrograms T4/day); this dose was doubled for the next 3 weeks. Sleeping energy expenditure (respiratory chamber) and BMR (hood) were measured by indirect calorimetry under standardized conditions. Sleeping heart rate was continuously recorded, and urine was collected during this 12-hour period to assess nitrogen excretion. The changes in energy expenditure, heart rate and nitrogen balance were then related to the excess thyroxine administered. After 3 weeks of treatment, the serum TSH level fell to 0.15 mU/L, indicating an almost complete inhibition of the pituitary-thyroid axis. During this phase of treatment there was an increase in sleeping energy expenditure (EE) and sleeping heart rate, which increased further on doubling the T4 dose (delta EE: +8.5 +/- 2.3%; delta heart rate: +16.1 +/- 2.2%). The T4 dose currently used as a substitutive dose thus led to a borderline hyperthyroid state, with an increase in EE and heart rate. Exogenous T4 administration provoked a significant increase in urinary nitrogen excretion, averaging 40%. It is concluded that T4 provokes an important stimulation of EE, which is mostly mediated by excess protein oxidation.
Abstract:
Canine distemper virus (CDV), a morbillivirus related to measles virus, causes a chronic progressive demyelinating disease associated with persistence of the virus in the central nervous system (CNS). CNS persistence of morbilliviruses has been associated with cell-to-cell spread, which limits immune detection; the mechanism of cell-to-cell spread remains uncertain. In the present study we examined viral spread, comparing a cytolytic (non-persistent) and a persistent CDV strain in cell cultures. Cytolytic CDV spread in a compact concentric manner with extensive cell fusion and destruction of the monolayer. Persistent CDV exhibited a heterogeneous cell-to-cell pattern of spread without cell fusion, and a 100-fold reduction in infectious viral titers in supernatants compared with the cytolytic strain. Ultrastructurally, the low infectious titers correlated with limited budding of persistent CDV, whereas the cytolytic strain shed large numbers of viral particles. The pattern of heterogeneous cell-to-cell viral spread can be explained by low production of infectious viral particles in only a few areas of the cell membrane; in this way, persistent CDV spreads to only a small proportion of the cells surrounding an infected one. Our studies suggest that both cell-to-cell spread and limited production of infectious virus are related to reduced expression of fusogenic complexes in the cell membrane. Such complexes consist of a synergistic configuration of the attachment (H) and fusion (F) proteins on the cell surface. F and H proteins exhibited a marked degree of colocalization in cytolytic CDV infection but not in persistent CDV infection, as seen by confocal laser microscopy. In addition, analysis of CDV F protein expression using vaccinia constructs of both strains revealed an additional large fraction of uncleaved fusion protein in the persistent strain. This suggests that the paucity of active fusion complexes is due to restricted intracellular processing of the viral fusion protein.
Abstract:
A medical and scientific multidisciplinary consensus meeting on Anti-Doping in Sport was held from 29 to 30 November 2013 at the Home of FIFA in Zurich, Switzerland, to create a roadmap for the implementation of the 2015 World Anti-Doping Code. The consensus statement and accompanying papers set out the priorities for the anti-doping community in research, science and medicine. The participants achieved consensus on a strategy for the implementation of the 2015 World Anti-Doping Code. Key components of this strategy include: (1) sport-specific risk assessment, (2) prevalence measurement, (3) sport-specific test distribution plans, (4) storage and reanalysis, (5) analytical challenges, (6) forensic intelligence, (7) psychological approaches to optimise the deterrent effect, (8) the Athlete Biological Passport (ABP) and confounding factors, (9) data management systems (Anti-Doping Administration & Management System, ADAMS), (10) education, (11) research needs and necessary advances, (12) inadvertent doping, and (13) management and ethics of biological data. True implementation of the 2015 World Anti-Doping Code will depend largely on the ability to align thinking around these core concepts and strategies. FIFA, jointly with all other engaged International Federations of sport (IFs), the International Olympic Committee (IOC) and the World Anti-Doping Agency (WADA), is ideally placed to lead transformational change with the unwavering support of the wider anti-doping community. The outcome of the consensus meeting was the creation of an ad hoc Working Group charged with the responsibility of moving this agenda forward.
Abstract:
Background: Numerous studies have shown that immune cells infiltrate the spinal cord after peripheral nerve injury and make a major contribution to sensory hypersensitivity in rodents. The role of monocyte-derived cells and T lymphocytes appears particularly prominent in this process. This exciting new perspective in research on neuropathic pain opens many different areas of work, including understanding the function of these cells and how they affect neural function. However, no systematic description of the time course or of the cell types that characterize this infiltration has been published yet, although this seems to be the rational first step towards an overall understanding of the phenomenon. Objective: To describe the time course and cellular characteristics of T lymphocyte infiltration into the spinal cord in the spared nerve injury (SNI) model of neuropathic pain in rats. Methods: Collection of lumbar spinal cords of rats at days 2, 7, 21 and 40 after SNI or sham operation (n = 4). Immunofluorescence detecting different proteins of T cell subgroups (CD2+CD4+, CD2+CD8+, Th1 markers, Th2 markers, Th17 markers). Quantification of the infiltration rate of the different subgroups. Expected results: First, we expect to see an infiltration of T cells into the spinal cord ipsilateral to the nerve injury that is higher in SNI rats than in sham animals. Second, we anticipate that different subtypes of T cells penetrate at different time points. Finally, the number of T lymphocytes is expected to decrease at the latest time point, showing a resolution of the process that underlay their infiltration of the spinal cord in the first place. Impact: A systematic description of T cell infiltration into the spinal cord after peripheral nerve injury is needed for a better understanding of the role of immune cells in neuropathic pain. The time course that we want to establish will provide the scientific community with new perspectives. First, it will confirm that T cells do indeed infiltrate the spinal cord after SNI in rats. Second, the types of T cells infiltrating at different time points will give clues about their function, in particular their inflammatory or anti-inflammatory profile. From there on, other studies could be conducted to investigate the functional role of the specific subtypes brought to light here. Ultimately, this could lead to the discovery of new drugs targeting T cells or their infiltration, in the hope of improving neuropathic pain.
Abstract:
This research project was directed at laboratory and field evaluation of sodium montmorillonite clay (Bentonite) as a dust palliative for limestone-surfaced secondary roads. It was postulated that the negatively charged surfaces of the clay particles could interact with the positively charged surfaces of the limestone and act as a bonding agent, agglomerating fine (-#200) particulates and bonding them to larger (+#200) limestone particles. One-mile test roads were constructed in Tama, Appanoose, and Hancock counties in Iowa using Bentonite treatment levels (by weight of aggregate) ranging from 3.0 to 12.0%. Construction was accomplished by adding dry Bentonite to the surfacing material and then dry road mixing. The soda ash/water solution (dispersing agent) was spray applied, and the treated surfacing material was wet mixed by motor graders to the consistency of 2 to 3 inch slump concrete; two motor graders working in tandem provided rapid mixing. Following wet mixing, the material was surface spread and compacted by local traffic. Quantitative and qualitative periodic evaluations and testing of the test roads were conducted with respect to dust generation, crust development, roughness, and braking characteristics. As the Bentonite treatment level increased, dust generation decreased. From a cost/benefit standpoint, the optimum level of treatment is about 8% (by weight of aggregate). For roads with light traffic, one application at this treatment level resulted in a 60-70% average dust reduction in the first season, 40-50% in the second season, and 20-30% in the third season. Crust development was rated two times better than in untreated control sections. No discernible trend was evident with respect to roughness, and there was no evident difference between the test sections and the control sections with respect to braking distance and handling characteristics under wet surface conditions. Chloride treatments are more effective for dust reduction in the short term (3-4 months), whereas Bentonite treatment is capable of dust reduction over the long term (2-3 seasons). Normal maintenance blading operations can be used on Bentonite-treated areas. Soda ash dispersed Bentonite treatment is estimated to be more than twice as cost effective per percent dust reduction as conventional chloride treatments over time. The disadvantage is that there is not the initial dramatic reduction in dust generation seen with chloride treatment: although dust is reduced significantly after treatment, some dust is still generated. Video evidence indicates that the dust cloud in the Bentonite-treated sections does not rise as high or spread as wide as the cloud in the untreated section, and it settles faster. This is considered important for the driving safety of following traffic and for nuisance dust invasion of residences and residential areas. The Bentonite appears to function as a bonding agent.
Abstract:
Kansas State University, with funding from the Kansas Department of Transportation (KDOT), has developed a computerized reduction system for profilograms produced by mechanical profilographs. The commercial version of the system (ProScan (trademark)) is marketed by Devore Systems, Inc. The system consists of an IBM-compatible PC 486SX33 computer or better, an Epson LQ-570 printer, a Logitech Scanman 32 hand scanner system, a paper transport unit, and the ProScan software. The scanner is not adaptable to IBM computers with the Micro Channel architecture. The Iowa DOT Transportation Centers could realize the following advantages by using ProScan: (1) save about 5 to 8 staff hours of reduction and reporting time per Transportation Center per week for a Materials Technician 3 or 4 (the time savings would come during the busiest part of the season); (2) reduce errors in the reduction, transfer, and typing of profile values; (3) increase the accuracy of the monitor results; and (4) allow rapid evaluation of contractor traces when tolerance limits between monitor and certified results are exceeded.
Abstract:
It is estimated that around 230 people die each year in Switzerland due to radon (222Rn) exposure. 222Rn occurs mainly in closed environments such as buildings and originates primarily from the subjacent ground; it therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure, through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at the development of predictive models and at an understanding of IRC in Switzerland, taking into account a maximum of information in order to minimize the prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographic information systems (GIS). In a first phase we performed univariate analyses of IRC for different variables, namely detector type, building category, foundation type, year of construction, average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC than earlier constructions, and we observed a further drop in IRC after 1970. In addition, we found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which makes it possible to define geological classes that are coherent in terms of IRC. We also performed a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the use of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose, and could explain up to 33% of the variance of the log-transformed IRC over the whole of Switzerland, a good performance compared with former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation type, year of construction and detector type. Ensemble regression trees such as random forests allow the role of each IRC predictor to be determined in a multidimensional setting; we found spatial information such as geology, altitude and coordinates to have a stronger influence on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3, and in addition we developed a confidence index in order to provide an estimate of the uncertainty of the map.
All these methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which accounts at the same time for different architectural situations as well as for geological and geographical conditions. For the communication of the 222Rn hazard to the population, we recommend making use of the probability map based on kernel estimation. This communication could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, together with a corresponding index of confidence. Given the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
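A minimal sketch of the ensemble-tree step described above (my own illustration, not the authors' code; the file name and column names are hypothetical):

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical measurement table: one row per dwelling, with an IRC value
    # and the predictors named in the abstract.
    df = pd.read_csv("irc_measurements.csv")
    predictors = ["x", "y", "altitude", "outdoor_temp", "building_type",
                  "foundation", "year_built", "detector_type"]
    X = pd.get_dummies(df[predictors], dtype=float)  # one-hot encode categoricals
    y = np.log(df["irc_bq_m3"])                      # model log-transformed IRC

    rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X, y)
    print("out-of-bag R^2:", rf.oob_score_)          # share of log-IRC variance explained

    # Rank predictors: the abstract reports spatial variables (geology, altitude,
    # coordinates) dominating building-related ones.
    for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")

    # Crude local exceedance probability P(IRC > 300 Bq/m3), read off the spread
    # of per-tree predictions (a stand-in for the kernel-estimation approach
    # actually used in the thesis).
    tree_preds = np.stack([t.predict(X.to_numpy()) for t in rf.estimators_])
    p_exceed = (np.exp(tree_preds) > 300.0).mean(axis=0)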
Abstract:
Winter weather in Iowa is often unpredictable and can have an adverse impact on traffic flow. The Iowa Department of Transportation (Iowa DOT) attempts to lessen the impact of winter weather events on traffic speeds with various proactive maintenance operations. To assess the performance of these maintenance operations, it would be beneficial to develop a model for the expected speed reduction based on weather variables and normal maintenance schedules. Such a model would allow the Iowa DOT to identify situations in which speed reductions were much greater or smaller than would be expected for a given set of storm conditions, and to make modifications to improve efficiency and effectiveness. The objective of this work was to predict speed changes relative to the baseline speed under normal conditions, based on nominal maintenance schedules and winter weather covariates (snow type, temperature, and wind speed) measured by roadside weather stations. This allows for an assessment of the impact of winter weather covariates on traffic speed changes and for estimation of the effect of regular maintenance passes. The researchers chose events from Adair County, Iowa, and fit a linear model incorporating the covariates mentioned previously. A Bayesian analysis was conducted to estimate the parameters of this model. Specifically, the analysis produces a distribution for the parameter that represents the impact of maintenance on traffic speeds: the effect of maintenance is not a constant but a value about which the researchers have some uncertainty, and this distribution represents what is known about the effects of maintenance. Similar examinations of the distributions for the effects of the winter weather covariates are possible. Plots of observed and expected traffic speed changes allow a visual assessment of the model fit. Future work involves expanding this model to incorporate many events at multiple locations, which would allow assessment of the impact of winter weather maintenance across various situations and eventually identify locations and times at which maintenance could be improved.
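A minimal sketch of this kind of Bayesian linear model (my own illustration on simulated data, not the researchers' code; the covariate definitions are hypothetical stand-ins for the Adair County data):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Simulated storm-event records: snowfall intensity, air temperature (C),
    # wind speed (m/s), and cumulative plow passes since storm onset.
    snow = rng.gamma(2.0, 1.0, n)
    temp = rng.normal(-5.0, 4.0, n)
    wind = rng.gamma(3.0, 2.0, n)
    passes = rng.integers(0, 5, n).astype(float)

    X = np.column_stack([np.ones(n), snow, temp, wind, passes])
    true_beta = np.array([-2.0, -4.0, 0.3, -0.5, 2.0])  # mph change per unit
    y = X @ true_beta + rng.normal(0.0, 3.0, n)         # observed speed change

    # Conjugate ridge-style posterior for the coefficients:
    # beta | y ~ N(m, sigma^2 V), with prior beta ~ N(0, sigma^2 tau I).
    tau = 100.0
    V = np.linalg.inv(X.T @ X + np.eye(X.shape[1]) / tau)
    m = V @ X.T @ y

    # Plug in the residual variance for sigma^2 (kept simple for the sketch)
    # and draw from the posterior of the maintenance-pass effect.
    sigma2 = float(np.sum((y - X @ m) ** 2) / (n - X.shape[1]))
    draws = rng.multivariate_normal(m, sigma2 * V, size=5000)
    lo, hi = np.percentile(draws[:, 4], [2.5, 97.5])
    print(f"maintenance effect: mean {draws[:, 4].mean():+.2f} mph, "
          f"95% interval [{lo:+.2f}, {hi:+.2f}]")

The posterior distribution of the last coefficient plays the role described in the abstract: it quantifies what is known, with uncertainty, about the speed recovery attributable to each maintenance pass.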