838 results for Step length
Abstract:
Résumé Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries significant financial costs. A good understanding of sliding mechanisms can help mitigate their impact, notably through knowledge of the internal structure of the landslide and the determination of its volume and of its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change its physical parameters, in particular decreasing seismic propagation velocities and material density. Seismic methods are therefore well suited to the study of landslides. Among them, surface-wave dispersion analysis is simple to implement. It has the advantage of estimating shear-wave velocity variations with depth without specifically requiring an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion on long arrays, determining the dispersion curves, and finally inverting those curves. Velocity models obtained with this procedure are only valid when the investigated medium shows no lateral variations. In practice this assumption is rarely satisfied, particularly for a landslide in which the reworked layers are likely to show strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, test measurements were carried out at a site (Arnex, VD) equipped with a borehole. 
A 190 m long seismic profile was deployed in a valley carved into limestone and filled with about thirty metres of glacio-lacustrine deposits. The data acquired along this profile confirmed that lateral variations beneath the geophone array affect the shape of the dispersion curves, sometimes to the point of preventing their determination. To apply surface-wave dispersion analysis at sites with lateral variations, our approach consists in determining dispersion curves for a series of short arrays, inverting each curve, and interpolating the resulting velocity models. The choice of the position and of the length of the different geophone arrays is important. It takes into account the location of heterogeneities detected by seismic refraction analysis, but also amplitude anomalies observed on maps that represent, in the shot position - receiver position domain, the amplitude measured at different frequencies. The procedure proposed by Lin and Lin (2007) proved to be an efficient method for determining dispersion curves from short arrays. It consists in building, from one geophone array and several shot positions, a time-offset gather that covers a wide range of source-receiver distances. When assembling the different records, a phase correction is applied to account for heterogeneities located between the different shot points. To evaluate this correction we suggest computing, for two successive shots, the cross power spectral density of traces with the same offset. At the Arnex site, 22 dispersion curves were determined for 10 m long geophone arrays. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. 
The S-wave velocity model derived from the interpretation of the vertical seismic profile is used as a priori information in the inversion of the different dispersion curves. Finally, the two-dimensional model established from the surface-wave dispersion analysis reveals a tabular three-layer structure whose boundaries coincide well with the lithological boundaries observed in the borehole. There, silty clays with an S-wave velocity of about 175 m/s overlie, at about 9 m depth, clayey-sandy till deposits characterized by S-wave velocities of about 300 m/s down to 14 m depth and of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs within the Quaternary filling of a combe carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits are glacio-lacustrine. In the upper part of the landslide, the sliding surface was located at about twenty metres depth, at the interface separating Jurassian till from glacio-lacustrine deposits. At the toe of the landslide, 14 dispersion curves were determined on 10 m long arrays along a 144 m profile. The curves obtained are discontinuous and defined over a frequency range of 7 to 35 Hz. Thanks to source-receiver distances between 8 and 72 m, 2 to 4 propagation modes were identified for each curve. Taking these modes into account when inverting the dispersion curves extended the investigation depth to about twenty metres. 
The two-dimensional model distinguishes four layers (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s and Vs4 > 400 m/s) of variable thickness. S-wave seismic reflection profiles, acquired with a source built as part of this work, complete and corroborate the model established from the surface-wave dispersion analysis. In particular, a reflector located between 5 and 10 m depth and associated with a stacking velocity of 180 m/s outlines the geometry of the interface separating the second and third layers of that model. Abstract Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries important financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through better knowledge of the internal structure of the landslide and the determination of its volume and its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change the physical parameters, in particular decreasing the seismic velocities and the material density. Therefore, seismic methods are well adapted to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement. Through it, shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion with long arrays, determining the dispersion curves and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations. 
In practice, this assumption is seldom correct, in particular for landslides, in which reworked layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape, sometimes to the point of preventing determination of the dispersion curves. Our approach to applying surface-wave dispersion analysis on sites that include lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve, and interpolating the resulting velocity models. The choice of the location as well as of the geophone-array length is important. It takes into account the location of the heterogeneities revealed by the seismic refraction interpretation of the data, but also the location of signal-amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient one for determining dispersion curves using short arrays. It consists in building a time-offset gather from an array of geophones with a wide offset range by gathering seismograms acquired with different source-to-receiver offsets. When assembling the different data, a phase correction is applied in order to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. At the Arnex site, 22 dispersion curves were determined with 10 m long geophone arrays. 
We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the vertical seismic profile interpretation is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a 3-layer structure in good agreement with the lithologies observed in the borehole. In it, a clay layer with a shear-wave velocity of 175 m/s overlies, at 9 m depth, a clayey-sandy till layer characterized down to 14 m by a 300 m/s S-wave velocity; these deposits have an S-wave velocity of 400 m/s between depths of 14 and 20 m. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments. In the upper part of the landslide, the sliding surface is located at a depth of about 20 m that coincides with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we defined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The obtained curves are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to determine 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into consideration for dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles acquired with a source built as part of this work complete and corroborate the velocity model revealed by surface-wave analysis. 
In particular, a reflector at a depth of 5 to 10 m, associated with a 180 m/s stacking velocity, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
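The cross power spectral density measurement suggested above for evaluating the phase correction can be sketched on synthetic data. All signal parameters below are illustrative assumptions, not values from the study; the point is only that the cross-spectrum of two common-offset traces encodes their relative static time shift:

```python
import numpy as np

fs = 500.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)

# Band-limited wavelet standing in for a common-offset trace of shot 1
trace_shot1 = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.6) / 0.15) ** 2)

true_shift = 10                 # static delay of shot 2, in samples
trace_shot2 = np.roll(trace_shot1, true_shift)

# Cross power spectral density of the common-offset trace pair
X = np.fft.fft(trace_shot1)
Y = np.fft.fft(trace_shot2)
cross_spectrum = X * np.conj(Y)

# Its inverse FFT is the cross-correlation; the peak gives the lag
xcorr = np.fft.ifft(cross_spectrum).real
lag = int(np.argmax(xcorr))
if lag > len(t) // 2:
    lag -= len(t)               # unwrap circular lag to a signed value
delay_samples = -lag            # estimated static correction, in samples
```

In field data the cross-spectrum would be smoothed across frequencies and the phase slope inspected per frequency band rather than reduced to a single integer lag.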
Abstract:
Diagnosis Related Groups (DRG) are frequently used to standardize the comparison of consumption variables, such as length of stay (LOS). In order to be reliable, this comparison must control for the presence of outliers, i.e. values far removed from the pattern set by the majority of the data. Indeed, outliers can distort the usual statistical summaries, such as means and variances. A common practice is to trim LOS values according to various empirical rules, but there is little theoretical support for choosing between alternative procedures. This pilot study explores the possibility of describing LOS distributions with parametric models which provide the necessary framework for the use of robust methods.
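The distortion that outliers induce in the usual summaries can be seen on a toy LOS sample. The values and the trim rule below are illustrative assumptions, not data from the study; they show why a robust summary or a principled trimming rule matters:

```python
import statistics

# Hypothetical LOS (length of stay, days) for one DRG: two long-stay
# outliers inflate the mean but barely move the median
los = [2, 3, 3, 4, 4, 5, 5, 6, 7, 58, 74]

mean_los = statistics.mean(los)      # pulled up by the outliers
median_los = statistics.median(los)  # robust central summary

# One common empirical rule: trim values above Q3 + 1.5 * IQR
q1, _, q3 = statistics.quantiles(los, n=4)
upper = q3 + 1.5 * (q3 - q1)
trimmed = [v for v in los if v <= upper]
trimmed_mean = statistics.mean(trimmed)
```

A parametric model of the LOS distribution, as the study proposes, replaces such ad hoc cutoffs with a theoretically grounded choice.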
Abstract:
General Introduction This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Céline Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. 
First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at that level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs. The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. 
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of good value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be a potential model for future EU-centered PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs, used before 1997, and the "single list" RoOs, used since 1997. Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU RoOs. 
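The trade-weighted averaging of step 3 is simple to state concretely. The tariff lines, MFC levels and trade weights below are hypothetical numbers for illustration only, not estimates from the chapter:

```python
# Hypothetical tariff lines: simulated MFC (max foreign content,
# % of good value) and an import value used as the trade weight
lines = [
    {"mfc": 30.0, "trade": 120.0},   # e.g. chemicals (illustrative)
    {"mfc": 15.0, "trade": 300.0},   # e.g. textiles & apparel (illustrative)
    {"mfc": 45.0, "trade": 80.0},    # e.g. wood products (illustrative)
]

total_trade = sum(l["trade"] for l in lines)
weighted_mfc = sum(l["mfc"] * l["trade"] for l in lines) / total_trade

# A uniform MFC at the weighted average relaxes lines whose simulated
# MFC lies below it and stiffens lines whose simulated MFC lies above it
relaxed = [l for l in lines if l["mfc"] < weighted_mfc]
```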
The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and negative binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. 
Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
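The "downward shift in the survival function" can be illustrated with a toy Kaplan-Meier estimator. The duration lists below are hypothetical, not the chapter's data, and censoring is ignored for brevity:

```python
def km_survival(durations):
    """Kaplan-Meier product-limit S(t) for fully observed event times."""
    times = sorted(set(durations))
    n_at_risk = len(durations)
    s, curve = 1.0, {}
    for t in times:
        d = durations.count(t)      # measures revoked at lifetime t
        s *= 1 - d / n_at_risk      # product-limit step
        curve[t] = s
        n_at_risk -= d
    return curve

# Hypothetical AD-measure lifetimes (years) before and after 1995
pre = [6, 7, 8, 9, 10, 12]
post = [4, 5, 5, 6, 6, 7]

s_pre = km_survival(pre)
s_post = km_survival(post)
# A post-agreement downward shift: fewer measures survive past 6 years
```

The chapter's actual analysis additionally handles censoring and uses a Cox model to control for imposing country, target country and sector.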
Abstract:
Identification of post-translational modifications of proteins in biological samples often requires access to preanalytical purification and concentration methods. In the purification step, high or low molecular weight substances can be removed by size-exclusion filters, and highly abundant proteins can be removed, or low-abundance proteins enriched, with specific capturing tools. This paper describes the experience and results obtained with a recently emerged and easy-to-use affinity purification kit for enrichment of the low amounts of EPO found in urine and plasma specimens. The kit can be used as a pre-step in the EPO doping control procedure, as an alternative to the commonly used ultrafiltration, for detecting aberrantly glycosylated isoforms. The commercially available affinity purification kit contains small disposable anti-EPO monolith columns (6 µL volume, Ø 7 mm, length 0.15 mm) together with all required buffers. A 24-channel vacuum manifold was used for simultaneous processing of samples. The column concentrated EPO from 20 mL urine down to a 55 µL eluate, a concentration factor of 240 times, while roughly 99.7% of non-relevant urine proteins were removed. The recoveries of Neorecormon (epoetin beta) and the EPO analogues Aranesp and Mircera applied to buffer were high: 76%, 67% and 57%, respectively. The recovery of endogenous EPO from human urine was 65%. High recoveries were also obtained when purifying human, mouse and equine EPO from serum, and human EPO from cerebrospinal fluid. Evaluation with the accredited EPO doping control method based on isoelectric focusing (IEF) showed that the affinity purification procedure did not change the isoform distribution for rhEPO, Aranesp, Mircera or endogenous EPO. The kit should be particularly useful for applications in which it is essential to avoid carry-over effects, a problem commonly encountered with conventional particle-based affinity columns. 
The encouraging results with EPO suggest that similar affinity monoliths, with the appropriate antibodies, should constitute useful tools for general applications in sample preparation, not only for doping control of EPO and other hormones such as growth hormone and insulin but also for the study of post-translational modifications of other low-abundance proteins in biological and clinical research, and for sample preparation prior to in vitro diagnostics.
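The reported figures are mutually consistent under simple arithmetic. This back-of-the-envelope sketch assumes (an assumption, not a statement from the paper) that the roughly 240-fold concentration factor refers to the volume reduction adjusted for recovery:

```python
# Volumes and recovery as reported: 20 mL urine into a 55 µL eluate,
# with about 65% recovery of endogenous EPO
v_in_ul = 20_000    # 20 mL expressed in µL
v_out_ul = 55
recovery = 0.65

volume_factor = v_in_ul / v_out_ul            # ideal factor at 100% recovery
effective_factor = volume_factor * recovery   # recovery-adjusted factor
```

The recovery-adjusted value comes out near 236, close to the stated factor of 240.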
Abstract:
Micas are commonly used in Ar-40/Ar-39 thermochronological studies of variably deformed rocks, yet the physical basis by which deformation may affect radiogenic argon retention in mica is poorly constrained. This study examines the relationship between deformation and deformation-induced microstructures on radiogenic argon retention in muscovite. A combination of furnace step-heating and high-spatial-resolution in situ UV-laser ablation Ar-40/Ar-39 analyses is reported for deformed muscovites sampled from a granitic pegmatite vein within the Siviez-Mischabel Nappe, western Swiss Alps (Penninic domain, Brianconnais unit). The pegmatite forms part of the Variscan (similar to 350 Ma) Alpine basement and exhibits a prominent Alpine S-C fabric including numerous mica `fish' that developed under greenschist facies metamorphic conditions during the dominant Tertiary Alpine tectonic phase of nappe emplacement. Furnace step-heating of milligram quantities of separated muscovite grains yields an Ar-40/Ar-39 age spectrum with two distinct staircase segments but without any statistical plateau, consistent with a previous study from the same area. A single (3 x 5 mm) muscovite porphyroclast (fish) was investigated by in situ UV-laser ablation. A histogram of 170 individual Ar-40/Ar-39 UV-laser ablation ages exhibits a range from 115 to 387 Ma with modes at approximately 340 and 260 Ma. A variogram statistical treatment of the Ar-40/Ar-39 results reveals ages correlated along two directions: a highly correlated direction at 310 degrees and a lesser correlation at 0 degrees relative to the sense of shearing. Using the highly correlated direction, a statistically generated (kriging method) age contour map of the Ar-40/Ar-39 data reveals a series of elongated contours subparallel to the C-surfaces, which were formed during Tertiary nappe emplacement. Similar data distributions and slightly younger apparent ages are recognized in a smaller mica fish. 
The observed intragrain age variations are interpreted to reflect the partial loss of radiogenic argon during Alpine (similar to 35 Ma) greenschist facies metamorphism. One-dimensional diffusion modelling results are consistent with the idea that the zones of youngest apparent age represent incipient shear band development within the mica porphyroclasts, thus providing a network of fast diffusion pathways. During Alpine greenschist facies metamorphism, the incipient shear bands enhanced the intragrain loss of radiogenic argon. The structurally controlled intragrain age variations observed in this investigation imply that deformation has a direct control on the effective length scale for argon diffusion, which is consistent with the heterogeneous nature of deformation. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
BACKGROUND: Higher nighttime blood pressure (BP) and the loss of nocturnal dipping of BP are associated with an increased risk for cardiovascular events. However, the determinants of the loss of nocturnal BP dipping are only beginning to be understood. We investigated whether different indicators of physical activity were associated with the loss of nocturnal dipping of BP. METHODS: We conducted a cross-sectional study of 103 patients referred for 24-hour ambulatory monitoring of BP. We measured these patients' step count (SC), active energy expenditure (AEE), and total energy expenditure simultaneously, using actigraphs. RESULTS: In our study population of 103 patients, most of whom were hypertensive, SC and AEE were associated with nighttime systolic BP in univariate (SC, r = -0.28, P < 0.01; AEE, r = -0.20, P = 0.046) and multivariate linear regression analyses (SC, coefficient beta = -5.37, P < 0.001; AEE, coefficient beta = -0.24, P < 0.01). Step count was associated with both systolic (r = 0.23, P = 0.018) and diastolic (r = 0.20, P = 0.045) BP dipping. Nighttime systolic BP decreased progressively across the categories of sedentary, moderately active, and active participants (125 mm Hg, 116 mm Hg, 112 mm Hg, respectively; P = 0.002). The degree of BP dipping increased progressively across the same three categories of activity (8.9%, 14.6%, and 18.6%, respectively, P = 0.002, for systolic BP, and 12.8%, 18.1%, and 22.2%, respectively, P = 0.006, for diastolic BP). CONCLUSIONS: Step count is continuously associated with nighttime systolic BP and with the degree of BP dipping, independently of 24-hour mean BP. The combined use of an actigraph for measuring indicators of physical activity and a device for 24-hour measurement of ambulatory BP may help identify patients at increased risk for cardiovascular events in whom increased physical activity toward higher target levels may be recommended.
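The dipping percentages above follow the conventional definition, the day-to-night BP fall expressed as a fraction of daytime BP. A minimal sketch, with hypothetical daytime and nighttime readings (not patient data from the study):

```python
def dipping_percent(day_bp, night_bp):
    """Nocturnal dip as a percentage of daytime BP.

    By the usual convention, a dip below 10% is classed as 'non-dipping'.
    """
    return 100.0 * (day_bp - night_bp) / day_bp

# Hypothetical systolic readings for a sedentary vs an active patient
sedentary = dipping_percent(day_bp=138, night_bp=125)   # below 10%
active = dipping_percent(day_bp=138, night_bp=112)      # well above 10%
```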
Abstract:
This paper resolves three empirical puzzles in outsourcing by formalizing the adaptation cost of long-term performance contracts. Side-trading with a new partner alongside a long-term contract (to exploit an adaptation-requiring investment) is usually less effective than switching to the new partner when the contract expires. So long-term contracts that prevent holdup of specific investments may induce holdup of adaptation investments. Contract length therefore trades off specific and adaptation investments. Length should increase with the importance and specificity of self-investments, and decrease with the importance of adaptation investments for which side-trading is ineffective. My general model also shows how optimal length falls with cross-investments and wasteful investments.
Abstract:
When dealing with the design of service networks, such as health and EMS services, banking or distributed ticket-selling services, the location of service centers has a strong influence on the congestion at each of them, and consequently, on the quality of service. In this paper, several models are presented to consider service congestion. The first model addresses the issue of the location of the least number of single-server centers such that all the population is served within a standard distance, and nobody stands in line for a time longer than a given time limit, or with more than a predetermined number of other clients. We then formulate several maximal coverage models, with one or more servers per service center. A new heuristic is developed to solve the models and tested on a 30-node network.
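The covering core of the first model (fewest centers so that every demand node lies within the standard distance) is a set-covering problem, for which a greedy sketch is easy to give. The sites, demand nodes and coverage sets below are hypothetical, and the queue-length and waiting-time constraints of the full model are deliberately omitted:

```python
def greedy_cover(coverage, demands):
    """Greedy set cover: coverage maps site -> demand nodes within
    the standard distance; returns a list of chosen sites."""
    uncovered, chosen = set(demands), []
    while uncovered:
        # Pick the site covering the most still-uncovered nodes
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some demand nodes cannot be covered")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Hypothetical candidate sites and the demand nodes each can serve
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
}
sites = greedy_cover(coverage, demands=[1, 2, 3, 4, 5, 6])
```

The paper's heuristic additionally enforces the congestion constraints, which greedy covering alone does not.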
Abstract:
A fast and reliable assay for the identification of dermatophyte fungi and nondermatophyte fungi (NDF) in onychomycosis is essential, since NDF are especially difficult to cure using standard treatment. Diagnosis is usually based on both direct microscopic examination of nail scrapings and macroscopic and microscopic identification of the infectious fungus in culture assays. In the last decade, PCR assays have been developed for the direct detection of fungi in nail samples. In this study, we describe a PCR-terminal restriction fragment length polymorphism (TRFLP) assay to directly and routinely identify the infecting fungi in nails. Fungal DNA was easily extracted using a commercial kit after dissolving nail fragments in an Na(2)S solution. Trichophyton spp., as well as 12 NDF, could be unambiguously identified by the specific restriction fragment size of 5'-end-labeled amplified 28S DNA. This assay enables the distinction of different fungal infectious agents and their identification in mixed infections. Infectious agents could be identified in 74% (162/219) of cases in which the culture results were negative. The PCR-TRFLP assay described here is simple and reliable. Furthermore, it can be automated and thus routinely applied to the rapid diagnosis of a large number of clinical specimens in dermatology laboratories.
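The identification logic of such an assay, mapping each measured terminal fragment size to a species reference table, reduces to a lookup. The fragment-size windows below are invented for illustration (the real assay uses empirically calibrated sizes), and only the species names are real:

```python
# Hypothetical reference windows of terminal fragment sizes, in bp
REFERENCE_TRF = {
    "Trichophyton spp.": (140, 146),
    "Fusarium oxysporum": (210, 216),
    "Scopulariopsis brevicaulis": (325, 331),
}

def identify(fragment_sizes, tolerance=0):
    """Map each measured 5'-labeled fragment size to a species, if any."""
    hits = set()
    for size in fragment_sizes:
        for species, (lo, hi) in REFERENCE_TRF.items():
            if lo - tolerance <= size <= hi + tolerance:
                hits.add(species)
    return hits

# A mixed infection shows two distinct terminal fragments in one sample
mixed = identify([143, 214])
```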
Abstract:
Laboratory studies were conducted to compare rostrum length, morphology of mandible serration, and area of the food and salivary canals of Dichelops melacanthus (Dallas) (Dm), Euschistus heros (F.) (Eh), Nezara viridula (L.) (Nv), and Piezodorus guildinii (Westwood) (Pg) (Heteroptera: Pentatomidae). Nv showed the longest (5.9 mm) and Pg the shortest (3.5 mm) rostrum; Dm and Eh were intermediate. The length and width of the mandible tip area bearing serration were greater for Nv (106.0 and 30.2 µm, respectively) and smaller for Pg (71.1 and 23.7 µm), with all species having four central teeth and three pairs of lateral teeth. The inner mandible surface showed a squamous texture. Cross-sections of the food and salivary canals (Fc and Sc) indicated a greater area for Nv and Dm compared to Eh and Pg; however, the ratio Fc/Sc yielded the highest relative area for Pg.
Abstract:
Specialized glucosensing neurons are present in the hypothalamus, some of which neighbor the median eminence, where the blood-brain barrier has been reported to be leaky. A leaky blood-brain barrier implies high tissue glucose levels and obviates a role for endothelial glucose transporters in the control of hypothalamic glucose concentration, which is important in understanding the mechanisms of glucose sensing. We therefore addressed the question of blood-brain barrier integrity at the hypothalamus for glucose transport by examining the brain tissue-to-plasma glucose ratio in the hypothalamus relative to other brain regions. We also examined glycogenolysis in the hypothalamus because its occurrence is unlikely in the potential absence of a hypothalamus-blood interface. Across all regions the concentration of glucose was comparable at a given plasma glucose concentration and was a near-linear function of plasma glucose. At steady state, hypothalamic glucose concentration was similar to the extracellular hypothalamic glucose concentration reported by others. Hypothalamic glycogen fell at a rate of approximately 1.5 micromol/g/h and remained present in substantial amounts. We conclude, for the hypothalamus, a putative primary site of brain glucose sensing, that the rate-limiting step for glucose transport into brain cells is at the blood-hypothalamus interface, and that glycogenolysis is consistent with a substantial blood-to-intracellular glucose concentration gradient.
Abstract:
OBJECTIVE: To provide an update to the original Surviving Sepsis Campaign clinical management guidelines, "Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock," published in 2004. DESIGN: Modified Delphi method with a consensus conference of 55 international experts, several subsequent meetings of subgroups and key individuals, teleconferences, and electronic-based discussion among subgroups and among the entire committee. This process was conducted independently of any industry funding. METHODS: We used the GRADE system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations. A strong recommendation indicates that an intervention's desirable effects clearly outweigh its undesirable effects (risk, burden, cost), or clearly do not. Weak recommendations indicate that the tradeoff between desirable and undesirable effects is less clear. The grade of strong or weak is considered of greater clinical importance than a difference in letter level of quality of evidence. In areas without complete agreement, a formal process of resolution was developed and applied. Recommendations are grouped into those directly targeting severe sepsis, recommendations targeting general care of the critically ill patient that are considered high priority in severe sepsis, and pediatric considerations. 
RESULTS: Key recommendations, listed by category, include: early goal-directed resuscitation of the septic patient during the first 6 hrs after recognition (1C); blood cultures prior to antibiotic therapy (1C); imaging studies performed promptly to confirm potential source of infection (1C); administration of broad-spectrum antibiotic therapy within 1 hr of diagnosis of septic shock (1B) and severe sepsis without septic shock (1D); reassessment of antibiotic therapy with microbiology and clinical data to narrow coverage, when appropriate (1C); a usual 7-10 days of antibiotic therapy guided by clinical response (1D); source control with attention to the balance of risks and benefits of the chosen method (1C); administration of either crystalloid or colloid fluid resuscitation (1B); fluid challenge to restore mean circulating filling pressure (1C); reduction in rate of fluid administration with rising filling pressures and no improvement in tissue perfusion (1D); vasopressor preference for norepinephrine or dopamine to maintain an initial target of mean arterial pressure > or = 65 mm Hg (1C); dobutamine inotropic therapy when cardiac output remains low despite fluid resuscitation and combined inotropic/vasopressor therapy (1C); stress-dose steroid therapy given only in septic shock after blood pressure is identified to be poorly responsive to fluid and vasopressor therapy (2C); recombinant activated protein C in patients with severe sepsis and clinical assessment of high risk for death (2B except 2C for post-operative patients). 
In the absence of tissue hypoperfusion, coronary artery disease, or acute hemorrhage, target a hemoglobin of 7-9 g/dL (1B); a low tidal volume (1B) and limitation of inspiratory plateau pressure strategy (1C) for acute lung injury (ALI)/acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure in acute lung injury (1C); head of bed elevation in mechanically ventilated patients unless contraindicated (1B); avoiding routine use of pulmonary artery catheters in ALI/ARDS (1A); to decrease days of mechanical ventilation and ICU length of stay, a conservative fluid strategy for patients with established ALI/ARDS who are not in shock (1C); protocols for weaning and sedation/analgesia (1B); using either intermittent bolus sedation or continuous infusion sedation with daily interruptions or lightening (1B); avoidance of neuromuscular blockers, if at all possible (1B); institution of glycemic control (1B) targeting a blood glucose < 150 mg/dL after initial stabilization (2C); equivalency of continuous veno-venous hemofiltration or intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1A); use of stress ulcer prophylaxis to prevent upper GI bleeding using H2 blockers (1A) or proton pump inhibitors (1B); and consideration of limitation of support where appropriate (1D). Recommendations specific to pediatric severe sepsis include: greater use of physical examination therapeutic end points (2C); dopamine as the first drug of choice for hypotension (2C); steroids only in children with suspected or proven adrenal insufficiency (2C); and a recommendation against the use of recombinant activated protein C in children (1B). CONCLUSION: There was strong agreement among a large cohort of international experts regarding many level 1 recommendations for the best current care of patients with severe sepsis. 
Evidence-based recommendations regarding the acute management of sepsis and septic shock are the first step toward improved outcomes for this important group of critically ill patients.
Abstract:
Genetic structure of populations of Pissodes castaneus (De Geer) (Coleoptera, Curculionidae) using amplified fragment length polymorphism. The objective of this study was to determine the genetic structure of populations of Pissodes castaneus from different areas and on different species of Pinus using the PCR-AFLP technique. Twenty samples were analyzed, representing 19 populations from Brazil and one from Florence, Italy, the region of origin of P. castaneus. The four primer combinations generated a total of 367 DNA fragments and 100% polymorphic loci, indicating a high degree of molecular polymorphism. The dendrogram did not reveal any trend toward grouping the populations by origin. The low genetic similarity (0.11 between the most distant groups) and genetic distances of 0.13 to 0.44 for 10 out of the 20 samples may indicate several founding events or multiple introductions of heterogeneous strains into Brazil. The allelic fixation index (Fst) was 0.3851, considered high, and the number of migrants (Nm) was 0.3991, indicating low gene flow among populations. The highest genetic distances were between the population from Irani, SC and those from Cambará do Sul, RS and Bituruna, PR, indicating an independent founding event or a particular allelic fixation in the former location. The high genetic diversity among populations indicates that they are genetically heterogeneous, with a diverse gene pool in the surveyed areas, which may cause them to respond differently to control measures.
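The reported number of migrants is consistent with Wright's island-model approximation applied to the reported fixation index; a minimal sketch of that check (the function name `migrants_from_fst` is illustrative, and the island model is an assumption not stated in the abstract):

```python
def migrants_from_fst(fst: float) -> float:
    """Estimate the effective number of migrants per generation (Nm)
    from the allelic fixation index (Fst) under Wright's island model:
    Nm ~ (1 - Fst) / (4 * Fst)."""
    return (1.0 - fst) / (4.0 * fst)

# Fst reported for the P. castaneus populations
nm = migrants_from_fst(0.3851)
print(round(nm, 4))  # close to the reported Nm of 0.3991
```

Nm < 1 migrant per generation is conventionally read as gene flow too low to counteract drift-driven differentiation, matching the abstract's interpretation.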