Abstract:
The effect of copper (Cu) filtration on image quality and dose in different digital X-ray systems was investigated. Two computed radiography systems and one digital radiography detector were used. Three different polymethylmethacrylate blocks simulated the pediatric body. The effects of Cu filters of 0.1, 0.2, and 0.3 mm thickness on the entrance surface dose (ESD) and the corresponding effective doses (EDs) were measured at tube voltages of 60, 66, and 73 kV. Image quality was evaluated in a contrast-detail phantom with automated analyzer software. Cu filters of 0.1, 0.2, and 0.3 mm thickness decreased the ESD by 25-32%, 32-39%, and 40-44%, respectively, with the ranges depending on the tube voltage. There was no consistent decline in image quality with increasing Cu filtration. The estimated ED of anterior-posterior (AP) chest projections was reduced by up to 23%. No relevant reduction in the ED was noted in AP radiographs of the abdomen and pelvis or in posterior-anterior radiographs of the chest. Cu filtration reduces the ESD but generally does not reduce the effective dose. Cu filters can help protect radiosensitive superficial organs, such as the mammary glands in AP chest projections.
Abstract:
Aim: We test for the congruence between allele-based range boundaries (break zones) in silicicolous alpine plants and species-based break zones in the silicicolous flora of the European Alps. We also ask whether such break zones coincide with areas of large elevational variation. Location: The European Alps. Methods: On a regular grid laid across the entire Alps, we determined areas of allele- and species-based break zones using respective clustering algorithms, identifying discontinuities in cluster distributions (breaks), and quantifying integrated break densities (break zones). Discontinuities were identified based on the intra-specific genetic variation of 12 species and on the floristic distribution data from 239 species, respectively. Coincidence between the two types of break zones was tested using Spearman's correlation. Break-zone densities were also regressed on topographical complexity to test for the effect of elevational variation. Results: We found that two main break zones in the distribution of alleles and species were significantly correlated. Furthermore, we show that these break zones are in topographically complex regions, characterized by massive elevational ranges owing to high mountains and deep glacial valleys. We detected a third break zone in the distribution of species in the eastern Alps, which is not correlated with topographic complexity, and which is also not evident from allelic distribution patterns. Species with the potential for long-distance dispersal tended to show larger distribution ranges than short-distance dispersers. Main conclusions: We suggest that the history of Pleistocene glaciations is the main driver of the congruence between allele-based and species-based distribution patterns, because occurrences of both species and alleles were subject to the same processes (such as extinction, migration and drift) that shaped the distributions of species and genetic lineages. Large elevational ranges have had a profound effect as a dispersal barrier for alleles during post-glacial immigration. Because plant species, unlike alleles, cannot spread via pollen but only via seed, and thus disperse less effectively, we conclude that species break zones are maintained over longer time spans and reflect more ancient patterns than allele break zones. Conny Thiel-Egenter and Nadir Alvarez contributed equally to this paper and are considered joint first authors.
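As an aside for readers who want to reproduce this kind of analysis, the sketch below shows, in Python, how grid-cell break-zone densities could be compared with Spearman's correlation and regressed on topographic complexity. It is a minimal illustration with made-up arrays, not the authors' code; all variable names and data are hypothetical.

```python
# Illustrative sketch (not the authors' code): correlating allele-based and
# species-based break-zone densities across grid cells, as described above.
import numpy as np
from scipy.stats import spearmanr, linregress

rng = np.random.default_rng(0)

# Hypothetical per-grid-cell values standing in for the real data.
allele_break_density = rng.random(200)            # allele-based break density
species_break_density = 0.6 * allele_break_density + 0.4 * rng.random(200)
topographic_complexity = rng.random(200)          # e.g. elevational range per cell

# Congruence between the two types of break zones (rank correlation).
rho, p_value = spearmanr(allele_break_density, species_break_density)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")

# Break-zone density regressed on topographic complexity.
fit = linregress(topographic_complexity, species_break_density)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.2f}")
```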
Abstract:
In the International Olympic Committee (IOC) accredited laboratories, specific methods have been developed to detect anabolic steroids in athletes' urine. The technique of choice for this purpose is gas chromatography coupled with mass spectrometry (GC-MS). To improve the efficiency of anti-doping programmes, the laboratories have defined new analytical strategies. The final sensitivity of the analytical procedure can be improved by adopting new detection technologies, such as tandem mass spectrometry (MS-MS) or high-resolution mass spectrometry (HRMS). Better sample preparation using immuno-affinity chromatography (IAC) is also a good tool for improving sensitivity. These techniques are suitable for the detection of synthetic anabolic steroids whose structure is not found naturally in the human body. The increasingly evident large-scale use of substances chemically similar to endogenous steroids obliges both the laboratories and the sports authorities to compare the athlete's steroid profile with reference ranges from a population or with intra-individual reference values.
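To make the final point concrete, here is a toy Python sketch of how a measured steroid-profile parameter might be checked against a population reference range and against an athlete's own longitudinal values. The flagging rule, the k = 3 threshold and the example numbers are illustrative assumptions, not the procedure used by accredited laboratories.

```python
# Toy illustration (not an accredited-laboratory procedure): flag a steroid-profile
# parameter that falls outside a population reference range or outside the
# athlete's own longitudinal values (mean +/- k standard deviations).
from statistics import mean, stdev

def flag_value(new_value, population_range, previous_values, k=3.0):
    low, high = population_range
    outside_population = not (low <= new_value <= high)
    outside_individual = False
    if len(previous_values) >= 3:                  # need some longitudinal history
        mu, sd = mean(previous_values), stdev(previous_values)
        outside_individual = abs(new_value - mu) > k * sd
    return outside_population, outside_individual

# Hypothetical longitudinal ratios for one athlete and a made-up population range.
history = [1.1, 1.3, 1.2, 1.0]
print(flag_value(5.8, population_range=(0.1, 4.0), previous_values=history))
# -> (True, True): suspicious against both references, would prompt follow-up testing.
```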
Abstract:
Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is defined as exposed necrotic bone in the jaws, persisting for more than 8 weeks, in patients treated with systemic intravenous (IV) or oral bisphosphonates (BPs) who have never been irradiated in the head and neck area. More than 90% of cases of osteonecrosis of the jaw have occurred in patients with cancer who received IV-BPs. The estimated cumulative incidence of BRONJ in cancer patients on IV-BPs ranges from 0.8% to 18.6%. The pathogenesis of BRONJ appears related to the potent osteoclast-inhibiting properties of BPs, which act by blocking osteoclast recruitment, decreasing osteoclast activity and promoting osteoclast apoptosis. Dental extractions are the most potent local risk factor. Cancer patients wearing a denture could also be at increased risk of BRONJ. Non-healing mucosal breaches caused by dentures could be a portal for the oral flora to access bone, while the oral mucosa of patients on IV-BPs could also be defective. Whether periodontal disease is a risk factor for BRONJ remains controversial. Preventive measures are fundamental; nevertheless, some teams have questioned their cost-effectiveness. The perceived limitations of surgical therapy of BRONJ have led to the restriction of aggressive surgery to symptomatic patients with stage 3 BRONJ. The evidence-based literature on BRONJ is growing, but there are still many controversial aspects.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
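The gradual-deformation idea mentioned above can be illustrated with a short sketch: two independent realizations of a multi-Gaussian prior are combined with weights cos(theta) and sin(theta), which preserves the prior mean and covariance while theta tunes the perturbation strength. This is a generic sketch of the technique under a standard-normal prior assumption, not the thesis implementation.

```python
# Minimal sketch of a gradual-deformation-type MCMC proposal (not the thesis code):
# combine the current Gaussian realization with an independent prior draw so that
# the prior statistics are preserved, with theta controlling perturbation strength.
import numpy as np

def gradual_deformation_proposal(m_current, draw_prior, theta):
    """Propose m' = m*cos(theta) + z*sin(theta), with z an independent prior draw.

    For zero-mean multi-Gaussian priors this combination keeps the prior mean and
    covariance (cos^2 + sin^2 = 1); small theta gives a mild perturbation, while
    theta = pi/2 replaces the model entirely.
    """
    z = draw_prior()
    return m_current * np.cos(theta) + z * np.sin(theta)

# Usage with a hypothetical standard-normal prior on 10,000 model parameters.
rng = np.random.default_rng(42)
draw_prior = lambda: rng.standard_normal(10_000)
m = draw_prior()
m_new = gradual_deformation_proposal(m, draw_prior, theta=0.1)
```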
Abstract:
This work offers a comparative history of literature and sociology in France between 1820 and 1860. During that period, the notion of type appeared in literary and sociological descriptions of social reality and became more and more central to the apprehension of the differentiations among classes, communities or groups. Based on the analysis of Honoré Balzac's La Comédie humaine and Frédéric Le Play's Les Ouvriers européens, this study shows that these two series of novels and of workers' monographs put typification at the center of their descriptive ambition. More broadly, it proposes a history of the uses of a typological imagination and of the ontologies, above all social, that underlay them. That is why the texts taken into account in this study also range from natural history, medical sciences, history, chemistry, geology and metallurgy to the sentimental novel, the historical novel and panoramic literature, as well as social inquiries and statistics.
Abstract:
BACKGROUND: Early virological failure of antiretroviral therapy associated with the selection of drug-resistant human immunodeficiency virus type 1 in treatment-naive patients is critical, because virological failure significantly increases the risk of subsequent failures. Therefore, we evaluated the possible role of minority quasispecies of drug-resistant human immunodeficiency virus type 1, which are undetectable at baseline by population sequencing, in early virological failure. METHODS: We studied 4 patients who experienced early virological failure of a first-line regimen of lamivudine, tenofovir, and either efavirenz or nevirapine, and 18 control patients undergoing similar treatment without virological failure. The key reverse-transcriptase mutations K65R, K103N, Y181C, M184V, and M184I were quantified by allele-specific real-time polymerase chain reaction performed on plasma samples obtained before and during early virological treatment failure. RESULTS: Before treatment, none of the viruses showed any evidence of drug resistance in the standard genotype analysis. Minority quasispecies with either the M184V mutation or the M184I mutation were detected in 3 of 18 control patients. In contrast, all 4 patients whose treatment was failing had harbored drug-resistant viruses at low frequencies before treatment, with a frequency range of 0.07%-2.0%. A range of 1-4 mutations was detected in viruses from each patient. Most of the minority quasispecies were rapidly selected and represented the major virus population within weeks after the patients started antiretroviral therapy. All 4 patients showed good adherence to treatment. Nonnucleoside reverse-transcriptase inhibitor plasma concentrations were within normal ranges for all 4 patients at 2 separate assessment times. CONCLUSIONS: Minority quasispecies of drug-resistant viruses, detected at baseline, can rapidly outgrow and become the major virus population and subsequently lead to early therapy failure in treatment-naive patients who receive antiretroviral therapy regimens with a low genetic resistance barrier.
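For illustration only, the sketch below shows one generic way a minority-variant frequency could be derived from allele-specific real-time PCR data, assuming linear standard curves (Ct versus log10 copy number) for the mutant-specific and total-virus reactions. The slopes, intercepts and Ct values are invented; this is not the assay described in the study.

```python
# Illustrative sketch (not the study's assay): estimating a minority-variant
# frequency from allele-specific real-time PCR, assuming linear standard curves
# Ct = slope * log10(copies) + intercept for each reaction. Numbers are hypothetical.
def copies_from_ct(ct, slope, intercept):
    """Invert a linear standard curve to recover a copy number from a Ct value."""
    return 10 ** ((ct - intercept) / slope)

mutant_copies = copies_from_ct(ct=33.5, slope=-3.4, intercept=38.0)   # mutant-specific reaction
total_copies = copies_from_ct(ct=22.8, slope=-3.3, intercept=37.5)    # total-virus reaction

frequency = mutant_copies / total_copies
print(f"estimated mutant frequency: {100 * frequency:.2f}%")   # ~0.07% with these numbers
```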
Abstract:
Background: The prevalence of small intestinal bacterial overgrowth (SIBO) in patients with irritable bowel syndrome (IBS) ranges from 43% to 78% as determined by the lactulose hydrogen breath test (LHBT). Although rifaximin, a non-absorbable antibiotic, has been able to decrease global IBS symptoms as well as bloating in placebo-controlled randomized trials, these results were not reproduced in phase IV studies in daily clinical practice. Aim: To assess the prevalence of SIBO in an IBS cohort and to evaluate the treatment response in the IBS patients affected by SIBO. Methods: Enrolled patients were diagnosed with IBS using the following criteria: fulfillment of the Rome III criteria, absence of alarm symptoms (anemia, weight loss, nocturnal symptoms, etc.), normal fecal calprotectin, and normal endoscopic workup including histology. Celiac disease was excluded by serology and/or duodenal biopsy. All patients underwent LHBT for SIBO diagnosis. Patients with SIBO were treated with rifaximin tablets (400 mg twice daily for 14 days). Both before and at week 6 after rifaximin treatment, patients completed a questionnaire in which the following items were assessed individually using 11-point Likert scales: bloating, flatulence, abdominal pain, diarrhea, and overall well-being. Results: One hundred and fifty IBS patients were enrolled (76% female, mean age 44 ± 16 years), of whom 106 (71%) were diagnosed with SIBO and consequently treated with rifaximin. Rifaximin treatment significantly reduced the following symptoms as assessed by the symptom questionnaire: bloating (5.5 ± 2.6 before vs. 3.6 ± 2.7 after treatment, p < 0.001), flatulence (5 ± 2.7 vs. 4 ± 2.7, p = 0.015), diarrhea (2.9 ± 2.4 vs. 2 ± 2.4, p = 0.005), and abdominal pain (4.8 ± 2.7 vs. 3.3 ± 2.5, p < 0.001), and resulted in improved overall well-being (3.9 ± 2.4 vs. 2.7 ± 2.3, p < 0.001). Thirteen of the 106 treated patients (12%) were lost to follow-up. The LHBT was repeated 2-4 weeks after rifaximin treatment in 65/93 (70%) patients. Eradication of SIBO was documented in 85% of these patients (55/65), whereas 15% (10/65) still tested positive for SIBO by LHBT. Conclusions: The results of our phase IV trial indicate that a high proportion of IBS patients tested positive for SIBO. IBS symptoms (bloating, flatulence, diarrhea, pain, overall well-being) were significantly diminished following a 2-week treatment with rifaximin. These results support previous findings of randomized controlled trials that the presence of SIBO is associated with symptom generation in IBS patients and that reduction and/or elimination of SIBO may help to alleviate IBS-associated symptoms.
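As a generic illustration of the before/after comparison reported above, the sketch below applies a Wilcoxon signed-rank test to paired 11-point Likert scores. The abstract does not state which statistical test was used, so this is only one plausible choice, and the scores are simulated.

```python
# Illustrative sketch (not the study's analysis code): paired before/after
# comparison of 11-point Likert symptom scores; a Wilcoxon signed-rank test is
# one plausible choice for such ordinal paired data. Scores below are made up.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
before = rng.integers(0, 11, size=93)                           # e.g. bloating before treatment
after = np.clip(before - rng.integers(0, 4, size=93), 0, 10)    # bloating at week 6

stat, p = wilcoxon(before, after)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3g}")
print(f"mean before = {before.mean():.1f}, mean after = {after.mean():.1f}")
```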
Abstract:
Digital holographic microscopy (DHM) is a technique that allows quantitative phase images of living cells to be obtained from a single recorded hologram with interferometric accuracy. Specifically, the optical phase shift induced by the specimen on the transmitted wave front can be regarded as a powerful endogenous contrast agent, depending on both the thickness and the refractive index of the sample. Thanks to a decoupling procedure, cell thickness and intracellular refractive index can be measured separately. Consequently, mean corpuscular volume (MCV) and mean corpuscular hemoglobin concentration (MCHC), two highly relevant clinical parameters, have been measured non-invasively at the single-cell level. The nanometric axial and microsecond temporal sensitivities of DHM have made it possible to measure red blood cell membrane fluctuations (CMF) over the whole cell surface.
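The thickness/refractive-index decoupling mentioned above can be illustrated with the standard transmission-DHM phase relation phi = (2*pi/lambda)*(n_cell - n_medium)*d: recording the phase in two media of different refractive index gives two equations that can be solved for n_cell and d. The sketch below uses invented numbers and illustrates the relation only; it is not the authors' procedure.

```python
# Illustrative sketch of the phase-to-thickness/refractive-index relation used in
# transmission DHM: phi = (2*pi/wavelength) * (n_cell - n_medium) * d.
# Recording the phase in two perfusion media of different refractive index yields
# two equations in the two unknowns n_cell and d. Numbers are hypothetical.
import numpy as np

wavelength = 682e-9            # laser wavelength [m]
n_m1, n_m2 = 1.334, 1.364      # refractive indices of the two perfusion media
phi1, phi2 = 1.22, 0.66        # measured phase shifts [rad] at one pixel

# Solve phi_i = (2*pi/wavelength) * (n_c - n_mi) * d  for n_c and d.
n_c = (phi1 * n_m2 - phi2 * n_m1) / (phi1 - phi2)
d = phi1 * wavelength / (2 * np.pi * (n_c - n_m1))

print(f"intracellular refractive index ~ {n_c:.3f}")   # ~1.40 here
print(f"cell thickness ~ {d * 1e6:.2f} um")             # ~2.0 um here
```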
Abstract:
Current levels of endangerment and historical trends of species and habitats are the main criteria used to direct conservation efforts globally. Estimates of future declines, which might indicate different priorities than past declines, have been limited by the lack of appropriate data and models. Given that much of conservation is about anticipating and responding to future threats, our inability to look forward at a global scale has been a major constraint on effective action. Here, we assess the geography and extent of projected future changes in suitable habitat for terrestrial mammals within their present ranges. We used a global earth-system model, IMAGE, coupled with fine-scale habitat suitability models and parametrized according to four global scenarios of human development. We identified the countries projected to be most affected by 2050 under each scenario, assuming that no additional conservation actions other than those described in the scenarios take place. We found that, with some exceptions, most of the countries with the largest predicted losses of suitable habitat for mammals are in Africa and the Americas. African and North American countries were also predicted to host the most species with large proportional global declines. Most of the countries we identified as future hotspots of terrestrial mammal loss have little or no overlap with the present global conservation priorities, thus confirming the need for forward-looking analyses in conservation priority setting. The expected growth in human populations and consumption in hotspots of future mammal loss means that local conservation actions such as protected areas might not be sufficient to mitigate losses. Other policies, directed towards the root causes of biodiversity loss, are required, both in Africa and in other parts of the world.
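Purely as an illustration of the kind of country-level summary described above, the sketch below computes the mean proportional loss of suitable habitat per country from hypothetical present and projected areas and ranks the countries. The data structure, numbers and aggregation rule are assumptions, not the study's actual metric.

```python
# Toy sketch (not the IMAGE model or the study's metric): per-country proportional
# loss of suitable habitat within species' present ranges, used to rank countries.
present = {  # suitable habitat area today [km^2], keyed by (country, species); invented
    ("CountryA", "sp1"): 1000, ("CountryA", "sp2"): 400,
    ("CountryB", "sp1"): 800,  ("CountryB", "sp2"): 900,
}
future = {   # projected suitable habitat in 2050 under one scenario; invented
    ("CountryA", "sp1"): 600,  ("CountryA", "sp2"): 380,
    ("CountryB", "sp1"): 760,  ("CountryB", "sp2"): 850,
}

loss_by_country = {}
for (country, species), area_now in present.items():
    loss = (area_now - future[(country, species)]) / area_now
    loss_by_country.setdefault(country, []).append(loss)

ranking = sorted(
    ((country, sum(v) / len(v)) for country, v in loss_by_country.items()),
    key=lambda item: item[1], reverse=True,
)
print(ranking)   # CountryA ranks first in this made-up example
```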
Abstract:
The treatment of some cancer patients has shifted from traditional, non-specific cytotoxic chemotherapy to chronic treatment with molecular targeted therapies. Imatinib mesylate, a selective tyrosine kinase inhibitor (TKI), is the most prominent example of this new era and has opened the way to the development of several additional TKIs, including sunitinib, nilotinib, dasatinib, sorafenib and lapatinib, for the treatment of various hematological malignancies and solid tumors. All these agents are characterized by important inter-individual pharmacokinetic variability, are at risk of drug interactions, and are not devoid of toxicity. Additionally, they are administered for prolonged periods, so careful monitoring of their plasma exposure via therapeutic drug monitoring (TDM) is anticipated to be an important component of patients' follow-up. We have developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method requiring 100 microL of plasma for the simultaneous determination of the six major TKIs currently in use. Plasma is purified by protein precipitation and the supernatant is diluted 1:2 in 20 mM ammonium formate (pH 4.0). Reverse-phase chromatographic separation of the TKIs is obtained using gradient elution of 20 mM ammonium formate pH 2.2 and acetonitrile containing 1% formic acid, followed by rinsing and re-equilibration to the initial solvent composition up to 20 min. Analyte quantification, using matrix-matched calibration samples, is performed by electrospray ionization triple-quadrupole mass spectrometry with selected reaction monitoring detection in positive mode. The method was validated according to FDA recommendations, including assessment of extraction yield, matrix-effect variability (<9.6%), overall process efficiency (87.1-104.2%), as well as short- and long-term stability of the TKIs in plasma. The method is precise (inter-day CV: 1.3-9.4%), accurate (-9.2% to +9.9%) and sensitive (lower limits of quantification between 1 and 10 ng/mL). This is the first broad-range LC-MS/MS assay covering the major TKIs currently in use. It is an improvement over previous methods in terms of convenience (a single extraction procedure for six major TKIs, significantly reducing analysis time), sensitivity, selectivity and throughput. It may contribute to filling current knowledge gaps in the pharmacokinetic/pharmacodynamic relationships of the TKIs developed after imatinib and to better defining their therapeutic ranges in different patient populations, in order to evaluate whether systematic TDM-guided dose adjustment of these anticancer drugs could minimize the risk of major adverse reactions and increase the probability of an efficient, long-lasting therapeutic response.
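To make the quoted validation figures concrete, the sketch below computes an inter-day CV% and an accuracy bias from hypothetical quality-control runs. It is a simplified illustration (formal bioanalytical validation typically uses ANOVA-based variance components); the QC values and nominal concentration are invented.

```python
# Illustrative sketch (not the validated assay's code): precision and accuracy
# figures of the kind quoted above, computed from hypothetical QC runs.
import numpy as np

nominal = 500.0                     # nominal QC concentration [ng/mL], invented
qc_runs = np.array([                # one row per day, three replicates per day
    [512.0, 498.0, 505.0],
    [489.0, 520.0, 501.0],
    [495.0, 507.0, 516.0],
])

day_means = qc_runs.mean(axis=1)
inter_day_cv = 100 * day_means.std(ddof=1) / day_means.mean()    # simplified inter-day CV%
accuracy_bias = 100 * (qc_runs.mean() - nominal) / nominal        # % deviation from nominal

print(f"inter-day CV ~ {inter_day_cv:.1f}%")
print(f"accuracy bias ~ {accuracy_bias:+.1f}%")
```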
Abstract:
Plants are sessile organisms, often characterized by limited dispersal. Seeds and pollen are the critical stages for gene flow. Here we investigate spatial genetic structure, gene dispersal and the relative contribution of pollen vs seed in the movement of genes in a stable metapopulation of the white campion Silene latifolia within its native range. This short-lived perennial plant is dioecious, has gravity-dispersed seeds and moth-mediated pollination. Direct measures of pollen dispersal suggested that large populations receive more pollen than small isolated populations and that most gene flow occurs within tens of meters. However, these studies were performed in the newly colonized range (North America) where the specialist pollinator is absent. In the native range (Europe), gene dispersal could fall on a different spatial scale. We genotyped 258 individuals from large and small (15) subpopulations along a 60 km, elongated metapopulation in Europe using six highly variable microsatellite markers, two X-linked and four autosomal. We found substantial genetic differentiation among subpopulations (global F(ST)=0.11) and a general pattern of isolation by distance over the whole sampled area. Spatial autocorrelation revealed high relatedness among neighboring individuals over hundreds of meters. Estimates of gene dispersal revealed gene flow at the scale of tens of meters (5-30 m), similar to the newly colonized range. Contrary to expectations, estimates of dispersal based on X and autosomal markers showed very similar ranges, suggesting similar levels of pollen and seed dispersal. This may be explained by stochastic events of extensive seed dispersal in this area and limited pollen dispersal.
Abstract:
Objective: Aspergillus species are the main pathogens causing invasive fungal infections, but the prevalence of other mould species is rising. Resistance to antifungals among these newly emerging pathogens presents a challenge for the management of infections. Conventional susceptibility testing of non-Aspergillus species is laborious and often difficult to interpret. We evaluated a new method for real-time susceptibility testing of moulds based on their growth-related heat production. Methods: Laboratory and clinical strains of Mucor spp. (n = 4), Scedosporium spp. (n = 4) and Fusarium spp. (n = 5) were used. Conventional MICs were determined by broth microdilution. Isothermal microcalorimetry was performed at 37 °C using Sabouraud dextrose broth (SDB) inoculated with 10^4 spores/ml (determined by microscopic enumeration). SDB without antifungals was used for evaluation of growth characteristics. Detection time was defined as the time at which heat flow exceeded 10 µW. For susceptibility testing, serial dilutions of amphotericin B, voriconazole, posaconazole and caspofungin were used. The minimal heat inhibitory concentration (MHIC) was defined as the lowest antifungal concentration inhibiting 50% of the heat produced by the growth control at 48 h, or at 24 h for Mucor spp. Susceptibility tests were performed in duplicate. Results: The tested mould genera had distinctive heat flow profiles, with a median detection time (range) of 3.4 h (1.9-4.1 h) for Mucor spp., 11.0 h (7.1-13.7 h) for Fusarium spp. and 29.3 h (27.4-33.0 h) for Scedosporium spp. The graph shows the heat flow (in duplicate) of one representative strain from each genus (dashed line marks the detection limit). Species belonging to the same genus showed similar heat production profiles. The table shows MHIC and MIC ranges for the tested moulds and antifungals. Conclusions: Microcalorimetry allowed rapid detection of growth of slow-growing species, such as Fusarium spp. and Scedosporium spp. Moreover, microcalorimetry offers a new approach for antifungal susceptibility testing of moulds, correlating with conventional MIC values. Interpretation of calorimetric susceptibility data is easy, and real-time data on the effect of different antifungals on the growth of the moulds are additionally obtained. This method may be used for investigation of different mechanisms of action of antifungals, new substances and drug-drug combinations.
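The MHIC definition above lends itself to a short sketch: integrate each heat-flow curve up to 48 h and report the lowest antifungal concentration whose cumulative heat is at most 50% of the drug-free growth control. The curves and concentrations below are invented; this illustrates the definition, not the study's analysis code.

```python
# Illustrative sketch (not the study's analysis): deriving the minimal heat
# inhibitory concentration (MHIC) from cumulative heat at 48 h. Data are invented.
import numpy as np

time_h = np.linspace(0, 48, 200)                        # time points [h]

def cumulative_heat(heat_flow_uW):
    """Trapezoidal integration of heat flow over time (relative heat units)."""
    dt = np.diff(time_h)
    return np.sum(0.5 * (heat_flow_uW[1:] + heat_flow_uW[:-1]) * dt)

# Hypothetical heat-flow curves: drug-free growth control and serial drug dilutions.
control = 60 / (1 + np.exp(-(time_h - 20) / 3))         # sigmoidal growth-related heat flow
curves = {                                              # concentration [mg/L] -> heat flow
    0.25: control * 0.95,
    0.5:  control * 0.70,
    1.0:  control * 0.40,
    2.0:  control * 0.10,
}

q_control = cumulative_heat(control)
mhic = min(c for c, hf in curves.items() if cumulative_heat(hf) <= 0.5 * q_control)
print(f"MHIC = {mhic} mg/L")   # -> 1.0 mg/L in this made-up example
```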
Abstract:
INTRODUCTION: The spatio-temporal pattern of arrhythmias in the embryonic/fetal heart subjected to a transient hypoxic or hypothermic stress remains to be established. METHODS AND RESULTS: Spontaneously beating hearts or isolated atria, ventricles, and conotruncus from 4-day-old chick embryos were subjected in vitro to 30 minutes of anoxia and 60 minutes of reoxygenation. Hearts were also submitted to 30 minutes of hypothermia (0-4 °C) and 60 minutes of rewarming. ECG disturbances and alterations of atrial and ventricular electromechanical delay (EMD) were systematically investigated. Baseline functional parameters were stable for at least 2 hours. Anoxia induced tachycardia, followed by bradycardia, atrial ectopy, first-, second-, and third-degree atrio-ventricular blocks and, finally, transient electromechanical arrest after 6.8 minutes, interquartile range (IQR) 3.1-16.2 (n = 8). Reoxygenation also triggered the Wenckebach phenomenon and ventricular escape beats. At the onset of reoxygenation, the QT and PR intervals and ventricular EMD increased by 68%, 70%, and 250%, respectively, whereas atrial EMD was not altered. No fibrillation, ventricular ectopic beats, or electromechanical dissociation was observed. Arrhythmic activity of the isolated atria persisted throughout anoxia and upon reoxygenation, whereas activity of the isolated ventricles abruptly ceased after 5 minutes of anoxia and resumed after 5 minutes of reoxygenation. During hypothermia-rewarming, cardiac activity stopped at 17.9 °C, IQR 16.2-20.6 (n = 4), and resumed at the same temperature with no arrhythmias. All preparations fully recovered after 40 minutes of reoxygenation or rewarming. CONCLUSION: In the embryonic heart, arrhythmias mainly originated in the sinoatrial tissue and resembled those observed in the adult heart. Furthermore, oxygen readmission was far more arrhythmogenic than rewarming, and the chronotropic, dromotropic, and inotropic effects were fully reversible.
Abstract:
Introduction: In the mid-1990s, the discovery of endogenous ligands for cannabinoid receptors opened a new era in this research field. Amides and esters of arachidonic acid have been identified as these endogenous ligands. Arachidonoylethanolamide (anandamide, AEA) and 2-arachidonoylglycerol (2-AG) seem to be the most important of these lipid messengers. In addition, virodhamine (VA), noladin ether (2-AGE), and N-arachidonoyl dopamine (NADA) have been shown to bind to CB receptors with varying affinities. In recent years, it has become increasingly evident that the endocannabinoid (EC) system is part of fundamental regulatory mechanisms in many physiological processes such as stress and anxiety responses, depression, anorexia and bulimia, schizophrenia disorders, neuroprotection, Parkinson disease, anti-proliferative effects on cancer cells, drug addiction, and atherosclerosis. Aims: This work presents the challenges of EC analysis and the contribution of information-dependent acquisition based on a hybrid triple quadrupole linear ion trap (QqQLIT) system for the profiling of these lipid mediators. Methods: The method was developed on an LC Ultimate 3000 series system (Dionex, Sunnyvale, CA, USA) coupled to a QTrap 4000 system (Applied Biosystems, Concord, ON, Canada). The ECs were separated on an XTerra C18 MS column (50 × 3.0 mm i.d., 3.5 μm) with a 5 min gradient elution. For confirmatory analysis, an information-dependent acquisition experiment was performed with selected reaction monitoring (SRM) as the survey scan and enhanced product ion (EPI) as the dependent scan. Results: The assay was found to be linear in the concentration range of 0.1-5 ng/mL for AEA, 0.3-5 ng/mL for VA, 2-AGE, and NADA, and 1-20 ng/mL for 2-AG using 0.5 mL of plasma. Repeatability and intermediate precision were below 15% over the tested concentration ranges. Under non-pathophysiological conditions, only AEA and 2-AG were actually detected in plasma, with concentrations ranging from 104 to 537 pg/mL and from 2160 to 3990 pg/mL, respectively. We have focused in particular on the evaluation of EC level changes in biological matrices during drug addiction and atherosclerosis processes. We will present preliminary data obtained during a pilot study after administration of cannabis to human patients. Conclusion: ECs have been shown to play a key role in the regulation of many pathophysiological processes. Medical research in these different fields continues to grow in order to understand and highlight the predominant role of ECs in CNS and peripheral tissue signalling. The profiling of these lipids requires the development of rapid, highly sensitive and selective analytical methods.
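As a generic illustration of the quantification step behind the linearity ranges quoted above, the sketch below fits a linear calibration curve for one analyte and back-calculates an unknown plasma concentration from its analyte/internal-standard peak-area ratio. The calibrator levels and ratios are invented and do not come from the published method.

```python
# Minimal sketch (not the published assay): fitting a linear calibration curve for
# one endocannabinoid and back-calculating an unknown plasma concentration from its
# analyte/internal-standard peak-area ratio. All numbers are invented.
import numpy as np

# Hypothetical AEA calibrators (ng/mL) and corresponding peak-area ratios (analyte / IS).
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0])
ratio = np.array([0.021, 0.053, 0.10, 0.21, 0.52, 1.04])

slope, intercept = np.polyfit(conc, ratio, deg=1)   # linear calibration fit

def back_calculate(measured_ratio):
    """Convert a measured peak-area ratio into a concentration via the calibration line."""
    return (measured_ratio - intercept) / slope

unknown_ratio = 0.065
print(f"AEA ~ {back_calculate(unknown_ratio):.2f} ng/mL")   # ~0.31 ng/mL here
```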