188 results for "Percentage of syllables stuttered"



Although many studies have been carried out to verify the involvement of the peripheral nervous system (PNS) in dystrophia myotonica (DM1) patients, the results remain controversial. The generation of DM1 transgenic mice displaying the human DM1 phenotype provides a useful tool to investigate the type and incidence of structural abnormalities in the PNS. In the present study, morphological and morphometric analysis of semi-thin sections of sciatic and sural nerves, lumbar dorsal root ganglia (DRG) and lumbar spinal cords revealed that in DM1 transgenic mice carrying 300 CTG repeats, there is no change in the number and diameter of myelinated axons compared to wild type. Only a non-significant reduction in the percentage of thin myelinated axons was detected in electron micrographs of ultra-thin sciatic nerve sections. Analysis of neuron numbers did not reveal a loss of either sensory neurons in the lumbar DRG or motor neurons in the lumbar spinal cord in these DM1 mice. Furthermore, in hind limb muscle sections stained with a neurofilament antibody and alpha-bungarotoxin, the intramuscular axon arborization appeared normal in DM1 mice and indistinguishable from that in wild-type mice. Moreover, in DM1 mice, there was neither irregularity in endplate structure nor an increase in endplate area. Statistical analysis likewise showed no increase in endplate density or in the concentration of acetylcholine receptors. Altogether, these results suggest that 300 CTG repeats are not sufficient to induce axonopathy, demyelination or neuronopathies in this transgenic mouse model.


Background: Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, which we validated retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings: We built two prediction rules ("Snap-shot rule" for a single sample and "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules in 2608 untreated patients to classify their 18,061 CD4 counts as either justifiable or superfluous, according to their prior >= 5% or < 5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that both rules falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200×10⁶/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance: Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count > 650 for a threshold of 200, > 900 for 350, or > 1150 for 500×10⁶/L, respectively. When CD4 counts fall below these limits, increased monitoring frequency becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
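The monitoring cut-offs from the conclusions can be sketched as a small decision helper. This is an illustrative sketch only, not clinical software: the function name is invented, and the 3-month interval for counts below the safe limit is an assumption reflecting the 3-to-6-month practice mentioned in the background, not a value derived from the study.

```python
def next_cd4_interval_months(cd4_count, treatment_threshold):
    """Sketch of the 'Snap-shot' idea: a CD4 count well above the
    treatment threshold can be re-measured about a year later.
    Safe limits (650 for a threshold of 200, 900 for 350, 1150 for
    500, all x10^6/L) come from the abstract's conclusions."""
    safe_limits = {200: 650, 350: 900, 500: 1150}
    limit = safe_limits[treatment_threshold]
    # Above the limit: ~1 year suffices; below it, monitor more often
    # (3 months here is an assumed stand-in for common practice).
    return 12 if cd4_count > limit else 3

print(next_cd4_interval_months(700, 200))   # -> 12
print(next_cd4_interval_months(1000, 500))  # -> 3
```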


Epidemiological studies in urban areas have linked increasing respiratory and cardiovascular pathologies with atmospheric particulate matter (PM) from anthropogenic activities. However, the biological fate of metal-rich PM industrial emissions in urban areas of developed countries remains understudied. Lead toxicity and bioaccessibility assessments were therefore performed on emissions from a lead recycling plant, using complementary chemical acellular tests and toxicological assays, as a function of PM size (PM(10-2.5), PM(2.5-1) and PM(1)) and origin (furnace, refining and channeled emissions). Process PM displayed differences in metal content, granulometry, and percentage of inhalable fraction as a function of their origin. Lead gastric bioaccessibility was relatively low (maximum 25%) compared with previous studies; nevertheless, because of high total lead concentrations, significant metal quantities were solubilized in simulated gastrointestinal fluids. Regardless of origin, the finest PM(1) particles induced the most significant pro-inflammatory response in human bronchial epithelial cells. Moreover, this biological response correlated with pro-oxidant potential assay results, suggesting some biological predictive value for acellular tests. Pulmonary effects of lead-rich PM could be driven by thiol complexation, either with lead ions or directly at the particulate surface. Finally, the health risk posed by PM was discussed on the basis of pro-inflammatory effects, acellular test results, and PM size distribution.


BACKGROUND: Tuberculosis (TB) screening in prisons is recommended, but the appropriate methods remain controversial. Studies evaluating screening in remand prisons are scarce. METHOD: Between 1997 and 2001, voluntary screening based on the tuberculin skin test (TST) was offered to all prisoners on entry into the largest remand prison in Switzerland. Prisoners with positive results underwent chest X-rays. We analysed this information, collected in an anonymous database. RESULTS: A total of 4890 prisoners entered the prison and were eligible for screening; 3779 (77.3%) had a TST performed, on average 9 days after arrival; 46.9% were positive (induration >= 10 mm). Positive TST rates were similar over the 5 years. Women were more likely to have a negative TST (60.4%) than men (47.7%; P < 0.001, Pearson's χ² = 16.5). Positive TSTs varied according to the prisoner's country of origin (64% for sub-Saharan Africa, 57% for Eastern Europe, 56% for North Africa, 51% for Asia and 34% for North and West Europe). CONCLUSION: The percentage of TST-positive subjects was high, and most did not receive preventive treatment for latent TB. The usefulness of systematic TST for all prisoners on entry is limited, as diagnosis of TB disease usually remains the priority in prisons.


Effect of an intravenous bolus of phenylephrine or ephedrine on skin blood flow during spinal anaesthesia. Background: Phenylephrine or ephedrine is routinely used to correct hypotensive episodes following spinal anaesthesia (SA). The influence of these two vasopressors on skin blood flow (SBF) has not yet been described. We therefore evaluated the effects of an i.v. bolus of 75 µg of phenylephrine or 7.5 mg of ephedrine on SBF, measured by laser Doppler flowmetry, during sympathetic blockade induced by SA. Methods: With Ethics Committee approval and written consent, 20 patients scheduled for elective procedures in the supine position under SA were enrolled in this double-blind randomized study. SBF was measured continuously by two probes, fixed at the thigh (area with sympathetic blockade) and at the forearm (area without sympathetic blockade), respectively. SBF values were recorded after SA (control values) and then after a bolus administration of phenylephrine (n=10) or ephedrine (n=10) when systolic blood pressure decreased by 20 mmHg. Changes were expressed as a percentage of control SBF values and analysed by Student's paired t-test. Results: Patient characteristics and dermatomal sensory levels were similar in both groups. Phenylephrine increased mean SBF at the forearm by 44% (79%) [mean (SD)] and at the thigh by 34% (24%). Ephedrine decreased SBF at the forearm by 16% (15%) and at the thigh by 22% (11%). The ephedrine bolus restored arterial blood pressure to pre-anaesthesia values, whereas phenylephrine did not. Conclusion: Administration of phenylephrine and ephedrine has opposite effects on skin blood flow, and sympathetic blockade does not modify this response. These findings could be explained by the distribution of alpha-adrenoceptor subtypes and their relative predominance among veins and arteries of different sizes perfusing the subcutaneous tissue and the skin. Given its better efficacy in correcting hypotensive episodes following SA, ephedrine should be preferred to phenylephrine, their opposite effects on SBF not being relevant for clinical practice.


INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
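The scoring rule described in the results translates directly into code. The sketch below (function names invented for illustration) assigns the stated points and maps the total to the reported risk strata:

```python
def flu_score(fever_and_cough, myalgias, duration_under_48h, chills_or_sweats):
    """Clinical rule from the abstract: 2 points for fever plus cough,
    2 for myalgias, 1 each for onset <48 h and chills or sweats."""
    return (2 * fever_and_cough + 2 * myalgias
            + duration_under_48h + chills_or_sweats)

def influenza_risk(score):
    # Reported risk of influenza: 8% for 0-2 points, 30% for 3 points,
    # 59% for 4-6 points.
    if score <= 2:
        return "low (~8%)"
    if score == 3:
        return "moderate (~30%)"
    return "high (~59%)"

print(influenza_risk(flu_score(True, True, False, True)))  # -> high (~59%)
```

Per the conclusion, only the moderate-risk stratum would prompt further point-of-care testing; low- and high-risk patients (about two-thirds) would not.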


BACKGROUND: Interventions have been developed to reduce overestimations of substance use among others, especially for alcohol and among students. Nevertheless, there is a lack of knowledge on misperceptions of use for substances other than alcohol. We studied the prevalence of misperceptions of tobacco, cannabis, and alcohol use and whether the perception of tobacco, cannabis, and alcohol use by others is associated with one's own use. METHODS: Participants (n=5216) in a cohort study drawn from a census of 20-year-old men (N=11,819) estimated the prevalence of tobacco and cannabis use among peers of the same age and sex and the percentage of their peers drinking more alcohol than they did. Using the census data, we determined whether participants overestimated, accurately estimated, or underestimated substance use by others. Regression models were used to compare substance use by those who overestimated or underestimated peer substance use with that of those who accurately estimated peer use. Other variables included in the analyses were the presence of close friends with alcohol or other drug problems and family history of substance use. RESULTS: Tobacco use by others was overestimated by 46.1% of participants and accurately estimated by 37.3%. Cannabis use by others was overestimated by 21.8% and accurately estimated by 31.6%. Alcohol use by others was overestimated by more than half (53.4%) of participants and accurately estimated by 31.0%. In multivariable models, compared with participants who accurately estimated tobacco use by others, those who overestimated it reported smoking more cigarettes per week (incidence rate ratio [IRR] 1.17, 95% CI 1.05-1.32). There was no difference in the number of cigarettes smoked per week between those underestimating and those accurately estimating tobacco use by others (IRR 0.99, 95% CI 0.84-1.17). Compared with participants accurately estimating cannabis use by others, those who overestimated it reported more days of cannabis use per month (IRR 1.43, 95% CI 1.21-1.70), whereas those who underestimated it reported fewer days of cannabis use per month (IRR 0.62, 95% CI 0.23-0.75). Compared with participants accurately estimating alcohol use by others, those who overestimated it reported consuming more drinks per week (IRR 1.57, 95% CI 1.43-1.72), whereas those who underestimated it reported consuming fewer drinks per week (IRR 0.41, 95% CI 0.34-0.50). CONCLUSIONS: Perceptions of substance use by others are associated with one's own use. In particular, overestimating use by others is frequent among young men and is associated with one's own greater consumption. This association is independent of the substance use environment, indicating that, even in the case of proximity to a heavy-usage group, perception of use by others may influence one's own use. If preventive interventions are to be based on normative feedback, and their aim is to reduce overestimations of use by others, then the prevalence of overestimation indicates that they may be of benefit to roughly half the population; or, in the case of cannabis, to as few as 20%. Such interventions should take into account differing strengths of association across substances.


BACKGROUND: Cardiovascular magnetic resonance (CMR) has become an important diagnostic imaging modality in cardiovascular medicine. However, insufficient image quality may compromise its diagnostic accuracy. We aimed to describe and validate standardized criteria to evaluate a) cine steady-state free precession (SSFP), b) late gadolinium enhancement (LGE), and c) stress first-pass perfusion images. These criteria will serve for quality assessment in the setting of the Euro-CMR registry. METHODS: Thirty-five qualitative criteria were defined (scores 0-3), with lower scores indicating better image quality. In addition, quantitative parameters were measured, yielding 2 additional quality criteria: the signal-to-noise ratio (SNR) of non-infarcted myocardium (as a measure of correct signal nulling of healthy myocardium) for LGE images, and the % signal increase during contrast medium first pass for perfusion images. These qualitative and quantitative criteria were assessed in a total of 90 patients (60 patients scanned at our own institution at 1.5T (n=30) and 3T (n=30), and 30 patients randomly chosen from the Euro-CMR registry examined at 1.5T). Analyses were performed by 2 SCMR level-3 experts, 1 trained study nurse, and 1 trained medical student. RESULTS: The global quality score was 6.7±4.6 (n=90, mean of 4 observers, maximum possible score 64), range 6.4-6.9 (p=0.76 between observers). It ranged from 4.0 to 4.3 for 1.5T (p=0.96 between observers), from 5.9 to 6.9 for 3T (p=0.33 between observers), and from 8.6 to 10.3 for the Euro-CMR cases (p=0.40 between observers). The inter-observer (n=4) and intra-observer (n=2) agreement for the global quality score, i.e. the percentage of assignments to the same quality tertile, ranged from 80% to 88% and from 90% to 98%, respectively. The agreement for the quantitative assessment of LGE images (scores 0-2 for SNR <2, 2-5, and >5, respectively) ranged from 78% to 84% for the entire population, and was 70-93% at 1.5T, 64-88% at 3T, and 72-90% for the Euro-CMR cases. The agreement for perfusion images (scores 0-2 for %SI increase >200%, 100-200%, and <100%, respectively) ranged from 81% to 91% for the entire population, and was 76-100% at 1.5T, 67-96% at 3T, and 62-90% for the Euro-CMR registry cases. The intra-class correlation coefficient for the global quality score was 0.83. CONCLUSIONS: The described criteria for the assessment of CMR image quality are robust, with good inter- and intra-observer agreement. Further research is needed to define the impact of image quality on the diagnostic and prognostic yield of CMR studies.
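The two quantitative criteria reduce to simple threshold look-ups. A minimal sketch follows; function names are invented, and the behaviour exactly at the boundary values (SNR of 2 or 5, signal increase of 100% or 200%) is not specified in the abstract and is an assumption here:

```python
def lge_snr_score(snr):
    """LGE score from the SNR of non-infarcted myocardium:
    0 for SNR < 2, 1 for 2-5, 2 for > 5 (lower = better nulling).
    Boundary handling (exactly 2 or 5) is assumed."""
    if snr < 2:
        return 0
    return 1 if snr <= 5 else 2

def perfusion_si_score(si_increase_percent):
    """Perfusion score from % signal increase during first pass:
    0 for > 200%, 1 for 100-200%, 2 for < 100% (lower = better)."""
    if si_increase_percent > 200:
        return 0
    return 1 if si_increase_percent >= 100 else 2
```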


The expression of substance P (SP) was studied in sensory neurons of developing chick lumbosacral dorsal root ganglia (DRG), using a mixture of periodic acid, lysine and paraformaldehyde as fixative and a monoclonal antibody for SP-like immunostaining. SP-like-immunoreactive DRG cells first appeared at E5, then rapidly increased in number to reach a peak (88% of ganglion cells) at E8, and finally declined (59% at E12, 51% after hatching). The fall in SP-like-positive DRG cells resulted from two concomitant events affecting a subset of small B-neurons: a loss of neuronal SP-like immunoreactivity and cell death. After hindlimb resection at an early (E6) or late (E12) stage of development (that is, before or after the establishment of peripheral connections), the DRG were examined 6 days later. In both cases, drastic neuronal death occurred in the ipsilateral DRG. However, resection at E6 did not change the percentage of SP-like-positive neurons, while resection at E12 severely reduced the proportion of SP-like-immunoreactive DRG cells (25%). In conclusion, connections established between DRG and peripheral target tissues not only promote the survival of sensory neurons, but also control the maintenance of SP-like expression. Factors derived from innervated targets, such as NGF, would support the survival of SP-expressing DRG cells and enhance their SP content, while other factors present in skeletal muscle or skin would hinder SP expression and therefore lower SP levels in a subset of primary sensory neurons.


The antihypertensive effect of debrisoquine (20 mg/day), methyldopa (100 mg/day) and propranolol (160 mg/day) was compared to that obtained with a placebo in a controlled trial carried out by a group of 14 internists. Forty-eight patients with uncomplicated essential hypertension were included. Mefruside (25 mg/day) was first given alone for 6 weeks (the "open phase" of the trial); to this diuretic was then added, in double-blind fashion and randomized sequence, a placebo or an active drug. Each of the 4 blind phases lasted 4 weeks. At the end of the open phase, blood pressure in the seated position averaged 168/111 +/- 19.6/13.5 mm Hg (mean +/- SD). A significant blood pressure decrease was observed after 4 weeks of treatment with the placebo as well as with the investigated compounds. With the placebo, blood pressure was reduced to 158/102 +/- 19.6/13.5 mm Hg (p < 0.001). The magnitude of the additional blood pressure decrease induced by the active drugs was relatively small, varying from 4 (debrisoquine) to 10 mm Hg (methyldopa, p < 0.01) for the systolic and from 3 (debrisoquine, p < 0.05) to 5 mm Hg (propranolol, p < 0.05) for the diastolic pressure. The percentage of patients with a systolic pressure <= 140 mm Hg or a diastolic pressure < 90 mm Hg during administration of any of the drugs did not exceed 40% and 20%, respectively. Propranolol appeared to be better tolerated than the other antihypertensive agents. These rather disappointing blood pressure results suggest that the efficacy of antihypertensive agents in private practice cannot be extrapolated from studies carried out in specialized hypertension clinics.


Although important progress has been achieved in the therapeutic management of transplant recipients, acute and chronic rejection remains the leading cause of premature graft loss after solid organ transplantation. This, together with the undesirable side effects of immunosuppressive drugs, has significant implications for the long-term outcome of transplant recipients. Thus, a better understanding of the immunological events occurring after transplantation is essential. The immune system plays an ambivalent role in the outcome of a graft. On the one hand, some T lymphocytes with effector functions (called alloreactive) can mediate a cascade of events eventually resulting in rejection, either acute or chronic, of the grafted organ; on the other hand, a small subset of T lymphocytes, called regulatory T cells, has been shown to be implicated in the control of these harmful rejection responses, among other functions. We therefore focused on the balance between circulating effector (alloreactive) and regulatory T lymphocytes, which seems to play an important role in the outcome of allografts, in the context of kidney transplantation. The results were correlated with variables such as the clinical status of the patients, the immunosuppressive drugs used as induction or maintenance agents, and past or current episodes of rejection. We observed that the percentage of the alloreactive T lymphocyte population correlated with the clinical status of the kidney transplant recipients. Indeed, the highest percentage was found in patients suffering from chronic humoral rejection, whilst patients on no or only minimal immunosuppressive treatment, or on sirolimus-based immunosuppression, displayed a percentage comparable to that of healthy non-transplanted individuals. During the first year after renal transplantation, the balance between effector and regulatory T lymphocytes was tipped towards the detrimental effector immune response with both induction agents studied (thymoglobulin and basiliximab). Overall, these results indicate that monitoring these immunological parameters may be very useful for the clinical follow-up of transplant recipients; these tests may help identify patients who are more likely to develop rejection or, on the contrary, who tolerate their graft well, in order to adapt the immunosuppressive treatment on an individual basis.


OBJECTIVES: To measure postabsorptive fat oxidation (F(ox)) and to assess its association with body composition (lean body mass [LBM] and body fat mass [BFM]) and pubertal development. DESIGN: We studied 235 control (male/female ratio = 116/119; age [mean +/- SD]: 13.1 +/- 1.7 years; weight: 45.3 +/- 10.5 kg; LBM: 34.3 +/- 7.1 kg; BFM: 11.0 +/- 4.5 kg) and 159 obese (male/female ratio = 93/66; age: 12.9 +/- 2.1 years; weight: 76.2 +/- 19.1 kg; LBM: 47.4 +/- 10.9 kg; BFM: 28.8 +/- 9.2 kg) adolescents. Postabsorptive F(ox) was calculated from oxygen consumption, carbon dioxide production, and urinary nitrogen, as measured by indirect calorimetry and Kjeldahl's method, respectively. Body composition was determined by anthropometry. RESULTS: Postabsorptive F(ox) (absolute value and percentage of resting metabolic rate) was significantly (p < 0.001) higher in the obese adolescents (76.7 +/- 26.3 gm/24 hours, 42.3% +/- 18.7%) than in the control subjects (40.0 +/- 26.3 gm/24 hours, 28.7% +/- 17.0%), even when adjusted for LBM. F(ox) corrected for BFM was similar in control and obese children, but was significantly lower in girls than in boys (control male subjects: 62.1 +/- 29.1 gm/24 hours; control female subjects: 51.6 +/- 28.4 gm/24 hours; obese male subjects: 57.3 +/- 29.0 gm/24 hours; obese female subjects: 45.0 +/- 28.4 gm/24 hours). BFM and LBM showed a significant positive correlation with F(ox). By stepwise regression analysis, the most important determinant of F(ox) was BFM in obese children and LBM in control children. There was a significant rise in F(ox) during puberty; however, it was mainly explained by changes in body composition. CONCLUSIONS: Obese adolescents have higher F(ox) rates than their normal-weight counterparts. Both LBM and fat mass are important determinants of F(ox).


Background: The 2007 European Crohn's and Colitis Organization (ECCO) guidelines on anemia in inflammatory bowel disease (IBD) favour intravenous (iv) over oral (po) iron supplementation because of better effectiveness and tolerance. We aimed to determine the percentage of IBD patients under iron supplementation therapy and the dynamics of prescription habits (iv versus po) over time. Methods: Helsana, a leading Swiss health insurance company, provides coverage for approximately 18% of the Swiss population, corresponding to about 1.2 million enrollees. Patients with Crohn's disease (CD) and ulcerative colitis (UC) were identified in the anonymised Helsana database. Results: In total, 629 CD (61% female) and 398 UC (57% female) patients were identified; the mean observation time was 31.8 months for CD and 31.0 months for UC patients. Of the entire study population, 27.1% were prescribed iron (21.1% of males and 31.1% of females). Patients treated with IBD-specific drugs (steroids, immunomodulators, anti-TNF agents) were more frequently treated with iron than patients without any medication (35.0% vs. 20.9%, OR 1.91, 95% CI 1.41-2.61). The share of iv iron among all iron prescriptions rose from 48.8% in 2006/2007 to 65.2% in 2008/2009; iv iron prescribing increased by a factor of 1.89. Conclusions: One third of the IBD population was treated with iron supplementation. A gradual shift from oral to iv iron was observed over time. This switch in prescription habits coincides with the implementation of the ECCO consensus guidelines on anemia in IBD.
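As a check on the reported association, a crude odds ratio can be computed from the two percentages alone. The sketch below is illustrative only; the abstract's OR of 1.91 presumably comes from a (possibly adjusted) model, so the crude value differs slightly:

```python
def crude_odds_ratio(p_exposed, p_unexposed):
    """Crude odds ratio from two proportions (no adjustment,
    no confidence interval)."""
    odds = lambda p: p / (1 - p)
    return odds(p_exposed) / odds(p_unexposed)

# 35.0% iron prescription with IBD-specific drugs vs. 20.9% without
print(round(crude_odds_ratio(0.350, 0.209), 2))  # -> 2.04
```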


PURPOSE: To determine the effect of repeated intravitreal injections of ranibizumab (0.5 mg; 0.05 ml) on retrobulbar blood flow velocities (BFVs), using ultrasound imaging quantification, in twenty patients with exudative age-related macular degeneration treated for 6 months. METHODS: Visual acuity (ETDRS), central macular thickness (OCT), and peak-systolic, end-diastolic and mean BFVs in the central retinal (CRA), temporal posterior ciliary (TPCA) and ophthalmic (OA) arteries were measured before, and 2 days, 3 weeks and 6 months after, the first injection. Patients were examined monthly and received 1-5 additional injections depending on the results of the ophthalmologic examination. RESULTS: Six months after the first injection, a significant increase in visual acuity (50.9 ± 25.9 versus 44.4 ± 21.7; p < 0.01) and a decrease in mean central macular thickness (267 ± 74 versus 377 ± 115 μm; p < 0.001) were observed compared to baseline. Although mean BFVs decreased by 16%±3% in the CRA and 20%±5% in the TPCA (p < 0.001) 2 days after the first injection, no significant change was seen thereafter. Mean BFVs in the OA decreased by 19%±5% at week 3 (p < 0.001). However, the smallest number of injections (two) was associated with the longest interval between the last injection and month 6 (20 weeks) and with the best return of mean BFVs in the CRA to baseline levels, suggesting that the effects of ranibizumab on the native retinal vascular supply are reversible after its discontinuation. Moreover, a significant correlation between the number of injections and the percentage change in mean BFVs in the CRA was observed at month 6 (R = 0.74, p < 0.001), but not for the TPCA or OA. CONCLUSION: Ranibizumab could impair the native choroidal and retinal vascular networks, but its effect seems reversible after discontinuation.


In Pseudomonas aeruginosa, cell-cell communication based on N-acyl-homoserine lactone (AHL) signal molecules (termed quorum sensing) is known to control the production of extracellular virulence factors. Hence, in pathogenic interactions with host organisms, the quorum-sensing (QS) machinery can confer a selective advantage on P. aeruginosa. However, as shown by transcriptomic and proteomic studies, many intracellular metabolic functions are also regulated by quorum sensing. Some of these serve to regenerate the AHL precursors methionine and S-adenosyl-methionine and to degrade adenosine via inosine and hypoxanthine. The fact that a significant percentage of clinical and environmental isolates of P. aeruginosa is defective for QS because of mutations in the major QS regulatory gene lasR raises the question of whether the QS machinery can have a negative impact on the organism's fitness. In vitro, lasR mutants have a higher probability of escaping lytic death in stationary phase under alkaline conditions than does the QS-proficient wild type. Similar selective forces might also operate in natural environments.