990 results for Social Work, School Social Work, the 1960s and 70s


Relevance:

100.00%

Publisher:

Abstract:

The objective of this study was to observe the immunological behavior of newborn foals of the Mangalarga and Anglo-Arabian breeds with respect to the acquisition of maternal antibodies and its correlation with colostrum immunoglobulin levels. Seven Anglo-Arabian and six Mangalarga foals were blood-sampled immediately after birth (before any colostrum intake), at 24 and 48 hours, and at 5, 10, 15, 20, 25, 30, 40, 50, 60, and 70 days after birth; colostrum from their dams was sampled immediately after foaling, before the first nursing. Serum immunoglobulins were quantified by the zinc sulfate turbidity (ZST) method, and mathematical models describing the process were fitted to the results. For the Mangalarga breed the fitted model was y = 23.9274 - 0.39766x + 6.4675 x 10^-3 x^2 (r = 0.91, P < 0.01), and for the Anglo-Arabian breed y = 34.161 - 0.756062x + 0.015604x^2 - 1.013 x 10^-3 x^3 (r = 0.96, P < 0.05). Colostrum IgG concentration was estimated from the total protein concentration measured by the micro-Kjeldahl method. The study led to the following conclusions: (1) larger amounts of passively acquired IgG (immunoglobulin G) in foal serum delayed the establishment of normal total Ig levels (immunoglobulins G + M + A + E); (2) animals that acquired less passive IgG showed a more intense endogenous Ig production response; (3) the estimated colostrum IgG concentration correlated positively with foal serum concentration at the peak of absorption.
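The two fitted serum-Ig curves can be evaluated directly from the coefficients reported above; a minimal sketch (x in days after birth, y in ZST units), with the nadir of the Mangalarga quadratic obtained from its vertex. The interpretation of that nadir as the point where endogenous production overtakes passive Ig decay is an assumption for illustration, not a claim from the abstract.

```python
# Fitted serum-Ig curves from the abstract (x = days after birth).
def ig_mangalarga(x):
    return 23.9274 - 0.39766 * x + 6.4675e-3 * x**2

def ig_anglo_arabian(x):
    return 34.161 - 0.756062 * x + 0.015604 * x**2 - 1.013e-3 * x**3

# Vertex of the quadratic: the approximate low point of serum Ig
# before endogenous production takes over (illustrative reading).
nadir_day = 0.39766 / (2 * 6.4675e-3)
print(round(nadir_day, 1))  # 30.7
```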


The induction of granuloma formation by soluble egg antigens (SEA) of Schistosoma mansoni is accompanied by T cell-mediated lymphokine production that regulates the intensity of the response. In the present study we examined the ability of SDS-PAGE-fractionated SEA proteins to elicit granulomas and lymphokine production in infected and egg-immunized mice. At the acute stage of infection, SEA fractions (<21, 25-30, 32-38, 60-66, 70-90, 93-125, and >200 kD) that elicited pulmonary granulomas also elicited IL-2 and IL-4 production. At the chronic stage, a smaller set of fractions (60-66, 70-90, 93-125, and >200 kD) was able to elicit granulomas, with an overall decrease in IL-2 and IL-4 production. Granulomas were elicited by larval-egg crossreactive and egg-specific fractions at both the acute and chronic stages of the infection. Examination of lymphokine production from egg-immunized mice demonstrated that as early as day 4, IL-2 was produced by spleen cells stimulated with the <21, 32-38, 40-46, 93-125, and >200 kD fractions. By day 16, IL-2 production was evoked by 8 of 9 fractions. IL-4 production at day 4 was minimal in response to all fractions, while at day 16 IL-4 was elicited by the <21, 25-30, 50-56, 93-125, and >200 kD fractions. The present study reveals differences between the acute and chronic stages of infection in the range of SEA fractions able to elicit granulomas and IL-2 and IL-4 production. Additionally, it demonstrates sequential lymphokine production (IL-2 followed by IL-4) during the primary egg antigen response.


Abstract Background. The broad spectrum of antitumor activity of both the oral platinum analogue satraplatin (S) and capecitabine (C), together with the advantage of oral administration, prompted a clinical study to define the maximum tolerated dose (MTD) of the combination. Patients and methods. Four dose levels of S (mg/m(2)/day) and C (mg/m(2)/day) were evaluated in adult patients with advanced solid tumors: 60/1650, 80/1650, 60/2000, and 70/2000. A course consisted of 28 days, with sequential administration of S (days 1-5) and C (days 8-21) followed by one week of rest. Results. Thirty-seven patients were treated, 24 in the dose-escalation and 13 in the expansion phase. At the MTD, defined as S 70/C 2000, two patients presented dose-limiting toxicities: lack of recovery from neutropenia by day 42, and nausea requiring a skipped dose of C. The most frequent toxicities were nausea (57%), diarrhea (51%), neutropenia (46%), and anorexia, fatigue, and vomiting (38% each). Two partial responses were observed in platinum-sensitive ovarian cancer and one in prostate cancer. Conclusion. At S 70/C 2000 the combination of sequential S and C is tolerated with manageable toxicities; its evaluation in platinum- and fluorouracil-sensitive tumor types is worthwhile because of its easier administration and lack of nephro- and neurotoxicity compared with the parent compounds.


Osteoporotic fracture (OF) is one of the major causes of morbidity and mortality in industrialized countries, and Switzerland is among the countries with the highest risk. Our aims were (1) to calculate FRAX® in a selected Swiss population as of the day before the occurrence of an OF and (2) to compare the results with the proposed Swiss FRAX® thresholds. The Swiss Association Against Osteoporosis has proposed guidelines for the treatment of osteoporosis based on age-dependent thresholds. To identify a population at very high risk of osteoporotic fracture, we included all consecutive patients in the active OF pathway cohort of Lausanne University Hospital, Switzerland. FRAX® was calculated with the data available on the day before the actual OF. Patients with a FRAX® BMI (body mass index) or FRAX® BMD (bone mineral density) score lower than the Swiss thresholds were not considered at high risk. Two hundred thirty-seven patients were included, with a mean age of 77.2 years; 80% were female. The major fracture types were hip (58%) and proximal humerus (25%) fractures. Mean FRAX® BMI values were 28.0, 10.0, 13.0, 26.0, and 37.0% for the age groups 50-59, 60-69, 70-79, and 80-89 years, respectively. Fifty percent of the population was not considered at high risk by FRAX® BMI. FRAX® BMD was available for 95 patients, of whom 45% had a T-score < -2.5 standard deviations. Only 30% of patients with a normal or osteopenic BMD were classified at high risk by FRAX® BMD. The currently proposed Swiss thresholds thus failed to classify 50 to 70% of the studied population as high risk on the day before a major OF.


OBJECTIVES: Skin notations are used as a hazard identification tool to flag chemicals that carry a potential risk from transdermal penetration. The transparency and rigor of the skin notation assignment process have recently been questioned. We compared different approaches proposed as criteria for these notations, as a starting point for improving and systematizing current practice. METHODS: Skin notations, acute dermal median lethal doses in mammals (LD50s), and two dermal risk indices derived from previously published work were compared using the lists of Swiss maximum allowable concentrations (MACs) and threshold limit values (TLVs) of the American Conference of Governmental Industrial Hygienists (ACGIH). Both indices were based on quantitative structure-activity relationship (QSAR) estimates of transdermal flux. One index compared the cumulative dose received through the skin, for a given exposed surface area and duration, to the dose received through the lungs after 8 h of inhalation at the MAC or TLV. The other estimated the increase in blood level caused by adding skin exposure to the inhalation route at kinetic steady state. Dermal-to-other-route LD50 ratios were calculated as secondary indices of dermal penetrability. RESULTS: The working data set included 364 substances. Depending on the subdataset, agreement between the Swiss and ACGIH skin notations varied between 82 and 87%. Chemicals with a skin notation were more likely to have higher dermal risk indices and lower dermal LD50s than chemicals without a notation (probabilities between 60 and 70%). The risk indices based on cumulative dose and on kinetic steady state, respectively, appeared proportional up to a constant independent of chemical-specific properties. They agreed well with dermal LD50s (Spearman correlation coefficients -0.42 to -0.43). Dermal-to-other-route LD50 ratios were moderately associated with QSAR-based transdermal fluxes (Spearman correlation coefficients -0.2 to -0.3). CONCLUSIONS: The plausible but variable relationship between current skin notations and the different approaches tested confirms the need to improve current skin notations. QSAR-based risk indices and dermal toxicity data might be successfully integrated into a systematic alternative to current skin notations for detecting chemicals with a potential dermal risk in the workplace.
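The cumulative-dose index described above can be sketched as a simple ratio. The exposure parameters below (skin area, contact time, air volume inhaled per shift) are illustrative assumptions, not the values used in the study, and the transdermal flux would in practice come from a QSAR model:

```python
def dermal_risk_index(flux_mg_per_cm2_h, mac_mg_per_m3,
                      skin_area_cm2=2000.0, contact_h=1.0,
                      inhaled_m3_per_shift=10.0):
    """Ratio of the cumulative dermal dose (flux * area * duration)
    to the dose inhaled over an 8-h shift at the exposure limit
    (MAC or TLV). Values near or above 1 flag a substantial
    dermal contribution. All parameter defaults are illustrative."""
    dermal_dose_mg = flux_mg_per_cm2_h * skin_area_cm2 * contact_h
    inhaled_dose_mg = mac_mg_per_m3 * inhaled_m3_per_shift
    return dermal_dose_mg / inhaled_dose_mg

# Hypothetical chemical: flux 0.005 mg/cm2/h, MAC 50 mg/m3
print(round(dermal_risk_index(0.005, 50.0), 3))  # 0.02
```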


INTRODUCTION: Higher and lower cerebral perfusion pressure (CPP) thresholds have been proposed to improve brain tissue oxygen pressure (PtiO2) and outcome. We studied the distribution of hypoxic PtiO2 samples at different CPP thresholds, using prospective multimodality monitoring in patients with severe traumatic brain injury. METHODS: This is a prospective observational study of 22 severely head-injured patients admitted to a neurosurgical critical care unit, from whom multimodality data were collected during standard management directed at improving intracranial pressure, CPP, and PtiO2. Local PtiO2 was continuously measured in uninjured areas, and snapshot samples were collected hourly and analyzed in relation to simultaneous CPP. Other variables that influence tissue oxygen availability (mainly arterial oxygen saturation, end-tidal carbon dioxide, body temperature, and effective hemoglobin) were also monitored and kept stable to avoid non-ischemic hypoxia. RESULTS: Half of the PtiO2 samples were at risk of hypoxia (defined as PtiO2 of 15 mmHg or less) when CPP was below 60 mmHg; this percentage decreased to 25% and 10% when CPP was between 60 and 70 mmHg and above 70 mmHg, respectively (p < 0.01). CONCLUSION: The risk of brain tissue hypoxia in severely head-injured patients can be very high when CPP is below the commonly recommended threshold of 60 mmHg, remains elevated when CPP is slightly above that threshold, and decreases only at higher CPP values.
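The banded analysis above amounts to tallying the share of hypoxic snapshots within each CPP band. A minimal sketch with made-up (CPP, PtiO2) pairs, not the study's data:

```python
# Fraction of hypoxic PtiO2 snapshots (<= 15 mmHg) per CPP band.
HYPOXIA_MMHG = 15.0

def hypoxic_fraction(samples, lo=None, hi=None):
    """samples: iterable of (cpp_mmhg, ptio2_mmhg) pairs.
    Returns the fraction of samples in the half-open CPP band
    [lo, hi) whose PtiO2 is at or below the hypoxia threshold."""
    band = [p for cpp, p in samples
            if (lo is None or cpp >= lo) and (hi is None or cpp < hi)]
    hypoxic = [p for p in band if p <= HYPOXIA_MMHG]
    return len(hypoxic) / len(band) if band else 0.0

# Illustrative snapshots (cpp, ptio2)
samples = [(55, 12), (58, 14), (65, 16), (62, 13), (72, 22), (75, 18)]
print(hypoxic_fraction(samples, hi=60))         # 1.0
print(hypoxic_fraction(samples, lo=60, hi=70))  # 0.5
print(hypoxic_fraction(samples, lo=70))         # 0.0
```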


The prevalence of obesity and hypertension has increased in recent decades, and around 60 to 70% of the incidence of hypertension is related to obesity. The relationship between obesity and hypertension is now well established: the sympathetic nervous system and the renin-angiotensin-aldosterone (RAA) system are activated in obese patients, mostly by insulin, and predispose the kidney to reabsorb sodium and water. In obese patients with hypertension, a blood pressure target of < 140/90 mmHg is recommended. Lifestyle changes (weight loss, physical activity, a low-salt diet) are useful for lowering blood pressure but are difficult to maintain in the long term. When drugs are necessary, metabolically neutral agents should be used, and they often need to be combined with other drug classes to achieve the blood pressure target.


Tasosartan is a long-acting angiotensin II (AngII) receptor blocker whose long duration of action has been attributed to its active metabolite, enoltasosartan. In this study we evaluated the relative contributions of tasosartan and enoltasosartan to the overall pharmacological effect of tasosartan. The AngII receptor blockade produced by single doses of tasosartan (100 mg p.o. and 50 mg i.v.) and enoltasosartan (2.5 mg i.v.) was compared in 12 healthy subjects in a randomized, double-blind, three-period crossover study using two approaches: the in vivo blood pressure response to exogenous AngII, and an ex vivo AngII radioreceptor assay. Tasosartan induced a rapid and sustained blockade of AngII subtype-1 (AT1) receptors. In vivo, tasosartan (p.o. or i.v.) blocked AT1 receptors by 80% at 1 to 2 h after drug administration and still had a 40% effect at 32 h. In vitro, the blockade was estimated at 90% at 2 h and 20% at 32 h. In contrast, the blockade induced by enoltasosartan was markedly delayed and barely reached 60 to 70% despite i.v. administration and high plasma levels. In vitro, the AT1-antagonistic effect of enoltasosartan was markedly influenced by the presence of plasma proteins, which decreased its affinity for the receptor and slowed its receptor association rate. The early effect of tasosartan is therefore due mainly to tasosartan itself, with little if any contribution from enoltasosartan, whose antagonistic effect appears later. The delayed in vivo blockade observed for enoltasosartan appears to be due to high, tight protein binding and a slow dissociation from the carrier.


Four-lane undivided roadways in urban areas can experience a degradation of service and/or safety as traffic volumes increase; the presence of turning vehicles on this type of roadway has a dramatic effect on both factors. The solution typically identified for these problems is the addition of a raised median or a two-way left-turn lane (TWLTL). The mobility and safety benefits of these actions have been proven and are discussed in the “Past Research” chapter of this report, along with some general cross section selection guidelines. The cost and right-of-way impacts of these actions are widely accepted. These guidelines focus on the evaluation and analysis of an alternative to the typical four-lane undivided cross section improvement approach described above. It has been found that converting a four-lane undivided cross section to three lanes (i.e., one lane in each direction and a TWLTL) can improve safety and maintain an acceptable level of service. These guidelines summarize the results of past research in this area (which is almost nonexistent) and the qualitative/quantitative before-and-after safety and operational impacts of case study conversions located throughout the United States and Iowa. Past research confirms that this type of conversion is acceptable or feasible in some situations but, for the most part, fails to specifically identify those situations. In general, the reviewed case study conversions resulted in a reduction of average or 85th percentile speeds (typically less than five miles per hour), a relatively dramatic reduction in excessive speeding (a 60 to 70 percent reduction in the number of vehicles traveling five miles per hour faster than the posted speed limit was measured in two cases), and a reduction in total crashes (reductions of 17 to 62 percent were measured).
The 13 roadway conversions considered had average daily traffic volumes of 8,400 to 14,000 vehicles per day (vpd) in Iowa and 9,200 to 24,000 vpd elsewhere. In addition to past research and case study results, a simulation sensitivity analysis was completed to investigate and/or confirm the operational impacts of a four-lane undivided to three-lane conversion. First, the advantages and disadvantages of different corridor simulation packages were identified for this type of analysis. Then, the CORridor SIMulation (CORSIM) software was used to investigate and evaluate several characteristics related to the operational feasibility of a four-lane undivided to three-lane conversion. Simulated speed and level of service results for both cross sections were documented for different total peak-hour traffic volumes, access densities, and access-point left-turn volumes (for a case study corridor defined by the researchers). These analyses helped identify the considerations relevant to determining the operational feasibility of a four-lane to three-lane conversion. The results of the simulation analyses primarily confirmed the case study impacts. The CORSIM results indicated that only a slight decrease in average arterial speed for through vehicles can be expected for a large range of peak-hour volumes, access densities, and access-point left-turn volumes (given the assumptions and design of the corridor case study evaluated). Typically, the reduction in the simulated average arterial speed (which includes both segment and signal delay) was between zero and four miles per hour when a roadway was converted from a four-lane undivided to a three-lane cross section. The simulated arterial level of service for a converted roadway, however, showed a decrease when the bi-directional peak-hour volume reached about 1,750 vehicles per hour (or 17,500 vehicles per day if 10 percent of the daily volume is assumed to occur in the peak hour).
Past research by others, however, indicates that 12,000 vehicles per day may be the operational capacity (i.e., level of service E) of a three-lane roadway due to vehicle platooning. The simulation results, along with past research and case study results, appear to support the following volume-related feasibility suggestions for four-lane undivided to three-lane cross section conversions. It is recommended that a four-lane undivided to three-lane conversion be considered a feasible (with respect to volume only) option when bi-directional peak-hour volumes are less than 1,500 vehicles per hour, but that some caution be exercised when the roadway has a bi-directional peak-hour volume between 1,500 and 1,750 vehicles per hour. At and above 1,750 vehicles per hour, the simulation indicated a reduction in arterial level of service. Therefore, at least in Iowa, the feasibility of a four-lane undivided to three-lane conversion should be questioned and/or considered much more closely when a roadway has (or is expected to have) a peak-hour volume of more than 1,750 vehicles. Assuming that 10 percent of the daily traffic occurs during the peak hour, these volume recommendations correspond to 15,000 and 17,500 vehicles per day, respectively. These suggestions, however, are based on the results from one idealized case-study corridor analysis. Individual operational analyses and/or simulations should be completed in detail once a four-lane undivided to three-lane cross section conversion is considered feasible (based on the general suggestions above) for a particular corridor. All of the simulations completed as part of this project also incorporated the optimization of signal timing to minimize vehicle delay along the corridor. A number of feasibility determination factors were identified from a review of the past research, the before-and-after case study results, and the simulation sensitivity analysis.
The existing and expected (i.e., design-period) statuses of these factors are described and should be considered. The characteristics of these factors should be compared to each other, to the impacts of other potentially feasible cross section improvements, and to the goals/objectives of the community. The factors discussed in these guidelines include

• roadway function and environment
• overall traffic volume and level of service
• turning volumes and patterns
• frequent-stop and slow-moving vehicles
• weaving, speed, and queues
• crash type and patterns
• pedestrian and bike activity
• right-of-way availability, cost, and acquisition impacts
• general characteristics, including
  - parallel roadways
  - offset minor street intersections
  - parallel parking
  - corner radii
  - at-grade railroad crossings

The characteristics of these factors are documented in these guidelines, and their relationship to four-lane undivided to three-lane cross section conversion feasibility is identified. This information is summarized, along with some evaluative questions, in this executive summary and Appendix C. In summary, the results of past research, numerous case studies, and the simulation analyses done as part of this project support the conclusion that in certain circumstances a four-lane undivided to three-lane conversion can be a feasible alternative for the mitigation of operational and/or safety concerns. This feasibility, however, must be determined by an evaluation of the factors identified in these guidelines (along with any others that may be relevant for an individual corridor). The expected benefits, costs, and overall impacts of a four-lane undivided to three-lane conversion should then be compared to the impacts of other feasible alternatives (e.g., adding a raised median) at a particular location.
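The volume-related suggestions above can be expressed as a small screening function. A minimal sketch using the report's own thresholds (1,500 and 1,750 vehicles per hour) and its assumption that 10 percent of daily traffic occurs in the peak hour; real evaluations would also weigh the non-volume factors listed:

```python
# Volume-only feasibility screen for a 4-lane-undivided -> 3-lane
# conversion, using the report's thresholds and K = 0.10
# (peak-hour share of daily traffic).
K_FACTOR = 0.10

def screen(adt_vpd):
    peak = adt_vpd * K_FACTOR  # bi-directional peak-hour volume (vph)
    if peak < 1500:
        return "feasible (volume only)"
    if peak < 1750:
        return "use caution"
    return "question feasibility; detailed analysis needed"

print(screen(12000))  # feasible (volume only)
print(screen(16000))  # use caution
print(screen(17500))  # question feasibility; detailed analysis needed
```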


1.1. Kidney transplantation. Organ transplantation revolutionized medicine. It has always fueled fantasies and dreams: the practice is ancestral, going back to the 3rd century, when Saints Cosmas and Damian are said to have grafted the leg of a Moor onto a patient for the first time. It nevertheless took until the 20th century for transplantation to be carried out successfully in a more concrete way and to become widespread. In Vienna in 1902, Dr. Ulmann (1861-1937) performed the very first kidney autograft, on a dog: he reimplanted the animal's kidney in the neck, performing a vascular anastomosis. Attempts then multiplied, and shortly afterwards Dr. von Decastello performed the first dog-to-dog transplantation. Later, in association with Dr. Ulmann, the first graft between a dog and a goat took place, with some success: it allowed the recipient animal to produce urine. The major advance of the early century was the development of a new vascular suturing technique by Dr. Carrel, who received the Nobel Prize in 1912. His student, Dr. Jaboulay (1860-1913), made several attempts at renal xenografts. In 1906 he performed the first two xenografts, using a pig and a goat as donors; the grafts were placed in the thigh and the arm of the patients, respectively, and renal function lasted one hour. In 1909 Ernest Unger (1875-1938) transplanted a fox terrier kidney into a boxer, with urine production for 14 days. The same year, Dr. Unger performed a xenograft by transplanting a newborn's kidney into a baboon, an intervention that ended with the animal's death. Another attempt at a monkey-to-human graft, performed on a woman dying of renal failure, made Unger realize that there are biological barriers to transplantation, but that kidney grafting is technically feasible. In 1914, J.B. Murphy discovered the importance of the spleen and bone marrow in the immune response. In 1933 and 1949, in Ukraine, the first human kidney allografts were performed by the Soviet surgeon Yu Yu Voronoy; unfortunately, no function of the grafted kidneys was observed. After a period of general "scientific stagnation" lasting about ten years, interest in transplantation resurfaced in the 1950s. Two surgical teams formed, one in Boston and the other in Paris. Numerous human allografts without immunosuppression were documented from 1950 to 1953; unfortunately, each operation ended in failure because of rejection. M. Simonsen and W.J. Dempster discovered that an immune mechanism underlies rejection. They also established that the pelvic position of the graft was better than a more superficial one. Thanks to discoveries in the field of rejection and numerous technical advances, an allograft between identical twins was performed in Boston in 1954. The operation was a complete success and refuted all the negative hypotheses advanced by certain working groups. From 1948 onwards, much work was undertaken in the field of immunosuppression. The discovery of the immunosuppressive action of cortisone led to its use in anti-rejection treatment, unfortunately with little success. Total irradiation remained the method of choice until 1962, when azathioprine (Imuran®) appeared. The discovery of azathioprine made it possible to advance new hypotheses about rejection: by avoiding acute postoperative rejection, protection of and adaptation to the recipient might be modulated by immunosuppression. In the 1960s, the appearance of synthetic immunosuppressants allowed new lines of treatment to be developed.
Dr. Starzl and colleagues discovered the efficacy of a combined treatment of prednisone and azathioprine, which became the standard post-transplant immunosuppression of that period. The 1960s and 70s were years of optimism. Patient management improved, the development of dialysis made it possible to keep patients alive before transplantation, organ preservation techniques improved, and transplantation broadened its scope with the first heart transplant in 1968. Tissue typing made it possible to determine HLA type and donor-recipient compatibility in order to minimize the risk of acute rejection. The 1970s were marked by two major improvements: HLA-DR typing and the appearance of the calcineurin inhibitors (cyclosporine A). The latter remained the first-choice agent until around the 1990s, when new immunosuppressants appeared, such as the mTOR inhibitors (sirolimus) and the inosine monophosphate dehydrogenase inhibitors (mycophenolate mofetil). In conclusion, kidney transplantation was one of the first solid-organ transplantations performed in humans, with numerous clinical trials involving a multitude of actors. Despite ups and downs, the technical advances were notable, which has been very favorable in terms of survival for patients requiring a transplant. 1.2. Lymphocele. Kidney transplantation, like any other surgical procedure, carries risks and a specific morbidity. Lymphocele has the highest prevalence, ranging from 0.6 to 51%1-3 with variations between studies. A lymphocele is defined as a postoperative collection of lymphatic fluid in a non-epithelialized cavity that is not caused by a urine leak or hemorrhage1, 4.
Historically, lymphocele was first described in the medical literature in the 1950s, by Kobayashi and Inoue5 in gynecologic surgery. Mori et al.6 then documented the first analyzed series of lymphoceles in 1960. In 1969 lymphocele was first described by Inociencio et al.7 as a complication of kidney transplantation. Its pathogenesis is not fully elucidated, but several risk factors have been identified: inadequate ligation of lymphatic vessels during dissection of the donor iliac vessels and preparation of the graft, BMI, diuretics, anticoagulation (heparin), high-dose steroids, certain immunosuppressive agents (sirolimus), diabetes, wound-healing problems, hypoalbuminemia, prior retroperitoneal surgery, and acute graft rejection (Table 1). Symptoms may be present or absent; they follow directly from the location and size of the collection8, 9, 10. When a lymphocele is asymptomatic, it is discovered incidentally during post-transplant follow-up11, 12, clinically or by ultrasound. A non-significant lymphocele requires no treatment. Conversely, once it reaches a certain size it exerts a mass and compression effect that produces symptoms. These are nonspecific and appear on average between 2 weeks and 6 months13 after transplantation. The patient may present with anything from simple abdominal pain to lower-limb edema or, more rarely, a deep vein thrombosis as the only sign of the lymphocele14, 15. Elevated creatinine values, indicating renal compromise, are observed most of the time. Lymphocele can be diagnosed by several techniques.
The most widely used is fine-needle aspiration under ultrasound guidance4. Analysis of the aspirated fluid distinguishes a lymphocele from a urinoma. Other techniques include aspiration after indigo carmine injection15, intravenous pyelography and lymphangiography16, CT, and MRI15. Blood IL-6 and IL-8 levels are sometimes used to determine whether the lymphocele is infected15. When a symptomatic collection appears, the transplanted kidney may be at risk and treatment must be undertaken. At present there is no universal solution for preventing and treating this type of complication. The options are multiple and depend mainly on the location and size of the collection. For many years the only treatment of lymphocele was simple percutaneous aspiration, which however leads to a recurrence rate of almost 100%17. This technique remains in use mainly for diagnostic purposes18, 19, 20, 21 or for short-term relief of patients15. To improve its efficacy, sclerosing agents such as ethanol, povidone-iodine, tetracycline, doxycycline, or fibrin glue have been used; surgical complications, up to and including graft rejection, have however been reported22. Laparoscopic fenestration was first described in 1991 by McCullough et al.23 and remains today the most widely used technique for the treatment of lymphocele. It has many advantages: a short convalescence, minimal blood loss, and rapid resumption of feeding24, 25, with virtually no recurrence after treatment11, 26. Radiological evaluation is very important, because laparoscopic marsupialization is limited by the location and volume of the collection.
Thus, this type of treatment is avoided when the collection lies posteriorly, close to the bladder, the ureter, or the renal hilum. In these situations laparotomy is required, despite the increased morbidity associated with that technique24. The current aim is a universal technique for treating lymphoceles with the least invasive surgery and the lowest possible recurrence rate; despite its limits, laparoscopic fenestration appears to be a very good solution. This study is a retrospective evaluation of the surgical treatments of this postoperative complication of kidney transplantation at the CHUV (Centre Hospitalier Universitaire Vaudois) from 2003 to 2011. The aim is to survey and analyze the different techniques currently reported in the literature and thus to propose an ideal technique for the CHUV.


ABSTRACT A detailed protocol for the chemical clearing of bee specimens is presented. Dry specimens as well as those preserved in liquid media can be cleared with this protocol. The procedure consists of the combined use of an alkaline solution (KOH or NaOH) and hydrogen peroxide (H2O2), followed by boiling the cleared specimens in 60-70% EtOH. Clearing is particularly useful for internal skeletal morphological research: it allows efficient study of internal projections of the exoskeleton (e.g., apodemes, furcae, phragmata, tentoria, internal ridges, and sulci), while also making external features of the integument, such as some sutures and sulci, readily available for observation. Upon completion of the chemical clearing process, specimens can be stored in glycerin. The procedure was developed and evaluated for the preparation of bees and other Apoidea, but adapting it to other insect taxa should be straightforward after some experimentation with the timing of steps, solution concentrations, temperatures, and the necessity of a given step. Comments on the long-term storage, morphological examination, and photodocumentation of cleared specimens are also provided.


The fate of two forms of 15N-labeled nitrogen fertilizer, ammonium sulfate and urea, was studied under greenhouse conditions in Nancy (France) in 1992, in samples of the A horizon of two major soils of Central Amazonia, classified as a yellow Latosol and a red-yellow Podzol. The test plant was Italian ryegrass (Lolium multiflorum L.). In both soils, urea was better utilized than ammonium sulfate: between 60 and 70% of the N applied as urea was taken up by the plant, whereas with ammonium sulfate these values ranged between 44 and 49%. The 15N balance at the end of the crop cycle showed that N immobilization in both soils was higher with urea than with ammonium sulfate. Losses, estimated by difference, were higher in the ammonium sulfate treatment. Since leaching losses were practically nil with the cultivation technique used, the losses must have occurred essentially through the gaseous pathway.
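The "losses estimated by difference" computation described above can be sketched as follows; the numbers are illustrative placeholders broadly consistent with the reported ranges, not the study's actual data:

```python
def n_loss_by_difference(applied: float, plant_uptake: float, soil_immobilized: float) -> float:
    """15N unaccounted for = applied - recovered in plant - recovered (immobilized) in soil.

    All quantities in the same units (e.g. kg N/ha or % of applied N).
    """
    return applied - plant_uptake - soil_immobilized

# Illustrative: 100 units of N applied as urea, 65 recovered in the plant,
# 25 immobilized in the soil -> 10 unaccounted for (assumed gaseous loss)
loss = n_loss_by_difference(100.0, 65.0, 25.0)
print(loss)  # → 10.0
```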


The objective of this work was to determine the interrill erodibility (Ki) of three clayey soils: a dark-red Podzol (PE), a red-yellow Podzol (PV), and a dusky-red Latosol (LR), in order to provide input for the WEPP (Water Erosion Prediction Project) model in the Lavras region (MG, Brazil), as well as to study the relationship between this parameter and some physical, chemical, and mineralogical soil attributes. A completely randomized block design was adopted, with the three soils, four slopes (15, 25, 35, and 45%), five simulated rainfall intensities (60, 50, 70, 90, and 120 mm h-1), pre-wetting of the plots, and five replicates. Ki values were determined from the mean rainfall intensities, the plot slopes, and the mean interrill erosion rates at the various runoff collection times of each applied rainfall. The results showed that PV presented the highest interrill erodibility (Ki), followed by PE and LR. The interrill erodibilities determined were 4.67 × 10^5 kg s m-4 for PE, 6.85 × 10^5 kg s m-4 for PV, and 3.38 × 10^5 kg s m-4 for LR. The soil attributes best correlated with interrill erodibility were the iron oxide and kaolinite contents, water-dispersible clay, total pore volume, bulk and particle densities, organic matter content, and the content of aggregates < 0.105 mm in diameter.
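In the WEPP framework, Ki is commonly back-calculated from the interrill erosion equation Di = Ki · I² · Sf, with the slope factor Sf = 1.05 − 0.85·exp(−4·sin θ). A minimal sketch under that assumption (the study may have used a variant of this relation, and the input values below are placeholders, not the study's data):

```python
import math

def slope_factor(slope_fraction: float) -> float:
    """WEPP interrill slope adjustment: Sf = 1.05 - 0.85 * exp(-4 * sin(theta))."""
    theta = math.atan(slope_fraction)  # slope given as rise/run, e.g. 0.25 for 25%
    return 1.05 - 0.85 * math.exp(-4.0 * math.sin(theta))

def interrill_ki(di: float, intensity: float, slope_fraction: float) -> float:
    """Back-calculate Ki (kg s m-4) from Di = Ki * I^2 * Sf.

    di: interrill erosion rate (kg m-2 s-1); intensity: rainfall intensity (m s-1).
    """
    return di / (intensity ** 2 * slope_factor(slope_fraction))

# Illustrative: I = 90 mm/h on a 25% slope with Di = 2e-4 kg m-2 s-1 (placeholders)
i_ms = 90.0 / 1000.0 / 3600.0  # mm/h -> m/s
ki = interrill_ki(2e-4, i_ms, 0.25)  # lands in the 10^5 kg s m-4 range reported
```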


The objective of this work was to study the physical and chemical attributes of substrates containing different doses of biosolid (BIO) and carbonized rice husk (CAC), in order to obtain a growth medium suitable for seedling development. Using biosolid from the SABESP treatment plant in Franca (SP), a trial was set up with the following treatments (BIO/CAC proportions): 100/0, 90/10, 80/20, 70/30, 60/40, 50/50, 40/60, 30/70, 20/80, 10/90, and 0/100, which were compared with the commercial substrate Multiplant®. Physical attributes were determined (substrate bulk density, macro- and microporosity, total porosity, and maximum water-holding capacity), as well as chemical attributes (total macro- and micronutrient contents, pH, C/N ratio, and electrical conductivity). Increasing the BIO dose in the substrate raised its density and its percentage of micropores and, consequently, its water-holding capacity. The BIO showed reasonable nutrient contents, notably of N and P, but low K contents. No heavy-metal contents above the limits established by Brazilian legislation were detected in the biosolid used. Comparing the values considered adequate for seedling development in the literature with those obtained in this work, the substrates whose biosolid doses ranged from 30 to 60% fell within the adequate range. No substrate tested, including the commercial substrate treatment, showed ideal values for all the properties studied.
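The screening step reported at the end (flagging which BIO/CAC mixtures fall within the 30-60% biosolid range considered adequate in the literature) amounts to a simple filter over the treatment list; a minimal sketch using the proportions given in the abstract:

```python
# BIO/CAC proportions tested in the trial (percent biosolid, percent carbonized husk)
treatments = [(100, 0), (90, 10), (80, 20), (70, 30), (60, 40), (50, 50),
              (40, 60), (30, 70), (20, 80), (10, 90), (0, 100)]

# Biosolid dose range considered adequate in the literature (per the abstract)
ADEQUATE_MIN, ADEQUATE_MAX = 30, 60

# Keep only the mixtures whose biosolid dose falls in the adequate range
adequate = [(bio, cac) for bio, cac in treatments
            if ADEQUATE_MIN <= bio <= ADEQUATE_MAX]
print(adequate)  # → [(60, 40), (50, 50), (40, 60), (30, 70)]
```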


Graduate-level training of human resources in the space field in Brazil began in the 1960s, at the Instituto Nacional de Pesquisas Espaciais. In order to build a body of specialists to carry out its activities, the institute initially relied on a structure that combined sending researchers abroad with the creation of graduate programs. With a view to analyzing the strategies adopted to establish this core for training masters and doctors, this paper presents the institute's graduate activities, focusing on the 1960s and 1970s. A profile of the graduates of the Astrophysics and Space Geophysics programs trained up to 2005 is also presented, with the aim of identifying and analyzing part of the results achieved by this training core.