894 results for class III cells


Relevance:

80.00%

Abstract:

The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on the patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                              Class I           Class II         Class III              Class IV
Blood loss (ml)               Up to 750         750–1500         1500–2000              >2000
Blood loss (% blood volume)   Up to 15%         15–30%           30–40%                 >40%
Pulse rate (BPM)              <100              100–120          120–140                >140
Systolic blood pressure       Normal            Normal           Decreased              Decreased
Pulse pressure                Normal or ↑       Decreased        Decreased              Decreased
Respiratory rate              14–20             20–30            30–40                  >35
Urine output (ml/h)           >30               20–30            5–15                   Negligible
CNS/mental status             Slightly anxious  Mildly anxious   Anxious, confused      Confused, lethargic
Initial fluid replacement     Crystalloid       Crystalloid      Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min⁻¹ postulated by ATLS. Moreover, the different parameters do not necessarily deteriorate in parallel, as the ATLS shock classification suggests [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation based on the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

Grade                    Vitals                           Response to fluid bolus (1000 ml)    Estimated blood loss (ml)
Normal (no haemorrhage)  Normal                           NA                                   None
Class I (mild)           Normal                           Yes, no further fluid required       Up to 750
Class II (moderate)      HR >100 with SBP >90 mmHg        Yes, no further fluid required       750–1500
Class III (severe)       SBP <90 mmHg                     Requires repeated fluid boluses      1500–2000
Class IV (moribund)      SBP <90 mmHg or imminent arrest  Declining SBP despite fluid boluses  >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: decrease of cardiac output, increase of heart rate and decrease of pulse pressure occur first; hypotension and bradycardia occur only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern but will also continue to mention the exceptions and pitfalls at a second stage. The ATLS shock classification is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension), as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage (advanced age, athletes, pregnancy, medications and pacemakers) and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but they must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.
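Because the editorial's argument turns on how Table 1 forces continuous vital signs into four discrete classes, a literal encoding makes the criticized rigidity concrete. The sketch below is our own illustrative Python, not ATLS material; the function names and the choice to classify on a single parameter at a time are assumptions for demonstration only.

```python
# Hypothetical, simplified encoding of the ATLS Table 1 ranges shown above.
# Classifying on one parameter at a time illustrates the problem the TARN
# analysis exposed: different parameters can point to different classes.

def shock_class_from_pulse(pulse_bpm: float) -> int:
    """ATLS shock class (1-4) suggested by pulse rate alone (Table 1)."""
    if pulse_bpm < 100:
        return 1
    if pulse_bpm <= 120:
        return 2
    if pulse_bpm <= 140:
        return 3
    return 4

def shock_class_from_blood_loss(loss_ml: float) -> int:
    """ATLS shock class (1-4) suggested by estimated blood loss (Table 1)."""
    if loss_ml <= 750:
        return 1
    if loss_ml <= 1500:
        return 2
    if loss_ml <= 2000:
        return 3
    return 4

# A patient matching the TARN finding (severe blood loss, heart rate well
# below 140/min) is assigned two different classes by the two rows:
print(shock_class_from_pulse(95))         # 1
print(shock_class_from_blood_loss(2500))  # 4
```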

Relevance:

80.00%

Abstract:

AIM To systematically search the literature and assess the available evidence for the influence of chin-cup therapy on the temporomandibular joint regarding morphological adaptations and the appearance of temporomandibular disorders (TMD). MATERIALS AND METHODS Electronic database searches of published and unpublished literature were performed. The following electronic databases were searched with no language or publication date restrictions: MEDLINE (via Ovid and PubMed), EMBASE (via Ovid), the Cochrane Oral Health Group's Trials Register, and CENTRAL. Unpublished literature was searched on ClinicalTrials.gov, the National Research Register, and the ProQuest Dissertation Abstracts and Thesis database. The reference lists of all eligible studies were checked for additional studies. Two review authors performed data extraction independently and in duplicate using data collection forms. Disagreements were resolved by discussion or the involvement of an arbiter. RESULTS Of the 209 articles identified, 55 papers were considered eligible for inclusion in the review. Following the full-text reading stage, 12 studies qualified for the final review analysis. No randomized clinical trial was identified. Eight of the included studies were of prospective and four of retrospective design. All studies were assessed for quality and eventually graded as providing low to medium levels of evidence. Based on the reported evidence, chin-cup therapy affects the condylar growth pattern, even though two studies reported no significant changes in disc position and arthrosis configuration. Concerning the incidence of TMD, it can be concluded from the available evidence that chin-cup therapy constitutes no risk factor for TMD. CONCLUSION Based on the available evidence, chin-cup therapy for Class III orthodontic anomaly seems to induce craniofacial adaptations. Nevertheless, there are insufficient or low-quality data in the orthodontic literature to allow the formulation of clear statements regarding the influence of chin-cup treatment on the temporomandibular joint.

Relevance:

80.00%

Abstract:

OBJECTIVE Parametrial involvement (PMI) is one of the most important factors influencing prognosis in patients with locally advanced cervical cancer (LACC). We aimed to evaluate the PMI rate among LACC patients undergoing neoadjuvant chemotherapy (NACT), thus evaluating the utility of parametrectomy in tailoring adjuvant treatments. METHODS Retrospective evaluation of 275 consecutive patients affected by LACC (IB2–IIB) undergoing NACT followed by type C/class III radical hysterectomy. Basic descriptive statistics and univariate and multivariate analyses were applied in order to identify factors predicting PMI. Survival outcomes were assessed using Kaplan-Meier and Cox models. RESULTS PMI was detected in 37 (13%) patients: it was associated with vaginal involvement, lymph node positivity, or both in 10 (4%), 5 (2%) and 12 (4%) patients, respectively, while PMI alone was observed in only 10 (4%) patients. Among this latter group, adjuvant treatment was delivered in 3 (1%) patients on the basis of PMI alone, while the remaining patients had other characteristics driving adjuvant treatment. Considering factors predicting PMI, we observed that only suboptimal pathological response (OR: 1.11; 95% CI: 1.01, 1.22) and vaginal involvement (OR: 1.29; 95% CI: 1.17, 1.44) were independently associated with PMI. PMI did not correlate with survival (HR: 2.0; 95% CI: 0.82, 4.89), while clinical response to NACT (HR: 3.35; 95% CI: 1.59, 7.04), vaginal involvement (HR: 2.38; 95% CI: 1.12, 5.02) and lymph node positivity (HR: 3.47; 95% CI: 1.62, 7.41) independently correlated with worse survival outcomes. CONCLUSIONS Our data suggest that PMI had a limited role in the choice to administer adjuvant treatment, thus supporting the potential adoption of less radical surgery in LACC patients undergoing NACT. Further prospective studies are warranted.
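For readers who want to see the shape of the survival analysis named in the Methods, here is a minimal sketch of a Cox proportional-hazards fit over covariates like those reported (PMI, vaginal involvement, nodal status). The lifelines package, the column names and the toy data are our assumptions for illustration, not the authors' actual code or data.

```python
# Minimal sketch of a Cox proportional-hazards analysis of the kind described
# above; package choice, column names and data are hypothetical, not the study's.
import pandas as pd
from lifelines import CoxPHFitter

toy = pd.DataFrame({
    "months":        [12, 48, 60,  9, 30, 55, 22, 40],  # follow-up time
    "died":          [ 1,  0,  0,  1,  1,  0,  1,  0],  # event indicator
    "pmi":           [ 1,  0,  1,  1,  0,  0,  0,  0],  # parametrial involvement
    "vaginal_inv":   [ 1,  1,  0,  1,  1,  0,  0,  0],  # vaginal involvement
    "node_positive": [ 0,  0,  1,  1,  1,  0,  1,  0],  # lymph node positivity
})

cph = CoxPHFitter()
cph.fit(toy, duration_col="months", event_col="died")
cph.print_summary()  # the exp(coef) column reports hazard ratios with 95% CIs
```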

Relevance:

80.00%

Abstract:

Obesity is a complex multifactorial disease and a public health priority. Perilipin coats the surface of lipid droplets in adipocytes and is believed to stabilize these lipid bodies by protecting triglyceride from early lipolysis. This research project evaluated the association between genetic variation within the human perilipin (PLIN) gene and obesity-related quantitative traits and disease-related phenotypes in Non-Hispanic White (NHW) and African American (AA) participants from the Atherosclerosis Risk in Communities (ARIC) Study. Multivariate linear regression, multivariate logistic regression, and Cox proportional hazards models evaluated the association between single gene variants (rs2304794, rs894160, rs8179071, and rs2304795) and multilocus variation (rs894160 and rs2304795) within the PLIN gene and both obesity-related quantitative traits (body weight, body mass index [BMI], waist girth, waist-to-hip ratio [WHR], estimated percent body fat, and plasma total triglycerides) and disease-related phenotypes (prevalent obesity, metabolic syndrome [MetS], prevalent coronary heart disease [CHD], and incident CHD). Single-variant analyses were stratified by race and by gender within race, while multilocus analyses were stratified by race. Single-variant analyses revealed that rs2304794 and rs894160 were significantly related to plasma triglyceride levels in all NHWs and in NHW women. Among AA women, variant rs8179071 was associated with triglyceride levels, and rs2304794 was associated with risk-raising waist-to-hip ratio (>0.8 in women). The multilocus effects of variants rs894160 and rs2304795 were significantly associated with body weight, waist girth, WHR, estimated percent body fat, class II obesity (BMI ≥ 35 kg/m²), class III obesity (BMI ≥ 40 kg/m²), and risk-raising WHR (>0.9 in men and >0.8 in women) in AAs. Variant rs2304795 was significantly related to prevalent MetS among AA males and to prevalent CHD in NHW women; multilocus effects of the PLIN gene were associated with prevalent CHD among NHWs. Variant rs2304794 was associated with incident CHD in the absence of MetS among AAs. These findings support the hypothesis that variation within the PLIN gene influences obesity-related traits and disease-related phenotypes. Understanding the effects of PLIN genotype on the development of obesity can potentially lead to more effective, tailored health promotion interventions.

Relevance:

80.00%

Abstract:

Pulmonary fibrosis (PF) is the result of a variety of environmental and cancer-treatment-related insults and is characterized by excessive deposition of collagen. Gas exchange in the alveoli is impaired as the normal lung becomes dense and collapsed, leading to a loss of lung volume. It is now accepted that lung injury and fibrosis are in part genetically regulated. Bleomycin is a chemotherapeutic agent used for testicular cancer and lymphomas that induces significant pulmonary toxicity. We delivered bleomycin to mice subcutaneously via a mini-osmotic pump in order to elicit lung injury (LI) and quantified the %LI morphometrically using video imaging software. We previously identified a quantitative trait locus, Blmpf-1 (LOD = 17.4), in the Major Histocompatibility Complex (MHC), but the exact genetic components involved have remained unknown. In the current studies, Blmpf-1 was narrowed to an interval spanning 31.9–32.9 Mb on chromosome 17 using MHC congenic mice. This region includes the MHC Class II and III genes and is flanked by the TNF-alpha super locus and the MHC Class I genes. Knockout mice lacking MHC Class I genes (B2mko), MHC Class II genes (Cl2ko), TNF-alpha (TNF-/-), or the TNF receptors (p55-/-, p75-/-, and p55/p75-/-) were treated with bleomycin in order to ascertain the role of these genes in the pathogenesis of lung injury. Cl2ko mice had significantly better survival and lower %LI compared with treated background BL/6 mice (B6; P<.05). In contrast, B2mko mice showed no differences in survival or %LI compared with B6. This suggests that the MHC Class II locus contains susceptibility genes for bleomycin-induced lung injury. TNF-alpha, a Class III gene, was examined, and it was found that TNF-/- and p55-/- mice had higher %LI and lower survival compared with B6 (P<.05). In contrast, p75-/- mice had significantly reduced %LI compared with TNF-/-, p55-/-, and B6 mice, as well as higher survival (P<.01). These data contradict the current paradigm that TNF-alpha is a profibrotic mediator of lung injury and suggest novel and distinct roles for the p55 and p75 receptors in mediating lung injury.

Relevance:

80.00%

Abstract:

Baseline elevation of troponin I (TnI) has been associated with worse outcomes in heart failure (HF). However, the prevalence of persistent TnI elevation and its association with clinical outcomes have not been well described. HF is a major public health issue due to its wide prevalence, and prognosticators of this condition have a significant public health impact. Methods: A retrospective study was performed in 510 patients with an initial HF admission between 2002 and 2004; all subsequent hospital admissions up to May 2009 were recorded in a de-identified database. Persistent TnI elevation was defined as a level ≥0.05 ng/ml on ≥3 HF admissions. Baseline characteristics, hospital readmissions and all-cause mortality were compared between patients with persistent TnI elevation (Persistent), patients without persistence of TnI (Nonpersistent) and patients with fewer than three hospital admissions (Admissions <3). The same data were also analyzed using the mean method, in which the mean of all recorded troponin values for each patient was used to define persistence, i.e. patients with a mean troponin level ≥0.05 ng/ml were classified as persistent. Results: Mean age of the cohort was 68.4 years; 99.6% of subjects were male, 62.4% had ischemic HF, 78.2% had NYHA class III to IV HF, and mean LVEF was 25.9%. Persistent elevation of TnI was seen in 26% of the cohort and in 66% of patients with three or more hospital admissions. Mean TnI was 0.67 ± 0.15 ng/ml in the Persistent group; with the mean method, the mean TnI was 1.11 ± 7.25 ng/ml. LVEF was significantly lower in the Persistent group. Hypertension, diabetes, chronic renal insufficiency and mean age did not differ between the two groups. Persistent patients had higher mortality (HR = 1.26, 95% CI = 0.89–1.78, p = 0.199 unadjusted; HR = 1.29, 95% CI = 0.89–1.86, p = 0.176 adjusted for race, LVEF and ischemic etiology); using the mean method, the HR for mortality in persistent patients was 1.99 (95% CI = 1.06–3.73, p = 0.03). Among patients with ischemic cardiomyopathy, the corresponding results were HR = 1.44 (95% CI = 0.92–2.26, p = 0.113) and, using the mean method, HR = 1.89 (95% CI = 1.01–3.55, p = 0.046). Two out of three HF patients who were readmitted three or more times had persistent elevation of TnI. Patients with chronic persistence of TnI elevation showed a trend towards lower survival compared with patients without chronic persistence, but this did not reach statistical significance; the trend was more pronounced among ischemic than non-ischemic patients, again without reaching statistical significance. With the mean method, patients with chronic persistence of TnI elevation had significantly lower survival than those without it, and ischemic patients had significantly lower survival than non-ischemic patients.
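The study compares two operational definitions of "persistent" TnI elevation, which are easy to conflate. The short sketch below (our own illustrative code with hypothetical values, not study data) shows how the two definitions can disagree for the same patient.

```python
# Illustrative encoding of the two "persistent troponin elevation" definitions
# compared above (function names and sample values are ours, not the study's).

THRESHOLD_NG_ML = 0.05  # TnI cutoff used in the study

def persistent_by_admissions(tni_per_admission: list[float]) -> bool:
    """Admission method: TnI >= 0.05 ng/ml on three or more HF admissions."""
    return sum(v >= THRESHOLD_NG_ML for v in tni_per_admission) >= 3

def persistent_by_mean(tni_per_admission: list[float]) -> bool:
    """Mean method: mean of all recorded TnI values >= 0.05 ng/ml."""
    return sum(tni_per_admission) / len(tni_per_admission) >= THRESHOLD_NG_ML

values = [0.02, 0.90, 0.03, 0.04]        # one large spike, otherwise low
print(persistent_by_admissions(values))  # False -- only one elevated admission
print(persistent_by_mean(values))        # True  -- the spike dominates the mean
```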

Relevance:

80.00%

Abstract:

The Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) play key roles in making Class III medical devices available to the public, and they are required by law to meet statutory deadlines for applications under review. Historically, both agencies have failed to meet their respective statutory requirements. Since these failures affect patient access and may adversely impact public health, Congress has enacted several "modernization" laws. However, the effectiveness of these modernization laws has not been adequately studied or established for Class III medical devices. The aim of this research study was, therefore, to analyze how these modernization laws may have affected public access to medical devices. Two questions were addressed: (1) How have the FDA modernization laws affected the time to approval for medical device premarket approval applications (PMAs)? (2) How has the CMS modernization law affected the time to approval for national coverage decisions (NCDs)? The data for this research study were collected from publicly available databases for the period January 1, 1995, through December 31, 2008. These dates were selected to ensure that a sufficient period of time was captured to measure pre- and post-modernization effects on time to approval. All records containing original PMAs were obtained from the FDA database, and all records containing NCDs were obtained from the CMS database. Source documents, including FDA premarket approval letters and CMS national coverage decision memoranda, were reviewed to obtain additional data not found in the search results. Analyses were conducted to determine the effects of the pre- and post-modernization laws on time to approval. Secondary analyses of FDA subcategories were conducted to uncover any causal factors that might explain differences in time to approval and to compare with the primary trends. The primary analysis showed that the FDA modernization laws of 1997 and 2002 initially reduced PMA time to approval; after the 2002 modernization law, the time to approval began increasing and continued to increase through December 2008. The non-combined subcategory approval trends were similar to the primary analysis trends. The combined subcategory analysis showed no clear trends, with the exception of non-implantable devices, for which time to approval trended down after 1997. The CMS modernization law of 2003 reduced NCD time to approval, a trend that continued through December 2008. This study also showed that approximately 86% of PMA devices do not receive NCDs. As a result of this research study, recommendations are offered to help resolve statutory non-compliance and access issues, as follows: (1) authorities should examine underlying causal factors for the observed trends; (2) process improvements should be made to better coordinate FDA and CMS activities, including sharing data, reducing duplication, and establishing clear criteria for "safe and effective" and "reasonable and necessary"; (3) a common identifier should be established to allow tracking and trending of applications between the FDA and CMS databases; (4) statutory requirements may need to be revised; and (5) an investigation should be undertaken to determine why NCDs are not issued for the majority of PMAs. Any process improvements should be made without creating additional safety risks or adversely impacting public health. Finally, additional studies are needed to fully characterize and better understand the trends identified in this research study.

Relevance:

80.00%

Abstract:

Chronic β-blocker treatment improves survival and left ventricular ejection fraction (LVEF) in patients with systolic heart failure (HF). Whether the improvement in LVEF after β-blocker therapy is sustained over the long term, or whether there is a loss in LVEF after an initial gain, is not known. Our study sought to determine the prevalence and prognostic role of a secondary decline in LVEF in chronic systolic HF patients on β-blocker therapy and to characterize these patients. A retrospective chart review of HF hospitalizations fulfilling the Framingham criteria was performed at the MEDVAMC between April 2000 and June 2006. Follow-up vital status and recurrent hospitalizations were ascertained until May 2010. Three groups of patients were identified based on LVEF response to β-blockers: group A, with a secondary decline in LVEF following an initial increase; group B, with a progressive increase in LVEF; and group C, with a progressive decline in LVEF. Covariate-adjusted Cox proportional hazards models were used to examine differences in HF re-hospitalizations and all-cause mortality between the groups. Twenty-five percent (n=27) of patients had a secondary decline in LVEF following an initial gain. The baseline, peak and final LVEF in this group were 27.6±12%, 40.1±14% and 27.4±13%, respectively; the nadir after the decline was reached at a mean interval of 2.8±1.9 years from the day of β-blocker initiation. These patients were older, more likely to be white, and had advanced HF (NYHA class III/IV) more often due to a non-ischemic etiology compared with groups B and C. They were also more likely to be treated with metoprolol (p=0.03) than the other two groups. No significant difference was observed in the combined risk of all-cause mortality and HF re-hospitalization (hazard ratio 0.80, 95% CI 0.47 to 1.38, p=0.42), nor in survival estimates between the groups. In conclusion, a late decline in LVEF does occur in a significant proportion of HF patients treated with β-blockers, more so in patients treated with metoprolol.
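One way to operationalize the three verbally defined LVEF-trajectory groups is sketched below. This is our simplification (it considers only baseline, peak and final values, with an assumed 5-point threshold for a meaningful change), not the study's chart-review algorithm.

```python
# Simplified, hypothetical grouping of a serial-LVEF series into the three
# response patterns described above (A: secondary decline after initial gain,
# B: progressive increase, C: progressive decline).

def lvef_response_group(lvef_series: list[float], min_change: float = 5.0) -> str:
    """Classify serial LVEF (%) measured after beta-blocker initiation."""
    baseline, peak, final = lvef_series[0], max(lvef_series), lvef_series[-1]
    gained = peak - baseline >= min_change
    if gained and peak - final >= min_change:
        return "A: secondary decline after initial gain"
    if final - baseline >= min_change:
        return "B: progressive increase"
    if baseline - final >= min_change:
        return "C: progressive decline"
    return "no clear pattern"

print(lvef_response_group([27.6, 40.1, 27.4]))  # group A, like the means above
```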

Relevance:

80.00%

Abstract:

Background. Cancer cachexia is a common syndrome complex in cancer, occurring in nearly 80% of patients with advanced cancer and responsible for at least 20% of all cancer deaths. Cachexia is due to increased resting energy expenditure, increased production of inflammatory mediators, and changes in lipid and protein metabolism. Non-steroidal anti-inflammatory drugs (NSAIDs), by virtue of their anti-inflammatory properties, are possibly protective against cancer-related cachexia. Since cachexia is also associated with increased hospitalizations, this outcome may also show improvement with NSAID exposure. Design. In this retrospective study, computerized records from 700 patients with non-small cell lung cancer (NSCLC) were reviewed, and 487 (69.57%) were included in the final analyses. Exclusion criteria were severe chronic obstructive pulmonary disease, significant peripheral edema, class III or IV congestive heart failure, liver failure, other reasons for weight loss, or use of research or anabolic medications. Information on medication history, body weight and hospitalizations was collected from one year pre-diagnosis until three years post-diagnosis. Exposure to NSAIDs was defined as a history of treatment with NSAIDs for at least 50% of any given year in the observation period. We used t-tests and chi-square tests for statistical analyses. Results. Neither the proportion of patients with cachexia (p=0.27) nor the number of hospitalizations (p=0.74) differed between those with a history of NSAID use (n=92) and those without (n=395). Conclusions. In this study, NSAID exposure was not significantly associated with weight loss or hospital admissions in patients with NSCLC. Further studies may be needed to confirm these observations.
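The group comparison reported in the Results is a standard contingency-table test; a minimal sketch is shown below. The counts are invented for illustration (only the group sizes n=92 and n=395 come from the abstract).

```python
# Sketch of the chi-square comparison described above, using a hypothetical
# 2x2 table (cachexia yes/no by NSAID exposure); cell counts are invented.
from scipy.stats import chi2_contingency

#         cachexia  no cachexia
table = [[40, 52],     # NSAID-exposed (n=92)
         [190, 205]]   # not exposed   (n=395)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05 -> no significant difference
```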

Relevance:

80.00%

Abstract:

Compostable polymers account for around 30% of the bioplastics used for packaging, which is in turn the main destination of the production of this type of material; in 2013, that production exceeded 1.6 million tonnes. This thesis addresses the aerobic biodegradation of compostable household packaging waste for two formats and materials: a rigid PLA container (Class I) and two types of PBAT+PLA bags (Classes II and III). Several laboratory-scale studies exist on this subject, but they concern other kinds of packaging and biopolymers under controlled composting conditions, with only limited extrapolation to real plants. This thesis goes one step further and investigates the actual behaviour of compostable plastic packaging in composting practice with windrow (pile) and tunnel technologies, at both pilot and industrial scale, within the procedures and ambient conditions of specific facilities. Following the adopted method, the basic requirements that a compostable package must fulfil according to the UNE-EN 13432 standard were analysed, assessing the biodegradation percentage of the packaging under study, based on dry-weight loss after the composting process, and the quality of the compost obtained, by means of physico-chemical and phytotoxicity analyses to verify that the studied materials contribute no toxicity.

Regarding biodegradability levels, the results show that Class I packaging composts properly in both technologies and does not require very demanding process conditions to reach 100% biodegradation. Class II packaging composts properly in pile and in industrial tunnel but requires demanding conditions to reach 100% biodegradation, as it is clearly affected by the location of the samples within the composting mass, especially with tunnel technology: while 90% of the samples reached 100% biodegradation in the industrial-scale pile, only 50% did so in the tunnel at the same scale. Class III packaging composts properly in an industrial tunnel but requires somewhat demanding conditions to reach 100% biodegradation, as it can be affected by the location of the samples in the composting mass; 75% of the samples tested in the industrial-scale tunnel reached 100% biodegradation. Although this type of packaging was not tested with pile technology because no samples were available, its biodegradability results would presumably have been at least those obtained for the Class II packaging, since the two materials are very similar in composition. Finally, it is concluded that pile technology is more suitable for achieving higher biodegradation levels in PBAT+PLA bag-type packaging.

The results also suggest that, in the design of composting facilities for the treatment of separately collected organic fraction, it would be advisable to recirculate the refuse from the refining of the composted material, in order to increase the probability of exposing this kind of material to suitable environmental conditions. Shredding the waste at the entrance of the process would likewise increase the specific surface area in contact with the mass of organic matter and would therefore favour biodegradation.

Regarding the quality of the compost obtained in the tests, the physico-chemical and phytotoxicity analyses reveal that the concentrations of pathogenic microorganisms and heavy metals exceed, in practically all samples, the maximum levels permitted by the current legislation applicable to fertilizer products made from waste. Analysis of the composition of the tested packaging shows that the source of this contamination is the organic matter used for composting in the tests, which came from household waste of the so-called "rest fraction". This conclusion confirms the need for separate collection of the organic fraction at source; existing studies demonstrate the improved quality of the waste collected as the so-called "separately collected organic fraction" (FORM).
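The biodegradation percentages quoted above are derived from the dry-weight loss of the packaging samples after composting. Written out as a formula (notation ours, matching the description in the abstract, with m denoting the dry mass of the packaging sample):

```latex
% Biodegradation percentage estimated from dry-weight loss (notation ours)
\[
  B\,(\%) = \frac{m_{\mathrm{initial}} - m_{\mathrm{final}}}{m_{\mathrm{initial}}} \times 100
\]
```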

Relevance:

80.00%

Abstract:

Members of the polo subfamily of protein kinases play pivotal roles in cell-cycle control and proliferation. In addition to a high degree of sequence similarity in the kinase domain, polo kinases contain a strikingly conserved motif termed “polo-box” in the noncatalytic C-terminal domain. We have previously shown that the mammalian polo-like kinase Plk is a functional homolog of Saccharomyces cerevisiae Cdc5. Here, we show that, in a polo-box- and kinase activity-dependent manner, ectopic expression of Plk in budding yeast can induce a class of cells with abnormally elongated buds. In addition to localization at spindle poles and cytokinetic neck filaments, Plk induces and localizes to ectopic septin ring structures within the elongated buds. In contrast, mutations in the polo-box abolish both localization to, and induction of, septal structures. Consistent with the polo-box-dependent subcellular localization, the C-terminal domain of Plk, but not its polo-box mutant, is sufficient for subcellular localization. Our data suggest that Plk may contribute a signal to initiate or promote cytokinetic event(s) and that an intact polo-box is required for regulation of these cellular processes.

Relevance:

80.00%

Abstract:

Griffonia simplicifolia leaf lectin II (GSII), a plant defense protein against certain insects, consists of an N-acetylglucosamine (GlcNAc)-binding large subunit and a small subunit with sequence homology to class III chitinases. Much of the insecticidal activity of GSII is attributable to the large lectin subunit, because the bacterially expressed recombinant large subunit (rGSII) inhibited growth and development of the cowpea bruchid, Callosobruchus maculatus (F.). Site-specific mutations were introduced into rGSII to generate proteins with altered GlcNAc binding, and the different rGSII proteins were evaluated for insecticidal activity when added to the diet of the cowpea bruchid. At pH 5.5, close to the physiological pH of the cowpea bruchid midgut lumen, the rGSII recombinant proteins were categorized as having high (rGSII, rGSII-Y134F, and rGSII-N196D), low (rGSII-N136D), or no (rGSII-D88N, rGSII-Y134G, rGSII-Y134D, and rGSII-N136Q) GlcNAc-binding activity. Insecticidal activity of the recombinant proteins correlated with their GlcNAc-binding activity. Furthermore, insecticidal activity correlated with resistance to proteolytic degradation by cowpea bruchid midgut extracts and with GlcNAc-specific binding to the insect digestive tract. Together, these results establish that the insecticidal activity of GSII is functionally linked to carbohydrate binding, presumably to the midgut epithelium or the peritrophic matrix, and to the biochemical stability of the protein against digestive proteolysis.

Relevance:

80.00%

Abstract:

Many pathogen recognition genes, such as plant R-genes, undergo rapid adaptive evolution, providing evidence that these genes play a critical role in plant-pathogen coevolution. Surprisingly, whether rapid adaptive evolution also occurs in genes encoding other kinds of plant defense proteins is unknown. Unlike recognition proteins, plant chitinases attack pathogens directly, conferring disease resistance by degrading chitin, a component of fungal cell walls. Here, we show that nonsynonymous substitution rates in plant class I chitinase often exceed synonymous rates in the plant genus Arabis (Cruciferae) and in other dicots, indicating a succession of adaptively driven amino acid replacements. We identify individual residues that are likely subject to positive selection by using codon substitution models and determine the location of these residues on the three-dimensional structure of class I chitinase. In contrast to primate lysozymes and plant class III chitinases, structural and functional relatives of class I chitinase, the adaptive replacements of class I chitinase occur disproportionately in the active site cleft. This highly unusual pattern of replacements suggests that fungi directly defend against chitinolytic activity through enzymatic inhibition or other forms of chemical resistance and identifies target residues for manipulating chitinolytic activity. These data also provide empirical evidence that plant defense proteins not involved in pathogen recognition also evolve in a manner consistent with rapid coevolutionary interactions.
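As a reading aid (our summary, not text from the abstract): the selection test invoked here is conventionally expressed by codon substitution models as the ratio of the nonsynonymous substitution rate d_N to the synonymous rate d_S:

```latex
% Standard dN/dS criterion used by codon substitution models
\[
  \omega = \frac{d_N}{d_S}, \qquad
  \begin{cases}
    \omega > 1 & \text{positive (adaptive) selection} \\
    \omega = 1 & \text{neutral evolution} \\
    \omega < 1 & \text{purifying selection}
  \end{cases}
\]
```

The abstract's observation that nonsynonymous rates "often exceed" synonymous rates corresponds to per-site estimates of ω greater than 1 at the residues highlighted in the active-site cleft.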

Relevance:

80.00%

Abstract:

The past decade has seen a remarkable explosion in our knowledge of the size and diversity of the myosin superfamily. Since these actin-based motors are candidates to provide the molecular basis for many cellular movements, it is essential that motility researchers be aware of the complete set of myosins in a given organism. The availability of cDNA and/or draft genomic sequences from humans, Drosophila melanogaster, Caenorhabditis elegans, Arabidopsis thaliana, Saccharomyces cerevisiae, Schizosaccharomyces pombe, and Dictyostelium discoideum has allowed us to tentatively define and compare the sets of myosin genes in these organisms. This analysis has also led to the identification of several putative myosin genes that may be of general interest. In humans, for example, we find a total of 40 known or predicted myosin genes including two new myosins-I, three new class II (conventional) myosins, a second member of the class III/ninaC myosins, a gene similar to the class XV deafness myosin, and a novel myosin sharing at most 33% identity with other members of the superfamily. These myosins are in addition to the recently discovered class XVI myosin with N-terminal ankyrin repeats and two human genes with similarity to the class XVIII PDZ-myosin from mouse. We briefly describe these newly recognized myosins and extend our previous phylogenetic analysis of the myosin superfamily to include a comparison of the complete or nearly complete inventories of myosin genes from several experimentally important organisms.

Relevance:

80.00%

Abstract:

During anaerobic growth Escherichia coli uses a specific ribonucleoside-triphosphate reductase (class III enzyme) for the production of deoxyribonucleoside triphosphates. In its active form, the enzyme contains an iron-sulfur center and an oxygen-sensitive glycyl radical (Gly-681). The radical is generated in the inactive protein from S-adenosylmethionine by an auxiliary enzyme system present in E. coli. By modification of the previous purification procedure, we now prepared a glycyl radical-containing reductase, active in the absence of the auxiliary reducing enzyme system. This reductase uses formate as hydrogen donor in the reaction. During catalysis, formate is stoichiometrically oxidized to CO2, and isotope from [3H]formate appears in water. Thus E. coli uses completely different hydrogen donors for the reduction of ribonucleotides during anaerobic and aerobic growth. The aerobic class I reductase employs redox-active thiols from thioredoxin or glutaredoxin to this purpose. The present results strengthen speculations that class III enzymes arose early during the evolution of DNA.
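The stoichiometry described (formate oxidized to CO2; tritium from [3H]formate recovered in water) corresponds to a net balance of the following form. This equation is our illustrative summary of that stoichiometry, not one quoted from the paper:

```latex
% Illustrative net balance: class III ribonucleotide reduction with formate
% as the hydrogen donor (our summary of the stoichiometry described above)
\[
  \mathrm{NTP} + \mathrm{HCOO^{-}} + \mathrm{H^{+}}
  \;\longrightarrow\;
  \mathrm{dNTP} + \mathrm{CO_{2}} + \mathrm{H_{2}O}
\]
```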