901 results for Angle's class II malocclusion
Abstract:
Although porcine circovirus type 2 (PCV2)-associated diseases have been evaluated for known immune evasion strategies, the pathogenicity of these viruses remained concealed for decades. Surprisingly, the same viruses that cause panzootics in livestock are widespread in young, unaffected animals. Recently, evidence has emerged that circovirus-like viruses are also linked to complex diseases in humans, including children. We detected PCV2 genome-carrying cells in fetal pig thymi. To elucidate virus pathogenicity, we developed a new pig infection model by in vivo transfection of recombinant PCV2 and the immunosuppressant cofactor cyclosporine A. Using flow cytometry, immunofluorescence and fluorescence in situ hybridization, we found evidence that PCV2 dictates positive and negative selection of maturing T cells in the thymus. We show for the first time that PCV2-infected cells reside at the corticomedullary junction of the thymus. In diseased animals, we found polyclonal deletion of single positive cells (SPs) that may result from a loss of major histocompatibility complex class II expression at the corticomedullary junction. The percentage of PCV2 antigen-presenting cells correlated with the degree of viremia and, in turn, with the severity of the defect in thymocyte maturation. Moreover, the reversed T-cell receptor/CD4-coreceptor expression dichotomy on thymocytes at the CD4(+)CD8(interm) and CD4SP cell stages is viremia-dependent, resulting in a specific hypo-responsiveness of T-helper cells. We compare our results with those for the only other well-studied member of the Circoviridae, chicken anemia virus. Our data show that PCV2 infection leads to dysregulation of thymocyte selection, adding a valuable dimension to our understanding of virus pathogenicity.
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                             Class I           Class II        Class III              Class IV
Blood loss (ml)              Up to 750         750–1500        1500–2000              >2000
Blood loss (% blood volume)  Up to 15%         15–30%          30–40%                 >40%
Pulse rate (bpm)             <100              100–120         120–140                >140
Systolic blood pressure      Normal            Normal          Decreased              Decreased
Pulse pressure               Normal or ↑       Decreased       Decreased              Decreased
Respiratory rate             14–20             20–30           30–40                  >35
Urine output (ml/h)          >30               20–30           5–15                   Negligible
CNS/mental status            Slightly anxious  Mildly anxious  Anxious, confused      Confused, lethargic
Initial fluid replacement    Crystalloid       Crystalloid     Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 bpm postulated by ATLS.
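As a reading aid, the blood-loss boundaries of Table 1 can be expressed as a simple lookup. This is a minimal illustrative sketch of the table's ranges only (the function name is ours), not a clinical tool:

```python
def atls_class_from_blood_loss(ml: float) -> str:
    """Map estimated blood loss (ml) to an ATLS shock class per Table 1."""
    if ml <= 750:
        return "I"    # up to 15% of blood volume
    if ml <= 1500:
        return "II"   # 15-30%
    if ml <= 2000:
        return "III"  # 30-40%
    return "IV"       # >40%

print(atls_class_from_blood_loss(1200))  # II
```

As the editorial goes on to argue, such crisp boundaries are exactly what failed validation: real patients do not sort neatly into these bins on vital signs alone.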
Moreover, deterioration of the different parameters does not necessarily run in parallel, as the ATLS shock classification suggests [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined. As in the retrospective studies, Lawton did not find a statistical difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criteria of "clinical estimation of blood loss" and requirement of fluid substitution.
This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation of the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

                                   Normal (no haemorrhage)  Class I (mild)                  Class II (moderate)             Class III (severe)               Class IV (moribund)
Vitals                             Normal                   Normal                          HR >100 with SBP >90 mmHg       SBP <90 mmHg                     SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml)  NA                       Yes, no further fluid required  Yes, no further fluid required  Requires repeated fluid boluses  Declining SBP despite fluid boluses
Estimated blood loss (ml)          None                     Up to 750                       750–1500                        1500–2000                        >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first, with hypotension and bradycardia occurring only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anesthetized patient in the OR. We will therefore continue to teach this typical pattern but will also continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension), as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension).
Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS refers to the pitfalls in the signs of acute haemorrhage as well (advanced age, athletes, pregnancy, medications and pacemakers) and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be literally applied in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with the clinical exam and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching. The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.
Abstract:
This article proposes a combined technique including bone grafting, connective tissue graft, and coronally advanced flap to create space for simultaneous bone regrowth and root coverage. A 23-year-old female was referred to our private clinic with a severe Miller class II recession and a lack of attached gingiva. The suggested treatment plan comprised root coverage combined with xenograft bone particles. The grafted area healed well, and full coverage was achieved at the 12-month follow-up visit. Bone-added periodontal plastic surgery can be considered a practical procedure for the management of deep gingival recession without a buccal bone plate.
Abstract:
AIM To analyse meta-analyses included in systematic reviews (SRs) published in leading orthodontic journals and the Cochrane Database of Systematic Reviews (CDSR) focusing on the orthodontic literature, and to assess the quality of the existing evidence. MATERIALS AND METHODS Electronic searching was undertaken to identify SRs published in five major orthodontic journals and the CDSR between January 2000 and June 2014. Quality assessment of the overall body of evidence from meta-analyses was conducted using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group tool. RESULTS One hundred and fifty-seven SRs were identified; meta-analysis was present in 43 of these (27.4 per cent). The highest proportion of SRs that included a meta-analysis was found in Orthodontics and Craniofacial Research (6/13; 46.1 per cent), followed by the CDSR (12/33; 36.4 per cent) and the American Journal of Orthodontics and Dentofacial Orthopaedics (15/44; 34.1 per cent). Class II treatment was the most commonly addressed topic within SRs in orthodontics (n = 18/157; 11.5 per cent). The number of trials combined to produce a summary estimate was small for most meta-analyses, with a median of 4 (range: 2–52). Only 21 per cent (n = 9) of included meta-analyses were considered to have high/moderate quality of evidence according to GRADE, while the majority were of low or very low quality (n = 34; 79.0 per cent). CONCLUSIONS Overall, approximately one quarter of orthodontic SRs included quantitative synthesis, with a median of four trials per meta-analysis. The overall quality of evidence from the selected orthodontic SRs was predominantly low to very low, indicating a relative lack of high-quality evidence from SRs to inform clinical practice guidelines.
Abstract:
Interleukin 4 (IL-4) is a pleiotropic cytokine affecting a wide range of cell types in both the mouse and the human. These activities include regulation of the growth and differentiation of both T and B lymphocytes. The activities of IL-4 in nonprimate, nonmurine systems are not well established. Herein, we demonstrate in the bovine system that IL-4 upregulates production of IgM, IgG1, and IgE in the presence of a variety of costimulators including anti-IgM, Staphylococcus aureus Cowan strain I, and pokeweed mitogen. IgE responses are potentiated by the addition of IL-2 to IL-4. Culture of bovine B lymphocytes with IL-4 in the absence of additional costimulators resulted in increased surface expression of CD23 (low-affinity Fc epsilon RII), IgM, IL-2R, and MHC class II in a dose-dependent manner. IL-4 alone increased basal levels of proliferation of bulk peripheral blood mononuclear cells but, in the presence of Con A, inhibited proliferation. In contrast to the activities of IL-4 in the murine system, proliferation of TH1- and TH2-like clones was inhibited in a dose-dependent manner, as assessed by antigen- or IL-2-driven in vitro proliferative responses. These observations are consistent with the role of IL-4 as a key player in the regulation of both T and B cell responses.
Abstract:
Staphylococcus aureus is an opportunistic pathogen that is a major health threat in clinical and community settings. An interesting hallmark of patients infected with S. aureus is that they do not usually develop a protective immune response and are susceptible to reinfection, in part because of the ability of S. aureus to modulate host immunity. The ability to evade host immune responses is a key contributor to the infection process and is critical to S. aureus survival and pathogenesis. This study investigates the immunomodulatory effects of two secreted proteins produced by S. aureus, the MHC class II analog protein (Map) and the extracellular fibrinogen-binding protein (Efb). Map has been demonstrated to modulate host immunity by interfering with T cell function: it significantly reduces T cell proliferative responses and delayed-type hypersensitivity responses to challenge antigen. In addition, the effects of Map on the infection process were tested in a mouse model of infection. Mice infected with Map− S. aureus (a Map-deficient strain) presented with significantly reduced levels of arthritis, osteomyelitis and abscess formation compared to mice infected with the wild-type Map+ S. aureus strain, suggesting that Map− S. aureus is much less virulent than Map+ S. aureus. Furthermore, Map− S. aureus-infected nude mice developed arthritis and osteomyelitis to a severity similar to that of Map+ S. aureus-infected controls, suggesting that T cells can affect disease outcome following S. aureus infection and that Map may attenuate cellular immunity against S. aureus. The extracellular fibrinogen-binding protein (Efb) was identified when cultured S. aureus supernatants were probed with the complement component C3. The binding of C3 to Efb prompted studies investigating the effects of Efb on complement activation. We have demonstrated that Efb can inhibit both the classical and alternative complement pathways.
Moreover, we have shown that Efb can inhibit complement-mediated opsonophagocytosis. Further studies have characterized the Efb-C3 binding interaction and localized the C3-binding domain to the C-terminal region of Efb. In addition, we demonstrate that Efb binds specifically to a region within the C3d fragment of C3. This study demonstrates that Map and Efb can interfere with both the acquired and innate host immune pathways, and that these proteins contribute to the success of S. aureus in evading host immunity and in establishing disease.
Abstract:
Obesity is a complex multifactorial disease and a public health priority. Perilipin coats the surface of lipid droplets in adipocytes and is believed to stabilize these lipid bodies by protecting triglyceride from early lipolysis. This research project evaluated the association between genetic variation within the human perilipin (PLIN) gene and obesity-related quantitative traits and disease-related phenotypes in Non-Hispanic White (NHW) and African American (AA) participants from the Atherosclerosis Risk in Communities (ARIC) Study. Multivariate linear regression, multivariate logistic regression, and Cox proportional hazards models evaluated the association between single gene variants (rs2304794, rs894160, rs8179071, and rs2304795) and multilocus variation (rs894160 and rs2304795) within the PLIN gene and both obesity-related quantitative traits (body weight, body mass index [BMI], waist girth, waist-to-hip ratio [WHR], estimated percent body fat, and plasma total triglycerides) and disease-related phenotypes (prevalent obesity, metabolic syndrome [MetS], prevalent coronary heart disease [CHD], and incident CHD). Single variant analyses were stratified by race and by gender within race, while multilocus analyses were stratified by race. Single variant analyses revealed that rs2304794 and rs894160 were significantly related to plasma triglyceride levels in all NHWs and in NHW women. Among AA women, variant rs8179071 was associated with triglyceride levels, and rs2304794 was associated with risk-raising WHR (>0.8 in women). The multilocus effects of variants rs894160 and rs2304795 were significantly associated with body weight, waist girth, WHR, estimated percent body fat, class II obesity (BMI ≥ 35 kg/m²), class III obesity (BMI ≥ 40 kg/m²), and risk-raising WHR (>0.9 in men and >0.8 in women) in AAs.
Variant rs2304795 was significantly related to prevalent MetS among AA males and to prevalent CHD in NHW women; multilocus effects of the PLIN gene were associated with prevalent CHD among NHWs. rs2304794 was associated with incident CHD in the absence of MetS among AAs. These findings support the hypothesis that variation within the PLIN gene influences obesity-related traits and disease-related phenotypes. Understanding the effects of PLIN genotype on the development of obesity can potentially lead to tailored health promotion interventions that are more effective.
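The obesity classes referenced above follow the standard BMI cutoffs (class II: 35-39.9 kg/m², class III: ≥40 kg/m²). A minimal sketch of that classification; the function name and boundary handling are our assumptions:

```python
def obesity_class(bmi_kg_m2: float) -> str:
    """Classify obesity status by BMI using the standard cutoffs."""
    if bmi_kg_m2 >= 40:
        return "class III"  # BMI >= 40 kg/m^2
    if bmi_kg_m2 >= 35:
        return "class II"   # 35 <= BMI < 40
    if bmi_kg_m2 >= 30:
        return "class I"    # 30 <= BMI < 35
    return "non-obese"

print(obesity_class(36.2))  # class II
```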
Abstract:
Pulmonary fibrosis (PF) results from a variety of environmental and cancer-treatment-related insults and is characterized by excessive deposition of collagen. Gas exchange in the alveoli is impaired as the normal lung becomes dense and collapsed, leading to a loss of lung volume. It is now accepted that lung injury and fibrosis are in part genetically regulated. Bleomycin is a chemotherapeutic agent used for testicular cancer and lymphomas that induces significant pulmonary toxicity. We delivered bleomycin to mice subcutaneously via a mini-osmotic pump in order to elicit lung injury (LI) and quantified the %LI morphometrically using video imaging software. We previously identified a quantitative trait locus, Blmpf-1 (LOD = 17.4), in the Major Histocompatibility Complex (MHC), but the exact genetic components involved have remained unknown. In the current studies, Blmpf-1 was narrowed to an interval spanning 31.9–32.9 Mb on chromosome 17 using MHC congenic mice. This region includes the MHC class II and III genes and is flanked by the TNF-alpha super locus and the MHC class I genes. Knockout mice for MHC class I genes (B2mko), MHC class II genes (Cl2ko), and TNF-alpha (TNF-/-) and its receptors (p55-/-, p75-/-, and p55/p75-/-) were treated with bleomycin in order to ascertain the role of these genes in the pathogenesis of lung injury. Cl2ko mice had significantly better survival and %LI when compared to treated background BL/6 (B6) mice (P < .05). In contrast, B2mko mice showed no differences in survival or %LI compared to B6. This suggests that the MHC class II locus contains susceptibility genes for bleomycin-induced lung injury. TNF-alpha, a class III gene, was then examined, and it was found that TNF-/- and p55-/- mice had higher %LI and lower survival when compared to B6 (P < .05). In contrast, p75-/- mice had significantly reduced %LI compared to TNF-/-, p55-/-, and B6 mice, as well as higher survival (P < .01).
These data contradict the current paradigm that TNF-alpha is a profibrotic mediator of lung injury and suggest a novel and distinct role for the p55 and p75 receptors in mediating lung injury.
Abstract:
Background. Ambulatory blood pressure (ABP) measurement is a means of monitoring cardiac function noninvasively, but little is known about ABP in heart failure (HF) patients. Blood pressure (BP) declines during sleep as protection from a consistent BP load, a phenomenon termed "dipping." The aims of this study were (1) to compare BP dipping and physical activity (PA) between two groups of HF patients with different functional statuses and (2) to determine whether the strength of the association between ambulatory BP and PA differs between these two functional statuses of HF. Methods. This observational study used repeated measures of ABP and PA over a 24-hour period to investigate the profiles of BP and PA in community-based individuals with HF. ABP was measured every 30 minutes using a SpaceLabs 90207 monitor, and a Basic Motionlogger actigraph was used to measure PA minute by minute. Fifty-six participants completed both BP and PA measurement for a 24-hour monitoring period. Functional status was based on New York Heart Association (NYHA) ratings. There were 27 patients with no limitation of PA (NYHA class I HF) and 29 with some limitation of PA but no discomfort at rest (NYHA class II or III HF). The sample consisted of 26 men and 30 women, aged 45 to 91 years (mean 66.96 ± 12.35). Results. Patients with NYHA class I HF had a significantly greater dipping percent than those with NYHA class II/III HF after controlling for left ventricular ejection fraction (LVEF). In a mixed model analysis (PROC MIXED, SAS Institute, v 9.1), PA was significantly related to ambulatory systolic and diastolic BP and mean arterial pressure. The strength of the association between PA and ABP readings was not significantly different between the two groups of patients. Conclusions. These preliminary findings demonstrate differences between NYHA class I and class II/III HF in BP dipping status and ABP, but not in PA.
Longitudinal research is recommended to improve understanding of the influence of disease progression on changes in the 24-hour physical activity and BP profiles of this patient population. Key Words: Ambulatory Blood Pressure; Blood Pressure Dipping; Heart Failure; Physical Activity.
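The "dipping percent" compared between the NYHA groups is conventionally computed as the relative nocturnal fall in BP from the daytime mean. A sketch under that convention; the 10% "dipper" threshold is the common convention in the ABP literature, not a value stated in this abstract:

```python
def dipping_percent(day_mean_bp: float, night_mean_bp: float) -> float:
    """Nocturnal BP fall as a percentage of the daytime mean."""
    return 100.0 * (day_mean_bp - night_mean_bp) / day_mean_bp

def is_dipper(day_mean_bp: float, night_mean_bp: float) -> bool:
    # A nocturnal fall of at least 10% is the usual "dipper" criterion.
    return dipping_percent(day_mean_bp, night_mean_bp) >= 10.0

print(round(dipping_percent(135.0, 118.0), 1))  # 12.6
```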
Abstract:
This study examined the effects of skipping breakfast on selected aspects of children's cognition, specifically their memory (both immediate and one week following presentation of stimuli), mental tempo, and problem-solving accuracy. Test instruments included the Hagen Central/Incidental Recall Test, the Matching Familiar Figures (MFF) Test, and the McCarthy Digit Span and Tapping Tests. The study population consisted of 39 nine- to eleven-year-old healthy children who were admitted for overnight stays at a clinical research setting on two nights approximately one week apart. The study was designed to adequately monitor and control subjects' food consumption. A cross-over design was chosen: randomly on either the first or second visit, the child skipped breakfast. In this way, subjects acted as their own controls. Subjects were tested at noon on both visits, representing an 18-hour fast. Analysis focused on whether fasting for this period of time affected an individual's performance. Results indicated that for most of the tests, subjects were not significantly affected by skipping breakfast for one morning. However, on tests of short-term central and incidental recall, subjects who had skipped breakfast recalled significantly more of the incidental cues, although at no apparent expense to their storing of central information. In the area of problem-solving accuracy, subjects skipping breakfast at time two made significantly more errors on hard sections of the MFF Test. It should be noted that although a large number of tests were conducted, these two tests showed the only significant differences. These significant results in the areas of short-term incidental memory and problem-solving accuracy were interpreted as an effect of subject fatigue.
That is, when subjects missed breakfast, they were more likely to become fatigued, and in the novel environment presented in the study setting, it is probable that these subjects responded by entering class II fatigue, which is characterized by behavioral excitability, diffused attention and altered performance patterns.
Abstract:
Many eukaryotic promoters contain a CCAAT element at a site close (−80 to −120) to the transcription initiation site. CBF (CCAAT Binding Factor), also called NF-Y and CP1, was initially identified as a transcription factor binding to such sites in the promoters of the type I collagen, albumin and MHC class II genes. CBF is a heteromeric transcription factor, and purification and cloning of two of its subunits, CBF-A and CBF-B, revealed that it is evolutionarily conserved, with striking sequence identities to the yeast polypeptides HAP3 and HAP2, which are components of a CCAAT binding factor in yeast. Recombinant CBF-A and CBF-B, however, failed to bind to DNA containing CCAAT sequences. Biochemical experiments led to the identification of a third subunit, CBF-C, which co-purified with CBF-A and complemented the DNA binding of recombinant CBF-A and CBF-B. We have recently isolated CBF-C cDNAs and have shown that bacterially expressed, purified CBF-C binds to CCAAT-containing DNA in the presence of recombinant CBF-A and CBF-B. Our experiments also show that a single molecule of each of the three subunits is present in the protein-DNA complex. Interestingly, CBF-C is also evolutionarily conserved, and the domain shared between CBF-C and its yeast homolog HAP5 is sufficient for CBF-C activity. Using GST-pulldown experiments, we have demonstrated a protein-protein interaction between CBF-A and CBF-C in the absence of CBF-B and DNA. CBF-B, on the other hand, requires both CBF-A and CBF-C to form a ternary complex, which then binds to DNA. Mutational studies of CBF-A have revealed distinct domains of the protein involved in the CBF-C and CBF-B interactions. In addition, CBF-A harbors a domain that is involved in DNA recognition along with CBF-B. Dominant negative analogs of CBF-A have also substantiated our initial observations on the assembly of CBF subunits.
Our studies define a novel DNA-binding structure for heterotrimeric CBF, in which the three subunits follow a particular pathway of assembly that leads to CBF binding to DNA and activating transcription.
Abstract:
Regulation of cytoplasmic deadenylation, the first step in mRNA turnover, has a direct impact on the fate of gene expression. AU-rich elements (AREs) found in the 3′ untranslated regions of many labile mRNAs are the most common RNA-destabilizing elements known in mammalian cells. Based on their sequence features and functional properties, AREs can be divided into three classes. Class I and class III AREs direct synchronous deadenylation, whereas class II AREs direct asynchronous deadenylation with the formation of poly(A) intermediates. Through a systematic mutagenesis study, we found that a cluster of five or six copies of the AUUUA motif forming various degrees of reiteration is the key feature dictating the choice between asynchronous and synchronous deadenylation. A 20–30 nt AU-rich sequence immediately 5′ to this cluster of AUUUA motifs can greatly enhance its destabilizing ability and is an integral part of the ARE. These two features are the defining characteristics of class II AREs. To better understand the decay mechanism of AREs, for which current methods have several limitations, we took advantage of a tetracycline-regulated promoter and developed a new transcriptional pulse strategy, the Tet system. By controlling the time and amount of Tet addition, a pulse of RNA could be generated. Using this new system, we showed that AREs function in both growth- and density-arrested cells. The new strategy offers for the first time an opportunity to investigate the control of mRNA deadenylation and decay kinetics in mammalian cells under physiologically relevant conditions. As a member of the heterogeneous nuclear RNA-binding protein family, hnRNP D0/AUF1 displays specific affinities for ARE sequences in vitro, but its in vivo function in ARE-mediated mRNA decay is unclear. AUF1/hnRNP D0 is composed of at least four isoforms derived by alternative RNA splicing. Each isoform exhibits a different affinity for ARE sequences in vitro.
Here, we examined the in vivo effect of the AUF1/hnRNP D0 isoforms on the degradation of ARE-containing mRNA. Our results showed that all four isoforms exhibit various RNA-stabilizing effects in NIH3T3 cells, which are positively correlated with their binding affinities for ARE sequences. Further experiments indicated that AUF1/hnRNP D0 has a general role in modulating the stability of cytoplasmic mRNAs in mammalian cells.
Abstract:
IL-24 is an unusual member of the IL-10 family; it is considered a Th1 cytokine and exhibits tumor cell cytotoxicity. I describe the purification of this novel cytokine from the supernatant of IL-24 gene-transfected human embryonic kidney cells and define the biochemical and functional properties of the soluble human IL-24 protein. I showed that IL-24 non-covalently associates with bovine albumin. Immunoaffinity purification followed by cation exchange chromatography resulted in significant enrichment of N-glycosylated IL-24. This protein elicited dose-dependent secretion of TNF-α and IL-6 from purified human monocytes and TNF-α secretion from PMA-differentiated U937 cells. I showed that this same protein was cytotoxic to melanoma tumor cells via the induction of IFN-β. I reported that IL-24 associates as at least two disulfide-linked, N-glycosylated dimers. Enzymatic removal of N-linked glycosylation from purified IL-24 partially diminished its cytokine and cytotoxic functions. Disruption of IL-24 dimers via reduction and alkylation of intermolecular disulfide bonds nearly abolished IL-24's cytokine function. I elucidated that IL-24-induced TNF-α secretion was independent of pSTAT1 and pSTAT3, as well as of the class II heterodimeric receptors IL-20R1/IL-22R2. I identified a requirement for the heterodimer of Toll-like receptors 1 and 2 for IL-24's cytokine function and show a physical interaction between IL-24 and the extracellular domain of TLR-1. Thus, I demonstrated that purified, N-glycosylated, soluble, dimeric human IL-24 exhibits both immunomodulatory and anti-cancer activities, and these functions remain associated during purification. IL-24-induced TNF-α secretion required an interaction with the heterodimeric receptor TLR-1/2, and IL-24's cytotoxic effect on melanoma tumor cells was in part due to its induction of IFN-β.
Abstract:
Heavy metal pollution in marine environments has caused great damage to marine biological and ecological systems. Heavy metals accumulate in marine creatures and are then delivered to higher trophic levels through the marine food chain, causing serious harm to marine biological systems and human health. Additionally, excess carbon dioxide in the atmosphere has caused ocean acidification. Indeed, about one third of the CO2 released into the atmosphere by anthropogenic activities since the beginning of the industrial revolution has been absorbed by the world's oceans, which play a key role in moderating climate change. Modeling has shown that, if current trends in CO2 emissions continue, the average pH of the ocean will reach 7.8 by the end of this century, corresponding to 0.5 units below the pre-industrial level, or roughly a three-fold increase in H+ concentration. Ocean pH has not been at this level for several million years, and these changes are occurring at speeds 100 times greater than ever previously observed. As a result, several marine species, communities and ecosystems might not have time to acclimate or adapt to these fast changes in ocean chemistry. In addition, decreasing ocean pH has the potential to seriously affect the growth, development and reproductive processes of marine organisms, as well as threaten the normal development of the marine ecosystem. Copepods are an important part of the meiofauna and play an important role in the marine ecosystem. Pollution of the marine environment can influence their growth and development, as well as the ecological processes they are involved in. Accordingly, there is important scientific value in investigating the response of copepods to ocean acidification and heavy metal pollution.
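The quoted three-fold rise in H+ concentration for a 0.5-unit pH drop follows directly from the definition pH = −log10[H+]; a quick check of that arithmetic:

```python
def h_fold_increase(delta_ph: float) -> float:
    """Fold increase in hydrogen-ion concentration for a pH drop of delta_ph units,
    from the definition pH = -log10([H+])."""
    return 10.0 ** delta_ph

# A 0.5-unit drop (e.g. pre-industrial level to the projected 7.8)
# roughly triples [H+]:
print(round(h_fold_increase(0.5), 2))  # 3.16
```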
In the present study, we evaluated the effects of simulated future ocean acidification and the toxicological interaction between seawater acidity and the heavy metals Cu and Cd on T. japonicus. To accomplish this, harpacticoids were exposed for 96 h to seawater containing concentration gradients of Cu and Cd that had been equilibrated with CO2 and air to reach pH 8.0, 7.7, 7.3 and 6.5. Survival was not significantly suppressed by seawater acidification alone, and final survival rates were greater than 93% in both the experimental groups and the controls. The toxicity of Cu to T. japonicus was significantly affected by seawater acidification, with the 96-h LC50 decreasing nearly threefold, from 1.98 to 0.64 mg/L, with decreasing pH. The 96-h LC50 of Cd also decreased with decreasing pH, but there was no significant difference in mortality among pH treatments. These results demonstrate that predicted future ocean acidification has the potential to negatively affect the survival of T. japonicus by exacerbating the toxicity of Cu. The calculated safe concentrations of Cu were 11.9 (pH 7.7) and 10.5 (pH 7.3) µg/L, which were below the class I value and very close to the class II level of the China National Quality Standard for Sea Water. Overall, these results indicate that the Chinese coastal sea will face a
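The "nearly threefold" change and the safe-concentration values above can be reproduced with simple arithmetic. A common convention in aquatic toxicology derives a safe concentration as the 96-h LC50 multiplied by an application factor, often 0.01; that factor is an assumption here, not stated in the abstract, and the function names are illustrative:

```python
def toxicity_fold_increase(lc50_reference: float, lc50_acidified: float) -> float:
    """How many times more toxic a metal becomes (a lower LC50 means higher toxicity)."""
    return lc50_reference / lc50_acidified

def safe_concentration_ug_l(lc50_mg_l: float, application_factor: float = 0.01) -> float:
    """Safe concentration in ug/L from a 96-h LC50 in mg/L, via an application factor.

    The 0.01 factor is a conventional assumption, not taken from the study.
    """
    return lc50_mg_l * 1000.0 * application_factor

# The abstract's Cu LC50 shift across the pH gradient:
print(round(toxicity_fold_increase(1.98, 0.64), 1))  # 3.1 — the "nearly threefold" change
```

Under this assumed 0.01 factor, the reported safe concentration of 11.9 µg/L at pH 7.7 would correspond to a 96-h LC50 of about 1.19 mg/L at that pH; this back-calculation is an inference, not a figure from the abstract.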
Resumo:
Compostable polymers account for around 30% of the bioplastics used for packaging, and packaging is in turn the main destination for the production of such materials, which exceeded 1.6 million tonnes in 2013. This thesis addresses the aerobic biodegradation of compostable household packaging waste for two formats and materials: a rigid PLA container (Class I) and two types of PBAT+PLA bags (Classes II and III). Several laboratory-scale studies have addressed this subject, but for other kinds of packaging and biopolymers and under controlled composting conditions, with only limited extrapolation to full-scale plants. This thesis goes one step further and investigates the real behaviour of compostable plastic packaging in composting practice using pile (windrow) and tunnel technologies, at both pilot and industrial scale, within the procedures and environmental conditions of specific facilities. To this end, the basic requirements that compostable packaging must fulfil according to the UNE-EN 13432 standard were analysed, evaluating the biodegradation percentage of the packaging under study, based on dry-weight loss after the composting process, and the quality of the compost obtained, through physico-chemical and phytotoxicity analyses to verify that the studied materials contribute no toxicity.
Regarding biodegradability, the results show that Class I packaging composts properly in both technologies and does not require very demanding process conditions to reach 100% biodegradation. Class II packaging composts properly in industrial-scale pile and tunnel but requires demanding conditions to reach 100% biodegradation, as it is clearly affected by the location of the samples within the composting mass, especially with the tunnel technology: while 90% of the samples reached 100% biodegradation in the industrial pile, only 50% did so in the tunnel at the same scale. Class III packaging composts properly in the industrial tunnel but requires somewhat demanding conditions to reach 100% biodegradation, as it may also be affected by the location of the samples within the composting mass; 75% of the samples tested in the industrial tunnel reached 100% biodegradation. Although this type of packaging was not tested with the pile technology because no samples were available, its biodegradability results would be expected to be at least those obtained for Class II packaging, since the two materials are very similar in composition. Finally, it is concluded that pile technology is more suitable for achieving higher biodegradation levels in PBAT+PLA bag-type packaging.
The results also indicate that, when designing composting facilities for the treatment of selectively collected organic fraction, it would be advisable to recirculate the rejects from refining of the composted material in order to increase the probability of exposing these materials to suitable environmental conditions. Shredding the waste at the entrance to the process would also increase the specific surface area in contact with the mass of organic matter and thus favour biodegradation.
Regarding the quality of the compost obtained in the trials, the physico-chemical and phytotoxicity analyses reveal that the concentrations of pathogenic microorganisms and heavy metals exceed, in practically all samples, the maximum levels permitted by the legislation in force for fertilizer products made from waste. Analysis of the composition of the tested packaging confirms that the source of this contamination is the organic matter used for composting in the trials, which came from household waste of the so-called "rest fraction". This conclusion confirms the need for selective collection of the organic fraction at source; existing studies show the improved quality of waste collected as the so-called "selectively collected organic fraction" (FORM).
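The biodegradation percentage used throughout the thesis abstract above is estimated from dry-weight loss after composting. A minimal sketch of that calculation, assuming the straightforward mass-loss formula (the function name and sample figures are illustrative, not taken from the thesis):

```python
def biodegradation_percent(initial_dry_g: float, final_dry_g: float) -> float:
    """Biodegradation estimated as the percentage of dry mass lost during composting."""
    if initial_dry_g <= 0:
        raise ValueError("initial dry mass must be positive")
    return 100.0 * (initial_dry_g - final_dry_g) / initial_dry_g

# A sample with no recoverable dry residue after the process:
print(biodegradation_percent(25.0, 0.0))  # 100.0
# A bag sample with 10% of its dry mass remaining:
print(biodegradation_percent(8.0, 0.8))   # 90.0
```

Note that this field method differs from the laboratory test in EN 13432, which measures biodegradation by evolved CO2 rather than by residual mass; the thesis uses mass loss because it works at pilot and industrial scale.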