Abstract:
BACKGROUND Recent reports using administrative claims data suggest the incidence of community- and hospital-onset sepsis is increasing. Whether this reflects changing epidemiology, more effective diagnostic methods, or changes in physician documentation and medical coding practices is unclear. METHODS We performed a temporal-trend study from 2008 to 2012 using administrative claims data and patient-level clinical data of adult patients admitted to Barnes-Jewish Hospital in St. Louis, Missouri. Temporal trends and annual percent change were estimated using regression models with autoregressive integrated moving average errors. RESULTS We analyzed 62,261 inpatient admissions during the 5-year study period. 'Any SIRS' (i.e., SIRS on a single calendar day during the hospitalization) and 'multi-day SIRS' (i.e., SIRS on 3 or more calendar days), which both use patient-level data, and medical coding for sepsis (i.e., ICD-9-CM discharge diagnosis codes 995.91, 995.92, or 785.52) were present in 35.3 %, 17.3 %, and 3.3 % of admissions, respectively. The incidence of admissions coded for sepsis increased 9.7 % (95 % CI: 6.1, 13.4) per year, while the patient data-defined events of 'any SIRS' decreased by 1.8 % (95 % CI: -3.2, -0.5) and 'multi-day SIRS' did not change significantly over the study period. Clinically-defined sepsis (defined as SIRS plus bacteremia) and severe sepsis (defined as SIRS plus hypotension and bacteremia) decreased at statistically significant rates of 5.7 % (95 % CI: -9.0, -2.4) and 8.6 % (95 % CI: -12.6, -4.4) annually. All-cause mortality, SIRS mortality, and SIRS and clinically-defined sepsis case fatality did not change significantly during the study period. Sepsis mortality, based on ICD-9-CM codes, however, increased by 8.8 % (95 % CI: 1.9, 16.2) annually. CONCLUSIONS The incidence of sepsis, defined by ICD-9-CM codes, and sepsis mortality increased steadily without a concomitant increase in SIRS or clinically-defined sepsis.
Our results highlight the need to develop strategies to integrate clinical patient-level data with administrative data to draw more accurate conclusions about the epidemiology of sepsis.
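The annual percent change (APC) figures above come from regression models with ARIMA errors. As a simplified illustration only (plain log-linear least squares, without the autocorrelation correction the authors used), the APC can be recovered from the slope of log(rate) on calendar year; the rates below are synthetic, not the study data:

```python
import math

def annual_percent_change(years, rates):
    """Estimate annual percent change by ordinary least-squares regression
    of log(rate) on calendar year: APC = 100 * (exp(slope) - 1)."""
    years = list(years)
    n = len(years)
    xm = sum(years) / n
    ym = sum(math.log(r) for r in rates) / n
    sxy = sum((x - xm) * (math.log(r) - ym) for x, r in zip(years, rates))
    sxx = sum((x - xm) ** 2 for x in years)
    slope = sxy / sxx
    return 100.0 * (math.exp(slope) - 1.0)

# A synthetic series growing exactly 9.7% per year recovers that APC.
rates = [3.3 * 1.097 ** t for t in range(5)]
print(round(annual_percent_change(range(2008, 2013), rates), 1))  # 9.7
```

With autocorrelated residuals, as in hospital admission series, the point estimate is similar but the confidence interval from plain OLS would be too narrow, which is why the authors fit ARIMA errors.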
Abstract:
Field: Surgery. Abstract: Background: Preservation of cardiac grafts for transplantation is not standardized, and most centers use a single administration of crystalloid solution at the time of harvesting. We investigated possible benefits of an additional dose of cardioplegia dispensed immediately before implantation. Methods: Consecutive adult cardiac transplantations (2005-2012) were reviewed. Hearts were harvested following a standard protocol (Celsior 2L, 4-8°C). In 2008, 100 ml of crystalloid cardioplegic solution was added and administered immediately before implantation. Univariate and logistic regression analyses were used to investigate risk factors for post-operative graft failure and mid-term outcome. Results: A total of 81 patients, 44 standard ("Cardio-") vs. 37 with additional cardioplegia ("CardioC"), were analyzed. Recipients and donors were comparable in both groups. CardioC patients demonstrated a reduced need for defibrillation (24 vs. 48%, p = 0.03), post-operative ratio of CK-MB/CK (10.1 ± 3.9 vs. 13.3 ± 4.2%, p = 0.001), intubation time (2.0 ± 1.6 vs. 7.2 ± 11.5 days, p = 0.05), and ICU stay (3.9 ± 2.1 vs. 8.5 ± 7.8 days, p = 0.001). Actuarial survival was reduced when graft ischemic time was >180 min in Cardio- but not in CardioC patients (p = 0.033). Organ ischemic time >180 min (OR: 5.48, CI: 1.08-27.75), donor female gender (OR: 5.84, CI: 1.13-33.01), and recipient/donor age >60 (OR: 6.33, CI: 0.86-46.75), but not the additional cardioplegia or the observation period, appeared to be independent predictors of post-operative acute graft failure. Conclusion: An additional dose of cardioplegia administered immediately before implantation may be a simple way to improve early and late outcomes of cardiac transplantation, especially in situations of prolonged graft ischemia. A large, ideally multicentric, randomized study is desirable to verify this preliminary observation.
Abstract:
AIMS Proprotein convertase subtilisin kexin 9 (PCSK9) is an emerging target for the treatment of hypercholesterolaemia, but the clinical utility of PCSK9 levels to guide treatment is unknown. We aimed to prospectively assess the prognostic value of plasma PCSK9 levels in patients with acute coronary syndromes (ACS). METHODS AND RESULTS Plasma PCSK9 levels were measured in 2030 ACS patients undergoing coronary angiography in a Swiss prospective cohort. At 1 year, the association between PCSK9 tertiles and all-cause death was assessed adjusting for the Global Registry of Acute Coronary Events (GRACE) variables, as well as the achievement of LDL cholesterol targets of <1.8 mmol/L. Patients with higher PCSK9 levels at angiography were more likely to have clinical familial hypercholesterolaemia (rate ratio [RR] 1.21, 95% confidence interval [CI] 1.09-1.53), be treated with lipid-lowering therapy (RR 1.46, 95% CI 1.30-1.63), present with a longer time interval of chest pain (RR 1.29, 95% CI 1.09-1.53), and have higher C-reactive protein levels (RR 1.22, 95% CI 1.16-1.30). PCSK9 increased 12-24 h after ACS (374 ± 149 vs. 323 ± 134 ng/mL, P < 0.001). At 1 year follow-up, the HR for upper vs. lower PCSK9-level tertiles was 1.13 (95% CI 0.69-1.85) for all-cause death and remained similar after adjustment for the GRACE score. Patients with higher PCSK9 levels were less likely to reach the recommended LDL cholesterol targets (RR 0.81, 95% CI 0.66-0.99). CONCLUSION In ACS patients, high initial PCSK9 plasma levels were associated with inflammation in the acute phase and hypercholesterolaemia, but did not predict mortality at 1 year.
Abstract:
AIM To evaluate the prognostic value of electrophysiological stimulation (EPS) in the risk stratification for tachyarrhythmic events and sudden cardiac death (SCD). METHODS We conducted a prospective cohort study and analyzed the long-term follow-up of 265 consecutive patients who underwent programmed ventricular stimulation at the Luzerner Kantonsspital (Lucerne, Switzerland) between October 2003 and April 2012. Patients underwent EPS for SCD risk evaluation because of structural or functional heart disease and/or electrical conduction abnormality and/or after syncope/cardiac arrest. EPS was considered abnormal if a sustained ventricular tachycardia (VT) was inducible. The primary endpoint of the study was SCD or, in implanted patients, appropriate ICD activation. RESULTS During EPS, sustained VT was induced in 125 patients (47.2%) and non-sustained VT in 60 patients (22.6%); in 80 patients (30.2%) no arrhythmia could be induced. In our cohort, 153 patients (57.7%) underwent ICD implantation after the EPS. During follow-up (mean duration 4.8 ± 2.3 years), a primary endpoint event occurred in 49 patients (18.5%). The area under the receiver operating characteristic curve (AUROC) was 0.593 (95%CI: 0.515-0.670) for a left ventricular ejection fraction (LVEF) < 35% and 0.636 (95%CI: 0.563-0.709) for inducible sustained VT during EPS. The AUROC of EPS was higher in the subgroup of patients with LVEF ≥ 35% (0.681, 95%CI: 0.578-0.785). Cox regression analysis showed that both sustained VT during EPS (HR: 2.26, 95%CI: 1.22-4.19, P = 0.009) and LVEF < 35% (HR: 2.00, 95%CI: 1.13-3.54, P = 0.018) were independent predictors of primary endpoint events. CONCLUSION EPS provides a benefit in risk stratification for future tachyarrhythmic events and SCD and should especially be considered in patients with LVEF ≥ 35%.
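The AUROC values quoted above summarize how well a marker (LVEF < 35%, inducible sustained VT) separates patients who reach the endpoint from those who do not. A minimal sketch of the rank-based AUROC, using made-up labels and scores rather than the study data:

```python
def auroc(labels, scores):
    """Rank-based AUROC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative one (ties = 1/2)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# For a binary predictor (e.g. inducible sustained VT yes/no) against the
# endpoint, AUROC reduces to (sensitivity + specificity) / 2.
labels = [1, 1, 1, 0, 0, 0, 0, 0]   # hypothetical endpoint events
scores = [1, 1, 0, 1, 0, 0, 0, 0]   # hypothetical test results
print(auroc(labels, scores))        # 11/15 ≈ 0.733
```

An AUROC of 0.5 is chance-level discrimination, which puts the study's values of 0.59-0.68 in the "modest" range typical of single risk markers.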
Abstract:
BACKGROUND Impact of contemporary treatment of pre-invasive breast cancer (ductal carcinoma in situ [DCIS]) on long-term outcomes remains poorly defined. We aimed to evaluate national treatment trends for DCIS and to determine their impact on disease-specific (DSS) and overall survival (OS). METHODS The Surveillance, Epidemiology, and End Results (SEER) registry was queried for patients diagnosed with DCIS from 1991 to 2010. Treatment pattern trends were analyzed using Cochran-Armitage trend test. Survival analyses were performed using inverse probability weights (IPW)-adjusted competing risk analyses for DSS and Cox proportional hazard regression for OS. All tests performed were two-sided. RESULTS One hundred twenty-one thousand and eighty DCIS patients were identified. The greatest proportion of patients was treated with lumpectomy and radiation therapy (43.0%), followed by lumpectomy alone (26.5%) and unilateral (23.8%) or bilateral mastectomy (4.5%) with significant shifts over time. The rate of sentinel lymph node biopsy increased from 9.7% to 67.1% for mastectomy and from 1.4% to 17.8% for lumpectomy. Compared with mastectomy, OS was higher for lumpectomy with radiation (hazard ratio [HR] = 0.79, 95% confidence interval [CI] = 0.76 to 0.83, P < .001) and lower for lumpectomy alone (HR = 1.17, 95% CI = 1.13 to 1.23, P < .001). IPW-adjusted ten-year DSS was highest in lumpectomy with XRT (98.9%), followed by mastectomy (98.5%), and lumpectomy alone (98.4%). CONCLUSIONS We identified substantial shifts in treatment patterns for DCIS from 1991 to 2010. When outcomes between locoregional treatment options were compared, we observed greater differences in OS than DSS, likely reflecting both a prevailing patient selection bias as well as clinically negligible differences in breast cancer outcomes between groups.
Abstract:
BACKGROUND VEGF and VEGF receptor-2-mediated angiogenesis contribute to hepatocellular carcinoma pathogenesis. Ramucirumab is a recombinant IgG1 monoclonal antibody and VEGF receptor-2 antagonist. We aimed to assess the safety and efficacy of ramucirumab in advanced hepatocellular carcinoma following first-line therapy with sorafenib. METHODS In this randomised, placebo-controlled, double-blind, multicentre, phase 3 trial (REACH), patients were enrolled from 154 centres in 27 countries. Eligible patients were aged 18 years or older, had hepatocellular carcinoma with Barcelona Clinic Liver Cancer stage C disease or stage B disease that was refractory or not amenable to locoregional therapy, had Child-Pugh A liver disease, an Eastern Cooperative Oncology Group performance status of 0 or 1, had previously received sorafenib (stopped because of progression or intolerance), and had adequate haematological and biochemical parameters. Patients were randomly assigned (1:1) to receive intravenous ramucirumab (8 mg/kg) or placebo every 2 weeks, plus best supportive care, until disease progression, unacceptable toxicity, or death. Randomisation was stratified by geographic region and cause of liver disease with a stratified permuted block method. Patients, medical staff, investigators, and the funder were masked to treatment assignment. The primary endpoint was overall survival in the intention-to-treat population. This study is registered with ClinicalTrials.gov, number NCT01140347. FINDINGS Between Nov 4, 2010, and April 18, 2013, 565 patients were enrolled, of whom 283 were assigned to ramucirumab and 282 were assigned to placebo. Median overall survival for the ramucirumab group was 9·2 months (95% CI 8·0-10·6) versus 7·6 months (6·0-9·3) for the placebo group (HR 0·87 [95% CI 0·72-1·05]; p=0·14). 
Grade 3 or greater adverse events occurring in 5% or more of patients in either treatment group were ascites (13 [5%] of 277 patients treated with ramucirumab vs 11 [4%] of 276 patients treated with placebo), hypertension (34 [12%] vs ten [4%]), asthenia (14 [5%] vs five [2%]), malignant neoplasm progression (18 [6%] vs 11 [4%]), increased aspartate aminotransferase concentration (15 [5%] vs 23 [8%]), thrombocytopenia (13 [5%] vs one [<1%]), hyperbilirubinaemia (three [1%] vs 13 [5%]), and increased blood bilirubin (five [2%] vs 14 [5%]). The most frequently reported (≥1%) treatment-emergent serious adverse event of any grade or grade 3 or more was malignant neoplasm progression. INTERPRETATION Second-line treatment with ramucirumab did not significantly improve survival over placebo in patients with advanced hepatocellular carcinoma. No new safety signals were noted in eligible patients and the safety profile is manageable. FUNDING Eli Lilly and Co.
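Randomisation in REACH was stratified with a permuted block method. A minimal sketch of permuted-block 1:1 allocation within one stratum (block size 4 is an assumption for illustration; the trial's actual block size is not stated here):

```python
import random

def permuted_block_sequence(n_patients, block_size=4, seed=0):
    """1:1 treatment assignment in permuted blocks: each block holds an
    equal number of 'ramucirumab' and 'placebo' slots in random order, so
    the running allocation within a stratum never drifts far from balance."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = (["ramucirumab"] * (block_size // 2) +
                 ["placebo"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

seq = permuted_block_sequence(12)
print(seq.count("ramucirumab"), seq.count("placebo"))  # 6 6
```

In the stratified design, one such sequence is generated per stratum (here, each combination of geographic region and cause of liver disease), so balance holds within strata as well as overall.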
Abstract:
The anterior superior alveolar nerve (ASAN) is a branch of the infraorbital nerve. Only a few studies have morphometrically evaluated the course of the ASAN. Midfacial segments of ten hemisectioned fresh adult cadaver heads were dissected to uncover the anterior wall of the maxilla. Specimens were subsequently decalcified and the bone overlying the ASAN was removed under a microscope to expose the ASAN. Its branching pattern from the infraorbital nerve was recorded, and the course of the ASAN within the anterior wall of the maxillary sinus was morphometrically assessed by measuring distances to predefined landmarks using a digital caliper. A distinct ASAN was observed in all specimens. It arose laterally (six cases) or inferiorly (four cases) from the infraorbital nerve. The point of origin was located at a mean distance of 12.2 ± 5.79 mm posterior to the infraorbital foramen. The ASAN was located on average 2.8 ± 5.13 mm lateral to the infraorbital foramen. After coursing medially, the ASAN ran inferior to the foramen at a mean distance of 5.5 ± 3.07 mm. When approaching the nasal aperture, the loop of the ASAN was on average 13.6 ± 3.07 mm above the nasal floor. The horizontal mean distance from the ASAN to the nasal aperture was 4.3 ± 2.74 mm halfway down from the loop, and 3.3 ± 2.60 mm at the floor of the nose, respectively. In conclusion, the present study evaluated the course of the ASAN relative to the infraorbital foramen and nasal aperture. This information is helpful to avoid damage to this anatomical structure during interventions in the infraorbital region of the maxilla. Further, knowledge of the course of the ASAN and of its bony correlate (canalis sinuosus) may be valuable in interpreting anesthetic or radiologic findings in the anterior maxilla.
Abstract:
Background. Obstructive genitourinary defects include all anomalies causing obstruction anywhere along the urinary tract. Previous studies have noted a large excess of males among infants affected by these types of defects. This is the first epidemiologic study focused solely on obstructive genitourinary defects (OGD). Methods. Data on 1,683 mild and 302 severe cases of isolated OGD born between 1999 and 2003 and ascertained by the Texas Birth Defects Registry were compared to all births in Texas during the same time period. Adjusted prevalence odds ratios (POR) were calculated for infant sex, birth weight, gestational age, mother's race/ethnicity, mother's age, mother's education, parity, birth year, start of prenatal care, multiple birth, and public health region of birth. Severe cases were defined as those cases that died prior to birth, died after birth, or underwent surgery for OGD in the first year of life. Cases of OGD that had other major birth defects besides OGD were excluded from this study. Results. Severe cases of OGD were more likely than mild cases to have multiple obstructive genitourinary anomalies (37.8% vs. 18.9%) and bilateral defects (40.9% vs. 31.3%). Males had a significantly greater risk of OGD than females for both severe and mild cases: adjusted POR = 3.26 (95% CI = 2.45-4.33) and adjusted POR = 2.60 (95% CI = 2.33-2.90), respectively. Infants with both severe and mild OGD were more likely to be very preterm at birth compared with infants without OGD: crude POR of 16.19 (95% CI = 10.60-24.74) and 4.75 (95% CI = 3.54-6.37), respectively. Among the severe group, minority races had a decreased risk of OGD with an adjusted POR of 0.74 (95% CI = 0.55-0.98) compared with whites. Among the mild cases, increased rates of OGD were found in older mothers (adjusted POR = 1.10, 95% CI = 1.05-1.15), college/higher educated mothers (adjusted POR = 1.07, 95% CI = 1.01-1.13), and multiple births (adjusted POR = 1.28, 95% CI = 1.01-1.62). There was also a decreased risk of mild cases among black mothers compared to whites (adjusted POR = 0.63, 95% CI = 0.52-0.76). Compared to 1999, the prevalence of mild cases of OGD increased significantly over the 5-year study period, with an adjusted POR of 1.10 (95% CI = 1.06-1.15) by 2003. Conclusion. Risk factors of OGD for both severe and mild forms were male sex and preterm birth. Severe cases were more likely to have multiple OGD defects and be affected bilaterally. An increase in prevalence of mild cases of OGD over time, and differences in rates among black, older, and higher educated mothers in mild cases, may be attributed to ultrasound use.
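The adjusted prevalence odds ratios above come from regression models; as a simpler illustration, a crude odds ratio with a Wald 95% confidence interval can be read straight off a 2x2 table. The counts below are hypothetical, not the Texas registry data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table (a = exposed cases, b = exposed
    non-cases, c = unexposed cases, d = unexposed non-cases), with a
    Wald confidence interval computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: male vs. female cases and non-cases.
or_, lo, hi = odds_ratio_ci(220, 980, 82, 950)
print(f"POR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The adjusted PORs reported in the abstract additionally control for the listed covariates, which a crude 2x2 computation cannot do.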
Abstract:
Objective. To systematically review studies published in English on the relationship between plasma total homocysteine (Hcy) levels and the clinical and/or postmortem diagnosis of Alzheimer's disease (AD) in subjects over 60 years old. Method. Medline, PubMed, PsycINFO and Academic Search Premier were searched using the keywords "homocysteine", "Alzheimer disease", "dementia", and "cognitive disorders". In addition, relevant articles were identified in PubMed using the "related articles" link and by cross-referencing. The study design, study setting and study population, sample size, the diagnostic criteria of the National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer's Disease and Related Disorders Association (ADRDA), and a description of how Hcy levels were measured or defined had to have been clearly stated. Empirical investigations reporting quantitative data on the epidemiology of the relationship between plasma total Hcy (exposure factor) and AD (outcome) were included in the systematic review. Results. A total of 7 studies out of 388 potential articles, comprising 2,989 subjects, met the inclusion criteria: four case-control and three cohort studies were identified. All 7 studies reported association statistics, such as the odds ratio (OR), relative rate (RR), or hazard ratio (HR) of AD, examined using multivariate and logistic regression analyses. Three case-control studies, Clarke et al. (1998) (OR: 4.5, 95% CI: 2.2-9.2), McIlroy et al. (2002) (OR: 2.9, 95% CI: 1.0-8.1), and Quadri et al. (2004) (OR: 3.7, 95% CI: 1.1-13.1), and two cohort studies, Seshadri et al. (2002) (RR: 1.8, 95% CI: 1.3-2.5) and Ravaglia et al. (2005) (HR: 2.1, 95% CI: 1.7-3.8), found a significant association between serum total Hcy and AD. One case-control study, Miller et al. (2002) (OR: 2.2, 95% CI: 0.3-16), and one cohort study, Luchsinger et al. (2004) (HR: 1.4, 95% CI: 0.7-2.3), failed to reject the null hypothesis. Conclusions. The purpose of this review is to provide a thorough analysis of studies that examined the relationship between Hcy levels and AD. Five studies showed a positive, statistically significant association between elevated total Hcy values and AD, but the association was not statistically significant in two studies. Further research is needed to establish evidence of a strong, consistent association between serum total Hcy and AD, as well as the presence of the appropriate temporal relationship. To answer these questions, it is important to conduct more prospective studies that examine the occurrence of AD in individuals with and without elevated Hcy values at baseline. In addition, international standardization of measurements and cut-off points for plasma Hcy levels across laboratories is a critical issue to be addressed in future studies on the topic.
Abstract:
Multiple studies have shown an association between periodontitis and coronary heart disease (CHD), attributed to the chronic inflammatory nature of periodontitis. Studies have also indicated similar risk factors and pathophysiologic mechanisms for periodontitis and CHD. Among these factors, smoking has been the most discussed common risk factor, and some studies suggested the periodontitis-CHD association to be largely a result of confounding by smoking or inadequate adjustment for it. We conducted a secondary data analysis of the Dental ARIC Study, an ancillary study to the ARIC Study, to evaluate the effect of smoking on the periodontitis-CHD association using three periodontitis classifications, namely BGI, AAP-CDC, and the Dental ARIC classification (Beck et al. 2001). We also compared these results with edentulous ARIC participants. Using Cox proportional hazards models, we found that individuals with the most severe form of periodontitis in each of the three classifications (BGI: HR = 1.56, 95% CI: 1.15-2.13; AAP-CDC: HR = 1.42, 95% CI: 1.13-1.79; and Dental ARIC: HR = 1.49, 95% CI: 1.22-1.83) were at significantly higher risk of incident CHD in the unadjusted models, whereas only BGI-P3 showed a statistically significant increased risk in the smoking-adjusted models (HR = 1.43, 95% CI: 1.04-1.96). However, none of the categories in any of the classifications showed a significant association when a list of traditional CHD risk factors was introduced into the models. On the other hand, edentulous participants showed significant results when compared to the dentate ARIC participants in the crude (HR = 1.56, 95% CI: 1.34-1.82); smoking-adjusted (HR = 1.39, 95% CI: 1.18-1.64); age, race and sex adjusted (HR = 1.52, 95% CI: 1.30-1.77); and ARIC traditional risk factors (except smoking) adjusted (HR = 1.27, 95% CI: 1.02-1.57) models. The risk also remained significantly higher when smoking was introduced into the age, sex and race adjusted model (HR = 1.38, 95% CI: 1.17-1.63). Smoking did not reduce the hazard ratio by more than 8% when it was included in any of the Cox models. This is the first study to include the three most recent case definitions of periodontitis simultaneously while examining the association with incident coronary heart disease. We found smoking to be a partial confounder of the periodontitis-CHD association, and edentulism to be significantly associated with incident CHD even after adjusting for smoking and the ARIC traditional risk factors. The difference among the three periodontitis classifications was not statistically significant when they were tested for equality of the areas under their ROC curves, but this should not be confused with their clinical significance.
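The percent figure above reflects a change-in-estimate assessment of confounding: how far the hazard ratio moves once the candidate confounder enters a model. A minimal sketch, applied here to the crude vs. smoking-adjusted edentulism estimates quoted in the text:

```python
def percent_change_in_estimate(before_hr, after_hr):
    """Change-in-estimate check for confounding: the percent shift in a
    hazard ratio when a candidate confounder is added to the model. A
    common rule of thumb flags changes above roughly 10%."""
    return 100.0 * (before_hr - after_hr) / before_hr

# Crude edentulism HR 1.56 vs. smoking-adjusted HR 1.39, from the text.
print(round(percent_change_in_estimate(1.56, 1.39), 1))  # 10.9
```

Note that the study's "not more than 8%" statement refers to adding smoking to its fitted Cox models; the crude-to-smoking-adjusted comparison sketched here is only one such contrast.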
Abstract:
Childhood obesity is a significant public health problem. Over 15 percent of children in the United States are obese, and about 25 percent of children in Texas are overweight (CDC NHANES). Furthermore, about 30 percent of elementary school aged children in Harris County, Texas are overweight or obese (Children at Risk Institute 2010). In addition to actions such as increasing physical activity, decreasing television watching and video game time, and decreasing snacking on low-nutrient calorie-dense foods and sugar-sweetened beverages, children need to consume more fruits and vegetables. According to the National Health and Nutrition Examination Survey (NHANES) from 2002, about 26 percent of U.S. children are meeting the recommendations for daily fruit intake and about 16 percent are meeting the recommendations for daily vegetable intake (CDC NHANES). In 2004, the average total intake by children ages four to nine years old in the U.S. was 0.9 cups of vegetables and 1.1 cups of fruit per day (CDC NHANES). Not only do children need effective nutrition education to learn about fruits and vegetables, they also need access and repeated exposure to fruits and vegetables (Anderson 2009, Briefel 2009). Nutrition education interventions that provide a structured, hands-on curriculum, such as school gardens, have produced significant changes in child fruit and vegetable intake (Blair 2009, McAleese 2007). To prevent childhood obesity from continuing into adolescence and adulthood, effective nutrition education interventions need to be implemented immediately and for the long term. However, research has shown short-term nutrition education interventions such as summer camps to be effective for producing significant changes in child fruit and vegetable intake, preferences, and knowledge (Heim 2009). A four-week summer camp based on cooking and gardening was implemented at six Multi-Service centers in a large, urban city. The participants included children ranging in age from 7 to 14 years old (n=64). The purpose of the camp was to introduce children to their food from the seed to the plate through gardening and culinary exercises. The summer camp activities were aimed at increasing the children's exposure, willingness to try, preferences, knowledge, and intake of fruits and vegetables. A survey measuring pre-post differences in knowledge, intake, willingness to try, and preferences for fruits and vegetables was administered on the first and last days of camp. The present study examined the short-term effectiveness of this cooking and garden-based nutrition education program on knowledge, willingness, preferences, and intake among children aged 8 to 13 years old (n=40). The final sample of participants (n=40) was restricted to those who completed both pre- and post-test surveys and who were in or above the third grade. Results showed a statistically significant increase in the reported intake of vegetables and in preferences for vegetables, specifically green beans, and for fruits. There was also a significant increase in preferences for fruits among boys and among participants ages 11 to 13 years. The results showed changes in the expected direction for willingness to try, preferences for vegetables, and intake of fruit; however, these changes were not statistically significant. Interestingly, the results also showed a decrease in the intake of low-nutrient calorie-dense foods such as sweets and candy.
Abstract:
The climate evolution of the South Shetland Islands during the last c. 2000 years is inferred from the multiproxy analyses of a long (928 cm) sediment core retrieved from Maxwell Bay off King George Island. The vertical sediment flux at the core location is controlled by summer melting processes that cause sediment-laden meltwater plumes to form. These leave a characteristic signature in the sediments of NE Maxwell Bay. We use this signature to distinguish summer and winter-dominated periods. During the Medieval Warm Period, sediments are generally finer which indicates summer-type conditions. In contrast, during the Little Ice Age (LIA) sediments are generally coarser and are indicative of winter-dominated conditions. Comparison with Northern and Southern Hemisphere, Antarctic, and global temperature reconstructions reveals that the mean grain-size curve from Maxwell Bay closely resembles the curve of the global temperature reconstruction. We show that the medieval warming occurred earlier in the Southern than in the Northern Hemisphere, which might indicate that the warming was driven by processes occurring in the south. The beginning of the LIA appears to be almost synchronous in both hemispheres. The warming after the LIA closely resembles the Northern Hemisphere record which might indicate this phase of cooling was driven by processes occurring in the north. Although the recent rapid regional warming is clearly visible, the Maxwell Bay record does not show the dominance of summer-type sediments until the 1970s. Continued warming in this area will likely affect the marine ecosystem through meltwater induced turbidity of the surface waters as well as an extension of the vegetation period due to the predicted decrease of sea ice in this area.
Abstract:
The continental rise west of the Antarctic Peninsula includes a number of large sediment mounds interpreted as contourite drifts. Cores from six sediment drifts spanning some 650 km of the margin and 48 of latitude have been dated using chemical and isotopic tracers of palaeoproductivity and diatom biostratigraphy. Interglacial sedimentation rates range from 1.1 to 4.3 cm/ka. Glacial sedimentation rates range from 1.8 to 13.5 cm/ka, and decrease from proximal to distal sites on each drift. Late Quaternary sedimentation was cyclic, with brown, biogenic, burrowed mud containing ice-rafted debris (IRD) in interglacials and grey, barren, laminated mud in glacials. Foraminiferal intervals occur in interglacial stages 5 and 7 but not in the Holocene. Processes of terrigenous sediment supply during glacial stages differed; meltwater plumes were more important in stages 2-4, turbidity currents and ice-rafting in stage 6. The terrigenous component shows compositional changes along the margin, more marked in glacials. The major oxides Al2O3 and K2O are higher in the southwest, and CaO and TiO2 higher in the northeast. There is more smectite among the clay minerals in the northeast. Magnetic susceptibility varies along and between drifts. These changes reflect source variations along the margin. Interglacial sediments show less clear trends, and their IRD was derived from a wider area. Downslope processes were dominant in glacials, but alongslope processes may have attained equal importance in interglacials. The area contrasts with the East Antarctic continental slope in the SE Weddell Sea, where ice-rafting is the dominant process and where interglacial sedimentation rates are much higher than glacial. The differences in glacial setting and margin physiography can account for these contrasts.
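The glacial and interglacial sedimentation rates quoted above are mean linear rates between dated horizons in a core. A trivial sketch of that arithmetic with hypothetical tie points (not the actual drift chronologies):

```python
def sedimentation_rate_cm_per_ka(depth_top_cm, depth_base_cm,
                                 age_top_ka, age_base_ka):
    """Mean linear sedimentation rate between two dated horizons,
    in centimetres per thousand years (cm/ka)."""
    return (depth_base_cm - depth_top_cm) / (age_base_ka - age_top_ka)

# Hypothetical tie points: 54 cm of sediment deposited between 10 and 22 ka.
print(sedimentation_rate_cm_per_ka(120, 174, 10, 22))  # 4.5
```

In practice the tie points come from the chemical, isotopic, and biostratigraphic datums mentioned above, and the rate is assumed constant between successive datums.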
Abstract:
The objective of this work was to determine the polyphenol concentration in extracts of summer and autumn leaves and of grape stems from red Vitis vinifera varieties, obtained by different methods. Summer and autumn leaves of the Syrah variety and autumn leaves of the Malbec variety, divided by color into red and yellow, were used, together with stems of the Malbec variety. The extracts were obtained by maceration in water in a boiling water bath for three hours (BM), or by maceration in water with stirring at room temperature for 48 hours (AG). Rosemary oleoresin was used as a reference. The decreasing order of polyphenol content, expressed in g of quercetin per g of dry extract, was as follows: rosemary oleoresin: 56.3 ± 0.3; Syrah autumn leaves by maceration and stirring in water at room temperature for 48 hours: 7.5 ± 0.3; Malbec stems by maceration and stirring in water at room temperature for 48 hours: 24 ± 0.3; Malbec red autumn leaves by maceration and stirring in water at room temperature for 48 hours: 22 ± 0.3; Malbec stems in a boiling water bath: 21.4 ± 0.3; Syrah autumn leaves for 3 hours in a boiling water bath: 21.1 ± 0.3; Malbec yellow autumn leaves by maceration and stirring in water at room temperature for 48 hours: 17 ± 0.3; Syrah summer leaves for 3 hours in a boiling water bath: 16.5 ± 0.3; Malbec red autumn leaves for 3 hours in a boiling water bath: 13.8 ± 0.3; Malbec yellow autumn leaves for 3 hours in a boiling water bath: 12.4 ± 0.3; Syrah summer leaves by maceration and stirring in water at room temperature for 48 hours: 12.2 ± 0.3. It was concluded that autumn leaves contain more polyphenols than summer leaves; the Syrah variety has more polyphenols in its leaves than the Malbec variety; Malbec stems contain more polyphenols than leaves of the same variety; and the best extraction method for obtaining these active compounds was maceration in water with stirring at room temperature for 48 hours.