973 results for Primary history


Relevance: 30.00%

Publisher:

Abstract:

An 8-year-old crossbred dog was presented with a one-month history of progressive weakness, respiratory impairment and abdominal distension. Surgical exploration revealed the presence of a splenic mass that infiltrated the mesentery and was adherent to the stomach and pancreas. The mass was composed of highly cellular areas of spindle-shaped cells arranged in interlacing bundles, streams, whorls and storiform patterns (Antoni A pattern) and less cellular areas with more loosely arranged spindle to oval cells (Antoni B pattern). The majority of neoplastic cells expressed vimentin, S-100 and glial fibrillary acidic protein (GFAP), but did not express desmin, alpha-smooth muscle actin or factor VIII. These morphological and immunohistochemical findings characterized the lesion as a malignant peripheral nerve sheath tumour (PNST). Primary splenic PNST has not been documented previously in the dog.


This study aimed to examine the aetiology of acute diarrhoea and the relapse rate in 100 client-owned dogs presented to a first-opinion clinic. History, physical examination, faecal testing and owner questionnaire data were collected at initial presentation (T0) and at either the time of relapse or at a recheck performed within 3 months. All dogs received treatment according to their clinical signs. Of the 96 dogs that completed the study, 37 (38.5%) relapsed: 21 (21.9%) within 3 months of initial examination and 16 (16.6%) between 3 months and 1 year. Dogs that had undergone a change in housing location within 1 month prior to presentation and dogs <1 year old were significantly more likely to have positive parasitological analyses (P=0.02 and P=0.001, respectively). Pica was a risk factor for relapse (P=0.0002).
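The risk-factor results above come from 2×2 comparisons of exposure and relapse. As a minimal sketch of the underlying arithmetic, the following computes an odds ratio with Woolf's log-based 95% confidence interval from a 2×2 table; the counts are hypothetical and not taken from the study.

```python
import math

def odds_ratio_woolf(a, b, c, d):
    """Odds ratio for a 2x2 table with Woolf's log-based 95% CI.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 20 of 30 dogs with pica relapsed vs 17 of 66 without.
or_, lo, hi = odds_ratio_woolf(20, 10, 17, 49)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI excluding 1 corresponds to a significant association at the 5% level, matching the P-value style of reporting used in the abstract.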


Varved lake sediments are excellent natural archives providing quantitative insights into climatic and environmental changes at very high resolution and chronological accuracy. However, due to the multitude of responses within lake ecosystems it is often difficult to understand how climate variability interacts with other environmental pressures such as eutrophication, and to attribute observed changes to specific causes. This is particularly challenging during the past 100 years, when multiple strong trends are superposed. Here we present a high-resolution multi-proxy record of sedimentary pigments and other biogeochemical data from the varved sediments of Lake Żabińskie (Masurian Lake District, north-eastern Poland; 54°N, 22°E; 120 m a.s.l.) spanning AD 1907 to 2008. Lake Żabińskie exhibits biogeochemical varves with highly organic late-summer and winter layers separated by white layers of endogenous calcite precipitated in early summer. The aim of our study is to investigate whether climate-driven changes and anthropogenic changes can be separated in a multi-proxy sediment data set, and to explore which sediment proxies are potentially suitable for long quantitative climate reconstructions. We also test whether labour-intensive analytical techniques (e.g. HPLC) can be substituted by rapid scanning techniques (visible reflectance spectroscopy, VIS-RS; 380–730 nm). We used principal component analysis and cluster analysis to show that the recent eutrophication of Lake Żabińskie can be discriminated from climate-driven changes for the period AD 1907–2008. The eutrophication signal (PC1 = 46.4%; TOC, TN, TS, Phe-b, high TC/CD ratios, i.e. total carotenoids/chlorophyll-a derivatives) is mainly expressed as increasing aquatic primary production, increasing hypolimnetic anoxia and a change in the algal community from green algae to blue-green algae. The proxies diagnostic for eutrophication show a smooth positive trend between 1907 and ca. 1980, followed by a very rapid increase from ca. 1980 ± 2 onwards. We demonstrate that PC2 (24.4%; Chl-a-related pigments) is not affected by the eutrophication signal, but instead is sensitive to spring (MAM) temperature (r = 0.63, p_corr < 0.05, RMSEP = 0.56 °C; 5-yr filtered). Limnological monitoring data (2011–2013) support this finding. We also demonstrate that scanning visible reflectance spectroscopy (VIS-RS) data can be calibrated to HPLC-measured chloropigment data and used to infer concentrations of sedimentary Chl-a derivatives (pheophytin a + pyropheophytin a). This offers the possibility of very high-resolution, (multi)millennial-long paleoenvironmental reconstructions.
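The signal separation described above rests on principal component analysis of standardized proxy series. A minimal sketch with numpy, using a synthetic stand-in for the proxy matrix; the proxy names in the comments are only placeholders for the variables listed in the abstract, and the latent signals are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a proxy matrix: 100 varve years x 6 proxies,
# built from two latent signals (a monotonic trend and an oscillation).
years = np.arange(100)
trend = years / 100.0        # eutrophication-like monotonic signal
osc = np.sin(years / 5.0)    # climate-like oscillation
X = np.column_stack([
    trend + 0.05 * rng.standard_normal(100),  # stand-in for TOC
    trend + 0.05 * rng.standard_normal(100),  # stand-in for TN
    trend + 0.05 * rng.standard_normal(100),  # stand-in for TS
    osc + 0.05 * rng.standard_normal(100),    # stand-in for a Chl-a pigment
    osc + 0.05 * rng.standard_normal(100),
    0.05 * rng.standard_normal(100),          # pure-noise proxy
])

# PCA via SVD of the standardized matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
scores = Z @ Vt.T                 # PC scores per varve year

print("variance explained by PC1, PC2:", explained[:2].round(2))
```

In the real data set, loadings on TOC, TN, TS and Phe-b would identify the eutrophication component, while a component independent of it could be screened against instrumental spring temperatures.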


Biomass burning is a major source of greenhouse gases and influences regional to global climate. Pre-industrial fire-history records from black carbon, charcoal and other proxies provide baseline estimates of biomass burning at local to global scales spanning millennia, and are thus useful to examine the role of fire in the carbon cycle and climate system. Here we use the specific biomarker levoglucosan together with black carbon and ammonium concentrations from the North Greenland Eemian (NEEM) ice cores (77.49° N, 51.2° W; 2480 m a.s.l.) over the past 2000 years to infer changes in boreal fire activity. Increases in boreal fire activity over the period 1000–1300 CE and decreases during 700–900 CE coincide with high-latitude Northern Hemisphere temperature changes. Levoglucosan concentrations in the NEEM ice cores peak between 1500 and 1700 CE, and most levoglucosan spikes coincide with the most extensive central and northern Asian droughts of the past millennium. Many of these multi-annual droughts are caused by Asian monsoon failures, thus suggesting a connection between low- and high-latitude climate processes. North America is a primary source of biomass burning aerosols due to its relative proximity to the Greenland Ice Cap. During major fire events, however, isotopic analyses of dust, back trajectories and links with levoglucosan peaks and regional drought reconstructions suggest that Siberia is also an important source of pyrogenic aerosols to Greenland.


OBJECTIVES The aim of this study was to assess the safety of the concurrent administration of a clopidogrel and prasugrel loading dose in patients undergoing primary percutaneous coronary intervention. BACKGROUND Prasugrel is one of the preferred P2Y12 platelet receptor antagonists for ST-segment elevation myocardial infarction (STEMI) patients; its use, however, was evaluated clinically only in clopidogrel-naive patients. METHODS Between September 2009 and October 2012, a total of 2,023 STEMI patients were enrolled in the COMFORTABLE (Comparison of Biomatrix Versus Gazelle in ST-Elevation Myocardial Infarction [STEMI]) and the SPUM-ACS (Inflammation and Acute Coronary Syndromes) studies. Patients receiving a prasugrel loading dose were divided into 2 groups: 1) clopidogrel and a subsequent prasugrel loading dose; and 2) a prasugrel loading dose alone. The primary safety endpoint was Bleeding Academic Research Consortium types 3 to 5 bleeding in hospital at 30 days. RESULTS Of 2,023 patients undergoing primary percutaneous coronary intervention, 427 (21.1%) received clopidogrel and a subsequent prasugrel loading dose, 447 (22.1%) received a prasugrel loading dose alone, and the remainder received clopidogrel only. At 30 days, the primary safety endpoint was observed in 1.9% of those receiving clopidogrel and a subsequent prasugrel loading dose and in 3.4% of those receiving a prasugrel loading dose alone (adjusted hazard ratio [HR]: 0.57; 95% confidence interval [CI]: 0.25 to 1.30; p = 0.18). The HAS-BLED (hypertension, abnormal renal/liver function, stroke, bleeding history or predisposition, labile international normalized ratio, elderly, drugs/alcohol concomitantly) bleeding score tended to be higher in prasugrel-treated patients (p = 0.076). The primary safety endpoint results, however, remained unchanged after adjustment for these differences (clopidogrel and a subsequent prasugrel loading dose vs. prasugrel only; HR: 0.54 [95% CI: 0.23 to 1.27]; p = 0.16). No differences in the composite of cardiac death, myocardial infarction, or stroke were observed at 30 days (adjusted HR: 0.66; 95% CI: 0.27 to 1.62; p = 0.36). CONCLUSIONS This observational, nonrandomized study of STEMI patients suggests that the administration of a loading dose of prasugrel in patients pre-treated with a loading dose of clopidogrel is not associated with an excess of major bleeding events. (Comparison of Biomatrix Versus Gazelle in ST-Elevation Myocardial Infarction [STEMI] [COMFORTABLE]; NCT00962416; and Inflammation and Acute Coronary Syndromes [SPUM-ACS]; NCT01000701).


Knowledge about the vegetation and fire history of the mountains of Northern Sicily is scanty. We analysed five sites to fill this gap and used terrestrial plant macrofossils to establish robust radiocarbon chronologies. Palynological records from Gorgo Tondo, Gorgo Lungo, Marcato Cixé, Urgo Pietra Giordano and Gorgo Pollicino show that under natural or near-natural conditions, deciduous forests (Quercus pubescens, Q. cerris, Fraxinus ornus, Ulmus), which included a substantial portion of evergreen broadleaved species (Q. suber, Q. ilex, Hedera helix), prevailed in the upper meso-mediterranean belt. Mesophilous deciduous and evergreen broadleaved trees (Fagus sylvatica, Ilex aquifolium) dominated in the natural or quasi-natural forests of the oro-mediterranean belt. Forests were repeatedly opened for agricultural purposes. Fire activity was closely associated with farming, providing evidence that burning was a primary land use tool since Neolithic times. Land use and fire activity intensified during the Early Neolithic at 5000 BC, at the onset of the Bronze Age at 2500 BC and at the onset of the Iron Age at 800 BC. Our data and previous studies suggest that the large majority of open land communities in Sicily, from the coastal lowlands to the mountain areas below the thorny-cushion Astragalus belt (ca. 1,800 m a.s.l.), would rapidly develop into forests if land use ceased. Mesophilous Fagus-Ilex forests developed under warm mid-Holocene conditions and were resilient to the combined impacts of humans and climate. The past ecology suggests a resilience of these summer-drought-adapted communities to climate warming of about 2 °C. Hence, they may be particularly suited to provide heat- and drought-adapted Fagus sylvatica ecotypes for maintaining drought-sensitive Central European beech forests under global warming conditions.


The spatial distributions of species of tree ≥10 cm gbh (girth at breast height) were examined in two 4 ha plots and related to the local variation in topography and soil chemistry. The plots were similar in their species composition, particularly in terms of the densities of small trees, and they showed very similar edaphic characteristics. Size class distributions varied little within and between plots. Ordination of 0.25 ha subplots highlighted parallel gradients in the vegetation of both plots when the densities of trees ≥10 cm gbh were considered. Focusing on understorey trees in the 10–<50 cm gbh class at the 0.04 ha subplot scale showed a similar vegetation gradient in both plots, closely associated with the change from lower slope to ridge. No relationship with soil chemistry was found. On the ridges a special group of understorey species formed clumps, and these species contributed importantly to the ordinations. Borneo has a regional history of occasionally severe droughts. It is suggested here that the observed patterns in the understorey are due to differential responses to low soil water supply, the ridges probably tending to dryness more than the lower slopes. Within the large and diverse family Euphorbiaceae, which dominates the understorey at Danum, there may be ecophysiological groupings of species. The long-term effects of disturbance interacting with local edaphic factors on forest structure and composition are discussed.


Primary ciliary dyskinesia (PCD) is a rare heterogeneous recessive genetic disorder of motile cilia, leading to chronic upper and lower respiratory symptoms. Prevalence is estimated at around 1:10,000, but many patients remain undiagnosed, while others receive the label incorrectly. Proper diagnosis is complicated by the fact that the key symptoms, such as wet cough, chronic rhinitis and recurrent upper and lower respiratory infection, are common and nonspecific. There is no single gold standard test to diagnose PCD. Presently, the diagnosis is made in patients with a compatible medical history and physical examination by a demanding combination of tests including nasal nitric oxide, high-speed video microscopy, transmission electron microscopy, genetics, and ciliary culture. These tests are costly and need sophisticated equipment and experienced staff, restricting their use to highly specialised centres. Therefore, it would be desirable to have a screening test for identifying those patients who should undergo detailed diagnostic testing. Three recent studies focused on potential screening tools: one paper assessed the validity of nasal nitric oxide for screening, and two studies developed new symptom-based screening tools. These simple tools are welcome, and will hopefully remind physicians whom to refer for definitive testing. However, they have been developed in tertiary care settings, where 10 to 50% of tested patients have PCD. Sensitivity and specificity of the tools are reasonable, but positive and negative predictive values may be poor in primary or secondary care settings. While these studies take an important step forward towards an earlier diagnosis of PCD, more remains to be done before we have tools tailored to different health care settings.
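The caveat about predictive values in primary and secondary care follows directly from Bayes' rule: for fixed sensitivity and specificity, the positive predictive value collapses as prevalence falls. A small illustration, using assumed (purely illustrative) values of 0.90 sensitivity and 0.75 specificity:

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sens * prev               # true-positive probability mass
    fp = (1 - spec) * (1 - prev)   # false positives
    fn = (1 - sens) * prev         # false negatives
    tn = spec * (1 - prev)         # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# The same test behaves very differently in tertiary referral centres
# (high pre-test probability) and in the general population (~1:10,000):
for prev in (0.50, 0.10, 0.0001):
    ppv, npv = predictive_values(0.90, 0.75, prev)
    print(f"prevalence {prev:>8.4%}: PPV {ppv:.4f}, NPV {npv:.4f}")
```

At 50% prevalence the PPV is about 0.78, at 10% it is about 0.29, and at population prevalence it is a fraction of a percent, which is why a screening-positive result alone cannot establish the diagnosis.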


Symptoms of primary ciliary dyskinesia (PCD) are nonspecific and guidance on whom to refer for testing is limited. Diagnostic tests for PCD are highly specialised, requiring expensive equipment and experienced PCD scientists. This study aims to develop a practical clinical diagnostic tool to identify patients requiring testing. Patients consecutively referred for testing were studied. Information readily obtained from the patient history was correlated with diagnostic outcome. Using logistic regression, the predictive performance of the best model was tested by receiver operating characteristic curve analyses. The model was simplified into a practical tool (PICADAR) and externally validated in a second diagnostic centre. Of 641 referrals with a definitive diagnostic outcome, 75 (12%) were positive. PICADAR applies to patients with persistent wet cough and has seven predictive parameters: full-term gestation, neonatal chest symptoms, neonatal intensive care admittance, chronic rhinitis, ear symptoms, situs inversus and congenital cardiac defect. Sensitivity and specificity of the tool were 0.90 and 0.75 for a cut-off score of 5 points. Area under the curve for the internally and externally validated tool was 0.91 and 0.87, respectively. PICADAR represents a simple diagnostic clinical prediction rule with good accuracy and validity, ready for testing in respiratory centres referring to PCD centres.
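The behaviour of a clinical score at a chosen cut-off can be sketched directly. The PICADAR item weights are not given in the abstract, so the scores and confirmed diagnoses below are hypothetical; only the mechanics of "score ≥ cut-off counts as a positive call" are illustrated.

```python
def sens_spec_at_cutoff(scores, labels, cutoff):
    """Sensitivity and specificity of 'score >= cutoff' as a positive call.
    scores: clinical prediction scores; labels: 1 = disease confirmed, 0 = not."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical referral cohort: (score, confirmed-diagnosis) pairs.
scores = [2, 3, 5, 6, 7, 8, 4, 1, 4, 9, 2, 6]
labels = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0]
sens, spec = sens_spec_at_cutoff(scores, labels, cutoff=5)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Sweeping the cut-off over all observed score values and plotting sensitivity against 1 − specificity yields the ROC curve whose area the study reports.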


The magnitude of the interaction between cigarette smoking, radiation therapy, and primary lung cancer after breast cancer remains unresolved. This case-control study further examines the main and joint effects of cigarette smoking and radiation therapy (XRT) among breast cancer patients who subsequently developed primary lung cancer at The University of Texas M. D. Anderson Cancer Center (MDACC) in Houston, Texas. Cases (n = 280) were women diagnosed with primary lung cancer between 1955 and 1970, aged 30–89 years, who had a prior history of breast cancer and were U.S. residents. Controls (n = 300) were randomly selected from 37,000 breast cancer patients at MDACC, frequency matched to cases on age at diagnosis (in 5-year strata), ethnicity, and year of breast cancer diagnosis (in 5-year strata), and had survived at least as long as the time interval to lung cancer diagnosis in the cases. Stratified analysis and unconditional logistic regression modeling were used to calculate the main and joint effects of cigarette smoking and radiation treatment on lung cancer risk. Medical record review yielded smoking information on 93% of cases and 84% of controls; among cases 45% received XRT versus 44% of controls. Smoking increased the odds of lung cancer in women who did not receive XRT (OR = 6.0, 95% CI, 3.5–10.1), whereas XRT was not associated with increased odds (OR = 0.5, 95% CI, 0.2–1.1) in women who did not smoke. Overall, the odds ratio for both XRT and smoking together compared with neither exposure was 9.00 (95% CI, 5.1–15.9). Similarly, when stratifying on laterality of the lung cancer in relation to the breast cancer, and when the time interval between breast and lung cancers was >10 years, there were increased odds for smoking and XRT together both for lung cancers on the same side as the breast cancer (ipsilateral; OR = 11.5, 95% CI, 4.9–27.8) and for lung cancers on the opposite side (contralateral; OR = 9.6, 95% CI, 2.9–0.9). After 20 years the odds for the ipsilateral lung were even more pronounced (OR = 19.2, 95% CI, 4.2–88.4) compared to the contralateral lung (OR = 2.6, 95% CI, 0.2–2.1). In conclusion, smoking was a significant independent risk factor for lung cancer after breast cancer. Moreover, a greater-than-multiplicative effect was observed when smoking and XRT were combined, especially evident after 10 years for both the ipsilateral and contralateral lung and after 20 years for the ipsilateral lung.
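The "greater than multiplicative" claim can be checked directly from the reported odds ratios: under a purely multiplicative joint-effect model, the joint OR would equal the product of the two single-exposure ORs.

```python
# Odds ratios reported in the abstract above.
or_smoking_alone = 6.0   # smoking, no XRT
or_xrt_alone = 0.5       # XRT, no smoking
or_joint_observed = 9.0  # smoking + XRT vs neither exposure

# Expected joint OR if the two effects combined multiplicatively:
or_joint_expected = or_smoking_alone * or_xrt_alone
ratio = or_joint_observed / or_joint_expected  # >1 => super-multiplicative

print(f"expected {or_joint_expected:.1f}, observed {or_joint_observed:.1f}, "
      f"interaction ratio {ratio:.1f}")
```

The multiplicative prediction is 6.0 × 0.5 = 3.0, whereas 9.0 was observed, a three-fold departure consistent with the interaction the authors describe.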


Mitochondria are actively engaged in the production of cellular energy sources, generation of reactive oxygen species (ROS), and regulation of apoptosis. Mitochondrial DNA (mtDNA) mutations/deletions and other mitochondrial abnormalities have been implicated in many diseases, especially cancer. Despite this, the roles that these defects play in cancer development, drug sensitivity, and disease progression still remain to be elucidated. The major objective of this investigation was to evaluate the mechanistic relationship between mitochondrial defects and alterations in free radical generation and chemosensitivity in primary chronic lymphocytic leukemia (CLL) cells. This study revealed that the mtDNA mutation frequency and basal superoxide generation are both significantly higher in primary cells from CLL patients with a history of chemotherapy as compared to cells from their untreated counterparts. CLL cells from refractory patients tended to have high mutation frequencies. The data suggest that chemotherapy with DNA-damaging agents may cause mtDNA mutations, which are associated with increased ROS generation and reduced drug sensitivity. Subsequent analyses demonstrated that CLL cells contain significantly more mitochondria than normal lymphocytes. This abnormal accumulation of mitochondria was linked to increased expression of nuclear respiratory factor-1 and mitochondrial transcription factor A, two key free radical-regulated mitochondrial biogenesis factors. Further analysis showed that mitochondrial content may have therapeutic implications, since patient cells with high mitochondrial mass display significantly reduced in vitro sensitivity to fludarabine, a frontline agent in CLL therapy. The reduced in vitro and in vivo sensitivity to fludarabine observed in CLL cells with mitochondrial defects highlights the need for novel therapeutic strategies for the treatment of refractory disease. Brefeldin A, an inhibitor of endoplasmic reticulum (ER)-to-Golgi protein transport that is being developed as an anticancer agent, effectively induces apoptosis in fludarabine-refractory CLL cells through a secretory stress-mediated mechanism involving intracellular sequestration of pro-survival secretory factors. Taken together, these data indicate that mitochondrial defects in CLL cells are associated with alterations in free radical generation, mitochondrial biogenesis activity, and chemosensitivity. Abrogation of survival signaling by blocking ER-to-Golgi protein transport may be a promising therapeutic strategy for the treatment of CLL patients who respond poorly to conventional chemotherapy.


Objectives. Previous studies have shown a survival advantage in ovarian cancer patients with Ashkenazi-Jewish (AJ) BRCA founder mutations, compared to sporadic ovarian cancer patients. The purpose of this study was to determine whether this association exists in ovarian cancer patients with non-Ashkenazi Jewish BRCA mutations. In addition, we sought to account for possible "survival bias" by minimizing any lead time that may exist between diagnosis and genetic testing.

Methods. Patients with stage III/IV ovarian, fallopian tube, or primary peritoneal cancer and a non-Ashkenazi Jewish BRCA1 or BRCA2 mutation, seen for genetic testing January 1996–July 2007, were identified from genetics and institutional databases. Medical records were reviewed for clinical factors, including response to initial chemotherapy. Patients with sporadic (non-hereditary) ovarian, fallopian tube, or primary peritoneal cancer, without a family history of breast or ovarian cancer, were compared to similar cases, matched by age, stage, year of diagnosis, and vital status at the time interval to BRCA testing. When possible, 2 sporadic patients were matched to each BRCA patient. An additional group of unmatched sporadic ovarian, fallopian tube and primary peritoneal cancer patients was included for a separate analysis. Progression-free survival (PFS) and overall survival (OS) were calculated by the Kaplan-Meier method. Multivariate Cox proportional hazards models were calculated for variables of interest. Matched pairs were treated as clusters. A stratified log-rank test was used to calculate survival data for matched pairs using paired event times. Fisher's exact test, chi-square, and univariate logistic regression were also used for analysis.

Results. Forty-five advanced-stage ovarian, fallopian tube and primary peritoneal cancer patients with non-Ashkenazi Jewish (non-AJ) BRCA mutations, 86 sporadic-matched and 414 sporadic-unmatched patients were analyzed. Compared to the sporadic-matched and sporadic-unmatched ovarian cancer patients, non-AJ BRCA mutation carriers had longer PFS (17.9 and 13.8 mos. vs. 32.0 mos.; HR 1.76 [95% CI 1.13–2.75] and 2.61 [95% CI 1.70–4.00]). In relation to the sporadic-unmatched patients, non-AJ BRCA patients had greater odds of complete response to initial chemotherapy (OR 2.25 [95% CI 1.17–5.41]) and improved OS (37.6 mos. vs. 101.4 mos.; HR 2.64 [95% CI 1.49–4.67]).

Conclusions. This study demonstrates a significant survival advantage in advanced-stage ovarian cancer patients with non-AJ BRCA mutations, confirming the previous studies in the Jewish population. Our efforts to account for "survival bias" by matching will continue with collaborative studies.
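PFS and OS in this study were summarized with the Kaplan-Meier method. A minimal pure-Python sketch of the product-limit estimator under the usual right-censoring assumptions; the follow-up times and event indicators below are hypothetical, not study data.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up in months; events: 1 = progression/death, 0 = censored.
    Returns (time, S(t)) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)    # events at time t
        c = sum(1 for tt, _ in data if tt == t)    # everyone leaving at t
        if d > 0:
            surv *= 1 - d / n_at_risk              # product-limit step
            curve.append((t, surv))
        n_at_risk -= c                             # censored leave after events
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# Hypothetical PFS data (months, event indicator):
times  = [6, 10, 13, 13, 18, 24, 30, 32, 40]
events = [1,  1,  1,  0,  1,  0,  1,  1,  0]
print(kaplan_meier(times, events))
```

Median PFS, as quoted in the abstract, is read off as the first time at which the estimated S(t) drops to 0.5 or below.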


Ascertaining the family health history (FHH) may provide insight into genetic and environmental susceptibilities specific to a variety of chronic diseases, including type II diabetes mellitus. However, discussion of FHH during patient-provider encounters has been limited and uncharacterized. A longitudinal, observational study was conducted to compare the content of FHH topics in a convenience sample of 37 patients, 13 new and 24 established. Each patient had an average of three follow-up encounters involving 6 staff physicians at the Audie L. Murphy Memorial Veterans Hospital (VHA) in San Antonio, TX from 2003 to 2005. A total of 131 encounters were analyzed in this study. The average age of the selected population was 68 years, and the sample included 35 males and 2 females. Transcriptions of encounters were obtained, coded and analyzed in NVIVO 8. Of the 131 encounters among the 37 patients, only 24 (18.3%) included discussion of FHH. Additionally, the relationship between FHH discussion and discussion of self-care management (SCM) topics was assessed. In this study, providers were more likely to initiate discussion of family health history with new patients in the first encounter (OR = 8.55, 95% CI: 1.49–52.90). The discussion of FHH occurred sporadically in established patients throughout the longitudinal study with no apparent pattern. Provider-initiated FHH discussion most frequently reached a satisfactory level of discussion, while patient-initiated FHH discussion most frequently remained at a minimal level. FHH discussion most often involved topics of cancer and cardiovascular disease among first-degree relatives. Overall, family health histories are a largely underutilized tool in personalized preventive care.


The natural history of placebo-treated travelers' diarrhea and the prognostic factors of recovery from diarrhea were evaluated using 9 groups of placebo-treated subjects from 9 clinical trial studies conducted since 1975, for use as a historical control in future clinical trials of antidiarrheal agents. All of these studies were done by the same group of investigators in one site (Guadalajara, Mexico). The studies are similar in terms of population, measured parameters, microbiologic identification of enteropathogens and definitions of parameters. The studies had two different durations of follow-up: in some studies subjects were followed for two days, and in some for five days. Using definitions established by the Infectious Diseases Society of America and the Food and Drug Administration, the following efficacy parameters were evaluated: time to last unformed stool (TLUS), number of unformed stools post-initiation of placebo treatment for five consecutive days of follow-up, microbiologic cure, and improvement of diarrhea. Among the groups that were followed for five days, the mean TLUS ranged from 59.1 to 83.5 hours. Fifty percent to 78% had diarrhea lasting more than 48 hours and 25% had diarrhea for more than five days. The mean number of unformed stools passed on the first day post-initiation of therapy ranged from 3.6 to 5.8 and, for the fifth day, ranged from 0.5 to 1.5. By the end of follow-up, diarrhea improved in 82.6% to 90% of the subjects. Subjects with enterotoxigenic E. coli had 21.6% to 90.0% microbiologic cure, and subjects with Shigella species experienced 14.3% to 60.0% microbiologic cure. To evaluate the prognostic factors of recovery from diarrhea (the primary efficacy parameter in evaluating the efficacy of antidiarrheal agents against travelers' diarrhea), the subjects from five studies were pooled and the Cox proportional hazards model was used to evaluate the predictors of prolonged diarrhea. After adjusting for design characteristics of each trial, fever with a rate ratio (RR) of 0.40, presence of invasive pathogens with a RR of 0.41, presence of severe abdominal pain and cramps with a RR of 0.50, number of watery stools more than five with a RR of 0.60, and presence of non-invasive pathogens with a RR of 0.84 predicted a longer duration of diarrhea. Severe vomiting, with a RR of 2.53, predicted a shorter duration of diarrhea. The number of soft stools, presence of fecal leukocytes, presence of nausea, and duration of diarrhea before enrollment were not associated with duration of diarrhea.
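The rate ratios from the Cox model can be given a rough duration interpretation: if recovery were an exponential process, the expected time to recovery would scale as the inverse of the recovery rate, so an RR of 0.40 would imply roughly a 2.5-fold longer expected duration. A small sketch under that strong simplifying assumption, using the RRs quoted above:

```python
# Recovery-rate ratios reported above (RR < 1 => slower recovery).
rate_ratios = {
    "fever": 0.40,
    "invasive pathogens": 0.41,
    "severe abdominal pain/cramps": 0.50,
    ">5 watery stools": 0.60,
    "non-invasive pathogens": 0.84,
    "severe vomiting": 2.53,
}

# Under an exponential recovery-time model, E[duration] is proportional
# to 1/rate, so each RR maps to a fold change in expected duration.
fold_changes = {k: 1 / rr for k, rr in rate_ratios.items()}
for factor, fold in fold_changes.items():
    print(f"{factor}: x{fold:.2f} expected duration")
```

This is only a heuristic reading; the Cox model itself is semi-parametric and does not assume exponential recovery times.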


The lipid content of three cores from Lake Enol (Picos de Europa National Park, Asturias, Northern Spain) was studied. The n-alkane profiles indicated a major input to the water body from terrigenous plants [predominance of high molecular weight (HMW) alkanes] since ca. 1695 AD, although the uppermost cm revealed a predominance of organic matter (OM) derived from algae, as the most abundant alkane was C17. Three units revealing different environmental conditions were defined. Unit A (ca. 1695–1860 AD) was identified in the lowermost parts of ENO13-10 (< 12 cm) and ENO13-15 (< 28 cm) and was characterized by higher OM input and evidence of minimal degradation (high CPI values, predominance of HMW n-alkanoic acids and good correspondence between the predominant n-alkane and n-alkanoic acid chains). These findings could be linked to the Little Ice Age, when cold and humid conditions may have favored an increase in total organic carbon (TOC) and n-alkane and n-alkanoic acid content (greater terrigenous OM in-wash), and may have also reduced bacterial activity. In Unit B (ca. 1860–1980 AD) the lack of correspondence between the n-alkane and n-alkanoic acid profiles of ENO13-10 (12–4 cm) and ENO13-15 (28–8 cm) suggested a certain preferential microbial synthesis of long-chain saturated fatty acids from primary OM and/or bacterial activity, coinciding with a decrease in OM input, which could be linked to the global warming that started in the second half of the 19th century. In ENO13-7 the low OM input (low TOC) was accompanied by some bacterial degradation (predominance of HMW n-alkanoic acids but with a bimodal distribution) in the lowermost 16–5 cm. Evidence of considerable phytoplankton productivity and microbial activity was especially significant in Unit C (ca. 1980–2013 AD), identified in the uppermost part of all three cores (5 cm in ENO13-7, 4 cm in ENO13-10 and 8 cm in ENO13-15), coinciding with higher concentrations of n-alkanes and n-alkanoic acids, which were considered to be linked to warmer and drier conditions, as well as to greater anthropogenic influence in modern times. Plant sterols, such as β-sitosterol, campesterol and stigmasterol, were significantly present in the cores. In addition, fecal stanols, such as 24-ethylcoprostanol from herbivores, were present, indicating a continuous and significant pollution input derived from these animals since the 17th century, one that has become more important in the last 20 years.
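The degradation inferences above lean on indices such as the carbon preference index (CPI), which quantifies the odd-over-even predominance of n-alkane chain lengths: fresh higher-plant waxes give high values, while degraded or petrogenic OM gives values near 1. A minimal sketch of one common (Bray and Evans-style) formulation — several CPI variants exist — with hypothetical abundances:

```python
def cpi(abundances, lo=25, hi=33):
    """Carbon preference index for n-alkanes (Bray & Evans form).
    abundances: {chain_length: concentration}. Odd chains lo..hi are
    ratioed against the two flanking even-chain windows and averaged."""
    odd = sum(abundances.get(c, 0.0) for c in range(lo, hi + 1, 2))
    even_lo = sum(abundances.get(c, 0.0) for c in range(lo - 1, hi, 2))
    even_hi = sum(abundances.get(c, 0.0) for c in range(lo + 1, hi + 2, 2))
    return 0.5 * (odd / even_lo + odd / even_hi)

# Hypothetical distribution with strong odd-over-even predominance,
# as expected for fresh terrigenous plant-wax input:
plant_wax = {25: 8, 26: 2, 27: 12, 28: 3, 29: 20, 30: 4, 31: 18, 32: 3, 33: 6}
print(f"CPI = {cpi(plant_wax):.1f}")
```

A CPI well above 1, as here, is the kind of value the authors read as minimal degradation in Unit A; values approaching 1 would indicate reworked or microbially altered material.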