955 results for MULTI-COMPONENT ISOTHERMS


Relevance: 30.00%

Abstract:

A methodology is presented to determine both the short-term and the long-term influence of spectral variations on the performance of Multi-Junction (MJ) solar cells and Concentrating Photovoltaic (CPV) modules. Component cells with the same optical behavior as MJ solar cells are used to characterize the spectrum. A set of parameters, namely Spectral Matching Ratios (SMRs), is used to characterize spectrally a particular Direct Normal Irradiance (DNI) by comparison with the reference spectrum (AM1.5D, ASTM G173-03). Furthermore, the spectrally corrected DNI for a given MJ solar cell technology is defined, providing a way to estimate the losses associated with spectral variations. The last section analyzes how the spectrum evolves throughout a year at a given location, and the set of SMRs representative of that location is calculated. This information can be used to maximize the energy harvested by the MJ solar cell throughout the year. As an example, three years of data recorded in Madrid show that losses lower than 5% are expected due to current mismatch for state-of-the-art MJ solar cells. [Peer-reviewed version of: R. Núñez, C. Domínguez, S. Askins, M. Victoria, R. Herrero, I. Antón, and G. Sala, "Determination of spectral variations by means of component cells useful for CPV rating and design," Prog. Photovolt: Res. Appl., 2015, published in final form at http://onlinelibrary.wiley.com/doi/10.1002/pip.2715/full; may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.]
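The SMR metric described above can be sketched numerically: an SMR for a pair of junctions is the ratio of their short-circuit current ratio under the measured spectrum to the same ratio under the AM1.5D reference. The function below is a minimal illustration; the sample currents are invented, not data from the paper.

```python
def smr(isc_top, isc_mid, isc_top_ref, isc_mid_ref):
    """Spectral Matching Ratio of the top/middle component-cell pair.

    SMR = (Isc_top / Isc_mid) measured / (Isc_top / Isc_mid) reference.
    SMR > 1 indicates a blue-rich spectrum, SMR < 1 a red-rich one.
    """
    return (isc_top / isc_mid) / (isc_top_ref / isc_mid_ref)

# Illustrative currents (A): a red-shifted spectrum starves the top junction
ratio = smr(1.10, 1.30, 1.20, 1.25)
print(round(ratio, 3))  # → 0.881 (red-rich relative to the reference)
```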

Relevance: 30.00%

Abstract:

When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation in the mixed-model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability, as well as different tests of significance for the random-effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction for the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
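For balanced multi-environment data, entry-mean heritability follows directly from the variance components, whichever formulation produced them. The sketch below uses the standard balanced-trial formula with illustrative component values, not estimates from any real trial.

```python
def entry_mean_heritability(v_g, v_ge, v_e, n_env, n_rep):
    """Broad-sense heritability on an entry-mean basis for a balanced trial.

    v_g  : genotypic variance
    v_ge : genotype-by-environment interaction variance
    v_e  : residual (plot) error variance
    """
    denom = v_g + v_ge / n_env + v_e / (n_env * n_rep)
    return v_g / denom

# Illustrative variance components (invented), 5 environments x 3 replicates
h2 = entry_mean_heritability(v_g=4.0, v_ge=2.0, v_e=8.0, n_env=5, n_rep=3)
print(round(h2, 3))  # → 0.811
```

Adding environments dilutes both the interaction and error terms, which is why heritability on an entry-mean basis rises with trial size.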

Relevance: 30.00%

Abstract:

We present a new approach accounting for the non-additivity of the attractive parts of the solid-fluid and fluid-fluid potentials to improve the description of nitrogen and argon adsorption isotherms on graphitized carbon black in the framework of non-local density functional theory (NLDFT). We show that the strong solid-fluid interaction in the first monolayer decreases the fluid-fluid interaction, which prevents the two-dimensional phase transition from occurring. This results in a smoother isotherm, which agrees much better with experimental data. In the region of multi-layer coverage, the conventional non-local density functional theory and grand canonical Monte Carlo simulations are known to over-predict the amount adsorbed relative to experimental isotherms. Accounting for the non-additivity factor decreases the solid-fluid interaction as intermolecular interactions in the dense adsorbed fluid increase, preventing the over-prediction of loading in the region of multi-layer adsorption. This improvement of the non-local density functional theory allows us to describe experimental nitrogen and argon isotherms on carbon black quite accurately, with a mean error of 2.5 to 5.8% instead of 17 to 26% with the conventional technique. With this approach, the local isotherms of model pores can be derived, and consequently a more reliable pore size distribution (PSD) can be obtained. We illustrate this by applying our theory to nitrogen and argon isotherms on a number of activated carbons. The fit between our model and the data is much better than with the conventional NLDFT, suggesting that the PSD obtained with our approach is more reliable.
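The PSD step described above amounts to solving the adsorption integral equation: the experimental isotherm is a weighted sum of local (single-pore) isotherms, and the non-negative weights form the pore size distribution. A toy sketch with Langmuir-shaped local isotherms solved by non-negative least squares; the kernel shapes, pore widths, and weights are invented for illustration, not NLDFT output.

```python
import numpy as np
from scipy.optimize import nnls

pressures = np.logspace(-4, 0, 40)            # relative pressures p/p0
pore_widths = np.array([0.7, 1.0, 1.5, 2.0])  # nm, illustrative grid

def local_isotherm(p, width):
    """Toy single-pore isotherm: narrower pores adsorb more strongly."""
    K = 50.0 / width
    return K * p / (1.0 + K * p)

# Kernel matrix: one column per pore width
A = np.column_stack([local_isotherm(pressures, w) for w in pore_widths])

true_psd = np.array([0.1, 0.4, 0.3, 0.2])     # "unknown" pore volumes
experimental = A @ true_psd                   # synthetic composite isotherm

# Recover the PSD by non-negative least squares
psd_est, residual = nnls(A, experimental)
```

On noiseless synthetic data the weights are recovered essentially exactly; real inversions are ill-posed and usually need regularization on top of the non-negativity constraint.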

Relevance: 30.00%

Abstract:

Adsorption of pure nitrogen, argon, acetone, chloroform and an acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of molecular layer structure theory (MLST). In the present version of the MLST, an adsorbed fluid is considered as a sequence of 2D molecular layers, whose Helmholtz free energies are obtained directly from the analysis of experimental adsorption isotherms of the pure components. The interaction of the nearest layers is accounted for in the framework of the mean-field approximation. This approach allows quantitative correlation of experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the range of multi-layer coverage up to 10 molecular layers. In the case of acetone and chloroform the approach also leads to excellent quantitative correlation of adsorption isotherms, while molecular approaches such as non-local density functional theory (NLDFT) fail to describe those isotherms. We extend our new method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, which allows us to predict mixture adsorption isotherms from pure-component adsorption isotherms. The approach, which accounts for the difference in composition between molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 °C. © 2005 Elsevier Ltd. All rights reserved.
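The abstract does not spell out the "simple mixing rule" for the mixed-layer Helmholtz free energy. A common minimal choice, shown here purely as an illustration and not as the paper's actual rule, is a mole-fraction-weighted sum of the pure-component energies plus the ideal entropy of mixing.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def layer_free_energy_mix(x1, f1, f2, temperature):
    """Illustrative ideal mixing rule for one molecular layer:
    F_mix = x1*F1 + x2*F2 + RT*(x1*ln(x1) + x2*ln(x2)).
    Not the paper's exact rule; pure-component energies are toy values."""
    x2 = 1.0 - x1
    mixing_entropy_term = R * temperature * (x1 * math.log(x1) + x2 * math.log(x2))
    return x1 * f1 + x2 * f2 + mixing_entropy_term

# Equimolar layer at 323.15 K (50 °C) with invented pure-component energies (J/mol)
f_mix = layer_free_energy_mix(0.5, -12000.0, -15000.0, 323.15)
```

The ideal entropy term is negative, so mixing lowers the layer free energy below the linear combination of the pure components; a non-ideal system like acetone-chloroform would add an excess term on top of this.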

Relevance: 30.00%

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface in addition to both surfaces of the crystalline lens (multi-meridional still flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25 DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50 DC) and crystalline lens thickness (up to 1.00 DC). In each case astigmatic contributions were predominantly direct.
More astigmatism arose from the posterior corneal surface (up to 1.00 DC) and both crystalline lens surfaces (up to 2.50 DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and for males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental error. However, these errors are random in nature, so group-averaged data were found to be reasonably repeatable. A further confirmatory study carried out on 10 individuals demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
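Combining obliquely related cylinders, as in computing scheme (3) above, is conveniently done with Thibos-style power vectors: each spherocylinder maps to (M, J0, J45), the vectors add, and the sum maps back. The sketch below assumes the standard minus-cylinder convention; it illustrates the general technique, not the thesis' exact computation scheme.

```python
import math

def to_power_vector(sph, cyl, axis_deg):
    """Spherocylinder -> (M, J0, J45) power vector (Thibos notation)."""
    th = math.radians(axis_deg)
    return (sph + cyl / 2.0,
            -(cyl / 2.0) * math.cos(2 * th),
            -(cyl / 2.0) * math.sin(2 * th))

def from_power_vector(m, j0, j45):
    """Power vector -> spherocylinder in minus-cylinder form."""
    cyl = -2.0 * math.hypot(j0, j45)
    axis = math.degrees(0.5 * math.atan2(j45, j0)) % 180.0
    return m - cyl / 2.0, cyl, axis

# Two equal crossed cylinders: plano -1.00 x 0 combined with plano -1.00 x 90
v1 = to_power_vector(0.0, -1.00, 0)
v2 = to_power_vector(0.0, -1.00, 90)
total = tuple(p + q for p, q in zip(v1, v2))
s, c, a = from_power_vector(*total)  # s ≈ -1.00 DS, c ≈ 0 (pure sphere)
```

Because the (M, J0, J45) components add linearly, contributions from surfaces with obliquely related axes can be accumulated surface by surface and converted back to sphere/cylinder/axis only at the end.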

Relevance: 30.00%

Abstract:

Guest editorial

Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Sector Management. He is on the editorial board of several international journals and co-founder of Performance Improvement Management Software.

William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector

This special issue focuses on holistic, applied research on performance measurement in energy sector management and publishes relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation in 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil-fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates, and to provide bias-corrected estimates and confidence intervals for them. The author reveals from the sample that the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and about 60 percent of the oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA), to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as the outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means with verbal, numerical, and visual representation of their preferences. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomic factors, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant changes in a firm's efficiency following an acquisition, and only weak evidence for efficiency improvements caused by the new shareholder. Besides, the author discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
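The DEA models surveyed above share a common core: each decision-making unit (DMU) receives an efficiency score from a small linear program. A minimal input-oriented CCR envelopment sketch with toy data (one input, one output, three DMUs; all numbers are invented):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of DMU o.

    min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                     sum_j lam_j * y_j >= y_o,   lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n].
    """
    x, y = np.atleast_2d(inputs), np.atleast_2d(outputs)  # shapes (m, n), (s, n)
    n = x.shape[1]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.hstack([-x[:, [o]], x])               # lam.x - theta*x_o <= 0
    A_out = np.hstack([np.zeros((y.shape[0], 1)), -y])  # -lam.y <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(x.shape[0]), -y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

x = [[2.0, 4.0, 8.0]]   # single input per DMU
y = [[2.0, 4.0, 4.0]]   # single output per DMU
scores = [ccr_efficiency(x, y, o) for o in range(3)]
```

With these numbers the first two DMUs sit on the constant-returns frontier (score 1.0) while the third, producing 4 units of output from 8 of input against a best observed ratio of 1:1, scores 0.5. Undesirable outputs, as in the papers above, require extensions of this basic model.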

Relevance: 30.00%

Abstract:

While the literature has suggested the possibility of breach being composed of multiple facets, no previous study has investigated this possibility empirically. This study examined the factor structure of typical component forms in order to develop a multiple-component-form measure of breach. Two studies were conducted. In study 1 (N = 420), multi-item measures based on causal indicators representing promissory obligations were developed for the five potential component forms (delay, magnitude, type/form, inequity and reciprocal imbalance). Exploratory factor analysis showed that the five components loaded onto one higher-order factor, namely psychological contract breach, suggesting that breach is composed of different aspects rather than types of breach. Confirmatory factor analysis provided further evidence for the proposed model. In addition, the model achieved high construct reliability and showed good construct, convergent, discriminant and predictive validity. Study 2 data (N = 189), used to validate study 1 results, compared the multiple-component measure with an established multiple-item measure of breach (rather than a single item as in study 1) and also tested for discriminant validity with an established multiple-item measure of violation. Findings replicated those in study 1. The findings have important implications for considering alternative, more comprehensive and elaborate ways of assessing breach.
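"High construct reliability" in study 1 can be made concrete with the usual composite-reliability formula computed from standardized factor loadings; the five loadings below are invented for illustration, not the study's estimates.

```python
def composite_reliability(loadings):
    """Composite (construct) reliability from standardized loadings:
    CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    error_variance = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + error_variance)

# Illustrative standardized loadings of five breach components on one factor
cr = composite_reliability([0.70, 0.80, 0.75, 0.72, 0.78])
print(round(cr, 3))  # → 0.866
```

Values above roughly 0.70 are conventionally read as adequate construct reliability for a reflective higher-order factor.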

Relevance: 30.00%

Abstract:

The objectives of this research are to analyze and develop a modified Principal Component Analysis (PCA) and to develop a two-dimensional PCA with applications in image processing. PCA is a classical multivariate technique whose mathematical treatment is based purely on the eigensystem of positive-definite symmetric matrices. Its main function is to statistically transform a set of correlated variables into a new set of uncorrelated variables over R^n by retaining most of the variation present in the original variables. The variances of the Principal Components (PCs) obtained from the modified PCA form a correlation matrix of the original variables. The decomposition of this correlation matrix into a diagonal matrix produces an orthonormal basis that can be used to linearly transform the given PCs. It is this linear transformation that reproduces the original variables. The two-dimensional PCA can be devised as two successive applications of one-dimensional PCA. It can be shown that, for an m × n matrix, the PCs obtained from the two-dimensional PCA are the singular values of that matrix. In this research, several applications for image analysis based on PCA are developed, i.e., edge detection, feature extraction, and multi-resolution PCA decomposition and reconstruction.
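The claimed link between two-dimensional PCA and singular values can be checked numerically: the eigenvalues of the Gram matrix of a centered data matrix are exactly the squared singular values of that matrix, which is the algebraic bridge between successive 1D PCAs and the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))          # toy "image" / data matrix
Xc = X - X.mean(axis=0)                  # column-center, as PCA requires

gram_eigvals = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]   # descending eigenvalues
singular_vals = np.linalg.svd(Xc, compute_uv=False)  # descending singular values

# Squared singular values of Xc equal the eigenvalues of Xc^T Xc
print(np.allclose(singular_vals**2, gram_eigvals))  # → True
```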

Relevance: 30.00%

Abstract:

A major challenge of modern teams lies in coordinating the efforts not just of individuals within a team, but also of teams whose efforts are ultimately entwined with those of other teams. Despite this fact, much of the research on work teams fails to consider the external dependencies that exist in organizational teams and instead focuses on internal, within-team processes. Multi-Team Systems Theory is used as a theoretical framework for understanding teams-of-teams organizational forms (Multi-Team Systems; MTSs), and leadership teams are proposed as one remedy that enables MTS members to dedicate needed resources to intra-team activities while ensuring effective synchronization of between-team activities. Two functions of leader teams were identified: strategy development and coordination facilitation; and a model was developed delineating the effects of the two leader roles on multi-team cognitions, processes, and performance. Three hundred eighty-four undergraduate psychology and business students participated in a laboratory simulation that modeled an MTS; each MTS comprised three two-member teams, each performing distinct but interdependent components of an F-22 battle simulation task. Two roles of leader teams supported in the literature were manipulated through training in a 2 (strategy training vs. control) × 2 (coordination training vs. control) design. Multivariate analysis of variance (MANOVA) and mediated regression analysis were used to test the study's hypotheses. Results indicate that both training manipulations produced differences in the effectiveness of the intended form of leader behavior. The enhanced leader strategy training resulted in more accurate (but not more similar) MTS mental models, better inter-team coordination, and higher levels of multi-team (but not component team) performance.
Moreover, mental model accuracy fully mediated the relationship between leader strategy and inter-team coordination, and inter-team coordination fully mediated the effect of leader strategy on multi-team performance. Leader coordination training led to better inter-team coordination, but not to higher levels of either team or multi-team performance. Mediated Input-Process-Output (I-P-O) relationships were not supported with leader coordination; rather, leader coordination facilitation and inter-team coordination uniquely contributed to component-team and multi-team performance. The implications of these findings and future research directions are also discussed.
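The mediated-regression logic above (strategy training → mental-model accuracy → inter-team coordination) can be sketched as a product-of-coefficients mediation test on synthetic data; the variable names, effect sizes, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
strategy = rng.standard_normal(n)                              # X: leader strategy
accuracy = 0.5 * strategy + 0.3 * rng.standard_normal(n)       # M: mental-model accuracy
coordination = 0.7 * accuracy + 0.3 * rng.standard_normal(n)   # Y: inter-team coordination

def slopes(x_cols, y):
    """OLS coefficients with an intercept; returns the slopes only."""
    X = np.column_stack([np.ones(len(y))] + x_cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes([strategy], accuracy)[0]                  # X -> M path
b = slopes([accuracy, strategy], coordination)[0]    # M -> Y path, controlling for X
indirect_effect = a * b                              # ~0.35 = 0.5 * 0.7 by construction
```

Because Y depends on X only through M here, the direct path shrinks toward zero while the product a*b captures the full effect, which is the signature of full mediation reported in the study.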

Relevance: 30.00%

Abstract:

The subtropical hardwood forests of southern Florida are formed by 120 frost-sensitive, broadleaved angiosperm species that range throughout the Caribbean. Previous work on a series of small forest patches in a 20 km² forest preserve in northern Key Largo indicated that a shift in species composition was associated with a 100-year forest developmental sequence, and that this shift was associated with an increasingly evergreen canopy. This document investigates the underlying differences in the biology of trees that live in this habitat, and is specifically focused on the impact of leaf morphology on changing nutrient cycling patterns. Measurements of the area, thickness, dry mass, nutrient content and longevity of leaves from 3-4 individuals of ten species were conducted in combination with a two-year leaf litter collection and nutrient analysis. These showed that species with thicker, denser leaves cycled scarce nutrients up to 2-3 times more efficiently than thin-leaved tree species, and that the leaf thickness/density index predicts a species' role in forest development in the same direction as it predicts nutrient cycling efficiency. A three-year set of observations on the relative abundance of new leaves, flowers and fruits of the same tree species provides an opportunity to evaluate the consequences of the leaf morphology/nutrient cycling/forest development relationship for forest habitat quality. Results of the three documents support a mechanistic link between forest development and nutrient cycling, and suggest that older forests are likely to be better habitats based on the year-round availability of valuable forest products like new leaves, flowers, and fruits.
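The nutrient-cycling-efficiency comparison can be illustrated with a Vitousek-style litterfall index: nutrient use efficiency is litter dry mass divided by the nutrient returned in that litter, so species that withdraw more nutrient before leaf fall score higher. The masses below are invented, not measurements from this study.

```python
def nutrient_use_efficiency(litter_dry_mass_g, litter_nitrogen_g):
    """Vitousek-style NUE: dry mass produced per unit nitrogen lost in litterfall."""
    return litter_dry_mass_g / litter_nitrogen_g

# Same annual litterfall mass, but the thick-leaved species resorbs more N first
thin_leaved = nutrient_use_efficiency(500.0, 5.0)   # 100 g dry mass per g N
thick_leaved = nutrient_use_efficiency(500.0, 2.0)  # 250 g dry mass per g N
print(thick_leaved / thin_leaved)  # thick-leaved species is 2.5x more efficient
```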

Relevance: 30.00%

Abstract:

Diabetes self-management, an essential component of diabetes care, includes weight control practices and requires guidance from providers. Minorities are likely to have less access to quality health care than White non-Hispanics (WNH) (American College of Physicians-American Society of Internal Medicine, 2000). Medical advice received and understood may differ by race/ethnicity as a consequence of the patient-provider communication process, and may affect diabetes self-management. This study examined the relationships among participants' reports of: (1) medical advice given; (2) diabetes self-management; and (3) health outcomes for Mexican-Americans (MA) and Black non-Hispanics (BNH) as compared to WNH (reference group), using data available through the National Health and Nutrition Examination Survey (NHANES) for the years 2007-2008. This study was a secondary, single-point analysis. Approximately 30 datasets were merged, and their quality and integrity were assured by analysis of frequency, range and quartiles. Subjects were extracted based on the following inclusion criteria: belonging to the MA, BNH or WNH categories; being 21 years or older; and responding yes to being diagnosed with diabetes. A final sample of 654 adults [MA (131); BNH (223); WNH (300)] was used for the analyses. The findings revealed statistically significant differences in medical advice reported given. BNH [OR = 1.83 (1.16, 2.88), p = 0.013] were more likely than WNH to report being told to reduce fat or calories. Similarly, BNH [OR = 2.84 (1.45, 5.59), p = 0.005] were more likely than WNH to report that they were told to increase their physical activity. Mexican-Americans were less likely to self-monitor their blood glucose than WNH [OR = 2.70 (1.66, 4.38), p < 0.001]. There were differences among ethnicities in reporting recent diabetes education. Black non-Hispanics were twice as likely as WNH to report receiving diabetes education [OR = 2.29 (1.36, 3.85), p = 0.004].
Medical advice reported given and ethnicity/race together predicted several health outcomes. Having recent diabetes education increased the likelihood of performing several diabetes self-management behaviors, independent of race. These findings indicate a need for patient-provider communication and care to be assessed for effectiveness, and underscore the importance of ongoing diabetes education for persons with diabetes.
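Odds ratios like those reported above come straight from logistic-regression coefficients: OR = exp(β), with a 95% confidence interval of exp(β ± 1.96·SE). A sketch with an invented coefficient and standard error chosen to roughly match the first result:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# beta = ln(1.83); SE chosen so the CI spans roughly (1.16, 2.88)
or_, lo, hi = odds_ratio_ci(0.604, 0.232)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.83 1.16 2.88
```

An interval that excludes 1.0, as here, corresponds to a statistically significant difference from the reference group.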

Relevance: 30.00%

Abstract:

There is growing popularity in the use of composite indices and rankings for cross-organizational benchmarking. However, little attention has been paid to alternative methods and procedures for computing these indices, and to how the choice of method may affect the resulting indices and rankings. This dissertation developed an approach for assessing composite indices and rankings based on the integration of a number of methods for aggregation, data transformation and attribute weighting involved in their computation. The integrated model developed is based on the simulation of composite indices using methods and procedures proposed in the areas of multi-criteria decision making (MCDM) and knowledge discovery in databases (KDD). The approach developed in this dissertation was automated through an IT artifact that was designed, developed and evaluated based on the framework and guidelines of the design science paradigm of information systems research. This artifact dynamically generates multiple versions of indices and rankings by considering different methodological scenarios according to user-specified parameters. The computerized implementation was done in Visual Basic for Excel 2007. Using different performance measures, the artifact produces a number of Excel outputs for the comparison and assessment of the indices and rankings. In order to evaluate the efficacy of the artifact and its underlying approach, a full empirical analysis was conducted using the World Bank's Doing Business database for the year 2010, which includes ten sub-indices (each corresponding to a different area of the business environment and regulation) for 183 countries. The output results, obtained using 115 methodological scenarios for the assessment of this index and its ten sub-indices, indicated that the variability of the component indicators considered in each case influenced the sensitivity of the rankings to the methodological choices.
Overall, the results of our multi-method assessment were consistent with the World Bank rankings, except in cases where the indices involved cost indicators measured in per capita income, which yielded more sensitive results. Low-income countries exhibited more sensitivity in their rankings, and less agreement between the benchmark rankings and our multi-method-based rankings, than higher-income country groups.
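The methodological sensitivity the dissertation explores can be reproduced in miniature: with min-max-normalized indicators, switching the aggregation rule from an arithmetic to a geometric mean (i.e., from fully to only partially compensatory) can reorder the ranking. The three units and their scores below are invented.

```python
scores = {  # already min-max normalized sub-indicator scores, two indicators each
    "A": (0.95, 0.20),  # strong on one indicator, weak on the other
    "B": (0.50, 0.55),  # balanced
    "C": (0.40, 0.70),
}

arithmetic = {k: sum(v) / len(v) for k, v in scores.items()}
geometric = {k: (v[0] * v[1]) ** 0.5 for k, v in scores.items()}

rank_arith = sorted(arithmetic, key=arithmetic.get, reverse=True)
rank_geom = sorted(geometric, key=geometric.get, reverse=True)
print(rank_arith, rank_geom)  # → ['A', 'C', 'B'] ['C', 'B', 'A']
```

Unit A tops the arithmetic ranking because its strength fully compensates its weakness, but drops to last under the geometric mean, which penalizes unbalanced profiles; this is exactly the kind of scenario-dependent reordering the artifact enumerates at scale.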

Relevance: 30.00%

Abstract:

Dissolved organic matter (DOM) in groundwater and surface water samples from the Florida coastal Everglades was studied using excitation-emission matrix fluorescence modeled through parallel factor analysis (EEM-PARAFAC). DOM in both surface water and groundwater from the eastern Everglades S332 basin reflected a terrestrial fingerprint through dominantly higher abundances of humic-like PARAFAC components. In contrast, surface water DOM from northeastern Florida Bay featured a microbial DOM signature, based on the higher abundance of microbial humic-like and protein-like components, consistent with its marine source. Surprisingly, groundwater DOM from northeastern Florida Bay reflected a terrestrial source, except for samples from the central Florida Bay well, which mirrored a combination of terrestrial and marine end-member origins. Furthermore, surface water and groundwater displayed the effects of different degradation pathways, such as photodegradation and biodegradation, as exemplified by two PARAFAC components seemingly indicative of such degradation processes. Finally, Principal Component Analysis of the EEM-PARAFAC data was able to distinguish and classify most of the samples according to DOM origin and the degradation processes experienced, except for a small overlap of S332 surface water and groundwater, implying rather active surface-to-groundwater interaction at some sites, particularly during the rainy season. This study highlights that EEM-PARAFAC can be used successfully to trace and differentiate DOM from diverse sources across both horizontal and vertical flow profiles, and as such could be a convenient and useful tool for better understanding hydrological interactions and carbon biogeochemical cycling.
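Underlying EEM-PARAFAC is the trilinear model X_ijk = Σ_r a_ir·b_jr·c_kr, where for each component r the three modes correspond to per-sample component abundance, emission loading, and excitation loading. A minimal numpy sketch of that model structure, with random factors purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_emission, n_excitation, n_components = 5, 20, 10, 3

A = rng.random((n_samples, n_components))     # component abundance per sample
B = rng.random((n_emission, n_components))    # emission-mode loadings
C = rng.random((n_excitation, n_components))  # excitation-mode loadings

# Trilinear PARAFAC reconstruction of the EEM data cube
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Any single entry is the sum over components of the three factor products
entry = sum(A[1, r] * B[2, r] * C[3, r] for r in range(n_components))
print(np.isclose(X[1, 2, 3], entry))  # → True
```

Fitting PARAFAC means recovering A, B, and C from a measured cube X (typically by alternating least squares); once fitted, rows of A give the per-sample component abundances used for source discrimination above.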

Relevance: 30.00%

Abstract:

Paleostudies of the Indonesian Throughflow (ITF) are largely based on temperature and salinity reconstructions of its near-surface component, whereas the variability of its lower thermocline flow has rarely been investigated. We present a multi-proxy record of planktonic and benthic foraminiferal δ18O, Mg/Ca-derived surface and lower thermocline temperatures, and X-ray fluorescence (XRF)-derived runoff and sediment winnowing for the past 130 ka in marine sediment core SO18471. Core SO18471, retrieved from a water depth of 485 m at the southern edge of the Timor Strait close to the Sahul Shelf, sits in a strategic position to reconstruct variations in both the ITF surface and lower thermocline flow, as well as to investigate hydrological changes related to monsoon variability and shelf dynamics over time. Sediment winnowing demonstrates that the ITF thermocline flow intensified during MIS 5d-a and MIS 1. In contrast, during MIS 5e winnowing was reduced and terrigenous input increased, suggesting intensification of the local wet monsoon and a weaker ITF. Lower thermocline warming during globally cold periods (MIS 4-MIS 2) appears to be related to a weaker and contracted thermocline ITF and the advection of warm and salty Indian Ocean waters.
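Mg/Ca-derived temperatures like those above rest on an exponential calibration of the form Mg/Ca = B·exp(A·T). The constants shown are the widely used Anand et al. (2003) planktonic values; which calibration core SO18471 actually used is an assumption here, and the sample ratio is invented.

```python
import math

def mgca_temperature(mgca_mmol_mol, A=0.09, B=0.38):
    """Invert Mg/Ca = B * exp(A * T) for calcification temperature (°C).

    Defaults are the Anand et al. (2003) planktonic foraminifera calibration;
    benthic species and cleaning protocols require different constants.
    """
    return math.log(mgca_mmol_mol / B) / A

t = mgca_temperature(3.0)   # illustrative shell Mg/Ca of 3.0 mmol/mol
print(round(t, 1))  # → 23.0
```

The exponential form means a fixed analytical uncertainty in Mg/Ca translates into a roughly constant temperature uncertainty across the calibration range.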

Relevance: 30.00%

Abstract:

Within the scope of Russian-German palaeoenvironmental research, Two-Yurts Lake (TYL, Dvuh-Yurtochnoe in Russian) was chosen as the main scientific target area for deciphering Holocene climate variability on Kamchatka. The 5 × 2 km large and 26 m deep lake is of proglacial origin and is situated on the eastern flank of Sredinny Ridge at the northwestern end of the Central Kamchatka Valley, outside the direct influence of active volcanism. Here, we present results of a multi-proxy study on sediment cores spanning about the last 7000 years. The general tenor of the TYL record is an increase in continentality and winter snow cover, in conjunction with a decrease in temperature, humidity, and biological productivity after 5000-4500 cal yrs BP, inferred from pollen and diatom data and the isotopic composition of organic carbon. The TYL proxy data also show that the late Holocene was punctuated by two colder spells, roughly between 4500 and 3500 cal yrs BP and between 1000 and 200 cal yrs BP, as local expressions of the Neoglacial and the Little Ice Age, respectively. These environmental changes can be regarded as direct and indirect responses to climate change, as also demonstrated by other records in the regional terrestrial and marine realm. Long-term climate deterioration was driven by decreasing insolation, while the short-term climate excursions are best explained by local climatic processes. The latter affect the configuration of atmospheric pressure systems that control the sources, as well as the temperature and moisture, of air masses reaching Kamchatka.