916 results for Short-term generation scheduling
Abstract:
In a cohort study among 2751 members (71.5% female) of the German and Swiss RLS patient organizations, changes in restless legs syndrome (RLS) severity over time were assessed and the impact on quality of life, sleep quality and depressive symptoms was analysed. A standard set of scales (the IRLS severity scale, SF-36, Pittsburgh Sleep Quality Index and the Centre for Epidemiologic Studies Depression Scale) in mailed questionnaires was used repeatedly to assess RLS severity and health status over time, and a 7-day diary was used once to assess short-term variations. A clinically relevant change in RLS severity was defined as a change of at least 5 points on the IRLS scale. During 36 months of follow-up, minimal improvement in RLS severity between assessments was observed. Men consistently reported higher severity scores. RLS severity increased with age, reaching a plateau in the 45-54 years age group. Over 3 years, 60.2% of the participants had no relevant (±5 points) change in RLS severity. RLS worsening was significantly related to an increase in depressive symptoms and a decrease in sleep quality and quality of life. The short-term variation showed distinctive circadian patterns, with rhythm magnitudes strongly related to RLS severity. The majority of participants had a stable course of severe RLS over three years. An increase in RLS severity was accompanied by a small to moderate negative influence, and a decrease by a small positive influence, on quality of life, depressive symptoms and sleep quality.
Abstract:
Previous research agrees that approach goals have positive effects whereas avoidance goals have negative effects on performance. By contrast, the present chapter looks at the conditions under which even avoidance goals may have positive effects on performance. We first review the previous research that supports the positive consequences of avoidance goals. We then argue that the positive and negative consequences of approach and avoidance goals on performance depend on an individual's neuroticism level and the time frame of their goal striving. Because neuroticism is positively related to avoidance goals, we assume that individuals with high levels of neuroticism may derive some benefits from avoidance goals. We have specified this assumption by hypothesizing that the fit between an individual's level of neuroticism and their avoidance goals leads to favorable consequences in the short term, but to negative outcomes in the long run. A short-term experimental study with employees and a long-term correlational field study with undergraduate students were conducted to test whether neuroticism moderates the short- and long-term effects of avoidance versus approach goals on performance. Experimental Study 1 showed that individuals with a high level of neuroticism performed best in the short term when they were assigned avoidance goals, whereas individuals with a low level of neuroticism performed best when pursuing approach goals. However, Study 2 indicated that in the long run individuals with a high level of neuroticism performed worse when striving for avoidance goals, whereas individuals with a low level of neuroticism were not impaired at all by avoidance goals. In summary, the pattern of results supports the hypothesis that a fit between a high level of neuroticism and avoidance goals has positive consequences in the short term, but leads to negative outcomes in the long run. We strongly encourage further research to investigate the short- and long-term effects of approach and avoidance goals on performance in conjunction with an individual's personality, which may moderate these effects.
Abstract:
Undergraduate research programs have been used as a tool to attract and retain student interest in science careers. This study evaluates the short- and long-term benefits of a Summer Science Internship (SSI) at the University of Texas Health Science Center at Houston School of Public Health in Brownsville, Texas, by analyzing survey data from alumni. Questions assessing short-term program impact addressed three main topics: student satisfaction with the program, science self-efficacy after completing the program, and perceived benefits. Long-term program impact was assessed by looking at student school attendance and college majors, along with perceived links between SSI and future college plans. Students reported high program satisfaction, a significant increase in science self-efficacy and high perceived benefits. At the time data were collected for the study, one hundred percent of alumni were enrolled in school (high school or college). The majority of students indicated they were interested in completing a science major/career, heavily influenced by their participation in the program.
Abstract:
Increasing pCO2 (partial pressure of CO2) in an "acidified" ocean will affect phytoplankton community structure, but manipulation experiments with assemblages briefly acclimated to simulated future conditions may not accurately predict the long-term evolutionary shifts that could affect inter-specific competitive success. We assessed community structure changes in a natural mixed dinoflagellate bloom incubated at three pCO2 levels (230, 433, and 765 ppm) in a short-term experiment (2 weeks). The four dominant species were then isolated from each treatment into clonal cultures and maintained at all three pCO2 levels for approximately 1 year. Periodically (4, 8, and 12 months), these pCO2-conditioned clones were recombined into artificial communities and allowed to compete at their conditioning pCO2 level or at higher and lower levels. The dominant species in these artificial communities of CO2-conditioned clones differed from those in the original short-term experiment, but individual species' relative abundance trends across pCO2 treatments were often similar. Specific growth rates showed no strong evidence for fitness increases attributable to conditioning pCO2 level. Although pCO2 significantly structured our experimental communities, conditioning time and biotic interactions such as mixotrophy also had major roles in determining competitive outcomes. New methods of carrying out extended mixed-species experiments are needed to accurately predict future long-term phytoplankton community responses to changing pCO2.
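The abstract reports specific growth rates for the conditioned clones without giving the calculation; for reference, the standard way to obtain a specific growth rate from two cell-density measurements (a generic formula, not a detail stated in the abstract) is

\mu = \frac{\ln N_{t_2} - \ln N_{t_1}}{t_2 - t_1}

where N_{t_1} and N_{t_2} are cell densities at times t_1 and t_2, giving \mu in units of day^{-1} when time is measured in days.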
Abstract:
Ocean acidification and greenhouse warming will interactively influence competitive success of key phytoplankton groups such as diatoms, but how long-term responses to global change will affect community structure is unknown. We incubated a mixed natural diatom community from coastal New Zealand waters in a short-term (two-week) incubation experiment using a factorial matrix of warming and/or elevated pCO2 and measured effects on community structure. We then isolated the dominant diatoms in clonal cultures and conditioned them for 1 year under the same temperature and pCO2 conditions from which they were isolated, in order to allow for extended selection or acclimation by these abiotic environmental change factors in the absence of interspecific interactions. These conditioned isolates were then recombined into 'artificial' communities modelled after the original natural assemblage and allowed to compete under conditions identical to those in the short-term natural community experiment. In general, the resulting structure of both the unconditioned natural community and conditioned 'artificial' community experiments was similar, despite differences such as the loss of two species in the latter. pCO2 and temperature had both individual and interactive effects on community structure, but temperature was more influential, as warming significantly reduced species richness. In this case, our short-term manipulative experiment with a mixed natural assemblage spanning weeks served as a reasonable proxy to predict the effects of global change forcing on diatom community structure after the component species were conditioned in isolation over an extended timescale. Future studies will be required to assess whether or not this is also the case for other types of algal communities from other marine regimes.
Abstract:
This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This objective of consumption smoothing entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, so a high penetration rate of photovoltaic electricity is detrimental to grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields.
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach that goal are different. In the local framework, the DSM algorithm uses only local information; it does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximizing the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to the smoothing of the aggregated consumption. The effects of local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it, because the algorithm considers only local variables. The results suggest that coordination between facilities is required: through this coordination, consumption should be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information.
This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids. This implies that a coordination mechanism between facilities is required, and this Thesis seeks to minimize the amount of information needed for that coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is itself a novel approach, so this coordination objective is a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the grid consumption in proportion to the amount of energy controlled by the algorithm: the greater the amount of controlled energy, the greater the efficiency improvement of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm:
• Robustness: in a centralized system, a failure of the central node causes a malfunction of the whole system. Managing the grid in a distributed way implies that there is no central control node, so a failure in any single facility does not affect the overall operation of the grid.
• Data privacy: with a distributed topology there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behavior, making the coordination between facilities completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, so new facilities can be incorporated without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility computes its own management with low computational requirements, so no central node with high computational power is needed.
• Quick deployment: the scalability and low-cost features of the proposed DSM algorithm allow a quick deployment; no complex deployment schedule is required.
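The abstract names coupled oscillators and swarm intelligence as the coordination techniques but gives none of the underlying equations. The sketch below is only a minimal illustration of the coupled-oscillator idea applied to load scheduling, using a DESYNC-style phase-desynchronization update (a classical technique, named here explicitly because it is not necessarily the one used in the Thesis); every name, parameter and threshold is invented for the example.

# Minimal illustrative sketch, NOT the algorithm from the Thesis: facilities
# coordinate flexible loads with a DESYNC-style coupled-oscillator update,
# so load events spread evenly over the daily cycle instead of coinciding.
# All names, parameters and thresholds below are invented for the example.
import math
import random

N_FACILITIES = 50                      # participating facilities (assumed)
STEPS_PER_DAY = 288                    # 5-minute time steps
OMEGA = 2 * math.pi / STEPS_PER_DAY    # common phase advance: one cycle per day
ALPHA = 0.2                            # strength of the neighbour coupling (assumed)
FIRE_WINDOW = 4 * OMEGA                # a facility runs its flexible load near phase 0

random.seed(1)
# Start almost synchronized: uncoordinated facilities would all switch on together.
phases = [random.uniform(0.0, 0.05) for _ in range(N_FACILITIES)]

def desync_step(phases, alpha):
    """Advance every phase by OMEGA and nudge it toward the midpoint of its two
    phase-neighbours; this drives the phases toward an even spread on the cycle."""
    n = len(phases)
    order = sorted(range(n), key=lambda i: phases[i])
    updated = list(phases)
    for rank, i in enumerate(order):
        prev_phase = phases[order[(rank - 1) % n]]
        next_phase = phases[order[(rank + 1) % n]]
        if prev_phase > phases[i]:      # unwrap across the 0 / 2*pi boundary
            prev_phase -= 2 * math.pi
        if next_phase < phases[i]:
            next_phase += 2 * math.pi
        midpoint = (prev_phase + next_phase) / 2.0
        updated[i] = (phases[i] + alpha * (midpoint - phases[i]) + OMEGA) % (2 * math.pi)
    return updated

def simultaneous_loads(phases):
    """Number of facilities whose flexible load is switched on at this step."""
    return sum(1 for theta in phases if theta < FIRE_WINDOW)

daily_peaks = []
for day in range(3):
    peak = 0
    for _ in range(STEPS_PER_DAY):
        phases = desync_step(phases, ALPHA)
        peak = max(peak, simultaneous_loads(phases))
    daily_peaks.append(peak)

# The peak number of coinciding flexible loads should fall day by day as the
# phases spread, which is the flattening effect the coordination aims at.
print("peak simultaneous flexible loads per day:", daily_peaks)

Run as-is, the reported peak number of simultaneous flexible loads should fall from the first simulated day to the last, which is the qualitative behaviour the abstract describes: a flatter aggregated consumption as loads are coordinated.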
Abstract:
Container terminals are complex systems in which a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in port. As a result, operations planning is becoming more complex, and the tolerances for errors are smaller. In this context, operational disturbances must be kept to a minimum. The main sources of operational disruptions, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes. The latter may lead not only to operational delays but also to collateral and reputational damage or loss of time (especially management time), all of which has an impact on the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals that use increasing levels of automation. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of decision-support tools, since these will then be adapted to the real climatic and operational conditions that exist on site. For the short term, a methodology is proposed to obtain forecasts of operational parameters in container terminals; a case study is presented in which forecasts of vessel productivity are obtained. This research has been based entirely on data gathered from a semi-automated container terminal in Spain. On the other hand, the study analyses how to manage, evaluate and mitigate operational disruptions in the long term by means of risk assessment, an interesting approach for evaluating the effect of uncertain but likely events on the long-term throughput of the terminal. In addition, a definition of operational risk in port facilities is proposed, together with a discussion of the terms that best represent the nature of the activities involved; finally, guidelines to manage the results obtained are provided.
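The abstract describes a short-term forecasting methodology for operational parameters such as vessel productivity without specifying the underlying model. As a purely generic stand-in (not the method developed in this work), the snippet below shows the shape of such a short-term forecast using simple exponential smoothing over recent productivity readings; the data and parameter values are invented.

# Generic illustration only: the thesis does not specify its forecasting model here,
# so simple exponential smoothing is used as a stand-in for a short-term forecast
# of vessel productivity (moves/hour).
def exponential_smoothing_forecast(observations, alpha=0.3):
    """Return a one-step-ahead forecast from a series of past productivity values."""
    level = observations[0]
    for value in observations[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Hypothetical recent crane productivity readings (moves/hour) for one vessel call.
recent_moves_per_hour = [26, 28, 31, 27, 25, 29, 30]
print("next-hour forecast:", round(exponential_smoothing_forecast(recent_moves_per_hour), 1))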
Abstract:
The generation time of HIV Type 1 (HIV-1) in vivo has previously been estimated using a mathematical model of viral dynamics and was found to be on the order of one to two days per generation. Here, we describe a new method based on coalescent theory that allows the estimate of generation times to be derived from nucleotide sequence data and a reconstructed genealogy of sequences obtained over time. The method is applied to sequences obtained from a long-term nonprogressing individual at five sampling occasions. The estimate of viral generation time using the coalescent method is 1.2 days per generation and is close to that obtained by mathematical modeling (1.8 days per generation), thus strengthening confidence in estimates of a short viral generation time. Apart from the estimation of relevant parameters relating to viral dynamics, coalescent modeling also allows us to simulate the evolutionary behavior of samples of sequences obtained over time.
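The abstract quotes both estimates (1.2 versus 1.8 days per generation) without showing how a calendar-time generation length can fall out of sequence data. One hedged way to frame the logic, offered only as an illustration of the reasoning and not necessarily the exact estimator used in the study, is that if serially sampled sequences yield a substitution rate per site per day, and the rate per site per replication cycle is known independently, then

\tau \ (\text{days per generation}) \approx \frac{\text{substitutions per site per generation}}{\text{substitutions per site per day}},

so per-generation and per-day rates of similar magnitude imply a generation length on the order of one day.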
Abstract:
To investigate the proposed molecular characteristics of sugar-mediated repression of photosynthetic genes during plant acclimation to elevated CO2, we examined the relationship between the accumulation and metabolism of nonstructural carbohydrates and changes in ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) gene expression in leaves of Arabidopsis thaliana exposed to elevated CO2. Long-term growth of Arabidopsis at high CO2 (1000 μL L−1) resulted in a 2-fold increase in nonstructural carbohydrates, a large decrease in the expression of Rubisco protein and in the transcript of rbcL, the gene encoding the large subunit of Rubisco (approximately 35–40%), and an even greater decline in mRNA of rbcS, the gene encoding the small subunit (approximately 60%). This differential response of protein and mRNAs suggests that transcriptional/posttranscriptional processes and protein turnover may determine the final amount of leaf Rubisco protein at high CO2. Analysis of mRNA levels of individual rbcS genes indicated that the reduction in total rbcS transcripts was caused by decreased expression of all four rbcS genes. Short-term transfer of Arabidopsis plants grown at ambient CO2 to high CO2 resulted in a decrease in total rbcS mRNA by day 6, whereas Rubisco content and rbcL mRNA decreased by day 9. Transfer to high CO2 reduced the maximum expression level of the primary rbcS genes (1A and, particularly, 3B) by limiting their normal pattern of accumulation through the night period. The decreased nighttime levels of rbcS mRNA were associated with a nocturnal increase in leaf hexoses. We suggest that prolonged nighttime hexose metabolism resulting from exposure to elevated CO2 affects rbcS transcript accumulation and, ultimately, the level of Rubisco protein.
Abstract:
Based on recent high-resolution laboratory experiments on propagating shear rupture, the constitutive law that governs shear rupture processes is discussed in view of physical principles and constraints, and a specific constitutive law is proposed for shear rupture. It is demonstrated that nonuniform distributions of the constitutive law parameters on the fault are necessary for creating the nucleation process, which consists of two phases: (i) a stable, quasi-static phase, and (ii) a subsequent accelerating phase. Physical models of the breakdown zone and the nucleation zone are presented for shear rupture in the brittle regime. The constitutive law for shear rupture explicitly includes a scaling parameter Dc that enables one to give a common interpretation to both small-scale rupture in the laboratory and large-scale rupture as an earthquake source in the Earth. Both the breakdown zone size Xc and the nucleation zone size L are prescribed and scaled by Dc, which in turn is prescribed by a characteristic length λc representing geometrical irregularities of the fault. The models presented here make it possible to understand the earthquake generation process, from nucleation to unstable dynamic rupture propagation, in terms of physics. Since the nucleation process itself is an immediate earthquake precursor, a deep understanding of the nucleation process in terms of physics is crucial for short-term (or immediate) earthquake prediction.
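The abstract refers to a constitutive law scaled by the parameter Dc without reproducing its form. Purely for orientation, a widely used simplified slip-weakening relation, given here as an illustration of how Dc enters such laws rather than as the specific law proposed in this work, is

\tau(D) = \tau_p - (\tau_p - \tau_r)\,\frac{D}{D_c} \quad (0 \le D \le D_c), \qquad \tau(D) = \tau_r \quad (D > D_c),

where \tau_p is the peak shear strength, \tau_r the residual frictional stress, and D the slip: the strength drops linearly from peak to residual over the characteristic slip distance Dc.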
Abstract:
Nongenetic inheritance mechanisms such as transgenerational plasticity (TGP) can buffer populations against rapid environmental change such as ocean warming. Yet, little is known about how long these effects persist and whether they are cumulative over generations. Here, we tested for adaptive TGP in response to simulated ocean warming across parental and grandparental generations of marine sticklebacks. Grandparents were acclimated for two months during reproductive conditioning, whereas parents experienced developmental acclimation, allowing us to compare the fitness consequences of short-term vs. prolonged exposure to elevated temperature across multiple generations. We found that the reproductive output of F1 adults was primarily determined by maternal developmental temperature, but carry-over effects from grandparental acclimation environments resulted in cumulative negative effects of elevated temperature on hatching success. In very early stages of growth, F2 offspring reached larger sizes in their respective paternal and grandparental environment down the paternal line, suggesting that factors other than the paternal genome alone may be transferred between generations. In later growth stages, maternal and maternal granddam environments strongly influenced offspring body size, but in opposing directions, indicating that the mechanism(s) underlying the transfer of environmental information may have differed between the acute and developmental acclimation experienced by the two generations. Taken together, our results suggest that the fitness consequences of parental and grandparental TGP are highly context dependent, but will play an important role in mediating some of the impacts of rapid climate change in this system.
Abstract:
PURPOSE: To assess the correlation between changes in corneal aberrations and the 2-year change in axial length in children fitted with orthokeratology (OK) contact lenses. METHODS: Thirty-one subjects aged 6 to 12 years, with myopia of −0.75 to −4.00 DS and astigmatism ≤1.00 DC, were fitted with OK lenses. Measurements of axial length and corneal topography were taken at regular intervals over a 2-year period. Corneal topography at baseline and after 3 and 24 months of OK lens wear was used to derive higher-order corneal aberrations (HOA), which were correlated with OK-induced axial length changes at 2 years. RESULTS: Significant changes in C3, C4, C4, root mean square (RMS) secondary astigmatism and fourth-order and total HOA were found after both 3 and 24 months of OK lens wear in comparison with baseline (all P < 0.05). Coma angle of orientation changed significantly pre-OK in comparison with 3 and 24 months post-OK, as did secondary astigmatism angle of orientation pre-OK in comparison with 24 months post-OK (all P < 0.05). DISCUSSION: Short-term and long-term OK lens wear induces significant changes in corneal aberrations that are not significantly correlated with changes in axial elongation after 2 years.
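For reference, the RMS quantities reported here combine individual Zernike coefficients in the standard way, assuming the usual normalized Zernike convention (the abstract itself does not spell this out):

\mathrm{RMS} = \sqrt{\textstyle\sum_i c_i^{\,2}}

where the sum runs over the coefficients c_i belonging to the group of interest, for example the secondary-astigmatism terms, all fourth-order terms, or all higher-order terms.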
Abstract:
Arctic soils store close to 14% of the global soil carbon, most of it below ground in the permafrost. With climate warming, the decomposition of this soil carbon could represent a significant positive feedback to global greenhouse warming. Recent evidence has shown that the temperature of the Arctic is already increasing, and this change is associated mostly with anthropogenic activities. Warmer soils will contribute to permafrost degradation and accelerate organic matter decay, thereby increasing the flux of carbon dioxide and methane into the atmosphere. Temperature and water availability are also important drivers of ecosystem performance, but their effects can be complex and act in opposition. Temperature and moisture changes can affect ecosystem respiration (ER) and gross primary productivity (GPP) independently; an increase in the net ecosystem exchange can be a result of either a decrease in ER or an increase in GPP. Therefore, understanding the effects of changes in ecosystem water and temperature on the carbon flux components becomes key to predicting the responses of the Arctic to climate change. The overall goal of this work was to determine the response of arctic systems to simulated climate change scenarios with simultaneous changes in temperature and moisture. A temperature and hydrological manipulation in a naturally drained lakebed was used to assess the short-term effect of changes in water and temperature on the carbon cycle. Also, as part of the International Tundra Experiment (ITEX) network, I determined the long-term effect of warming on the carbon cycle in a natural hydrological gradient established in the mid-1990s. I found that the carbon balance is highly sensitive to short-term changes in water table and warming. However, over longer time periods, hydrological and temperature changes altered soil biophysical properties, nutrient cycles, and other ecosystem structural and functional components, which down-regulated GPP and ER, especially in wet areas.
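The relationship invoked above can be stated compactly under the sign convention the abstract appears to use, where net ecosystem exchange is the net carbon uptake (the atmospheric-flux literature often uses the opposite sign):

\mathrm{NEE} = \mathrm{GPP} - \mathrm{ER}

so net uptake rises either when GPP increases or when ER decreases, which is why the two flux components must be separated to interpret a change in the net exchange.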
Abstract:
Personalized recommender systems aim to assist users in retrieving and accessing interesting items by automatically acquiring user preferences from historical data and matching items with those preferences. In the last decade, recommendation services have gained great attention due to the problem of information overload. However, despite recent advances in personalization techniques, several critical issues in modern recommender systems have not been well studied. These issues include: (1) understanding the accessing patterns of users (i.e., how to effectively model users' accessing behaviors); (2) understanding the relations between users and other objects (i.e., how to comprehensively assess the complex correlations between users and entities in recommender systems); and (3) understanding the interest change of users (i.e., how to adaptively capture users' preference drift over time). To meet the needs of users in modern recommender systems, it is imperative to provide solutions to address the aforementioned issues and apply the solutions to real-world applications. The major goal of this dissertation is to provide integrated recommendation approaches to tackle the challenges of the current generation of recommender systems. In particular, three user-oriented aspects of recommendation techniques were studied: understanding accessing patterns, understanding complex relations and understanding temporal dynamics. To this end, we made three research contributions. First, we presented various personalized user profiling algorithms to capture click behaviors of users at both coarse- and fine-grained granularities; second, we proposed graph-based recommendation models to describe the complex correlations in a recommender system; third, we studied temporal recommendation approaches in order to capture the preference changes of users, by considering both long-term and short-term user profiles. In addition, a versatile recommendation framework was proposed, in which the proposed recommendation techniques were seamlessly integrated. Different evaluation criteria were implemented in this framework for evaluating recommendation techniques in real-world recommendation applications. In summary, the frequent changes of user interests and of the item repository lead to a series of user-centric challenges that are not well addressed in the current generation of recommender systems. My work proposed reasonable solutions to these challenges and provided insights on how to address them using a simple yet effective recommendation framework.
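The dissertation combines long- and short-term user profiles, but the abstract does not give the combination rule. The sketch below is a minimal, hypothetical illustration of one common way to do this, an exponentially time-decayed blend of item preference scores; all names, weights and data are invented, not taken from the dissertation.

# Hypothetical illustration of blending long- and short-term preferences:
# recent interactions are weighted more heavily via exponential time decay.
import math
from collections import defaultdict

def temporal_profile(interactions, now, half_life_days=14.0):
    """interactions: list of (item_id, rating, timestamp_days).
    Returns a preference score per item, decayed toward the present."""
    decay_rate = math.log(2) / half_life_days
    scores = defaultdict(float)
    for item_id, rating, t in interactions:
        scores[item_id] += rating * math.exp(-decay_rate * (now - t))
    return dict(scores)

# Toy usage: an old strong preference fades relative to a recent mild one.
history = [("item_a", 5.0, 0.0), ("item_b", 3.0, 58.0)]
print(temporal_profile(history, now=60.0))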
Abstract:
Next-generation sequencing (NGS) technologies have enabled us to determine phytoplankton community compositions at high resolution. However, few studies have adopted this approach to assess the responses of natural phytoplankton communities to environmental change. Here, we report the impact of different CO2 levels on spring diatoms in the Oyashio region of the western North Pacific, as estimated by NGS of the diatom-specific rbcL gene (DNA), which encodes the large subunit of RubisCO. We also examined the abundance and composition of rbcL transcripts (cDNA) in diatoms to assess their physiological responses to changing CO2 levels. A short-term (3-day) incubation experiment was carried out on deck using surface Oyashio waters under different pCO2 levels (180, 350, 750, and 1000 µatm) in May 2011. During the incubation, the transcript abundance of the diatom-specific rbcL gene decreased with an increase in seawater pCO2 levels. These results suggest that the CO2 fixation capacity of diatoms decreased rapidly under elevated CO2 levels. In the high CO2 treatments (750 and 1000 µatm), the diversity of the diatom-specific rbcL gene and its transcripts decreased relative to the control treatment (350 µatm), as did the contributions of Chaetocerataceae, Thalassiosiraceae, and Fragilariaceae to the total population, whereas the contribution of Bacillariaceae increased. In the low CO2 treatment, the contributions of Bacillariaceae also increased, together with those of other eukaryotes. These results suggest that changes in CO2 levels can alter the community composition of spring diatoms in the Oyashio region. Overall, the NGS approach provided us with a deeper understanding of the response of diatoms to changes in CO2 levels in terms of their community composition, diversity, and photosynthetic physiology.
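The family-level contributions described above come from amplicon read counts; as a generic point of reference (not a detail stated in the abstract), a family's relative contribution is its share of the total diatom-specific rbcL reads,

p_f = \frac{n_f}{\sum_g n_g},

where n_f is the number of rbcL reads (or transcripts) assigned to family f and p_f its fractional contribution to the community.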