989 results for "Dynamic storage deficit"
Abstract:
The rheological behavior and dynamic mechanical properties of syndiotactic 1,2-polybutadiene (sPB) were investigated using a rotational rheometer (MCR-300) and a dynamic mechanical analyzer (DMA-242C). The rheological behavior of sPB-830, an sPB with a crystallinity of 20.1% and a syndiotactic content of 65.1%, showed that the storage modulus (G′) and loss modulus (G″) decreased, and the zero-shear viscosity (η0) decreased slightly, with increasing temperature when the measuring temperature was below 160 °C. However, when the measuring temperature exceeded 160 °C, G′ and G″ increased in the terminal region of the relaxation curves with increasing temperature, and η0 increased with increasing temperature. Furthermore, a critical crosslinking reaction temperature of about 160 °C was detected for sPB-830. No crosslinking reaction was detected when the dynamic mechanical properties of the sample were measured below 150 °C. A relationship between processing temperature and the crosslinking reaction is proposed for the sPB-830 sample.
Abstract:
Dynamic mechanical properties of sulfonated butyl rubber ionomers neutralized with different amines or metallic ions (zinc or barium), and of their blends with polypropylene (PP), high-density polyethylene (HDPE), or styrene-butadiene-styrene (SBS) triblock copolymer, were studied using viscoelastometry. The results showed that the glass transition temperatures of the ion pair-containing matrix and of the ionic domains (Tg1 and Tg2, respectively) of amine-neutralized ionomers were lower than those of ionomers neutralized with metallic ions, and the temperature range of the rubbery plateau on the storage modulus plot for amine-neutralized ionomers was narrower. The modulus of the rubbery plateau for amine-neutralized ionomers was lower than that of ionomers neutralized with zinc or barium ions. With increasing size of the amine, the temperature range of the rubbery plateau decreased, and the height of the loss peak at higher temperature increased. Dynamic mechanical properties of blends of the zinc ionomer with PP or HDPE showed that, with decreasing ionomer content, the Tm of PP or HDPE increased and Tg1 decreased, whereas Tg2, the upper loss peak temperature, changed only slightly. Tg1 for the blend with SBS also decreased with decreasing ionomer content. The decrease of Tg1 is attributed to enhanced compatibilization of the ion pair-containing matrix of the ionomer with the amorphous regions of PP or HDPE, or with the continuous phase of SBS, due to the formation of thermoplastic interpenetrating polymer networks by the ionic domains and the crystalline or glassy domains.
Abstract:
The Scotia Sea has been a focus of biological and physical oceanographic study since the Discovery expeditions in the early 1900s. It is a physically energetic region with some of the highest levels of productivity in the Southern Ocean. It is also a region within which there have been greater than average levels of change in upper water column temperature. We describe the results of three cruises transecting the central Scotia Sea from south to north in consecutive years and covering spring, summer and autumn periods. We also report on some community level syntheses using both current-day and historical data from this region. A wide range of parameters were measured during the field campaigns, covering the physical oceanography of the region, air–sea CO2 fluxes, macro- and micronutrient concentrations, the composition and biomass of the nano-, micro- and mesoplankton communities, and the distribution and biomass of Antarctic krill and mesopelagic fish. Process studies examined the effect of iron-stress on the physiology of primary producers, reproduction and egestion in Antarctic krill, and the transfer of stable isotopes between trophic layers, from primary consumers up to birds and seals. Community level syntheses included an examination of the biomass spectra, food-web modelling, spatial analysis of multiple trophic layers and historical species distributions. The spatial analyses in particular identified two distinct community types: a northern warmer water community and a southern cold community, their boundary being broadly consistent with the position of the Southern Antarctic Circumpolar Current Front (SACCF). Temperature and ice cover appeared to be the dominant, overriding factors driving this pattern. Extensive phytoplankton blooms were a major feature of the surveys, and were persistent in areas such as South Georgia. In situ and bioassay measurements emphasised the important role of iron inputs as facilitators of these blooms.
Based on seasonal DIC deficits, the South Georgia bloom was found to represent the strongest seasonal carbon uptake in the ice-free zone of the Southern Ocean. The surveys also encountered low-production, iron-limited regions, a situation more typical of the wider Southern Ocean. The response of primary and secondary consumers to spatial and temporal heterogeneity in production was complex. Many of the life-cycles of small pelagic organisms showed a close coupling to the seasonal cycle of food availability. For instance, Antarctic krill showed a dependence on early, non-ice-associated blooms to facilitate early reproduction. Strategies to buffer against environmental variability were also examined, such as the prevalence of multiyear life-cycles and variability in energy storage levels. Such traits were seen to influence the way in which Scotia Sea communities were structured, with biomass levels in the larger size classes being higher than in other ocean regions. Seasonal development also altered trophic function, with the trophic level of higher predators increasing through the course of the year as additional predator-prey interactions emerged in the lower trophic levels. Finally, our studies re-emphasised the role that the simple phytoplankton-krill-higher predator food chain plays in this Southern Ocean region, particularly south of the SACCF. To the north, alternative food chains, such as those involving copepods, macrozooplankton and mesopelagic fish, were increasingly important. Continued ocean warming in this region is likely to increase the prevalence of such alternative food chains, with Antarctic krill predicted to move southwards.
Abstract:
The silicone elastomer solubilities of a range of drugs and pharmaceutical excipients employed in the development of silicone intravaginal drug delivery rings (polyethylene glycols, norethisterone acetate, estradiol, triclosan, oleyl alcohol, oxybutynin) have been determined using dynamic mechanical analysis. The method involves measuring the concentration-dependent decrease in the storage modulus associated with the melting of the incorporated drug/excipient, and extrapolation to zero change in storage modulus. The study also demonstrates the effect of drug/excipient concentrations on the mechanical stiffness of the silicone devices at 37°C.
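The extrapolation procedure described above can be sketched numerically: fit the concentration-dependent decrease in storage modulus against drug/excipient loading, then solve for the loading at which the modulus change extrapolates to zero. The loadings and modulus values below are purely hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical data: drug/excipient loading (% w/w) vs. the decrease in
# storage modulus (MPa) observed at the melting transition.
loading = np.array([2.0, 4.0, 6.0, 8.0])
delta_g = np.array([0.5, 1.6, 2.7, 3.8])

# Linear fit, then extrapolate to zero change in storage modulus:
# the x-intercept estimates the silicone elastomer solubility.
slope, intercept = np.polyfit(loading, delta_g, 1)
solubility = -intercept / slope   # loading at which no melting signal remains
```

Loadings below the intercept are fully dissolved in the elastomer, so no melting transition (and hence no modulus drop) is observed there.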
Abstract:
Dynamic mechanical analysis (DMA) is an analytical technique in which an oscillating stress is applied to a sample and the resultant strain measured as functions of both oscillatory frequency and temperature. From this, a comprehensive knowledge of the relationships between the various viscoelastic parameters, e.g. storage and loss moduli, mechanical damping parameter (tan delta), dynamic viscosity, and temperature may be obtained. An introduction to the theory of DMA and pharmaceutical and biomedical examples of the use of this technique are presented in this concise review. In particular, examples are described in which DMA has been employed to quantify the storage and loss moduli of polymers, polymer damping properties, glass transition temperature(s), rate and extent of curing of polymer systems, polymer-polymer compatibility and identification of sol-gel transitions. Furthermore, future applications of the technique for the optimisation of the formulation of pharmaceutical and biomedical systems are discussed. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
Although pumped hydro storage is seen as a strategic key asset by grid operators, financing it is complicated in newly liberalised markets. It could be argued that the optimum generation portfolio is now determined by the economic viability of generators based on a short- to medium-term return on investment. This has meant that capital-intensive projects such as pumped hydro storage are less attractive to wholesale electricity companies because the payback periods are too long. In tandem, a significant amount of wind power has entered the generation mix, which has resulted in operating and planning integration issues due to wind's inherently uncertain, spatially and temporally varying nature. These integration issues can be overcome using fast-acting gas peaking plant or energy storage. Most analysis of wind power integration using storage to date has used stochastic optimisation for power system balancing, or arbitrage modelling to examine techno-economic viability. In this research, a deterministic dynamic programming long-term generation expansion model is employed to optimise the generation mix, total system costs and total carbon dioxide emissions, and, unlike other studies, it calculates the reserve required to firm wind power. The key finding of this study is that the incentive to build capital-intensive pumped hydro storage to firm wind power is limited unless exogenous market costs come very strongly into play. Furthermore, it was demonstrated that reserve increases with increasing wind power, showing the importance of ancillary services in future power systems. © 2014 Elsevier Ltd. All rights reserved.
Abstract:
The proliferation of mobile devices in society accessing data via the ‘cloud’ is imposing a dramatic increase in the amount of information to be stored on the hard disk drives (HDDs) used in servers. Forecasts are that areal densities will need to increase by as much as 35% compound per annum, and that by 2020 cloud storage capacity will be around 7 zettabytes, corresponding to areal densities of 2 Tb/in². This requires increased performance from the magnetic pole of the electromagnetic writer in the read/write head of the HDD. Current state-of-the-art writing is undertaken by a morphologically complex magnetic pole of sub-100 nm dimensions, in an environment of engineered magnetic shields, and it needs to deliver a strong directional magnetic field to areas on the recording media of around 50 nm x 13 nm. This points to the need for a method to perform direct quantitative measurements of the magnetic field generated by the write pole at the nanometre scale. Here we report the complete in situ quantitative mapping of the magnetic field generated by a functioning write pole in operation, using electron holography. Opportunistically, it points the way towards a new nanoscale magnetic field source for the further development of in situ Transmission Electron Microscopy.
Abstract:
Motivated by the need to design efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded-degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of which nodes join and leave and at what time, and that has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees, with high probability, the maintenance of a constant-degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to O(n/polylog(n)) per round (where n is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only O(polylog(n)) overhead for topology maintenance: only polylogarithmic (in n) bits need to be processed and sent by each node per round, and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks, and it enables fast and scalable algorithms for these problems that can tolerate a large amount of churn.
Abstract:
In future power systems, under the smart grid and microgrid operation paradigms, consumers can be seen as an energy resource with decentralized and autonomous decisions in energy management. It is expected that each consumer will manage not only loads, but also small generation units, heating systems, storage systems, and electric vehicles. Each consumer can participate in different demand response events promoted by system operators or aggregation entities. This paper proposes an innovative method to manage the appliances in a house during a demand response event. The main contribution of this work is the inclusion of time constraints in resource management, and of context evaluation in order to ensure the required comfort levels. The dynamic resource management methodology allows better management of resources during a demand response event, especially events of long duration, by changing the priorities of loads during the event. A case study with two scenarios is presented, considering one demand response event with a 30 min duration and another with 240 min (4 h). In both simulations, the demand response event requires a reduction in power consumption during the event. A total of 18 loads are used, including real and virtual ones, controlled by the presented house management system.
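One way to picture the priority-based load management described above is a greedy scheme that sheds the lowest-priority loads first until the requested reduction is met. This is an illustrative sketch only, not the paper's actual algorithm; the load names, powers and priorities are invented.

```python
def shed_loads(loads, target_kw):
    """Greedy sketch of a demand response action: shed loads in order of
    priority (lowest number shed first) until the requested reduction in
    power consumption is reached. 'loads' is a list of
    (name, power_kw, priority) tuples."""
    shed, total = [], 0.0
    for name, power, _priority in sorted(loads, key=lambda l: l[2]):
        if total >= target_kw:
            break
        shed.append(name)
        total += power
    return shed, total

# Hypothetical house loads; a real system would re-evaluate priorities
# (comfort, context, time constraints) as the event progresses.
loads = [("fridge", 0.2, 9), ("ac", 1.5, 3), ("heater", 2.0, 1)]
names, reduced = shed_loads(loads, 3.0)
```

During a long event, re-running the selection with updated priorities lets the system rotate which loads are curtailed, which is the core idea behind the dynamic resource management described above.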
Abstract:
COD discharges from processes have increased in line with rising brightness demands for mechanical pulp and papers. The share of lignin-like substances in COD discharges is on average 75%. In this thesis, a plant dynamic model was created and validated as a means to predict COD loading and discharges from a mill. The assays were carried out in an integrated paper mill producing mechanical printing papers. The objective in modeling the plant dynamics was to predict day averages of the COD load and discharges from the mill. Online data, such as 1) the levels of the large storage towers of pulp and white water, 2) pulp dosages, 3) production rates and 4) internal white water flows and discharges, were used to create transients in the balances of solids and white water, referred to as “plant dynamics”. A conversion coefficient between TOC and COD was verified and used to predict the COD flows to the waste water treatment plant from measured TOC. The COD load was modeled with uncertainty similar to that of the reference TOC sampling. The water balance of the waste water treatment was validated against the reference COD concentration. The difference of the COD predictions from the references was within the same deviation as that of the TOC predictions. The modeled yield losses and retention values of TOC in the pulping and bleaching processes, and the modeled fixing of colloidal TOC to solids between the pulping plant and the aeration basin of the waste water treatment plant, were similar to references presented in the literature. The valid water balances of the waste water treatment plant and the reduction model for lignin-like substances produced a valid prediction of COD discharges from the mill. A 30% increase in the release of lignin-like substances during production problems was observed in the pulping and bleaching processes. The same increase was observed in COD discharges from the waste water treatment.
In the prediction of annual COD discharge, it was noticed that the reduction of lignin varies widely from year to year and from one mill to another. This made it difficult to compare the parameters of COD discharges validated in the plant dynamic simulation with those of another mill producing mechanical printing papers. However, the trend in COD discharges when moving from unbleached towards high-brightness TMP remained valid.
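The TOC-to-COD prediction step can be illustrated with a minimal sketch: scale a TOC concentration by a conversion coefficient and combine it with the effluent flow to get a daily load. The coefficient of 3.0 and the flow and concentration figures are placeholders; the thesis verifies the actual coefficient against mill-specific reference sampling.

```python
def cod_load_kg_per_day(flow_m3_per_day, toc_mg_per_l, toc_to_cod=3.0):
    """Estimate the daily COD load to the waste water treatment plant
    from a TOC measurement, using a linear TOC-to-COD conversion
    coefficient (the default 3.0 is illustrative only)."""
    cod_mg_per_l = toc_to_cod * toc_mg_per_l
    # mg/L * m3/day = g/day; divide by 1000 to obtain kg/day
    return cod_mg_per_l * flow_m3_per_day / 1000.0

# Placeholder figures: 20,000 m3/day effluent at 150 mg/L TOC
load = cod_load_kg_per_day(20000.0, 150.0)
```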
Abstract:
In the wake of the 2008-09 global recession, several questions were raised in the economic literature about the short- and long-run effects of fiscal policy on economic activity with respect to its sign, size and duration. These have important implications for better understanding the transmission channels and effectiveness of fiscal policy, together with the monetary policy being pursued, as well as for their economic spillovers. This thesis is part of this renewed interest in the literature in examining how changes in fiscal policy affect economic activity. It consists of three essays: the macroeconomic effects of government spending and tax revenue shocks, the macroeconomic outcomes of the interaction between fiscal and monetary policy, and the link between fiscal policy and income distribution. The first chapter examines the effects of fiscal policy shocks (government spending shocks and tax revenue shocks) on the Canadian economy over the period 1970-2010, relying on the sign-restriction identification method developed by Mountford and Uhlig [2009]. In response to the global recession, fiscal authorities in advanced economies, including Canada, generally implemented a two-phase approach to fiscal policy. First, they introduced unprecedented stimulus plans to revive their economies. For example, the stimulus measures in Canada, introduced through Canada's Economic Action Plan, were projected at 3.2 percent of GDP in the 2009 federal budget, while the American Recovery and Reinvestment Act (ARRA) was estimated at 7 percent of GDP. Subsequently, they put in place adjustment plans to reduce public debt and ensure its long-run sustainability.
In this context, assessing the multiplier effects of fiscal policy is important to inform whether such measures are effective in reviving economic activity. The results show that tax multipliers range between 0.2 and 0.5, while spending multipliers range between 0.2 and 1.1. Spending multipliers have tended to be larger than tax revenue multipliers over the past two decades. In terms of policy implications, these results suggest that fiscal adjustments through large cuts in government spending could be more damaging to the economy than fiscal adjustments through tax increases. The second chapter, co-written with Constant Lonkeng Ngouana, estimates the multiplier effects of government spending in the United States as a function of the monetary policy cycle. Government spending shocks are identified as forecast errors of the growth rate of government spending, based on data from the Survey of Professional Forecasters and information contained in the Greenbook. The monetary policy stance is inferred from the deviation of the federal funds rate from the Federal Reserve's target rate, using a smooth transition function. Applying the local projections method to quarterly U.S. data over the period 1965-2012 suggests that federal spending multipliers are substantially higher when monetary policy is accommodative than when it is not. The results also suggest that federal spending may or may not stimulate private consumption, depending on the degree of monetary accommodation. This last result thus reconciles, within a unified framework, findings that otherwise appear contradictory at first sight in the literature.
These results have important policy implications. Overall, they suggest that fiscal policy is most effective when it is needed most (for example, when unemployment is high), provided it is supported by monetary policy. They also have implications for the normalization of monetary conditions in advanced countries: exiting unconventional monetary policies would lead to much lower federal spending multipliers than otherwise, even if unemployment remained high. This reinforces the need for careful calibration of the timing of exit from unconventional monetary policies. The third chapter examines the impact of fiscal expansion and contraction measures on income distribution in a panel of 18 Latin American countries over the period 1990-2010, with a focus on the bottom 40 percent. It explores how these fiscal measures and their composition affect income growth of the bottom 40 percent, the growth of their income share, and economic growth. Fiscal expansions and contractions are identified as periods with a significant change in the cyclically adjusted primary deficit as a percentage of GDP. The results show that, on average, fiscal expansion through higher government spending is more favorable to income growth of the less well-off than expansion through tax cuts. This result is mainly driven by increases in current government consumption spending, transfers and subsidies. Moreover, these fiscal expansion measures favor the reduction of inequality, as they improve the income share of the less well-off while reducing the income share of the better-off in the income distribution.
However, fiscal expansion could either have no effect on economic growth or hamper it through increases in capital spending. The results for fiscal contraction are somewhat mixed. Sometimes contraction measures are associated with lower income growth for the less well-off and higher inequality; sometimes their impact is not significant. Moreover, none of the measures significantly affects GDP growth. In terms of policy implications, countries with some fiscal space could launch or continue to implement safety-net programs, for example conditional cash transfer programs, enabling vulnerable segments of the population to cope with negative shocks and to improve their living conditions. With the potential to stimulate low-skilled employment, a prudent fiscal stimulus through current government spending could also play an important role in reducing inequality. Also, to prevent capital spending from holding back economic growth, efficient public investment projects should be prioritized in the policy-making process, which requires implementing investment projects with higher productivity, capable of generating the economic growth needed to reduce inequality.
Abstract:
The dynamic mechanical properties, such as storage modulus, loss modulus and damping properties, of blends of nylon copolymer (PA6,66) with ethylene propylene diene (EPDM) rubber were investigated with special reference to the effect of blend ratio and compatibilisation, over a temperature range of –100°C to 150°C at different frequencies. The effect of changes in the composition of the polymer blends on tan δ was studied to understand the extent of polymer miscibility and the damping characteristics. The loss tangent curve of the blends exhibited two transition peaks, corresponding to the glass transition temperatures (Tg) of the individual components, indicating incompatibility of the blend systems. The morphology of the blends was examined using scanning electron microscopy. The Arrhenius relationship was used to calculate the activation energy for the glass transition of the blends. Finally, attempts have been made to compare the experimental data with theoretical models.
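The Arrhenius treatment mentioned above relates the measurement frequency f to the glass transition temperature via ln f = ln A − Ea/(R·Tg), so the activation energy follows from the slope of ln f against 1/Tg. A minimal two-point sketch, with invented frequency/Tg pairs rather than the study's data:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical (frequency in Hz, Tg in K) pairs from multi-frequency DMA scans
(f1, t1), (f2, t2) = (1.0, 300.0), (10.0, 310.0)

# Arrhenius: ln f = ln A - Ea/(R*Tg); the slope of ln f vs 1/Tg is -Ea/R
slope = (math.log(f2) - math.log(f1)) / (1.0 / t2 - 1.0 / t1)
ea_kj_per_mol = -slope * R / 1000.0   # activation energy in kJ/mol
```

In practice one fits several frequency/Tg pairs by least squares rather than using two points, but the slope-to-energy conversion is the same.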
Abstract:
We have investigated the dynamic mechanical behavior of two cross-linked polymer networks with very different topologies: one made of backbones randomly linked along their length; the other with fixed-length strands uniformly cross-linked at their ends. The samples were analyzed using oscillatory shear, at very small strains corresponding to the linear regime. This was carried out at a range of frequencies, and at temperatures ranging from the glass plateau, through the glass transition, and well into the rubbery region. Through the glass transition, the data obeyed the time-temperature superposition principle, and could be analyzed using WLF treatment. At higher temperatures, in the rubbery region, the storage modulus was found to deviate from this, taking a value that is independent of frequency. This value increased linearly with temperature, as expected for the entropic rubber elasticity, but with a substantial negative offset inconsistent with straightforward enthalpic effects. Conversely, the loss modulus continued to follow time-temperature superposition, decreasing with increasing temperature, and showing a power-law dependence on frequency.
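The WLF treatment used through the glass transition can be written as log10 aT = −C1(T − Tref)/(C2 + T − Tref), giving the horizontal shift factor for time–temperature superposition. A small sketch, using the common "universal" constants C1 = 17.44 and C2 = 51.6 as placeholders rather than values fitted to these networks:

```python
def wlf_log_shift(temp, t_ref, c1=17.44, c2=51.6):
    """WLF equation: log10 of the horizontal shift factor aT used to
    collapse isothermal frequency sweeps onto a master curve at t_ref."""
    dt = temp - t_ref
    return -c1 * dt / (c2 + dt)

# Curves measured 20 K above the reference shift to shorter times (aT < 1)
log_at = wlf_log_shift(120.0, 100.0)
```

The frequency-independent storage modulus observed in the rubbery region is precisely a breakdown of this superposition: no horizontal shift can collapse a flat G′ onto the glass-transition master curve.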
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.