938 results for solar system : general
Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m⁻² K⁻¹, a value intermediate in the range of 30–70 mW m⁻² K⁻¹ previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m⁻² K⁻¹). Another 13 mW m⁻² K⁻¹ is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m⁻² K⁻¹. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m⁻² K⁻¹, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m⁻² K⁻¹ or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and represents a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but a similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first-century climate change, it would be valuable if similar analyses were carried out for other such models.
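For orientation, the material entropy production diagnosed from diabatic heating terms generally takes the form of an integral of heating rate over the temperature at which it occurs; a minimal sketch in standard notation (illustrative, not HadCM3's exact diagnostics):

    \dot{S}_{\mathrm{mat}} = \sum_k \int_V \frac{\dot{q}_k}{T}\,\mathrm{d}V

where $\dot{q}_k$ is the diabatic heating rate per unit volume from process $k$ (sensible and latent heat transport, frictional dissipation, and so on) and $T$ is the local temperature. The budget closes when the sum of such source terms matches the diagnosed rate of change of entropy, which is the comparison reported above.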
Abstract:
In developing techniques for monitoring the costs associated with different procurement routes, the central task is disentangling the various project costs incurred by organizations taking part in construction projects. While all firms are familiar with the need to analyse their own costs, it is unusual to apply the same kind of analysis to projects. The purpose of this research is to examine the claims that new ways of working, such as strategic alliancing and partnering, bring positive business benefits. This requires that the costs associated with marketing, estimating, pricing, negotiation of terms, monitoring of performance and enforcement of contract are collected for a cross-section of projects under differing arrangements, and from those in the supply chain from clients to consultants, contractors, sub-contractors and suppliers. Collaboration with industrial partners forms the basis for developing a research instrument, based on time sheets, which will be relevant for all those taking part in the work. Early indications are that costs associated with tendering are highly variable, ranging from 1% to 15%, depending upon precisely what is taken into account. The research to date reveals that there are mechanisms for measuring the costs of transactions, and these will generate useful data for subsequent analysis.
Abstract:
We present a general Multi-Agent System framework for distributed data mining based on a Peer-to-Peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load-balancing policy that is particularly suitable for irregular search algorithms. A modular design allows a separation of the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation was carried out on a parallel frequent subgraph mining algorithm, which showed good scalability.
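To make the protocol style concrete, here is a minimal, hypothetical sketch of message-based asynchronous agents with dynamic load balancing, in which idle peers "steal" work from busy ones; the message kinds, queue-based transport and task granularity are illustrative assumptions, not the framework's actual API:

    # Hypothetical sketch: message-driven agents with work stealing.
    # Message kinds ("task", "steal") and the queue transport are
    # illustrative, not the paper's actual protocol.
    import asyncio
    import random

    async def agent(name, inbox, peers, results, total):
        backlog = []
        while len(results) < total:
            while not inbox.empty():            # handle all pending messages
                kind, payload = inbox.get_nowait()
                if kind == "task":
                    backlog.append(payload)
                elif kind == "steal" and backlog:
                    payload.put_nowait(("task", backlog.pop()))
            if backlog:
                item = backlog.pop()            # stand-in for mining one unit of work
                results.append((name, item))
            else:                               # idle: ask a random peer for work
                random.choice(peers).put_nowait(("steal", inbox))
            await asyncio.sleep(0)              # yield to the event loop

    async def main():
        random.seed(0)
        total = 12
        inboxes = [asyncio.Queue() for _ in range(3)]
        for t in range(total):                  # skewed initial distribution
            inboxes[0].put_nowait(("task", t))
        results = []
        await asyncio.gather(*(
            agent(f"a{i}", q, [p for p in inboxes if p is not q], results, total)
            for i, q in enumerate(inboxes)))
        print(results)

    asyncio.run(main())

The balancing policy sketched here is pull-based (receiver-initiated), which suits irregular search algorithms where task costs are hard to predict in advance.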
Abstract:
The commonly held view of the conditions in the North Atlantic at the last glacial maximum, based on the interpretation of proxy records, is of large-scale cooling compared to today, limited deep convection, and extensive sea ice, all associated with a southward-displaced and weakened overturning thermohaline circulation (THC) in the North Atlantic. Not all studies support that view; in particular, the "strength of the overturning circulation" is contentious and is a quantity that is difficult to determine even for the present day. Quasi-equilibrium simulations with coupled climate models forced by glacial boundary conditions have produced differing results, as have inferences made from proxy records. Most studies suggest a weaker circulation, some suggest little or no change, and a few suggest a stronger circulation. Here results are presented from a three-dimensional climate model, the Hadley Centre Coupled Model version 3 (HadCM3), of the coupled atmosphere–ocean–sea ice system suggesting, in a qualitative sense, that these diverging views could all have occurred at different times during the last glacial period, with different modes existing at different times. One mode might have been characterized by an active THC associated with moderate temperatures in the North Atlantic and a modest expanse of sea ice. The other mode, perhaps forced by large inputs of meltwater from the continental ice sheets into the northern North Atlantic, might have been characterized by a sluggish THC associated with very cold conditions around the North Atlantic and a large areal cover of sea ice. The authors' model simulation of such a mode, forced by a large input of freshwater, bears several of the characteristics of the Climate: Long-range Investigation, Mapping, and Prediction (CLIMAP) Project's reconstruction of glacial sea surface temperature and sea ice extent.
Abstract:
Ozone and temperature profiles from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) have been assimilated, using three-dimensional variational assimilation, into a stratosphere–troposphere version of the Met Office numerical weather-prediction system. Analyses are made for the month of September 2002, when there was an unprecedented split in the southern-hemisphere polar vortex. The analyses are validated against independent ozone observations from sondes, limb-occultation instruments and total-column-ozone satellite instruments. Through most of the stratosphere, precision varies from 5 to 15%, and biases are 15% or less of the analysed field. Problems remain in the vortex and below the 60 hPa level, especially at the tropopause, where the analyses have too much ozone and poor agreement with independent data. Analysis problems are largely a result of the model rather than the data, giving confidence in the MIPAS ozone retrievals, though there may be a small high bias in MIPAS ozone in the lower stratosphere. Model issues include an excessive Brewer–Dobson circulation, which results both from known problems with the tracer transport scheme and from the data assimilation of dynamical variables. The extreme conditions of the vortex split reveal large differences between existing linear ozone photochemistry schemes. Despite these issues, the ozone analyses successfully describe the ozone-hole split and compare well with other studies of this event. Recommendations are made for the further development of the ozone assimilation system.
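For context, three-dimensional variational assimilation of this kind minimises the standard cost function (textbook form; the Met Office system's specific operators are not reproduced here):

    J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big)

where $\mathbf{x}_b$ is the background (forecast) state, $\mathbf{y}$ the observations (here MIPAS ozone and temperature profiles), $H$ the observation operator, and $\mathbf{B}$ and $\mathbf{R}$ the background- and observation-error covariances. Biases such as the excess tropopause ozone noted above enter through the model background $\mathbf{x}_b$ rather than through the observations $\mathbf{y}$.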
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona–heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular, we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) fully physics-based coupled corona–heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM), which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square error and correlation coefficients, indicate that the empirical corona–heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement in their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
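As a minimal sketch of the point-by-point skill metrics mentioned above (mean-square error and correlation coefficient) applied to a solar wind speed series; the arrays are synthetic placeholders, not CISM output or L1 data:

    # Synthetic stand-ins for observed and modelled solar wind speed (km/s);
    # real validation would use L1 spacecraft data and model time series.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(1000)
    observed = 450 + 150 * np.sin(2 * np.pi * t / 200)       # idealised stream structure
    modelled = np.roll(observed, 10) + rng.normal(0, 20, t.size)  # small timing offset

    mse = np.mean((modelled - observed) ** 2)
    r = np.corrcoef(modelled, observed)[0, 1]
    print(f"MSE = {mse:.0f} (km/s)^2, r = {r:.2f}")

The deliberate 10-sample shift illustrates the point made above: a small timing offset to otherwise well-modelled structures noticeably degrades point-by-point scores even when the large-scale features are captured.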
Abstract:
A low-resolution coupled ocean–atmosphere general circulation model (OAGCM) is used to study the characteristics of the large-scale ocean circulation and its climatic impacts in a series of global coupled aquaplanet experiments. Three configurations, designed to produce fundamentally different ocean circulation regimes, are considered. The first has no obstruction to zonal flow, the second contains a low barrier that blocks zonal flow in the ocean at all latitudes, creating a single enclosed basin, whilst the third contains a gap in the barrier to allow circumglobal flow at high southern latitudes. Warm greenhouse climates with a global-average surface air temperature of around 27 °C result in all cases. Equator-to-pole temperature gradients are shallower than that of a current climate simulation. Whilst changes in the land configuration cause regional changes in temperature, winds and rainfall, heat transports within the system are little affected. Inhibition of all ocean transport on the aquaplanet leads to a reduction in global-mean surface temperature of 8 °C, along with a sharpening of the meridional temperature gradient. This results from a reduction in global atmospheric water vapour content and an increase in tropical albedo, both of which act to reduce global surface temperatures. Fitting a simple radiative model to the atmospheric characteristics of the OAGCM solutions suggests that a simpler atmosphere model, with radiative parameters chosen a priori based on the changing surface configuration, would have produced qualitatively different results. This implies that studies with reduced-complexity atmospheres need to be guided by more complex OAGCM results on a case-by-case basis.
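The "simple radiative model" referred to is of the energy-balance type; a zero-dimensional sketch of the balance involved (an illustration, not the paper's fitted formulation):

    (1-\alpha)\,\frac{S_0}{4} = \varepsilon\,\sigma T^4

where $\alpha$ is the planetary albedo, $S_0$ the solar constant, $\sigma$ the Stefan–Boltzmann constant and $\varepsilon$ an effective emissivity set largely by atmospheric water vapour. Both changes identified above act through this balance: raising tropical albedo $\alpha$ reduces absorbed shortwave radiation, while drying the atmosphere raises $\varepsilon$ and hence outgoing longwave radiation, so both lower the equilibrium temperature $T$.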
Abstract:
Solar electromagnetic radiation powers Earth’s climate system and, consequently, it is often naively assumed that changes in this solar output must be responsible for changes in Earth’s climate. However, the Sun is close to a blackbody radiator and so emits according to its surface temperature, and the huge thermal time constant of the outer part of the Sun limits the variability in surface temperature and hence in output. As a result, on all timescales of interest, changes in total power output are limited to small changes in effective surface temperature (associated with magnetic fields) and potential, although as yet undetected, variations in solar radius. Larger variations are seen in the UV part of the spectrum, which is emitted from the lower solar atmosphere (the chromosphere) and which influences Earth’s stratosphere. There is interest in “top-down” mechanisms whereby solar UV irradiance modulates stratospheric temperatures and winds which, in turn, may influence the underlying troposphere, where Earth’s climate and weather reside. This contrasts with “bottom-up” effects, in which the small variations in total solar irradiance (dominated by the visible and near-IR) cause surface temperature changes which drive atmospheric circulations. In addition to these electromagnetic outputs, the Sun modulates energetic particle fluxes incident on the Earth. Solar Energetic Particles (SEPs) are emitted by solar flares and from the shock fronts ahead of supersonic (and super-Alfvénic) ejections of material from the solar atmosphere. These SEPs enhance the destruction of polar stratospheric ozone, which could be an additional form of top-down climate forcing. Even more energetic are Galactic Cosmic Rays (GCRs). These particles are not generated by the Sun; rather, they originate at the shock fronts emanating from violent galactic events such as supernova explosions. However, the expansion of the solar magnetic field into interplanetary space means that the Sun modulates the number of GCRs reaching Earth. These play a key role in enabling Earth’s global electric (thunderstorm) circuit, and it has been proposed that they also modulate the formation of clouds. Both electromagnetic and corpuscular solar effects are known to vary over the solar magnetic cycle, which is typically between 10 and 14 years in length (with an average close to 11 years). The solar magnetic field polarity at any one phase of one of these activity cycles is opposite to that at the same phase of the next cycle, and this influences some phenomena, for example GCRs, which therefore show a 22-year (“Hale”) cycle on average. Other phenomena, such as irradiance modulation, do not depend on the polarity of the magnetic field and so show only the basic 11-year activity cycle. However, any effects on climate are much more significant for solar drifts over centennial timescales. This chapter discusses and evaluates potential effects on Earth’s climate system of variations in these solar inputs. Because of the great variety of proposed mechanisms, the wide range of timescales studied (from days to millennia) and the many debates (often triggered by the application of inadequate statistical methods), the literature on this subject is vast, complex, divergent and rapidly changing; consequently, the number of references cited in this review is very large (yet still only a small fraction of the total).
Abstract:
Understanding the influence of solar variability on the Earth’s climate requires knowledge of solar variability, of solar–terrestrial interactions and of the mechanisms determining the response of the Earth’s climate system. We provide a summary of our current understanding in each of these three areas. Observations and mechanisms for the Sun's variability are described, including solar irradiance variations on both decadal and centennial timescales and their relation to galactic cosmic rays. Corresponding observations of variations of the Earth’s climate on associated timescales are described, including variations in ozone, temperatures, winds, clouds, precipitation and regional modes of variability such as the monsoons and the North Atlantic Oscillation. A discussion of the available solar and climate proxies is provided. Mechanisms proposed to explain these climate observations are described, including the effects of variations in solar irradiance and of charged particles. Finally, the contribution of solar variations to recent observations of global climate change is discussed.
Abstract:
We have previously placed the solar contribution to recent global warming in context using observations and without recourse to climate models. It was shown that all solar forcings of climate have declined since 1987. The present paper extends that analysis to include the effects of the various time constants with which the Earth’s climate system might react to solar forcing. The solar input waveform over the past 100 years is defined using observed and inferred galactic cosmic ray fluxes, valid for either a direct effect of cosmic rays on climate or an effect via their known correlation with total solar irradiance (TSI), or for a combination of the two. The implications, and the relative merits, of the various TSI composite data series are discussed and independent tests reveal that the PMOD composite used in our previous paper is the most realistic. Use of the ACRIM composite, which shows a rise in TSI over recent decades, is shown to be inconsistent with most published evidence for solar influences on pre-industrial climate. The conclusions of our previous paper, that solar forcing has declined over the past 20 years while surface air temperatures have continued to rise, are shown to apply for the full range of potential time constants for the climate response to the variations in the solar forcings.
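Schematically, including a single response time constant $\tau$ amounts to convolving the solar forcing record with an exponential response; a standard linear-response form (assumed here for illustration rather than taken from the paper) is

    \Delta T(t) = \frac{\lambda}{\tau}\int_{-\infty}^{t} \Delta F(t')\, e^{-(t-t')/\tau}\,\mathrm{d}t'

where $\Delta F$ is the solar forcing anomaly and $\lambda$ a climate sensitivity parameter. The conclusion above is that the post-1987 decline in $\Delta F$ implies a declining solar contribution to $\Delta T$ for any plausible value of $\tau$.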
Abstract:
We have developed a heterologous expression system for the transmembrane lens main intrinsic protein (MIP) in Nicotiana tabacum plant tissue. A native bovine MIP26 amplicon was subcloned into an expression cassette under the control of a constitutive Cauliflower Mosaic Virus promoter, also containing a neomycin phosphotransferase operon. This cassette was transformed into Agrobacterium tumefaciens by triparental mating and used to infect plant tissue grown in culture. Recombinant plants were selected by their ability to grow and root on kanamycin-containing media. The presence of MIP in the plant tissues was confirmed by PCR, RT-PCR and immunohistochemistry. A number of benefits of this system for the study of MIP are discussed, as is its application as a tool for the study of heterologously expressed proteins in general.
Abstract:
This article introduces a quantitative approach to e-commerce system evaluation based on the theory of process simulation. The general concept of e-commerce system simulation is presented, motivated by several limitations in e-commerce system development: the large initial investment of time and money, and the long period from business planning to system development, then to system test and operation, and finally to actual return. In other words, currently used system analysis and development methods cannot tell investors how good their e-commerce system could be, how much return on investment they could expect, or which areas of the initial business plan they should improve. In order to examine the value and potential effects of an e-commerce business plan, it is necessary to use a quantitative evaluation approach, and the authors of this article believe that process simulation is an appropriate option. The overall objective of this article is to apply the theory of process simulation to e-commerce system evaluation, and the authors achieve this through an experimental study of a business plan for online construction and demolition waste exchange. The methodologies adopted in this article include literature review, system analysis and development, simulation modelling and analysis, and case study. The results of this article include the concept of e-commerce system simulation, a comprehensive review of simulation methods adopted in e-commerce system evaluation, and a real case study applying simulation to e-commerce system evaluation. Furthermore, the authors hope that the adoption and implementation of the process simulation approach can effectively support business decision-making and improve the efficiency of e-commerce systems.
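As a toy illustration of the process-simulation approach, the sketch below models listings flowing through a hypothetical online waste-exchange as a discrete-event queue, using the SimPy library; the arrival rates, service times and the two-person "matching desk" are invented for illustration and are not taken from the case study:

    # Toy discrete-event model of an online waste-exchange (SimPy).
    # All rates and capacities are hypothetical.
    import random
    import simpy

    def listing(env, desk, cycle_times):
        posted = env.now
        with desk.request() as req:                    # queue for the matching service
            yield req
            yield env.timeout(random.expovariate(1 / 2.0))   # ~2 days to match a lot
        cycle_times.append(env.now - posted)

    def arrivals(env, desk, cycle_times):
        for _ in range(200):
            env.process(listing(env, desk, cycle_times))
            yield env.timeout(random.expovariate(1 / 1.5))   # new listing ~every 1.5 days

    random.seed(1)
    env = simpy.Environment()
    desk = simpy.Resource(env, capacity=2)             # two staff matching waste lots
    cycle_times = []
    env.process(arrivals(env, desk, cycle_times))
    env.run()
    print(f"matched {len(cycle_times)} listings, "
          f"mean listing-to-match time {sum(cycle_times)/len(cycle_times):.1f} days")

Running such a model against a business plan's assumed volumes yields quantitative estimates (throughput, cycle time, resource utilisation) of the kind the evaluation approach above is after.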
Abstract:
As control systems have developed and the implications of poor hygienic practices have become better known, the evaluation of the hygienic status of premises has become more critical. The assessment of the overall status of premises hygiene can provide useful management data indicating whether the premises are improving or whether, whilst still meeting legal requirements, they might be failing to maintain previously high standards. Since the creation of the Meat Hygiene Service (MHS) for the United Kingdom, one of the aims of the service has been to monitor hygiene on different premises to provide a means of comparing standards and to identify and encourage improvements. This desire led to the implementation of a scoring system known as the hygiene assessment system (HAS). This paper analyses the HAS scores of English slaughterhouses between 1998 and 2005, outlining the main incidents throughout this period. Although rising initially, the later results displayed a clear decrease in the general hygiene scores. These revealing results coincide with the start of a new meat inspection system in which, after several years of discussion, risk-based inspection is finally becoming a reality within Europe. The paper considers the implications of these changes for the way hygiene standards will be monitored in the future.
Abstract:
Following a number of major food safety problems in Europe, including in particular the issues of BSE and dioxin, consumers have become increasingly concerned about food safety. This has led authorities in Europe to revise their systems of food control. The establishment of the European Food Safety Authority (EFSA) is one of the main structural changes currently being made within the European Union, and similar action at national level has been or is being taken by many EU member states. In Spain, a law creating the Spanish Agency of Food Safety has been approved. This has general objectives that include the promotion of food safety, the offering of guarantees and the provision of objective information to consumers and food businesses in the Spanish agrifood sector. This paper reviews the general structure of the current food control system in Spain. At a national level this involves three different Ministries. Spain, however, also has a devolved system involving the Autonomous Communities; the paper considers Castilla y León as an example. In conclusion, the paper recognises that Spain has a complex system for food control, and considers that it will take time before a full evaluation of the new system is possible.
Abstract:
Background: This study was carried out as part of a European Union-funded project (PharmDIS-e+) to develop and evaluate software aimed at assisting physicians with drug dosing. A drug that causes particular problems with dosing in primary care is digoxin, because of its narrow therapeutic range and low therapeutic index. Objectives: To determine (i) the accuracy of the PharmDIS-e+ software for predicting serum digoxin levels in patients who are taking this drug regularly; (ii) whether there are statistically significant differences between predicted digoxin levels and those measured by a laboratory; and (iii) whether there are differences between doses prescribed by general practitioners and those suggested by the program. Methods: We needed 45 patients to have 95% power to reject the null hypothesis that the mean serum digoxin concentration was within 10% of the mean predicted digoxin concentration. Patients were recruited from two general practices and had been taking digoxin for at least 4 months. Exclusion criteria were dementia, low adherence to digoxin and use of other medications known to interact to a clinically important extent with digoxin. Results: Forty-five patients were recruited. There was a correlation of 0.65 between measured and predicted digoxin concentrations (P < 0.001). The mean difference was 0.12 μg/L (SD 0.26; 95% CI 0.04 to 0.19; P = 0.005). Forty-seven per cent of the patients were prescribed the same dose as that recommended by the software, 44% a higher dose and 9% a lower dose. Conclusion: The PharmDIS-e+ software was able to predict serum digoxin levels with acceptable accuracy in most patients.
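As a minimal sketch of how the reported summary statistics (Pearson correlation, mean paired difference with a 95% confidence interval) are computed; the data below are synthetic stand-ins, not the trial's measurements:

    # Synthetic predicted vs measured digoxin concentrations (µg/L);
    # the offset and spread loosely echo the reported figures.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    predicted = rng.normal(1.0, 0.3, 45)
    measured = predicted + rng.normal(0.12, 0.26, 45)

    r, p_r = stats.pearsonr(measured, predicted)
    diff = measured - predicted
    t, p_t = stats.ttest_1samp(diff, 0.0)            # paired test on the differences
    lo, hi = stats.t.interval(0.95, diff.size - 1,
                              loc=diff.mean(), scale=stats.sem(diff))
    print(f"r = {r:.2f} (P = {p_r:.3g}); mean diff = {diff.mean():.2f} µg/L, "
          f"95% CI {lo:.2f} to {hi:.2f}, P = {p_t:.3g}")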