956 results for First order theories
Abstract:
The vertical profile of global-mean stratospheric temperature changes has traditionally represented an important diagnostic for the attribution of the cooling effects of stratospheric ozone depletion and CO2 increases. However, CO2-induced cooling alters ozone abundance by perturbing ozone chemistry, thereby coupling the stratospheric ozone and temperature responses to changes in CO2 and ozone-depleting substances (ODSs). Here we untangle the ozone-temperature coupling and show that the attribution of global-mean stratospheric temperature changes to CO2 and ODS changes (which are the true anthropogenic forcing agents) can be quite different from the traditional attribution to CO2 and ozone changes. The significance of these effects is quantified empirically using simulations from a three-dimensional chemistry-climate model. The results confirm the essential validity of the traditional approach in attributing changes during the past period of rapid ODS increases, although we find that about 10% of the upper stratospheric ozone decrease from ODS increases over the period 1975–1995 was offset by the increase in CO2, and the CO2-induced cooling in the upper stratosphere has been somewhat overestimated. When considering ozone recovery, however, the ozone-temperature coupling is a first-order effect; fully 2/5 of the upper stratospheric ozone increase projected to occur from 2010–2040 is attributable to CO2 increases. Thus, it has now become necessary to base attribution of global-mean stratospheric temperature changes on CO2 and ODS changes rather than on CO2 and ozone changes.
Abstract:
Total ozone trends are typically studied using linear regression models that assume a first-order autoregression of the residuals [so-called AR(1) models]. We consider total ozone time series over 60°S–60°N from 1979 to 2005 and show that most latitude bands exhibit long-range correlated (LRC) behavior, meaning that ozone autocorrelation functions decay by a power law rather than exponentially as in AR(1). At such latitudes the uncertainties of total ozone trends are greater than those obtained from AR(1) models, and the expected time required to detect ozone recovery is correspondingly longer. We find no evidence of LRC behavior in southern middle and high subpolar latitudes (45°–60°S), where the long-term ozone decline attributable to anthropogenic chlorine is the greatest. We thus confirm an earlier prediction based on an AR(1) analysis that this region (especially the highest latitudes, and especially the South Atlantic) is the optimal location for the detection of ozone recovery, with a statistically significant ozone increase attributable to chlorine likely to be detectable by the end of the next decade. In northern middle and high latitudes, on the other hand, there is clear evidence of LRC behavior. This increases the uncertainties on the long-term trend attributable to anthropogenic chlorine by about a factor of 1.5 and lengthens the expected time to detect ozone recovery by a similar amount (from ∼2030 to ∼2045). If the long-term changes in ozone are instead fit by a piecewise-linear trend rather than by stratospheric chlorine loading, then the strong decrease of northern middle- and high-latitude ozone during the first half of the 1990s and its subsequent increase in the second half of the 1990s projects more strongly onto the trend and makes a smaller contribution to the noise.
This both increases the trend and weakens the LRC behavior at these latitudes, to the extent that ozone recovery (according to this model, and in the sense of a statistically significant ozone increase) is already on the verge of being detected. The implications of this rather controversial interpretation are discussed.
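The core statistical contrast in this abstract, exponential versus power-law decay of the residual autocorrelation, can be sketched numerically. The parameter values below are illustrative, not the fitted ones from the study:

```python
import numpy as np

def ar1_acf(phi, lags):
    """AR(1) autocorrelation: rho(k) = phi**k (exponential decay)."""
    return phi ** np.asarray(lags, dtype=float)

def lrc_acf(gamma, lags):
    """Power-law autocorrelation: rho(k) = k**(-gamma) for k >= 1,
    the long-range correlated (LRC) case."""
    k = np.asarray(lags, dtype=float)
    return np.where(k < 1, 1.0, np.maximum(k, 1.0) ** (-gamma))

lags = np.arange(0, 121)
# At long lags the power law dominates the exponential, so distant
# years remain correlated and trend uncertainties are inflated:
print(ar1_acf(0.6, lags)[120])   # essentially zero
print(lrc_acf(0.4, lags)[120])   # still ~0.15
```

This is why, at LRC latitudes, the effective number of independent samples is smaller than an AR(1) model assumes, lengthening the time needed to detect a statistically significant recovery.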
Abstract:
Psoriasis is a common, chronic and relapsing inflammatory skin disease. It affects approximately 2% of the western population and has no cure. Combination therapy for psoriasis often proves more efficacious and better tolerated than monotherapy with a single drug. Combination therapy could be administered in the form of a co-drug, in which two or more therapeutic compounds active against the same condition are linked by a cleavable covalent bond. As with the pro-drug approach, the liberation of the parent moieties post-administration, by enzymatic and/or chemical mechanisms, is a pre-requisite for effective treatment. In this study, a series of co-drugs incorporating dithranol in combination with one of several non-steroidal anti-inflammatory drugs, both useful for the treatment of psoriasis, were designed, synthesized and evaluated. An ester co-drug comprising dithranol and naproxen in a 1:1 stoichiometric ratio was determined to possess the optimal physicochemical properties for topical delivery. The co-drug was fully hydrolyzed in vitro by porcine liver esterase within four hours. When incubated with homogenized porcine skin, 9.5% of the parent compounds were liberated after 24 h, suggesting that in situ esterase-mediated cleavage of the co-drug would occur within the skin. The reaction followed first-order kinetics, with Vmax = 10.3 μM/min and Km = 65.1 μM. The co-drug contains a modified dithranol chromophore with just 37% of the absorbance of dithranol at 375 nm, suggesting reduced skin/clothes staining. Overall, these findings suggest that the dithranol-naproxen co-drug offers an attractive, novel approach for the treatment of psoriasis.
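The reported enzyme parameters (Vmax = 10.3 μM/min, Km = 65.1 μM) can be sketched with the standard Michaelis–Menten rate law; well below Km this reduces to first-order kinetics with effective rate constant k = Vmax/Km:

```python
# Reported esterase kinetics from the abstract
VMAX = 10.3   # uM/min
KM = 65.1     # uM

def mm_rate(s):
    """Michaelis-Menten velocity v = Vmax*s/(Km + s), in uM/min."""
    return VMAX * s / (KM + s)

# At substrate concentrations well below Km the rate is effectively
# first order with k = Vmax/Km (per minute):
k_first_order = VMAX / KM
print(round(mm_rate(KM), 2))     # half of Vmax at s = Km -> 5.15
print(round(k_first_order, 3))   # -> 0.158
```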
Abstract:
Many operational weather forecasting centres use semi-implicit time-stepping schemes because of their good efficiency. However, as computers become ever more parallel, horizontally explicit solutions of the equations of atmospheric motion might become an attractive alternative, since implicit methods require additional inter-processor communication. Implicit and explicit (IMEX) time-stepping schemes have long been combined in models of the atmosphere using semi-implicit, split-explicit or HEVI (horizontally explicit, vertically implicit) splitting. However, most studies of the accuracy and stability of IMEX schemes have been limited to the parabolic case of advection–diffusion equations. We demonstrate how a number of Runge–Kutta IMEX schemes can be used to solve hyperbolic wave equations either semi-implicitly or HEVI. A new form of HEVI splitting is proposed, UfPreb, which dramatically improves the accuracy and stability of simulations of gravity waves in stratified flow. As a consequence, it is found that there are HEVI schemes that do not lose accuracy in comparison to semi-implicit ones. The stability limits of a number of variations of trapezoidal implicit and some Runge–Kutta IMEX schemes are found, and the schemes are tested on two vertical slice cases using the compressible Boussinesq equations split into various combinations of implicit and explicit terms. Some of the Runge–Kutta schemes are found to be advantageous over the trapezoidal schemes, especially since they damp high frequencies without dropping to first-order accuracy. We also test schemes that are not formally accurate for stiff systems in stiff (nearly incompressible) limits and find that they can perform well. The scheme ARK2(2,3,2) performs best in these tests.
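The idea of IMEX splitting can be illustrated on a toy scalar wave model (this is a generic first-order IMEX Euler sketch, not one of the paper's Runge–Kutta schemes): a fast wave term is treated implicitly so the step size is not limited by the fast frequency, while a slow term stays explicit:

```python
def imex_euler(y0, w_fast, w_slow, dt, nsteps):
    """First-order IMEX Euler for dy/dt = i*wf*y + i*ws*y:
    the fast term is backward Euler (implicit), the slow term forward
    Euler (explicit).
      y_{n+1} = y_n + dt*i*ws*y_n + dt*i*wf*y_{n+1}
      => y_{n+1} = (1 + 1j*ws*dt) / (1 - 1j*wf*dt) * y_n
    """
    y = complex(y0)
    for _ in range(nsteps):
        y = (1 + 1j * w_slow * dt) * y / (1 - 1j * w_fast * dt)
    return y

# dt*wf = 5, far beyond the explicit stability limit, yet the implicit
# treatment of the fast term keeps the solution bounded (though, as the
# abstract notes for first-order schemes, it damps the high frequency):
y_end = imex_euler(1.0, w_fast=50.0, w_slow=1.0, dt=0.1, nsteps=100)
print(abs(y_end) < 1.0)   # -> True
```

The heavy damping of the fast wave here is exactly the drawback that motivates the higher-order Runge–Kutta IMEX schemes studied in the paper.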
Abstract:
Background: Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. Aims: We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Method: Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Results: Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. Conclusions: The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.
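The distribution test reported here (r = 0.99 for an exponential fit) can be sketched as a log-linear correlation check; the counts below are synthetic illustrations, not the survey data:

```python
import numpy as np

# If the frequency f(s) of total paranoia score s is exponential,
# log f is linear in s and the Pearson r of (s, log f) approaches 1.
scores = np.arange(0, 12)
freq = 3000.0 * np.exp(-0.7 * scores)   # hypothetical exponential fall-off
r = np.corrcoef(scores, np.log(freq))[0, 1]
print(round(abs(r), 3))   # -> 1.0 for perfectly exponential counts
```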
Abstract:
Anthropogenic emissions of heat and exhaust gases play an important role in the atmospheric boundary layer, altering air quality, greenhouse gas concentrations and the transport of heat and moisture at various scales. This is particularly evident in urban areas where emission sources are integrated in the highly heterogeneous urban canopy layer and directly linked to human activities which exhibit significant temporal variability. It is common practice to use eddy covariance observations to estimate turbulent surface fluxes of latent heat, sensible heat and carbon dioxide, which can be attributed to a local scale source area. This study provides a method to assess the influence of micro-scale anthropogenic emissions on heat, moisture and carbon dioxide exchange in a highly urbanized environment for two sites in central London, UK. A new algorithm for the Identification of Micro-scale Anthropogenic Sources (IMAS) is presented, with two aims. Firstly, IMAS filters out the influence of micro-scale emissions and allows for the analysis of the turbulent fluxes representative of the local scale source area. Secondly, it is used to give a first order estimate of anthropogenic heat flux and carbon dioxide flux representative of the building scale. The algorithm is evaluated using directional and temporal analysis. The algorithm is then used at a second site which was not incorporated in its development. The spatial and temporal local scale patterns, as well as micro-scale fluxes, appear physically reasonable and can be incorporated in the analysis of long-term eddy covariance measurements at the sites in central London. In addition to the new IMAS-technique, further steps in quality control and quality assurance used for the flux processing are presented. The methods and results have implications for urban flux measurements in dense urbanised settings with significant sources of heat and greenhouse gases.
Abstract:
Observations of volcanoes extruding andesitic lava to produce lava domes often reveal cyclic behaviour. At Soufriere Hills Volcano, Montserrat, cycles with sub-daily and multi-week periods have been recognised on many occasions. These two types of cycle have been modelled separately as stick-slip magma flow at the junction between a dyke and an overlying cylindrical conduit (Costa et al., 2012), and as the filling and discharge of magma through the elastic-walled dyke (Costa et al., 2007a), respectively. Here, we couple these two models to simulate the behaviour over a period of well-observed multi-week cycles, with accompanying sub-daily cycles, from 13 May to 21 September 1997. The coupled model captures well the asymmetrical first-order behaviour: the first 40% of the multi-week cycle consists of high rates of lava extrusion during short-period/high-amplitude sub-daily cycles as the dyke reservoir discharges. The remainder of the cycle involves increasing pressurization as more magma is stored and the extrusion rate falls, followed by a gradual increase in the period of the sub-daily cycles.
Abstract:
This paper uses the linear modulation technique to study red IRSL emission of potassium feldspars. Sub-samples were subjected to various pre-treatment and measurement conditions in an attempt to understand the relevant mechanisms of charge transfer. The linear modulation curves fitted most successfully to a sum of three first order components and we present supporting empirical evidence for the presence of three separate signal components. Additionally, the form of the red emission was observed to closely resemble the UV emission, implying the same donor charge concentrations may supply different recombination centres (assuming emission wavelength depends on centre type).
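A first-order linearly modulated (LM) signal component has a standard closed form (the Bulur-type expression), which is what a three-component fit sums; the detrapping rates b and ramp time P below are hypothetical, not the fitted values:

```python
import numpy as np

def lm_component(t, n0, b, P):
    """First-order LM-OSL/IRSL component:
    I(t) = n0*b*(t/P)*exp(-b*t**2/(2*P)); it peaks at t = sqrt(P/b)."""
    return n0 * b * (t / P) * np.exp(-b * t**2 / (2.0 * P))

P = 3600.0                        # s, ramp duration (assumed)
t = np.linspace(0.0, P, 3601)
# A three-component curve analogous to the fitted red IRSL signal:
curve = (lm_component(t, 1e5, 5e-2, P)      # fast component
         + lm_component(t, 5e4, 5e-3, P)    # medium component
         + lm_component(t, 2e4, 5e-4, P))   # slow component
peak = float(t[np.argmax(lm_component(t, 1e5, 5e-2, P))])
print(round(peak))   # fastest component peaks near sqrt(P/b) = 268 s
```

Because each first-order component peaks at a distinct, rate-dependent time, the number of resolvable peaks in the LM curve is evidence for the number of separate signal components.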
Abstract:
Sea ice friction models are necessary to predict the nature of interactions between sea ice floes. These interactions are of interest on a range of scales, for example, to predict loads on engineering structures in icy waters or to understand the basin-scale motion of sea ice. Many models use Amontons' friction law due to its simplicity. More advanced models allow for hydrodynamic lubrication and refreezing of asperities; however, modeling these processes leads to greatly increased complexity. In this paper we propose, by analogy with rock physics, that a rate- and state-dependent friction law allows us to incorporate memory (and thus the effects of lubrication and bonding) into ice friction models without a great increase in complexity. We support this proposal with experimental data on both the laboratory (∼0.1 m) and ice tank (∼1 m) scale. These experiments show that the effects of static contact under normal load can be incorporated into a friction model. We find the parameters for a first-order rate-and-state model to be A = 0.310, B = 0.382, and μ0 = 0.872. Such a model then allows us to make predictions about the nature of memory effects in moving ice-ice contacts.
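The steady-state limit of a first-order rate-and-state (Dieterich–Ruina type) law makes the meaning of the fitted parameters concrete; the reference velocity v0 below is an assumption of this sketch, not a value from the study:

```python
import math

# Fitted parameters reported in the abstract
A, B, MU0 = 0.310, 0.382, 0.872

def mu_steady_state(v, v0=1.0e-3):
    """Steady-state friction mu_ss = mu0 + (A - B)*ln(v/v0);
    v0 is an assumed reference sliding velocity."""
    return MU0 + (A - B) * math.log(v / v0)

# B > A, so the contact is velocity-weakening: steady-state friction
# falls as sliding speed rises, consistent with memory effects
# (lubrication/bonding) healing the interface at low speed.
print(round(mu_steady_state(1.0e-3), 3))                   # -> 0.872 at v = v0
print(mu_steady_state(1.0e-2) < mu_steady_state(1.0e-3))   # -> True
```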
Abstract:
Observations at the Mauna Loa Observatory, Hawaii, established the systematic increase of anthropogenic CO2 in the atmosphere. For the same reasons that this site provides excellent globally averaged CO2 data, it may provide temperature data with global significance. Here, we examine hourly temperature records, averaged annually for 1977–2006, to determine linear trends as a function of time of day. For night-time data (22:00 to 06:00 LST (local standard time)) there is a near-uniform warming of 0.040 °C yr−1. During the day, the linear trend shows a slight cooling of −0.014 °C yr−1 at 12:00 LST (noon). Overall, at Mauna Loa Observatory, there is a mean warming trend of 0.021 °C yr−1. The dominance of night-time warming results in a relatively large annual decrease in the diurnal temperature range (DTR) of −0.050 °C yr−1 over the period 1977–2006. These trends are consistent with the observed increases in the concentrations of CO2 and its role as a greenhouse gas (demonstrated here by first-order radiative forcing calculations), and indicate the possible relevance of the Mauna Loa temperature measurements to global warming.
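The arithmetic linking the reported trends can be checked with a least-squares fit; the series below are synthetic, built from the reported night-time (+0.040 °C/yr) and noon (−0.014 °C/yr) trends, not the actual observatory data:

```python
import numpy as np

years = np.arange(1977, 2007, dtype=float)
night = 10.0 + 0.040 * (years - 1977.0)   # hypothetical night means, deg C
noon = 18.0 - 0.014 * (years - 1977.0)    # hypothetical noon means, deg C

def trend(y):
    """Linear trend in deg C per year via least squares."""
    return float(np.polyfit(years, y, 1)[0])

# The diurnal temperature range (noon minus night here) narrows at
# roughly the difference of the two trends:
print(round(trend(noon - night), 3))   # -> -0.054, close to the reported -0.050
```

The reported DTR decrease of −0.050 °C/yr uses the full daily maximum/minimum rather than just these two hours, hence the small difference.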
Abstract:
Artificial diagenesis of the intra-crystalline proteins isolated from Patella vulgata was induced by isothermal heating at 140 °C, 110 °C and 80 °C. Protein breakdown was quantified for multiple amino acids, measuring the extent of peptide bond hydrolysis, amino acid racemisation and decomposition. The patterns of diagenesis are complex; therefore the kinetic parameters of the main reactions were estimated by two different methods: 1) a well-established approach based on fitting mathematical expressions to the experimental data, e.g. first-order rate equations for hydrolysis and power-transformed first-order rate equations for racemisation; and 2) an alternative model-free approach, which was developed by estimating a “scaling” factor for the independent variable (time) which produces the best alignment of the experimental data. This method allows the calculation of the relative reaction rates for the different temperatures of isothermal heating. High-temperature data were compared with the extent of degradation detected in sub-fossil Patella specimens of known age, and we evaluated the ability of kinetic experiments to mimic diagenesis at burial temperature. The results highlighted a difference between patterns of degradation at low and high temperature and therefore we recommend caution for the extrapolation of protein breakdown rates to low burial temperatures for geochronological purposes when relying solely on kinetic data.
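Minimal forms of the two fitted reactions can be sketched as follows. The rate constants are placeholders, not the estimated values, and the power transform used for racemisation in the study is omitted, leaving plain first-order kinetics:

```python
import numpy as np

def hydrolysis_fraction(t, k):
    """First-order hydrolysis: fraction of peptide bonds broken,
    F(t) = 1 - exp(-k*t)."""
    return 1.0 - np.exp(-k * t)

def racemisation_dl(t, k):
    """First-order racemisation of an initially all-L amino acid:
    D/L = tanh(k*t), rising from 0 toward the D/L = 1 equilibrium."""
    return np.tanh(k * t)

t = np.array([0.0, 24.0, 240.0, 2400.0])   # hours of isothermal heating
print(hydrolysis_fraction(t, 1e-2))
print(racemisation_dl(t, 1e-3))
```

The model-free "scaling" approach described in the abstract sidesteps these closed forms entirely, instead stretching the time axis of each temperature series until the degradation curves align.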
Abstract:
Natural mineral aerosol (dust) is an active component of the climate system and plays multiple roles in mediating physical and biogeochemical exchanges between the atmosphere, land surface and ocean. Changes in the amount of dust in the atmosphere are caused both by changes in climate (precipitation, wind strength, regional moisture balance) and changes in the extent of dust sources caused by either anthropogenic or climatically induced changes in vegetation cover. Models of the global dust cycle take into account the physical controls on dust deflation from prescribed source areas (based largely on soil wetness and vegetation cover thresholds), dust transport within the atmospheric column, and dust deposition through sedimentation and scavenging by precipitation. These models successfully reproduce the first-order spatial and temporal patterns in atmospheric dust loading under modern conditions. Atmospheric dust loading was as much as an order-of-magnitude larger than today during the last glacial maximum (LGM). While the observed increase in emissions from northern Africa can be explained solely in terms of climate changes (colder, drier and windier glacial climates), increased emissions from other regions appear to have been largely a response to climatically induced changes in vegetation cover and hence in the extent of dust source areas. Model experiments suggest that the increased dust loading in tropical regions had an effect on radiative forcing comparable to that of low glacial CO2 levels. Changes in land-use are already increasing the dust loading of the atmosphere. However, simulations show that anthropogenically forced climate changes substantially reduce the extent and productivity of natural dust sources. 
Positive feedbacks initiated by a reduction of dust emissions from natural source areas on both radiative forcing and atmospheric CO2 could substantially mitigate the impacts of land-use changes, and need to be considered in climate change assessments.
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. 
The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.
Abstract:
Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted for identifying the CV linear dynamic model’s coefficients and the CV B-spline neural network’s weights; it yields closed-form solutions for both, and the estimation process is guaranteed to converge very quickly to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained using the estimated model. In particular, the inversion of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss–Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated in an application to the equalisation of Hammerstein channels.
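The De Boor recursion at the heart of this evaluation can be sketched in its textbook scalar form (this is a generic sketch, not the paper's complex-valued tensor-product implementation):

```python
def de_boor(k, x, t, c, p):
    """Evaluate a degree-p B-spline with knot vector t and coefficients c
    at x, where k is the knot span index satisfying t[k] <= x < t[k+1].
    Each stage replaces d[j] by a convex combination of neighbours."""
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Sanity check: B-spline basis functions sum to one, so with all
# coefficients equal to 1 the cubic spline is identically 1.
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
coeffs = [1.0] * 6
print(de_boor(4, 1.5, knots, coeffs, 3))   # -> 1.0
```

The first-order derivative recursion mentioned in the abstract works similarly, evaluating a degree-(p−1) spline whose coefficients are scaled differences of adjacent c values.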
Abstract:
This work investigates a type of wireless power system whose analysis yields the construction of a prototype modeled as a singular technological artifact. Exploration of the artifact forms the intellectual basis not only for its prototypical forms but also suggests variant forms not yet discovered. The process clarifies the role of the artifact, its most suitable application given the constraints of the delivery problem, and optimization strategies to improve it. To improve maturity and contribute to a body of knowledge, this document proposes research utilizing efficient mid-field inductive transfer for the purpose of removing wired connections and electrical contacts. While this description states the purpose of the work, it does not convey the compromises involved in redrawing the lines of demarcation between the near and far field in the traditional method of broadcasting. Two striking scenarios are addressed in this thesis: first, the mathematical explanation of wireless power, which is due to J.C. Maxwell's original equations; second, the behavior of wireless power in the circuit, which follows from Joseph Larmor's fundamental works on the dynamics of the field concept. A model of propagation is presented which matches observations in experiments. A modified model of the dipole is presented to address the phenomena observed in theory and experiments. Two distinct sets of experiments test the concept of single and two coupled modes. In the more esoteric context of the zero- and first-order magnetic field, the suggestion of a third coupled mode is presented. Through the remaking of wireless power in this context, the author intends to show that ideas lost to history, bound to a path of complete obscurity, are once again innovative and useful.