933 results for Scalar Functions of one Variable
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow us to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting only to well-selected simulations, and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
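The backbone of this analysis can be summarized by a few standard formulas of linear response theory; the following is a schematic sketch in simplified notation, not the paper's exact conventions.

```latex
% Linear response: the Green function G_A maps a forcing with time pattern f(t)
% into the change of the expectation value of the observable A,
\delta\langle A\rangle(t) = \int_{0}^{\infty} G_A(\tau)\, f(t-\tau)\, d\tau ,
\qquad
\chi_A(\omega) = \int_{0}^{\infty} G_A(\tau)\, e^{i\omega\tau}\, d\tau .
% Causality (G_A(\tau) = 0 for \tau < 0) makes the susceptibility \chi_A analytic
% in the upper complex \omega-plane, which yields the Kramers-Kronig relations:
\operatorname{Re}\chi_A(\omega)
  = \frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
    \frac{\operatorname{Im}\chi_A(\nu)}{\nu-\omega}\, d\nu .
```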
Abstract:
In this paper, the stability of one-step-ahead predictive controllers based on non-linear models is established. It is shown that, under conditions which can be fulfilled by most industrial plants, the closed-loop system is robustly stable in the presence of plant uncertainties and input–output constraints. There is no requirement that the plant should be open-loop stable, and the analysis is valid for general forms of non-linear system representation, including the case when the problem is constraint-free. The effectiveness of controllers designed according to the algorithm analyzed in this paper is demonstrated on a recognized benchmark problem and on a simulation of a continuous stirred-tank reactor (CSTR). In both examples a radial basis function neural network is employed as the non-linear system model.
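For a concrete picture, the following minimal Python sketch shows a one-step-ahead predictive controller built around a radial basis function model, with the input constraint handled by searching over admissible inputs only. All names and the regressor structure are hypothetical illustrations, not the algorithm analyzed in the paper.

```python
import numpy as np

# Hypothetical sketch: an RBF network y = sum_i w_i * exp(-||x - c_i||^2 / (2 s^2))
# serves as the one-step-ahead plant model; the controller picks the constrained
# input u minimizing a one-step-ahead tracking cost with a move-suppression term.

def rbf_predict(x, centers, widths, weights):
    """Predicted next output for regressor x (past output, past input, candidate u)."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return float(weights @ phi)

def one_step_controller(r, y_past, u_past, centers, widths, weights,
                        u_min=-1.0, u_max=1.0, lam=0.1, n_grid=201):
    """Grid search over admissible u; the grid itself enforces the input constraint."""
    best_u, best_cost = u_min, np.inf
    for u in np.linspace(u_min, u_max, n_grid):
        x = np.array([y_past, u_past, u])   # example regressor structure
        cost = (r - rbf_predict(x, centers, widths, weights)) ** 2 \
               + lam * (u - u_past) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```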
Abstract:
Background: Serine proteases are a major component of viper venoms and are thought to disrupt several distinct elements of the blood coagulation system of envenomed victims. A detailed understanding of the functions of these enzymes is important both for acquiring a fuller understanding of the pathology of envenoming and because these venom proteins have shown potential in treating blood coagulation disorders. Methodology/Principal Findings: In this study a novel, highly abundant serine protease, which we have named rhinocerase, has been isolated and characterised from the venom of Bitis gabonica rhinoceros using liquid-phase isoelectric focusing and gel filtration. Like many viper venom serine proteases, this enzyme is glycosylated; the estimated molecular mass of the native enzyme is approximately 36 kDa, which reduces to 31 kDa after deglycosylation. The partial amino acid sequence shows similarity to other viper venom serine proteases, but is clearly distinct from the sequence of the only other sequenced serine protease from Bitis gabonica. Other viper venom serine proteases have been shown to exert distinct biological effects, and our preliminary functional characterization of rhinocerase suggests that it is multifunctional: it is capable of degrading the α and β chains of fibrinogen, dissolving plasma clots, and hydrolysing a kallikrein substrate. Conclusions/Significance: A novel multifunctional viper venom serine protease has been isolated and characterised. The activities of the enzyme are consistent with the known in vivo effects of Bitis gabonica envenoming, including bleeding disorders, clotting disorders and hypotension. This study will form the basis for future research to understand the mechanisms of serine protease action, and to examine the potential for rhinocerase to be used clinically to reduce the risk of human haemostatic disorders such as heart attacks and strokes.
Abstract:
Transient epileptic amnesia (TEA) is characterized by deficits in autobiographical memory (AM). One of the functions of AM is to maintain the self, suggesting that the self may undergo changes as a result of memory loss in temporal lobe epilepsy. To examine this, we used a modification of a task used to assess the relationship between self and memory (the IAM task) in a single case, E.B. Despite complaints of AM loss, E.B. had no difficulty in producing a range of self-images (e.g., "I am a husband") and collections of self-defining AMs in support of these statements. E.B. produced fewer episodic memories at times of self-formation, but this did not appear to affect the maintenance of the self. The results support recent work suggesting that the self may be maintained in the absence of episodic memory. The application of tasks such as that used here will further elucidate AM impairment in temporal lobe epilepsy.
Abstract:
In this paper, a continuation of the variable-radius niche technique called Dynamic Niche Clustering, developed by Gan and Warwick (1999), is presented. The technique employs a separate dynamic population of overlapping niches that coexists alongside the normal population. An empirical analysis of the updated methodology on a large group of standard optimisation test-bed functions is also given. The technique is shown to perform almost as well as standard fitness sharing with regard to stability and the accuracy of peak identification, but to outperform standard fitness sharing with regard to time complexity. It is also shown that the technique is capable of forming niches of varying size depending on the characteristics of the underlying peak that the niche is populating.
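For reference, standard fitness sharing, the baseline against which the technique is compared, divides each individual's raw fitness by its niche count. A minimal Python sketch follows (parameter names are generic, not the paper's):

```python
import numpy as np

def shared_fitness(fitness, positions, sigma_share=0.1, alpha=1.0):
    """Standard fitness sharing: divide raw fitness by each individual's niche count."""
    # Pairwise distances between all individuals in the population.
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # Triangular sharing function: sh(d) = 1 - (d/sigma)^alpha for d < sigma, else 0.
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)   # includes sh(0) = 1 for the individual itself
    return fitness / niche_count
```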
Abstract:
UV–Vis absorption spectra of one-electron reduction products and 3MLCT excited states of [ReICl(CO)3(N,N)] (N,N = 2,2′-bipyridine, bpy; 1,10-phenanthroline, phen) have been measured by low-temperature spectroelectrochemistry and UV–Vis transient absorption spectroscopy, respectively, and assigned by open-shell TD-DFT calculations. The characters of the electronic transitions are visualized and analyzed using electron density redistribution maps. It follows that the reduced and excited states can be approximately formulated as [ReICl(CO)3(N,N•−)]•− and *[ReIICl(CO)3(N,N•−)], respectively. UV–Vis spectra of the reduced complexes are dominated by IL transitions, plus weaker MLCT contributions. Excited-state spectra show an intense band in the UV region of 50% IL origin mixed with LMCT (bpy, 373 nm) or MLCT (phen, 307 nm) excitations. Because of the significant IL contribution, this spectral feature is akin to the principal IL band of the anions. In contrast, the excited-state visible spectral pattern arises from predominantly LMCT transitions, any resemblance with the reduced-state visible spectra being coincidental. The Re complexes studied herein are representatives of a broad class of metal α-diimines, for which similar spectroscopic behavior can be expected.
Abstract:
Background. In separate studies and research from different perspectives, five factors are found to be among those related to higher quality outcomes of student learning (academic achievement). Those factors are higher self-efficacy, deeper approaches to learning, higher quality teaching, students’ perceptions that their workload is appropriate, and greater learning motivation. University learning improvement strategies have been built on these research results. Aim. To investigate how students’ evoked prior experience, perceptions of their learning environment, and their approaches to learning collectively contribute to academic achievement. This is the first study to investigate motivation and self-efficacy in the same educational context as conceptions of learning, approaches to learning and perceptions of the learning environment. Sample. Undergraduate students (773) from the full range of disciplines were part of a group of over 2,300 students who volunteered to complete a survey of their learning experience. When they completed their degrees 6 and 18 months later, their academic achievement was matched with their learning experience survey data. Method. A 77-item questionnaire was used to gather students’ self-reports of their evoked prior experience (self-efficacy, learning motivation, and conceptions of learning), perceptions of the learning context (teaching quality and appropriate workload), and approaches to learning (deep and surface). Academic achievement was measured using the English honours degree classification system. Analyses were conducted using correlational and multi-variable (structural equation modelling) methods. Results. The results from the correlational methods confirmed those found in numerous earlier studies. The results from the multi-variable analyses indicated that a surface approach to learning was the strongest predictor of academic achievement, with self-efficacy and motivation also found to be directly related. In contrast to the correlational results, a deep approach to learning was not related to academic achievement, and teaching quality and conceptions of learning were only indirectly related to achievement. Conclusions. Research aimed at understanding how students experience their learning environment, and how that experience relates to the quality of their learning, needs to be conducted using a wider range of variables and more sophisticated analytical methods. In this study of one context, some of the relations found in earlier bivariate studies, and on which learning intervention strategies have been built, are not confirmed when more holistic teaching–learning contexts are analysed using multi-variable methods.
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
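For orientation, the textbook Debye–Hückel pair distribution that such constructions regularize can be written schematically (in Gaussian units) as follows:

```latex
% Linearized Debye-Hueckel pair distribution for species i, j with charges q_i, q_j
% and number densities n_k:
g_{ij}(r) = 1 - \frac{q_i q_j}{k_B T\, r}\, e^{-\kappa r},
\qquad
\kappa^{2} = \frac{4\pi}{k_B T} \sum_{k} n_k q_k^{2} .
% For like charges, g_{ij}(r) \to -\infty as r \to 0: this is the short-range
% negative divergence that is removed while the e^{-\kappa r} screening is kept.
```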
Abstract:
The single scattering albedo ω₀λ in atmospheric radiative transfer is the ratio of the scattering coefficient to the extinction coefficient. For cloud water droplets both the scattering and absorption coefficients, and thus the single scattering albedo, are functions of wavelength λ and droplet size r. This note shows that for water droplets at weakly absorbing wavelengths, the ratio ω₀λ(r)/ω₀λ(r₀) of two single scattering albedo spectra is a linear function of ω₀λ(r). The slope and intercept of the linear function are wavelength independent and sum to unity. This relationship allows for a representation of any single scattering albedo spectrum ω₀λ(r) via one known spectrum ω₀λ(r₀). We provide a simple physical explanation of the discovered relationship. Similar linear relationships were found for the single scattering albedo spectra of non-spherical ice crystals.
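Written out, the stated relationship, with wavelength-independent intercept a and slope b satisfying a + b = 1, and its immediate consequence are:

```latex
\frac{\omega_{0\lambda}(r)}{\omega_{0\lambda}(r_{0})} = a + b\,\omega_{0\lambda}(r),
\qquad a + b = 1 .
% Solving for \omega_{0\lambda}(r) expresses any spectrum through the known one:
\omega_{0\lambda}(r) = \frac{a\,\omega_{0\lambda}(r_{0})}{1 - b\,\omega_{0\lambda}(r_{0})} .
```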
Abstract:
This study presents a model intercomparison of four regional climate models (RCMs) and one variable-resolution atmospheric general circulation model (AGCM) applied over Europe with special focus on the hydrological cycle and the surface energy budget. The models simulated the 15 years from 1979 to 1993 using quasi-observed boundary conditions derived from ECMWF re-analyses (ERA). The model intercomparison focuses on two large catchments representing two different climate conditions and covering two areas of major research interest within Europe. The first is the Danube catchment, which represents a continental climate dominated by advection from the surrounding land areas. It is used to analyse the common model error of a too dry and too warm simulation of the summertime climate of southeastern Europe. This summer warming and drying problem is seen in many RCMs, and to a lesser extent in GCMs. The second area is the Baltic Sea catchment, which represents a maritime climate dominated by advection from the ocean and from the Baltic Sea. This catchment is the research area of many studies within Europe and is also covered by the BALTEX program. The observed data used are monthly mean surface air temperature, precipitation and river discharge. For all models, these are used to estimate mean monthly biases of all components of the hydrological cycle over land. In addition, the mean monthly deviations of the surface energy fluxes from ERA data are computed. Atmospheric moisture fluxes from ERA are compared with those of one model to provide an independent estimate of the convergence bias derived from the observed data. These help to add weight to some of the inferred estimates and explain some of the discrepancies between them. An evaluation of these biases and deviations suggests possible sources of error in each of the models. For the Danube catchment, systematic errors in the dynamics cause the prominent summer drying problem for three of the RCMs, while for the fourth RCM it is related to deficiencies in the land surface parametrization. The AGCM does not show this drying problem. For the Baltic Sea catchment, all models similarly overestimate the precipitation throughout the year except during the summer. This model deficit is probably caused by the internal model parametrizations, such as the large-scale condensation and convection schemes.
Abstract:
Scale functions play a central role in the fluctuation theory of spectrally negative Lévy processes and often appear in the context of martingale relations. Working with these relations often requires excursion theory rather than Itô calculus, because standard Itô calculus is only applicable to functions with a sufficient degree of smoothness, and knowledge of the precise degree of smoothness of scale functions is seemingly incomplete. The aim of this article is to offer new results concerning properties of scale functions in relation to the smoothness of the underlying Lévy measure. We place particular emphasis on spectrally negative Lévy processes with a Gaussian component and processes of bounded variation. An additional motivation is the very intimate relation of scale functions to renewal functions of subordinators. The results obtained for scale functions have direct implications, offering new results concerning the smoothness of such renewal functions, a topic on which there seems to be very little existing literature.
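For reference, the q-scale function W^(q) of a spectrally negative Lévy process X is characterized by its Laplace transform; this is the standard definition, stated without the paper's refinements.

```latex
% Laplace exponent of X: \psi(\beta) = \log \mathbb{E}\!\left[e^{\beta X_1}\right], \beta \ge 0.
\int_{0}^{\infty} e^{-\beta x}\, W^{(q)}(x)\, dx = \frac{1}{\psi(\beta) - q},
\qquad \beta > \Phi(q),
% where \Phi(q) is the largest root of \psi(\beta) = q. For q = 0, W = W^{(0)}
% gives the classical two-sided exit identity
% \mathbb{P}_x(X \text{ exits } [0,a] \text{ at the top}) = W(x)/W(a).
```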
Abstract:
Forgetting immediate physical reality and having awareness of one's location in the simulated world is critical to enjoyment and performance in virtual environments, be it an interactive 3D game such as Quake or an online virtual 3D community space such as Second Life. Answering the question "Where am I?" at two levels, namely whether the locus is in the immediate real world as opposed to the virtual world, and whether one is aware of the spatial co-ordinates of that locus, holds the key to any virtual 3D experience. While 3D environments, especially virtual environments, and their impact on spatial comprehension have been studied in disciplines such as architecture, it is difficult to determine the relative contributions of specific attributes such as screen size or stereoscopy towards spatial comprehension, since most of these studies treat the technology as a monolith (box-centered). Using as its theoretical basis the variable-centered approach put forth by Nass and Mason (1990), which breaks down the technology into its component variables and their corresponding values, this paper looks at the contributions of five variables (stereoscopy, screen size, field of view, level of realism and level of detail) common to most virtual environments on spatial comprehension and presence. The variable-centered approach can be daunting, as the increase in the number of variables can exponentially increase the number of conditions and resources required. We overcome this drawback by using a fractional factorial design for the experiment, as sketched below. The study has completed the first wave of data collection; the next phase starts in January 2007 and is expected to be complete by February 2007. Theoretical and practical implications of the study are discussed.
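To illustrate the saving: with five two-level variables a full factorial needs 2^5 = 32 conditions, while a half-fraction 2^(5-1) design needs only 16. A minimal Python sketch, using the conventional generator E = ABCD (not necessarily the study's choice):

```python
from itertools import product

# Half-fraction 2^(5-1) design for five two-level factors (-1 / +1 coding).
# The fifth factor is aliased with the four-way interaction: E = A*B*C*D.
factors = ["stereoscopy", "screen_size", "field_of_view", "realism", "detail"]

runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d            # defining relation I = ABCDE
    runs.append(dict(zip(factors, (a, b, c, d, e))))

print(len(runs))                 # 16 conditions instead of 2**5 = 32
```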
Abstract:
Oceanography is concerned with understanding the mechanisms controlling the movement of seawater and its contents. A fundamental tool in this process is the characterization of the thermophysical properties of seawater as functions of measured temperature and electrical conductivity, the latter used as a proxy for the concentration of dissolved matter in seawater. For many years a collection of algorithms denoted the Equation of State 1980 (EOS-80) has been the internationally accepted standard for calculating such properties. However, modern measurement technology now allows routine observations of temperature and electrical conductivity to be made at least one order of magnitude more accurately than the uncertainty in this standard. Recently, a new standard has been developed, the Thermodynamic Equation of Seawater 2010 (TEOS-10). This new standard is thermodynamically consistent, valid over a wider range of temperature and salinity, and includes a mechanism to account for composition variations in seawater. Here we review the scientific development of this standard and describe the literature involved in its development, which includes many of the articles in this special issue.
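In practice, TEOS-10 is implemented in the Gibbs SeaWater (GSW) toolbox. Assuming the Python gsw package, a typical conversion from practical salinity and in-situ temperature to TEOS-10 density might look like this (the values are purely illustrative):

```python
import gsw

# Illustrative inputs: practical salinity (PSS-78), in-situ temperature (ITS-90, deg C),
# sea pressure (dbar), and longitude/latitude for the composition correction.
SP, t, p, lon, lat = 35.0, 10.0, 1000.0, -30.0, 45.0

SA = gsw.SA_from_SP(SP, p, lon, lat)   # Absolute Salinity, g/kg
CT = gsw.CT_from_t(SA, t, p)           # Conservative Temperature, deg C
rho = gsw.rho(SA, CT, p)               # in-situ density, kg/m^3
print(SA, CT, rho)
```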
Abstract:
Although there is a strong policy interest in the impacts of climate change corresponding to different degrees of climate change, there is so far little consistent empirical evidence of the relationship between climate forcing and impact. This is because the vast majority of impact assessments use emissions-based scenarios with associated socio-economic assumptions, and it is not feasible to infer impacts at other temperature changes by interpolation. This paper presents an assessment of the global-scale impacts of climate change in 2050 corresponding to defined increases in global mean temperature, using spatially explicit impact models representing impacts in the water resources, river flooding, coastal, agriculture, ecosystem and built environment sectors. Pattern-scaling is used to construct climate scenarios associated with specific changes in global mean surface temperature, and a relationship between temperature and sea level is used to construct sea level rise scenarios. Climate scenarios are constructed from 21 climate models to give an indication of the uncertainty between forcing and response. The analysis shows that there is considerable uncertainty in the impacts associated with a given increase in global mean temperature, due largely to uncertainty in the projected regional change in precipitation. This has important policy implications. There is evidence for some sectors of a non-linear relationship between global mean temperature change and impact, due to the changing relative importance of temperature and precipitation change. In the socio-economic sectors considered here, the relationships are reasonably consistent between socio-economic scenarios if impacts are expressed in proportional terms, but there can be large differences in absolute terms. There are a number of caveats with the approach, including the use of pattern-scaling to construct scenarios, the use of one impact model per sector, and the sensitivity of the shape of the relationships between forcing and response to the definition of the impact indicator.
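The pattern-scaling step can be stated compactly; this is a schematic form, and the paper's implementation details may differ:

```latex
\Delta V(x, t) \;\approx\; \Delta\bar{T}(t)\, V^{*}(x),
% where \Delta\bar{T}(t) is the prescribed change in global mean temperature and
% V^{*}(x) is the climate-model response pattern, i.e. the local change in V per
% degree of global mean warming (e.g., estimated by regression over a GCM simulation).
```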