16 results for Linear function
in CentAUR: Central Archive University of Reading - UK
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
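A minimal illustrative sketch of the core idea (our simplification, not the msalign implementation; knot count, tolerance and GA settings are arbitrary choices): a genetic algorithm fits a piecewise linear retention-time mapping between the two runs, with a fitness that counts agreeing matches rather than summing errors, so incorrect peptide matches cannot dominate the fit.

```python
# Sketch: GA fit of a piecewise linear retention-time alignment.
import numpy as np

def piecewise_linear(t, knots_x, knots_y):
    # Piecewise linear map defined by knot positions.
    return np.interp(t, knots_x, knots_y)

def fitness(knots_y, knots_x, rt_a, rt_b, tol=0.5):
    # Count matched peptides whose aligned times agree within `tol` minutes;
    # counting (rather than squared error) keeps the score insensitive
    # to incorrect matches, as the abstract requires.
    return np.sum(np.abs(piecewise_linear(rt_a, knots_x, knots_y) - rt_b) < tol)

def ga_align(rt_a, rt_b, n_knots=8, pop=60, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    knots_x = np.linspace(rt_a.min(), rt_a.max(), n_knots)
    # Initial population: the identity map plus noise.
    P = knots_x + rng.normal(0.0, 2.0, (pop, n_knots))
    for _ in range(gens):
        scores = np.array([fitness(y, knots_x, rt_a, rt_b) for y in P])
        parents = P[np.argsort(scores)[-pop // 2:]]        # selection
        n_child = pop - len(parents)
        i = rng.integers(len(parents), size=n_child)
        j = rng.integers(len(parents), size=n_child)
        mask = rng.random((n_child, n_knots)) < 0.5
        children = np.where(mask, parents[i], parents[j])  # uniform crossover
        children = children + rng.normal(0.0, 0.5, children.shape)  # mutation
        P = np.vstack([parents, children])
    return knots_x, max(P, key=lambda y: fitness(y, knots_x, rt_a, rt_b))
```

The piecewise form lets the fitted map follow the discrete retention-time shifts the abstract mentions, which a single global linear fit would smooth over.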
Abstract:
Poly(acrylic acid) forms insoluble hydrogen-bonded interpolymer complexes with methylcellulose in aqueous solutions under acidic conditions. In this work the reaction heats and binding constants were determined for the complexation between poly(acrylic acid) and methylcellulose by isothermal titration calorimetry at different pH and findings are correlated with the aggregation processes occurring in this system. The principal contribution to the complexation heat results from primary polycomplex particle aggregation. Transmission electron microscopy of nanoparticles produced at pH 1.4 and 2.4 demonstrated that they are spherical and dense structures. The nanoparticles ranged from 80 to 200 nm, whereas particles formed at pH 3.2 were 20-30 nm and were stabilized against aggregation by a network of uncomplexed macromolecules. For the first time, multilayered materials were developed on the basis of hydrogen-bonded complexes of poly(acrylic acid) and methylcellulose using layer-by-layer deposition on a glass surface. The thickness of these films was a linear function of the number of deposition cycles. The materials were subsequently cross-linked by thermal treatment, resulting in ultrathin hydrogels which detached from the glass substrate upon swelling. The swelling capacity of ultrathin hydrogels differed from the swelling of the thicker films of a similar chemical composition.
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows the response of the system to be computed in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be expressed as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale, by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
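For reference, a sketch of the Lorenz 96 test bed itself (the standard equations; the forcing value F = 8 is the conventional chaotic choice, not a value taken from the paper):

```python
# Lorenz 96 model: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
# with cyclic indexing, integrated with fourth-order Runge-Kutta.
import numpy as np

def lorenz96_tendency(x, F=8.0):
    # np.roll implements the cyclic neighbour indices.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x, dt=0.01, steps=10000, F=8.0):
    for _ in range(steps):
        k1 = lorenz96_tendency(x, F)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
        k4 = lorenz96_tendency(x + dt * k3, F)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(40)  # 40 sites
x = integrate(x0)
```

Long trajectories of this kind supply the unperturbed-state averages on which the Ruelle response formulas are evaluated.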
Abstract:
The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system with which agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields and which farmers may make special efforts to control. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h^-1. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m^-2. A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually-mapped weed patches. The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
Abstract:
The single scattering albedo $\omega_{0\lambda}$ in atmospheric radiative transfer is the ratio of the scattering coefficient to the extinction coefficient. For cloud water droplets both the scattering and absorption coefficients, and thus the single scattering albedo, are functions of wavelength $\lambda$ and droplet size $r$. This note shows that for water droplets at weakly absorbing wavelengths, the ratio $\omega_{0\lambda}(r)/\omega_{0\lambda}(r_0)$ of two single scattering albedo spectra is a linear function of $\omega_{0\lambda}(r)$. The slope and intercept of the linear function are wavelength independent and sum to unity. This relationship allows any single scattering albedo spectrum $\omega_{0\lambda}(r)$ to be represented via one known spectrum $\omega_{0\lambda}(r_0)$. We provide a simple physical explanation of the discovered relationship. Similar linear relationships were found for the single scattering albedo spectra of non-spherical ice crystals.
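In symbols (our notation for the stated relationship; $a$ and $b$ denote the wavelength-independent slope and intercept):

$$\frac{\omega_{0\lambda}(r)}{\omega_{0\lambda}(r_0)} = a\,\omega_{0\lambda}(r) + b, \qquad a + b = 1,$$

so that solving for the unknown spectrum gives $\omega_{0\lambda}(r) = b\,\omega_{0\lambda}(r_0)\,/\,\bigl(1 - a\,\omega_{0\lambda}(r_0)\bigr)$, i.e. one measured spectrum determines the whole family.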
Abstract:
Data on the vibrational energy levels and rotational constants of carbon suboxide for the low-wavenumber bending mode ν7 are reviewed, in the ground-state manifold, and in the ν2-, ν3-, ν4-, and ν2 + ν4-state manifolds. Following the procedure developed by Duckett, Mills, and Robiette [J. Mol. Spectrosc. 63, 249 (1976)] the data have been inverted to give the effective bending potential in ν7 for each of these five states. Values are obtained for various other parameters in the effective vibration-rotation Hamiltonian. The potential and rotational constants in ν2 + ν4 are given to a close approximation by linear extrapolation from the ground state through the ν2 and ν4 states.
Nonlinear system identification using particle swarm optimisation tuned radial basis function models
Abstract:
A novel particle swarm optimisation (PSO) tuned radial basis function (RBF) network model is proposed for identification of non-linear systems. At each stage of the orthogonal forward regression (OFR) model construction process, PSO is adopted to tune one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is often more efficient in model construction. The effectiveness of the proposed PSO aided OFR algorithm for constructing tunable node RBF models is demonstrated using three real data sets.
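A minimal sketch of one construction stage (our simplification, not the authors' code: plain training MSE on the current residual stands in for the LOO criterion, and all swarm hyperparameters are arbitrary): PSO searches over one unit's centre and log diagonal covariance, with the unit's weight obtained by a one-dimensional least-squares fit.

```python
# Sketch: PSO tuning of a single Gaussian RBF unit against a residual.
import numpy as np

def rbf_column(X, centre, diag_cov):
    # Gaussian RBF response of one unit for all samples in X.
    d = (X - centre) ** 2 / diag_cov
    return np.exp(-0.5 * d.sum(axis=1))

def pso_tune_unit(X, residual, n_particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    # Particle = concatenated [centre, log diagonal covariance].
    pos = np.concatenate([X[rng.integers(len(X), size=n_particles)],
                          rng.normal(0.0, 1.0, (n_particles, dim))], axis=1)
    vel = np.zeros_like(pos)

    def cost(p):
        phi = rbf_column(X, p[:dim], np.exp(p[dim:]))
        w = phi @ residual / max(phi @ phi, 1e-12)   # least-squares weight
        return np.mean((residual - w * phi) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        c = np.array([cost(p) for p in pos])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest[:dim], np.exp(gbest[dim:])
```

In the OFR scheme described by the abstract, a stage like this is repeated on successive residuals until the (LOO) error stops improving, which is what fixes the number of nodes automatically.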
Abstract:
In this paper the stability of one-step-ahead predictive controllers based on non-linear models is established. It is shown that, under conditions which can be fulfilled by most industrial plants, the closed-loop system is robustly stable in the presence of plant uncertainties and input-output constraints. There is no requirement that the plant should be open-loop stable, and the analysis is valid for general forms of non-linear system representation, including the case when the problem is constraint-free. The effectiveness of controllers designed according to the algorithm analyzed in this paper is demonstrated on a recognized benchmark problem and on a simulation of a continuous-stirred tank reactor (CSTR). In both examples a radial basis function neural network is employed as the non-linear system model.
Abstract:
Radial basis function networks can be trained quickly using linear optimisation once centres and other associated parameters have been initialised. The authors propose a small adjustment to a well-accepted initialisation algorithm which improves the network accuracy over a range of problems. The algorithm is described and results are presented.
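For context, a minimal sketch of the linear optimisation step (our illustration; the centre and width initialisation that the authors adjust is taken as given here): with the basis functions fixed, the output weights follow from an ordinary least-squares solve.

```python
# Sketch: linear training of RBF output weights for fixed centres/widths.
import numpy as np

def fit_rbf_weights(X, y, centres, width):
    # Design matrix: one Gaussian basis function per centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    # Linear optimisation step: ordinary least squares for the weights.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

Because this step is a convex least-squares problem, the quality of the trained network rests almost entirely on the initialisation of the centres and widths, which is why small changes to that initialisation can pay off across many problems.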
Abstract:
This paper considers the use of radial basis function and multi-layer perceptron networks for linear or linearizable adaptive feedback control schemes in a discrete-time environment. A close look is taken at the model structure selected and the extent of the resulting parameterization. A comparison is made with standard, non-neural-network algorithms, e.g. self-tuning control.
Abstract:
We present molecular dynamics (MD) and slip-springs model simulations of the chain segmental dynamics in entangled linear polymer melts. The time-dependent behavior of the segmental orientation autocorrelation functions and mean-square segmental displacements is analyzed for both flexible and semiflexible chains, with particular attention paid to the scaling relations among these dynamic quantities. Effective combination of the two simulation methods at different coarse-graining levels allows us to explore the chain dynamics for chain lengths ranging from $Z \approx 2$ to 90 entanglements. For a given chain length of $Z \approx 15$, the time scales accessed span more than 10 decades, covering all of the interesting relaxation regimes. The obtained time dependence of the monomer mean-square displacement, $g_1(t)$, is in good agreement with the tube theory predictions. Results on the first- and second-order segmental orientation autocorrelation functions, $C_1(t)$ and $C_2(t)$, demonstrate a clear power-law relationship $C_2(t) \propto C_1(t)^m$ with $m = 3$, 2, and 1 in the initial, free Rouse, and entangled (constrained Rouse) regimes, respectively. The return-to-origin hypothesis, which leads to inverse proportionality between the segmental orientation autocorrelation functions and $g_1(t)$ in the entangled regime, is convincingly verified by the simulation result $C_1(t) \propto g_1(t)^{-1} \propto t^{-1/4}$ in the constrained Rouse regime, where for well-entangled chains both $C_1(t)$ and $g_1(t)$ are rather insensitive to constraint release effects. However, the second-order correlation function, $C_2(t)$, shows much stronger sensitivity to constraint release effects and experiences a protracted crossover from the free Rouse to the entangled regime. This crossover region extends for at least one decade in time longer than that of $C_1(t)$. The predicted time scaling behavior $C_2(t) \propto t^{-1/4}$ is observed in slip-springs simulations only at a chain length of 90 entanglements, whereas shorter chains show higher scaling exponents. The reported simulation work can be applied to understand observations from NMR experiments.
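For reference, the standard Legendre-polynomial definitions of these two correlation functions in terms of the segment unit vector $\mathbf{u}(t)$ (standard conventions, which may differ in detail from the paper's):

$$C_1(t) = \bigl\langle \mathbf{u}(t)\cdot\mathbf{u}(0) \bigr\rangle, \qquad C_2(t) = \tfrac{1}{2}\,\bigl\langle 3\,[\mathbf{u}(t)\cdot\mathbf{u}(0)]^2 - 1 \bigr\rangle.$$

$C_2(t)$ is the quantity probed by NMR-type experiments, which is why its slower crossover to the entangled scaling matters for interpreting such measurements.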
Abstract:
Existing numerical characterizations of the optimal income tax have been based on a limited number of model specifications. As a result, they do not reveal which properties are general. We determine the optimal tax in the quasi-linear model under weaker assumptions than have previously been used; in particular, we remove the assumption of a lower bound on the utility of zero consumption and the need to permit negative labor incomes. A Monte Carlo analysis is then conducted in which economies are selected at random and the optimal tax function constructed. The results show that in a significant proportion of economies the marginal tax rate rises at low skills and falls at high. The average tax rate is equally likely to rise or fall with skill at low skill levels, rises in the majority of cases in the centre of the skill range, and falls at high skills. These results are consistent across all the specifications we test. We then extend the analysis to show that these results also hold for Cobb-Douglas utility.
Abstract:
In this paper we study Dirichlet convolution with a given arithmetical function $f$ as a linear mapping $\varphi_f$ that sends a sequence $(a_n)$ to $(b_n)$, where $b_n = \sum_{d \mid n} f(d)\,a_{n/d}$. We investigate when this is a bounded operator on $\ell^2$ and find the operator norm. Of particular interest is the case $f(n) = n^{-\alpha}$ for its connection to the Riemann zeta function on the line $\Re s = \alpha$: for $\alpha > 1$, $\varphi_f$ is bounded with $\|\varphi_f\| = \zeta(\alpha)$. For the unbounded case, we show that $\varphi_f : \mathcal{M}_2 \to \mathcal{M}_2$, where $\mathcal{M}_2$ is the subset of $\ell^2$ of multiplicative sequences, for many $f \in \mathcal{M}_2$. Consequently, we study the `quasi'-norm $\sup_{a \in \mathcal{M}_2,\ \|a\| = T} \|\varphi_f a\| / \|a\|$ for large $T$, which measures the `size' of $\varphi_f$ on $\mathcal{M}_2$. For the case $f(n) = n^{-\alpha}$, we show this quasi-norm has a striking resemblance to the conjectured maximal order of $|\zeta(\alpha + iT)|$ for $\alpha > \tfrac{1}{2}$.
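A small illustrative sketch (ours, not from the paper) of how the mapping $\varphi_f$ acts on a truncated sequence:

```python
# Apply the Dirichlet-convolution operator phi_f to a finite sequence:
# b_n = sum over divisors d of n of f(d) * a_{n/d}  (sequences 1-indexed).
def apply_phi_f(f, a):
    n_max = len(a)
    b = [0.0] * n_max
    for d in range(1, n_max + 1):
        fd = f(d)
        for n in range(d, n_max + 1, d):  # all multiples n of d
            b[n - 1] += fd * a[n // d - 1]
    return b

# Example with f(n) = n**-alpha, the case connected to the zeta function:
alpha = 2.0
b = apply_phi_f(lambda n: n ** -alpha, [1.0] * 100)
```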
Abstract:
The time-mean quasi-geostrophic potential vorticity equation of the atmospheric flow on isobaric surfaces can explicitly include an atmospheric (internal) forcing term of the stationary-eddy flow. In fact, if some non-linear terms in this equation are neglected, this forcing can be expressed mathematically as a single function, called the Empirical Forcing Function (EFF), which is equal to the material derivative of the time-mean potential vorticity. Furthermore, the EFF can be decomposed as a sum of seven components, each one representing a forcing mechanism of a different nature. These mechanisms include diabatic components associated with the radiative forcing, latent heat release and frictional dissipation, and components related to transient-eddy transports of heat and momentum. All these factors quantify the role of the transient eddies in forcing the atmospheric circulation. In order to assess the relevance of the EFF in diagnosing large-scale anomalies in the atmospheric circulation, we analyze the relationship between the EFF and the occurrence of strong ridges over the Eastern North Atlantic, which are often precursors of severe droughts over Western Iberia. For such events, the EFF pattern depicts a clear dipolar structure over the North Atlantic; cyclonic (anticyclonic) forcing of potential vorticity is found upstream (downstream) of the anomalously strong ridges. Results also show that the most significant components are related to the diabatic processes. Lastly, these results highlight the relevance of the EFF in diagnosing large-scale anomalies, also providing some insight into their interaction with different physical mechanisms.
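Schematically (our reading of the definition above, not the paper's exact formula): since the time-mean field is stationary, the material derivative of the time-mean potential vorticity $\bar{q}$ following the time-mean flow $\bar{\mathbf{v}}$ reduces to the advection term,

$$\mathrm{EFF} = \frac{D\bar{q}}{Dt} = \bar{\mathbf{v}} \cdot \nabla \bar{q},$$

which the paper then splits into the seven diabatic and transient-eddy forcing components.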