918 results for Non-linear systems


Relevance:

90.00%

Publisher:

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality-reduction technique is the space-filling Hilbert’s curve, as it possesses good locality-preserving properties. However, little comparison exists between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1-D space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2-D network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
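The scalar-index generation described above can be sketched with the classic Hilbert-curve bit-manipulation algorithm (an illustrative Python version, not the authors' implementation; the two-landmark vector and 16x16 quantisation grid are assumptions made for the example):

```python
def hilbert_index(n, x, y):
    """Map grid point (x, y), 0 <= x, y < n (n a power of two), to its
    1-D position along the Hilbert curve (classic bit-twiddling form)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the base pattern repeats recursively.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# A peer quantises its (hypothetical) two-landmark latency vector onto the
# grid and uses the curve position as its scalar identifier:
peer_id = hilbert_index(16, 3, 5)
```

Locality preservation is only approximate: most neighbouring grid cells map to nearby indices, but the curve necessarily makes a few long jumps, which is why the abstract compares it against Sammon's mapping and PCA.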

An extensive statistical ‘downscaling’ study is performed to relate large-scale climate information from a general circulation model (GCM) to local-scale river flows in SW France for 51 gauging stations ranging from nival (snow-dominated) to pluvial (rainfall-dominated) river systems. This study helps to select the appropriate statistical method at a given spatial and temporal scale to downscale hydrology for future climate-change impact assessment of hydrological resources. The four proposed statistical downscaling models use large-scale predictors (derived from climate model outputs or reanalysis data) that characterize precipitation and evaporation processes in the hydrological cycle to estimate summary flow statistics. The four statistical models used are generalized linear (GLM) and additive (GAM) models, aggregated boosted trees (ABT) and multi-layer perceptron neural networks (ANN). Each of these four models was applied at two different spatial scales, namely that of a single flow-gauging station (local downscaling) and that of a group of flow-gauging stations having the same hydrological behaviour (regional downscaling). For each statistical model and each spatial resolution, three temporal resolutions were considered, namely the daily mean flows, the summary statistics of fortnightly flows and a daily ‘integrated approach’. The results show that flow sensitivity to atmospheric factors differs significantly between nival and pluvial hydrological systems, which are mainly influenced by shortwave solar radiation and atmospheric temperature, respectively. The non-linear models (i.e. GAM, ABT and ANN) performed better than the linear GLM when simulating fortnightly flow percentiles. The aggregated boosted trees method showed higher and less variable R2 values when downscaling the hydrological variability in both nival and pluvial regimes.
Based on the CNRM-CM3 GCM and scenarios A2 and A1B, future relative changes in fortnightly median flows were projected using the regional downscaling approach. The results suggest a global decrease of flow in both pluvial and nival regimes, especially in spring, summer and autumn, whichever scenario is considered. The discussion considers the performance of each statistical method for downscaling flow at different spatial and temporal scales, as well as the relationship between atmospheric processes and flow variability.
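As a toy illustration of the linear baseline (an identity-link GLM, which reduces to ordinary least squares), the sketch below regresses a synthetic flow statistic on two invented large-scale predictors; none of the data, predictor choices or coefficients come from the study:

```python
import numpy as np

# Synthetic stand-in for the GLM-type downscaling baseline: regress a
# flow statistic on two hypothetical large-scale predictors.
rng = np.random.default_rng(0)
precip = rng.uniform(0.0, 10.0, 200)       # precipitation-process proxy
temp = rng.uniform(-5.0, 20.0, 200)        # atmospheric temperature proxy
flow = 2.0 * precip - 0.5 * temp + rng.normal(0.0, 0.1, 200)

X = np.column_stack([np.ones_like(precip), precip, temp])
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)   # [intercept, b_P, b_T]
pred = X @ coef
r2 = 1.0 - np.sum((flow - pred) ** 2) / np.sum((flow - flow.mean()) ** 2)
```

A GAM, boosted-tree or neural-network downscaling model would replace this linear design matrix with a non-linear function of the same predictors, which is what gives the non-linear models their edge on fortnightly flow percentiles.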

Chain in both its forms - common (or stud-less) and stud-link - has many engineering applications. It is widely used as a component in the moorings of offshore floating systems, where its ruggedness and corrosion resistance make it an attractive choice. Chain exhibits some interesting behaviour: when straight and subject to an axial load it does not twist or generate any torque, but if twisted, or loaded while in a twisted condition, it behaves in a highly non-linear manner, with the torque dependent upon the level of twist and axial load. Clearly, an understanding of the way in which chains may behave and interact with other mooring components (such as wire rope, which also exhibits coupling between axial load and generated torque) when they are in service is essential. However, the sizes of chain in use in offshore moorings (typical bar diameters are 75 mm and greater) are too large to allow easy testing. This paper, which is in two parts, addresses the issues and considerations relevant to torque in mooring chain. The first part introduces a frictionless theory that predicts the resultant torques and 'lift' in the links as non-dimensionalized functions of the angle of twist. Fortran code that allows the reader to make use of the analysis is presented in an appendix. The second part of the paper presents results from experimental work on both stud-less (41 mm) and stud-link (20.5 and 56 mm) chains. Torsional data are presented in both 'constant twist' and 'constant load' forms, and the lift between the links is also considered.

We present a kinetic double-layer model coupling aerosol surface and bulk chemistry (K2-SUB) based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007). K2-SUB is applied to a popular model system of atmospheric heterogeneous chemistry: the interaction of ozone with oleic acid. We show that our modelling approach allows deconvolving surface and bulk processes, which has been a controversial topic and remains an important challenge for the understanding and description of atmospheric aerosol transformation. In particular, we demonstrate how a detailed treatment of adsorption and reaction at the surface can be coupled to a description of bulk reaction and transport that is consistent with traditional resistor model formulations. From literature data we have derived a consistent set of kinetic parameters that characterise mass transport and chemical reaction of ozone at the surface and in the bulk of oleic acid droplets. Due to the wide range of rate coefficients reported from different experimental studies, the exact proportions between surface and bulk reaction rates remain uncertain. Nevertheless, the model results suggest an important role of chemical reaction in the bulk and an approximate upper limit of ~10^-11 cm^2 s^-1 for the surface reaction rate coefficient. Sensitivity studies show that the surface accommodation coefficient of the gas-phase reactant has a strong non-linear influence on both surface and bulk chemical reactions. We suggest that K2-SUB may be used to design, interpret and analyse future experiments for better discrimination between surface and bulk processes in the oleic acid-ozone system, as well as in other heterogeneous reaction systems of atmospheric relevance.
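The "traditional resistor model formulation" mentioned above can be sketched numerically for a gas taken up by reaction in the particle bulk. All parameter values below are hypothetical placeholders chosen only to make the arithmetic concrete; they are not the kinetic parameters derived in the paper:

```python
import math

# Resistor-model sketch for bulk-reaction-limited uptake:
#   1/gamma = 1/alpha + 1/Gamma_b,
#   Gamma_b = 4 * H * R * T * sqrt(D_b * k_b) / c_mean
R, T = 8.314, 298.0      # gas constant (J mol^-1 K^-1), temperature (K)
M = 0.048                # molar mass of O3 (kg mol^-1)
c_mean = math.sqrt(8 * R * T / (math.pi * M))   # mean molecular speed (m/s)

alpha = 1e-3             # surface accommodation coefficient (assumed)
H = 4.8e-4               # Henry's law solubility, mol m^-3 Pa^-1 (assumed)
D_b = 1e-9               # bulk diffusion coefficient, m^2 s^-1 (assumed)
k_b = 1e3                # pseudo-first-order bulk loss rate, s^-1 (assumed)

Gamma_b = 4 * H * R * T * math.sqrt(D_b * k_b) / c_mean   # bulk conductance
gamma = 1.0 / (1.0 / alpha + 1.0 / Gamma_b)               # uptake coefficient
```

With these placeholder numbers the bulk term limits the uptake (Gamma_b << alpha), illustrating how the resistor formulation apportions the overall uptake coefficient between accommodation and bulk reactive-diffusive loss.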

Associative memory networks such as Radial Basis Functions, Neurofuzzy and Fuzzy Logic models used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of optimal piecewise locally linear models over a Delaunay partition of the input space, which overcomes the COD and also generates locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated re-annealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input-space partition. A benchmark non-linear time series is used to demonstrate the new approach.
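A one-dimensional toy version of the piecewise locally linear idea (a fixed uniform partition standing in for the optimised Delaunay partition, and independent local least-squares fits standing in for the mixture-of-experts training):

```python
import numpy as np

# Approximate a nonlinear map by an independent linear model per cell.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 400))
y = np.sin(x)                                # nonlinear target map

edges = np.linspace(0.0, 2.0 * np.pi, 9)     # 8 local cells (assumed)
models = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (x >= lo) & (x <= hi)
    A = np.column_stack([np.ones(m.sum()), x[m]])
    models.append(np.linalg.lstsq(A, y[m], rcond=None)[0])

def predict(x0):
    # Locate the cell, then evaluate its local linear model.
    i = min(np.searchsorted(edges, x0, side="right") - 1, len(models) - 1)
    b0, b1 = models[i]
    return b0 + b1 * x0
```

Each local model is an ordinary linear regression, which is what makes the representation "directly amenable to linear control and estimation algorithms"; the paper's contribution is optimising the partition itself rather than fixing it a priori as done here.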

Nineteen wheat cultivars, released from 1934 to 2000, were grown at two organic and two non-organic sites in each of 3 years. Assessments included grain yield, grain protein concentration, protein yield, disease incidence and green leaf area. The superiority of each cultivar (CS; the sum of the squares of the differences between its mean in each environment and the mean of the best cultivar there, divided by twice the number of environments) was calculated for yield, grain protein concentration and protein yield, and ranked in each environment. The yield and grain protein concentration CS were more closely correlated with cultivar release date at the non-organic sites than at the organic sites. This difference may be attributed to higher yield levels with larger differences among cultivars at the non-organic sites, rather than to improved stability (i.e. similar ranks) across sites. The significant difference in the correlation of protein yield CS with cultivar age between organic and non-organic sites supports evidence that the ability to take up mineral nitrogen (N), as opposed to soil N, has been a component of the selection conditions of more modern cultivars (released after 1989). This is supported by assessment of green leaf area (GLA): more modern cultivars in the non-organic systems had greater late-season GLA, a trend that was not identified under organic conditions. This effect could explain the poor correlation between age and protein yield CS in organic compared with non-organic conditions, where modern cultivars are selected to benefit from later N availability, including the spring nitrogen applications tailored to coincide with peak crop demand. Under organic management, N release is largely based on the breakdown of fertility-building crops incorporated (ploughed in) in the previous autumn.
The release of nutrients from these residues depends on soil conditions, including temperature and microbial populations, in addition to the potential leaching effect of high winter rainfall in the UK. In organic cereal crops, early resource capture is a major advantage for maximizing the utilization of nutrients from residue breakdown. It is concluded that selection of cultivars under conditions of high agrochemical inputs selects for cultivars that yield well under maximal conditions of nutrient availability and pest, disease and weed control. The selection conditions for breeding therefore tend to produce cultivars that perform relatively better in non-organic than in organic systems.
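The superiority measure defined parenthetically above (squared gap to the best cultivar in each environment, divided by twice the number of environments) translates directly into code; the dictionary layout and cultivar names below are invented for illustration:

```python
def superiority(means, cultivar):
    """Cultivar superiority (CS): sum over environments of the squared
    difference between this cultivar's mean and the best cultivar's mean,
    divided by twice the number of environments. Lower is better.
    means: dict mapping cultivar name -> list of per-environment means."""
    n_env = len(means[cultivar])
    best = [max(means[c][j] for c in means) for j in range(n_env)]
    return sum((means[cultivar][j] - best[j]) ** 2
               for j in range(n_env)) / (2 * n_env)

# Hypothetical yields for two cultivars across three environments:
yields = {"A": [8.0, 9.0, 7.0], "B": [6.0, 9.0, 8.0]}
cs_a = superiority(yields, "A")   # A trails the best only in env 3
cs_b = superiority(yields, "B")   # B's large env-1 gap dominates
```

Because CS penalises squared gaps to the environment-wise best cultivar, a cultivar that is consistently near the top scores low (good) even if it never wins outright.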

A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on pre-calibrated robots, pre-calibrated cameras, special calibration objects or combinations of these. Our method avoids large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors that could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.

This paper shows that a wavelet network and a linear term can be advantageously combined for the purpose of non-linear system identification. The theoretical foundation of this approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such nonlinear regression structures, termed linear-wavelet models, is described. For illustration, simulation data are used to identify a model for a two-link robotic manipulator. The results show that the introduction of wavelets does improve the prediction ability of a linear model.
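A minimal sketch of a linear-wavelet regression in one dimension, using radial Mexican-hat wavelets on a grid of centres; the centres, the synthetic target and all coefficients are illustrative choices, not the paper's manipulator example or construction procedure:

```python
import numpy as np

def mexican_hat(r):
    # Radial "Mexican hat" wavelet (second derivative of a Gaussian).
    return (1.0 - r ** 2) * np.exp(-r ** 2 / 2.0)

# Synthetic target: a global linear trend plus one local wavelet bump.
rng = np.random.default_rng(2)
x = rng.uniform(-3.0, 3.0, 300)
y = 0.7 * x + mexican_hat(np.abs(x - 1.0))

# Linear-wavelet model: design matrix = [1, x, wavelets on a grid].
centres = np.linspace(-3.0, 3.0, 7)          # assumed wavelet centres
Phi = np.column_stack([np.ones_like(x), x] +
                      [mexican_hat(np.abs(x - c)) for c in centres])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # joint least-squares fit
rms = np.sqrt(np.mean((y - Phi @ w) ** 2))
```

The paper's theoretical point - that radial wavelets are orthogonal to linear functions - is what lets the linear term absorb the global trend while the wavelet terms capture only the local nonlinearity.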

We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations to the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obey K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place previous findings obtained for optical systems and simple mechanical models in a more general theoretical framework, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has, in principle, a similar range of applicability and limitations. In order to connect the equilibrium and the non-equilibrium steady-state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase-space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research.
In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
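For reference, in the linear (first-order) case the K-K relations referred to above take the standard textbook form for a causal susceptibility $\chi(\omega) = \chi'(\omega) + i\chi''(\omega)$ (this is the generic form, not notation taken from the paper, which generalises such relations to all orders of nonlinearity):

```latex
\chi'(\omega) = \frac{1}{\pi}\,\mathrm{P}\!\int_{-\infty}^{\infty}
  \frac{\chi''(\omega')}{\omega' - \omega}\,d\omega',
\qquad
\chi''(\omega) = -\frac{1}{\pi}\,\mathrm{P}\!\int_{-\infty}^{\infty}
  \frac{\chi'(\omega')}{\omega' - \omega}\,d\omega',
```

where $\mathrm{P}$ denotes the Cauchy principal value. Causality of the response function is what forces the real and imaginary parts to determine each other, which is why these relations can serve as the self-consistency benchmarks described above.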

Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. The article concludes with a discussion of the appropriateness of these measures of observation impact for different situations.
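In the purely Gaussian scalar case (a sketch of the benchmark against which the Gaussian-mixture results are compared, not the paper's derivation), two of these impact measures have closed forms:

```python
import math

# Scalar Gaussian observation impact: with prior variance s_b2 and
# observation-error variance s_o2, the posterior variance is
# s_a2 = 1/(1/s_b2 + 1/s_o2). The sensitivity of the posterior mean to
# the observation is the gain s_a2/s_o2 (proportional to the posterior
# variance), and the mutual information 0.5*ln(s_b2/s_a2) is independent
# of the observed value -- the property noted in the abstract.
def gaussian_impact(s_b2, s_o2):
    s_a2 = 1.0 / (1.0 / s_b2 + 1.0 / s_o2)
    gain = s_a2 / s_o2                       # d(posterior mean)/d(obs)
    mutual_info = 0.5 * math.log(s_b2 / s_a2)
    return gain, mutual_info
```

With a non-Gaussian (e.g. Gaussian-mixture) prior, the sensitivity becomes observation-dependent, which is exactly the effect the study quantifies.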

The purpose of this lecture is to review recent developments in data analysis, initialization and data assimilation. The development of 3-dimensional multivariate schemes has been very timely because of their suitability for handling the many different types of observations made during FGGE. Great progress has taken place in the initialization of global models with the aid of the non-linear normal-mode technique. However, in spite of this progress, several fundamental problems remain unsatisfactorily solved. Of particular importance are the initialization of the divergent wind fields in the Tropics and the search for proper ways to initialize weather systems driven by non-adiabatic processes. The unsatisfactory ways in which such processes are currently initialized lead to excessively long spin-up times.

Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make the method usable. However, particle filters can be formulated using proposal densities, which give greater freedom in how particles are sampled and allow for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in high-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available at every time step, both of those schemes become degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
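A minimal scalar SIR step - the plain scheme whose particle requirements motivate the paper, not the new proposal-density filter - weights prior particles by a Gaussian observation likelihood and resamples:

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_step(particles, obs, obs_var):
    # Importance weights from the Gaussian observation likelihood.
    w = np.exp(-0.5 * (obs - particles) ** 2 / obs_var)
    w /= w.sum()
    # Resample with replacement: high-weight particles are duplicated.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]        # equally weighted posterior sample

prior = rng.normal(0.0, 2.0, 5000)              # prior ensemble, N(0, 4)
posterior = sir_step(prior, obs=1.0, obs_var=0.5)
```

For this Gaussian toy problem the exact posterior is N(8/9, 4/9), so the resampled ensemble mean should sit near 0.89. Degeneracy - the failure mode discussed above - appears when many independent observations make almost all weights collapse to zero, which is what proposal-density formulations are designed to avoid.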

This paper presents and implements a number of tests for non-linear dependence and a test for chaos using transactions prices on three LIFFE futures contracts: the Short Sterling interest rate contract, the Long Gilt government bond contract, and the FTSE 100 stock index futures contract. While previous studies of high frequency futures market data use only those transactions which involve a price change, we use all of the transaction prices on these contracts whether they involve a price change or not. Our results indicate irrefutable evidence of non-linearity in two of the three contracts, although we find no evidence of a chaotic process in any of the series. We are also able to provide some indications of the effect of the duration of the trading day on the degree of non-linearity of the underlying contract. The trading day for the Long Gilt contract was extended in August 1994, and prior to this date there is no evidence of any structure in the return series. However, after the extension of the trading day we do find evidence of a non-linear return structure.
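One building block of such tests (e.g. the BDS statistic) is the correlation integral: the fraction of pairs of m-histories of the series lying within distance eps of each other. The sketch below is illustrative only, not the paper's exact test battery:

```python
import numpy as np

def correlation_integral(x, m, eps):
    """C_m(eps): fraction of pairs of m-dimensional delay vectors of x
    within sup-norm distance eps of each other."""
    emb = np.column_stack([x[i:len(x) - m + 1 + i] for i in range(m)])
    n = len(emb)
    close, pairs = 0, 0
    for i in range(n - 1):
        d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
        close += int((d < eps).sum())
        pairs += n - 1 - i
    return close / pairs

# For an i.i.d. series, C_m(eps) ~ C_1(eps)**m; BDS-type tests measure
# the departure from this benchmark as evidence of non-linear dependence.
rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, 2000)
c1 = correlation_integral(u, 1, 0.5)
c2 = correlation_integral(u, 2, 0.5)
```

For i.i.d. uniform data with eps = 0.5, C_1 should sit near 0.75 and C_2 near C_1 squared; a chaotic or otherwise non-linearly dependent series would violate this scaling.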

A number of tests for non-linear dependence in time series are presented and implemented on a set of 10 daily sterling exchange rates covering the entire post-Bretton-Woods era to the present day. Irrefutable evidence of non-linearity is shown in many of the series, but most of this dependence can apparently be explained by reference to the GARCH family of models. It is suggested that the literature in this area has reached an impasse: the presence of ARCH effects has been clearly demonstrated in a large number of papers, but the currently available tests for non-linearity are unable to classify any additional non-linear structure.
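The ARCH point can be illustrated with a simulated ARCH(1) series (parameters invented for the example): the series itself is serially uncorrelated, yet its squares are autocorrelated, which is exactly the non-linear structure that linear diagnostics miss:

```python
import numpy as np

# Simulate an ARCH(1) process: r_t = sqrt(0.2 + 0.3 * r_{t-1}^2) * eps_t.
rng = np.random.default_rng(4)
n = 20000
r = np.empty(n)
prev = 0.0
for t in range(n):
    prev = np.sqrt(0.2 + 0.3 * prev ** 2) * rng.standard_normal()
    r[t] = prev

def acf1(x):
    # Lag-1 sample autocorrelation.
    x = x - x.mean()
    return float((x[:-1] * x[1:]).mean() / x.var())

lin = acf1(r)          # autocorrelation of returns: near zero
nonlin = acf1(r ** 2)  # autocorrelation of squared returns: near 0.3
```

For ARCH(1) the lag-1 autocorrelation of the squares equals the ARCH coefficient (here 0.3), so volatility clustering shows up in the squares while the returns themselves pass linear whiteness checks.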

Ongoing human population growth and changing patterns of resource consumption are increasing global demand for ecosystem services, many of which are provided by soils. Some of these ecosystem services are linearly related to the surface area of pervious soil, whereas others show non-linear relationships, making ecosystem-service optimization a complex task. As limited land availability creates conflicting demands among various types of land use, a central challenge is how to weigh these conflicting interests and how to achieve the best solutions possible from a perspective of sustainable societal development. These conflicting interests become most apparent in soils that are the most heavily used by humans for specific purposes: urban soils used for green spaces, housing and other infrastructure, and agricultural soils used for producing food, fibres and biofuels. We argue that, despite their seemingly divergent uses of land, agricultural and urban soils share common features with regard to interactions between ecosystem services, and that the trade-offs associated with decision-making, while scale- and context-dependent, can be surprisingly similar between the two systems. We propose that the trade-offs within land-use types and their soil-related ecosystem services are often disproportional, and that quantifying them will enable ecologists and soil scientists to help policy makers optimize management decisions when confronted with demands for multiple services under limited land availability.