918 results for Non Linear Systems


Relevance:

90.00%

Publisher:

Abstract:

Although collaboration manifestly takes place in time, the role of time in shaping the behaviour of collaborations, and of collaborative systems, is not well understood. Time is more than clock-time or the subjective experience of time; its effects on systems include differential rates of change of system elements, temporally non-linear behaviour, and phenomena such as entrainment and synchronization. As a system driver, it generates emergent effects that shape systems and their behaviour. In this paper we present a systems view of time and consider the implications of such a view through the case of the collaborative development of a new university timetabling system. Teasing out the key temporal phenomena using the notion of temporal trajectories helps us understand the emergent temporal behaviour and suggests a means for improving outcomes.


We present the first experimental observation of several bifurcations in a controllable non-linear Hamiltonian system. Dynamics of cold atoms are used to test predictions of non-linear, non-dissipative Hamiltonian dynamics.


Time and length scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how escript is used to implement a simulation code for interacting fault systems. We present some results of large-scale, parallel simulations on an SGI Altix system.
Acknowledgements: Project work is supported by Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
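
The Newton-Raphson technique mentioned above for handling non-linearity can be sketched in a few lines of plain Python (a generic illustration, not escript code; the 2x2 system solved here is a hypothetical example):

```python
def newton_2x2(f, jac, x0, tol=1e-12, max_iter=50):
    """Solve f(x, y) = (0, 0) by Newton-Raphson iteration on a 2x2 system."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        (a, b), (c, d) = jac(x, y)        # 2x2 Jacobian rows
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det     # solve J * delta = -f by Cramer's rule
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if abs(f(x, y)[0]) < tol and abs(f(x, y)[1]) < tol:
            break
    return x, y

# hypothetical non-linear system: x^2 + y^2 = 4 and x*y = 1
f = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
jac = lambda x, y: ((2 * x, 2 * y), (y, x))
x, y = newton_2x2(f, jac, (2.0, 0.5))
```

In a code such as escript this iteration would sit above the PDE solver, with each Newton step solving a linearised PDE rather than a 2x2 system.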


There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, training such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
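
The EM training loop the paper builds on can be sketched for a much simpler latent variable model, a two-component 1-D Gaussian mixture (an illustrative stand-in, not the paper's non-linear model; fixed unit variances are an assumption made for brevity):

```python
import math, random

def em_gmm_1d(data, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture with unit variances:
    the E-step computes posterior responsibilities of the latent component,
    the M-step re-estimates the means and mixing weights."""
    mu = [min(data), max(data)]            # crude initialisation
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            pi[k] = nk / len(data)
    return mu, pi

random.seed(0)
data = ([random.gauss(-3, 1) for _ in range(200)]
        + [random.gauss(3, 1) for _ in range(200)])
mu, pi = em_gmm_1d(data)
```

The appeal of EM, here as in the paper, is that each step is a closed-form re-estimation rather than a general non-linear optimisation.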


A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To be of more general interest, however, the technique must be shown to apply beyond the test cases they consider. In this paper, the technique is applied to the subset sum problem, which is a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin flip dynamics. It is a problem which exhibits an interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
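
A minimal genetic algorithm for the subset sum problem can be sketched as follows (a toy illustration with a hypothetical instance, not the specific algorithm analysed in the paper; the energy is the distance of the selected subset's sum from the target):

```python
import random

def ga_subset_sum(values, target, pop_size=60, gens=200, seed=1):
    """Toy GA for subset sum: bit-string individuals, energy (fitness) =
    |sum of selected values - target|, binary tournament selection,
    one-point crossover and single bit-flip mutation."""
    rng = random.Random(seed)
    n = len(values)

    def energy(ind):
        return abs(sum(v for v, b in zip(values, ind) if b) - target)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=energy)
    for _ in range(gens):
        def pick():                          # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if energy(a) <= energy(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] ^= 1     # single bit-flip mutation
            new_pop.append(child)
        pop = new_pop
        gen_best = min(pop, key=energy)
        if energy(gen_best) < energy(best):  # keep the best ever seen
            best = gen_best
    return best, energy(best)

values = [12, 7, 19, 3, 23, 5, 17, 11, 2, 29]   # hypothetical instance
best, e = ga_subset_sum(values, target=50)
```

Tracking statistics such as the population's mean energy and pairwise bit correlation over the generations of a run like this is exactly the kind of data the statistical mechanics formalism aims to predict.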


Radial Basis Function networks with linear outputs are often used in regression problems because they can be substantially faster to train than Multi-layer Perceptrons. For classification problems, the use of linear outputs is less appropriate as the outputs are not guaranteed to represent probabilities. We show how RBFs with logistic and softmax outputs can be trained efficiently using the Fisher scoring algorithm. This approach can be used with any model which consists of a generalised linear output function applied to a model which is linear in its parameters. We compare this approach with standard non-linear optimisation algorithms on a number of datasets.
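
The Fisher scoring approach can be sketched for the simplest model of the kind described, a logistic output applied to a one-input linear model (an illustrative stand-in for the RBF case, with hypothetical toy data; for logistic models Fisher scoring coincides with iteratively reweighted least squares):

```python
import math

def fisher_scoring_logistic(xs, ys, n_iter=25):
    """Fisher scoring for p = sigmoid(w0 + w1*x): each step computes the
    gradient g = X^T (y - p) and the Fisher information F = X^T R X with
    R = diag(p*(1-p)), then updates w += F^{-1} g."""
    w0, w1 = 0.0, 0.0
    for _ in range(n_iter):
        g0 = g1 = f00 = f01 = f11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))
            r = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            f00 += r; f01 += r * x; f11 += r * x * x
        det = f00 * f11 - f01 * f01
        w0 += (f11 * g0 - f01 * g1) / det   # 2x2 solve by Cramer's rule
        w1 += (f00 * g1 - f01 * g0) / det
    return w0, w1

# hypothetical non-separable toy data: class 1 for x > 0, two labels flipped
xs = [-3, -2.5, -2, -1.5, -1, -0.5, 0.5, 1, 1.5, 2, 2.5, 3, -0.7, 0.7]
ys = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0]
w0, w1 = fisher_scoring_logistic(xs, ys)
p_hi = 1.0 / (1.0 + math.exp(-(w0 + w1 * 2.0)))
p_lo = 1.0 / (1.0 + math.exp(-(w0 + w1 * -2.0)))
```

In the RBF setting of the paper, x would be replaced by the vector of basis function activations; the update has the same form because the model remains linear in its parameters under the generalised linear output.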


Linear models reach their limitations in applications with nonlinearities in the data. In this paper new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed, to a certain extent, to the evaluation of these indices within a linear framework. The results suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
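
As a hedged sketch of the linear benchmark family, an AR(1) model, the simplest member of the ARIMA class, can be fitted by ordinary least squares (a hypothetical noiseless toy series, not the Euro inflation data):

```python
def fit_ar1(series):
    """Least-squares fit of x_t = a + b * x_{t-1}: regress each value
    on its predecessor and recover intercept a and slope b."""
    xp, xn = series[:-1], series[1:]
    n = len(xp)
    mx = sum(xp) / n
    my = sum(xn) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xp, xn))
         / sum((x - mx) ** 2 for x in xp))
    a = my - b * mx
    return a, b

# toy series generated from x_t = 1 + 0.5 * x_{t-1}, starting at 10
series = [10.0]
for _ in range(30):
    series.append(1.0 + 0.5 * series[-1])
a, b = fit_ar1(series)
forecast = a + b * series[-1]   # one-step-ahead linear forecast
```

A neural network forecaster nests this model as a special case, which is the sense in which linear models are "simply a subset" of the non-linear ones.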


Blurred edges appear sharper in motion than when they are stationary. We (Vision Research 38 (1998) 2108) have previously shown how such distortions in perceived edge blur may be accounted for by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. If the form of the transducer is fixed (independent of contrast) for a given speed, then a strong prediction of the model is that motion sharpening should increase with increasing contrast. We measured the sharpening of periodic patterns over a large range of contrasts, blur widths and speeds. The results indicate that whilst sharpening increases with speed it is practically invariant with contrast. The contrast invariance of motion sharpening is not explained by an early, static compressive non-linearity alone. However, several alternative explanations are also inconsistent with these results. We show that if a dynamic contrast gain control precedes the static non-linear transducer then motion sharpening, its speed dependence, and its invariance with contrast, can be predicted with reasonable accuracy. © 2003 Elsevier Science Ltd. All rights reserved.
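
The static compressive transducer idea can be sketched as follows (an illustrative power-law compression applied to a hypothetical tanh edge profile, not the paper's fitted model); the compressed profile has a steeper maximum slope, i.e. the edge looks sharper:

```python
import math

def transducer(c, p):
    """Static compressive transducer: response = sign(c) * |c|**p with
    p < 1, so small contrasts are boosted relative to large ones."""
    return (1.0 if c >= 0 else -1.0) * abs(c) ** p

# hypothetical blurred edge profile in contrast units (tanh ramp)
xs = [i / 10.0 for i in range(-30, 31)]
edge = [math.tanh(x) for x in xs]
sharpened = [transducer(c, 0.5) for c in edge]

def max_slope(profile, dx=0.1):
    """Steepest local gradient of a sampled profile."""
    return max(abs(b - a) / dx for a, b in zip(profile, profile[1:]))
```

The paper's point is that with a fixed transducer of this kind, sharpening should grow with contrast, which the data contradict; hence the need for a contrast gain control stage before the compression.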


Linear typing schemes can be used to guarantee non-interference and so the soundness of in-place update with respect to a functional semantics. But linear schemes are restrictive in practice, and more restrictive than necessary to guarantee soundness of in-place update. This limitation has prompted research into static analysis and more sophisticated typing disciplines to determine when in-place update may be safely used, or to combine linear and non-linear schemes. Here we contribute to this direction by defining a new typing scheme that better approximates the semantic property of soundness of in-place update for a functional semantics. We begin from the observation that some data are used only in a read-only context, after which they may be safely re-used before being destroyed. Formalising the in-place update interpretation in a machine model semantics allows us to refine this observation, motivating three usage aspects apparent from the semantics that are used to annotate function argument types. The aspects are (1) used destructively, (2) used read-only but shared with the result, and (3) used read-only and not shared with the result. The main novelty is aspect (2), which allows a linear value to be safely read and even aliased with a result of a function without being consumed. This novelty makes our type system more expressive than previous systems for functional languages in the literature. The system remains simple and intuitive, but it enjoys a strong soundness property whose proof is non-trivial. Moreover, our analysis features principal types and feasible type reconstruction, as shown in M. Konečný (In TYPES 2002 workshop, Nijmegen, Proceedings, Springer-Verlag, 2003).
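
The three usage aspects can be illustrated by analogy in a language with reference semantics (Python here; this sketch mimics, but does not implement, the typing discipline):

```python
def reverse_destructive(xs):
    """Aspect (1): the argument is used destructively -- it is reversed
    in place, so the caller's value is consumed (overwritten)."""
    xs.reverse()
    return xs

def shared_view(xs):
    """Aspect (2): the argument is only read, but the result is shared
    (aliased) with it -- here literally the same object. It is safe to
    read the argument afterwards, but not to update it in place."""
    return xs

def length_unshared(xs):
    """Aspect (3): the argument is only read and the result (an int)
    shares no storage with it -- in-place update elsewhere is unaffected."""
    return len(xs)

data = [1, 2, 3]
alias = shared_view(data)     # aspect (2): data still readable, but aliased
n = length_unshared(data)     # aspect (3): no sharing at all
reverse_destructive(data)     # aspect (1): data is consumed; alias sees it too
```

The hazard the type system tracks is visible in the last two lines: once the value is destructively updated, every result that shares storage with it observes the change.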


The underlying work of this thesis focused on the exploitation and investigation of photosensitivity mechanisms in optical fibres and planar waveguides for the fabrication of advanced integrated optical devices for telecoms and sensing applications. One major aim is the improvement of grating fabrication specifications by introducing new writing techniques and using advanced characterisation methods for grating testing. For the first time, the polarisation control method for advanced grating fabrication has been successfully converted to apodised planar waveguide fabrication, and the development of a holographic method for the inscription of chirped gratings at arbitrary wavelengths is presented. The latter resulted in the fabrication of gratings for pulse-width suppression and wavelength selection in diode lasers. In co-operation with research partners, a number of samples were tested using optical frequency domain and optical low coherence reflectometry for better insight into the limitations of grating writing techniques. Using a variety of fabrication methods, custom apodised and chirped fibre Bragg gratings were written for use as filter elements in multiplexer-demultiplexer devices, as well as for short pulse generation and wavelength selection in telecommunication transmission systems. Long period grating based devices in standard, speciality and tapered fibres are presented, showing great potential for multi-parameter sensing. A particular focus is the development of vectorial curvature and refractive index sensors with potential for medical, chemical and biological sensing. In addition, the design of an optically tunable Mach-Zehnder based multiwavelength filter is introduced. The discovery of a Type IA grating through overexposure of hydrogen-loaded standard and Boron-Germanium co-doped fibres strengthened the assumption that UV photosensitivity is a highly non-linear process. Gratings of this type show a significantly lower thermal sensitivity than standard gratings, which makes them useful for sensing applications. An Oxford Lasers copper-vapour laser operating at 255 nm in pulsed mode was used for their inscription, in contrast to previous work using CW argon-ion lasers, a difference that contributes to differences in the processes of the photorefractive index change.


This thesis experimentally examines the use of different techniques for optical fibre transmission over ultra-long-haul distances. It first examines the use of dispersion management as a means of achieving long-haul communications. Secondly, it examines the use of concatenated NOLMs for DM autosoliton ultra-long-haul propagation, comparing their performance with that of a generic system without NOLMs. Thirdly, timing jitter in the concatenated NOLM system is examined and compared with the generic system, and lastly issues of OTDM amplitude non-uniformity from channel to channel in a saturable absorber, specifically a NOLM, are raised. Transmission at a rate of 40Gbit/s is studied in an all-Raman amplified standard fibre link with amplifier spacing of the order of 80km. We demonstrate in this thesis that the detrimental effects associated with high power Raman amplification can be minimized by dispersion map optimization. As a result, a transmission distance of 1600km (2000km including dispersion compensating fibre) has been achieved in standard single mode fibre. The use of concatenated NOLMs to provide a stable propagation regime has been proposed theoretically. In this thesis, autosoliton propagation in a dispersion managed optical transmission system is observed experimentally for the first time. The system is based on a strong dispersion map with large amplifier spacing. Operation at transmission rates of 10, 40 and 80Gbit/s is demonstrated. With the insertion of a stabilizing element into the NOLM, the transmission of 10 and 20Gbit/s data streams was extended and demonstrated experimentally. Error-free propagation over 100,000km and 20,000km has been achieved at 10 and 20Gbit/s respectively, with terrestrial amplifier spacing. The monitoring of timing jitter is of importance to all optical systems. The evolution of timing jitter in a DM autosoliton system has been studied in this thesis and analyzed at bit rates from 10Gbit/s to 80Gbit/s. Non-linear guiding by in-line regenerators considerably changes the dynamics of jitter accumulation. As transmission systems require higher data rates, the use of OTDM will become more prolific. The dynamics of switching and transmission of an optical signal comprising individual OTDM channels of unequal amplitudes in a dispersion-managed link with in-line non-linear fibre loop mirrors is investigated.


This thesis describes an experimental and analytic study of the effects of magnetic non-linearity and finite length on the loss and field distribution in solid iron due to a travelling mmf wave. In the first half of the thesis, a two-dimensional solution is developed which accounts for the effects of both magnetic non-linearity and eddy-current reaction; this solution is extended, in the second half, to a three-dimensional model. In the two-dimensional solution, new equations for loss and flux/pole are given; these equations contain the primary excitation, the machine parameters and factors describing the shape of the normal B-H curve. The solution applies to machines of any air-gap length. The conditions for maximum loss are defined, and generalised torque/frequency curves are obtained. A relationship between the peripheral component of magnetic field on the surface of the iron and the primary excitation is given. The effects of magnetic non-linearity and finite length are combined analytically by introducing an equivalent constant permeability into a linear three-dimensional analysis. The equivalent constant permeability is defined from the non-linear solution for the two-dimensional magnetic field at the axial centre of the machine, to avoid iterative solutions. In the linear three-dimensional analysis, the primary excitation in the passive end-regions of the machine is set equal to zero and the secondary end faces are developed onto the air-gap surface. The analyses, and the assumptions on which they are based, were verified on an experimental machine which consists of a three-phase rotor and two alternative solid iron stators, one with copper end rings and one without; the main dimensions of the two stators are identical. Measurements of torque, flux/pole, surface current density and radial power flow were obtained for both stators over a range of frequencies and excitations. Comparison of the measurements on the two stators enabled the individual effects of finite length and saturation to be identified, and the definition of equivalent constant permeability to be verified. The penetration of the peripheral flux into the stator with copper end rings was measured and compared with theoretical penetration curves. Agreement between measured and theoretical results was generally good.


In Information Filtering (IF) a user may be interested in several topics in parallel. But IF systems have been built on representational models derived from Information Retrieval and Text Categorization, which assume independence between terms. The linearity of these models results in user profiles that can only represent one topic of interest. We present a methodology that takes into account term dependencies to construct a single profile representation for multiple topics, in the form of a hierarchical term network. We also introduce a series of non-linear functions for evaluating documents against the profile. Initial experiments produced positive results.
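
A minimal sketch of a multi-topic profile with non-linear evaluation might use fuzzy AND/OR operators in place of a linear weighted sum (the topics, terms and operators here are hypothetical illustrations; the paper's hierarchical term network and evaluation functions are not specified in the abstract):

```python
# Hypothetical two-topic profile: each topic lists mutually dependent terms.
# A document activates a topic only if its terms co-occur (non-linear AND,
# taken as a min), and matches the profile if any topic activates (OR, max).
profile = {
    "motor_racing": ["engine", "race", "driver"],
    "stock_market": ["shares", "index", "trading"],
}

def term_weight(tokens, term):
    """Term frequency as the term's activation in the document."""
    return tokens.count(term) / len(tokens)

def score(document, profile):
    tokens = document.lower().split()
    topic_scores = [
        min(term_weight(tokens, t) for t in terms)  # AND over dependent terms
        for terms in profile.values()
    ]
    return max(topic_scores)                        # OR over topics

doc_match = "the race driver pushed the engine hard in the final race"
doc_mixed = "the engine index report"   # terms drawn from different topics
```

Unlike a linear sum, this scoring cannot be fooled by scattered terms from different topics, which is the motivation for modelling term dependencies.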


The spatial patterns of diffuse, primitive, classic and compact beta-amyloid (Abeta) deposits were studied in the medial temporal lobe in 14 elderly, non-demented patients (ND) and in nine patients with Alzheimer’s disease (AD). In both patient groups, Abeta deposits were clustered and in a number of tissues, a regular periodicity of Abeta deposit clusters was observed parallel to the tissue boundary. The primitive deposit clusters were significantly larger in the AD cases but there were no differences in the sizes of the diffuse and classic deposit clusters between patient groups. In AD, the relationship between Abeta deposit cluster size and density in the tissue was non-linear. This suggested that cluster size increased with increasing Abeta deposit density in some tissues while in others, Abeta deposit density was high but contained within smaller clusters. It was concluded that the formation of large clusters of primitive deposits could be a factor in the development of AD.


As a basis for the commercial separation of normal paraffins, a detailed study has been made of factors affecting the adsorption of binary liquid mixtures of high molecular weight normal paraffins (C12, C16 and C20) from isooctane on type 5A molecular sieves. The literature relating to molecular sieve properties and applications, and to liquid-phase adsorption of high molecular weight normal paraffin compounds by zeolites, was reviewed. Equilibrium isotherms were determined experimentally for the normal paraffins under investigation at temperatures of 303 K, 323 K and 343 K and showed a non-linear, favourable type of isotherm. A higher equilibrium amount was adsorbed with lower molecular weight normal paraffins, and an increase in adsorption temperature resulted in a decrease in the amount adsorbed. The kinetics of adsorption were investigated for the three normal paraffins at different temperatures. The effective diffusivity and the rate of adsorption of each normal paraffin increased with temperature in the range 303 to 343 K. The activation energy was between 2 and 4 kcal/mole. The dynamic properties of the three systems were investigated over a range of operating conditions (i.e. temperature, flow rate, feed concentration, and molecular sieve size in the range 0.032 × 10⁻³ to 2 × 10⁻³ m) with a packed column. The heights of adsorption zones calculated by two independent equations (one based on a constant-width, constant-velocity adsorption zone and the second on a solute material balance within the adsorption zone) agreed within 3%, which confirmed the validity of using the mass transfer zone concept to provide a simple design procedure for the systems under study. The dynamic capacity of type 5A sieves for n-eicosane was lower than for n-hexadecane and n-dodecane, corresponding to a lower equilibrium loading capacity and a lower overall mass transfer coefficient. The individual external, internal, theoretical and experimental overall mass transfer coefficients were determined; the internal resistance was in all cases rate-controlling. A mathematical model for the prediction of dynamic breakthrough curves was developed analytically and solved from the equilibrium isotherm and the mass transfer rate equation. The experimental breakthrough curves were tested against both the proposed model and a graphical method developed by Treybal. The model produced the best fit, with mean relative percent deviations of 26, 22 and 13% for the n-dodecane, n-hexadecane and n-eicosane systems respectively.
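
Two quantities from the study can be sketched in code: a standard non-linear, favourable-type isotherm (the Langmuir form is used here as an assumption, since the abstract does not state the functional form) and the mean relative percent deviation used to compare model and experiment (the parameter values are hypothetical):

```python
def langmuir(c, q_max, b):
    """Langmuir isotherm q = q_max * b * c / (1 + b * c): a common
    non-linear, favourable-type isotherm -- loading rises steeply at low
    concentration and saturates at q_max."""
    return q_max * b * c / (1.0 + b * c)

def mean_relative_percent_deviation(measured, predicted):
    """Goodness-of-fit measure quoted in the abstract:
    (100 / N) * sum(|measured - predicted| / measured)."""
    return 100.0 / len(measured) * sum(
        abs(m - p) / m for m, p in zip(measured, predicted))

cs = [0.1, 0.5, 1.0, 2.0, 5.0]                       # concentrations
qs = [langmuir(c, q_max=0.16, b=4.0) for c in cs]    # hypothetical parameters
mrpd = mean_relative_percent_deviation(qs, [q * 1.1 for q in qs])
```

A favourable isotherm of this shape is what makes the constant-pattern mass transfer zone assumption, and hence the simple design procedure, workable.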