957 results for linearity
Abstract:
Solid-phase microextraction (SPME) offers a solvent-free and less labour-intensive alternative to traditional flavour isolation techniques. In this instance, SPME was optimised for the extraction of 17 stale-flavour volatiles (C3-11,13 methyl ketones and C4-10 saturated aldehydes) from the headspace of full-cream ultrahigh-temperature (UHT)-processed milk. Relative extraction efficiencies were compared using three fibre coatings, three extraction times and three extraction temperatures. Linearity of calibration curves, limits of detection and repeatability (coefficients of variation) were also used in determining the optimum extraction conditions. A 2 cm fibre with a 50/30 μm divinylbenzene/Carboxen/polydimethylsiloxane coating, in conjunction with a 15 min extraction at 40 °C, was chosen as the final optimum condition. This method can be used as an objective tool for monitoring the flavour quality of UHT milk during storage. (c) 2005 Society of Chemical Industry.
Abstract:
This paper proposes a theoretical explanation of the variations of the sediment delivery ratio (SDR) versus catchment area relationships and the complex patterns in the behavior of sediment transfer processes at catchment scale. Taking into account the effects of erosion source types, deposition, and hydrological controls, we propose a simple conceptual model that consists of two linear stores arranged in series: a hillslope store that addresses transport to the nearest streams and a channel store that addresses sediment routing in the channel network. The model identifies four dimensionless scaling factors, which enable us to analyze a variety of effects on SDR estimation, including (1) interacting processes of erosion sources and deposition, (2) different temporal averaging windows, and (3) catchment runoff response. We show that the interactions between storm duration and hillslope/channel travel times are the major controls of peak-value-based sediment delivery and its spatial variations. The interplay between depositional timescales and the travel/residence times determines the spatial variations of total-volume-based SDR. In practical terms this parsimonious, minimal complexity model could provide a sound physical basis for diagnosing catchment to catchment variability of sediment transport if the proposed scaling factors can be quantified using climatic and catchment properties.
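The two-store cascade described above can be sketched numerically. The following is an illustrative toy model of two linear stores in series (hillslope, then channel), not the paper's calibrated formulation; the residence times `k_hill` and `k_chan` and the unit sediment pulse are arbitrary choices for the sketch:

```python
import numpy as np

def linear_store_response(inflow, k, dt=1.0):
    """Route an inflow series through a linear store with residence time k.
    Discretises dS/dt = I - S/k with an implicit Euler step; outflow Q = S/k."""
    S = 0.0
    out = np.empty(len(inflow), dtype=float)
    for i, I in enumerate(inflow):
        S = (S + I * dt) / (1.0 + dt / k)  # implicit Euler update of storage
        out[i] = S / k                     # linear-store outflow
    return out

# Hypothetical residence times (arbitrary time units)
k_hill, k_chan = 2.0, 5.0
pulse = np.zeros(100)
pulse[0] = 1.0                             # unit pulse of eroded sediment

hillslope_out = linear_store_response(pulse, k_hill)   # hillslope store
catchment_out = linear_store_response(hillslope_out, k_chan)  # channel store

# Total delivered volume approaches the input volume (mass is conserved),
# while the peak is attenuated by routing through the cascade.
print(catchment_out.sum(), catchment_out.max())
```

The cascade attenuates and delays the sediment pulse, which is the mechanism behind the peak-value-based delivery ratios discussed in the abstract: the longer the channel residence time relative to the storm duration, the lower the delivered peak.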
Abstract:
The bispectrum and third-order moment can be viewed as equivalent tools for testing for the presence of nonlinearity in stationary time series. This is because the bispectrum is the Fourier transform of the third-order moment. An advantage of the bispectrum is that its estimator comprises terms that are asymptotically independent at distinct bifrequencies under the null hypothesis of linearity. An advantage of the third-order moment is that its values in any subset of joint lags can be used in the test, whereas when using the bispectrum the entire (or truncated) third-order moment is required to construct the Fourier transform. In this paper, we propose a test for nonlinearity based upon the estimated third-order moment. We use the phase scrambling bootstrap method to give a nonparametric estimate of the variance of our test statistic under the null hypothesis. Using a simulation study, we demonstrate that the test obtains its target significance level, with large power, when compared to an existing standard parametric test that uses the bispectrum. Further we show how the proposed test can be used to identify the source of nonlinearity due to interactions at specific frequencies. We also investigate implications for heuristic diagnosis of nonstationarity.
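The phase-scrambling step used to build the null distribution can be illustrated in a few lines. This is a generic sketch of surrogate generation and of a sample third-order moment, not the authors' exact test statistic; `third_moment` is an illustrative helper:

```python
import numpy as np

def phase_scramble(x, rng):
    """Surrogate series with the same power spectrum as x but randomised
    Fourier phases, which destroys nonlinear (phase-coupled) structure."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = np.angle(X[0])        # preserve the mean (DC bin)
    if n % 2 == 0:
        phases[-1] = np.angle(X[-1])  # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def third_moment(x, i, j):
    """Sample third-order moment C3(i, j) = E[x_t * x_{t+i} * x_{t+j}]."""
    x = x - x.mean()
    m = len(x) - max(i, j)
    return np.mean(x[:m] * x[i:i + m] * x[j:j + m])

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
s = phase_scramble(x, rng)
# The periodogram (spectrum magnitude) is preserved exactly by construction.
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))
```

Repeating `phase_scramble` many times yields an ensemble of linear surrogates from which the variance of the third-moment statistic under the null hypothesis can be estimated nonparametrically.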
Abstract:
Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
Abstract:
In electronic support, receivers must maintain surveillance over the very wide portion of the electromagnetic spectrum in which threat emitters operate. A common approach is to use a receiver with a relatively narrow bandwidth that sweeps its centre frequency over the threat bandwidth to search for emitters. The sequence and timing of changes in the centre frequency constitute a search strategy. The search can be expedited if there is intelligence about the operational parameters of the emitters that are likely to be found. However, it can happen that the intelligence is deficient, untrustworthy or absent. In this case, what is the best search strategy to use? A random search strategy based on a continuous-time Markov chain (CTMC) is proposed. When the search is conducted for emitters with a periodic scan, it is shown that there is an optimal configuration for the CTMC. It is optimal in the sense that the expected time to intercept an emitter approaches linearity most quickly with respect to the emitter's scan period. A fast and smooth approach to linearity is important, as other strategies can exhibit considerable and abrupt variations in the intercept time as a function of scan period. Through theory and numerical examples, the optimum CTMC strategy is compared with other strategies to demonstrate its superior properties.
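A Monte Carlo sketch of the random-search idea is given below. It is a simplified illustration, not the paper's optimal CTMC construction: the receiver jumps uniformly among `n_bands` bands with exponential dwell times (the CTMC ingredient), while the emitter illuminates one band periodically; all parameter values are arbitrary:

```python
import numpy as np

def intercept_time(n_bands, rate, period, window, band, rng, t_max=1e4):
    """Time until a randomly sweeping receiver dwells on `band` while the
    emitter's periodic scan illuminates it during [k*period, k*period+window)."""
    t = 0.0
    while t < t_max:
        cur = rng.integers(n_bands)          # jump to a uniformly chosen band
        dwell = rng.exponential(1.0 / rate)  # exponential sojourn time (CTMC)
        if cur == band:
            k = np.floor(t / period)
            while k * period < t + dwell:
                start, end = k * period, k * period + window
                if start < t + dwell and end > t:   # dwell overlaps scan window
                    return max(t, start)            # instant of intercept
                k += 1
        t += dwell
    return t_max                                    # censored: no intercept

rng = np.random.default_rng(1)
times = [intercept_time(8, rate=5.0, period=3.0, window=0.5, band=2, rng=rng)
         for _ in range(200)]
print(np.mean(times))
```

Plotting the mean intercept time against the scan `period` for different dwell rates would reproduce, qualitatively, the behaviour the paper analyses: a good configuration makes the mean intercept time grow smoothly and near-linearly with the scan period.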
Abstract:
We present the first characterization of the mechanical properties of lysozyme films formed by self-assembly at the air-water interface using the Cambridge interfacial tensiometer (CIT), an apparatus capable of subjecting protein films to a much higher level of extensional strain than traditional dilatational techniques. CIT analysis, which is insensitive to surface pressure, provides a direct measure of the extensional stress-strain behavior of an interfacial film without the need to assume a mechanical model (e.g., viscoelastic), and without requiring difficult-to-test assumptions regarding low-strain material linearity. This testing method has revealed that the bulk solution pH from which assembly of an interfacial lysozyme film occurs influences the mechanical properties of the film more significantly than is suggested by the observed differences in elastic moduli or surface pressure. We have also identified a previously undescribed pH dependency in the effect of solution ionic strength on the mechanical strength of the lysozyme films formed at the air-water interface. Increasing solution ionic strength was found to increase lysozyme film strength when assembly occurred at pH 7, but it caused a decrease in film strength at pH 11, close to the pI of lysozyme. This result is discussed in terms of the significant contribution made to protein film strength by both electrostatic interactions and the hydrophobic effect. Washout experiments to remove protein from the bulk phase have shown that a small percentage of the interfacially adsorbed lysozyme molecules are reversibly adsorbed. Finally, the washout tests have probed the role played by additional adsorption to the fresh interface formed by the application of a large strain to the lysozyme film and have suggested the movement of reversibly bound lysozyme molecules from a subinterfacial layer to the interface.
Abstract:
Space and time scales vary from planetary scale and millions of years for convection problems down to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) or the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on the nodes or elements of the finite element mesh, and linearPDE objects, which define the linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We also present results of large-scale, parallel simulations on an SGI Altix system.
Acknowledgements: Project work is supported by Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
Abstract:
This research reflects on questions of contemporary ethics in advertising aimed at the female audience. The discussion of these questions focuses on the deontological strand (conviction). The objective of the study is to investigate how advertisements published in the magazines Claudia and Nova articulate such ethical questions. First, content analysis was used to verify whether the advertisements followed the principles of the Código Brasileiro de Auto-Regulamentação Publicitária (Brazilian Code of Advertising Self-Regulation). In a second stage, discourse analysis was used to investigate how the advertisements were constructed with respect to ethics and to women in today's society. It was concluded that representations of deontological ethics in advertising aimed at women occur in a non-linear and fragmented manner. The non-linearity refers to the failure of some of the analysed advertisements to comply with ethical principles. The fragmentation concerns the way women are portrayed and products are presented in the advertisements, based on different standards of conduct (principles) and on diverse values. At times the advertisements present products truthfully or not; at times women are framed by contemporary values, at other times by traditional values.
Abstract:
Since 1996, direct femtosecond inscription in transparent dielectrics has been the subject of intensive research. This enabling technology significantly expands the technological boundaries for direct fabrication of 3D structures in a wide variety of materials. It allows the modification of non-photosensitive materials, which opens the door to numerous practical applications. In this work we explored the direct femtosecond inscription of waveguides and demonstrated at least an order-of-magnitude enhancement in the most critical parameter, the induced contrast of the refractive index, in a standard borosilicate optical glass. A record-high induced refractive index contrast of 2.5×10⁻² is demonstrated. The fabricated waveguides possess some of the lowest losses reported, approaching the level of Fresnel reflection losses at the glass-air interface. The high refractive index contrast allows the fabrication of curvilinear waveguides with low bend losses. We also demonstrate the optimisation of the inscription regimes in BK7 glass over a broad range of experimental parameters, and observe a counter-intuitive increase of the induced refractive index contrast with increasing translation speed of the sample. Examples of inscription in a number of transparent dielectric hosts (both glasses and crystals) using a high-repetition-rate fs laser system are also presented. Sub-wavelength-scale periodic inscription inside a material often demands supercritical propagation regimes, in which the pulse peak power exceeds the critical power for self-focusing, sometimes several times over. For the sub-critical regime, in which the pulse peak power is less than the critical power for self-focusing, we derive analytic expressions for Gaussian beam focusing in the presence of Kerr non-linearity, as well as for a number of other beam shapes commonly used in experiments, including astigmatic and ring-shaped ones.
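For reference, the standard textbook expression for the critical power for self-focusing of a Gaussian beam in a medium with linear index n₀ and Kerr coefficient n₂ (a classical result, not the work's own derivation) is:

```latex
P_{\mathrm{cr}} = \frac{3.77\,\lambda^2}{8\pi\, n_0 n_2}
```

The sub-critical regime discussed above corresponds to pulse peak power P < P_cr, for which the beam focuses but does not collapse, so stationary analytic descriptions of the focusing remain possible.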
In the part devoted to the fabrication of periodic structures, we report on recent development of our point-by-point method, demonstrating the shortest periodic perturbation created in the bulk of a pure fused silica sample, using the third harmonic (λ = 267 nm) of the fundamental laser frequency (λ = 800 nm) and a 1 kHz femtosecond laser system. To overcome the fundamental limitations of the point-by-point method, we suggested and experimentally demonstrated the micro-holographic inscription method, which is based on the combination of a diffractive optical element and standard micro-objectives. Sub-500 nm periodic structures with a much higher aspect ratio were demonstrated. From the applications point of view, we demonstrate examples of photonic devices made by the direct femtosecond fabrication method, including various vectorial bend sensors fabricated in standard optical fibres, as well as highly birefringent long-period gratings made by the direct modulation method. To address the intrinsic limitations of femtosecond inscription at very shallow depths, we suggested the hybrid mask-less lithography method. The method is based on precision ablation of a thin metal layer deposited on the surface of the sample to create a mask. An ion-exchange process in a melt of Ag-containing salts then allows quick and low-cost fabrication of shallow waveguides and other components of integrated optics. This approach covers the gap in direct fs inscription of shallow waveguides. Perspectives and future developments of direct femtosecond micro-fabrication are also discussed.
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
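The closed-form maximum-likelihood solution for a single probabilistic PCA model, the building block of the mixture, can be sketched as follows. This is a minimal illustration of the Tipping & Bishop result, not the paper's EM algorithm for the full mixture; the synthetic data and dimensions are arbitrary:

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA: returns the loading matrix W and
    the isotropic noise variance sigma2 for a q-dimensional latent space."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)              # sample covariance
    evals, evecs = np.linalg.eigh(S)
    order = np.argsort(evals)[::-1]           # sort eigenvalues descending
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                 # average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(0)
# Synthetic data: 2 latent dimensions embedded in 5, plus isotropic noise.
Z = rng.standard_normal((500, 2))
A = rng.standard_normal((2, 5))
X = Z @ A + 0.1 * rng.standard_normal((500, 5))

W, sigma2 = ppca_ml(X, q=2)
C = W @ W.T + sigma2 * np.eye(5)              # model covariance W W^T + s2 I
print(np.round(np.linalg.eigvalsh(C)[::-1], 3))
```

Because PPCA defines a proper Gaussian density N(mean, W Wᵀ + σ²I), responsibilities for each local model are well defined, which is exactly what makes the mixture of such analysers tractable via EM.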
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) applied to high-frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architecture was able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high-frequency data.
Abstract:
This empirical study examines the extent of non-linearity in a multivariate model of monthly financial series. To capture the conditional heteroscedasticity in the series, both the GARCH(1,1) and GARCH(1,1)-in-mean models are employed. The conditional errors are assumed to follow the normal and Student-t distributions. The non-linearity in the residuals of a standard OLS regression is also assessed. It is found that the OLS residuals, as well as the conditional errors of the GARCH models, exhibit strong non-linearity. Under the Student-t density, the extent of non-linearity in the GARCH conditional errors was generally similar to that of the standard OLS residuals. The GARCH-in-mean regression generated the worst out-of-sample forecasts.
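The GARCH(1,1) variance recursion underlying both models can be illustrated with a short simulation; this is a generic sketch with arbitrary parameters, not the study's fitted model:

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, rng):
    """Simulate a GARCH(1,1) process:
    h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}, e_t = sqrt(h_t)*z_t,
    with z_t standard normal."""
    h = np.empty(n)
    e = np.empty(n)
    h[0] = omega / (1 - alpha - beta)      # start at unconditional variance
    e[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * e[t - 1]**2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
    return e, h

rng = np.random.default_rng(0)
e, h = simulate_garch11(100_000, omega=0.1, alpha=0.1, beta=0.8, rng=rng)
# Sample variance should be near the unconditional variance
# omega / (1 - alpha - beta) = 0.1 / 0.1 = 1.0.
print(e.var())
```

The simulated series is conditionally heteroscedastic (volatility clusters) yet serially uncorrelated, which is the kind of non-linearity in the errors that the study's tests are designed to detect.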
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate, highly non-linear stochastic double-well system, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
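The remark that the Ornstein-Uhlenbeck process admits an exact likelihood can be made concrete: its Gaussian transition density gives the log-likelihood in closed form. The sketch below (arbitrary parameters, not the thesis's variational algorithm) simulates an OU path exactly and evaluates that likelihood:

```python
import numpy as np

def ou_loglik(x, dt, theta, mu, sigma):
    """Exact log-likelihood of an OU path dX = theta*(mu - X)dt + sigma*dW
    observed at time step dt, using the Gaussian transition density:
    X_{t+dt} | X_t ~ N(mu + (X_t - mu)e^{-theta dt},
                       sigma^2 (1 - e^{-2 theta dt}) / (2 theta))."""
    a = np.exp(-theta * dt)
    mean = mu + (x[:-1] - mu) * a
    var = sigma**2 * (1 - a**2) / (2 * theta)
    r = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + r**2 / var)

rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 1.0, 0.0, 0.5, 0.1, 5000
# Exact simulation via the same transition density (no discretisation error).
x = np.empty(n)
x[0] = mu
a = np.exp(-theta * dt)
sd = np.sqrt(sigma**2 * (1 - a**2) / (2 * theta))
for t in range(1, n):
    x[t] = mu + (x[t - 1] - mu) * a + sd * rng.standard_normal()

# The likelihood at the true parameters should beat a badly perturbed theta.
print(ou_loglik(x, dt, theta, mu, sigma) > ou_loglik(x, dt, 3.0, mu, sigma))
```

Having this analytic benchmark is what makes the OU process the natural first test case for approximate inference schemes: any variational or sampling-based estimate can be checked against the exact answer.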
Abstract:
A self-referenced fiber Michelson interferometer measurement system is presented, which employs fiber Bragg gratings (FBGs) as in-fiber reflective mirrors and interleaves two fiber Michelson interferometers sharing a common interferometric optical path. One of the fiber interferometers stabilises the system through an electronic feedback loop that compensates for environmental disturbances, while the other performs the measurement task. Because the feedback loop eliminates the influence of environmental disturbances, the system is suitable for on-line precision measurement. Using the homodyne phase-tracking technique, very high linearity of the displacement measurement results has been achieved.
Abstract:
The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section, at temperatures below 100 °C, by the direct passage of a sinusoidally alternating current. Three cases are considered: (1) an isolated solid or tubular conductor; (2) a concentric arrangement of a tube and a solid return conductor; (3) a concentric arrangement of two tubes. These cases find applications in process temperature maintenance of pipelines, resistance heating of bars and the design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined, and the conditions under which various formulae apply are investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods, which are not universal. The second solution extends the existing B/H step-function approximation method to small-diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above, which confirms the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic.
It is also shown that there is a difference in the electrical behaviour of a small diameter conductor or thin tube under resistance or induction heating conditions.
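For the magnetic-linear case referred to above, the classical skin-effect relations (standard results that such solutions build on, not the thesis's extended methods) are, for conductivity σ, permeability μ and angular frequency ω:

```latex
\delta = \sqrt{\frac{2}{\omega\mu\sigma}}, \qquad
P = \tfrac{1}{2}\,R_s\,|H_s|^2, \qquad
R_s = \frac{1}{\sigma\delta} = \sqrt{\frac{\omega\mu}{2\sigma}}
```

where δ is the skin depth, H_s the tangential surface field strength, R_s the surface resistance and P the time-average loss per unit surface area. In steel the strong field dependence of μ(H) invalidates these formulae beyond the linear part of the magnetisation characteristic, which is precisely what motivates the B/H step-function approximation extended in this thesis.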