894 results for Linearity
Abstract:
A simple method for the measurement of the active leflunomide metabolite A77 1726 in human plasma by HPLC is presented. The sample workup was simple, using acetonitrile for protein precipitation. Chromatographic separation of A77 1726 and the internal standard, alpha-phenylcinnamic acid, was achieved using a C-18 column with UV detection at 305 nm. The assay displayed reproducible linearity for A77 1726 with determination coefficients (r²) > 0.997 over the concentration range 0.5-60.0 µg/ml. The reproducibility (%CV) for intra- and inter-day assays of spiked controls was
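A minimal sketch of how such a linearity check is computed: fit a straight line to calibration standards and verify the determination coefficient. The concentrations and detector responses below are illustrative placeholders, not data from the study.

```python
# Linearity check for a calibration curve over 0.5-60.0 ug/ml.
import numpy as np
from scipy import stats

conc = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 40.0, 60.0])     # ug/ml
area = np.array([1.1, 2.1, 10.3, 20.5, 41.2, 81.9, 122.7])   # peak area

fit = stats.linregress(conc, area)
r2 = fit.rvalue ** 2
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, r2={r2:.4f}")
assert r2 > 0.997   # the acceptance criterion quoted in the abstract
```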
Abstract:
Medical microbiology and virology laboratories use nucleic acid tests (NAT) to detect genomic material of infectious organisms in clinical samples. Laboratories choose to perform assembled (or in-house) NAT if commercial assays are not available or if assembled NAT are more economical or accurate. One reason commercial assays are more expensive is that extensive validation is necessary before a kit is marketed, as manufacturers must accept liability for the performance of their assays, provided their instructions are followed. On the other hand, it is the individual laboratory's responsibility to validate an assembled NAT before using it for testing and reporting results on human samples. There are few published guidelines for the validation of assembled NAT. One procedure that laboratories can use to establish a validation process for an assay is detailed in this document. Before validating a method, laboratories must optimise it and then document the protocol. All instruments must be calibrated and maintained throughout the testing process. The validation process involves a series of steps including: (i) testing dilution series of positive samples to determine the limits of detection of the assay and its linearity over the concentrations to be measured in quantitative NAT; (ii) establishing the day-to-day variation of the assay's performance; (iii) evaluating the sensitivity and specificity of the assay as far as practicable, along with the extent of cross-reactivity with other genomic material; and (iv) assuring the quality of assembled assays using quality control procedures that monitor the performance of reagent batches before new lots of reagent are introduced for testing.
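A minimal sketch of steps (i) and (ii): probing the detection limit with a dilution series, and expressing day-to-day variation as a %CV. All numbers are invented placeholders.

```python
# (i) dilution series of a positive sample; (ii) inter-day %CV.
import numpy as np

# (i) fraction of replicates detected at each dilution
dilutions = [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]
detected  = [10, 10, 9, 4, 0]            # out of 10 replicates each
for d, n in zip(dilutions, detected):
    print(f"dilution {d:.0e}: {n}/10 detected")

# (ii) day-to-day variation of a quantitative result (e.g. copies/ml)
daily_results = np.array([4.9e4, 5.3e4, 4.6e4, 5.1e4, 5.5e4])
cv = 100 * daily_results.std(ddof=1) / daily_results.mean()
print(f"inter-day CV = {cv:.1f}%")
```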
Abstract:
Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode, so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic, where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has recently been demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically.
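For clarity, the two encodings contrasted above can be written out explicitly; this is standard notation, not taken verbatim from the paper:

```latex
% Single-rail: one optical mode, qubit spanned by vacuum |0> and one photon |1>
\lvert\psi\rangle_{\text{single}} = \alpha\lvert 0\rangle + \beta\lvert 1\rangle
% Dual-rail: a single photon shared between two modes a and b
\lvert\psi\rangle_{\text{dual}} = \alpha\lvert 0\rangle_{a}\lvert 1\rangle_{b}
                                + \beta\lvert 1\rangle_{a}\lvert 0\rangle_{b}
```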
Abstract:
A rapid method has been developed for the quantification of the prototypic cyclotide kalata B1 in water and plasma utilizing matrix-assisted laser desorption ionisation time-of-flight (MALDI-TOF) mass spectrometry. The unusual structure of the cyclotides means that they do not ionise as readily as linear peptides, and as a result of their low ionisation efficiency, traditional LC/MS analyses were not able to reach the levels of detection required for the quantification of cyclotides in plasma for pharmacokinetic studies. MALDI-TOF-MS analysis showed linearity (R² > 0.99) in the concentration range 0.05-10 µg/mL with a limit of detection of 0.05 µg/mL (9 fmol) in plasma. This paper highlights the applicability of MALDI-TOF mass spectrometry for the rapid and sensitive quantification of peptides in biological samples without the need for extensive extraction procedures. (c) 2005 Elsevier B.V. All rights reserved.
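The abstract reports an LOD but not how it was derived; one common convention (an assumption here, not stated in the paper) estimates it as three times the standard deviation of the blank divided by the calibration slope. A sketch with invented numbers:

```python
# LOD via the 3 x SD(blank) / slope convention; values are illustrative.
import numpy as np
from scipy import stats

conc   = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])       # ug/mL
signal = np.array([210, 395, 2050, 4100, 20300, 40900])   # peak intensity
blank  = np.array([180, 260, 120, 210, 150])               # blank replicates

fit = stats.linregress(conc, signal)
lod = 3 * blank.std(ddof=1) / fit.slope
print(f"R^2 = {fit.rvalue**2:.4f}, estimated LOD = {lod:.3f} ug/mL")
```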
Abstract:
Solid phase microextraction (SPME) offers a solvent-free and less labour-intensive alternative to traditional flavour isolation techniques. In this instance, SPME was optimised for the extraction of 17 stale-flavour volatiles (C3-C11 and C13 methyl ketones and C4-C10 saturated aldehydes) from the headspace of full-cream ultrahigh-temperature (UHT)-processed milk. A comparison of relative extraction efficiencies was made using three fibre coatings, three extraction times and three extraction temperatures. Linearity of calibration curves, limits of detection and repeatability (coefficients of variation) were also used in determining the optimum extraction conditions. A 2 cm fibre coating of 50/30 µm divinylbenzene/Carboxen/polydimethylsiloxane in conjunction with a 15 min extraction at 40 °C were chosen as the final optimum conditions. This method can be used as an objective tool for monitoring the flavour quality of UHT milk during storage. (c) 2005 Society of Chemical Industry.
Abstract:
This paper proposes a theoretical explanation of the variations of the sediment delivery ratio (SDR) versus catchment area relationships and the complex patterns in the behavior of sediment transfer processes at catchment scale. Taking into account the effects of erosion source types, deposition, and hydrological controls, we propose a simple conceptual model that consists of two linear stores arranged in series: a hillslope store that addresses transport to the nearest streams and a channel store that addresses sediment routing in the channel network. The model identifies four dimensionless scaling factors, which enable us to analyze a variety of effects on SDR estimation, including (1) interacting processes of erosion sources and deposition, (2) different temporal averaging windows, and (3) catchment runoff response. We show that the interactions between storm duration and hillslope/channel travel times are the major controls of peak-value-based sediment delivery and its spatial variations. The interplay between depositional timescales and the travel/residence times determines the spatial variations of total-volume-based SDR. In practical terms this parsimonious, minimal complexity model could provide a sound physical basis for diagnosing catchment to catchment variability of sediment transport if the proposed scaling factors can be quantified using climatic and catchment properties.
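A minimal numerical sketch of the two-linear-stores idea described above, with a first-order deposition loss added in each store; all timescales and the supply pulse are invented for illustration.

```python
# Two linear stores in series: a hillslope store drains into a channel
# store; each store's outflow is Q = S / k, with deposition removing
# S / k_dep. The delivered fraction is a total-volume SDR.
import numpy as np

dt = 0.1                                   # time step (h)
t = np.arange(0.0, 200.0, dt)
supply = np.where(t < 5.0, 1.0, 0.0)       # hillslope erosion pulse (t/h)

k_hill, k_chan = 4.0, 12.0                 # travel/residence times (h)
k_dep = 20.0                               # depositional timescale (h)

S_h = S_c = 0.0
delivered = 0.0
for inp in supply:
    Q_h = S_h / k_hill                     # hillslope -> channel flux
    Q_c = S_c / k_chan                     # channel -> outlet flux
    S_h += dt * (inp - Q_h - S_h / k_dep)  # storage update with deposition
    S_c += dt * (Q_h - Q_c - S_c / k_dep)
    delivered += dt * Q_c

print(f"total-volume SDR = {delivered / (supply.sum() * dt):.2f}")
```

Lengthening `k_dep` relative to the travel times pushes the SDR toward 1, which mirrors the abstract's point that the interplay between depositional and travel/residence timescales controls total-volume delivery.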
Abstract:
The bispectrum and third-order moment can be viewed as equivalent tools for testing for the presence of nonlinearity in stationary time series, because the bispectrum is the Fourier transform of the third-order moment. An advantage of the bispectrum is that its estimator comprises terms that are asymptotically independent at distinct bifrequencies under the null hypothesis of linearity. An advantage of the third-order moment is that its values in any subset of joint lags can be used in the test, whereas using the bispectrum requires the entire (or truncated) third-order moment to construct the Fourier transform. In this paper, we propose a test for nonlinearity based upon the estimated third-order moment. We use the phase scrambling bootstrap method to give a nonparametric estimate of the variance of our test statistic under the null hypothesis. Using a simulation study, we demonstrate that the test attains its target significance level and has high power when compared to an existing standard parametric test that uses the bispectrum. Further, we show how the proposed test can be used to identify the source of nonlinearity due to interactions at specific frequencies. We also investigate implications for heuristic diagnosis of nonstationarity.
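A sketch of the ingredients: sample third-order moments at a chosen subset of lags, and phase-scrambled surrogates that preserve the spectrum but destroy third-order structure. This simplified variant compares the statistic against the surrogate distribution directly, rather than building the variance estimate used in the paper; the lags and test series are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def c3(x, i, j):
    """Sample third-order moment E[x_t x_{t+i} x_{t+j}]."""
    x = x - x.mean()
    n = len(x) - max(i, j)
    return np.mean(x[:n] * x[i:i + n] * x[j:j + n])

def phase_scramble(x, rng):
    """Surrogate with the same spectrum but randomised Fourier phases."""
    X = np.fft.rfft(x)
    ph = rng.uniform(0.0, 2.0 * np.pi, len(X))
    ph[0] = 0.0
    if len(x) % 2 == 0:
        ph[-1] = 0.0              # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * ph), n=len(x))

e = rng.standard_normal(512)
x = e + 0.6 * np.roll(e, 1) * np.roll(e, 2)   # quadratic MA: nonlinear

lags = [(1, 1), (1, 2), (2, 2)]               # any subset of joint lags
stat = sum(c3(x, i, j) ** 2 for i, j in lags)
null = [sum(c3(phase_scramble(x, rng), i, j) ** 2 for i, j in lags)
        for _ in range(500)]
print(f"bootstrap p-value = {np.mean(np.array(null) >= stat):.3f}")
```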
Abstract:
Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
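As an illustration of the matrix formulation, a toy transmission matrix can model multistep information flow; the 4-node network, the probabilities, and the 'richness' proxy below are invented and are only a simplification of the paper's formal definitions.

```python
# Toy probabilistic switch model: P[i, j] is the probability that person i
# passes a given item of information on to person j in one step.
import numpy as np

P = np.array([
    [0.0, 0.6, 0.3, 0.0],
    [0.2, 0.0, 0.5, 0.2],
    [0.0, 0.4, 0.0, 0.5],
    [0.1, 0.0, 0.6, 0.0],
])

info = np.array([1.0, 0.0, 0.0, 0.0])   # an item originating at node 0
for step in range(1, 4):
    info = info @ P                     # expected flow after one more step
    print(f"step {step}: {np.round(info, 3)}")

# Cumulative inflow over many steps as a crude proxy for how
# 'information-rich' each node is (where opinion leaders may emerge).
inflow = sum(np.linalg.matrix_power(P, k) for k in range(1, 11)).sum(axis=0)
print("10-step cumulative inflow:", np.round(inflow, 2))
```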
Abstract:
In electronic support, receivers must maintain surveillance over the very wide portion of the electromagnetic spectrum in which threat emitters operate. A common approach is to use a receiver with a relatively narrow bandwidth that sweeps its centre frequency over the threat bandwidth to search for emitters. The sequence and timing of changes in the centre frequency constitute a search strategy. The search can be expedited if there is intelligence about the operational parameters of the emitters that are likely to be found. However, the intelligence may be deficient, untrustworthy or absent. In that case, what is the best search strategy to use? A random search strategy based on a continuous-time Markov chain (CTMC) is proposed. When the search is conducted for emitters with a periodic scan, it is shown that there is an optimal configuration for the CTMC. It is optimal in the sense that the expected time to intercept an emitter approaches linearity most quickly with respect to the emitter's scan period. A fast and smooth approach to linearity is important, as other strategies can exhibit considerable and abrupt variations in the intercept time as a function of scan period. In theory and numerical examples, the optimum CTMC strategy is compared with other strategies to demonstrate its superior properties.
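A Monte Carlo toy version of the setup: a receiver that hops between frequency bands after exponential dwell times (a CTMC with uniform jump probabilities), searching for an emitter that illuminates the receiver's band once per scan period. This does not reproduce the paper's optimal CTMC configuration; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def intercept_time(n_bands=10, rate=5.0, scan_period=1.0, dwell=0.02):
    """Time until the receiver sits on the emitter's band while the
    emitter's periodic scan illuminates it (window of length `dwell`)."""
    t = 0.0
    band = rng.integers(n_bands)
    target = rng.integers(n_bands)          # band containing the emitter
    while True:
        stay = rng.exponential(1.0 / rate)  # CTMC holding time
        if band == target:
            if (t % scan_period) < dwell:
                return t                    # already inside a scan window
            w = np.ceil(t / scan_period) * scan_period
            if w < t + stay:
                return w                    # a window opens while we listen
        t += stay
        band = rng.integers(n_bands)        # jump to a uniformly chosen band

times = [intercept_time(scan_period=2.0) for _ in range(2000)]
print(f"mean time to intercept ~ {np.mean(times):.2f} time units")
```

Sweeping `scan_period` and plotting the mean intercept time is the kind of experiment in which the paper's optimal configuration shows its smooth, near-linear growth.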
Abstract:
We present the first characterization of the mechanical properties of lysozyme films formed by self-assembly at the air-water interface using the Cambridge interfacial tensiometer (CIT), an apparatus capable of subjecting protein films to a much higher level of extensional strain than traditional dilatational techniques. CIT analysis, which is insensitive to surface pressure, provides a direct measure of the extensional stress-strain behavior of an interfacial film without the need to assume a mechanical model (e.g., viscoelastic), and without requiring difficult-to-test assumptions regarding low-strain material linearity. This testing method has revealed that the bulk solution pH from which assembly of an interfacial lysozyme film occurs influences the mechanical properties of the film more significantly than is suggested by the observed differences in elastic moduli or surface pressure. We have also identified a previously undescribed pH dependency in the effect of solution ionic strength on the mechanical strength of the lysozyme films formed at the air-water interface. Increasing solution ionic strength was found to increase lysozyme film strength when assembly occurred at pH 7, but it caused a decrease in film strength at pH 11, close to the pI of lysozyme. This result is discussed in terms of the significant contribution made to protein film strength by both electrostatic interactions and the hydrophobic effect. Washout experiments to remove protein from the bulk phase have shown that a small percentage of the interfacially adsorbed lysozyme molecules are reversibly adsorbed. Finally, the washout tests have probed the role played by additional adsorption to the fresh interface formed by the application of a large strain to the lysozyme film and have suggested the movement of reversibly bound lysozyme molecules from a subinterfacial layer to the interface.
Abstract:
Length and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
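A sketch of the workflow described above: a linearPDE object defines a Poisson-type problem, and the underlying finley discretization solves it. Module paths follow the esys-escript distribution and may differ between versions.

```python
# Solve -div(A grad u) = Y on a finley rectangle, with u = 0 on the
# x0 = 0 face, using escript's LinearPDE abstraction.
from esys.escript import kronecker, whereZero, inf, sup
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)   # 40x40 element mesh
x = domain.getX()

pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain),   # identity diffusivity -> Laplacian
             Y=1.0,                 # unit source term
             q=whereZero(x[0]),     # mask: Dirichlet condition where x0 = 0
             r=0.0)                 # Dirichlet value on that face

u = pde.getSolution()               # a Data object holding nodal values
print("solution range:", inf(u), sup(u))
```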
Abstract:
This research reflects on questions of contemporary ethics in advertising aimed at the female audience. The discussion of these questions centres on the deontological (conviction-based) strand. The objective of the study is to investigate how the advertisements published in the magazines Claudia and Nova articulate such ethical questions. First, content analysis was used to verify whether the advertisements followed the principles set out in the Código Brasileiro de Auto-Regulamentação Publicitária (Brazilian Advertising Self-Regulation Code). In a second stage, discourse analysis was used to investigate how the advertisements were constructed with respect to ethics and to the position of women in today's society. It was concluded that representations of deontological ethics in advertising aimed at women are non-linear and fragmented. The non-linearity refers to the failure of some of the analysed advertisements to comply with ethical principles. The fragmentation concerns the way women are portrayed and products are presented in the advertisements, drawing on different standards of conduct (principles) and on diverse values: at times the products are presented truthfully, at times not, and women appear framed sometimes by contemporary values and sometimes by traditional ones.
Abstract:
Since 1996 direct femtosecond inscription in transparent dielectrics has become the subject of intensive research. This enabling technology significantly expands the technological boundaries for direct fabrication of 3D structures in a wide variety of materials. It allows modification of non-photosensitive materials, which opens the door to numerous practical applications. In this work we explored the direct femtosecond inscription of waveguides and demonstrated at least one order of magnitude enhancement in the most critical parameter, the induced contrast of the refractive index, in a standard borosilicate optical glass. A record-high induced refractive index contrast of 2.5×10⁻² is demonstrated. The fabricated waveguides possess some of the lowest losses reported, approaching the level of Fresnel reflection losses at the glass-air interface. High refractive index contrast allows the fabrication of curvilinear waveguides with low bend losses. We also demonstrated the optimisation of the inscription regimes in BK7 glass over a broad range of experimental parameters and observed a counter-intuitive increase of the induced refractive index contrast with increasing sample translation speed. Examples of inscription in a number of transparent dielectric hosts (both glasses and crystals) using a high-repetition-rate fs laser system are also presented. Sub-wavelength-scale periodic inscription inside any material often demands supercritical propagation regimes, where the pulse peak power exceeds the critical power for self-focusing, sometimes several times over. For the sub-critical regime, where the pulse peak power is less than the critical power for self-focusing, we derive analytic expressions for Gaussian beam focusing in the presence of Kerr non-linearity, as well as for a number of other beam shapes commonly used in experiments, including astigmatic and ring-shaped ones. In the part devoted to the fabrication of periodic structures, we report on the recent development of our point-by-point method, demonstrating the shortest periodic perturbation created in the bulk of a pure fused silica sample, using the third harmonic (λ = 267 nm) of the fundamental laser wavelength (λ = 800 nm) and a 1 kHz femtosecond laser system. To overcome the fundamental limitations of the point-by-point method we suggested and experimentally demonstrated a micro-holographic inscription method, based on the combination of a diffractive optical element and standard micro-objectives. Sub-500 nm periodic structures with a much higher aspect ratio were demonstrated. From the applications point of view, we demonstrate examples of photonic devices made by the direct femtosecond fabrication method, including various vectorial bend sensors fabricated in standard optical fibres, as well as highly birefringent long-period gratings made by the direct modulation method. To address the intrinsic limitations of femtosecond inscription at very shallow depths we suggested a hybrid mask-less lithography method. The method is based on precision ablation of a thin metal layer deposited on the surface of the sample to create a mask; an ion-exchange process in a melt of Ag-containing salts then allows quick and low-cost fabrication of shallow waveguides and other components of integrated optics. This approach covers the gap in direct fs inscription of shallow waveguides. Perspectives and future developments of direct femtosecond micro-fabrication are also discussed.
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
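For a single probabilistic PCA model, the maximum-likelihood solution has the closed form this framework derives: the noise variance is the mean of the discarded eigenvalues of the sample covariance, and the weight matrix spans the top-q principal subspace. The mixture model replaces these with responsibility-weighted EM updates. A sketch on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))
q = 2                                        # latent dimension

mu = X.mean(axis=0)
S = np.cov(X - mu, rowvar=False)             # sample covariance
lam, U = np.linalg.eigh(S)
lam, U = lam[::-1], U[:, ::-1]               # sort eigenpairs descending

sigma2 = lam[q:].mean()                      # ML noise variance
W = U[:, :q] * np.sqrt(lam[:q] - sigma2)     # ML loadings (rotation R = I)

C = W @ W.T + sigma2 * np.eye(X.shape[1])    # model covariance W W^T + s2 I
print(f"noise variance = {sigma2:.3f}")
```

Because the model defines a proper Gaussian density with covariance C, a mixture of such analysers is well defined, which is exactly the point the abstract makes against combining conventional PCA projections ad hoc.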
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high-frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architecture was able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high-frequency data.
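A toy version of the experiment's structure: a small GA mutates the hidden-layer sizes of an MLP forecasting next-day returns from five lags, and the best network's test MSE is compared with a random walk (zero forecast). The data, GA settings, and fitness definition are illustrative only.

```python
# GA over MLP architectures; fitness is (negative) forecast MSE. A real
# study would score fitness on a separate validation set; this toy reuses
# the test split for brevity. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
r = rng.standard_normal(600) * 0.01                       # fake daily returns
X = np.column_stack([r[i:i + 595] for i in range(5)])     # 5 lagged returns
y = r[5:]                                                  # next-day target
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

def fitness(genome):
    net = MLPRegressor(hidden_layer_sizes=tuple(genome),
                       max_iter=300, random_state=0)
    net.fit(Xtr, ytr)
    return -np.mean((net.predict(Xte) - yte) ** 2)

# population of genomes: one or two hidden layers of 2-16 units each
pop = [list(rng.integers(2, 17, size=rng.integers(1, 3)))
       for _ in range(8)]
for gen in range(5):
    pop = sorted(pop, key=fitness, reverse=True)[:4]      # select parents
    pop += [[max(2, g + int(rng.integers(-4, 5))) for g in p]
            for p in pop]                                 # mutated children

best = max(pop, key=fitness)
print(f"best architecture {best}: "
      f"MSE {-fitness(best):.2e} vs random walk {np.mean(yte**2):.2e}")
```

On genuinely unpredictable returns the random-walk benchmark is hard to beat, which is consistent with the negative result the abstract reports.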