23 results for Single Intraoperative Application
in CentAUR: Central Archive University of Reading - UK
Abstract:
Single-carrier frequency division multiple access (SC-FDMA) has appeared to be a promising technique for high-data-rate uplink communications. Aimed at SC-FDMA applications, a cyclic prefixed version of the offset quadrature amplitude modulation based OFDM (OQAM-OFDM) is first proposed in this paper. We show that cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) can be realized within the framework of the standard OFDM system, and the perfect recovery condition in the ideal channel is derived. We then apply CP-OQAM-OFDM to SC-FDMA transmission in frequency-selective fading channels. A signal model and joint widely linear minimum mean square error (WLMMSE) equalization using a priori information with low complexity are developed. Compared with the existing DFTS-OFDM based SC-FDMA, the proposed SC-FDMA can significantly reduce the envelope fluctuation (EF) of the transmitted signal while maintaining the bandwidth efficiency. The inherent structure of CP-OQAM-OFDM enables low-complexity joint equalization in the frequency domain to combat both the multiple access interference and the intersymbol interference. The joint WLMMSE equalization using a priori information guarantees optimal MMSE performance and supports a Turbo receiver for improved bit error rate (BER) performance. Simulation results confirm the effectiveness of the proposed SC-FDMA in terms of EF (including peak-to-average power ratio, instantaneous-to-average power ratio and cubic metric) and BER performance.
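As an illustrative aside (not taken from the paper), the envelope-fluctuation benefit of DFT-spreading can be seen in a minimal numpy sketch that compares the PAPR of a plain OFDM block with a DFT-spread (SC-FDMA-style) block; the block sizes and QPSK mapping are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N_sc, N_fft = 64, 256          # occupied subcarriers, IFFT size
qpsk = (rng.choice([-1, 1], N_sc) + 1j * rng.choice([-1, 1], N_sc)) / np.sqrt(2)

# Plain OFDM: map QPSK symbols straight onto subcarriers.
spec_ofdm = np.zeros(N_fft, complex)
spec_ofdm[:N_sc] = qpsk
x_ofdm = np.fft.ifft(spec_ofdm) * N_fft

# DFT-spread (SC-FDMA-style): precode with an N_sc-point DFT first, which
# restores a single-carrier-like envelope and lowers the PAPR.
spec_sc = np.zeros(N_fft, complex)
spec_sc[:N_sc] = np.fft.fft(qpsk)
x_sc = np.fft.ifft(spec_sc) * N_fft

print(round(papr_db(x_ofdm), 2), round(papr_db(x_sc), 2))
```

The same comparison extends to the other EF measures the abstract lists (instantaneous-to-average power ratio, cubic metric), which are different functionals of the same envelope.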
Abstract:
Three sludge types from the same treatment stream (undigested liquid, anaerobically digested liquid, and dewatered, anaerobically digested cake) were used in a field-based tub study. Amendments (4, 8 and 16 Mg dry solids (ds) ha(-1)) were incorporated into the upper 15 cm of a sandy loam soil prior to sowing with rye-grass (Lolium perenne L.). Nitrogen transformations in the soil were determined for the 80 d period following incorporation. Nitrogen uptake and crop yield were measured in the cut sward 35 and 70 d after sowing. The study showed that application of sewage sludge at rates as low as 4 Mg ha(-1) can have a nutritional benefit to rye-grass over the two harvests. Differences in N transformation, and hence crop nutritional benefit, between sludge types were evident throughout the experiment. In particular, the dewatering process changed the mineral N characteristics of the anaerobically digested sludge, which, when not dewatered, outperformed the other sludges in terms of yield and mineralisation rate at both harvests. The dewatered sludge produced the lowest yield of rye-grass. The undigested liquid sludge had the lowest foliar N and soil NO(3)-N concentrations, possibly because N was immobilised as the large oxidisable C component of this sludge was metabolised by the microbial biomass. Results are discussed in the context of managing sludge type and application as a plant nutrient source and for NO(3)-N release. Correlation data support the concept of preferential uptake of NH(4)-N over NO(3)-N in Lolium perenne.
Abstract:
The constant-density Charney model describes the simplest unstable basic state with a planetary-vorticity gradient, which is uniform and positive, and baroclinicity that is manifest as a negative contribution to the potential-vorticity (PV) gradient at the ground and positive vertical wind shear. Together, these ingredients satisfy the necessary conditions for baroclinic instability. In Part I it was shown how baroclinic growth on a general zonal basic state can be viewed as the interaction of pairs of ‘counter-propagating Rossby waves’ (CRWs) that can be constructed from a growing normal mode and its decaying complex conjugate. In this paper the normal-mode solutions for the Charney model are studied from the CRW perspective.
Clear parallels can be drawn between the most unstable modes of the Charney model and the Eady model, in which the CRWs can be derived independently of the normal modes. However, the dispersion curves for the two models are very different; the Eady model has a short-wave cut-off, while the Charney model is unstable at short wavelengths. Beyond its maximum growth rate the Charney model has a neutral point at finite wavelength (r=1). Thereafter follows a succession of unstable branches, each with weaker growth than the last, separated by neutral points at integer r—the so-called ‘Green branches’. A separate branch of westward-propagating neutral modes also originates from each neutral point. By approximating the lower CRW as a Rossby edge wave and the upper CRW structure as a single PV peak with a spread proportional to the Rossby scale height, the main features of the ‘Charney branch’ (0
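For context on the contrast drawn above (an illustrative textbook result, not taken from the paper): in standard notation, with μ = NHk/f the scaled wavenumber, Λ the shear, H the depth and U_m the mid-level wind, the Eady phase speeds are

```latex
c \;=\; U_m \,\pm\, \frac{\Lambda H}{\mu}
  \left[\left(\frac{\mu}{2}-\tanh\frac{\mu}{2}\right)
        \left(\frac{\mu}{2}-\coth\frac{\mu}{2}\right)\right]^{1/2},
\qquad \mu = \frac{NHk}{f}.
```

Since tanh(μ/2) < μ/2 for all μ > 0, the modes grow precisely when μ/2 < coth(μ/2), i.e. μ < μ_c ≈ 2.399: the Eady short-wave cut-off. The Charney dispersion curve has no such cut-off, which is the key difference noted above.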
Abstract:
Bioturbation at all scales, which tends to replace the primary fabric of a sediment by the ichnofabric (the overall fabric of a sediment that has been bioturbated), is now recognised as playing a major role in facies interpretation. The manner in which the substrate may be colonized, and the physical, chemical and ecological controls (grainsize, sedimentation rate, oxygenation, nutrition, salinity, ethology, community structure and succession), together with the several ways in which the substrate is tiered by bioturbators, are the factors and processes that determine the nature of the ichnofabric. Eleven main styles of substrate tiering are described, ranging from single, pioneer colonization to complex tiering under equilibria, their modification under environmental deterioration and amelioration, and diagenetic enhancement or obscuration. Ichnofabrics may be assessed by four attributes: primary sedimentary factors, Bioturbation Index (BI), burrow size and frequency, and ichnological diversity. Construction of tier and ichnofabric constituent diagrams aids visualization and comparison. The breaks or changes in colonization and style of tiering at key stratal surfaces accentuate the surfaces, and many reflect a major environmental shift of the trace-forming biota, due to change in hydrodynamic regime (leading to non-deposition and/or erosion and/or lithification), change in salinity regime, or subaerial exposure. The succession of gradational or abrupt changes in ichnofabric through genetically related successions, together with changes in colonization and tiering across event beds, may also be interpreted in terms of changes in environmental parameters. It is not the ichnotaxa per se that are important in discriminating between ichnofabrics, but rather the environmental conditions that determine the overall style of colonization.
Fabrics composed of different ichnotaxa (and different taphonomies) but similar tier structure and ichnoguild may form in similar environments of different age or different latitude. Appreciation of colonization and tiering styles places ancient ichnofabrics on a sound process-related basis for environmental interpretation. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The calculation of accurate and reliable vibrational potential functions and normal co-ordinates is discussed, for such simple polyatomic molecules as may be possible. Such calculations should be corrected for the effects of anharmonicity and of resonance interactions between the vibrational states, and should be fitted to all the available information on all isotopic species: particularly the vibrational frequencies, Coriolis zeta constants and centrifugal distortion constants. The difficulties of making these corrections, and of making use of the observed data, are reviewed. A programme for the Ferranti Mercury Computer is described by means of which harmonic vibration frequencies and normal co-ordinate vectors, zeta factors and centrifugal distortion constants can be calculated, from a given force field and from given G-matrix elements, etc. The programme has been used on up to 5 × 5 secular equations, for which a single calculation and output of results takes approximately 1 min; it can readily be extended to larger determinants. The best methods of using such a programme and the possibility of reversing the direction of calculation are discussed. The methods are applied to calculating the best possible vibrational potential function for the methane molecule, making use of all the observed data.
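As an aside, the secular-equation step that such a programme automates is, in modern terms, an eigenvalue problem: in the Wilson GF method the harmonic wavenumbers follow from the eigenvalues of GF. The numpy sketch below is illustrative only; it uses an approximate literature force constant for HCl as a one-dimensional check, and the conversion factor 1302.8 assumes F in mdyn/Å and G in amu^-1:

```python
import numpy as np

# Wilson GF method: harmonic wavenumbers come from |GF - lambda*I| = 0,
# with nu~ (cm^-1) ~= 1302.8 * sqrt(lambda) for F in mdyn/Angstrom and
# G in amu^-1 (the factor is 1/(2*pi*c) in these units).
CONV = 1302.8

def wavenumbers(F, G):
    """Sorted harmonic wavenumbers (cm^-1) from force-constant matrix F
    and kinetic (G) matrix."""
    lam = np.linalg.eigvals(G @ F).real
    return np.sort(CONV * np.sqrt(np.clip(lam, 0, None)))

# 1x1 check: HCl stretch, k ~ 5.16 mdyn/Angstrom, G = 1/m_H + 1/m_Cl (amu^-1);
# the result lands close to the observed harmonic frequency (~2991 cm^-1).
F = np.array([[5.16]])
G = np.array([[1 / 1.008 + 1 / 35.45]])
print(wavenumbers(F, G))
```

The same call handles the 5 × 5 cases mentioned above by passing 5 × 5 F and G matrices.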
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or, more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for a sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright (C) 2008 John Wiley & Sons, Ltd.
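A minimal sketch (not from the paper) of the kind of sample-size review such a design permits: for a two-arm comparison of normal means, the standard per-arm size is n = 2(z_{1-α/2} + z_{1-β})² σ² / δ², so an interim SD estimate simply replaces the planning value. All numbers below are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided two-arm comparison of normal
    means with common SD sigma and target difference delta."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * (za + zb) ** 2 * sigma ** 2 / delta ** 2)

# Planning stage: assumed SD 10, clinically relevant difference 5.
n_planned = n_per_arm(sigma=10, delta=5)

# Interim analysis after stage 1: observed SD 13, so revise the total size.
n_revised = n_per_arm(sigma=13, delta=5)
print(n_planned, n_revised)
```

The actual paper's design additionally handles the futility stop and the inflation needed to preserve the type I error rate, which this sketch omits.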
Abstract:
Space applications are challenged by the reliability of parallel computing systems (FPGAs) employed in spacecraft, owing to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems which are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieve autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, and landscapes representing the computing space and problem are generated using MATLAB.
Abstract:
Different types of mental activity are utilised as an input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, so averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data were possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity, and other components of non-cerebral origin. Choice of specific components and their use to reconstruct "denoised" single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA on the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
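As an illustrative sketch (not the paper's method), the reconstruct-from-selected-components idea can be mimicked on synthetic trials; PCA via SVD stands in here for the ICA step, and the contrast-to-noise measure is a simple ad-hoc ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 40, 200
t = np.linspace(0, 1, n_samples)

# Synthetic single trials: a fixed ERP-like waveform buried in noise.
erp = np.exp(-((t - 0.3) / 0.05) ** 2)          # a P300-like bump
trials = erp + rng.normal(0, 1.0, (n_trials, n_samples))

def cnr(x, signal_win, noise_win):
    """Ad-hoc contrast-to-noise: peak contrast over baseline SD."""
    return (x[signal_win].max() - x[noise_win].mean()) / x[noise_win].std()

sig = slice(50, 70)     # window around the simulated peak
base = slice(120, 200)  # late baseline window

# Decompose the trial matrix and keep only the strongest components
# (PCA via SVD as a simple stand-in for the ICA step in the paper).
U, s, Vt = np.linalg.svd(trials, full_matrices=False)
k = 2
denoised = U[:, :k] * s[:k] @ Vt[:k]

raw_cnr = np.mean([cnr(x, sig, base) for x in trials])
den_cnr = np.mean([cnr(x, sig, base) for x in denoised])
print(round(raw_cnr, 2), round(den_cnr, 2))
```

Real ICA additionally uses statistical independence rather than variance to separate components, which is what lets it isolate eye-blink and other non-cerebral artifacts.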
Abstract:
We are developing computational tools supporting the detailed analysis of the dependence of neural electrophysiological response on dendritic morphology. We approach this problem by combining simulations of faithful models of neurons (real-life experimental morphological data with known models of channel kinetics) with algorithmic extraction of morphological and physiological parameters and statistical analysis. In this paper, we present a novel method for the automatic recognition of spike trains in voltage traces, which eliminates the need for human intervention. This enables classification of waveforms with consistent criteria across all the analyzed traces, and so it amounts to a reduction of the noise in the data. The method allows for automatic extraction of the relevant physiological parameters necessary for further statistical analysis. In order to illustrate the usefulness of this procedure for analyzing voltage traces, we characterized the influence of the somatic current injection level on several electrophysiological parameters in a set of modeled neurons. This application suggests that such algorithmic processing of physiological data extracts parameters in a suitable form for further investigation of structure-activity relationships in single neurons.
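A minimal sketch (not the paper's algorithm) of automatic spike recognition by upward threshold crossing with a refractory gap, run on a synthetic trace; the threshold, spike shape and timings are all hypothetical:

```python
import numpy as np

def detect_spikes(v, dt, threshold=0.0, refractory=2.0):
    """Return spike times (ms) at upward threshold crossings, enforcing a
    refractory gap so each spike is counted once."""
    up = np.flatnonzero((v[:-1] < threshold) & (v[1:] >= threshold)) + 1
    times, last = [], -np.inf
    for i in up:
        t = i * dt
        if t - last >= refractory:
            times.append(t)
            last = t
    return np.array(times)

# Synthetic trace sampled every 0.1 ms for 100 ms.
dt = 0.1
t = np.arange(0, 100, dt)
v = -65 + 2 * np.sin(2 * np.pi * t / 25)  # slow subthreshold wobble (mV)
for t0 in (20, 50, 80):                   # three stereotyped spikes
    v += 80 * np.exp(-((t - t0) / 0.5) ** 2)

spikes = detect_spikes(v, dt, threshold=-20.0)
print(spikes)   # crossings just before 20, 50 and 80 ms
```

From the detected times, the physiological parameters mentioned above (firing rate, inter-spike intervals, latency to first spike) follow directly.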
Abstract:
BACKGROUND AND AIM: The atherogenic potential of dietary derived lipids, chylomicrons (CM) and their remnants (CMr), is now becoming more widely recognised. To investigate factors affecting levels of CM and CMr and their importance in coronary heart disease risk, it is essential to use a specific method of quantification. Two studies were carried out to investigate: (i) effects of increased daily intake of long chain n-3 polyunsaturated fatty acids (LC n-3 PUFA), and (ii) effects of increasing meal monounsaturated fatty acid (MUFA) content on the postprandial response of intestinally-derived lipoproteins. The contribution of the intestinally-derived lipoproteins to total lipaemia was assessed by triacylglycerol-rich lipoprotein (TRL) apolipoprotein B-48 (apo B-48) and retinyl ester (RE) concentrations. METHODS AND RESULTS: In a randomised controlled crossover trial (placebo vs LC n-3 PUFA), a mean daily intake of 1.4 g/day of LC n-3 PUFA failed to reduce the fasting and postprandial triacylglycerol (TAG) response in 9 healthy male volunteers. Although the pattern and nature of the apo B-48 response was consistent with the TAG response following the two diets, the postprandial RE response differed on the LC n-3 PUFA diet, with a lower early RE response and a delayed and more marked increase in RE in the late postprandial period compared with the control diet, but the differences did not reach statistical significance. In the meal study there was no effect of MUFA/SFA content on the total lipaemic response to the meals, nor on the contribution of intestinally derived lipoproteins evaluated as TAG, apo B-48 and RE responses in the TRL fraction. In both studies, the RE and apo B-48 measurements provided broadly similar information with respect to the lack of effects of dietary or meal fatty acid composition and the presence of single or multiple peak responses.
However, the apo B-48 and RE measurements differed with respect to the timing of their peak responses, with a delayed RE peak, relative to apo B-48, of approximately 2-3 hours for the LC n-3 PUFA study (p = 0.002) and 1-1.5 hours for the meal MUFA/SFA study. CONCLUSIONS: It was concluded that there are limitations to using RE as a specific CM marker; apo B-48 quantitation was found to be a more appropriate method for CM and CMr quantitation. However, it was still considered of value to measure RE, as it provided additional information regarding the incorporation of other constituents into the CM particle.
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved is determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
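The dependence studied above can be illustrated with a toy numpy sketch (an assumption-laden stand-in, not the Met Office system): build the Hessian S = B⁻¹ + Hᵀ R⁻¹ H for direct observations of a 1-D state with Gaussian background-error correlations, and watch the condition number grow with the background lengthscale:

```python
import numpy as np

n = 50
x = np.arange(n)

def hessian_cond(lengthscale, obs_idx, sigma_b=1.0, sigma_o=1.0):
    """Condition number of the toy variational-DA Hessian
    S = B^{-1} + H^T R^{-1} H on a 1-D grid of n points."""
    # Background error covariance with Gaussian correlations.
    B = sigma_b**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / lengthscale)**2)
    B += 1e-8 * np.eye(n)                 # jitter to keep B invertible
    H = np.eye(n)[obs_idx]                # direct observations of some points
    S = np.linalg.inv(B) + H.T @ H / sigma_o**2
    return np.linalg.cond(S)

obs = np.arange(0, n, 5)                  # evenly spaced observations
print(hessian_cond(2.0, obs), hessian_cond(6.0, obs))
# Longer background correlation lengthscales worsen the conditioning,
# consistent with the theoretical bounds discussed in the abstract.
```

Varying `obs` (clustered versus spread) and `sigma_o` in the same sketch exercises the other two factors the study examines.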
Abstract:
In this paper we examine the order of integration of EuroSterling interest rates by employing techniques that can allow for a structural break under the null and/or alternative hypothesis of the unit-root tests. In light of these results, we investigate the cointegrating relationship implied by the single, linear expectations hypothesis of the term structure of interest rates employing two techniques, one of which allows for the possibility of a break in the mean of the cointegrating relationship. The aim of the paper is to investigate whether or not the interest rate series can be viewed as I(1) processes and furthermore, to consider whether there has been a structural break in the series. We also determine whether, if we allow for a break in the cointegration analysis, the results are consistent with those obtained when a break is not allowed for. The main results reported in this paper support the conjecture that the ‘short’ Euro-currency rates are characterised as I(1) series that exhibit a structural break on or near Black Wednesday, 16 September 1992, whereas the ‘long’ rates are I(1) series that do not support the presence of a structural break. The evidence from the cointegration analysis suggests that tests of the expectations hypothesis based on data sets that include the ERM crisis period, or a period that includes a structural break, might be problematic if the structural break is not explicitly taken into account in the testing framework.
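As a sketch of the underlying unit-root machinery (a plain textbook Dickey-Fuller regression without augmentation lags or break terms, not the paper's break-robust tests), implemented with numpy on synthetic series:

```python
import numpy as np

rng = np.random.default_rng(2)

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (no augmentation lags).
    Strongly negative values reject the I(1) null."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

T = 500
rw = np.cumsum(rng.normal(size=T))        # unit-root (I(1)) series
ar = np.zeros(T)                          # stationary AR(1), phi = 0.5
for i in range(1, T):
    ar[i] = 0.5 * ar[i - 1] + rng.normal()

print(round(df_tstat(rw), 2), round(df_tstat(ar), 2))
# The stationary series yields a strongly negative statistic; the random
# walk's typically stays above the ~-2.9 Dickey-Fuller critical value.
```

The break-robust variants the paper employs add dummy regressors for a shift in mean (or trend) at a candidate break date, here Black Wednesday, and use different critical values.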
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also verifies experimentally a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
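As an aside, the PID baseline used for comparison can be sketched in a few lines; the first-order arm dynamics and all gains below are hypothetical stand-ins, not the paper's rig:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy single-link arm modelled as a first-order lag (hypothetical dynamics).
dt = 0.01
pid = PID(kp=4.0, ki=1.0, kd=0.5, dt=dt)
angle = 0.0
for _ in range(2000):                 # 20 s of simulated time
    u = pid.step(1.0, angle)          # drive the joint angle to 1 rad
    angle += dt * (-angle + u)        # d(angle)/dt = -angle + u
print(round(angle, 3))                # settles near the 1 rad setpoint
```

Unlike the constrained predictive controller in the paper, this baseline has no mechanism for respecting input constraints, which is one motivation for the comparison.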