67 results for Analysis Phase
Abstract:
This paper considers the contribution of pollen analysis to conservation strategies aimed at restoring planted ancient woodland. Pollen and charcoal data are presented from organic deposits located adjacent to the Wentwood, a large planted ancient woodland in southeast Wales. Knowledge of the ecosystems preceding conifer planting can assist in restoring ancient woodlands by placing fragmented surviving ancient woodland habitats in a broader ecological, historical and cultural context. These habitats derive largely from secondary woodland that regenerated in the 3rd–5th centuries A.D. following large-scale clearance of Quercus-Corylus woodland during the Romano-British period. Woodland regeneration favoured Fraxinus and Betula. Wood pasture and common land dominated the Wentwood during the medieval period until the enclosures of the 17th century. Surviving ancient woodland habitats contain an important Fagus component that probably reflects an earlier phase of planting preceding the conifer planting of the 1880s. It is recommended that restoration measures should not aim to recreate static landscapes or woodland that existed under natural conditions. Very few habitats within the Wentwood can be considered wholly natural because of the long history of human impact. In these circumstances, restoration should focus on restoring those elements of the cultural landscape that are of most benefit to a range of flora and fauna, whilst taking into account factors that present significant issues for future conservation management, such as the adverse effects of projected climate change.
Abstract:
Rate coefficients for reactions of nitrate radicals (NO3) with (Z)-pent-2-ene, (E)-pent-2-ene, (Z)-hex-2-ene, (E)-hex-2-ene, (Z)-hex-3-ene, (E)-hex-3-ene and (E)-3-methylpent-2-ene were determined to be (6.55 ± 0.78) × 10⁻¹³, (3.78 ± 0.45) × 10⁻¹³, (5.30 ± 0.73) × 10⁻¹³, (3.83 ± 0.47) × 10⁻¹³, (4.37 ± 0.49) × 10⁻¹³, (3.61 ± 0.40) × 10⁻¹³ and (8.9 ± 1.5) × 10⁻¹² cm³ molecule⁻¹ s⁻¹, respectively. We performed kinetic experiments at room temperature and atmospheric pressure using a relative-rate technique with GC-FID analysis. The experimental results demonstrate a surprisingly large cis-trans (Z-E) effect, particularly in the case of the pent-2-enes, where the ratio of rate coefficients is ca. 1.7. Rate coefficients are discussed in terms of electronic and steric influences, and our results give some insight into the effects of chain length and position of the double bond on the reaction of NO3 with unsaturated hydrocarbons. Atmospheric lifetimes were calculated with respect to important oxidants in the troposphere for the alkenes studied, and NO3-initiated oxidation is found to be the dominant degradation route for (Z)-pent-2-ene, (Z)-hex-3-ene and (E)-3-methylpent-2-ene.
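The quoted lifetimes follow from the pseudo-first-order relation τ = 1/(k[NO3]). A minimal sketch in Python, assuming a typical night-time NO3 concentration of 5 × 10⁸ molecule cm⁻³ (an illustrative value, not one given in the abstract):

```python
# Atmospheric lifetime with respect to NO3: tau = 1 / (k * [NO3]).
# The NO3 concentration below is an assumed typical night-time value,
# not one taken from the abstract.

NO3_CONC = 5.0e8  # molecule cm^-3 (assumed night-time average)

# Rate coefficients from the abstract, in cm^3 molecule^-1 s^-1
rate_coefficients = {
    "(Z)-pent-2-ene": 6.55e-13,
    "(E)-pent-2-ene": 3.78e-13,
    "(E)-3-methylpent-2-ene": 8.9e-12,
}

def lifetime_hours(k, conc=NO3_CONC):
    """Pseudo-first-order lifetime in hours."""
    return 1.0 / (k * conc) / 3600.0

for name, k in rate_coefficients.items():
    print(f"{name}: {lifetime_hours(k):.2f} h")
```

Note that the (Z)/(E) ratio for the pent-2-enes, 6.55/3.78 ≈ 1.7, is exactly the Z-E effect highlighted in the abstract.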
Abstract:
The differential phase (ΦDP) measured by polarimetric radars is recognized to be a very good indicator of the path-integrated attenuation caused by rain. Moreover, if a linear relationship is assumed between the specific differential phase (KDP) and the specific attenuation (AH) and specific differential attenuation (ADP), then attenuation can easily be corrected. The coefficients of proportionality, γH and γDP, are, however, known to depend in rain on drop temperature, drop shapes, drop size distribution, and the presence of large drops causing Mie scattering. In this paper, the authors extensively apply a physically based method, often referred to as the “Smyth and Illingworth constraint,” which retrieves the γDP coefficient from the constraint that the value of the differential reflectivity ZDR on the far side of the storm should be low. More than 30 convective episodes observed by the French operational C-band polarimetric Trappes radar during two summers (2005 and 2006) are used to document the variability of γDP with respect to the intrinsic three-dimensional characteristics of the attenuating cells. The Smyth and Illingworth constraint could be applied to only 20% of all attenuated rays of the 2-yr dataset, so it cannot be considered the unique solution for attenuation correction in an operational setting, but it is useful for characterizing the properties of the strongly attenuating cells. The range of variation of γDP is shown to be extremely large, with minimal, maximal, and mean values equal, respectively, to 0.01, 0.11, and 0.025 dB °−1. Coefficient γDP appears to be almost linearly correlated with the horizontal reflectivity (ZH), differential reflectivity (ZDR), specific differential phase (KDP) and correlation coefficient (ρHV) of the attenuating cells. The temperature effect is negligible with respect to that of the microphysical properties of the attenuating cells.
Unusually large values of γDP, above 0.06 dB °−1, often referred to as “hot spots,” are reported for 15%—a nonnegligible figure—of the rays presenting a significant total differential phase shift (ΔϕDP > 30°). The corresponding strongly attenuating cells are shown to have extremely high ZDR (above 4 dB) and ZH (above 55 dBZ), very low ρHV (below 0.94), and high KDP (above 4° km−1). Analysis of 4 yr of observed raindrop spectra does not reproduce such low values of ρHV, suggesting that (wet) ice is likely to be present in the precipitation medium and responsible for the attenuation and high phase shifts. Furthermore, if melting ice is responsible for the high phase shifts, this suggests that KDP may not be uniquely related to rainfall rate but can result from the presence of wet ice. This hypothesis is supported by the analysis of the vertical profiles of horizontal reflectivity and the values of conventional probability of hail indexes.
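The linear correction scheme described above amounts to adding a ΦDP-proportional term to the measured reflectivities. A minimal sketch under assumed values: γDP is set to the mean value quoted above, while γH and the ray profile are purely illustrative (the Smyth and Illingworth constraint would instead tune γDP so that the far-side ZDR is low):

```python
# Linear PhiDP-based attenuation correction: if AH = gamma_H * KDP and
# ADP = gamma_DP * KDP, the path-integrated (two-way) attenuation is
# proportional to the measured differential phase, so
# ZH_corr = ZH + gamma_H * Phi_DP and ZDR_corr = ZDR + gamma_DP * Phi_DP.

GAMMA_H = 0.08    # dB per degree (assumed C-band order of magnitude)
GAMMA_DP = 0.025  # dB per degree (mean value reported in the abstract)

def correct_ray(zh, zdr, phi_dp, gamma_h=GAMMA_H, gamma_dp=GAMMA_DP):
    """Apply the linear PhiDP-based correction gate by gate."""
    zh_corr = [z + gamma_h * p for z, p in zip(zh, phi_dp)]
    zdr_corr = [z + gamma_dp * p for z, p in zip(zdr, phi_dp)]
    return zh_corr, zdr_corr

# Synthetic ray: PhiDP accumulates through an attenuating cell
phi_dp = [0.0, 10.0, 30.0, 60.0]   # degrees
zh = [45.0, 44.0, 40.0, 35.0]      # measured, attenuated (dBZ)
zdr = [2.0, 1.5, 0.8, 0.2]         # measured, attenuated (dB)
zh_c, zdr_c = correct_ray(zh, zdr, phi_dp)
print(round(zh_c[-1], 2), round(zdr_c[-1], 2))  # 39.8 1.7 at the far gate
```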
Abstract:
Higher-order cumulant analysis is applied to the blind equalization of linear time-invariant (LTI) nonminimum-phase channels. The channel model is moving-average based. To identify the moving-average parameters of the channels, a higher-order cumulant fitting approach is adopted, in which a novel relay algorithm is proposed to obtain the global solution. In addition, the technique incorporates model-order determination. The transmitted data are considered to be independently identically distributed random variables over some discrete finite set (e.g., the set {±1, ±3}). A transformation scheme is suggested so that third-order cumulant analysis can be applied to this type of data. Simulation examples verify the feasibility and potential of the algorithm. Performance is compared with that of the noncumulant-based Sato scheme in terms of steady-state MSE and convergence rate.
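The quantity underlying such a fitting approach is the third-order cumulant, which for a zero-mean sequence reduces to c3(i, j) = E[x(n)x(n+i)x(n+j)]. A generic estimator sketch (not the paper's relay algorithm), using a skewed input because symmetric alphabets such as {±1, ±3} have vanishing third-order cumulants, which is precisely what motivates the paper's transformation scheme:

```python
# Sample third-order cumulant c3(i, j) = E[x(n) x(n+i) x(n+j)] of a
# zero-mean sequence (non-negative lags only).  For an MA channel driven
# by an i.i.d. input with non-zero skewness, these cumulants determine the
# nonminimum-phase MA coefficients that a cumulant-fitting method matches.

import random

def third_order_cumulant(x, i, j):
    n = len(x) - max(i, j)
    return sum(x[k] * x[k + i] * x[k + j] for k in range(n)) / n

# Skewed i.i.d. input: a centred Bernoulli(p) sequence has
# c3(0, 0) = p(1-p)(1-2p); here 0.3*0.7*0.4 = 0.084.
random.seed(0)
p = 0.3
s = [(1.0 if random.random() < p else 0.0) - p for _ in range(20000)]
print(round(third_order_cumulant(s, 0, 0), 3))  # theory: 0.084
```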
Abstract:
Little has been reported on the performance of near-far resistant CDMA detectors in the presence of system parameter estimation errors (SPEEs). Starting with the general mathematical model of matched filters, the paper examines the effects of three classes of SPEEs, i.e., time-delay, carrier phase, and carrier frequency errors, on the performance (BER) of an emerging type of near-far resistant coherent DS/SSMA detector, i.e., the linear decorrelating detector. For comparison, the corresponding results for the conventional detector are also presented. It is shown that the linear decorrelating detector can still maintain a considerable performance advantage over the conventional detector even when some SPEEs exist.
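The detector under study can be sketched in its textbook two-user synchronous form: the matched-filter outputs satisfy y = RAb + n, where R is the code cross-correlation matrix, and applying R⁻¹ removes multiple-access interference whatever the interferer's power. All parameters below are illustrative, and estimation errors are ignored, i.e. this is the SPEE-free baseline:

```python
# Two-user synchronous CDMA sketch of the linear decorrelating detector.
# Noise-free toy example with illustrative parameters.

def decorrelate_2user(y, rho):
    """Invert the 2x2 correlation matrix R = [[1, rho], [rho, 1]]."""
    det = 1.0 - rho * rho
    y1, y2 = y
    return ((y1 - rho * y2) / det, (y2 - rho * y1) / det)

rho = 0.4                       # code cross-correlation
b = (+1, -1)                    # transmitted bits
a = (1.0, 10.0)                 # amplitudes: user 2 is much stronger
# Matched-filter outputs y = R A b (noise-free)
y = (a[0] * b[0] + rho * a[1] * b[1],
     rho * a[0] * b[0] + a[1] * b[1])
z = decorrelate_2user(y, rho)
# The conventional detector would decide sign(y[0]) = -1 for user 1
# (wrong, the near-far problem); the decorrelator recovers both users.
print(tuple(round(v, 6) for v in z))  # (1.0, -10.0)
```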
Abstract:
The paper analyzes the performance of the unconstrained filtered-x LMS (FxLMS) algorithm for active noise control (ANC), where we remove the constraints that the controller must be causal and have a finite impulse response. It is shown that the unconstrained FxLMS algorithm, if stable, always converges to the true optimum filter, even if the estimate of the secondary path is not perfect, and that its final mean square error is independent of the secondary path. Moreover, we show that the necessary and sufficient stability condition for the feedforward unconstrained FxLMS is that the maximum phase error of the secondary-path estimate be within 90°, which is the only necessary condition for the feedback unconstrained FxLMS. The significance of the analysis for a practical system is also discussed. Finally, we show how the obtained results can guide the design of a robust feedback ANC headset.
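The update at the heart of FxLMS filters the reference signal through the secondary-path estimate before it enters the LMS weight update. Below is a minimal causal FIR sketch (the unconstrained variant analyzed in the paper drops exactly these causality/FIR restrictions); the paths are illustrative, and the deliberately imperfect one-tap path estimate has a gain error but zero phase error, i.e. it sits inside the 90° bound:

```python
# Minimal filtered-x LMS sketch for ANC with a one-tap secondary path.
# All paths and step sizes are illustrative, not taken from the paper.

import random

def fxlms_demo(n_samples=3000, n_taps=8, mu=0.05):
    random.seed(1)
    s = 0.5        # true secondary path (one tap, assumed)
    s_hat = 0.4    # imperfect estimate: gain error, zero phase error
    w = [0.0] * n_taps
    x_buf = [0.0] * n_taps
    errors = []
    for _ in range(n_samples):
        x_buf = [random.choice((-1.0, 1.0))] + x_buf[:-1]
        # primary (noise) path: d = 0.9 x(n-1) + 0.2 x(n-2)
        d = 0.9 * x_buf[1] + 0.2 * x_buf[2]
        y = sum(wi * xi for wi, xi in zip(w, x_buf))  # controller output
        e = d - s * y                                 # residual at error mic
        # update uses the reference filtered through the path *estimate*
        w = [wi + mu * e * s_hat * xi for wi, xi in zip(w, x_buf)]
        errors.append(e)
    return errors

errs = fxlms_demo()
print(max(abs(e) for e in errs[-100:]))  # residual driven toward zero
```

Despite the mismatched estimate (0.4 vs. 0.5), the adaptation still converges, consistent with the phase-error condition: the update direction remains correct as long as the phase error stays within 90°.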
Abstract:
Sensitive methods that are currently used to monitor proteolysis by plasmin in milk are limited due to their high cost and lack of standardisation for quality assurance in the various dairy laboratories. In this study, four methods, trinitrobenzene sulphonic acid (TNBS), reverse phase high pressure liquid chromatography (RP-HPLC), gel electrophoresis and fluorescamine, were selected to assess their suitability for the detection of proteolysis in milk by plasmin. Commercial UHT milk was incubated with plasmin at 37 °C for one week. Clarification was achieved by isoelectric precipitation (pH 4·6 soluble extracts) or 6% (final concentration) trichloroacetic acid (TCA). The pH 4·6 and 6% TCA soluble extracts of milk showed high correlations (R² > 0·93) by the TNBS, fluorescamine and RP-HPLC methods, confirming increased proteolysis during storage. For gel electrophoresis, extensive proteolysis was confirmed by the disappearance of α- and β-casein bands on the seventh day, which was more evident at the highest plasmin concentration. This was accompanied by the appearance of α- and β-casein proteolysis products with higher intensities than on previous days, implying that more products had been formed as a result of casein breakdown. The fluorescamine method had a lower detection limit compared with the other methods, whereas gel electrophoresis was the best qualitative method for monitoring β-casein proteolysis products. Although HPLC was the most sensitive, the TNBS method is recommended for use in routine laboratory analysis on the basis of its accuracy, reliability and simplicity.
Abstract:
We examined the relationship between blood antioxidant enzyme activities, indices of inflammatory status and a number of lifestyle factors in the Caerphilly prospective cohort study of ischaemic heart disease. The study began in 1979 and is based on a representative male population sample. Initially 2512 men were seen in phase I and followed up every 5 years in phases II and III; they have recently been seen in phase IV. Data on social class, smoking habit and alcohol consumption were obtained by questionnaire, and body mass index was measured. Antioxidant enzyme activities and indices of inflammatory status were estimated by standard techniques. Significant associations were observed for: age with α-1-antichymotrypsin (p<0.0001) and with caeruloplasmin, both protein and oxidase (p<0.0001); smoking habit with α-1-antichymotrypsin (p<0.0001), with caeruloplasmin, both protein and oxidase (p<0.0001), and with glutathione peroxidase (GPX) (p<0.0001); social class with α-1-antichymotrypsin (p<0.0001), with caeruloplasmin, both protein (p<0.001) and oxidase (p<0.01), and with GPX (p<0.0001); body mass index with α-1-antichymotrypsin (p<0.0001) and with caeruloplasmin protein (p<0.001). There was no significant association between alcohol consumption and any of the blood enzymes measured. Factor analysis produced a three-factor model (explaining 65.9% of the variation in the data set) which appeared to indicate close inter-relationships among the antioxidants.
Abstract:
In recent years, there has been a drive to save development costs and shorten the time-to-market of new therapies. Research into novel trial designs to facilitate this goal has led to, amongst other approaches, the development of methodology for seamless phase II/III designs. Such designs allow treatment or dose selection at an interim analysis and comparative evaluation of efficacy against control in the same study. These methods have gained much attention because of their potential advantages compared with conventional drug development programmes with separate trials for individual phases. In this article, we review the various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method. The objective of this article is to describe the approaches in a unified framework and to highlight their similarities and differences, so as to allow an appropriate methodology to be chosen by a trialist considering conducting such a trial.
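The combination test approach can be illustrated with the inverse-normal rule: stage-wise p-values are combined as C(p1, p2) = 1 − Φ(w1·z1 + w2·z2) with pre-specified weights, which preserves the type I error rate despite adaptive selection at the interim. A sketch with illustrative p-values and equal weights; a full seamless design would embed this in a closed testing procedure (e.g. with a Dunnett-type adjustment for the selection):

```python
# Inverse-normal combination of stage-wise p-values, the building block of
# the combination test approach.  p-values and weights are illustrative.

from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5 ** 0.5, w2=0.5 ** 0.5):
    """C(p1, p2) = 1 - Phi(w1*z1 + w2*z2), with z_k = Phi^{-1}(1 - p_k)."""
    nd = NormalDist()
    z = w1 * nd.inv_cdf(1 - p1) + w2 * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)

p_combined = inverse_normal_combination(0.04, 0.03)
print(round(p_combined, 4))
reject = p_combined < 0.025  # typical one-sided phase III level
```

Note that neither stage alone is significant at the 0.025 level, yet the pre-specified combination is: this is the efficiency gain from carrying stage-1 data into the confirmatory analysis.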
Abstract:
This paper examines two hydrochemical time-series derived from stream samples taken in the Upper Hafren catchment, Plynlimon, Wales. One time-series comprises data collected at 7-hour intervals over 22 months (Neal et al., submitted, this issue), while the other is based on weekly sampling over 20 years. A subset of determinands (aluminium, calcium, chloride, conductivity, dissolved organic carbon, iron, nitrate, pH, silicon and sulphate) is examined within a framework of non-stationary time-series analysis to identify determinand trends, seasonality and short-term dynamics. The results demonstrate that both long-term and high-frequency monitoring provide valuable and unique insights into the hydrochemistry of a catchment. The long-term data allowed analysis of long-term trends, demonstrating continued increases in DOC concentrations accompanied by declining SO4 concentrations within the stream, and provided new insights into the changing amplitude and phase of the seasonality of determinands such as DOC and Al. Additionally, these data proved invaluable for placing the short-term variability seen in the high-frequency data in context. The 7-hour data highlighted complex diurnal cycles for NO3, Ca and Fe, with cycles displaying changes in phase and amplitude on a seasonal basis. The high-frequency data also demonstrated the need to consider the impact that the time of sample collection can have on the summary statistics of the data, and showed that sampling during the hours of darkness provides additional hydrochemical information for determinands which exhibit pronounced diurnal variability. Moving forward, this research demonstrates the need for both long-term and high-frequency monitoring to facilitate a full and accurate understanding of catchment hydrochemical dynamics.
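Seasonal amplitude and phase of the kind reported for DOC and Al can be extracted by projecting a record onto a single annual harmonic: fit y ≈ a·cos(2πt) + b·sin(2πt), with amplitude √(a²+b²) and phase atan2(b, a). This is a minimal stationary sketch on synthetic weekly data; the study itself uses non-stationary methods in which amplitude and phase are allowed to evolve over time:

```python
# Amplitude and phase of an annual cycle via projection onto one harmonic.
# Synthetic data; the sinusoid parameters below are illustrative.

import math

def annual_harmonic(t, y):
    """t in years (whole years of regular sampling); returns (amplitude, phase in radians)."""
    n = len(t)
    a = 2.0 / n * sum(yi * math.cos(2 * math.pi * ti) for ti, yi in zip(t, y))
    b = 2.0 / n * sum(yi * math.sin(2 * math.pi * ti) for ti, yi in zip(t, y))
    return math.hypot(a, b), math.atan2(b, a)

# Weekly sampling over 20 years of a synthetic determinand record
t = [k / 52.0 for k in range(52 * 20)]
y = [3.0 + 1.5 * math.cos(2 * math.pi * ti - 0.8) for ti in t]
amp, phase = annual_harmonic(t, y)
print(round(amp, 3), round(phase, 3))  # recovers amplitude 1.5, phase 0.8
```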
Abstract:
Government policies have backed intermediate housing market mechanisms such as shared equity, intermediate rented and shared ownership (SO) as potential routes for some households who are otherwise squeezed between social housing and the private market. The rhetoric deployed around such housing has regularly contained claims about its social progressiveness and its role in facilitating socio-economic mobility, centring on a claim that SO schemes can encourage people to move from rented accommodation through a shared equity phase and into full owner-occupation. SO has been justified on the grounds of it being a transitional state rather than a permanent tenure. However, SO buyers may be laden with economic cost-benefit structures that do not stack up evenly, and as a consequence there may be little realistic prospect of ever reaching a preferred outcome. Such behaviours have received little empirical attention as yet, even though the SO model arguably offers a sub-optimal route to homeownership and to wider quality of life. Given the paucity of rigorous empirical work on this issue, this paper delineates the evidence so far and sets out a research agenda. Our analysis is based on a large dataset of new shared owners, observing an information base that spans the past decade. We then set out an agenda to further examine the behaviours of SO occupants and the implications for future public policy, based on the existing literature and our outline findings. This paper is particularly opportune at a time of economic uncertainty and an overriding ‘austerity’ drive in public funding in the UK, through which SO schemes have enjoyed uninterrupted support thus far.
Abstract:
Seamless phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages, with stage 1 used to answer phase II objectives such as treatment selection and stage 2 used for the confirmatory analysis, which is a phase III objective. Although seamless phase II/III clinical trials are efficient because the confirmatory analysis includes phase II data from stage 1, inference can pose statistical challenges. In this paper, we consider point estimation following seamless phase II/III clinical trials in which stage 1 is used to select the most effective experimental treatment and to decide if, compared with a control, the trial should stop at stage 1 for futility. If the trial is not stopped, then the phase III confirmatory part of the trial involves evaluation of the selected most effective experimental treatment and the control. We have developed two new estimators for the treatment difference between these two treatments with the aim of reducing bias conditional on the treatment selection made and on the fact that the trial continues to stage 2. We have demonstrated the properties of these estimators using simulations.
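The selection bias that such estimators aim to reduce is easy to see in a small Monte Carlo sketch: with two experimental arms of equal true effect, reporting the stage-1 estimate of whichever arm looks best overstates the treatment difference on average. All numbers below are illustrative:

```python
# Monte Carlo illustration of selection bias after picking the best arm.
# True effects, sample sizes and the number of replications are illustrative.

import random

def mean_selected_estimate(true_effect=0.3, sd=1.0, n_per_arm=50,
                           n_trials=20000, seed=42):
    random.seed(seed)
    se = sd / n_per_arm ** 0.5           # standard error of a stage-1 estimate
    total = 0.0
    for _ in range(n_trials):
        est1 = random.gauss(true_effect, se)  # arm 1 vs control, stage 1
        est2 = random.gauss(true_effect, se)  # arm 2 vs control, stage 1
        total += max(est1, est2)              # naive estimate for the selected arm
    return total / n_trials

avg = mean_selected_estimate()
print(avg)  # noticeably above the true effect of 0.3
```

Analytically, the expected maximum of two independent N(0.3, se²) estimates is 0.3 + se/√π ≈ 0.38 here, so the naive estimator is biased upward by roughly a quarter of the true effect.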
Abstract:
A quasi-optical interferometric technique capable of measuring antenna phase patterns without the need for a heterodyne receiver is presented. It is particularly suited to the characterization of terahertz antennas feeding power detectors or mixers employing quasi-optical local oscillator injection. Examples of recorded antenna phase patterns at frequencies of 1.4 and 2.5 THz using homodyne detectors are presented. To our knowledge, these are the highest-frequency antenna phase patterns ever recovered. Knowledge of both the amplitude and phase patterns in the far field enables a Gauss-Hermite or Gauss-Laguerre beam-mode analysis to be carried out for the antenna, which is of importance in performance-optimization calculations, such as of antenna gain and beam efficiency parameters, at the design and prototype stage of antenna development. A full description of the beam would also be required if the antenna is to be used to feed a quasi-optical system in the near-field to far-field transition region. This situation could often arise when the device is fitted directly at the back of telescopes in flying observatories. A further benefit of the proposed technique is its simplicity for characterizing systems in situ, an advantage of considerable importance since, in many situations, the components may not be removable for further characterization once assembled. The proposed methodology is generic and should be useful across the wider sensing community, e.g., in single-detector acoustic imaging or in adaptive imaging array applications. Furthermore, it is applicable across other frequencies of the EM spectrum, provided adequate spatial and temporal phase stability of the source can be maintained throughout the measurement process. Phase information retrieval is also of importance to emergent research areas, such as band-gap structure characterization, meta-materials research, electromagnetic cloaking, slow light, super-lens design, as well as near-field and virtual imaging applications.
Abstract:
In wireless communication systems, all in-phase and quadrature-phase (I/Q) signal processing receivers face the problem of I/Q imbalance. In this paper, we investigate the effect of I/Q imbalance on the performance of multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems that perform the combining at the radio frequency (RF) level, thereby requiring only one RF chain. In order to perform the MIMO MRC, we propose a channel estimation algorithm that accounts for the I/Q imbalance. Moreover, a compensation algorithm for the I/Q imbalance in MIMO MRC systems is proposed, which first employs the least-squares (LS) rule to estimate jointly the coefficients of the channel gain matrix, the beamforming and combining weight vectors, and the parameters of the I/Q imbalance, and then makes use of the received signal together with its conjugate to detect the transmitted signal. The performance of the MIMO MRC system under study is evaluated in terms of average symbol error probability (SEP), outage probability and ergodic capacity, which are derived for transmission over Rayleigh fading channels. Numerical results are provided and show that the proposed compensation algorithm can efficiently mitigate the effect of I/Q imbalance.
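A widely used baseband model for receiver I/Q imbalance writes the distorted sample as y = K1·x + K2·x*, and given (estimates of) the coefficients the symbol is recovered from y together with its conjugate, which mirrors the role of the conjugate term in the detection step above. In this sketch the gain/phase values are illustrative and the coefficients are assumed known rather than LS-estimated:

```python
# I/Q-imbalance sketch: y = K1*x + K2*conj(x); the symbol is recovered as
# x = (conj(K1)*y - K2*conj(y)) / (|K1|^2 - |K2|^2).
# Gain/phase mismatch values are illustrative.

import cmath

def iq_coeffs(gain, phase):
    """Standard receiver I/Q-imbalance model coefficients."""
    k1 = (1 + gain * cmath.exp(-1j * phase)) / 2
    k2 = (1 - gain * cmath.exp(1j * phase)) / 2
    return k1, k2

def compensate(y, k1, k2):
    det = abs(k1) ** 2 - abs(k2) ** 2
    return (k1.conjugate() * y - k2 * y.conjugate()) / det

k1, k2 = iq_coeffs(gain=1.1, phase=0.1)  # 10% gain, ~6 deg phase mismatch
x = complex(1, -1)                        # transmitted QPSK-like symbol
y = k1 * x + k2 * x.conjugate()           # distorted received sample
x_hat = compensate(y, k1, k2)
print(abs(x_hat - x))  # ~0: the imbalance is removed exactly
```

The identity behind `compensate` follows by substituting conj(y) = K1*·x* + K2*·x: the x* terms cancel, leaving (|K1|² − |K2|²)·x.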