993 results for Subset


Relevance: 10.00%

Abstract:

A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm h⁻¹ to 20% at 14 mm h⁻¹. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%–80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%–35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%–15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
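
The compositing step lends itself to a compact illustration. Below is a minimal sketch of a Bayesian database retrieval of this general kind, assuming a precomputed database of simulated radiance vectors with matching rain rates and a Gaussian observation-error model; all names are hypothetical and this is not the operational TMI algorithm.

```python
import numpy as np

def bayesian_composite(y_obs, db_radiances, db_rain_rates, obs_cov):
    """Weight each simulated profile by its radiative consistency with
    the observed radiances, then composite the associated rain rates.

    y_obs         : (k,) observed microwave radiances
    db_radiances  : (n, k) simulated radiances, one row per cloud profile
    db_rain_rates : (n,) surface rain rate of each simulated profile
    obs_cov       : (k, k) observation-plus-modelling error covariance
    """
    diff = db_radiances - y_obs                      # (n, k) residuals
    prec = np.linalg.inv(obs_cov)                    # error precision
    # Gaussian weight exp(-0.5 * d' S^-1 d) for each database entry
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, prec, diff))
    w_sum = w.sum()
    estimate = np.dot(w, db_rain_rates) / w_sum      # posterior-mean rain rate
    # Spread of the weighted ensemble as a random-error proxy
    spread = np.sqrt(np.dot(w, (db_rain_rates - estimate) ** 2) / w_sum)
    return estimate, spread

# Toy usage with a random database (illustrative only)
rng = np.random.default_rng(0)
db_tb = rng.normal(250.0, 15.0, size=(1000, 9))      # 9 channels, 1000 profiles
db_rr = rng.gamma(2.0, 2.0, size=1000)               # mm/h
rain, err = bayesian_composite(db_tb[0], db_tb, db_rr, 16.0 * np.eye(9))
```

The weighted spread plays the role of the algorithm's own random-error estimate, the quantity that the abstract describes propagating to coarser time and space resolutions.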

Relevance: 10.00%

Abstract:

Commensal bacteria, including some species of lactobacilli commonly present in human breast milk, appear to colonize the neonatal gut and contribute to protection against infant infections, suggesting that lactobacilli could potentially modulate immunity. In this study, we evaluated the potential of two Lactobacillus strains isolated from human milk to modulate the activation and cytokine profile of peripheral blood mononuclear cell (PBMC) subsets in vitro. Moreover, these effects were compared to the same probiotic species of non-milk origin. Lactobacillus salivarius CECT5713 and Lactobacillus fermentum CECT5716 at 10⁵, 10⁶ and 10⁷ bacteria/mL were co-cultured with PBMC (10⁶/mL) from 8 healthy donors for 24 h. Activation status (CD69 and CD25 expression) of natural killer (NK) cells (CD56+), total T cells (CD3+), cytotoxic T cells (CD8+) and CD4+ T cells was determined by flow cytometry. Regulatory T cells (Treg) were also quantified by intracellular Foxp3 evaluation. Regarding innate immunity, NK cells were activated by addition of both Lactobacillus strains, and in particular, the CD8+ NK subset was preferentially induced to highly express CD69 (90%, p<0.05). With respect to acquired immunity, approximately 9% of CD8+ T cells became activated after co-cultivation with L. fermentum or L. salivarius. Although CD4+ T cells demonstrated a weaker response, there was a preferential activation of Treg cells (CD4+CD25+Foxp3+) after exposure to both milk probiotic bacteria (p<0.05). Both strains significantly induced the production of a number of cytokines and chemokines, including TNFα, IL-1β, IL-8, MIP-1α, MIP-1β, and GM-CSF, but some strain-specific effects were apparent. This work demonstrates that L. salivarius CECT5713 and L. fermentum CECT5716 enhanced both natural and acquired immune responses, as evidenced by the activation of NK and T cell subsets and the expansion of Treg cells, as well as the induction of a broad array of cytokines.

Relevance: 10.00%

Abstract:

Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations on the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test-duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS: long-term memory traces of useful regularities within the environment, and classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.

Relevance: 10.00%

Abstract:

Variations on the standard Kohonen feature map can enable an ordering of the map state space using only a limited subset of the complete input vector. It is also possible to employ a merely local adaptation procedure to order the map, rather than relying on global variables and objectives. Such variations have been included as part of a hybrid learning system (HLS) which has arisen out of a genetic-based classifier system. In the paper a description of the modified feature map, which constitutes the HLS's long-term memory, is given, and results on the control of a simple maze-running task are presented, thereby demonstrating the value of goal-related feedback within the overall network.
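
As a rough illustration of both variations, here is a minimal sketch of a one-dimensional feature map whose winner is selected on a masked subset of the input components and whose update is purely local; the map size, mask, and learning constants are illustrative and do not reproduce the authors' HLS implementation.

```python
import numpy as np

def train_masked_som(data, mask, n_units=20, epochs=50,
                     lr=0.3, radius=2, seed=0):
    """Order a 1-D feature map using only the masked subset of each
    input vector, with a fixed local neighbourhood update (no global
    schedule or predefined test-duration limit).

    data : (n, d) input vectors; mask : (d,) boolean subset selector
    """
    rng = np.random.default_rng(seed)
    weights = rng.uniform(0.0, 1.0, size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(data):
            # The winner is chosen on the masked components only
            dist = np.linalg.norm((weights - x)[:, mask], axis=1)
            win = int(np.argmin(dist))
            lo, hi = max(0, win - radius), min(n_units, win + radius + 1)
            for u in range(lo, hi):
                h = 1.0 - abs(u - win) / (radius + 1)  # local kernel
                weights[u] += lr * h * (x - weights[u])
    return weights

# Toy usage: order the map on the first two of four input dimensions
X = np.random.default_rng(1).uniform(size=(200, 4))
W = train_masked_som(X, mask=np.array([True, True, False, False]))
```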

Relevance: 10.00%

Abstract:

A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and the model robustness and adequacy. The derived model parameters are estimated via forward orthogonal least squares, but the model subset selection cost function includes a D-optimality design criterion that maximizes the determinant of the design matrix of the subset to ensure the robustness, adequacy, and parsimony of the final model. The proposed approach is based on the forward orthogonal least squares (OLS) algorithm, and the new D-optimality-based cost function is constructed within the orthogonalization process to gain computational advantages and hence maintain the inherent computational efficiency associated with the conventional forward OLS approach. Illustrative examples are included to demonstrate the effectiveness of the new approach.
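
A minimal sketch of the idea follows, assuming a linear-in-the-parameters model: candidates are orthogonalized against the regressors already chosen, and the usual error-reduction score is augmented with a log-determinant reward, since for orthogonal regressors the determinant of the subset's information matrix is the product of the wᵢᵀwᵢ terms. The trade-off weight beta and all names are illustrative, not the paper's exact cost function.

```python
import numpy as np

def forward_ols_doptimality(X, y, n_select, beta=1e-3):
    """Greedy forward subset selection via orthogonal least squares,
    with a D-optimality reward log(w'w) added to the selection score."""
    n, m = X.shape
    selected, W = [], []            # chosen column indices, orthogonal basis
    residual = y.astype(float).copy()
    for _ in range(n_select):
        best, best_score = None, -np.inf
        for j in (set(range(m)) - set(selected)):
            w = X[:, j].astype(float).copy()
            for wk in W:            # Gram-Schmidt against chosen regressors
                w -= (wk @ w) / (wk @ wk) * wk
            ww = w @ w
            if ww < 1e-12:          # (near-)collinear candidate, skip
                continue
            g = (w @ residual) / ww
            # error reduction g^2 * w'w, plus D-optimality reward
            score = g * g * ww + beta * np.log(ww)
            if score > best_score:
                best, best_score, best_w, best_g = j, score, w, g
        selected.append(best)
        W.append(best_w)
        residual = residual - best_g * best_w
    return selected

# Toy usage on random data (illustrative only)
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 12))
y = X[:, 3] - 0.5 * X[:, 7] + 0.1 * rng.normal(size=100)
print(forward_ols_doptimality(X, y, n_select=3))
```

Because each new regressor is orthogonalized on entry, both the error reduction and the log(w'w) term fall out of quantities the conventional forward OLS recursion already computes, which is the computational advantage the abstract refers to.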

Relevance: 10.00%

Abstract:

A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to effectively tackle this problem. A new simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises the model prediction error as well as penalising the model parameter variance. The utilisation of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.

Relevance: 10.00%

Abstract:

A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and the model adequacy. The derived model parameters are estimated via forward orthogonal least squares, but the subset selection cost function includes an A-optimality design criterion that minimizes the variance of the parameter estimates, which ensures the adequacy and parsimony of the final model. An illustrative example is included to demonstrate the effectiveness of the new approach.
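
Relative to the D-optimality sketch given earlier in this listing, only the selection score changes: A-optimality penalizes the trace of the inverse information matrix, which for orthogonal regressors reduces to the sum of 1/(wᵢᵀwᵢ). A hedged sketch of the modified score (the weight lam is illustrative, not the paper's value):

```python
def aopt_score(g, ww, lam=1e-3):
    """Error reduction g^2 * w'w minus an A-optimality penalty: for an
    orthogonal regressor, var(g) is proportional to 1/(w'w), so penalizing
    1/(w'w) discourages poorly determined parameters."""
    return g * g * ww - lam / ww
```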

Relevance: 10.00%

Abstract:

Determination of varicella zoster virus (VZV) immunity in healthcare workers without a history of chickenpox is important for identifying those in need of vOka vaccination. Post-immunisation, healthcare workers in the UK who work with high-risk patients are tested for seroconversion. To assess the performance of the time-resolved fluorescence immunoassay (TRFIA) for the detection of antibody in vaccinated as well as unvaccinated individuals, a cut-off was first calculated. VZV-IgG specific avidity and titres six weeks after the first dose of vaccine were used to identify subjects with pre-existing immunity among a cohort of 110 healthcare workers. Those with high avidity (≥60%) were considered to have previous immunity to VZV and those with low or equivocal avidity (<60%) were considered naive. The former had antibody levels ≥400 mIU/mL and the latter had levels <400 mIU/mL. Comparison of the baseline values of the naive and immune groups allowed the estimation of a TRFIA cut-off value of >130 mIU/mL which best discriminated between the two groups; this was confirmed by ROC analysis. Using this value, the sensitivity and specificity of the TRFIA cut-off were 90% (95% CI 79-96) and 78% (95% CI 61-90), respectively, in this population. A subset of samples tested by the gold standard Fluorescence Antibody to Membrane Antigen (FAMA) test showed 84% (54/64) agreement with TRFIA.
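
The cut-off construction can be illustrated with a short sketch: given baseline titres labelled immune or naive (here by avidity, as in the study), scan candidate thresholds and keep the one maximizing Youden's J, then report sensitivity and specificity at that threshold. The simulated titres below are purely illustrative, not the study's data.

```python
import numpy as np

def best_cutoff(immune_titres, naive_titres):
    """Pick the titre threshold that best separates immune from naive
    sera by maximizing Youden's J = sensitivity + specificity - 1."""
    immune = np.asarray(immune_titres, dtype=float)
    naive = np.asarray(naive_titres, dtype=float)
    best_t, best_j = None, -np.inf
    for t in np.unique(np.concatenate([immune, naive])):
        sens = np.mean(immune >= t)   # true positives among immune
        spec = np.mean(naive < t)     # true negatives among naive
        if sens + spec - 1.0 > best_j:
            best_t, best_j = t, sens + spec - 1.0
    return best_t

# Toy usage with simulated titres (mIU/mL)
rng = np.random.default_rng(3)
immune = rng.lognormal(mean=6.0, sigma=0.5, size=70)   # higher titres
naive = rng.lognormal(mean=4.0, sigma=0.6, size=40)    # lower titres
t = best_cutoff(immune, naive)
print(f"cut-off {t:.0f}, sensitivity {np.mean(immune >= t):.2f}, "
      f"specificity {np.mean(naive < t):.2f}")
```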

Relevance: 10.00%

Abstract:

A predominance of small, dense low-density lipoprotein (LDL) is a major component of an atherogenic lipoprotein phenotype, and a common, but modifiable, source of increased risk for coronary heart disease in the free-living population. While much of the atherogenicity of small, dense LDL is known to arise from its structural properties, the extent to which an increase in the number of small, dense LDL particles (hyper-apoprotein B) contributes to this risk of coronary heart disease is currently unknown. This study reports a method for the recruitment of free-living individuals with an atherogenic lipoprotein phenotype for a fish-oil intervention trial, and critically evaluates the relationship between LDL particle number and the predominance of small, dense LDL. In this group, volunteers were selected through local general practices on the basis of a moderately raised plasma triacylglycerol (triglyceride) level (>1.5 mmol/l) and a low concentration of high-density lipoprotein (HDL) cholesterol (<1.1 mmol/l). The screening of LDL subclasses revealed a predominance of small, dense LDL (LDL subclass pattern B) in 62% of the cohort. As expected, subjects with LDL subclass pattern B were characterized by higher plasma triacylglycerol and lower HDL cholesterol levels and, less predictably, by lower LDL cholesterol and apoprotein B levels (P<0.05; LDL subclass A compared with subclass B). While hyper-apoprotein B was detected in only five subjects, the relative percentage of small, dense LDL-III in subjects with subclass B showed an inverse relationship with LDL apoprotein B (r=-0.57; P<0.001), identifying a subset of individuals with plasma triacylglycerol above 2.5 mmol/l and a low concentration of LDL almost exclusively in a small and dense form. These findings indicate that a predominance of small, dense LDL and hyper-apoprotein B do not always co-exist in free-living groups. Moreover, if coronary risk increases with increasing LDL particle number, these results imply that the risk arising from a predominance of small, dense LDL may actually be reduced in certain cases when plasma triacylglycerol exceeds 2.5 mmol/l.

Relevance: 10.00%

Abstract:

A simple and coherent framework for partitioning uncertainty in multi-model climate ensembles is presented. The analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty and internal variability. This approach requires fewer assumptions than existing methods and can be easily used to quantify uncertainty related to model-scenario interaction: the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble, and the results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050 and internal variability remains approximately constant over the 21st century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multi-model ensembles. For example, three models are shown to exhibit diverging patterns of deviation over the 21st century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
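
A minimal sketch of the decomposition, assuming a balanced ensemble at a single lead time (models x scenarios x realizations); the standard two-way ANOVA sums of squares are used, though the paper's exact weighting and treatment of internal variability may differ.

```python
import numpy as np

def partition_uncertainty(x):
    """Two-way ANOVA decomposition of ensemble spread at one lead time.

    x : (n_models, n_scenarios, n_runs) array of projected anomalies.
    Returns each component's fraction of the total variation: scenario,
    model, model-scenario interaction, and internal variability
    (within-cell spread across realizations).
    """
    nm, ns, nr = x.shape
    grand = x.mean()
    model_m = x.mean(axis=(1, 2))            # (nm,) model means
    scen_m = x.mean(axis=(0, 2))             # (ns,) scenario means
    cell_m = x.mean(axis=2)                  # (nm, ns) cell means

    ss_model = ns * nr * np.sum((model_m - grand) ** 2)
    ss_scen = nm * nr * np.sum((scen_m - grand) ** 2)
    inter = cell_m - model_m[:, None] - scen_m[None, :] + grand
    ss_inter = nr * np.sum(inter ** 2)
    ss_internal = np.sum((x - cell_m[:, :, None]) ** 2)
    total = ss_model + ss_scen + ss_inter + ss_internal
    return {k: v / total for k, v in [
        ("scenario", ss_scen), ("model", ss_model),
        ("model-scenario interaction", ss_inter),
        ("internal variability", ss_internal)]}

# Toy usage: 5 models x 3 scenarios x 4 realizations (illustrative)
rng = np.random.default_rng(4)
sims = (rng.normal(size=(5, 1, 1)) + 2.0 * rng.normal(size=(1, 3, 1))
        + 0.5 * rng.normal(size=(5, 3, 4)))
print(partition_uncertainty(sims))
```

For a balanced design these four sums of squares add up exactly to the total variation, which is what makes the decomposition additive in the sense described above.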

Relevance: 10.00%

Abstract:

We give necessary and sufficient conditions for a pair of (generalized) functions ρ₁(r₁) and ρ₂(r₁, r₂), rᵢ ∈ X, to be the density and pair correlations of some point process in a topological space X, for example ℝ^d, ℤ^d, or a subset of these. This is an infinite-dimensional version of the classical "truncated moment" problem. Standard techniques apply in the case in which there can be only a bounded number of points in any compact subset of X. Without this restriction we obtain, for compact X, strengthened conditions which are necessary and sufficient for the existence of a process satisfying a further requirement: the existence of a finite third-order moment. We generalize the latter conditions in two distinct ways when X is not compact.

Relevance: 10.00%

Abstract:

A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods using a new conceptual categorization that reflects each method's strategy for defining types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configuration were harmonized to produce a subset of standard catalogs for the automated methods. The harmonization covers the data source, the climatic parameters used, the classification period, the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability, as well as trends in the annual frequency time series. The methodological concept of the classifications is partly reflected in these properties of the resulting catalogs. It is shown that, compared to automated methods, the types of subjective classifications show higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency towards longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role in the properties of the resulting classification as the algorithm used for type definition and assignment.

Relevance: 10.00%

Abstract:

A solution of the lidar equation is discussed that permits combining backscatter and depolarization measurements to quantitatively distinguish two different aerosol types with different depolarization properties. The method has been successfully applied to simultaneous observations of volcanic ash and boundary layer aerosol obtained in Exeter, United Kingdom, on 16 and 18 April 2010, permitting the contribution of the two aerosols to be quantified separately. First, a subset of the atmospheric profiles is used where the two aerosol types belong to clearly distinguished layers, for the purpose of characterizing the ash in terms of lidar ratio and depolarization. These quantities are then used in a three-component atmosphere solution scheme of the lidar equation applied to the full data set, in order to compute the optical properties of both aerosol types separately. On 16 April a thin ash layer, 100-400 m deep, is observed (average and maximum estimated ash optical depth: 0.11 and 0.2); it descends from ∼2800 to ∼1400 m altitude over a 6-hour period. On 18 April a double ash layer, ∼400 m deep, is observed just above the morning boundary layer (average and maximum estimated ash optical depth: 0.19 and 0.27). In the afternoon the ash is entrained into the boundary layer, and the latter reaches a depth of ∼1800 m (average and maximum estimated ash optical depth: 0.1 and 0.15). An additional ash layer, with a very small optical depth, was observed on 18 April at an altitude of 3500-4000 m. By converting the lidar optical measurements using estimates of volcanic ash specific extinction derived from other works, the observations suggest approximate peak ash concentrations of ∼1500 and ∼1000 μg/m³, respectively, on the two observation dates.
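
The separation step can be sketched with the standard two-component partition of particle backscatter used in the aerosol lidar literature, which attributes to ash the fraction implied by the measured depolarization ratio lying between the two pure-type values. This is a simplification of the paper's three-component solution scheme, and the depolarization ratios below are illustrative.

```python
import numpy as np

def separate_backscatter(beta_p, delta_p, delta_ash, delta_bl):
    """Split total particle backscatter into ash and boundary-layer
    contributions using their distinct depolarization ratios.

    beta_p    : particle backscatter profile (e.g. per m per sr)
    delta_p   : measured particle depolarization ratio profile
    delta_ash : assumed depolarization ratio of pure ash
    delta_bl  : assumed depolarization ratio of boundary-layer aerosol
    """
    beta_p = np.asarray(beta_p, dtype=float)
    delta_p = np.asarray(delta_p, dtype=float)
    frac = ((delta_p - delta_bl) * (1.0 + delta_ash)
            / ((delta_ash - delta_bl) * (1.0 + delta_p)))
    beta_ash = beta_p * np.clip(frac, 0.0, 1.0)  # keep physical bounds
    return beta_ash, beta_p - beta_ash

# Toy usage: mixed layer grading from clean aerosol to ash-dominated
beta = np.full(5, 2e-6)
delta = np.linspace(0.05, 0.35, 5)   # illustrative depolarization values
ash, bl = separate_backscatter(beta, delta, delta_ash=0.37, delta_bl=0.05)
```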

Relevance: 10.00%

Abstract:

This paper examines two hydrochemical time-series derived from stream samples taken in the Upper Hafren catchment, Plynlimon, Wales. One time-series comprises data collected at 7-hour intervals over 22 months (Neal et al., submitted, this issue), while the other is based on weekly sampling over 20 years. A subset of determinands (aluminium, calcium, chloride, conductivity, dissolved organic carbon, iron, nitrate, pH, silicon and sulphate) is examined within a framework of non-stationary time-series analysis to identify determinand trends, seasonality and short-term dynamics. The results demonstrate that both long-term and high-frequency monitoring provide valuable and unique insights into the hydrochemistry of a catchment. The long-term data allowed analysis of long-term trends, demonstrating continued increases in DOC concentrations accompanied by declining SO4 concentrations within the stream, and provided new insights into the changing amplitude and phase of the seasonality of determinands such as DOC and Al. Additionally, these data proved invaluable for placing the short-term variability demonstrated within the high-frequency data in context. The 7-hour data highlighted complex diurnal cycles for NO3, Ca and Fe, with cycles displaying changes in phase and amplitude on a seasonal basis. The high-frequency data also demonstrated the need to consider the impact that the time of sample collection can have on the summary statistics of the data, and showed that sampling during the hours of darkness provides additional hydrochemical information for determinands which exhibit pronounced diurnal variability. Moving forward, this research demonstrates the need for both long-term and high-frequency monitoring to facilitate a full and accurate understanding of catchment hydrochemical dynamics.

Relevance: 10.00%

Abstract:

Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent ‘double flux rope’ signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
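
For context, minimum variance analysis, one of the cylindrical reconstruction methods discussed, amounts to an eigen-decomposition of the magnetic field covariance matrix over the cloud interval; for a force-free rope the intermediate-variance eigenvector is commonly taken as the local axis estimate. A minimal sketch with a synthetic centred crossing (all values illustrative):

```python
import numpy as np

def mva_axis(B):
    """Minimum variance analysis of a magnetic field time series.

    B : (n, 3) field samples through the magnetic cloud. Returns the unit
    eigenvectors of the field covariance matrix sorted by variance
    (minimum, intermediate, maximum); for a force-free rope the
    intermediate-variance direction is commonly taken as the axis.
    """
    M = np.cov(np.asarray(B, dtype=float).T)   # 3x3 variance matrix
    _, vecs = np.linalg.eigh(M)                # eigh sorts ascending
    return vecs[:, 0], vecs[:, 1], vecs[:, 2]

# Toy usage: centred crossing of a synthetic rope with its axis along y
t = np.linspace(-1.0, 1.0, 200)
rng = np.random.default_rng(5)
B = np.column_stack([0.2 * rng.normal(size=t.size),  # noise (min variance)
                     8.0 * np.cos(np.pi * t / 2),    # axial (intermediate)
                     10.0 * np.sin(np.pi * t / 2)])  # tangential (max)
b_min, b_axis, b_max = mva_axis(B)                   # b_axis ~ [0, ±1, 0]
```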