950 results for kernel density method
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented in which three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), are evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models are proposed: one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions. All models are evaluated using five metrics under a wide range of values of the correlation coefficient, the Weibull scale factor, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
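The variance ratio method named above has a simple closed form: it rescales the long-term reference series so that the prediction reproduces both the mean and the standard deviation of the concurrent site data. A minimal Python sketch under that assumption (the synthetic Weibull series and variable names are illustrative, not the paper's setup):

```python
import numpy as np

def variance_ratio_mcp(site, ref, ref_longterm):
    """Variance-ratio MCP prediction: a linear map of reference wind speeds
    chosen to preserve the mean and variance of the concurrent site data."""
    mu_s, mu_r = site.mean(), ref.mean()
    slope = site.std(ddof=1) / ref.std(ddof=1)
    return mu_s + slope * (ref_longterm - mu_r)

# Illustrative synthetic series (Weibull-distributed, correlated sites).
rng = np.random.default_rng(0)
ref = rng.weibull(2.0, 5000) * 8.0                   # concurrent reference data
site = 0.9 * ref + rng.normal(0.0, 0.8, ref.size)    # correlated site data
longterm = rng.weibull(2.0, 20000) * 8.0             # historic reference record
pred = variance_ratio_mcp(site, ref, longterm)
```

By construction the predicted series matches the site's first two moments, which is exactly the property that distinguishes this method from a plain least-squares regression.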
Abstract:
Developments in high-throughput genotyping provide an opportunity to explore the application of marker technology in distinctness, uniformity and stability (DUS) testing of new varieties. We have used a large set of molecular markers to assess the feasibility of a UPOV Model 2 approach: “Calibration of threshold levels for molecular characteristics against the minimum distance in traditional characteristics”. We have examined 431 winter and spring barley varieties, with data from UK DUS trials comprising 28 characteristics, together with genotype data from 3072 SNP markers. Inter-varietal distances were calculated and we found higher correlations between molecular and morphological distances than have been previously reported. When varieties were grouped by kinship, phenotypic and genotypic distances of these groups correlated well. We estimated the minimum marker numbers required and showed there was a ceiling beyond which the correlations did not improve. To investigate the possibility of breaking through this ceiling, we attempted genomic prediction of phenotypes from genotypes, and higher correlations were achieved. We tested distinctness decisions made using either morphological or genotypic distances and found poor correspondence between the two methods.
Abstract:
Purpose: To quantify to what extent the new registration method, DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra), may reduce the smoothing kernel width required, and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated atrophy approach was employed to explore the role of smoothing kernel, group size, and their interactions on VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernels between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups with 50 participants, and 8–10 mm for groups of 25, at P < 0.05 with family-wise error correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, but only at P < 0.05 with false discovery rate correction. Conclusion: Our data confirmed that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency toward smaller kernels for larger groups. Importantly, the kernel selection was also affected by the threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
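The smoothing kernels above are quoted as FWHM values in millimetres, while smoothing routines take a standard deviation per voxel; the conversion is sigma = FWHM / (2 * sqrt(2 * ln 2)). A small sketch of that smoothing step (illustrative only; DARTEL itself is the registration stage, not reproduced here, and the voxel size is an assumed value):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_vbm(volume, fwhm_mm, voxel_mm):
    """Gaussian smoothing with the kernel specified as FWHM in mm,
    as is conventional in VBM; converts FWHM to sigma in voxel units."""
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~ fwhm / 2.355
    sigma_vox = sigma_mm / np.asarray(voxel_mm, dtype=float)
    return gaussian_filter(volume, sigma=sigma_vox)

# Toy volume: a single-voxel impulse, spread by a 6 mm kernel.
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
smoothed = smooth_vbm(vol, fwhm_mm=6.0, voxel_mm=(1.5, 1.5, 1.5))
```

Because the Gaussian kernel is normalised, the total grey-matter "mass" of the volume is preserved; only its spatial concentration changes, which is why kernel width trades sensitivity against localisation.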
Abstract:
We consider integral equations on the half-line of the form x(s) = y(s) + ∫₀^∞ k(s,t) x(t) dt and the finite section approximation x_β to x obtained by replacing the infinite limit of integration by the finite limit β. We establish conditions under which, if the finite section method is stable for the original integral equation (i.e. x_β exists and is uniformly bounded in the space of bounded continuous functions for all sufficiently large β), then it is stable also for a perturbed equation in which the kernel k is replaced by k + h. The class of perturbations allowed includes all compact and some non-compact perturbations of the integral operator. Using this result we study the stability and convergence of the finite section method in the space of continuous functions x for which (1+s)^p x(s) is bounded. With the additional assumption that |k(s,t)| ≤ κ(s−t), where κ ∈ L¹(ℝ) and κ(s) = O(s^(−q)) as s → +∞ for some q > 1, we show that the finite section method is stable in the weighted space for 0 ≤ p ≤ q, provided it is stable on the space of bounded continuous functions. With these results we establish error bounds in weighted spaces for x − x_β and precise information on the asymptotic behaviour at infinity of x. We consider in particular the case when the integral operator is a perturbation of a Wiener-Hopf operator and illustrate this case with a Wiener-Hopf integral equation arising in acoustics.
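The finite section computation itself can be pictured with a Nyström (quadrature) discretisation of the truncated equation on [0, β]. A toy Python sketch under illustrative choices (the convolution kernel, right-hand side, and trapezoidal rule are examples, not the paper's analysis); since the kernel's L¹ norm is 0.5 < 1, the equation is uniquely solvable and the solution on a fixed interval should barely change as β grows:

```python
import numpy as np

def finite_section_solve(k, y, beta, h=0.05):
    """Nystrom (trapezoidal-rule) solution of the truncated equation
    x(s) = y(s) + int_0^beta k(s,t) x(t) dt, as a dense linear system."""
    n = int(round(beta / h)) + 1
    s = np.linspace(0.0, beta, n)
    w = np.full(n, h)
    w[0] = w[-1] = 0.5 * h                      # trapezoidal weights
    K = k(s[:, None], s[None, :]) * w[None, :]  # discretised integral operator
    return s, np.linalg.solve(np.eye(n) - K, y(s))

# Illustrative convolution-type kernel with L1 norm 0.5, and decaying data.
k = lambda s, t: 0.25 * np.exp(-np.abs(s - t))
y = lambda s: np.exp(-s)
s10, x10 = finite_section_solve(k, y, beta=10.0)
s20, x20 = finite_section_solve(k, y, beta=20.0)
```

Comparing x_10 and x_20 on the shared nodes in [0, 5] gives a numerical illustration of the stability the abstract analyses: enlarging β leaves the solution on a fixed interval essentially unchanged.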
Abstract:
High-density oligonucleotide (oligo) arrays are a powerful tool for transcript profiling. Arrays based on GeneChip® technology are amongst the most widely used, although GeneChip® arrays are currently available for only a small number of plant and animal species. Thus, we have developed a method to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species and tested the method by analysing the transcriptome of Brassica oleracea L., a species for which no GeneChip® array is available, using a GeneChip® array designed for Arabidopsis thaliana (L.) Heynh. Genomic DNA from B. oleracea was labelled and hybridised to the ATH1-121501 GeneChip® array. Arabidopsis thaliana probe-pairs that hybridised to the B. oleracea genomic DNA on the basis of the perfect-match (PM) probe signal were then selected for subsequent B. oleracea transcriptome analysis using a .cel file parser script to generate probe mask files. The transcriptional response of B. oleracea to a mineral nutrient (phosphorus; P) stress was quantified using probe mask files generated for a wide range of gDNA hybridisation intensity thresholds. An example probe mask file generated with a gDNA hybridisation intensity threshold of 400 removed > 68 % of the available PM probes from the analysis but retained >96 % of available A. thaliana probe-sets. Ninety-nine of these genes were then identified as significantly regulated under P stress in B. oleracea, including the homologues of P stress responsive genes in A. thaliana. Increasing the gDNA hybridisation intensity thresholds up to 500 for probe-selection increased the sensitivity of the GeneChip® array to detect regulation of gene expression in B. oleracea under P stress by up to 13-fold. 
Our open-source software to create probe mask files is freely available at http://affymetrix.arabidopsis.info/xspecies/ and may be used to facilitate transcriptomic analyses of a wide range of plant and animal species in the absence of custom arrays.
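The probe-mask idea above can be pictured as a simple filter: keep only the PM probes whose gDNA hybridisation signal clears the chosen threshold, then drop any probe-set left with no probes. A toy sketch (the data layout and probe-set identifiers are illustrative; the actual .cel file parsing is not reproduced):

```python
def make_probe_mask(gdna_signals, threshold):
    """gdna_signals: mapping of probe-set ID -> list of PM probe gDNA
    intensities. Returns, per probe-set, the indices of probes retained at
    `threshold`; probe-sets that lose every probe are dropped entirely."""
    mask = {ps: [i for i, v in enumerate(vals) if v >= threshold]
            for ps, vals in gdna_signals.items()}
    return {ps: idx for ps, idx in mask.items() if idx}

# Hypothetical probe-sets: one keeps 2 of 3 probes at threshold 400,
# the other falls below the threshold everywhere and is removed.
toy = {"set_A": [650, 120, 900], "set_B": [90, 110, 60]}
mask = make_probe_mask(toy, threshold=400)
```

This mirrors the behaviour reported in the abstract: a stringent threshold can remove a large fraction of individual probes while most probe-sets survive with at least one informative probe.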
Abstract:
One of the prerequisites for achieving skill in decadal climate prediction is to initialize and predict the circulation in the Atlantic Ocean successfully. The RAPID array measures the Atlantic Meridional Overturning Circulation (MOC) at 26°N. Here we develop a method to include these observations in the Met Office Decadal Prediction System (DePreSys). The proposed method uses covariances of overturning transport anomalies at 26°N with ocean temperature and salinity anomalies throughout the ocean to create the density structure necessary to reproduce the observed transport anomaly. Assimilating transport alone in this way effectively reproduces the observed transport anomalies at 26°N and is better than using basin-wide temperature and salinity observations alone. However, when the transport observations are combined with in situ temperature and salinity observations in the analysis, the transport is not currently reproduced so well. The reasons for this are investigated using pseudo-observations in a twin experiment framework. Sensitivity experiments show that the MOC on monthly time-scales, at least in the HadCM3 model, is modulated by a mechanism where non-local density anomalies appear to be more important for transport variability at 26°N than local density gradients.
Abstract:
Predictions of twenty-first century sea level change show strong regional variation. Regional sea level change observed by satellite altimetry since 1993 is also not spatially homogenous. By comparison with historical and pre-industrial control simulations using the atmosphere–ocean general circulation models (AOGCMs) of the CMIP5 project, we conclude that the observed pattern is generally dominated by unforced (internally generated) variability, although some regions, especially in the Southern Ocean, may already show an externally forced response. Simulated unforced variability cannot explain the observed trends in the tropical Pacific, but we suggest that this is due to inadequate simulation of variability by CMIP5 AOGCMs, rather than evidence of anthropogenic change. We apply the method of pattern scaling to projections of sea level change and show that it gives accurate estimates of future local sea level change in response to anthropogenic forcing as simulated by the AOGCMs under RCP scenarios, implying that the pattern will remain stable in future decades. We note, however, that use of a single integration to evaluate the performance of the pattern-scaling method tends to exaggerate its accuracy. We find that ocean volume mean temperature is generally a better predictor than global mean surface temperature of the magnitude of sea level change, and that the pattern is very similar under the different RCPs for a given model. We determine that the forced signal will be detectable above the noise of unforced internal variability within the next decade globally and may already be detectable in the tropical Atlantic.
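Pattern scaling assumes the local change factors into a fixed spatial pattern times a global predictor (here, the abstract argues ocean volume mean temperature works better than surface temperature). A minimal least-squares sketch of that idea with synthetic numbers (illustrative; real applications fit AOGCM ensemble output):

```python
import numpy as np

def fit_pattern(global_mean, local_field):
    """Least-squares pattern p(x) in the model local(x, t) ~ p(x) * g(t),
    fitted over the time axis for each location independently."""
    g = np.asarray(global_mean, dtype=float)
    return (g[:, None] * local_field).sum(axis=0) / (g ** 2).sum()

def scale_pattern(pattern, future_global_mean):
    """Project local change for one or more future global-mean values."""
    return np.outer(np.atleast_1d(future_global_mean), pattern)

# Synthetic demonstration: three locations with different amplitudes.
true_pattern = np.array([0.6, 1.4, 1.0])
g_hist = np.linspace(0.1, 1.0, 50)           # historical global-mean change
local = np.outer(g_hist, true_pattern)       # perfectly scalable field
p = fit_pattern(g_hist, local)
proj = scale_pattern(p, 2.0)                 # projection at g = 2.0
```

When the factorisation holds exactly, as in this synthetic case, the fitted pattern and the projection are recovered without error; the abstract's point is that real AOGCM fields come close enough for this to be useful under the RCPs.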
Abstract:
The components of many signaling pathways have been identified and there is now a need to conduct quantitative data-rich temporal experiments for systems biology and modeling approaches to better understand pathway dynamics and regulation. Here we present a modified Western blotting method that allows the rapid and reproducible quantification and analysis of hundreds of data points per day on proteins and their phosphorylation state at individual sites. The approach is of particular use where samples show a high degree of sample-to-sample variability such as primary cells from multiple donors. We present a case study on the analysis of >800 phosphorylation data points from three phosphorylation sites in three signaling proteins over multiple time points from platelets isolated from ten donors, demonstrating the technique's potential to determine kinetic and regulatory information from limited cell numbers and to investigate signaling variation within a population. We envisage the approach being of use in the analysis of many cellular processes such as signaling pathway dynamics to identify regulatory feedback loops and the investigation of potential drug/inhibitor responses, using primary cells and tissues, to generate information about how a cell's physiological state changes over time.
Abstract:
Background: Oxidative modification of low-density lipoprotein (LDL) plays a key role in the pathogenesis of atherosclerosis. LDL(-) is present in blood plasma of healthy subjects and at higher concentrations in diseases with high cardiovascular risk, such as familial hypercholesterolemia or diabetes. Methods: We developed and validated a sandwich ELISA for LDL(-) in human plasma using two monoclonal antibodies against LDL(-) that do not bind to native LDL, extensively copper-oxidized LDL or malondialdehyde-modified LDL. The characteristics of assay performance, such as limits of detection and quantification, accuracy, and intra- and inter-assay precision, were evaluated. Linearity, interference and stability tests were also performed. Results: The calibration range of the assay is 0.625-20.0 mU/L at 1:2000 sample dilution. ELISA validation showed intra- and inter-assay precision and recovery within the required limits for immunoassays. The limits of detection and quantification were 0.423 mU/L and 0.517 mU/L LDL(-), respectively. The intra- and inter-assay coefficients of variation ranged from 9.5% to 11.5% and from 11.3% to 18.9%, respectively. Recovery of LDL(-) ranged from 92.8% to 105.1%. Conclusions: This ELISA represents a very practical tool for measuring LDL(-) in human blood for widespread research and clinical sample use. Clin Chem Lab Med 2008; 46: 1769-75.
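The precision and recovery figures quoted above reduce to two standard formulas: the coefficient of variation across replicates and the measured-to-nominal ratio of a spiked sample. A short sketch with made-up replicate values:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation in %: 100 * sd / mean of replicate
    measurements (the intra-/inter-assay precision figure)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def recovery_percent(measured, expected):
    """Recovery in %: measured concentration relative to the nominal
    (spiked) concentration."""
    return 100.0 * measured / expected

cv = cv_percent([9.0, 10.0, 11.0])   # -> 10.0 (illustrative replicates)
rec = recovery_percent(9.5, 10.0)    # -> 95.0 (illustrative spike)
```

A CV in the 9.5-11.5% range and recoveries between 92.8% and 105.1%, as reported, sit within the limits conventionally accepted for immunoassays.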
Abstract:
A method for the determination of volatile organic compounds (VOCs) in recycled polyethylene terephthalate and high-density polyethylene using headspace sampling by solid-phase microextraction and gas chromatography coupled to mass spectrometric detection is presented. This method was used to evaluate the efficiency of cleaning processes for VOC removal from recycled PET. In addition, the method was also employed to evaluate the level of VOC contamination in multilayer packaging material containing recycled HDPE material. The optimisation of the extraction procedure for volatile compounds was performed and the best extraction conditions were found using a 75 μm carboxen-polydimethylsiloxane (CAR-PDMS) fibre for 20 min at 60 °C. The validation parameters for the established method were linear range, linearity, sensitivity, precision (repeatability), accuracy (recovery) and detection and quantification limits. The results indicated that the method could easily be used in quality control for the production of recycled PET and HDPE. © 2011 Elsevier B.V. All rights reserved.
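The detection and quantification limits named among the validation parameters are commonly estimated from a calibration line as 3.3·s/m and 10·s/m, with s the residual standard deviation and m the slope; this is an assumption here, since the abstract does not say which estimator was used. A sketch with illustrative calibration data:

```python
import numpy as np

def lod_loq(conc, response):
    """LOD and LOQ from a linear calibration using the common 3.3*s/m and
    10*s/m rules (s = residual standard deviation, m = slope). The choice
    of estimator is an assumption, not taken from the paper."""
    slope, intercept = np.polyfit(conc, response, 1)
    resid = np.asarray(response) - (slope * np.asarray(conc) + intercept)
    s = resid.std(ddof=2)            # residual sd, 2 fitted parameters
    return 3.3 * s / slope, 10.0 * s / slope

# Hypothetical five-level calibration with small residual scatter.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
resp = 2.0 * conc + 0.1 + np.array([0.02, -0.01, 0.03, -0.02, 0.01])
lod, loq = lod_loq(conc, resp)
```

By construction LOQ/LOD is fixed at 10/3.3 ≈ 3.03 under these rules, which is why validated methods routinely quote both limits from the same calibration.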
Abstract:
We have developed a spectrum synthesis method for modeling the ultraviolet (UV) emission from the accretion disk of cataclysmic variables (CVs). The disk is separated into concentric rings, with an internal structure from the Wade & Hubeny disk-atmosphere models. For each ring, a wind atmosphere is calculated in the comoving frame with a vertical velocity structure obtained from a solution of the Euler equation. Using simple assumptions regarding rotation and the wind streamlines, these one-dimensional models are combined into a single 2.5-dimensional model for which we compute synthetic spectra. We find that the resulting line and continuum behavior as a function of the orbital inclination is consistent with the observations, and verify that the accretion rate affects the wind temperature, leading to corresponding trends in the intensity of UV lines. In general, we also find that the primary mass has a strong effect on the P Cygni absorption profiles, that the synthetic emission line profiles are strongly sensitive to the wind temperature structure, and that an increase in the mass-loss rate enhances the resonance line intensities. Synthetic spectra were compared with UV data for two high-orbital-inclination nova-like CVs, RW Tri and V347 Pup. We needed to include disk regions with arbitrarily enhanced mass loss to reproduce the line widths and profiles reasonably well. This fact, and a lack of flux in some high-ionization lines, may be the signature of density-enhanced regions in the wind or, alternatively, may result from inadequacies in some of our simplifying assumptions.
Abstract:
In this paper, we construct a dynamic portrait of the inner asteroidal belt. We use information about the distribution of test particles, which were initially placed on a perfectly rectangular grid of initial conditions, after 4.2 Myr of gravitational interactions with the Sun and five planets, from Mars to Neptune. Using the spectral analysis method introduced by Michtchenko et al., the asteroidal behaviour is illustrated in detail on the dynamical, averaged and frequency maps. On the averaged and frequency maps, we superpose information on the proper elements and proper frequencies of real objects, extracted from the AstDyS database constructed by Milani and Knezevic. A comparison of the maps with the distribution of real objects allows us to detect possible dynamical mechanisms acting in the domain under study; these mechanisms are related to mean-motion and secular resonances. We note that the two- and three-body mean-motion resonances and the secular resonances (strong linear and weaker non-linear) play an important role in the diffusive transport of the objects. Their long-lasting action, overlaid with the Yarkovsky effect, may explain many observed features of the density, size and taxonomic distributions of the asteroids.
Abstract:
Emission line ratios have been essential for determining physical parameters such as gas temperature and density in astrophysical gaseous nebulae. With the advent of panoramic spectroscopic devices, images of regions with emission lines related to these physical parameters can, in principle, also be produced. We show that, with observations from modern instruments, it is possible to transform images taken from density-sensitive forbidden lines into images of emission from high- and low-density clouds by applying a transformation matrix. In order to achieve this, images of the pairs of density-sensitive lines as well as the adjacent continuum have to be observed and combined. We have computed the critical densities for a series of pairs of lines in the infrared, optical, ultraviolet and X-ray bands, and calculated the pair line intensity ratios in the high- and low-density limits using a four- and five-level atom approximation. In order to illustrate the method, we applied it to Gemini Multi-Object Spectrograph (GMOS) Integral Field Unit (GMOS-IFU) data of two galactic nuclei. We conclude that this method provides new information of astrophysical interest, especially for mapping low- and high-density clouds; for this reason, we call it "the ld/hd imaging method".
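The transformation-matrix idea can be sketched as a per-pixel 2x2 inversion: if the line pair has known intensity ratios in the high- and low-density limits, each pixel's two line fluxes decompose linearly into a high-density and a low-density emission component. A toy Python sketch under that assumed model (the ratio values and images are illustrative):

```python
import numpy as np

def hd_ld_decompose(f1, f2, r_high, r_low):
    """Decompose two line-pair images per pixel under the assumed model
        f1 = r_high * e_h + r_low * e_l,   f2 = e_h + e_l,
    where r_high/r_low are the pair ratios in the density limits and
    e_h/e_l are the high-/low-density emission components."""
    M = np.array([[r_high, r_low], [1.0, 1.0]])
    inv = np.linalg.inv(M)                       # the transformation matrix
    stacked = np.stack([np.ravel(f1), np.ravel(f2)])
    e_h, e_l = inv @ stacked
    return e_h.reshape(np.shape(f1)), e_l.reshape(np.shape(f1))

# Forward-model a tiny "image" and recover the components.
e_h_true = np.array([[1.0, 0.0], [2.0, 0.5]])
e_l_true = np.array([[0.2, 1.0], [0.0, 0.3]])
f1 = 3.0 * e_h_true + 0.5 * e_l_true             # illustrative r_high = 3.0
f2 = e_h_true + e_l_true                         # illustrative r_low  = 0.5
e_h, e_l = hd_ld_decompose(f1, f2, r_high=3.0, r_low=0.5)
```

The inversion is exact whenever the two limiting ratios differ, which is precisely the property that makes a density-sensitive pair usable for this kind of imaging.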
Abstract:
A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) with additive noise, supposed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the Bootstrap Filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operations count in the Diffusion Kernel Filter is shown to be smaller than in the Bootstrap Filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and the analysis of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of a deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e. defined) for a Gaussian-initial deterministic problem to be cast into an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and for the definition of prediction.
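The Bootstrap Filter that serves as the baseline above is a simple predict/weight/resample cycle. A generic sketch of one such step (the scalar model, observation noise, and seed are illustrative, not the paper's Lorenz-63 setup):

```python
import numpy as np

def bootstrap_step(particles, propagate, log_lik, obs, rng):
    """One Bootstrap Filter cycle: propagate particles through the
    dynamics, weight by the observation likelihood, then resample."""
    pred = propagate(particles, rng)             # sample dynamics forward
    logw = log_lik(obs, pred)
    w = np.exp(logw - logw.max())                # stabilised weights
    w /= w.sum()
    idx = rng.choice(len(pred), size=len(pred), p=w)
    return pred[idx]                             # equally-weighted posterior

# Toy scalar example: prior ~ N(0, 1), near-identity dynamics,
# Gaussian observation y = 2.0 with sd 0.5.
rng = np.random.default_rng(2)
particles = rng.normal(0.0, 1.0, 5000)
propagate = lambda p, r: p + 0.01 * r.normal(size=p.shape)
log_lik = lambda y, x: -0.5 * ((y - x) / 0.5) ** 2
post = bootstrap_step(particles, propagate, log_lik, 2.0, rng)
```

In this linear-Gaussian toy case the exact posterior mean is 1.6, so the particle cloud should contract toward that value; the Diffusion Kernel Filter's claim is that, for small noise and short steps, a cheaper parametrized fluctuation model can replace the full Monte Carlo propagation.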
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by each point is taken relative to the center of the field. In the digitized mammography, the percentages of the optical density of all pixels of the analyzed image are also calculated. The Heel effect produces a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing the automatic determination of the correlation between these two sets of data. The positions obtained with our proposed method differ on average by 2.49 mm in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to it from those given by the commercial equipment. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their x-ray absorption. To evaluate the method, experimental data were taken from known objects; the evaluation could also be performed with clinical digital images.
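Once the nonuniform field has been simulated, the correction itself amounts to dividing each pixel by the simulated intensity, normalised so the field centre is unchanged. A toy sketch of just that division step (the synthetic field below is illustrative; locating the field centre and anode-cathode axis from the characteristic distributions is not reproduced):

```python
import numpy as np

def heel_correct(image, simulated_field):
    """Divide out the simulated Heel-effect intensity map, normalised so
    the pixel at the field centre keeps its original value."""
    cy = simulated_field.shape[0] // 2
    cx = simulated_field.shape[1] // 2
    return image * (simulated_field[cy, cx] / simulated_field)

# Toy check: a flat object imaged through a known nonuniform field should
# come back flat after correction. The field shape here is made up
# (Gaussian across one axis, a mild gradient along the other).
yy, xx = np.mgrid[0:64, 0:64]
field = np.exp(-((xx - 32) ** 2) / (2 * 15.0 ** 2)) * (1.0 + 0.002 * yy)
flat = np.full((64, 64), 5.0)
raw = flat * field / field[32, 32]      # simulated acquisition
corrected = heel_correct(raw, field)
```

After the division, pixel values depend only on the object's x-ray absorption, which is the stated goal of removing around 94% of the Heel effect from the radiological image.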