16 results for cosmological parameters from CMBR
in CentAUR: Central Archive University of Reading - UK
Abstract:
This paper presents a first attempt to estimate mixing parameters from sea level observations using a particle method based on importance sampling. The method is applied to a 128-member ensemble of simulations with a global ocean general circulation model of high complexity. Idealized twin experiments demonstrate that the method is able to accurately reconstruct mixing parameters from an observed mean sea level field when mixing is assumed to be spatially homogeneous. An experiment with inhomogeneous eddy coefficients fails because of the limited ensemble size. This is overcome by the introduction of local weighting, which is able to capture spatial variations in mixing qualitatively. As the sensitivity of sea level to variations in mixing is higher for low values of the mixing coefficients, the method works relatively well in regions of low eddy activity.
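As an illustration of the importance-sampling idea described in this abstract, the following minimal Python sketch (synthetic toy forward model and made-up parameter values, not the authors' code) weights ensemble members by how well their predicted sea level field matches the observations and forms a weighted estimate of a hypothetical scalar mixing coefficient.

```python
# Minimal importance-sampling sketch (toy example, not the paper's code):
# weight ensemble members by the likelihood of an observed field and form
# a weighted estimate of a (hypothetical) scalar mixing coefficient.
import numpy as np

rng = np.random.default_rng(0)

n_members, n_obs = 128, 50
kappa_true = 1000.0                                   # "true" mixing coefficient (toy value)
kappa_ens = rng.uniform(500.0, 2000.0, n_members)     # prior ensemble of mixing coefficients

def sea_level(kappa):
    """Stand-in forward model mapping mixing to a mean sea level field."""
    x = np.linspace(0.0, 1.0, n_obs)
    return 0.5 * np.tanh((kappa / 1000.0) * x)

obs_err = 0.01
y_obs = sea_level(kappa_true) + rng.normal(0.0, obs_err, n_obs)

# Importance weights proportional to the Gaussian observation likelihood
misfit = np.array([np.sum((y_obs - sea_level(k)) ** 2) for k in kappa_ens])
logw = -0.5 * misfit / obs_err**2
w = np.exp(logw - logw.max())
w /= w.sum()

kappa_est = np.sum(w * kappa_ens)                     # weighted (posterior-mean) estimate
print(f"estimated mixing coefficient: {kappa_est:.1f} (true {kappa_true})")
```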
Abstract:
Herd Companion uses routine milk‐recording records to generate twelve‐month rolling averages that indicate performance trends. This article looks at Herd Somatic Cell Count (SCC) and four other SCC‐related parameters from 252 National Milk Records (NMR) recorded herds to assess how each parameter correlates with the Herd SCC. The analysis provides evidence for the importance of targeting individual cows with high SCC recordings (>200,000 cells/ml and >500,000 cells/ml) and/or individual cows with repeatedly high SCC recordings (chronic high SCC) and/or cows that begin lactation with a high SCC recording (dry period infection) in order to achieve bulk milk Herd SCC below 200,000 cells/ml.
Abstract:
Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a 2006 flood event on the River Dee, North Wales, UK, from a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Approximating the water surface as a plane between the gauged upstream and downstream water elevations, the water surface elevations at points along the flooded extent are compared with their ‘expected’ values. The errors between the two are roughly normally distributed; however, when plotted against the point coordinates they show obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors with the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a T-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this T-test are selected on the basis of their error, and the selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
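The error diagnostics mentioned above can be illustrated with a short Python sketch (synthetic elevations, not the study's data): compute the modelled-minus-observed errors at the shoreline points, their RMSE, and a paired t-test on the two sets of elevations.

```python
# Minimal sketch (not the study's code) of the error diagnostics described above:
# compare modelled and observed water surface elevations at shoreline points,
# report RMSE, and test whether the mean error differs from zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_points = 60
observed = 10.0 + 0.02 * np.arange(n_points) + rng.normal(0.0, 0.8, n_points)  # toy elevations (m)
modelled = 10.0 + 0.02 * np.arange(n_points) + rng.normal(0.0, 0.3, n_points)

errors = modelled - observed
rmse = np.sqrt(np.mean(errors ** 2))

# Paired t-test on modelled vs observed elevations (equivalently a one-sample
# test of the errors against zero).
t_stat, p_value = stats.ttest_rel(modelled, observed)

print(f"RMSE = {rmse:.2f} m, t = {t_stat:.2f}, p = {p_value:.3f}")
```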
Abstract:
The alignment of the model amyloid peptide YYKLVFFC is investigated in bulk and at a solid surface using a range of spectroscopic methods employing polarized radiation. The peptide is based on a core sequence of the amyloid beta (Aβ) peptide, KLVFF. The attached tyrosine and cysteine units are exploited to yield information on alignment and possible formation of disulfide or dityrosine links. Polarized Raman spectroscopy on aligned stalks provides information on tyrosine orientation, which complements data from linear dichroism (LD) on aqueous solutions subjected to shear in a Couette cell. LD provides a detailed picture of the alignment of peptide strands and aromatic residues and was also used to probe the kinetics of self-assembly. This suggests initial association of phenylalanine residues, followed by registry of strands and orientation of tyrosine residues. X-ray diffraction (XRD) data from aligned stalks are used to extract orientational order parameters from the 0.48 nm reflection in the cross-beta pattern, from which an orientational distribution function is obtained. X-ray diffraction on solutions subject to capillary flow confirmed orientation in situ at the level of the cross-beta pattern. The information on fibril and tyrosine orientation from polarized Raman spectroscopy is compared with results from NEXAFS experiments on samples prepared as films on silicon. This indicates that fibrils are aligned parallel to the surface, with phenyl ring normals perpendicular to the surface. Possible disulfide bridging leading to peptide dimer formation was excluded by Raman spectroscopy, whereas dityrosine formation was probed by fluorescence experiments and was found not to occur except under alkaline conditions. Congo red binding was found not to influence the cross-beta XRD pattern.
Abstract:
A unique parameterization of the perspective projections in all whole-numbered dimensions is reported. The algorithm for generating a perspective transformation from parameters and for recovering parameters from a transformation is a modification of the Givens orthogonalization algorithm. The algorithm for recovering a perspective transformation from a perspective projection is a modification of Roberts' classical algorithm. Both algorithms have been implemented in Pop-11 with call-out to the NAG Fortran libraries. Preliminary Monte Carlo tests show that the transformation algorithm is highly accurate, but that the projection algorithm cannot recover magnitude and shear parameters accurately. However, there is reason to believe that the projection algorithm might improve significantly with the use of many corresponding points, or with multiple perspective views of an object. Previous parameterizations of the perspective transformations in the computer graphics and computer vision literature are discussed.
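For readers unfamiliar with Givens orthogonalization, the Python sketch below shows a plain Givens-rotation QR factorization; the paper's parameterization is a modification of this idea and is not reproduced here.

```python
# Plain Givens orthogonalization (QR by Givens rotations) as a minimal sketch;
# the paper's modified algorithm is not reproduced here.
import numpy as np

def givens_qr(A):
    """Return Q, R with A = Q @ R, using Givens rotations to zero subdiagonals."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]    # apply rotation to rows i-1, i
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T  # accumulate the orthogonal factor
    return Q, R

A = np.random.default_rng(2).normal(size=(4, 4))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))
```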
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
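A minimal Python sketch of the "algorithmic description" idea: sample branch properties from a handful of entirely hypothetical parameter distributions and grow a virtual dendritic tree recursively. This only illustrates the approach; it is not L-NEURON or ARBORVITAE.

```python
# Minimal sketch of the "algorithmic description" idea (hypothetical parameters,
# not L-NEURON or ARBORVITAE): sample branch properties from measured-style
# distributions and grow a virtual dendritic tree recursively.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "fundamental" parameter distributions for one morphological class
PARAMS = {
    "branch_length_mean": 40.0,   # micrometres
    "branch_length_sd": 10.0,
    "taper": 0.9,                 # child/parent diameter ratio
    "bifurcation_prob": 0.55,     # probability a branch bifurcates
    "max_order": 6,
}

def grow(diameter, order=0):
    """Return a nested dict describing one (virtual) dendritic subtree."""
    length = max(1.0, rng.normal(PARAMS["branch_length_mean"],
                                 PARAMS["branch_length_sd"]))
    node = {"diameter": diameter, "length": length, "children": []}
    if order < PARAMS["max_order"] and rng.random() < PARAMS["bifurcation_prob"]:
        for _ in range(2):  # bifurcation: two daughter branches
            node["children"].append(grow(diameter * PARAMS["taper"], order + 1))
    return node

def count_branches(node):
    return 1 + sum(count_branches(c) for c in node["children"])

virtual_dendrite = grow(diameter=2.0)
print("branches in virtual dendrite:", count_branches(virtual_dendrite))
```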
Abstract:
This paper presents evidence for several features of the population of chess players, and the distribution of their performances measured in terms of Elo ratings and by computer analysis of moves. Evidence that ratings have remained stable since the inception of the Elo system in the 1970s is given in several forms: by showing that the population of strong players fits a simple logistic-curve model without inflation, by plotting players’ average error against the FIDE category of tournaments over time, and by showing that skill parameters from a model that employs computer analysis keep a nearly constant relation to Elo rating across that time. The distribution of the model’s Intrinsic Performance Ratings can hence be used to compare populations that have limited interaction, such as players in a national chess federation and in FIDE, and to ascertain relative drift in their respective rating systems.
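The logistic-curve model mentioned above can be illustrated with a short Python sketch (synthetic counts, not the paper's data) that fits a three-parameter logistic growth curve to the number of strong rated players per year.

```python
# Minimal sketch (synthetic data, not the paper's data) of fitting a simple
# logistic growth curve to the number of strong rated players over time.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(1971, 2012)
rng = np.random.default_rng(4)
counts = logistic(years, 2000.0, 0.15, 1995.0) + rng.normal(0.0, 30.0, years.size)

popt, _ = curve_fit(logistic, years, counts, p0=(1500.0, 0.1, 1990.0))
K_hat, r_hat, t0_hat = popt
print(f"K = {K_hat:.0f}, r = {r_hat:.3f}, t0 = {t0_hat:.1f}")
```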
Abstract:
A procedure is presented for obtaining conformational parameters from oriented but non-crystalline polymers. This is achieved by comparison of the experimental wide angle X-ray scattering with that calculated from models but in such a way that foreknowledge of the orientation distribution function is not required. X-ray scattering intensity values for glassy isotactic poly(methylmethacrylate) are analysed by these techniques. The method could be usefully applied to other oriented molecular systems such as liquid crystalline materials.
Abstract:
Procedures for obtaining molecular orientational parameters from wide angle X-ray scattering patterns of samples of thermotropic liquid crystalline polymers are presented. The methods described are applied to an extrusion-aligned sample of a random copolyester of poly(ethylene terephthalate) (PET) and p-acetoxybenzoic acid. Values of the orientational parameters are obtained from both the interchain and intrachain maxima in the scattering pattern. The differences in the values so derived suggest some level of local rotational correlation.
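A common way to express such orientational parameters is the Hermans orientation parameter ⟨P2⟩ computed from an azimuthal intensity profile; the Python sketch below (synthetic profile, not the papers' data) shows the standard calculation.

```python
# Minimal sketch (synthetic azimuthal profile, not the papers' data) of the
# standard Hermans orientation parameter <P2> computed from an azimuthal
# X-ray intensity distribution I(phi).
import numpy as np

phi = np.linspace(0.0, np.pi / 2.0, 181)          # azimuthal angle from alignment axis (rad)
I = np.exp(-(phi / 0.3) ** 2) + 0.05              # toy azimuthal intensity profile

sin_phi = np.sin(phi)
cos2 = np.cos(phi) ** 2
dphi = phi[1] - phi[0]

# <cos^2 phi> weighted by sin(phi), then P2 = (3<cos^2 phi> - 1) / 2
mean_cos2 = np.sum(I * cos2 * sin_phi) * dphi / (np.sum(I * sin_phi) * dphi)
P2 = 0.5 * (3.0 * mean_cos2 - 1.0)
print(f"<P2> = {P2:.3f}")
```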
Abstract:
Data are presented from the EISCAT (European Incoherent Scatter Facility) CP-3-E experiment which show large increases in the auroral zone convection velocities (>2 km s⁻¹) over a wide range of latitudes. These are larger than the estimated neutral thermal speed and allow a study of the plasma in a nonthermal state over a range of observing angles. Spectra are presented which show a well-defined central peak, consistent with an ion velocity distribution function which significantly departs from a Maxwellian form. As the aspect angle decreases, the central peak becomes less obvious. Simulated spectra, derived using theoretical expressions for the O+ ion velocity distribution function based on the generalized relaxation collision model, are compared with the observations and show good first-order, qualitative agreement. It is shown that ion temperatures derived from the observations, with the assumption of a Maxwellian distribution function, are an overestimate of the true ion temperature at large aspect angles and an underestimate at low aspect angles. The theoretical distribution functions have been included in the “standard” incoherent scatter radar analysis procedure, and attempts have been made to derive realistic ionospheric parameters from nonthermal plasma observations. If the expressions for the distribution function are extended to include mixed ion composition, a significant improvement is found in fitting some of the observed spectra, and estimates of the ion composition can be made. The non-Maxwellian analysis of the data revealed that the spectral shape distortion parameter, D*, was significantly higher in this case for molecular ions than for atomic ions in a thin height slab roughly 40 km thick. This would seem unlikely if the main molecular ions present were NO+. We therefore suggest that N2+ formed a significant proportion of the molecular ions present during these observations.
Abstract:
The vertical distribution of cloud cover has a significant impact on a large number of meteorological and climatic processes. Cloud top altitude and cloud geometrical thickness are therefore essential parameters. Previous studies established the possibility of retrieving those parameters from multi-angular oxygen A-band measurements. Here we perform a study and comparison of the performances of future instruments. The 3MI (Multi-angle, Multi-channel and Multi-polarization Imager) instrument developed by EUMETSAT, which is an extension of the POLDER/PARASOL instrument, and MSPI (Multi-angle Spectro-Polarimetric Imager), developed by NASA's Jet Propulsion Laboratory, will measure total and polarized light reflected by the Earth's atmosphere–surface system in several spectral bands (from UV to SWIR) and several viewing geometries. Those instruments should provide opportunities to observe the links between the cloud structures and the anisotropy of the solar radiation reflected into space. Specific algorithms will need to be developed in order to take advantage of the new capabilities of these instruments. However, prior to this effort, we need to understand, through a theoretical Shannon information content analysis, the limits and advantages of these new instruments for retrieving liquid and ice cloud properties and, especially in this study, the amount of information coming from the A-band channel on the cloud top altitude (CTOP) and geometrical thickness (CGT). We compare the information content of 3MI A-band measurements in two configurations and that of MSPI. Quantitative information content estimates show that the retrieval of CTOP with high accuracy is possible in almost all cases investigated. The retrieval of CGT seems less easy but is possible for optically thick clouds above a black surface, at least when CGT > 1–2 km.
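Shannon information content analyses of this kind are typically computed in the optimal-estimation form; the Python sketch below uses placeholder Jacobian and covariance values (not those of the study) to show the calculation.

```python
# Minimal sketch of a Shannon information content calculation in the usual
# optimal-estimation form; the Jacobian and covariances here are illustrative
# placeholders, not values from the study.
import numpy as np

n_state, n_meas = 2, 8                     # state: e.g. cloud top altitude, geometrical thickness
rng = np.random.default_rng(5)

K = rng.normal(size=(n_meas, n_state))     # Jacobian of the measurements w.r.t. the state
S_a = np.diag([4.0, 1.0])                  # prior covariance of the state
S_e = 0.01 * np.eye(n_meas)                # measurement-error covariance

# Posterior covariance and Shannon information content H = 0.5 * log2(|S_a| / |S_hat|)
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
H = 0.5 * np.log2(np.linalg.det(S_a) / np.linalg.det(S_hat))
print(f"information content: {H:.2f} bits")
```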
Abstract:
New experiments underpin the interpretation of the basic division in crystallization behaviour of polyethylene in terms of whether or not there is time for the fold surface to order before the next molecular layer is added at the growth front. For typical growth rates, in Regime II, polyethylene lamellae form with disordered {001} fold surfaces then transform, with lamellar thickening and twisting, towards the more-ordered condition found for slower crystallization in Regime I, in which lamellae form with and retain {201} fold surfaces. Several linear and linear-low-density polyethylenes have been used to show that, for the same polymer crystallized alone or in a blend, the growth rate at which the change in initial lamellar condition occurs is reasonably constant thereby supporting the concept of a specific time for surfaces to attain the ordered {201} state. This specific time, in the range from milliseconds to seconds, increases with molecular length, and in linear-low-density polymer, for higher branch contents. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Genetic parameters and breeding values for dairy cow fertility were estimated from 62 443 lactation records. Two-trait analysis of fertility and milk yield was investigated as a method to estimate fertility breeding values when culling or selection based on milk yield in early lactation determines the presence or absence of fertility observations in later lactations. Fertility traits were calving interval, intervals from calving to first service, calving to conception and first to last service, conception success to first service and number of services per conception. Milk production traits were 305-day milk, fat and protein yield. For fertility traits, the range of heritability estimates (h²) was 0.012 to 0.028 and of permanent environmental variance (c²) was 0.016 to 0.032. Genetic correlations (rg) among fertility traits were generally high (>0.70). Genetic correlations of fertility with milk production traits were unfavourable (range -0.11 to 0.46). Single- and two-trait analyses of fertility were compared using the same data set. The estimates of h² and c² were similar for the two types of analyses. However, there were differences between estimated breeding values and rankings for the same trait from single- versus multi-trait analyses. The range of rank correlations was 0.69-0.83 for all animals in the pedigree and 0.89-0.96 for sires with more than 25 daughters. As the single-trait method is biased by selection on milk yield, a multi-trait evaluation of fertility with milk yield is recommended. (C) 2002 Elsevier Science B.V. All rights reserved.
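Rank correlations between breeding values from single- and multi-trait analyses can be computed as in the following Python sketch (synthetic breeding values, not the study's data).

```python
# Minimal sketch (synthetic values, not the study's data): rank correlation
# between breeding values estimated by single-trait and two-trait analyses.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_animals = 500

ebv_single = rng.normal(size=n_animals)                          # single-trait breeding values
ebv_multi = 0.8 * ebv_single + 0.6 * rng.normal(size=n_animals)  # two-trait breeding values

rho, p_value = spearmanr(ebv_single, ebv_multi)
print(f"rank correlation = {rho:.2f} (p = {p_value:.1e})")
```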
Abstract:
We present the results of a systematic study of the influence of carbon surface oxidation on Dubinin–Astakhov (DA) isotherm parameters obtained from fitting CO2 adsorption data. Using GCMC simulations of adsorption on realistic VPC models differing in porosity and containing the most frequently occurring carbon surface functionalities (carboxyls, hydroxyls and carbonyls) and their mixtures, it is concluded that the maximum adsorption calculated from the DA model is not strongly affected by the presence of oxygen groups. Unfortunately, the same cannot be said of the remaining two parameters of this model, i.e. the heterogeneity parameter (n) and the characteristic energy of adsorption (E0). Since pore diameters of carbons are usually calculated from the latter via inverse-type relationships, it is concluded that such estimates are questionable for carbons containing surface oxides, especially carboxyls.
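For reference, the Dubinin-Astakhov equation W = W0 exp(-(A/E0)^n), with adsorption potential A = RT ln(p0/p), and a least-squares fit of its three parameters can be sketched in Python as follows (synthetic isotherm data and illustrative units; not the simulation results of this study).

```python
# Minimal sketch (synthetic data, not the simulation results) of fitting the
# Dubinin-Astakhov equation W = W0 * exp(-(A/E0)^n), with adsorption potential
# A = R * T * ln(p0/p), to an adsorption isotherm.
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314, 273.0                                  # J/(mol K), temperature in K

def dubinin_astakhov(p_rel, W0, E0, n):
    A = R * T * np.log(1.0 / p_rel)                  # adsorption potential, J/mol
    return W0 * np.exp(-(A / E0) ** n)

p_rel = np.linspace(0.001, 0.03, 30)                 # toy relative pressures
rng = np.random.default_rng(7)
W_obs = dubinin_astakhov(p_rel, 0.35, 18000.0, 2.0) * (1 + rng.normal(0, 0.02, p_rel.size))

popt, _ = curve_fit(dubinin_astakhov, p_rel, W_obs, p0=(0.3, 15000.0, 2.0))
W0_hat, E0_hat, n_hat = popt
print(f"W0 = {W0_hat:.3f} (adsorbed amount units), E0 = {E0_hat:.0f} J/mol, n = {n_hat:.2f}")
```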
Abstract:
Using grand canonical Monte Carlo simulation we show, for the first time, the influence of carbon porosity and surface oxidation on the parameters of the Dubinin-Astakhov (DA) adsorption isotherm equation. We conclude that upon carbon surface oxidation, adsorption decreases for all carbons studied. Moreover, the parameters of the DA model depend on the number of surface oxygen groups. Consequently, for carbons containing surface polar groups, SF6 adsorption isotherm data cannot be used to characterize the porosity.