975 results for Point measurement


Relevance: 20.00%

Abstract:

A simple moiré method for the direct measurement of refractive indices is presented. The change of magnification and/or distortion of the image of a linear grating viewed through a refractive-index field is amplified by means of moiré fringes and is measured directly. Relations between the index of refraction and the fringe spacing are derived and have been verified experimentally.
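
As a hedged illustration of the amplification principle (standard moiré geometry, not necessarily the paper's derivation): superposing the image of a grating of pitch p, magnified by a factor m by the refractive-index field, on an identical reference grating produces fringes of spacing

\[ d \;=\; \frac{m\,p}{\lvert m - 1 \rvert} \;\approx\; \frac{p}{\lvert m - 1 \rvert} \quad (m \to 1), \]

so a magnification change of order 10^-3 spreads one fringe over roughly a thousand grating periods, turning a tiny index-induced change into a directly measurable spacing.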

Relevance: 20.00%

Abstract:

Aims. We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, supernovae Ia from CFHT SNLS, and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters.

Methods. We implemented an MCMC algorithm to sample the parameter space of a flat ΛCDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak-lensing data set is tested for anomalous field-to-field variations and for a systematic shape-measurement bias for high-redshift galaxies.

Results. Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors when supernovae are used alone; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between 1 deg² MegaCam pointings is 5-15% higher than predicted from N-body simulations. We find no bias in the lensing signal at high redshift within the framework of a simple model and marginalising over cosmological parameters. Assuming a systematic underestimation of the lensing signal, the normalisation increases by up to 8%. Combining all three probes, we obtain -0.10 < 1 + w < 0.06 at 68% confidence (-0.18 < 1 + w < 0.12 at 95%), including systematic errors. Our results are therefore consistent with the cosmological constant Λ. Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15.
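
A hedged, self-contained sketch of the kind of Metropolis MCMC used to sample such a parameter space, with a toy SNIa-only likelihood in flat wCDM and synthetic data; none of the numbers below come from the paper, and the real analysis adds the lensing and CMB likelihoods and the parametrised systematics described above:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S, H0 = 299792.458, 70.0   # speed of light (km/s), toy Hubble constant (km/s/Mpc)

def dist_modulus(z, omega_m, w):
    """Distance modulus mu(z) in a flat wCDM cosmology."""
    E = lambda zp: np.sqrt(omega_m * (1 + zp) ** 3
                           + (1 - omega_m) * (1 + zp) ** (3 * (1 + w)))
    dc = np.array([quad(lambda zp: 1.0 / E(zp), 0.0, zi)[0] for zi in z])
    dl = (1 + z) * (C_KM_S / H0) * dc           # luminosity distance (Mpc)
    return 5 * np.log10(dl) + 25

# Synthetic "SNIa" sample standing in for real data
rng = np.random.default_rng(0)
z_sn = np.sort(rng.uniform(0.02, 1.0, 50))
sigma_mu = 0.15
mu_obs = dist_modulus(z_sn, 0.27, -1.0) + rng.normal(0, sigma_mu, z_sn.size)

def log_like(theta):
    om, w = theta
    if not (0.0 < om < 1.0 and -2.5 < w < -0.3):
        return -np.inf                          # flat prior bounds
    r = mu_obs - dist_modulus(z_sn, om, w)
    return -0.5 * np.sum((r / sigma_mu) ** 2)

# Metropolis sampling of (Omega_m, w)
theta = np.array([0.3, -1.0])
ll = log_like(theta)
chain = []
for _ in range(3000):
    prop = theta + rng.normal(0.0, [0.02, 0.05])
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:    # accept/reject step
        theta, ll = prop, ll_prop
    chain.append(theta.copy())
chain = np.array(chain)
print("posterior mean (Omega_m, w):", chain[1000:].mean(axis=0))
```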

Relevance: 20.00%

Abstract:

The application of an algorithm shows that maximum uniformity of film thickness on a rotating substrate is achieved for a normalized source-to-substrate distance ratio of h/r = 1.183.
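
A hedged numerical sketch of how such an optimum can be searched for, assuming a small-area source with cosine emission offset by r from the rotation axis and judging uniformity over 0 ≤ s ≤ r; since the abstract does not state the paper's source model or the substrate region, the minimiser found by this toy scan need not reproduce 1.183:

```python
import numpy as np

def thickness_profile(h, r=1.0, n_s=200):
    """Rotation-averaged thickness t(s) on a substrate rotating about its axis,
    for a small-area cosine-emission source at radius r and height h.  Uses the
    closed form (1/2pi) * int dphi / (a + b cos phi)**2 = a / (a**2 - b**2)**1.5."""
    s = np.linspace(0.0, r, n_s)        # radial positions on the substrate
    a = h ** 2 + r ** 2 + s ** 2
    b = 2.0 * r * s
    return s, h ** 2 * a / (a ** 2 - b ** 2) ** 1.5

def nonuniformity(h):
    _, t = thickness_profile(h)
    return (t.max() - t.min()) / t.mean()   # peak-to-peak variation, normalised

ratios = np.linspace(0.8, 2.0, 1201)        # candidate h/r values (r = 1 here)
best = min(ratios, key=nonuniformity)
print(f"best h/r over the scanned region: {best:.3f}")
```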

Relevance: 20.00%

Abstract:

This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It was undertaken primarily to re-examine one-point methods for determining liquid limit water contents. It is shown from the basic characteristics of soils and the associated physico-chemical factors that the critical shear strengths at liquid limit water contents arise from force-field equilibrium and are independent of soil type. This provides a scientific basis for liquid limit determination by one-point methods, which hitherto had been formulated purely on statistical analysis of data. The available one-point methods of liquid limit determination (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) are critically re-examined. A simple one-point cone-penetrometer method of computing the liquid limit is suggested and compared with other methods. The experimental data of Sherwood & Ryley (1970) are employed to compare the different cone-penetration methods. The results indicate that, beyond mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by their liquid limit values, thereby nullifying the effects of surface area and the associated physico-chemical factors that are otherwise reflected in different responses at the macro level.
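
A hedged illustration of what a one-point cone formula of this kind looks like (the constants are placeholders; the abstract does not give the paper's values): if the flow line is log-linear in penetration and unique once water contents are normalized by the liquid limit, i.e.

\[ \frac{w}{w_L} \;=\; a + b\,\log_{10} d, \]

with w the water content at cone penetration d (mm) and a, b the same for all soils, then a single measurement pair (w, d) determines the liquid limit as

\[ w_L \;=\; \frac{w}{a + b\,\log_{10} d}, \]

where the constants satisfy a + b log10(20) = 1 if the liquid limit is defined at the standard 20 mm penetration.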

Relevance: 20.00%

Abstract:

The paper presents simple graphical procedures for the position synthesis of plane linkage mechanisms with sliding inputs and output to generate functions of two independent variables. The procedures are based on point position reduction and permit synthesis of the linkage to satisfy up to five arbitrarily selected precision positions.

Relevance: 20.00%

Abstract:

This paper reports the basic design of a new six-component force balance system that uses miniature piezoelectric accelerometers to measure all the aerodynamic forces and moments on a test model in the hypersonic shock tunnel (HST2). Since the flow duration in a hypersonic shock tunnel is of the order of 1 ms, the balance system [1] uses fast-response accelerometers (PCB Piezotronics; frequency range of 1-10 kHz) to obtain the aerodynamic data. The balance system has been used to measure the basic aerodynamic forces and moments on a missile-shaped body at Mach 8 in the IISc hypersonic shock tunnel. The experimentally measured values match well with theoretical predictions.
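
A hedged sketch of the rigid-body inversion underlying accelerometer force balances of this kind; the positions, axes, mass, and inertia below are invented for illustration and are not the actual HST2 balance layout:

```python
import numpy as np

# Hypothetical layout: positions r_i (m) and sensing axes n_i of six
# single-axis accelerometers mounted on a rigid model (illustrative values).
pos = np.array([[0.10, 0.0, 0.0], [-0.10, 0.0, 0.0], [0.0, 0.05, 0.0],
                [0.0, -0.05, 0.0], [0.0, 0.0, 0.04], [0.05, 0.05, 0.0]])
axes = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0],
                 [1, 0, 0], [0, 1, 0], [0, 1, 0]], dtype=float)

mass = 1.2                              # model mass (kg), illustrative
inertia = np.diag([2e-3, 8e-3, 8e-3])   # inertia tensor (kg m^2), illustrative

def forces_from_accels(a_meas):
    """Each reading is a_i = n_i . (a_cm + alpha x r_i), i.e.
    a_i = [n_i, r_i x n_i] . [a_cm, alpha] (omega^2 terms neglected over the
    ~1 ms test time).  Solve least squares, then F = m a_cm, M = I alpha."""
    A = np.hstack([axes, np.cross(pos, axes)])
    sol, *_ = np.linalg.lstsq(A, a_meas, rcond=None)
    a_cm, alpha = sol[:3], sol[3:]
    return mass * a_cm, inertia @ alpha

# Synthetic check: assumed accelerations -> readings -> recovered F and M
a_cm_true, alpha_true = np.array([0.0, 2.0, 40.0]), np.array([0.0, 30.0, 0.0])
readings = axes @ a_cm_true + np.einsum('ij,ij->i', axes,
                                        np.cross(alpha_true, pos))
F, M = forces_from_accels(readings)
print("F =", F, "M =", M)
```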

Relevance: 20.00%

Abstract:

Few data exist on direct greenhouse gas emissions from pen manure at beef feedlots. However, emission inventories attempt to account for these emissions. This study used a large chamber to isolate N₂O and CH₄ emissions from pen manure at two Australian commercial beef feedlots (stocking densities 13-27 m² head⁻¹) and related these emissions to a range of potential emission control factors, including masses and concentrations of volatile solids, NO₃⁻, total N, NH₄⁺, and organic C (OC), and additional factors such as total manure mass, cattle numbers, manure pack depth and density, temperature, and moisture content. Mean measured pen N₂O emissions were 0.428 kg ha⁻¹ d⁻¹ (95% confidence interval [CI], 0.252-0.691) and 0.00405 kg ha⁻¹ d⁻¹ (95% CI, 0.00114-0.0110) for the northern and southern feedlots, respectively. Mean measured CH₄ emission was 0.236 kg ha⁻¹ d⁻¹ (95% CI, 0.163-0.332) for the northern feedlot and 3.93 kg ha⁻¹ d⁻¹ (95% CI, 2.58-5.81) for the southern feedlot. Nitrous oxide emission increased with density, pH, temperature, and manure mass, whereas negative relationships were evident with moisture and OC. Strong relationships were not evident between N₂O emission and masses or concentrations of NO₃⁻ or total N in the manure. This is significant because many standard inventory calculation protocols predict N₂O emissions using the mass of N excreted by the animal.

Relevance: 20.00%

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. Both problems belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis.

Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; it can therefore be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference.

Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or just to chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
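
For orientation, a sketch of the classical Karlin-Altschul tail estimate that local-alignment significance methods traditionally rely on; the thesis's framework computes a tight upper bound instead, so this is the baseline being improved upon, not the thesis's method:

```python
import math

def local_alignment_pvalue(score, m, n, lam=0.267, K=0.041):
    """Classical Karlin-Altschul estimate for the best local alignment score S
    of two random sequences of lengths m and n:
        P(S >= s) ~= 1 - exp(-K * m * n * exp(-lam * s)).
    The lam and K shown are of the order of published BLAST values for protein
    scoring; they are illustrative and depend on the scoring system."""
    return 1.0 - math.exp(-K * m * n * math.exp(-lam * score))

# Example: a local alignment score of 50 between two 300-residue sequences
print(local_alignment_pvalue(50, 300, 300))
```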

Relevance: 20.00%

Abstract:

This thesis studies human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, this data is traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, such data has been largely unavailable, and the global structure of the human transcriptome has remained unknown.

This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a new large ontology of human cell types, disease states, organism parts, and cell lines. The ontology was used in a new text-mining and decision-tree based method for automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and minimisation of the systematic measurement errors characteristic of each laboratory in this large cross-laboratory integrated dataset, was ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology.

A preface and motivation to the construction and analysis of a global map of human gene expression are given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
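
A hedged miniature of the exploration step described above, on synthetic data; the thesis's actual analysis runs on curated, quality-filtered GEO/ArrayExpress data with ontology annotations:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Toy expression matrix: 60 samples x 2000 genes (a stand-in for integrated
# repository data; annotation mapping and QC filtering are assumed done).
rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(60, 2000))

logX = np.log2(X + 1.0)                        # variance-stabilising transform
logX -= logX.mean(axis=0)                      # centre each gene

pcs = PCA(n_components=3).fit_transform(logX)  # global axes of expression space
Z = linkage(pcs, method='ward')                # hierarchical clustering of samples
labels = fcluster(Z, t=4, criterion='maxclust')
print("cluster sizes:", np.bincount(labels)[1:])
```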

Relevance: 20.00%

Abstract:

In this paper, we solve the distributed-parameter fixed-point smoothing problem by formulating it as an extended linear filtering problem, and we show that the results coincide with those obtained in the literature using the forward innovations method.
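
A hedged finite-dimensional analogue of this reformulation (the paper itself works in the distributed-parameter setting): for a discrete-time system x_{j+1} = A_j x_j + w_j with measurements z_j = H_j x_j + v_j, the state at a fixed time k can be smoothed by augmenting it into the state vector for j ≥ k,

\[
\tilde{x}_j = \begin{pmatrix} x_j \\ x_k \end{pmatrix}, \qquad
\tilde{x}_{j+1} = \begin{pmatrix} A_j & 0 \\ 0 & I \end{pmatrix} \tilde{x}_j
                + \begin{pmatrix} w_j \\ 0 \end{pmatrix}, \qquad
z_j = \begin{pmatrix} H_j & 0 \end{pmatrix} \tilde{x}_j + v_j,
\]

so a standard Kalman filter on \tilde{x}_j carries the fixed-point smoothed estimate \hat{x}_{k|j} along as the second block of its filtered estimate.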

Relevance: 20.00%

Abstract:

A simple technique for determining the energy sensitivities for the thermographic recording of laser beams is described. The principle behind this technique is that, if a laser beam with a known spatial distribution, such as a Gaussian profile, is used for imaging, the radius of the thermal image formed depends uniquely on the intensity of the impinging beam. Thus, by measuring the radii of the images produced at different incident beam intensities, the minimum intensity necessary for thermographic imaging (that is, the threshold) is found. The diameter of the laser beam can also be obtained from this measurement. A simple analysis based on the temperature distribution in the laser-heated material shows that the energy fluence of the laser beam required has an inverse-square-root dependence on the pulse duration or period of exposure, both for the threshold and for the subsequent increase in the size of the recording. It is also shown that, except for low-intensity, long-duration exposures on very low-conductivity materials, heat losses are not very significant.
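
For context, the standard one-dimensional conduction estimate behind such square-root scalings (semi-infinite absorber, constant absorbed intensity I, losses neglected; a hedged simplification rather than the paper's full analysis): the surface temperature rise is

\[ \Delta T_s(t) \;=\; \frac{2I}{k}\,\sqrt{\frac{\alpha t}{\pi}}, \]

where k is the thermal conductivity and α the thermal diffusivity. Reaching a fixed threshold temperature rise therefore requires I_th ∝ t^{-1/2}, which is the square-root coupling between the required beam energy and the exposure time that the abstract describes.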

Relevance: 20.00%

Abstract:

A ratio-transformer method suitable for the measurement of the dielectric constant of highly conducting liquids is described. The resistance between the two plates of the capacitor can be as low as 2 kΩ. In this method, variations in this low resistance do not introduce any error into the capacitance measurement. One feature of the method is the simplicity of balancing the resistance, using an LDR (light-dependent resistor), without influencing the independent capacitance measurement. The ratio transformer enables the ground capacitances to be eliminated. The change in the leakage inductance of the ratio transformer when changing ratios is also taken into account. The capacitance of a dielectric cell of the order of 50 pF can be measured from 1 kHz to 100 kHz with a resolution of 0.06 pF. The electrode polarisation problem is also discussed.
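
For reference, the textbook balance condition of a transformer ratio-arm bridge of this general kind (not necessarily the paper's exact circuit): with the unknown capacitor C_x driven from n_x turns and the standard C_s from n_s turns, a detector null gives

\[ n_x\,C_x \;=\; n_s\,C_s \quad\Longrightarrow\quad C_x = \frac{n_s}{n_x}\,C_s, \]

with the in-phase (conductive) component balanced by a separate arm; this is why an independent resistance balance, here achieved with the LDR, leaves the capacitance reading unaffected.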