964 results for High Precision Positioning
Abstract:
Linkage and association analyses were performed to identify loci affecting disease susceptibility by scoring previously characterized sequence variations such as microsatellites and single nucleotide polymorphisms. Lack of markers in regions of interest, as well as difficulty in adapting various methods to high-throughput settings, often limits the effectiveness of the analyses. We have adapted the Escherichia coli mismatch detection system, employing the factors MutS, MutL and MutH, for use in PCR-based, automated, high-throughput genotyping and mutation detection of genomic DNA. Optimal sensitivity and signal-to-noise ratios were obtained in a straightforward fashion because the detection reaction proved to be principally dependent upon monovalent cation concentration and MutL concentration. Quantitative relationships of the optimal values of these parameters with length of the DNA test fragment were demonstrated, in support of the translocation model for the mechanism of action of these enzymes, rather than the molecular switch model. Thus, rapid, sequence-independent optimization was possible for each new genomic target region. Other factors potentially limiting the flexibility of mismatch scanning, such as positioning of dam recognition sites within the target fragment, have also been investigated. We developed several strategies, which can be easily adapted to automation, for limiting the analysis to intersample heteroduplexes. Thus, the principal barriers to the use of this methodology, which we have designated PCR candidate region mismatch scanning, in cost-effective, high-throughput settings have been removed.
Abstract:
Wheat (Triticum aestivum L.), rice (Oryza sativa L.), and maize (Zea mays L.) provide about two-thirds of all energy in human diets, and four major cropping systems in which these cereals are grown represent the foundation of human food supply. Yield per unit time and land has increased markedly during the past 30 years in these systems, a result of intensified crop management involving improved germplasm, greater inputs of fertilizer, production of two or more crops per year on the same piece of land, and irrigation. Meeting future food demand while minimizing expansion of cultivated area primarily will depend on continued intensification of these same four systems. The manner in which further intensification is achieved, however, will differ markedly from the past because the exploitable gap between average farm yields and genetic yield potential is closing. At present, the rate of increase in yield potential is much less than the expected increase in demand. Hence, average farm yields must reach 70–80% of the yield potential ceiling within 30 years in each of these major cereal systems. Achieving consistent production at these high levels without causing environmental damage requires improvements in soil quality and precise management of all production factors in time and space. The scope of the scientific challenge related to these objectives is discussed. It is concluded that major scientific breakthroughs must occur in basic plant physiology, ecophysiology, agroecology, and soil science to achieve the ecological intensification that is needed to meet the expected increase in food demand.
Abstract:
Positioned nucleosomes contribute to both the structure and the function of the chromatin fiber and can play a decisive role in controlling gene expression. We have mapped, at high resolution, the translational positions adopted by limiting amounts of core histone octamers reconstituted onto 4.4 kb of DNA comprising the entire chicken adult beta-globin gene, its enhancer, and flanking sequences. The octamer displays extensive variation in its affinity for different positioning sites, the range exhibited being about 2 orders of magnitude greater than that of the initial binding of the octamer. Strong positioning sites are located 5' and 3' of the globin gene and in the second intron but are absent from the coding regions. These sites exhibit a periodicity (approximately 200 bp) similar to the average spacing of nucleosomes on the inactive beta-globin gene in vivo, which could indicate their involvement in packaging the gene into higher-order chromatin structure. Overlapping, alternative octamer positioning sites commonly exhibit spacings of 20 and 40 bp, but not of 10 bp. These short-range periodicities could reflect features of the core particle structure contributing to the pronounced sequence-dependent manner in which the core histone octamer interacts with DNA.
Abstract:
High-resolution gamma spectroscopy measurements have many applications. Applications involving short half-life radioisotopes can suffer from poor counting precision when the radioactive source is far from the detector, and from loss of accuracy due to dead-time and pulse pile-up effects at high count rates. One way to minimize these problems is to change the position of the radioactive source during the measurement, moving it closer to the detector as its activity decreases, thereby maximizing the number of counts recorded. In this work, the Automated Radioactive Sample Mover (Movimentador de Amostras Radioativas Automatizado, MARA) was developed: a low-cost, lightweight apparatus built from low-atomic-number materials, designed and constructed to assist gamma spectroscopy measurements by controlling the source-detector distance, including changing that distance during the measurement. Because it is automated, it saves the operator's time, gives the operator full freedom to create measurement routines on the device, and keeps the operator from receiving part of the radiation dose. An interface was also developed that allows control of MARA and programming of the data-acquisition system. Tests were carried out to optimize the operation of the MARA system, and its operational safety was verified, with no failures during testing. A repeatability test, performed with a calibrated 60Co source, showed that the automated shelf-moving system reproduced the results of the static system at the 95% confidence level.
Abstract:
Mechanisms widely used in industrial applications are of the serial type, but for some time studies have been developed on the advantages that parallel-architecture mechanisms offer over serial ones. Stiffness, precision, high natural frequencies, and speed are some of the characteristics that parallel mechanisms bring to machines already established in industry, intended mainly for manipulation (pick-and-place) operations. It is therefore relevant to study their suitability for other types of operation, such as machining and, in particular, milling. To that end, the capabilities of parallel mechanisms with respect to stiffness and precision in these operations still need to be explored and developed. The design and assembly of a prototype parallel-architecture milling machine, characterized by actuation redundancy for tool positioning, had been carried out previously. The present work aims to evaluate the static tool-positioning error by experimental methods, quantify the displacements, and perform an experimental mapping over several configurations of the members. In addition, it aims to adapt a simplified numerical model that can predict the elastic deformations in several configurations, accounts for the effect of flexible linear joints, and helps identify the main sources of error. To this end, programming routines were developed that, through inverse kinematics and the finite element method, attempt to predict what actually happens in the experiments. An alternative implementation was also proposed for controlling the mechanism through CNC software and converting Cartesian coordinates into actuator coordinates, which would help in G-code generation.
Finally, several trajectories were designed to evaluate the accuracy and repeatability of the mechanism, in addition to describing other free trajectories.
Abstract:
We propose an original method to geoposition an audio/video stream with multiple emitters that are at the same time receivers of the mixed signal. The method is suitable for cases where a list of positions within a designated area is encoded with a degree of precision adjusted to the visualization capabilities, and it is easily extensible to support new requirements. This method extends a previously proposed protocol without incurring any performance penalty.
Abstract:
In this paper, we propose an original method to geoposition an audio/video stream with multiple emitters that are at the same time receivers of the mixed signal. The obtained method is suitable when a list of positions within a known area is encoded with precision tailored to the visualization capabilities of the target device. Nevertheless, it is easily adaptable to new precision requirements, as well as to parameterized data precision. This method extends a previously proposed protocol without incurring any performance penalty.
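One way to picture "precision tailored to the visualization capabilities" is fixed-point quantization of coordinates inside the known area. The sketch below is hypothetical, not the paper's actual encoding: each position is quantized to a configurable number of bits per axis, so tightening or loosening precision only changes one parameter.

```python
# Hypothetical sketch: quantize positions inside a known bounding box to a
# fixed number of bits per coordinate, matching the display's resolution.
# All names and parameters are illustrative, not taken from the paper.

def encode_positions(positions, box, bits):
    """Pack (x, y) pairs inside `box` into integers, `bits` bits per axis."""
    (x0, y0), (x1, y1) = box
    levels = (1 << bits) - 1
    codes = []
    for x, y in positions:
        qx = round((x - x0) / (x1 - x0) * levels)
        qy = round((y - y0) / (y1 - y0) * levels)
        codes.append((qx << bits) | qy)  # one integer per position
    return codes

def decode_positions(codes, box, bits):
    """Inverse of encode_positions, up to the quantization error."""
    (x0, y0), (x1, y1) = box
    levels = (1 << bits) - 1
    return [(x0 + (c >> bits) / levels * (x1 - x0),
             y0 + (c & levels) / levels * (y1 - y0)) for c in codes]
```

Raising `bits` tightens the recoverable precision; a new precision requirement is absorbed without changing the encoding scheme itself.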
Abstract:
Context. The ongoing Gaia-ESO Public Spectroscopic Survey is using FLAMES at the VLT to obtain high-quality medium-resolution Giraffe spectra for about 10^5 stars and high-resolution UVES spectra for about 5000 stars. With UVES, the Survey has already observed 1447 FGK-type stars. Aims. These UVES spectra are analyzed in parallel by several state-of-the-art methodologies. Our aim is to present how these analyses were implemented, to discuss their results, and to describe how a final recommended parameter scale is defined. We also discuss the precision (method-to-method dispersion) and accuracy (biases with respect to the reference values) of the final parameters. These results are part of the Gaia-ESO second internal release and will be part of its first public release of advanced data products. Methods. The final parameter scale is tied to the scale defined by the Gaia benchmark stars, a set of stars with fundamental atmospheric parameters. In addition, a set of open and globular clusters is used to evaluate the physical soundness of the results. Each of the implemented methodologies is judged against the benchmark stars to define weights in three different regions of the parameter space. The final recommended results are the weighted medians of those from the individual methods. Results. The recommended results successfully reproduce the atmospheric parameters of the benchmark stars and the expected Teff-log g relation of the calibrating clusters. Atmospheric parameters and abundances have been determined for 1301 FGK-type stars observed with UVES. The median of the method-to-method dispersion of the atmospheric parameters is 55 K for Teff, 0.13 dex for log g and 0.07 dex for [Fe/H]. Systematic biases are estimated to be between 50–100 K for Teff, 0.10–0.25 dex for log g and 0.05–0.10 dex for [Fe/H]. Abundances for 24 elements were derived: C, N, O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Mo, Ba, Nd, and Eu.
The typical method-to-method dispersion of the abundances varies between 0.10 and 0.20 dex. Conclusions. The Gaia-ESO sample of high-resolution spectra of FGK-type stars will be among the largest of its kind analyzed in a homogeneous way. The extensive list of elemental abundances derived in these stars will enable significant advances in the areas of stellar evolution and Milky Way formation and evolution.
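The combination step described above (recommended values as weighted medians of per-method estimates, with weights set by benchmark performance) can be sketched as follows; the estimates and weights below are illustrative, not Gaia-ESO data.

```python
# Minimal sketch of a weighted-median combination of per-method parameter
# estimates. Weights would come from each method's benchmark-star performance;
# the numbers here are made up for illustration.

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half the total."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half:
            return value
    return pairs[-1][0]

# Four hypothetical method estimates of Teff (K) and their weights:
recommended_teff = weighted_median([5740, 5802, 5775, 5690], [1.0, 0.8, 1.2, 0.5])
```

Unlike a weighted mean, the weighted median always returns one of the input estimates and is robust to a single badly outlying method.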
Abstract:
Context. The Gaia-ESO Survey (GES) is a large public spectroscopic survey at the European Southern Observatory Very Large Telescope. Aims. A key aim is to provide precise radial velocities (RVs) and projected equatorial velocities (v sin i) for representative samples of Galactic stars, which will complement information obtained by the Gaia astrometry satellite. Methods. We present an analysis to empirically quantify the size and distribution of uncertainties in RV and v sin i using spectra from repeated exposures of the same stars. Results. We show that the uncertainties vary as simple scaling functions of signal-to-noise ratio (S/N) and v sin i, that the uncertainties become larger with increasing photospheric temperature, but that the dependence on stellar gravity, metallicity and age is weak. The underlying uncertainty distributions have extended tails that are better represented by Student's t-distributions than by normal distributions. Conclusions. Parametrised results are provided, which enable estimates of the RV precision for almost all GES measurements, and estimates of the v sin i precision for stars in young clusters, as a function of S/N, v sin i and stellar temperature. The precision of individual high S/N GES RV measurements is 0.22–0.26 km s^-1, dependent on instrumental configuration.
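The core of the repeated-exposure idea can be sketched simply: for stars measured twice, the scatter of the difference between the paired measurements estimates the single-measurement uncertainty, since for independent errors the difference has a spread sqrt(2) times larger. The pairs below are made-up RV values in km/s, not GES data.

```python
import math
import statistics

# Sketch of estimating the per-measurement RV uncertainty from repeat pairs:
# sigma_diff = sqrt(2) * sigma_RV for independent, identical errors.
# Input values are illustrative, not survey measurements.

def rv_uncertainty_from_repeats(pairs):
    """Estimate the single-measurement RV uncertainty from (rv1, rv2) pairs."""
    diffs = [a - b for a, b in pairs]
    return statistics.stdev(diffs) / math.sqrt(2)
```

In the paper's setting, this estimate would then be fitted as a function of S/N, v sin i, and temperature to yield the parametrised scaling functions.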
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
An assay using high performance liquid chromatography (HPLC)-electrospray ionization-tandem mass spectrometry (ESI-MS-MS) was developed for simultaneously determining concentrations of morphine, oxycodone, morphine-3-glucuronide, and noroxycodone in 50 μl samples of rat serum. Deuterated (d3) analogues of each compound were used as internal standards. Samples were treated with acetonitrile to precipitate plasma proteins; acetonitrile was removed from the supernatant by centrifugal evaporation before analysis. Limits of quantitation (ng/ml) and their between-day accuracy and precision (%deviation and %CV) were: morphine, 3.8 (4.3% and 7.6%); morphine-3-glucuronide, 5.0 (4.5% and 2.9%); oxycodone, 4.5 (0.4% and 9.3%); noroxycodone, 5.0 (8.5% and 4.6%).
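The %deviation and %CV figures quoted above follow the usual bioanalytical definitions: accuracy as the percent deviation of the replicate mean from the nominal concentration, and precision as the percent coefficient of variation. A minimal sketch, with illustrative replicate values rather than the assay's data:

```python
import statistics

# Sketch of between-day accuracy (%deviation) and precision (%CV) for
# replicate measurements of one limit-of-quantitation sample.
# The measured concentrations below are illustrative only.

def accuracy_and_precision(measured, nominal):
    """Return (%deviation, %CV) for replicate measurements of one sample."""
    mean = statistics.mean(measured)
    pct_deviation = abs(mean - nominal) / nominal * 100.0
    pct_cv = statistics.stdev(measured) / mean * 100.0
    return pct_deviation, pct_cv
```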
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional indexes attempting to scale up to high dimensions. One well-known approach to overcome degradation in performance with respect to increasing dimensions is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimension. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently.
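One ingredient of a Mahalanobis-based reduction can be sketched in a few lines: for a single elliptical cluster, the eigenvectors of its covariance matrix give the Mahalanobis axes, and projecting onto the few highest-variance axes reduces the dimensionality. This is only an illustrative fragment; the paper's cluster discovery, multi-level scheme, and shared B+-tree indexing are omitted.

```python
import numpy as np

# Illustrative fragment of a Mahalanobis-style reduction for one elliptical
# cluster: project points onto the k highest-variance eigenvectors of the
# cluster's covariance matrix. Not the paper's full MMDR algorithm.

def reduce_cluster(points, k):
    """Project an (n, d) cluster onto its k principal (Mahalanobis) axes."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axes = eigvecs[:, ::-1][:, :k]           # top-k variance directions
    return centered @ axes
```

The reduced coordinates preserve as much of the cluster's variance as any k-dimensional linear projection can, which is why a low-dimensional index over them remains effective.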
Abstract:
Quantile computation has many applications, including data mining and financial data analysis. It has been shown that an ε-approximate summary can be maintained so that, given a quantile query q(φ, ε), the data item at rank ⌈φN⌉ may be approximately obtained within rank error εN over all N data items in a data stream or in a sliding window. However, scalable online processing of massive continuous quantile queries with different φ and ε poses a new challenge because the summary is continuously updated with new arrivals of data items. In this paper, we first aim to dramatically reduce the number of distinct query results by grouping a set of different queries into a cluster so that they can be processed virtually as a single query while the precision requirements from users are retained. Second, we aim to minimize the total query processing costs. Efficient algorithms are developed to minimize the total number of times clusters are reprocessed and to produce the minimum number of clusters, respectively. The techniques are extended to maintain near-optimal clustering when queries are registered and removed in an arbitrary fashion against whole data streams or sliding windows. In addition to theoretical analysis, our performance study indicates that the proposed techniques are indeed scalable with respect to the number of input queries as well as the number of items and the item arrival rate in a data stream.
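The ε-approximate guarantee can be illustrated on a static dataset: keeping every ⌊εN⌋-th item of the sorted data bounds the rank error of any φ-quantile answer by εN. This toy sketch is not a streaming summary (real systems maintain structures such as Greenwald-Khanna summaries under insertions); it only demonstrates the rank-error contract.

```python
import math

# Toy illustration of the epsilon-approximate rank guarantee on static data:
# thinning the sorted data to every floor(eps*N)-th item means any requested
# rank is answered within eps*N of the true rank. Not a streaming summary.

def build_summary(data, eps):
    """Thin the sorted data; any rank query is then answered within eps*N."""
    s = sorted(data)
    step = max(1, math.floor(eps * len(s)))
    return s[::step], len(s), step

def query(summary, phi):
    """Return an item whose true rank is within `step` of rank phi*N."""
    items, n, step = summary
    target = int(phi * n)
    return items[min(target // step, len(items) - 1)]
```

The summary holds roughly 1/ε items regardless of N, which is what makes clustering many (φ, ε) queries against one shared summary attractive.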
Abstract:
The results of two experiments are reported that examined how performance in a simple interceptive action (hitting a moving target) was influenced by the speed of the target, the size of the intercepting effector and the distance moved to make the interception. In Experiment 1, target speed and the width of the intercepting manipulandum (bat) were varied. The hypothesis that people make briefer movements when the temporal accuracy and precision demands of the task are high predicts that bat width and target speed will divisively interact in their effect on movement time (MT) and that shorter MTs will be associated with a smaller temporal variable error (VE). An alternative hypothesis that people initiate movement when the rate of expansion (ROE) of the target's image reaches a specific, fixed criterion value predicts that bat width will have no effect on MT. The results supported the first hypothesis: a statistically reliable interaction of the predicted form was obtained and the temporal VE was smaller for briefer movements. In Experiment 2, distance to move and target speed were varied. MT increased in direct proportion to distance and there was a divisive interaction between distance and speed; as in Experiment 1, temporal VE was smaller for briefer movements. The pattern of results could not be explained by the strategy of initiating movement at a fixed value of the ROE or at a fixed value of any other perceptual variable potentially available for initiating movement. It is argued that the results support pre-programming of MT, with movement initiated when the target's time to arrival at the interception location reaches a criterion value that is matched to the pre-programmed MT. The data supported completely open-loop control when MT was shorter than about 200–240 ms, with corrective sub-movements increasingly frequent for movements of longer duration.