Abstract:
Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members and a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
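The geometric idea behind the method — each proper motion traces a great circle whose pole is perpendicular to the convergent point — can be sketched on noise-free synthetic data. The eigenvector step below is a minimal least-squares stand-in for the paper's full maximum-likelihood and minimization machinery; all data and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed "true" convergent point (unit vector) and 50 comoving stars
c_true = np.array([1.0, 2.0, 2.0])
c_true /= np.linalg.norm(c_true)
p = rng.normal(size=(50, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)

# a comoving star's proper motion points along the great circle towards the CP:
# project the CP direction onto each star's tangent plane
t = c_true - (p @ c_true)[:, None] * p
t /= np.linalg.norm(t, axis=1, keepdims=True)

# pole of each star's great circle; the CP is perpendicular to every pole
n = np.cross(p, t)

# least-squares CP: eigenvector of sum(n n^T) with the smallest eigenvalue
w, v = np.linalg.eigh(n.T @ n)
c_est = v[:, 0]
if c_est @ c_true < 0:  # eigenvector sign is arbitrary
    c_est = -c_est
```

With noisy proper motions the smallest eigenvalue is no longer zero and its eigenvector becomes the least-squares intersection of the great circles, which is where the paper's membership selection and uncertainty estimation take over.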
Multicentre evaluation of a new point-of-care test for the determination of NT-proBNP in whole blood
Abstract:
BACKGROUND: The Roche CARDIAC proBNP point-of-care (POC) test is the first test intended for the quantitative determination of N-terminal pro-brain natriuretic peptide (NT-proBNP) in whole blood as an aid in the diagnosis of suspected congestive heart failure, in the monitoring of patients with compensated left-ventricular dysfunction and in the risk stratification of patients with acute coronary syndromes. METHODS: A multicentre evaluation was carried out to assess the analytical performance of the POC NT-proBNP test at seven different sites. RESULTS: The majority of the coefficients of variation (CVs) obtained for within-series imprecision using native blood samples were below 10%, both for 52 samples measured ten times and for 674 samples measured in duplicate. Using quality control material, the majority of CV values for day-to-day imprecision were below 14% for the low control level and below 13% for the high control level. In method comparisons of four lots of the POC NT-proBNP test with the laboratory reference method (Elecsys proBNP), the slope ranged from 0.93 to 1.10 and the intercept ranged from 1.8 to 6.9. The bias found between venous and arterial blood with the POC NT-proBNP method was ≤5%. All four lots of the POC NT-proBNP test investigated showed excellent agreement, with mean differences of between -5% and +4%. No significant interference was observed with lipaemic blood (triglyceride concentrations up to 6.3 mmol/L), icteric blood (bilirubin concentrations up to 582 μmol/L), haemolytic blood (haemoglobin concentrations up to 62 mg/L), biotin (up to 10 mg/L), rheumatoid factor (up to 42 IU/mL), or with 50 out of 52 standard or cardiological drugs in therapeutic concentrations. With bisoprolol and BNP, somewhat higher bias was found in the low NT-proBNP concentration range (<175 ng/L). Haematocrit values between 28% and 58% had no influence on the test result. Interference may be caused by human anti-mouse antibodies (HAMA) types 1 and 2.
No significant influence on the POC NT-proBNP results was found using sample volumes of 140-165 μL. High NT-proBNP concentrations above the measuring range of the POC NT-proBNP test did not lead to falsely low results due to a potential high-dose hook effect. CONCLUSIONS: The POC NT-proBNP test showed good analytical performance and excellent agreement with the laboratory method. The POC NT-proBNP assay is therefore suitable for use in the POC setting.
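The imprecision figures quoted above are within-series coefficients of variation. A CV for a replicate experiment is simple to compute; the replicate values below are invented for illustration.

```python
import numpy as np

def cv_percent(values):
    """Within-series coefficient of variation (%) of replicate measurements."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# hypothetical NT-proBNP replicates (ng/L) of one whole-blood sample, measured ten times
replicates = [410, 395, 402, 420, 398, 405, 399, 412, 407, 401]
cv = cv_percent(replicates)  # well below the 10% acceptance threshold quoted above
```

The sample standard deviation (ddof=1) is the conventional choice for replicate imprecision, since the true mean of the sample is unknown.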
Abstract:
Computer-aided surgery (CAS) allows for real-time intraoperative feedback, resulting in increased accuracy while reducing intraoperative radiation. CAS is especially useful for the treatment of certain pelvic ring fractures, which necessitate the precise placement of screws. Fluoroscopy-based CAS modules have been developed for many orthopedic applications. The integration of the isocentric fluoroscope even enables navigation using intraoperatively acquired three-dimensional (3D) data, though the scan volume and imaging quality are limited. Complicated and comprehensive pathologies in regions like the pelvis can necessitate a CT-based navigation system because of its larger field of view. To be accurate, the patient's anatomy must be registered and matched with the virtual object (CT data). The actual precision within the region of interest depends on the area of the bone where surface matching is performed. Conventional surface matching with a solid pointer requires extensive soft tissue dissection, which contradicts the primary purpose of CAS as a minimally invasive alternative to conventional surgical techniques. We therefore integrated an A-mode ultrasound pointer into the process of surface matching for pelvic surgery and compared it with the conventional method. Accuracy measurements were made in two pelvic models: a foam model submerged in water and one with attached porcine muscle tissue. Three different tissue depths were selected based on CT scans of 30 human pelves. The ultrasound pointer allowed for registration of virtually any point on the pelvis, and this method of surface matching was successfully integrated into CAS of the pelvis.
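Surface matching reduces to a rigid registration between points sampled on the bone (via pointer or ultrasound) and the CT surface model. A minimal sketch of the underlying least-squares step is the Kabsch/Procrustes solution for known correspondences; real CAS systems iterate a step like this inside ICP-style matching, and the function name here is illustrative.

```python
import numpy as np

def rigid_register(P, Q):
    """Kabsch/Procrustes solution: rotation R and translation t minimizing
    sum ||R p_i + t - q_i||^2 for paired point sets P, Q of shape (n, 3)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # reflection guard: force det(R) = +1 so the result is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```

In practice the sampled bone points have no known correspondences on the CT surface, which is why the closest-point assignment of ICP is wrapped around this closed-form step.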
Abstract:
The reliable quantification of gene copy number variations is a precondition for future investigations of their functional relevance. To date, there is no generally accepted gold-standard method for copy number quantification, and methods in current use have given inconsistent results in selected cohorts. In this study, we compare two methods for copy number quantification. Beta-defensin gene copy numbers were determined in parallel in 80 genomic DNA samples by real-time PCR and multiplex ligation-dependent probe amplification (MLPA). The pyrosequencing-based paralog ratio test (PPRT) was used as a standard of comparison in 79 of the 80 samples. Real-time PCR and MLPA results confirmed concordant DEFB4, DEFB103A, and DEFB104A copy numbers within samples. These two methods gave identical results in 32 out of 80 samples; 29 of these 32 samples comprised four or fewer copies. The coefficient of variation of MLPA was lower than that of real-time PCR. In addition, the consistency between MLPA and PPRT was higher than either the PCR/MLPA or the PCR/PPRT consistency. In summary, these results suggest that MLPA is superior to real-time PCR for beta-defensin copy number quantification.
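For reference, real-time PCR copy-number calls of this kind typically rest on the comparative-Ct calculation. A minimal sketch follows; the Ct values are invented, and the study's actual assay and calibration details may differ.

```python
def copies_from_ddct(ct_target, ct_ref, ct_target_cal, ct_ref_cal, calibrator_copies=2):
    """Copy number by the comparative-Ct (2^-ddCt) method: the target gene's Ct is
    normalized to a reference gene, compared against a calibrator of known copy
    number, and converted assuming perfect doubling per PCR cycle."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return calibrator_copies * 2.0 ** (-ddct)

# a sample whose normalized Ct is one cycle earlier than a two-copy calibrator
# carries about four copies
copies = copies_from_ddct(24.0, 25.0, 25.0, 25.0)
```

The sensitivity of this exponential conversion to small Ct shifts is one reason real-time PCR copy calls scatter more than MLPA ratios.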
Abstract:
To improve our understanding of the Asian monsoon system, we developed a hydroclimate reconstruction in a marginal monsoon shoulder region for the period prior to the industrial era. Here, we present the first moisture-sensitive tree-ring chronology, spanning 501 years, for the Dieshan Mountain area, a boundary region of the Asian summer monsoon in the northeastern Tibetan Plateau. This reconstruction was derived from 101 cores of 68 old-growth Chinese pine (Pinus tabulaeformis) trees. We introduce a Hilbert–Huang Transform (HHT) based standardization method to develop the tree-ring chronology, which has the advantage of excluding non-climatic disturbances from individual tree-ring series. Based on the reliable portion of the chronology, we reconstructed the annual (prior July to current June) precipitation history since 1637 for the Dieshan Mountain area, explaining 41.3% of the variance. The extremely dry years in this reconstruction were also found in historical documents and are associated with El Niño episodes. Dry periods were reconstructed for 1718–1725, 1766–1770 and 1920–1933, whereas 1782–1788 and 1979–1985 were wet periods. The spatial signatures of these events were supported by data from other marginal regions of the Asian summer monsoon. Over the past four centuries, out-of-phase relationships between hydroclimate variations in the Dieshan Mountain area and far western Mongolia were observed during the 1718–1725 and 1766–1770 dry periods and the 1979–1985 wet period.
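Standardization converts each raw ring-width series into dimensionless indices by dividing out the slow growth trend. The study extracts that trend as the low-frequency residual of empirical mode decomposition (the HHT step); in the sketch below a centered moving average stands in for that residual, and the window length is illustrative.

```python
import numpy as np

def standardize(ring_widths, window=51):
    """Tree-ring standardization: divide a raw ring-width series by a smooth
    growth trend to obtain dimensionless indices. A centered moving average
    stands in here for the EMD residual used in the study."""
    rw = np.asarray(ring_widths, dtype=float)
    pad = window // 2
    padded = np.pad(rw, pad, mode="edge")          # edge-pad so output length matches input
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    return rw / trend

# chronology = mean (or biweight robust mean) of the indices across trees per year
```

Dividing by, rather than subtracting, the trend stabilizes variance across trees of different growth rates, which is why ratio indices are the dendrochronological convention.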
Abstract:
We obtain upper bounds for the total variation distance between the distributions of two Gibbs point processes in a very general setting. Applications are provided to various well-known processes and settings from spatial statistics and statistical physics, including the comparison of two Lennard-Jones processes, hard-core approximation of an area-interaction process and the approximation of lattice processes by a continuous Gibbs process. Our proof of the main results is based on Stein's method. We construct an explicit coupling between two spatial birth-death processes to obtain Stein factors, and employ the Georgii-Nguyen-Zessin equation to obtain the final bound.
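The two analytic ingredients named here can be stated compactly; the notation below is the standard one and may differ from the paper's.

```latex
% total variation distance between the laws P_1, P_2 of two point processes
d_{TV}(P_1, P_2) = \sup_{A} \, \lvert P_1(A) - P_2(A) \rvert

% Georgii--Nguyen--Zessin (GNZ) equation: for all measurable h \ge 0,
\mathbb{E}\Big[ \sum_{x \in \Xi} h(x, \Xi \setminus \{x\}) \Big]
  = \int \mathbb{E}\big[ h(x, \Xi) \, \lambda(x \mid \Xi) \big] \, \alpha(\mathrm{d}x)
```

Here λ is the Papangelou conditional intensity of the Gibbs process Ξ and α is the reference measure on the underlying space; the GNZ equation characterizes Ξ and is what couples the Stein factors to the final total variation bound.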
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences of genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and yields false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing and false discovery rates, patterns of joint effects by several genes, each with a weak effect, may remain undetected. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in big data sets where the number of feature SNPs far exceeds the number of observations. In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method, and then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB), wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.
In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of a small subset (one SNP, two SNPs, or a three-SNP subset built from the best 100 composite 2-SNPs) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to avoid the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states. Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to traditional LDA in this study. From our results, the best test HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing normal status among controls reaches 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. Conversely, the highest test accuracy of sIB for diagnosing disease among cases reaches 0.748644, 0.789916, 0.705701 and 0.749436, respectively, if the test accuracy among controls is required to be at least 0.4.
A further genome-wide association study through chi-square testing shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. In the WTCCC data, only two significant SNPs associated with CAD can be detected. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07. Although our classification methods can achieve high accuracy, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which can currently be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
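The HMSS criterion used throughout the study is a one-line formula over a confusion matrix; the counts in the usage note are invented for illustration.

```python
def hmss(tp, fn, tn, fp):
    """Harmonic mean of sensitivity and specificity (HMSS) from confusion-matrix
    counts. Unlike raw accuracy, HMSS is not dominated by the majority class,
    which is why it suits imbalanced case/control data without resampling."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 2.0 * sensitivity * specificity / (sensitivity + specificity)
```

For example, 40 true positives with 10 false negatives (sensitivity 0.8) and 30 true negatives with 20 false positives (specificity 0.6) give HMSS ≈ 0.686, whereas plain accuracy (0.7) would mask the weaker specificity less symmetrically as the class balance shifts.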
Abstract:
Four retrogressive thaw slumps (RTS) located on Herschel Island and the Yukon coast (King Point) in the western Canadian Arctic were investigated to compare the environmental, sedimentological and geochemical setting and characteristics of zones in active and stabilised slumps and at undisturbed sites. In general, the slope, sedimentology and biogeochemistry of stabilised and undisturbed zones differ, independent of their age or location. Organic carbon contents were lower in slumps than in the surrounding tundra, and the density and compaction of slump sediments were much greater. Radiocarbon dating showed that RTS were likely to have been active around 300 a BP and are undergoing a similar period of increased activity now. This cycle is thought to be controlled more by local geometry, cryostratigraphy and the rate of coastal erosion than by variation in summer temperatures.
Abstract:
This paper assesses the along-strike variation of active bedrock fault scarps using long-range terrestrial laser scanning (t-LiDAR) data in order to determine the distribution behaviour of scarp height and to subsequently calculate long-term throw-rates. Five faults on Crete which display spectacular limestone fault scarps have been studied using high-resolution digital elevation model (HRDEM) data. We scanned several hundred square metres of the fault system, including the footwall, fault scarp and hanging wall of the investigated fault segment. The vertical displacement and the dip of the scarp were extracted every metre along the strike of the detected fault segment based on the processed HRDEM. The scarp variability was analysed using statistical and morphological methods in a geographical information system (GIS) environment. Results show a normal distribution for the scanned fault scarp's vertical displacement. On this basis, the mean height was chosen to define the authentic vertical displacement. Consequently, the scarp can be divided into areas above, below and within the range of the mean (within one standard deviation), and the modifications of vertical displacement can be quantified. The fault segment can thus be subdivided into areas that are influenced by external modification such as erosion and sedimentation processes. Moreover, to describe and measure the variability of vertical displacement along the strike of the fault, the semi-variance was calculated with the variogram method, which is used to determine how much influence external processes have had on the vertical displacement. By combining morphological and statistical results, the fault can be subdivided into areas with high external influence and areas with authentic fault scarps that have little or no external influence.
This subdivision is necessary for long-term throw-rate calculations, because without this differentiation the calculated rates would be misleading and the activity of a fault would be incorrectly assessed, with significant implications for seismic hazard assessment, since fault slip-rate data govern earthquake recurrence. Furthermore, using this workflow, areas with minimal external influence can be determined, not only for throw-rate calculations but also for selecting sample sites for absolute dating techniques such as cosmogenic nuclide dating. The main outcomes of this study are: i) there is no direct correlation between the fault's mean vertical displacement and dip (R² less than 0.31); ii) without subdividing the scanned scarp into areas with differing amounts of external influence, the along-strike variability of vertical displacement is ±35%; iii) when the scanned scarp is subdivided, the variation of the vertical displacement of the authentic scarp (exposed by earthquakes only) is in a range of ±6% (this varies between faults from 7 to 12%); iv) the long-term throw-rate (since 13 ka) calculated for four scarps on Crete using the authentic vertical displacement is 0.35 ± 0.04 mm/yr at Kastelli 1, 0.31 ± 0.01 mm/yr at Kastelli 2, 0.85 ± 0.06 mm/yr at the Asomatos fault (Sellia) and 0.55 ± 0.05 mm/yr at the Lastros fault.
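The throw-rate arithmetic in outcome (iv) is a single division of the authentic vertical displacement by the 13 ka exposure age; the 4.55 m displacement below is back-computed from the quoted Kastelli 1 rate, for illustration only.

```python
def throw_rate_mm_per_yr(authentic_displacement_m, exposure_age_ka):
    """Long-term throw rate: authentic (earthquake-only) vertical displacement
    divided by the post-glacial exposure age of the scarp."""
    return authentic_displacement_m * 1000.0 / (exposure_age_ka * 1000.0)

# Kastelli 1: ~4.55 m of authentic displacement over 13 ka
rate = throw_rate_mm_per_yr(4.55, 13.0)
```

This makes the paper's point concrete: a ±35% error in the displacement from unrecognized erosion or sedimentation would propagate directly into a ±35% error in the slip rate used for seismic hazard.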
Abstract:
The design of optical systems, considered an art by some and a science by others, has been developed for centuries. Imaging optical systems have been evolving since Ancient Egyptian times, as have design techniques.
Nevertheless, the most important developments in design techniques have taken place over the past 50 years, in part due to advances in manufacturing techniques and the development of increasingly powerful computers, which have enabled fast and efficient ray-tracing calculation and analysis through optical systems. This has led optical system design to evolve from designs developed solely from paraxial optics to modern designs created using different multiparametric optimization techniques. The main problem the designer faces is that the different optimization techniques require an initial design as a starting point, which can constrain the possible solutions. In other words, if the starting point is far from the global minimum, or optimal design for the set conditions, the final design may be a local minimum close to the starting point and far from the global minimum. This type of problem has led to the development of global optimization methods that are increasingly less sensitive to the starting point of the optimization process. Even though it is possible to obtain good designs with these techniques, many attempts are needed to reach the desired solution, and the whole process is uncertain because there is no guarantee that the optimal solution will be found. The Simultaneous Multiple Surfaces (SMS) method, conceived as a tool to calculate anidolic concentrators, has also proved capable of designing image-forming optical systems, although to date it has only occasionally been used for imaging design. This thesis aims to present the SMS method as a technique that can be used in general for the design of any optical system, whether fixed-focal or afocal with a defined magnification, and also as a tool that can be commercialized to help designers tackle the design of complex optical systems. The thesis is divided into five chapters.
Chapter 1 establishes the basics by presenting the fundamental concepts the reader needs, even without an extensive background in image-forming optics, to understand the approaches and results presented in the following chapters. Chapter 2 addresses the problem of optimizing optical systems. Here the SMS method is presented as an ideal tool for obtaining a starting point for the optimization process, and the importance of the starting point for the final solution is demonstrated through an example. Additionally, this chapter introduces various techniques for the interpolation and optimization of the surfaces obtained through the application of the SMS method. Even though only the SMS2D method is used in this thesis, we also present a method, based on radial basis functions (RBF), for the interpolation and optimization of the clouds of points obtained from the SMS3D method. Chapter 3 presents the design, manufacturing and measurement of a catadioptric panoramic lens designed to work in the long-wavelength infrared (LWIR, 8-12 microns) for perimeter surveillance applications. The lens is designed using the SMS method for three input wavefronts and four surfaces. The power of the design method is revealed by the ease with which this complex system is designed, and the images presented show that the prototype perfectly fulfills its purpose. Chapter 4 addresses the design of ultra-compact optical systems. The concept of multi-channel systems, i.e. optical systems composed of a series of channels working in parallel, is introduced; such systems are especially suitable for afocal designs. Design strategies for both monochromatic and polychromatic multichannel systems are presented, along with a telescope of six-and-a-half magnification designed with this novel technique.
Chapter 5 presents a generalization of the SMS method for meridian rays, giving the algorithm to be used for the design of any fixed-focal optical system. The so-called phase-1 optimization is inserted into the algorithm so that, by changing the initial conditions of the SMS design, the skew rays behave similarly even though the design is carried out for meridian rays. To test the power of the developed algorithm, a set of designs with different numbers of surfaces is presented. The stability and strength of the algorithm become apparent as, for the first time, a six-surface system is designed with the SMS method.
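Chapter 2's interpolation of SMS3D point clouds uses radial basis functions. A minimal Gaussian-RBF interpolant over a scattered surface point cloud might look as follows; the kernel choice, shape parameter, and function names are illustrative, not the thesis implementation.

```python
import numpy as np

def rbf_weights(centers, values, eps):
    """Solve the dense linear system so a Gaussian RBF interpolant passes
    exactly through every (x, y) -> z sample of the point cloud."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.linalg.solve(np.exp(-eps * d2), values)

def rbf_interp(centers, weights, x, eps):
    """Evaluate the interpolant at query points x of shape (m, 2)."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-eps * d2) @ weights
```

A smooth interpolant of this kind turns the discrete SMS3D point cloud into an analytic surface that a ray tracer or optimizer can sample anywhere, which is the role RBFs play in the optimization pipeline described above.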
Abstract:
In this work, we demonstrate how it is possible to sharply image multiple object points. The Simultaneous Multiple Surface (SMS) design method has usually been presented as a method to couple N wave-front pairs with N surfaces, but recent findings show that when using N surfaces, we can obtain M image points when N
Abstract:
Published by the Division under its earlier name: Division of Library and Reference Services.
Abstract:
Texas Department of Transportation, Austin
Abstract:
Mode of access: Internet.