986 results for projector calibration
Abstract:
Consumption of nicotine in the form of smokeless tobacco (snus, snuff, chewing tobacco) or nicotine-containing medication (gum, patch) may benefit sport practice. Indeed, use of snus appears to be a growing trend, and investigating nicotine consumption amongst professional athletes is of major interest to sport authorities. Thus, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed for the detection and quantification of nicotine and its principal metabolites cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide in urine. Sample preparation was performed by liquid-liquid extraction followed by hydrophilic interaction chromatography-tandem mass spectrometry (HILIC-MS/MS) operated in electrospray positive ionization (ESI) mode with selected reaction monitoring (SRM) data acquisition. The method was validated, and calibration curves were linear over the selected concentration ranges of 10-10,000 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine, and 10-5,000 ng/mL for nicotine-N'-oxide and cotinine-N-oxide, with calculated coefficients of determination (R²) greater than 0.95. The total extraction efficiency was concentration dependent and ranged between 70.4% and 100.4%. The lower limit of quantification (LLOQ) for all analytes was 10 ng/mL. Repeatability and intermediate precision were ≤9.4% and ≤9.9%, respectively. In order to measure the prevalence of nicotine exposure during the 2009 Ice Hockey World Championships, 72 samples were collected and analyzed after the minimum 3-month storage period and complete removal of identification means required by the 2009 International Standards for Laboratories (ISL). Nicotine and/or its metabolites were detected in every urine sample, while concentration measurements indicated exposure within the last 3 days for eight specimens out of ten. Concentrations of nicotine, cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide ranged between 11 and 19,750, 13 and 10,475, 10 and 8,217, 11 and 3,396, and 13 and 1,640 ng/mL, respectively. When proposing conservative concentration limits for nicotine consumption prior to and/or during the games (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine; 25 ng/mL for nicotine-N'-oxide and cotinine-N-oxide), about half of the hockey players qualified as consumers. These findings strongly support the likelihood of extensive smokeless nicotine consumption. However, since such conclusions can only be hypothesized, the potential use of smokeless tobacco as a doping agent in ice hockey requires further investigation.
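As a rough illustration of the calibration workflow this abstract describes (linear curve over a validated range, an R² acceptance criterion, back-calculation of unknowns), here is a minimal Python sketch; the standard concentrations and peak-area ratios are invented for illustration, not values from the study.

```python
# Minimal sketch of a linear calibration curve: fit over the validated
# range, check R^2 > 0.95, then back-calculate an unknown concentration.
# All numbers below are illustrative assumptions.
import numpy as np

conc = np.array([10, 50, 100, 500, 1000, 5000, 10000])          # ng/mL standards
ratio = np.array([0.011, 0.054, 0.10, 0.51, 1.02, 4.9, 10.1])   # analyte/IS peak-area ratios

slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
assert r2 > 0.95, "calibration would fail the acceptance criterion"

unknown_ratio = 0.75
estimate = (unknown_ratio - intercept) / slope   # back-calculated ng/mL
print(f"R2 = {r2:.4f}, estimated concentration = {estimate:.0f} ng/mL")
```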
Abstract:
IMPORTANCE: The 2013 American College of Cardiology/American Heart Association (ACC/AHA) guidelines introduced a prediction model and lowered the threshold for treatment with statins to a 7.5% 10-year hard atherosclerotic cardiovascular disease (ASCVD) risk. Implications of the new guideline's threshold and model have not been addressed in non-US populations or compared with previous guidelines. OBJECTIVE: To determine population-wide implications of the ACC/AHA, the Adult Treatment Panel III (ATP-III), and the European Society of Cardiology (ESC) guidelines using a cohort of Dutch individuals aged 55 years or older. DESIGN, SETTING, AND PARTICIPANTS: We included 4854 Rotterdam Study participants recruited in 1997-2001. We calculated 10-year risks for "hard" ASCVD events (including fatal and nonfatal coronary heart disease [CHD] and stroke) (ACC/AHA), hard CHD events (fatal and nonfatal myocardial infarction, CHD mortality) (ATP-III), and atherosclerotic CVD mortality (ESC). MAIN OUTCOMES AND MEASURES: Events were assessed until January 1, 2012. Per guideline, we calculated proportions of individuals for whom statins would be recommended and determined calibration and discrimination of risk models. RESULTS: The mean age was 65.5 (SD, 5.2) years. Statins would be recommended for 96.4% (95% CI, 95.4%-97.1%; n = 1825) of men and 65.8% (95% CI, 63.8%-67.7%; n = 1523) of women by the ACC/AHA, 52.0% (95% CI, 49.8%-54.3%; n = 985) of men and 35.5% (95% CI, 33.5%-37.5%; n = 821) of women by the ATP-III, and 66.1% (95% CI, 64.0%-68.3%; n = 1253) of men and 39.1% (95% CI, 37.1%-41.2%; n = 906) of women by ESC guidelines. With the ACC/AHA model, average predicted risk vs observed cumulative incidence of hard ASCVD events was 21.5% (95% CI, 20.9%-22.1%) vs 12.7% (95% CI, 11.1%-14.5%) for men (192 events) and 11.6% (95% CI, 11.2%-12.0%) vs 7.9% (95% CI, 6.7%-9.2%) for women (151 events). Similar overestimation occurred with the ATP-III model (98 events in men and 62 events in women) and ESC model (50 events in men and 37 events in women). The C statistic was 0.67 (95% CI, 0.63-0.71) in men and 0.68 (95% CI, 0.64-0.73) in women for hard ASCVD (ACC/AHA), 0.67 (95% CI, 0.62-0.72) in men and 0.69 (95% CI, 0.63-0.75) in women for hard CHD (ATP-III), and 0.76 (95% CI, 0.70-0.82) in men and 0.77 (95% CI, 0.71-0.83) in women for CVD mortality (ESC). CONCLUSIONS AND RELEVANCE: In this European population aged 55 years or older, proportions of individuals eligible for statins differed substantially among the guidelines. The ACC/AHA guideline would recommend statins for nearly all men and two-thirds of women, proportions exceeding those with the ATP-III or ESC guidelines. All 3 risk models provided poor calibration and moderate to good discrimination. Improving risk predictions and setting appropriate population-wide thresholds are necessary to facilitate better clinical decision making.
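This abstract evaluates each risk model by its calibration (mean predicted risk versus observed incidence) and discrimination (C statistic). The following is a hedged sketch of how these two checks are typically computed for binary outcomes; the predicted risks and events are simulated, and a real analysis of survival data would additionally handle censoring (e.g. with Harrell's C).

```python
# Hedged sketch of the two model checks reported above. With binary event
# indicators the C statistic equals the ROC AUC; the simulated data below
# stand in for real cohort data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0.02, 0.40, 1000)       # a model's 10-year risks
event = rng.random(1000) < 0.6 * predicted_risk      # simulated outcomes

calibration_gap = predicted_risk.mean() - event.mean()   # >0 means overestimation
c_statistic = roc_auc_score(event, predicted_risk)
print(f"mean predicted {predicted_risk.mean():.3f} vs observed {event.mean():.3f}")
print(f"C statistic = {c_statistic:.2f}")
```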
Abstract:
The aim of our study was to provide an innovative headspace-gas chromatography-mass spectrometry (HS-GC-MS) method applicable for the routine determination of blood CO concentration in forensic toxicology laboratories. The main drawback of the GC/MS methods discussed in the literature for CO measurement is the absence of a specific CO internal standard necessary for performing quantification. Even though a stable isotope of CO is commercially available in the gaseous state, it is essential to develop a safer method that limits the manipulation of gaseous CO and precisely controls the amount of CO injected for spiking and calibration. To avoid the manipulation of a stable isotope-labeled gas, we chose to generate in situ, in a vial, a labeled internal standard gas (¹³CO) formed by the reaction of labeled formic acid (H¹³COOH) with sulfuric acid. As sulfuric acid can also be employed to liberate CO from whole blood, the procedure allows the liberation of CO simultaneously with the generation of ¹³CO. This method allows precise measurement of blood CO concentrations from a small amount of blood (10 μL). Finally, the method was applied to measure the CO concentration of intoxicated human blood samples from autopsies.
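The quantification principle here is isotope dilution: the blood CO amount is read off the ratio of the CO to ¹³CO signals, since a known amount of ¹³CO is generated in the vial. A minimal sketch, with all numbers invented for illustration:

```python
# Sketch of single-point isotope-dilution quantification, the principle
# behind the in-situ 13CO internal standard: blood CO is proportional to
# the ratio of the CO (m/z 28) to 13CO (m/z 29) peak areas. Values are
# illustrative assumptions, not data from the paper.
def co_concentration(area_co, area_13co, nmol_13co_generated, blood_volume_ml):
    """Return blood CO concentration in nmol/mL."""
    nmol_co = nmol_13co_generated * (area_co / area_13co)
    return nmol_co / blood_volume_ml

# Example: 50 nmol of 13CO generated from H13COOH, 0.01 mL (10 uL) of blood.
print(co_concentration(area_co=1.8e6, area_13co=1.2e6,
                       nmol_13co_generated=50.0, blood_volume_ml=0.01))
```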
Abstract:
Omnidirectional cameras offer a much wider field of view than perspective cameras and alleviate the problems due to occlusions. However, both types of cameras suffer from a lack of depth perception. A practical method for obtaining depth in computer vision is to project a known structured light pattern onto the scene, avoiding the problems and costs involved in stereo vision. This paper is focused on the idea of combining omnidirectional vision and structured light with the aim of providing 3D information about the scene. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. It is also discussed how this sensor can be used in robot navigation applications.
Abstract:
We present a computer vision system that associates omnidirectional vision with structured light with the aim of obtaining depth information for a 360 degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
Abstract:
Coded structured light is an optical technique based on active stereovision that obtains the shape of objects. One-shot techniques are based on projecting a unique light pattern with an LCD projector so that, by grabbing an image with a camera, a large number of correspondences can be obtained. Then, a 3D reconstruction of the illuminated object can be recovered by means of triangulation. The most widely used strategy to encode one-shot patterns is based on De Bruijn sequences. In this work a new way to design patterns using this type of sequence is presented. The new coding strategy minimises the number of required colours and maximises both the resolution and the accuracy.
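The combinatorial object behind these patterns is worth making concrete: in a k-ary De Bruijn sequence of order n, every window of n consecutive symbols occurs exactly once, so observing n adjacent slit colours localises them within the projected pattern. Below is a minimal sketch using the standard recursive construction; the 3-colour alphabet is an illustrative choice, not the paper's.

```python
# Generate a k-ary De Bruijn sequence of order n (standard recursive
# construction). Every window of n consecutive symbols appears exactly
# once, which is what makes local decoding of the pattern possible.
def de_bruijn(k, n):
    """Return a k-ary De Bruijn sequence of order n as a list of symbol indices."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

colours = ["red", "green", "blue"]
seq = de_bruijn(k=3, n=3)          # 27 slits; every colour triplet is unique
print([colours[s] for s in seq[:9]])
```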
Abstract:
Automatically obtaining the 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve the 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. This kind of system belongs to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between camera(s) and projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
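For the triangulation step the abstract mentions, a common formulation is: once a camera pixel is matched to a projector pattern point, each calibrated device contributes a 3D ray, and the surface point is estimated as the midpoint of the shortest segment joining the two (generally skew) rays. A hedged sketch, with calibration data assumed given and the geometry invented:

```python
# Midpoint-of-closest-approach triangulation between a camera ray and a
# projector ray. Ray origins/directions would come from calibration; the
# toy baseline below is an illustrative assumption.
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between rays o1 + t*d1 and o2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    c = np.dot(d1, d2)
    b = np.array([np.dot(d1, o2 - o1), np.dot(d2, o2 - o1)])
    A = np.array([[1.0, -c],
                  [c, -1.0]])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t * d1) + (o2 + s * d2))

# Toy example: camera at the origin, projector 0.2 m to the right; both
# rays aimed at a point roughly 1 m in front of the sensor.
cam_origin = np.zeros(3)
proj_origin = np.array([0.2, 0.0, 0.0])
point = triangulate(cam_origin, np.array([0.0, 0.0, 1.0]),
                    proj_origin, np.array([-0.2, 0.0, 1.0]))
print(point)  # ~ [0, 0, 1]
```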
Abstract:
This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on the projection of patterns onto a measuring surface and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information of the surface can be calculated. The implemented technique projects a unique pattern so that it can be used to measure moving surfaces. The structure of the pattern is a grid where the colors of the slits are selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid have two codewords (which permits them to be reconstructed very precisely), while pixels belonging to horizontal and vertical slits also have a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
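To make the two-axis coding idea concrete: horizontal and vertical slits draw their colors from disjoint De Bruijn-coded alphabets, so each grid crossing inherits one codeword per axis. A minimal sketch; the short order-2 sequences and color alphabets are hardcoded examples, and a real pattern would use longer sequences.

```python
# Illustrative grid assembly: each crossing gets a (horizontal, vertical)
# codeword pair, read from sliding windows over two De Bruijn sequences
# with disjoint color sets. All choices here are illustrative assumptions.
H_COLORS = ["red", "green", "blue"]        # horizontal-slit alphabet
V_COLORS = ["cyan", "magenta", "yellow"]   # vertical-slit alphabet (disjoint)

# Order-2 ternary De Bruijn sequences: each adjacent pair of symbols is unique.
h_seq = [0, 0, 1, 0, 2, 1, 1, 2, 2]
v_seq = [0, 0, 1, 0, 2, 1, 1, 2, 2]

crossings = {}
for i in range(len(h_seq) - 1):            # window of 2 along each axis
    for j in range(len(v_seq) - 1):
        crossings[(i, j)] = (
            (H_COLORS[h_seq[i]], H_COLORS[h_seq[i + 1]]),   # horizontal codeword
            (V_COLORS[v_seq[j]], V_COLORS[v_seq[j + 1]]),   # vertical codeword
        )

print(crossings[(0, 0)])
```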
Abstract:
Thanks to decades of research, gait analysis has become an efficient tool. However, mainly due to the price of motion capture systems, standard gait laboratories are only able to measure a few consecutive steps of ground walking. Recently, wearable systems have been proposed to measure human motion without volume limitation. Although accurate, these systems are incompatible with most existing calibration procedures, and several years of research will be necessary for their validation. A new approach, consisting of using a stationary system with a small capture volume for the calibration procedure and then measuring gait with a wearable system, could be very advantageous. It could benefit from the knowledge related to stationary systems, allow long-distance monitoring and provide new descriptive parameters. The aim of this study was to demonstrate the potential of this approach. Thus, a combined system was proposed to measure the 3D lower-body joint angles and segmental angular velocities. It was then assessed in terms of reliability with respect to the calibration procedure, repeatability and concurrent validity. The dispersion of the joint angles across calibrations was comparable to that of stationary systems, and good reliability was obtained for the angular velocities. The repeatability results confirmed that the mean cycle kinematics of long-distance walks can be used for comparisons between subjects, and pointed out the interest of the variability between cycles. Finally, kinematic differences were observed between participants with different ankle conditions. In conclusion, this study demonstrated the potential of a mixed approach for human movement analysis.
Abstract:
The aims of this study were to assess whether high-mobility group box-1 (HMGB1) protein can be determined in biological fluids collected during autopsy and to evaluate its diagnostic potential in identifying sepsis-related deaths. HMGB1 was measured in serum collected during hospitalization as well as in undiluted and diluted postmortem serum and pericardial fluid collected during autopsy in a group of sepsis-related deaths and in control cases with noninfectious causes of death. Inclusion criteria consisted of full biological sample availability and a postmortem interval not exceeding 6 h. The preliminary results indicate that HMGB1 levels markedly increase after death. Concentrations beyond the upper limit of the calibration curve were obtained in undiluted postmortem serum in both septic and traumatic control cases. In pericardial fluid, concentrations beyond the upper limit of the calibration curve were found in all cases. These findings suggest that the diagnostic potential of HMGB1 in the postmortem setting is extremely limited due to the release of the molecule into the bloodstream after death, rendering antemortem levels difficult or impossible to estimate even after sample dilution.
Abstract:
The aim of the present study was to develop a short form of the Zuckerman-Kuhlman Personality Questionnaire (ZKPQ) with acceptable psychometric properties in four languages: English (United States), French (Switzerland), German (Germany), and Spanish (Spain). The total sample (N = 4,621) was randomly divided into calibration and validation samples. An exploratory factor analysis was conducted on the calibration sample. Eighty items, with loadings equal to or higher than 0.30 on their own factor and lower on the remaining factors, were retained. A confirmatory factor analysis was performed on the surviving items in the validation sample in order to select the best 10 items for each scale. This short version (named ZKPQ-50-CC) presents psychometric properties very similar to those of the original version in the four countries. Moreover, the factor structures are nearly equivalent across the four countries, since the congruence indices were all higher than 0.90. It is concluded that the ZKPQ-50-CC presents high cross-language replicability and could be a useful questionnaire for personality research.
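The item-retention rule described here (loading ≥ 0.30 on the item's own factor, lower elsewhere) can be sketched in a few lines. This is a hedged illustration only: the random responses stand in for the real questionnaire data, and "lower on the remaining factors" is interpreted as below the 0.30 threshold.

```python
# Hedged sketch of exploratory-factor-analysis item retention: keep items
# loading >= 0.30 on their strongest factor and below 0.30 on the others.
# Simulated responses replace the real ZKPQ data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.normal(size=(500, 40))    # 500 respondents x 40 items (simulated)

fa = FactorAnalysis(n_components=5, rotation="varimax").fit(responses)
loadings = fa.components_.T               # items x factors

retained = []
for item, row in enumerate(np.abs(loadings)):
    own = row.argmax()
    if row[own] >= 0.30 and np.all(np.delete(row, own) < 0.30):
        retained.append(item)
print(f"{len(retained)} items retained")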
Abstract:
This letter presents a comparison between three Fourier-based motion compensation (MoCo) algorithms for airborne synthetic aperture radar (SAR) systems. These algorithms circumvent the limitations of conventional MoCo, namely the assumption of a reference height and the beam-center approximation. All these approaches rely on the inherent time–frequency relation in SAR systems but exploit it differently, with the consequent differences in accuracy and computational burden. After a brief overview of the three approaches, the performance of each algorithm is analyzed with respect to azimuthal topography accommodation, angle accommodation, and the maximum frequency of track deviations with which the algorithm can cope. Also, an analysis of the computational complexity is presented. Quantitative results are shown using real data acquired by the Experimental SAR system of the German Aerospace Center (DLR).
Abstract:
During the last decade, interest in space-borne Synthetic Aperture Radars (SAR) for remote sensing applications has grown, as testified by the number of recent and forthcoming missions such as TerraSAR-X, RADARSAT-2, COSMO-SkyMed, TanDEM-X and the Spanish SEOSAR/PAZ. In this sense, this thesis proposes to study and analyze the performance of state-of-the-art space-borne SAR systems with modes able to provide Moving Target Indication (MTI) capabilities, i.e. moving object detection and estimation. The research will focus on the MTI processing techniques as well as the architecture and/or configuration of the SAR instrument, setting out the limitations of the current systems with MTI capabilities and proposing efficient solutions for future missions. Two European projects, to which the Universitat Politècnica de Catalunya provides support, are an excellent framework for the research activities suggested in this thesis. The NEWA project proposes a potential European space-borne radar system with MTI capabilities in order to fulfill the upcoming European security policies. This thesis will critically review the state-of-the-art MTI processing techniques as well as the readiness and maturity level of the developed capabilities. For each of the techniques, a performance analysis will be carried out based on the available technologies, deriving a roadmap and identifying the different technological gaps. In line with this study, a simulator tool will be developed in order to validate and evaluate different MTI techniques on the basis of a flexible space-borne radar configuration. The calibration of a SAR system is mandatory for the accurate formation of SAR images and turns out to be critical in advanced operation modes such as MTI. In this sense, the SEOSAR/PAZ project proposes the study and estimation of the radiometric budget. This thesis will also focus on an exhaustive analysis of the radiometric budget, considering the current calibration concepts and their possible limitations. In the framework of this project, a key point will be the study of the Dual Receive Antenna (DRA) mode, which provides MTI capabilities to the mission. An additional aspect under study is the applicability of Digital Beamforming to multichannel and/or multistatic radar platforms, which constitute potential solutions for the NEWA project, with the aim of fully exploiting their capabilities jointly with MTI techniques.
Abstract:
Raman spectroscopy has become an attractive tool for the analysis of pharmaceutical solid dosage forms. In the present study it is used to verify the identity of tablets. The two main applications of this method are the release of final products in quality control and the detection of counterfeits. Twenty-five product families of tablets have been included in the spectral library, and a non-linear classification method, Support Vector Machines (SVMs), has been employed. Two calibrations have been developed in cascade: the first one identifies the product family, while the second one specifies the formulation. A product family comprises different formulations that have the same active pharmaceutical ingredient (API) but in different amounts. Once the tablets have been classified by the SVM model, API peak detection and correlation are applied in order to have a specific identification method and, in the future, to allow counterfeits to be discriminated from genuine products. This calibration strategy enables the identification of the 25 product families without error and in the absence of prior information about the sample. Raman spectroscopy coupled with chemometrics is therefore a fast and accurate tool for the identification of pharmaceutical tablets.
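A two-stage SVM cascade of the kind described can be sketched as follows: one classifier assigns the product family from a Raman spectrum, then a family-specific classifier assigns the formulation. The spectra, labels and RBF kernel choice below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a cascade of SVM classifiers: stage 1 predicts the
# product family, stage 2 uses a per-family model to predict the
# formulation. Simulated spectra replace real Raman data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
spectra = rng.normal(size=(300, 1024))      # 300 spectra x 1024 wavenumbers
family = rng.integers(0, 25, 300)           # 25 product families
formulation = rng.integers(0, 3, 300)       # formulations within a family

family_clf = SVC(kernel="rbf").fit(spectra, family)
formulation_clf = {
    f: SVC(kernel="rbf").fit(spectra[family == f], formulation[family == f])
    for f in np.unique(family)
}

def identify(spectrum):
    f = family_clf.predict(spectrum[None, :])[0]                  # stage 1: family
    return f, formulation_clf[f].predict(spectrum[None, :])[0]    # stage 2: formulation

print(identify(spectra[0]))
```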
Abstract:
Quantitative or algorithmic trading is the automatization of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data, sampled at minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal generated by them passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to models involving stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck equation and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we place emphasis on the calibration process of the strategies' parameters to adapt them to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done with high-frequency sampling to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as a part of this thesis. No other mathematical or statistical software was used.
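The Ornstein-Uhlenbeck calibration mentioned above has a standard closed-form route: the exact discretization of dX = θ(μ − X)dt + σdW is an AR(1) process, so an OLS regression of X[t+1] on X[t] recovers the parameters. A minimal sketch in Python (rather than the thesis's MATLAB), with a simulated spread standing in for real market data:

```python
# Calibrate an Ornstein-Uhlenbeck model to a pairs-trading spread via the
# AR(1) exact discretization: X[t+1] = a + b*X[t] + eps, with
# b = exp(-theta*dt), a = mu*(1-b), Var(eps) = sigma^2*(1-b^2)/(2*theta).
import numpy as np

rng = np.random.default_rng(0)
dt, theta, mu, sigma = 1 / 252, 8.0, 0.0, 0.5    # illustrative "true" values
x = np.zeros(5000)
for t in range(4999):                             # simulate an OU spread path
    x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.normal()

b, a = np.polyfit(x[:-1], x[1:], 1)               # OLS fit of the AR(1) relation
theta_hat = -np.log(b) / dt
mu_hat = a / (1 - b)
resid = x[1:] - (a + b * x[:-1])
sigma_hat = resid.std() * np.sqrt(2 * theta_hat / (1 - b ** 2))
print(f"theta={theta_hat:.2f}, mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```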