948 results for parallel robots, cable driven, underactuated, calibration, sensitivity, accuracy
Abstract:
OBJECTIVE: To determine the accuracy of the variables stair-climbing time (tSC), stair-climbing power (SCP), six-minute walk test (6MWT), and forced expiratory volume in one second (FEV1), using maximal oxygen uptake (VO2max) as the gold standard. METHODS: The tests were performed on 51 patients. FEV1 was obtained by spirometry. The 6MWT was performed in a flat 120 m corridor. The stair-climbing test was performed on a six-flight staircase, yielding tSC and SCP. VO2max was obtained by cardiopulmonary exercise testing using the Balke protocol. Pearson's linear correlation coefficients (r) and p values between VO2max and the variables were calculated. To calculate accuracy, cutoff points were obtained from the receiver operating characteristic (ROC) curve. The Kappa statistic (k) was used to assess agreement. RESULTS: The accuracies obtained were: tSC, 86%; 6MWT, 80%; SCP, 71%; FEV1(L), 67%; FEV1%, 63%. Combining tSC and 6MWT in parallel yielded a sensitivity of 93.5%; combining them in series yielded a specificity of 96.4%. CONCLUSION: tSC was the variable with the best accuracy. When tSC and 6MWT are combined, specificity and sensitivity can approach 100%. These tests should be used more routinely, especially when cardiopulmonary exercise testing to measure VO2max is not available.
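The series/parallel combination rule used in the conclusion can be sketched under a conditional-independence assumption; the function names and example values below are illustrative, not the study's data:

```python
# Combining two diagnostic tests, assuming conditional independence.
# "In parallel": the combination is positive if EITHER test is positive,
# so a case is missed only when both tests miss it -> sensitivity rises.
def parallel_sensitivity(se1, se2):
    return 1 - (1 - se1) * (1 - se2)

# "In series": the combination is positive only if BOTH tests are positive,
# so a false positive needs both tests to err -> specificity rises.
def series_specificity(sp1, sp2):
    return 1 - (1 - sp1) * (1 - sp2)

# Illustrative values (not the paper's): two tests at 80% and 90%.
print(round(parallel_sensitivity(0.80, 0.90), 2))  # 0.98
print(round(series_specificity(0.90, 0.90), 2))    # 0.99
```

The trade-off is symmetric: the parallel combination lowers specificity (sp1·sp2) and the series combination lowers sensitivity (se1·se2), which is why the abstract reports one figure for each scheme.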
Abstract:
A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≲ 15 pc), with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AU, would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne, extremely-high-precision astrometric measurements at the 0.05 μas (1σ) accuracy level, sufficient to detect the dynamical effects of orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabolic single-mirror telescope (D = 1 m); a large-field-of-view detector located 40 m from the telescope, made of 8 small movable CCDs arranged around a fixed central CCD; and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main payload modules, the telescope and the focal plane, must be kept 40 m apart, leading to the choice of formation flying as the reference mission and of a deployable boom as an alternative. The proposed mission architecture relies on two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations.
The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improve the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite, surgical astrometric precision, NEAT holds the promise to provide the first thorough census for Earth-mass planets around stars in the immediate vicinity of our Sun.
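As a rough cross-check of the quoted 0.05 μas requirement, the astrometric signature of a planet follows from a one-line formula; this back-of-envelope sketch is not from the paper, and the constant and function names are illustrative:

```python
# Astrometric wobble of a star of mass m_star (solar masses) induced by a
# planet of mass m_p (Earth masses) at semi-major axis a (AU), seen from
# distance d (pc): alpha [arcsec] = (m_p / m_star) * a / d.
M_EARTH_PER_SUN = 3.003e-6  # Earth/Sun mass ratio

def astrometric_signature_uas(mp_earth, mstar_sun, a_au, d_pc):
    mass_ratio = mp_earth * M_EARTH_PER_SUN / mstar_sun
    return mass_ratio * a_au / d_pc * 1e6  # arcsec -> microarcsec

# An Earth twin at 1 AU around a Sun-like star at 10 pc:
print(round(astrometric_signature_uas(1.0, 1.0, 1.0, 10.0), 3))  # 0.3
```

A ~0.3 μas signal against a 0.05 μas (1σ) single-epoch precision, accumulated over ~50 visits, is what makes the Earth-mass detection limit plausible.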
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at estimating planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft, and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based upon a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. To support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the goal of an end-to-end Allan deviation of the radio link on the order of 3×10⁻¹⁵ at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute 8×10⁻¹⁵ to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay.
To satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere's water vapor content must be calibrated at ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique supporting both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables, and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of ESA and NASA troposphere calibrations. Chapter 3 analyzes the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of the GNSS software (S/W) developed to estimate troposphere calibrations for ESA S/C navigation purposes. During the development of the S/W, a test campaign was undertaken to evaluate its performance; a description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments, carried out on radiometric measurements from the ESA/ESTEC instruments installed in Cabauw (NL) and compared against the MORE requirements.
Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
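The Allan deviation used above as the link-stability figure of merit can be estimated from fractional-frequency samples in a few lines; this is a minimal non-overlapping estimator, and the function name and interface are assumptions, not the thesis software:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    averaging m consecutive samples per block (tau = m * tau0)."""
    n = len(y) // m                          # number of complete blocks
    blocks = y[:n * m].reshape(n, m).mean(axis=1)
    d = np.diff(blocks)                      # first differences of block means
    return np.sqrt(0.5 * np.mean(d ** 2))

# A perfectly stable link has zero Allan deviation at every averaging time:
print(allan_deviation(np.ones(1000), 10))  # 0.0
```

In practice the 3×10⁻¹⁵ and 8×10⁻¹⁵ budgets are evaluated at tau = 1000 s, i.e. m is chosen so that m·tau0 = 1000 s for the Doppler sampling interval tau0.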
Abstract:
The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ^7Li^+ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts, both the time dilation factor and the ion velocity can be extracted with high accuracy, allowing a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve the robust control of the laser frequencies required for the beam times, a redundant system of frequency standards was developed, consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam was built, together with a fluorescence detection system. During the first experiments, the production, acceleration, and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam allowed its velocity to be measured directly via the Doppler effect for the first time, resulting in a new, improved calibration of the electron cooler. In the following step, the first sub-Doppler spectroscopy signals from an ion beam at 33.8% of the speed of light were recorded. The unprecedented accuracy of these experiments allowed a new upper bound to be derived for possible higher-order deviations from special relativity.
Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus contribute further to tests of the Standard Model.
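The underlying measurement principle, the Ives-Stilwell product relation for parallel and antiparallel excitation, can be sketched in a few lines; the function below is a generic illustration, not the thesis analysis code:

```python
from math import sqrt

def resonant_lab_frequencies(nu0, beta):
    """Lab-frame laser frequencies resonant with a transition of rest-frame
    frequency nu0 in an ion beam moving at beta = v/c, for parallel and
    antiparallel excitation (relativistic Doppler formula)."""
    gamma = 1.0 / sqrt(1.0 - beta ** 2)   # time-dilation factor
    nu_p = nu0 * gamma * (1.0 + beta)     # excitation along the motion
    nu_a = nu0 * gamma * (1.0 - beta)     # excitation against the motion
    return nu_p, nu_a

# Special relativity predicts nu_p * nu_a == nu0**2, independent of beta,
# so measuring both shifts tests time dilation without knowing the velocity.
nu_p, nu_a = resonant_lab_frequencies(1.0, 0.338)  # beta as in the experiment
print(abs(nu_p * nu_a - 1.0) < 1e-12)  # True
```

Any measured deviation of the product nu_p·nu_a from nu0² would signal a departure from the special-relativistic time-dilation factor.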
Abstract:
The goal of this work was to increase the performance of, and calibrate, one of the ROSINA sensors, the Reflectron-type Time-Of-Flight mass spectrometer, currently flying aboard the ESA Rosetta spacecraft. Different optimization techniques were applied to both the lab and space models, and a static calibration was performed using the different gas species expected to be detected in the vicinity of comet 67P/Churyumov-Gerasimenko. The database thus created was successfully applied to space data, giving results consistent with the other ROSINA sensors.
Abstract:
The goal of this work has been to calibrate the sensitivities and fragmentation patterns of various molecules, and to further characterize the lab model of the ROSINA Double Focusing Mass Spectrometer (DFMS) on board ESA's Rosetta spacecraft, bound for comet 67P/Churyumov-Gerasimenko. Detailed calibration and characterization of the instrument is key to understanding and interpreting the results in the coma of the comet. A static calibration was performed for the following species: Ne, Ar, Kr, Xe, H2O, N2, CO2, CH4, C2H6, C3H8, C4H10, and C2H4. The purpose of the calibration was to obtain sensitivities for all detectors and emission settings, to determine the fragmentation behavior of the ion source, and to demonstrate the capability to measure isotopic ratios at the comet. The calibration included the recording of different correction factors needed to evaluate the data, including a detailed investigation of the detector gain. The quality of the calibration could be tested with different gas mixtures, including calibration of the density inside the ion source when calibration gas is introduced from the gas calibration unit. In conclusion, the calibration shows that DFMS meets its design requirements and will be able to measure the D/H ratio at the comet, helping to shed more light on the puzzle of the origin of water on Earth.
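The role of the sensitivity constants obtained in such a static calibration can be sketched as follows; the linear model, names, and numbers are illustrative assumptions, not the actual DFMS data model:

```python
# Assumed linear model: count_rate = sensitivity * density * emission.
def sensitivity(count_rate, density, emission):
    """Instrument sensitivity from a known gas density in the ion source
    (e.g. set via the gas calibration unit) and a measured count rate."""
    return count_rate / (density * emission)

def density_at_comet(count_rate, sens, emission):
    """Invert the same relation in flight: unknown coma density from counts."""
    return count_rate / (sens * emission)

# Round trip with made-up numbers: calibrate on the ground, apply in flight.
s = sensitivity(1.0e4, 1.0e8, 0.2)
print(density_at_comet(1.0e4, s, 0.2))  # recovers 1e8
```

Species-dependent fragmentation patterns and detector-gain corrections would enter as additional per-species factors on the count rate before this inversion.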
Abstract:
High-resolution, small-bore PET systems suffer from a tradeoff between system sensitivity and image quality. In these systems, long crystals are necessary for high system sensitivity, but they allow mispositioning of the line of response due to parallax error, and this mispositioning blurs the resolution. One means of allowing long crystals without introducing parallax error is to determine the depth of interaction (DOI) of the gamma ray within the detector module. While DOI has been investigated previously, newly available solid-state photomultipliers (SSPMs) are well suited to PET applications and enable new modules for investigation. Depth of interaction in full modules is a relatively new field, so even where high-performance DOI-capable modules exist, appropriate means to characterize and calibrate them do not. This work presents an investigation of DOI-capable arrays and techniques for characterizing and calibrating those modules. The methods introduced here accurately and reliably characterize and calibrate energy, timing, and event interaction positioning. Additionally presented are a characterization of the spatial resolution of DOI-capable modules and a measurement of DOI effects for different angles between detector modules. These arrays have been built into a prototype PET system that delivers better than 2.0 mm resolution with a single-sided stopping power in excess of 95% for 511 keV γ rays. The noise properties of SSPMs scale with the active area of the detector face, so the best signal-to-noise ratio is obtained by reading out each SSPM photodetector pixel in parallel rather than multiplexing signals together. This work additionally investigates several algorithms for improving timing performance using timing information from multiple SSPM pixels when light is distributed among several photodetectors.
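A common baseline when combining timing information from several pixels is an inverse-variance weighted average of the per-pixel timestamps; this sketch is a generic illustration of that idea, not one of the specific algorithms investigated here:

```python
def combined_event_time(times, variances):
    """Combine per-SSPM-pixel timestamps into one event time, weighting
    each pixel by the inverse of its timing variance (pixels that saw
    more scintillation light have lower variance, hence more weight)."""
    weights = [1.0 / v for v in variances]
    return sum(w * t for w, t in zip(weights, times)) / sum(weights)

# Two pixels with equal variance: the combined time is the midpoint.
print(combined_event_time([0.0, 2.0], [1.0, 1.0]))  # 1.0
```

With parallel readout of every pixel, such a combination can beat the timing of any single pixel, which is one motivation for avoiding multiplexed signals.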
Abstract:
The main purpose of robot calibration is the correction of possible errors in the robot parameters. This paper presents a method for the kinematic calibration of a parallel robot equipped with one camera in hand. In order to preserve the mechanical configuration of the robot, the camera is used to acquire incremental positions of the end effector from a spherical object fixed in the world reference frame. The positions of the end effector are related to the incremental positions of the resolvers of the robot's motors, and a kinematic model of the robot is used to find a new set of parameters that minimizes the errors in the kinematic equations. Additionally, properties of the spherical object and the intrinsic camera parameters are used to model the projection of the object in the image and to improve the spatial measurements. Finally, the robotic system is designed to carry out tracking tasks, and the calibration of the robot is validated by integrating the errors of the visual controller.
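The parameter-fitting step can be illustrated on a deliberately tiny model, with a single unknown link length rather than this paper's parallel-robot kinematics; all names and numbers are illustrative:

```python
import numpy as np

# Toy calibration: given joint angles q_i (from resolvers) and measured
# end-effector positions x_i (from the camera), find the link length L
# that minimizes the kinematic-equation residuals x_i - L*[cos q_i, sin q_i]
# in the least-squares sense. The 1-parameter problem is linear in L.
def calibrate_link_length(angles, positions):
    a = np.stack([np.cos(angles), np.sin(angles)], axis=1).ravel()
    b = np.asarray(positions).ravel()
    return float(a @ b / (a @ a))  # closed-form least-squares solution

q = np.array([0.0, 0.5, 1.0, 1.5])
true_L = 0.75
x = true_L * np.stack([np.cos(q), np.sin(q)], axis=1)  # noiseless data
print(round(calibrate_link_length(q, x), 3))  # 0.75
```

A real parallel-robot calibration solves the same kind of residual-minimization problem, but with many coupled parameters and a nonlinear kinematic model, typically via iterative nonlinear least squares.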
Resumo:
This article presents in an informal way some early results on the design of a series of paradigms for visualization of the parallel execution of logic programs. The results presented here refer to the visualization of or-parallelism, as in MUSE and Aurora, deterministic dependent and-parallelism, as in Andorra-I, and independent and-parallelism as in &-Prolog. A tool has been implemented for this purpose and has been interfaced with these systems. Results are presented showing the visualization of executions from these systems and the usefulness of the resulting tool is briefly discussed.
Abstract:
Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties we focus on those related to execution cost and resource usage, in a wide sense, as they can be linked to QoS characteristics. To attain more accuracy, we formulate execution cost / resource usage as functions of the input data (or appropriate abstractions thereof) and show how these functions can be used to make better, more informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on the one hand, synthesizing these functions automatically from the definitions of the different orchestrations taking part in a system and, on the other, using them effectively to reduce the overall costs of non-trivial service-based systems whose behavior is sensitive to input data and which may fail. We validate our approach by simulating scenarios that require runtime selection of services and adaptation due to service failure. A number of rebinding strategies, including the use of cost functions, are compared.
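The use of data-dependent cost functions at (re)binding time can be illustrated with a toy selector; the service names, cost functions, and failure model below are all made up for illustration:

```python
# Expected cost of invoking a service on an input of size n: its base cost
# plus the failure probability times a (data-dependent) retry penalty.
def expected_cost(cost_fn, fail_prob, retry_penalty, n):
    return cost_fn(n) + fail_prob * retry_penalty(n)

# Two hypothetical candidate services with different cost shapes:
services = {
    "svc_linear": (lambda n: 2.0 * n, 0.01),   # cheap on small inputs
    "svc_const":  (lambda n: 150.0,   0.05),   # flat fee, riskier
}

def choose(n, retry_penalty=lambda n: 3.0 * n):
    # Pick the candidate with the lowest expected cost for THIS input size.
    return min(services, key=lambda s: expected_cost(
        services[s][0], services[s][1], retry_penalty, n))

print(choose(10))    # svc_linear: the linear service wins on small inputs
print(choose(1000))  # svc_const: the flat-fee service wins on large inputs
```

The point of synthesizing cost functions statically is precisely that a selector like `choose` can rank candidates per input, instead of relying on a single averaged QoS figure.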