865 results for Distributed instrumentation
Abstract:
In this paper we study the approximate controllability of control systems with states and controls in Hilbert spaces, and described by a second-order semilinear abstract functional differential equation with infinite delay. Initially we establish a characterization for the approximate controllability of a second-order abstract linear system and, in the last section, we compare the approximate controllability of a semilinear abstract functional system with the approximate controllability of the associated linear system. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
We suggest a new notion of behaviour-preserving refinement based on partial-order semantics, called transition refinement. We introduced transition refinement for elementary (low-level) Petri nets earlier. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used. In this paper, we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that a replacement of a transition is a transition refinement.
Abstract:
In adolescent idiopathic scoliosis (AIS) there has been a shift towards increasing the number of implants and pedicle screws, which has not been proven to improve cosmetic correction. To evaluate whether increasing cost of instrumentation correlates with cosmetic correction using clinical photographs, 58 Lenke 1A and 1B cases from a multicenter AIS database with at least 3 months' follow-up of clinical photographs were used for analysis. Cosmetic parameters on PA and forward-bending photographs included angular measurements of trunk shift, shoulder balance, and rib hump, and ratio measurements of waistline asymmetry. Pre-op and follow-up X-rays were measured for coronal and sagittal deformity parameters. Cost density was calculated by dividing the total cost of instrumentation by the number of vertebrae being fused. Linear regression and Spearman's correlation were used to correlate cost density to X-ray and photo outcomes. Three independent observers verified radiographic and cosmetic parameters for inter/intraobserver variability analysis. Average pre-op Cobb angle and instrumented correction were 54° (SD 12.5) and 59% (SD 25), respectively. The average number of vertebrae fused was 10 (SD 1.9). The total cost of spinal instrumentation ranged from $6,769 to $21,274 (mean $12,662, SD $3,858). There was a weak positive and statistically significant correlation between Cobb angle correction and cost density (r = 0.33, p = 0.01), and no correlation between Cobb angle correction of the uninstrumented lumbar spine and cost density (r = 0.15, p = 0.26). There was no significant correlation between any of the sagittal X-ray measurements or photo parameters and cost density. There was good to excellent inter/intraobserver variability for all photographic parameters based on the intraclass correlation coefficient (ICC 0.74-0.98).
Our method of measuring cosmesis had good to excellent inter/intraobserver variability and may be an effective tool to objectively assess cosmesis from photographs. Since increasing cost density only mildly improves the Cobb angle correction of the main thoracic curve, and not the correction of the uninstrumented spine or any of the cosmetic parameters, one should consider the cost of increasing implant density in Lenke 1A and 1B curves. In an era of rationalization of health care expenses, this study demonstrates that increasing the number of implants does not improve any relevant cosmetic or radiographic outcomes.
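The cost-density metric and rank correlation used in this abstract are straightforward to reproduce. A minimal sketch follows, with hypothetical case values (not the study's data) and a plain-Python Spearman correlation without tie correction:

```python
def cost_density(total_cost, n_fused):
    """Cost density = total instrumentation cost / number of fused vertebrae."""
    return total_cost / n_fused

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on ranks.
    Toy version: assumes no ties in either input."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical cases: (total instrumentation cost, vertebrae fused, % Cobb correction)
cases = [(6769, 9, 41.0), (12662, 10, 57.0), (16500, 11, 63.0), (21274, 12, 66.0)]
densities = [cost_density(c, n) for c, n, _ in cases]
corrections = [corr for _, _, corr in cases]
rho = spearman_rho(densities, corrections)
```

A real analysis would use a tie-corrected implementation (e.g. `scipy.stats.spearmanr`), which also reports the p-value quoted in the abstract.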
Abstract:
A 250-μm-diameter fiber of ytterbium-doped ZBLAN (fluorine combined with Zr, Ba, La, Al, and Na) has been cooled from room temperature. We coupled 1.0 W of laser light from a 1013-nm diode laser into the fiber. We measured the temperature of the fiber by using both fluorescence techniques and a microthermocouple. These microthermocouple measurements show that the cooled fiber can be used to refrigerate materials brought into contact with it. This, in conjunction with the use of a diode laser as the light source, demonstrates that practical solid-state laser coolers can be realized. (C) 2001 Optical Society of America.
Abstract:
Laser heating Ar-40/Ar-39 geochronology provides high analytical precision and accuracy, μm-scale spatial resolution, and statistically significant data sets for the study of geological and planetary processes. A newly commissioned Ar-40/Ar-39 laboratory at CPGeo/USP, Sao Paulo, Brazil, equips the Brazilian scientific community with a powerful new tool applicable to the study of geological and cosmochemical processes. Detailed information about laboratory layout, environmental conditions, and instrumentation provides the necessary parameters for evaluating the suitability of the CPGeo/USP Ar-40/Ar-39 laboratory for a diverse range of applications. Details about analytical procedures, including mineral separation, irradiation at the IPEN/CNEN reactor at USP, and mass spectrometric analysis, enable potential researchers to design the sampling and sample preparation program suited to the objectives of their study. Finally, the results of calibration tests using Ca and K salts and glasses, international mineral standards, and in-house mineral standards show that the accuracy and precision obtained at the Ar-40/Ar-39 laboratory at CPGeo/USP are comparable to results obtained in the most respected laboratories internationally. The extensive calibration and standardization procedures undertaken ensure that the results of analytical studies carried out in our laboratories will gain immediate international credibility, enabling Brazilian students and scientists to conduct forefront research in earth and planetary sciences.
Abstract:
Fixed-point roundoff noise in digital implementations of linear systems arises from overflow, quantization of coefficients and input signals, and arithmetical errors. In uniform white-noise models, the last two types of roundoff errors are regarded as uniformly distributed independent random vectors on cubes of suitable size. For input-signal quantization errors, the heuristic model is justified by a quantization theorem, which cannot be directly applied to arithmetical errors because of the complicated input-dependence of those errors. The complete uniform white-noise model is shown to be valid, in the sense of weak convergence of probability measures as the lattice step tends to zero, if the matrices of the state-space realization of the system satisfy certain nonresonance conditions and the finite-dimensional distributions of the input signal are absolutely continuous.
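The uniform white-noise model for input-signal quantization can be illustrated numerically: quantizing a smooth (here Gaussian) input on a lattice of step q yields errors that are approximately uniform on [-q/2, q/2], with mean near 0 and variance near q²/12. A minimal sketch with illustrative parameters, not taken from the paper:

```python
import random

def quantize(x, q):
    """Round x to the nearest point of a lattice with step q."""
    return q * round(x / q)

# Sample a smooth (Gaussian) input and collect the quantization errors.
random.seed(0)
q = 0.01
errors = [x - quantize(x, q) for x in (random.gauss(0.0, 1.0) for _ in range(20000))]

# Under the uniform white-noise model the errors behave like independent
# uniform draws on [-q/2, q/2]: mean ~ 0, variance ~ q**2 / 12.
mean = sum(errors) / len(errors)
var = sum(e * e for e in errors) / len(errors)
```

The quantization theorem mentioned in the abstract makes this heuristic precise for absolutely continuous input distributions; the arithmetical-error case is the part that requires the nonresonance conditions.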
Abstract:
The equipment used to measure magnetic fields and electric currents in residences is described. The instrumentation consisted of current transformers, magnetic field probes, and locally designed and built signal conditioning modules. The data acquisition system was capable of unattended recording for extended time periods. The complete system was calibrated to verify its response to known physical inputs. (C) 2003 ISA - The Instrumentation, Systems, and Automation Society.
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is through motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique in which a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. The novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coded blocks, for blocks where MCI produces SI with lower correlation. The overall solution is evaluated in terms of RD performance, with improvements up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
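At its core, block-level MCI amounts to a symmetric motion search between the past and future reference frames, followed by averaging the two matched blocks. The toy sketch below illustrates the idea only; the block size B, search range R, SAD matching, and simple averaging are illustrative assumptions, not the authors' codec:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def interpolate_block(past, future, y, x, B=8, R=4):
    """Estimate one BxB block of the missing frame at (y, x) by symmetric
    motion search: find the displacement (dy, dx) minimizing the SAD between
    the past block at (y-dy, x-dx) and the future block at (y+dy, x+dx),
    then average the two matched blocks to form the side information."""
    H, W = past.shape
    best_cost, best_si = None, None
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            py, px = y - dy, x - dx   # candidate position in the past frame
            fy, fx = y + dy, x + dx   # mirrored position in the future frame
            if not (0 <= py and py + B <= H and 0 <= px and px + B <= W and
                    0 <= fy and fy + B <= H and 0 <= fx and fx + B <= W):
                continue
            pb = past[py:py + B, px:px + B]
            fb = future[fy:fy + B, fx:fx + B]
            cost = sad(pb, fb)
            if best_cost is None or cost < best_cost:
                best_cost = cost
                best_si = (pb.astype(int) + fb.astype(int)) // 2
    return best_si.astype(np.uint8)
```

A real decoder would add sub-pixel refinement and spatial motion smoothing; here the point is only that SI quality is decided block by block, which is what motivates switching to the MCQE mode for poorly correlated blocks.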
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensation errors in some frame regions and rather small ones in others, depending on the video content. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements up to 1.2 dB over a WZ-coding-mode-only solution.