997 results for "integrity verification technique"


Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; and (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry.
The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards (as opposed to becoming so only when the system is final) and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
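The layout optimisation in step (iii) can be illustrated with a small sketch. This is not the authors' tool: the cache geometry, the candidate-slot search, and the call-affinity weights below are all assumed for the example. The idea is to place functions in memory so that functions that interact frequently do not map to the same direct-mapped instruction-cache sets, which is one way layout-induced cache jitter arises.

```python
# Illustrative sketch (assumed geometry and weights, not the authors' method):
# greedily assign addresses to functions, penalising placements where hot
# caller/callee pairs would occupy overlapping cache sets.

CACHE_SIZE = 4096   # bytes, assumed direct-mapped instruction cache
LINE_SIZE = 64      # bytes per cache line

def cache_sets(addr, size):
    """Cache-set indices occupied by the byte range [addr, addr + size)."""
    first = addr // LINE_SIZE
    last = (addr + size - 1) // LINE_SIZE
    return {line % (CACHE_SIZE // LINE_SIZE) for line in range(first, last + 1)}

def place(functions, affinity):
    """Greedy placement.

    functions: list of (name, size) in descending hotness.
    affinity:  dict mapping frozenset({f, g}) -> call-frequency weight.
    """
    layout, cursor = {}, 0
    for name, size in functions:
        best_addr, best_cost = cursor, float("inf")
        # Try a few line-aligned candidate slots at or after the cursor.
        for slot in range(8):
            addr = cursor + slot * LINE_SIZE
            sets = cache_sets(addr, size)
            cost = sum(w for pair, w in affinity.items()
                       if name in pair
                       for other in pair
                       if other in layout and sets & cache_sets(*layout[other]))
            if cost < best_cost:
                best_addr, best_cost = addr, cost
        layout[name] = (best_addr, size)
        cursor = best_addr + size
    return layout
```

For example, a hot 200-byte task and a 100-byte interrupt handler that placed contiguously would share cache set 3 get pushed one slot apart, so their cache sets become disjoint.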

Relevance:

30.00%

Publisher:

Abstract:

Monitoring pathology/regeneration in experimental models of de-/remyelination requires an accurate measure not only of functional changes but also of the amount of myelin. We tested whether X-ray diffraction (XRD), which measures periodicity in unfixed myelin, can assess the structural integrity of myelin in fixed tissue. From laboratories involved in spinal cord injury research and in studying the aging primate brain, we solicited "blind" samples and used an electronic detector to rapidly record diffraction patterns (30 min per pattern) from them. We assessed myelin integrity by measuring its periodicity and relative amount. Fixation of tissue itself introduced ±10% variation in periodicity and ±40% variation in the relative amount of myelin. For samples having the most native-like periods, the relative amounts of myelin detected allowed distinctions to be made between normal and demyelinating segments, between motor and sensory tracts within the spinal cord, and between aged and young primate CNS. Different periodicities also allowed distinctions to be made between samples from spinal cord and nerve roots and between well-fixed and poorly fixed samples. Our findings suggest that, in addition to evaluating the effectiveness of different fixatives, XRD could be used as a robust and rapid technique for quantitating the relative amount of myelin among spinal cords and other CNS tissue samples from experimental models of de- and remyelination.
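For context, the myelin period is recovered from the angular positions of the diffraction peaks via Bragg's law, nλ = 2d sin θ. The sketch below illustrates only this arithmetic; the wavelength and the peak angle are assumed values for the example, not data from the study.

```python
import math

# Illustrative sketch: recover the myelin repeat period d from a diffraction
# peak at scattering angle 2*theta using Bragg's law, n*lambda = 2*d*sin(theta).
# The wavelength and peak positions are assumed, not taken from the paper.

WAVELENGTH_NM = 0.1542  # Cu K-alpha, a common laboratory X-ray wavelength

def period_from_peak(two_theta_deg, order):
    """Period d (nm) from a peak of diffraction order n at angle 2*theta (deg)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * WAVELENGTH_NM / (2.0 * math.sin(theta))

def mean_period(peaks):
    """Average d over several (two_theta_deg, order) peak observations."""
    estimates = [period_from_peak(tt, n) for tt, n in peaks]
    return sum(estimates) / len(estimates)
```

With a repeat period near 17.6 nm, the second-order peak sits near 2θ ≈ 1°, which is why such measurements are done in the small-angle regime.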

Relevance:

30.00%

Publisher:

Abstract:

The time-course of dark adaptation provides valuable insights into the function of, and interactions between, the rod and cone pathways in the retina. Here we describe a technique that uses the flash electroretinogram (ERG) response to probe the functional integrity of the cone and rod pathways during the dynamic process of dark adaptation in the mouse. Retinal sensitivity was estimated from the stimulus intensity required to maintain a 30 µV criterion b-wave response during a 40 min period of dark adaptation. When tracked in this manner, dark adaptation functions in WT mice depended upon the bleaching effects of the initial background adaptation conditions. Altered dark adaptation functions, commensurate with the functional deficit, were recorded in pigmented mice that lacked cone function (Gnat2 cplf3) and in WT mice injected with a toxin, sodium iodate (NaIO3), which targets the retinal pigment epithelium and also has downstream effects on photoreceptors. These data demonstrate that this adaptive tracking procedure measures retinal sensitivity and the contributions of the rod and/or cone pathways during dark adaptation in both WT control and mutant mice.
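The adaptive tracking idea can be sketched as a simple staircase: lower the flash intensity when the b-wave exceeds the 30 µV criterion and raise it when the response falls short, so the tracked intensity follows the recovering threshold. The simulated retina below is a toy model assumed purely for illustration, not the authors' protocol.

```python
# Illustrative sketch (assumed toy model, not the authors' code): an adaptive
# staircase that adjusts stimulus intensity to hold the ERG b-wave at a
# 30 microvolt criterion while the retinal threshold recovers in the dark.

CRITERION_UV = 30.0

def bwave_amplitude(log_intensity, log_threshold):
    """Toy response: amplitude grows with intensity above the current threshold."""
    return max(0.0, 60.0 * (log_intensity - log_threshold))

def track_sensitivity(minutes, step=0.1):
    """Step intensity down when the response exceeds criterion, up when it
    falls short; the tracked intensity mirrors the recovering threshold."""
    log_intensity = 2.0
    history = []
    for t in range(minutes):
        log_threshold = 2.0 - 0.05 * t      # toy dark-adaptation recovery
        amp = bwave_amplitude(log_intensity, log_threshold)
        if amp > CRITERION_UV:
            log_intensity -= step
        else:
            log_intensity += step
        history.append(log_intensity)
    return history
```

Over a simulated 40 min session the tracked intensity drifts steadily downward, mirroring the threshold recovery the abstract describes.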

Relevance:

30.00%

Publisher:

Abstract:

The verification possibilities of dynamically collimated treatment beams with a scanning liquid ionization chamber electronic portal imaging device (SLIC-EPID) are investigated. The ion concentration in the liquid of a SLIC-EPID, and therefore the read-out signal, is determined by the two parameters of a differential equation describing the creation and recombination of the ions. Due to the form of this equation, the portal image detector is a nonlinear dynamic system with memory. In this work, the parameters of the differential equation were experimentally determined for the particular chamber in use and for an incident open 6 MV photon beam. The mathematical description of the ion concentration was then used to predict portal images of intensity-modulated photon beams produced by a dynamic delivery technique, the sliding window approach. Due to the nature of the differential equation, a mathematical condition for 'reliable leaf motion verification' in the sliding window technique can be formulated. It is shown that the time constants for both formation and decay of the equilibrium concentration in the chamber are on the order of seconds. In order to guarantee reliable leaf motion verification, these time constants impose a constraint on the rapidity of the image read-out for a given maximum leaf speed. For a leaf speed of 2 cm s(-1), a minimum image acquisition frequency of about 2 Hz is required. Current SLIC-EPID systems are usually too slow, since they need about a second to acquire a portal image. However, if the condition is fulfilled, the memory property of the system can be used to reconstruct the leaf motion. It is shown that a simple edge-detecting algorithm can be employed to determine the leaf positions. The method is also very robust against image noise.
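A minimal sketch of such a two-parameter creation/recombination balance, assuming the common form dn/dt = alpha - beta*n^2 (the abstract does not give the fitted equation or parameter values, so both the form and the numbers here are assumptions for illustration):

```python
# Illustrative sketch (assumed model and parameters): integrate a
# creation/recombination balance dn/dt = alpha - beta*n^2 for the ion
# concentration, and relate leaf speed to the required image read-out rate.

def simulate(alpha, beta, dt=0.001, t_end=10.0):
    """Forward-Euler integration of dn/dt = alpha - beta*n^2 from n(0) = 0."""
    n, trace = 0.0, []
    for _ in range(int(t_end / dt)):
        n += dt * (alpha - beta * n * n)
        trace.append(n)
    return trace

def rise_time(trace, dt=0.001, fraction=0.63):
    """Time to reach `fraction` of the final (equilibrium) value."""
    n_eq = trace[-1]
    for i, n in enumerate(trace):
        if n >= fraction * n_eq:
            return i * dt
    return None

def min_acquisition_hz(leaf_speed_cm_s, position_resolution_cm=1.0):
    """Read-out rate needed to localise a moving leaf to a given resolution."""
    return leaf_speed_cm_s / position_resolution_cm
```

With alpha = beta = 1 the exact solution is n(t) = tanh(t), so the equilibrium is 1 and the 63% rise time is about 0.74 s, i.e. a seconds-scale time constant; and at a 1 cm resolution assumption, a 2 cm/s leaf speed reproduces the abstract's minimum acquisition frequency of 2 Hz.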

Relevance:

30.00%

Publisher:

Abstract:

This thesis covers the correction, verification, development, and implementation of a computational fluid dynamics (CFD) model for an orifice plate meter. Past results were corrected and further expanded upon, with the compressibility effects of acoustic waves taken into account. One dynamic pressure-difference transducer measures the time-varying differential pressure across the orifice meter. A dynamic absolute pressure measurement is also taken at the inlet of the orifice meter, along with a suitable temperature measurement of the mean flow gas. Together, these three measurements allow an incompressible CFD simulation (using a well-tested and robust model) of the cross-section-independent, time-varying mass flow rate through the orifice meter. The mean value of this incompressible mass flow rate is then corrected to match the mean of the measured flow rate (obtained from a Coriolis meter located upstream of the orifice meter). Even with the mean and compressibility corrections, significant differences in the measured mass flow rates at two orifice meters in a common flow stream were observed. This means that the compressibility effects associated with pulsatile gas flows are significant in the measurement of the time-varying mass flow rate. Future work (with the approach and initial runs covered here) will provide an indirect verification of the reported mass flow rate measurements.
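The mean-correction step can be sketched in a few lines. The trace values below are invented, and the additive-offset form of the correction is an assumption for illustration (a multiplicative rescaling would be an equally plausible reading of "corrected to match the mean"):

```python
# Illustrative sketch (assumed, simplified): shift the incompressible CFD
# mass-flow-rate trace so that its mean equals the mean flow rate reported
# by the upstream Coriolis meter.

def mean_correct(cfd_trace, coriolis_mean):
    """Offset the CFD trace so its mean matches the Coriolis reference mean."""
    cfd_mean = sum(cfd_trace) / len(cfd_trace)
    offset = coriolis_mean - cfd_mean
    return [m + offset for m in cfd_trace]
```

The correction preserves the time-varying (pulsatile) shape of the CFD trace and fixes only its mean, which is exactly why residual discrepancies between two meters point at compressibility effects rather than a mean-flow bias.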

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Compare changes in P-wave amplitude of the intra-atrial electrocardiogram (ECG) and its corresponding transesophageal echocardiography (TEE)-controlled position to verify the exact localization of a central venous catheter (CVC) tip. DESIGN: A prospective study. SETTING: University, single-institutional setting. PARTICIPANTS: Two hundred patients undergoing elective cardiac surgery. INTERVENTIONS: CVC placement via the right internal jugular vein with ECG control using the guidewire technique and TEE control in 4 different phases: phase 1: CVC placement with normalized P wave and measurement of the distance from the crista terminalis to the CVC tip; phase 2: TEE-controlled placement of the CVC tip parallel to the superior vena cava (SVC) and measurement of P-wave amplitude; phase 3: influence of head positioning on CVC migration; and phase 4: evaluation of CVC positioning postoperatively using a chest x-ray. MEASUREMENTS AND MAIN RESULTS: The CVC tip could be visualized on TEE with a normalized P wave in only 67 patients. In 198 patients with the CVC parallel to the SVC wall as controlled by TEE (phase 2), an elevated P wave was observed. Different head movements led to no significant migration of the CVC (phase 3). On the postoperative chest x-ray, the CVC position was correct in 87.6% of patients (phase 4). CONCLUSION: The study suggests that the CVC tip is located parallel to the SVC and 1.5 cm above the crista terminalis if the P wave starts to decrease during withdrawal of the catheter. The authors recommend that ECG control as performed in this study be routinely used for placement of central venous catheters via the right internal jugular vein.

Relevance:

30.00%

Publisher:

Abstract:

Establishing precise age-depth relationships of high-alpine ice cores is essential in order to deduce conclusive paleoclimatic information from these archives. Radiocarbon dating of carbonaceous aerosol particles incorporated in such glaciers is a promising tool to gain absolute ages, especially from the deepest parts where conventional methods are commonly inapplicable. In this study, we present a new validation for a published C-14 dating method for ice cores. Previously C-14-dated horizons of organic material from the Juvfonne ice patch in central southern Norway (61.676 degrees N, 8.354 degrees E) were used as reference dates for adjacent ice layers, which were C-14 dated based on their particulate organic carbon (POC) fraction. Multiple measurements were carried out on 3 sampling locations within the ice patch featuring modern to multimillennial ice. The ages obtained from the analyzed samples were in agreement with the given age estimates. In addition to previous validation work, this independent verification gives further confidence that the investigated method provides the actual age of the ice.

Relevance:

30.00%

Publisher:

Abstract:

The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), a patient's portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology.
Both the moments method and the cross-correlation technique were implemented within an experimental radiotherapy picture archival and communication system (RT-PACS) and were used clinically to evaluate the setup variability of two groups of cancer patients treated with and without an alpha-cradle immobilization aid. The tools developed in this project have proven to be very effective and have played an important role in detecting patient alignment errors and field-shape errors in treatment fields formed by a multileaf collimator (MLC).
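A moments-based field alignment can be sketched as follows. This is a simplified illustration, not the dissertation's implementation: for a binary treatment-field mask, the centroid gives the translation and the principal axis of the second-order central moments gives the rotation.

```python
import math

# Illustrative sketch (simplified, assumed): translation and rotation of a
# binary treatment field from its first- and second-order image moments.

def field_moments(pixels):
    """Centroid (cx, cy) and orientation (radians) of a set of (x, y) pixels."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    # Second-order central moments.
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Principal-axis orientation of the field.
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return cx, cy, theta
```

Comparing (cx, cy, theta) between a reference field image and a daily portal image yields the translation and rotation needed to bring them into registration, independent of detector repositioning errors.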

Relevance:

30.00%

Publisher:

Abstract:

Recently it has been proposed that the evaluation of the effects of pollutants on aquatic organisms can provide an early warning system of potential environmental and human health risks (NRC 1991). Unfortunately, there are few methods available to aquatic biologists for conducting assessments of the effects of pollutants on aquatic animal community health. The primary goal of this research was to develop and evaluate the feasibility of such a method. Specifically, the primary objective of this study was to develop a prototype rapid bioassessment technique similar to the Index of Biotic Integrity (IBI) for the upper Texas and Northwestern Gulf of Mexico coastal tributaries. The IBI consists of a series of "metrics", each describing a specific attribute of the aquatic community. Each of these metrics is given a score, which is then subtotaled to derive a total assessment of the "health" of the aquatic community. This IBI procedure may provide an additional assessment tool for professionals in water quality management. The experimental design consisted primarily of compiling previously collected data from monitoring conducted by the Texas Natural Resource Conservation Commission (TNRCC) at five bayous classified according to potential for anthropogenic impact and salinity regime. Standardized hydrological, chemical, and biological monitoring had been conducted in each of these watersheds. The identification and evaluation of candidate metrics for inclusion in the estuarine IBI was conducted through the use of correlation analysis, cluster analysis, stepwise and normal discriminant analysis, and evaluation of cumulative distribution frequencies. Scores for each included metric were determined based on exceedances of specific percentiles. Individual scores were summed, and a total IBI score and rank for the community computed. Results of these analyses yielded the proposed metrics and rankings listed in this report.
Based on the results of this study, incorporation of an estuarine IBI method as a water quality assessment tool is warranted. Adopted metrics were correlated with seasonal trends and, to a lesser extent, with the salinity gradients observed during the study (0-25 ppt). Further refinement of this method is needed using a larger, more inclusive data set which includes additional habitat types, salinity ranges, and temporal variation.
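The percentile-exceedance scoring can be sketched as below. The 5/3/1 scores and the 25th/75th-percentile cutoffs are assumptions for illustration; the report's actual metrics and cutoffs are not reproduced here.

```python
# Illustrative sketch (assumed scores and cutoffs): score each community
# metric against percentile-based cutoffs and sum the scores into a total
# IBI, mirroring the percentile-exceedance scoring described in the abstract.

def score_metric(value, p25, p75, higher_is_better=True):
    """Score 5/3/1 depending on which percentile band the value falls in."""
    if not higher_is_better:
        # Flip the axis so the same comparisons apply to "lower is better".
        value, p25, p75 = -value, -p75, -p25
    if value >= p75:
        return 5
    if value >= p25:
        return 3
    return 1

def total_ibi(metrics):
    """metrics: iterable of (value, p25, p75, higher_is_better) tuples."""
    return sum(score_metric(*m) for m in metrics)
```

Summing the per-metric scores yields the total IBI, and communities can then be ranked by that total, as in the report's final assessment.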

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE In the present case series, the authors report on seven cases of erosively worn dentitions (98 posterior teeth) which were treated with direct resin composite. MATERIALS AND METHODS In all cases, both arches were restored by using the so-called stamp technique. All patients were treated with standardized materials and protocols. Prior to treatment, a waxup was made on die-cast models to build up the loss of occlusion as well as ensure the optimal future anatomy and function of the eroded teeth to be restored. During treatment, teeth were restored by using templates of silicone (ie, two "stamps," one on the vestibular, one on the oral aspect of each tooth), which were filled with resin composite in order to transfer the planned, future restoration (ie, in the shape of the waxup) from the extra- to the intraoral situation. Baseline examinations were performed in all patients after treatment, and photographs as well as radiographs were taken. To evaluate the outcome, the modified United States Public Health Service criteria (USPHS) were used. RESULTS The patients were re-assessed after a mean observation time of 40 months (40.8 ± 7.2 months). The overall outcome of the restorations was good, and almost exclusively "Alpha" scores were given. Only the marginal integrity and the anatomical form received a "Charlie" score (10.2%) in two cases. CONCLUSION Direct resin composite restorations made with the stamp technique are a valuable treatment option for restoring erosively worn dentitions.

Relevance:

30.00%

Publisher:

Abstract:

The focus of this thesis lies in the development of a sensitive method for the analysis of protein primary structure which can be easily used to confirm the DNA sequence of a protein's gene and determine the modifications which are made after translation. This technique involves the use of dipeptidyl aminopeptidase (DAP) and dipeptidyl carboxypeptidase (DCP) to hydrolyze the protein, followed by mass spectrometric analysis of the dipeptide products. Dipeptidyl carboxypeptidase was purified from human lung tissue and characterized with respect to its proteolytic activity. The results showed that the enzyme has a relatively unrestricted specificity, making it useful for the analysis of the C-terminus of proteins. Most of the dipeptide products were identified using gas chromatography/mass spectrometry (GC/MS). In order to analyze the peptides not hydrolyzed by DCP and DAP, as well as the dipeptides not identified by GC/MS, a FAB ion source was installed on a quadrupole mass spectrometer and its performance evaluated with a variety of compounds. Using these techniques, the sequences of the N-terminal and C-terminal regions and seven fragments of bacteriophage P22 tail protein have been verified. All of the dipeptides identified in these analyses were in the same DNA reading frame, thus ruling out the possibility of a single base being inserted into or deleted from the DNA sequence. The verification of small sequences throughout the protein sequence also indicates that no large portions of the protein have been removed after translation.
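The identification step relies on predicting the masses of the dipeptides that sequential DAP cleavage releases from the N-terminus and matching them against the observed spectra. A sketch using standard monoisotopic residue masses (the enzyme chemistry is simplified here; real digests are rarely this clean, and the residue table is truncated for brevity):

```python
# Illustrative sketch (simplified, assumed): predict neutral monoisotopic
# masses of dipeptides released by successive DAP cleavage from the
# N-terminus, the products sought in the GC/MS or FAB mass spectra.

MONO_RESIDUE = {  # standard monoisotopic residue masses (Da), partial table
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
    "D": 115.02694, "K": 128.09496, "E": 129.04259, "F": 147.06841,
}
WATER = 18.01056  # mass of H2O added on hydrolysis

def dap_dipeptides(sequence):
    """Dipeptides released from the N-terminus and their neutral masses."""
    products = []
    for i in range(0, len(sequence) - 1, 2):
        pair = sequence[i:i + 2]
        mass = MONO_RESIDUE[pair[0]] + MONO_RESIDUE[pair[1]] + WATER
        products.append((pair, round(mass, 4)))
    return products
```

Because DAP removes residues strictly two at a time, every matched dipeptide pins down two consecutive positions in one reading frame, which is what lets a single base insertion or deletion in the DNA sequence be ruled out.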

Relevance:

30.00%

Publisher:

Abstract:

The usage of intensity modulated radiotherapy (IMRT) treatments necessitates a significant amount of patient-specific quality assurance (QA). This research has investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verifications, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, length of time between exposure and processing, and phantom material. Precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2mm accuracy for single-beam fluence map verifications and to 5%/2mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and percent of pixels failing the gamma index were exponentially distributed and dependent upon the measurement phantom but not the treatment site. 
Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
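The gamma-style comparison underlying tolerances such as 2%/2mm can be sketched in one dimension. This is a simplified illustration, not the NAT index itself, and it assumes doses normalised to 1.0 so that a dose tolerance of 0.02 corresponds to 2%.

```python
import math

# Illustrative 1D sketch (simplified, assumed normalisation): each measured
# point passes (gamma <= 1) if some calculated point lies within the combined
# dose-difference / distance-to-agreement tolerance ellipse, e.g. 2%/2 mm.

def gamma_1d(measured, calculated, spacing_mm, dose_tol=0.02, dta_mm=2.0):
    """Per-point gamma for two dose profiles sampled on the same grid."""
    gammas = []
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dc in enumerate(calculated):
            dist = (i - j) * spacing_mm          # spatial separation (mm)
            ddose = dc - dm                       # dose difference (fraction)
            g = math.sqrt((dist / dta_mm) ** 2 + (ddose / dose_tol) ** 2)
            best = min(best, g)
        gammas.append(best)
    return gammas
```

The "percent of pixels failing the gamma index" statistic mentioned in the abstract is then simply the fraction of points whose gamma exceeds 1.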

Relevance:

30.00%

Publisher:

Abstract:

The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
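A toy example of the underlying idea, assumed for illustration (written in Python rather than Ciao's Prolog, and far simpler than CiaoPP's domains): an abstract interpreter over the sign domain soundly over-approximates arithmetic without ever running the program on concrete inputs.

```python
# Illustrative sketch (assumed, not CiaoPP): abstract interpretation of
# expressions over the sign domain {NEG, ZERO, POS, TOP}, where TOP means
# "any sign" and every abstract operation over-approximates the concrete one.

NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abs_add(a, b):
    """Abstract addition: sound over-approximation of + on signs."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b and a in (POS, NEG):
        return a            # pos + pos = pos, neg + neg = neg
    return TOP              # e.g. pos + neg could have any sign

def abs_mul(a, b):
    """Abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def analyse(expr, env):
    """Abstractly evaluate nested ('add'|'mul', e1, e2) trees or variables."""
    if isinstance(expr, str):
        return env[expr]
    op, e1, e2 = expr
    f = abs_add if op == "add" else abs_mul
    return f(analyse(e1, env), analyse(e2, env))
```

Given only that x is positive and y is negative, the analysis proves x*y is negative and x*x + x is positive for every concrete run, while honestly answering "unknown" (TOP) for x + y; richer domains (shapes, sizes, cost bounds) follow the same pattern.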