897 results for fault correction
Abstract:
Pectus carinatum is a deformity of the chest wall characterized by an anterior protrusion of the sternum, often corrected surgically for cosmetic reasons. This work presents an alternative to the current open-surgery option, proposing a novel technique based on a personalized orthosis. Two processes for personalizing the orthosis are presented: one based on a 3D laser scan of the patient's chest, followed by reconstruction of the thoracic wall mesh using a radial basis function, and a second based on a computed tomography scan followed by a neighbouring-cells algorithm. The axial position where the orthosis is to be located is computed automatically using a ray-triangle intersection method, whose outcome is the input to a pseudo-Kochanek interpolating spline that defines the orthosis curvature. Results show no significant differences between the patient's chest physiognomy and the curvature angle and size of the orthosis, allowing a better cosmetic outcome and less initial discomfort.
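As a side note on the ray-triangle intersection step: the abstract does not name the specific test used, so the sketch below uses the classic Möller-Trumbore algorithm purely as an illustration of how each chest-mesh triangle could be tested against an axial ray; the function and argument names are assumptions, not the implementation used in the paper.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore test: return the distance t along the ray at which it
    crosses triangle (v0, v1, v2), or None if there is no intersection.
    Illustrative sketch only, not the paper's implementation."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

# Example: a ray cast along +z against a triangle lying in the z = 2 plane.
print(ray_triangle_intersect([0.2, 0.2, 0.0], [0.0, 0.0, 1.0],
                             [0, 0, 2], [1, 0, 2], [0, 1, 2]))  # -> 2.0
```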
Abstract:
In areas cultivated under a no-tillage system, the availability of phosphorus (P) can be raised by means of gradual corrective fertilization, applying phosphorus into the sowing furrows at doses higher than those required by the crops. The objective of this work was to establish the amount of P to be applied to a soybean crop to increase the soil P content to pre-established values at the depth of 0.0 to 0.10 m. An experiment was carried out on a clayey Haplorthox soil using a randomized block design arranged in split-split plots, with four replications. Two soybean crop systems (single or intercropped with Panicum maximum Jacq. cv. Aruana) were evaluated in the plots; four P levels (0, 60, 120 and 180 kg ha-1 P2O5) applied in the first year were evaluated in the split plots; and four P levels (0, 30, 60 and 90 kg ha-1 P2O5) applied in the two subsequent crops were evaluated in the split-split plots. P contents were extracted by the Mehlich-1 and anion-exchange resin methods from soil samples collected in the split-split plots. It was found that 19.4 or 11.1 kg ha-1 of P2O5, supplied as triple superphosphate, is needed to increase P extracted by Mehlich-1 or resin, respectively, by 1 mg dm-3 in the 0.0 to 0.10 m layer. The P-sink character of the soil decreases as the amount of this nutrient supplied in the previous crops increases.
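From the conversion factors reported above (19.4 and 11.1 kg ha-1 of P2O5 per 1 mg dm-3 of P extracted by Mehlich-1 and resin, respectively), the implied dose calculation can be sketched as follows; the function name and interface are illustrative only.

```python
# Factors reported in the abstract: kg/ha of P2O5 (as triple superphosphate)
# needed to raise extractable P by 1 mg/dm3 in the 0.0-0.10 m layer.
KG_P2O5_PER_MG_DM3 = {"mehlich1": 19.4, "resin": 11.1}

def corrective_p2o5_dose(current_p, target_p, method="mehlich1"):
    """Return the P2O5 dose (kg/ha) implied by the reported factors to move
    soil P (mg/dm3) from current_p up to target_p. Illustrative sketch only."""
    deficit = max(target_p - current_p, 0.0)
    return deficit * KG_P2O5_PER_MG_DM3[method]

# Example: raising Mehlich-1 P from 8 to 15 mg/dm3
print(corrective_p2o5_dose(8.0, 15.0, "mehlich1"))  # 7 * 19.4 = 135.8 kg/ha
```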
Abstract:
A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, n being the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one computationally intensive Finite Element Analysis per candidate solution and, thus, a parallel implementation of the Genetic Algorithm is developed.
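To make the optimization set-up concrete: each candidate solution is a permutation of the tightening order and each fitness evaluation is one finite element analysis, which motivates the parallel implementation. The sketch below shows the general shape of such a permutation GA; the toy fitness stands in for the FEA, and the specific operators (order crossover, swap mutation) and the multiprocessing pool are assumptions, not the authors' implementation.

```python
import random
from multiprocessing import Pool

def corrective_force(sequence):
    """Stand-in for the geometrically non-linear FEA of the rod/vertebra model:
    in the real problem this returns the peak corrective force for the given
    tightening order. A toy penalty keeps the sketch runnable."""
    return sum(abs(v - i) for i, v in enumerate(sequence, start=1))

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def swap_mutation(seq, rate=0.2):
    seq = seq[:]
    if random.random() < rate:
        a, b = random.sample(range(len(seq)), 2)
        seq[a], seq[b] = seq[b], seq[a]
    return seq

def genetic_algorithm(n_vertebrae, pop_size=24, generations=50, workers=8):
    pop = [random.sample(range(1, n_vertebrae + 1), n_vertebrae)
           for _ in range(pop_size)]
    with Pool(workers) as pool:      # one (expensive) evaluation per candidate
        for _ in range(generations):
            fitness = pool.map(corrective_force, pop)
            ranked = [s for _, s in sorted(zip(fitness, pop))]
            elite = ranked[: pop_size // 2]
            pop = elite + [swap_mutation(order_crossover(*random.sample(elite, 2)))
                           for _ in range(pop_size - len(elite))]
    return ranked[0]

# Example with the toy fitness: best = genetic_algorithm(n_vertebrae=12)
```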
Abstract:
Tomographic images can be degraded, in part by patient-dependent attenuation. The aim of this paper is to quantitatively assess the effects of the Chang and CT-based attenuation correction methods in 111In studies through the analysis of profiles from abdominal SPECT corresponding to an organ with uniform radionuclide uptake, the left kidney.
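For context on the Chang method referred to above: the first-order Chang correction multiplies each reconstructed pixel by the inverse of its attenuation factor averaged over the projection angles. The sketch below assumes a uniform attenuation coefficient and a binary body mask; the ray-marching path-length estimate and all names are illustrative and are not the implementation evaluated in this paper.

```python
import numpy as np

def chang_correction_map(body_mask, mu, n_angles=64, pixel_size=1.0):
    """First-order Chang factors C(x, y) = 1 / mean_i exp(-mu * l_i), where
    l_i is the path length from the pixel to the body edge at angle i.
    body_mask is a boolean image; mu is the linear attenuation coefficient
    expressed per pixel-size unit. Slow, purely illustrative implementation."""
    ny, nx = body_mask.shape
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    correction = np.ones((ny, nx))
    ys, xs = np.nonzero(body_mask)
    for y, x in zip(ys, xs):
        factors = []
        for theta in angles:
            dx, dy = np.cos(theta), np.sin(theta)
            cx, cy, length = float(x), float(y), 0.0
            # march outwards until the ray leaves the body mask
            while (0 <= int(round(cy)) < ny and 0 <= int(round(cx)) < nx
                   and body_mask[int(round(cy)), int(round(cx))]):
                cx, cy, length = cx + dx, cy + dy, length + pixel_size
            factors.append(np.exp(-mu * length))
        correction[y, x] = 1.0 / np.mean(factors)
    return correction

# Usage sketch (mu value illustrative, expressed per pixel-size unit):
# corrected_slice = reconstructed_slice * chang_correction_map(mask, mu=0.15)
```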
Abstract:
The Gulf of Cadiz, as part of the Azores-Gibraltar plate boundary, is recognized as a potential source of big earthquakes and tsunamis that may affect the bordering countries, as occurred on 1 November 1755. Preparing for the future, Portugal is establishing a national tsunami warning system in which the threat caused by any large-magnitude earthquake in the area is estimated from a comprehensive database of scenarios. In this paper we summarize the knowledge about the active tectonics in the Gulf of Cadiz and integrate the available seismological information in order to propose the generation model of destructive tsunamis to be applied in tsunami warnings. The fault model derived is then used to estimate the recurrence of large earthquakes using the fault slip rates obtained by Cunha et al. (2012) from thin-sheet neotectonic modelling. Finally we evaluate the consistency of seismicity rates derived from historical and instrumental catalogues with the convergence rates between Eurasia and Nubia given by plate kinematic models.
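As general background on turning fault slip rates into recurrence estimates (a textbook moment-balance sketch, not necessarily the procedure applied to the Cunha et al. (2012) slip rates): the seismic moment of a characteristic event, M0 = mu*A*D, is divided by the moment accumulation rate mu*A*(slip rate). The numbers in the example are purely illustrative.

```python
RIGIDITY = 3.0e10  # Pa, a commonly assumed crustal shear modulus

def moment_from_magnitude(mw):
    """Seismic moment (N*m) from moment magnitude: Mw = (2/3)(log10 M0 - 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def recurrence_interval_years(mw, fault_area_km2, slip_rate_mm_per_yr):
    """Recurrence interval assuming all accumulated moment is released in
    characteristic events of magnitude mw. Purely illustrative."""
    m0 = moment_from_magnitude(mw)
    area_m2 = fault_area_km2 * 1.0e6
    slip_rate_m_per_yr = slip_rate_mm_per_yr * 1.0e-3
    moment_rate = RIGIDITY * area_m2 * slip_rate_m_per_yr  # N*m per year
    return m0 / moment_rate

# Illustrative numbers only (not the Gulf of Cadiz fault parameters):
print(recurrence_interval_years(mw=8.5, fault_area_km2=100 * 50,
                                slip_rate_mm_per_yr=4.0))
```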
Abstract:
Introduction: Myocardial Perfusion Imaging (MPI) is a very important tool in the assessment of Coronary Artery Disease (CAD) patients, and worldwide data demonstrate its increasingly wide use and clinical acceptance. Nevertheless, it is a complex process and quite vulnerable to a number of possible artefacts, some of which seriously affect the overall quality and clinical utility of the data obtained. One of the most inconvenient, yet relatively frequent (about 20% of cases), artefacts is related to patient motion during image acquisition. In most of those situations, the specific data are evaluated and a decision is made between (A) accepting the results as they are, considering that the "noise" so introduced does not seriously affect the final clinical information, or (B) repeating the acquisition. Another possibility is to use the motion correction software provided within the software package of any current gamma camera. The aim of this study is to compare the quality of the final images obtained after application of motion correction software and after repetition of the image acquisition. Material and Methods: Thirty MPI cases affected by motion artefacts, and subsequently repeated, were used. A group of three independent expert Nuclear Medicine clinicians (blinded to the origin of the images) was invited to evaluate the 30 sets of three images, one set per patient: (A) the original image, motion-uncorrected; (B) the original image, motion-corrected; and (C) the second acquisition, without motion. The results were statistically analysed. Results and Conclusion: The results demonstrate that the motion correction software is useful essentially when the amplitude of movement is not too large (a specific quantification proved hard to define precisely, due to discrepancies between clinicians and other factors, namely between camera brands); when that is not the case and the amplitude of movement is too large, the percentage of agreement between clinicians is much higher and repetition of the examination is unanimously considered indispensable.
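For context on what such motion correction software typically does (the vendor algorithms are not specified in the abstract, so this is a generic assumption): one simple approach estimates the axial shift of each projection by cross-correlating its row-sum profile with a reference projection and shifting it back, as sketched below.

```python
import numpy as np

def estimate_axial_shift(projection, reference):
    """Estimate the axial (row) shift of `projection` relative to `reference`
    by cross-correlating their row-sum profiles. Both are 2-D count arrays
    with rows along the axial axis. Generic sketch, not a vendor algorithm."""
    p = projection.sum(axis=1).astype(float)
    r = reference.sum(axis=1).astype(float)
    p -= p.mean()
    r -= r.mean()
    corr = np.correlate(p, r, mode="full")
    return int(corr.argmax() - (len(r) - 1))   # shift in pixels

def correct_motion(projections):
    """Shift every projection so its axial profile aligns with the first one."""
    reference = projections[0]
    corrected = []
    for proj in projections:
        shift = estimate_axial_shift(proj, reference)
        corrected.append(np.roll(proj, -shift, axis=0))
    return np.stack(corrected)
```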
Abstract:
Introduction: The quantification of differential renal function in adults can be difficult due to many factors, one of which is the variation in kidney depth and the attenuation caused by the tissue between the kidney and the camera. Some authors state that the lower attenuation in pediatric patients makes the use of attenuation correction algorithms unnecessary. This study will compare the values of differential renal function obtained with and without attenuation correction techniques. Material and Methods: Images from a group of 15 individuals (aged 3 years +/- 2) were used, and two attenuation correction methods were applied: Tonnesen correction factors and the geometric mean method. The mean acquisition time (time post 99mTc-DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of any attenuation correction method appears to lead to consistent values that correlate well with those obtained when attenuation correction methods are incorporated. The differences found between the values obtained with and without attenuation correction were not significant. Conclusion: The decision not to apply any attenuation correction method can apparently be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, if a truly accurate value of the relative kidney uptake is required, then an attenuation correction method should be used.
Abstract:
Introduction: Although relative uptake values are not the main objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential to obtain a reliable result from the quantification process. In DMSA scans, however, the absence of significant background and the lower attenuation in pediatric patients mean that these attenuation correction techniques are usually not applied. The geometric mean is the most common method, but it requires the acquisition of an additional anterior projection, which is not acquired by a large number of NM departments. This method and the attenuation factors proposed by Tonnesen are correlated here with the absence of attenuation correction procedures. Material and Methods: Images from 20 individuals (aged 3 years +/- 2) were used and the two attenuation correction methods applied. The mean acquisition time (time post DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation correction methods (r=0.73 +/- 0.11), and the mean difference in the uptake values between the different methods was 4 +/- 3. The correlation was higher at lower ages. The two attenuation correction methods correlated better with each other than with the "no attenuation correction" approach (r=0.82 +/- 0.8), and the mean difference in the uptake values was 2 +/- 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, if an accurate value of the relative kidney uptake is required, then an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen can be easily implemented and thus become a practical alternative, namely when the anterior projection needed for the geometric mean methodology is not acquired.
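To make the two correction schemes concrete, here is a minimal sketch assuming background-subtracted kidney counts are already available; the function names are illustrative, the kidney depths are taken as inputs rather than computed from Tonnesen's regression equations, and the attenuation coefficient is only a commonly quoted narrow-beam approximation for 140 keV photons in water.

```python
import math

def differential_function(left, right):
    """Percent differential renal function attributed to the left kidney."""
    return 100.0 * left / (left + right)

def geometric_mean_counts(posterior, anterior):
    """Geometric mean of conjugate posterior/anterior counts, which largely
    cancels the depth-dependent attenuation."""
    return math.sqrt(posterior * anterior)

def depth_corrected_counts(posterior, depth_cm, mu=0.153):
    """Posterior counts corrected with an exponential factor exp(mu * depth).
    Depths could come, for example, from Tonnesen's regression equations;
    mu = 0.153 /cm is a commonly quoted narrow-beam value, used here only
    for illustration."""
    return posterior * math.exp(mu * depth_cm)

# Example with made-up counts and depths:
gm_left = geometric_mean_counts(posterior=41000, anterior=36000)
gm_right = geometric_mean_counts(posterior=45000, anterior=40000)
print("Geometric mean method:", differential_function(gm_left, gm_right))

dc_left = depth_corrected_counts(41000, depth_cm=4.2)
dc_right = depth_corrected_counts(45000, depth_cm=4.6)
print("Depth-corrected method:", differential_function(dc_left, dc_right))
```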
Abstract:
This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault tolerance mechanisms for Real-Time Systems, using the Ada language (Ada 95) and Commercial Off-The-Shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance through software active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work being done and essential future developments.
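Although the work above targets Ada 95 and COTS components, the core idea of software active replication (running several replicas of a task and voting on their outputs so that a single faulty replica is masked) can be sketched in a few lines. The Python sketch below is only a language-agnostic illustration, not the Multi-μ or FTManager design.

```python
from collections import Counter

def vote(results):
    """Majority vote over replica outputs; raises if no majority exists."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority among replicas")
    return value

def run_replicated(task, inputs, n_replicas=3):
    """Active replication: every replica executes the same deterministic task
    on the same inputs; a single faulty result is masked by the vote."""
    results = [task(inputs) for _ in range(n_replicas)]
    return vote(results)

# Example: one faulty replica output is out-voted by the other two.
print(vote([42, 42, 41]))  # -> 42
```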
Abstract:
This paper presents a novel phase correction technique for Passive Radar which uses targets of opportunity present in the target area as references. The proposed methodology is quite simple and enables the use of low-cost hardware with independent oscillators for the reference and surveillance channels, which can be geographically distributed. © 2014 IEEE.
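As a rough illustration of using a target of opportunity as a phase reference (the abstract does not detail the estimator, so the scheme and names below are assumptions): the phase offset between channels driven by independent oscillators can be estimated at the reference target's known delay and then removed from the surveillance channel; frequency drift and Doppler are ignored in this sketch.

```python
import numpy as np

def estimate_phase_offset(reference, surveillance, ref_delay_samples):
    """Estimate the inter-channel phase offset from a target of opportunity
    whose bistatic delay (in samples) is assumed known. Generic sketch only."""
    shifted = np.roll(reference, ref_delay_samples)
    # phase of the inner product <shifted, surveillance>
    return np.angle(np.vdot(shifted, surveillance))

def phase_correct(surveillance, phase_offset):
    """Remove the estimated oscillator phase offset from the surveillance channel."""
    return surveillance * np.exp(-1j * phase_offset)
```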
Abstract:
This paper presents solutions for fault detection and diagnosis of two-level, three-phase voltage-source inverter (VSI) topologies with IGBT devices. The proposed solutions combine redundant standby VSI structures and contactors (or relays) to improve the fault-tolerant capabilities of power electronics in applications with safety requirements. The suitable combination of these elements gives the inverter the ability to maintain energy processing in the event of several failure modes, including short-circuits in IGBT devices, thus extending its reliability and availability. A survey of previously developed fault-tolerant VSI structures and several aspects of failure modes and of detection and isolation mechanisms within the VSI is first discussed. Hardware solutions for the protection of power semiconductors with fault detection and diagnosis mechanisms are then proposed to provide the conditions to isolate and replace damaged power devices (or branches) in real time. Experimental results from a prototype are included to validate the proposed solutions.
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shorten the time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures provide a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment rate by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
Abstract:
With very few exceptions, M > 4 tectonic earthquakes in the Azores show normal fault solutions and occur away from the islands. Exceptionally, the 1998 shock was pure strike-slip and occurred within the northern edge of the Pico-Faial Ridge. Fault plane solutions show two possible planes of rupture striking ENE-WSW (dextral) and NNW-SSE (sinistral). The former has not been recognised in the Azores, but is parallel to the transform direction related to the relative motion between the Eurasia and Nubia plates. Therefore, the main question we address in the present study is: do transform faults related to the Eurasia/Nubia plate boundary exist in the Azores? Knowing that the main source of strain is related to plate kinematics, we conclude that the sinistral strike-slip NNW-SSE fault plane solution is not consistent with either the fault dip (ca. 65°, which is typical of a normal fault) or the ca. ENE-WSW direction of maximum extension; both are consistent with a normal fault, as observed in most major earthquakes on faults striking around NNW-SSE in the Azores. In contrast, the dextral strike-slip ENE-WSW fault plane solution is consistent with the transform direction related to the anticlockwise rotation of Nubia relative to Eurasia. Altogether, tectonic data, measured ground motion, observed destruction, and modelling are consistent with a dextral strike-slip source fault striking ENE-WSW. Furthermore, the bulk clockwise rotation measured by GPS is typical of bookshelf block rotations observed at the termination of such master strike-slip faults. Therefore, we suggest that the 1998 earthquake can be related to the WSW termination of a transform (ENE-WSW fault plane solution) associated with the Nubia-Eurasia diffuse plate boundary. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
Dependability is a critical factor in computer systems, requiring high-quality validation & verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed real-time fault injection in memory elements.
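To illustrate the kind of operation an OCD-based fault injector performs, here is a minimal sketch in which a hypothetical debugger stub stands in for the real run-control primitives (halt the CPU, read a memory word, flip a bit, resume); it is not the OCD-FI interface itself.

```python
import random

class DebuggerStub:
    """Hypothetical stand-in for an on-chip debug interface (halt/read/write/run).
    A real campaign would drive the OCD port of the target instead."""
    def __init__(self, memory):
        self.memory = memory          # dict: address -> 32-bit word
    def halt(self): pass
    def resume(self): pass
    def read_word(self, addr): return self.memory[addr]
    def write_word(self, addr, value): self.memory[addr] = value & 0xFFFFFFFF

def inject_bit_flip(dbg, addr, bit=None):
    """Single-event-upset style fault: flip one bit of the word at `addr`
    while the target is halted, then resume execution."""
    dbg.halt()
    bit = random.randrange(32) if bit is None else bit
    word = dbg.read_word(addr)
    dbg.write_word(addr, word ^ (1 << bit))
    dbg.resume()
    return bit

# Example on the stub: flip bit 5 of the word at address 0x20000000
dbg = DebuggerStub({0x20000000: 0x000000FF})
inject_bit_flip(dbg, 0x20000000, bit=5)
print(hex(dbg.read_word(0x20000000)))  # -> 0xdf
```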
Abstract:
Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve real-time fault injection campaign execution, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults in microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.