928 results for Sinogram-affirmed iterative reconstruction


Relevance: 20.00%

Abstract:

A numerical comparison is performed between three third-order methods with the same structure, namely the BSC, Halley and Euler–Chebyshev methods. Because the behavior of an iterative method applied to a nonlinear equation can be highly sensitive to the starting point, the comparison is carried out on the basins of attraction in the complex plane, allowing for complex starting points and complex roots. Several examples of algebraic and transcendental equations are presented.
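
As an illustration of how such basin-of-attraction comparisons are typically set up, the following sketch applies Halley's method to f(z) = z^3 - 1 over a grid of complex starting points and labels each point by the root it converges to; the grid bounds, iteration cap and tolerance are illustrative choices, not parameters taken from the study.

```python
# Hedged sketch: basins of attraction of Halley's method for f(z) = z^3 - 1.
# Grid bounds, iteration cap and tolerance are illustrative, not from the study.
import numpy as np

def halley_basins(n=400, extent=2.0, max_iter=50, tol=1e-6):
    f = lambda z: z**3 - 1
    df = lambda z: 3 * z**2
    d2f = lambda z: 6 * z
    roots = np.exp(2j * np.pi * np.arange(3) / 3)      # the three cube roots of unity

    x = np.linspace(-extent, extent, n)
    z = x[None, :] + 1j * x[:, None]                   # grid of complex starting points
    basin = -np.ones(z.shape, dtype=int)               # -1 marks "did not converge"

    with np.errstate(all="ignore"):                    # diverging points overflow harmlessly
        for _ in range(max_iter):
            fz, dfz, d2fz = f(z), df(z), d2f(z)
            denom = 2 * dfz**2 - fz * d2fz
            denom[denom == 0] = np.finfo(float).eps    # avoid division by zero
            z = z - 2 * fz * dfz / denom               # Halley step

    for k, r in enumerate(roots):
        basin[np.abs(z - r) < tol] = k                 # label each point by the root it reached
    return basin

if __name__ == "__main__":
    b = halley_basins()
    print("fraction of starting points attracted to each root:",
          [round(float(np.mean(b == k)), 3) for k in range(3)])
```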

Relevance: 20.00%

Abstract:

Pectus excavatum is the most common congenital deformity of the anterior thoracic wall. Its surgical correction using the Nuss procedure consists of placing a personalized convex prosthesis in a sub-sternal position. The aim of this work is to replace the CT scan with ultrasound imaging for pre-operative diagnosis and pre-modeling of the prosthesis, thereby avoiding patient radiation exposure. To accomplish this, ultrasound images are acquired along an axial plane, and a rigid registration method is used to obtain the spatial transformation between subsequent images. These images are overlapped to reconstruct an axial plane equivalent to a CT slice. A phantom was used to conduct preliminary experiments, and the results were compared with the corresponding CT data, showing that the proposed methodology is capable of creating a valid approximation of the anterior thoracic wall, which can be used to model and bend the prosthesis.
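
A building block of such a pipeline is the registration between overlapping frames. The sketch below estimates a pure translation between two images by phase correlation; it is a hedged illustration only, since the study's rigid registration (including rotation) and the stitching into a CT-like axial slice are not reproduced here, and all names and sizes are illustrative.

```python
# Hedged sketch: phase correlation to estimate the translation between two
# overlapping ultrasound frames, one building block of a rigid registration.
# The study's full pipeline is not reproduced; names and sizes are illustrative.
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Return the (row, col) shift that maps img_b back onto img_a."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the image size wrap around to negative offsets
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(int(v) for v in shift)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((128, 128))
    moved = np.roll(base, shift=(5, -8), axis=(0, 1))    # displace by a known offset
    print(phase_correlation_shift(base, moved))          # (-5, 8): rolling `moved` by this recovers `base`
```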

Relevance: 20.00%

Abstract:

Background: An accurate percutaneous puncture is essential for the disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target may be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed on 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). Methods: A PPT is first generated from the skin puncture site towards an anatomical target, using information retrieved by electromagnetic motion-tracking sensors coupled to the surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT using GPU processing. Volume hole-filling was performed at different processing time intervals with a tri-linear interpolation method. At spaced time intervals, the anatomical structures in the volume were segmented to ascertain whether any vital structure lies in the path of the PPT and might compromise surgical success. Different rendering transfer functions were used to enhance the visualization of the reconstructed structures. Results: Real-time ultrasound volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views. When the whole reconstructed volume was used, 8-15 frames/s were achieved, dropping to 3 frames/s when segmentation and detection of structures intersecting the PPT were introduced. Conclusions: The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT so that the puncture in percutaneous nephrolithotomy can be performed safely and accurately.
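
The geometric core of the safety check can be pictured as testing whether any voxel of a segmented structure falls within a safety margin of the planned trajectory, modelled as a straight segment from the skin entry point to the target. The following sketch illustrates that test only; the function name, coordinate convention and margin are assumptions, not the paper's implementation.

```python
# Hedged sketch: test whether any point of a segmented structure lies within a
# safety margin of the planned puncture trajectory, modelled as the straight
# segment from the skin entry point to the target. Names, coordinate convention
# and margin are assumptions, not the paper's implementation.
import numpy as np

def trajectory_is_safe(structure_points_mm, entry_mm, target_mm, safety_margin_mm=5.0):
    """True if no point of the structure comes within the margin of the segment."""
    p = np.asarray(entry_mm, dtype=float)
    q = np.asarray(target_mm, dtype=float)
    pts = np.asarray(structure_points_mm, dtype=float)   # shape (N, 3)
    v = q - p

    # Project each point onto the segment and clamp to the endpoints
    t = np.clip((pts - p) @ v / (v @ v), 0.0, 1.0)
    closest = p + t[:, None] * v
    distances = np.linalg.norm(pts - closest, axis=1)
    return bool(np.all(distances > safety_margin_mm))

if __name__ == "__main__":
    entry, target = (0.0, 0.0, 0.0), (0.0, 0.0, 100.0)
    organ = np.array([[30.0, 0.0, 50.0], [2.0, 1.0, 60.0]])   # second point is ~2.2 mm from the path
    print(trajectory_is_safe(organ, entry, target))           # False: this trajectory would be rejected
```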

Relevance: 20.00%

Abstract:

Calculation of the left ventricular ejection fraction (LVEF) is a crucial method for investigating patients with coronary artery disease (CAD). It is therefore imperative to estimate LVEF precisely, which can be done with myocardial perfusion scintigraphy. The present study aimed to establish and compare the performance of the filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM) reconstruction methods in estimating the quantitative parameters. Methods: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. Results: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is observed with the OSEM reconstruction; however, OSEM provides a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. Conclusion: OSEM reconstruction gives better estimations of the quantitative parameters than FBP. This study recommends 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders of 5, 10, or 15 for FBP, as the best settings for quantifying left ventricular volumes and ejection fraction in myocardial perfusion scintigraphy.
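
The quantities being compared are tied together by simple volume relations: the stroke volume is the difference between the end-diastolic and end-systolic volumes, and the LVEF is that difference expressed as a percentage of the end-diastolic volume. A minimal sketch, with illustrative volumes rather than the phantom's calibrated values:

```python
# Minimal sketch of the volume relations behind the quantification: stroke
# volume and ejection fraction from end-diastolic and end-systolic volumes.
# The example volumes are illustrative, not the phantom's calibrated values.
def stroke_volume(edv_ml, esv_ml):
    return edv_ml - esv_ml

def ejection_fraction_percent(edv_ml, esv_ml):
    return 100.0 * (edv_ml - esv_ml) / edv_ml

if __name__ == "__main__":
    edv, esv = 120.0, 50.0                                        # mL, illustrative
    print(f"stroke volume = {stroke_volume(edv, esv):.0f} mL")    # 70 mL
    print(f"LVEF = {ejection_fraction_percent(edv, esv):.1f} %")  # 58.3 %
```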

Relevance: 20.00%

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, which increases decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine its quality along the decoding process. In this paper, motion estimation is performed at the decoder to generate multiple side-information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side-information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side-information hypothesis and the original data. With the proposed denoising algorithm for side-information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
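
As a rough illustration of the idea of combining hypotheses with denoising-style weights (not the paper's actual algorithm), the sketch below fuses side-information frames pixel-wise, weighting each by the inverse of an assumed noise variance with respect to the virtual correlation channel.

```python
# Hedged sketch of the general idea only: fuse several side-information
# hypotheses pixel-wise, weighting each by the inverse of its assumed noise
# variance with respect to the virtual correlation channel. This is not the
# paper's algorithm; names and variance values are illustrative.
import numpy as np

def fuse_side_information(hypotheses, noise_variances):
    """hypotheses: list of equally sized frames; noise_variances: one value per hypothesis."""
    weights = 1.0 / (np.asarray(noise_variances, dtype=float) + 1e-9)
    weights /= weights.sum()
    stacked = np.stack(hypotheses, axis=0).astype(float)
    return np.tensordot(weights, stacked, axes=1)            # weighted average frame

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.integers(0, 256, size=(4, 4)).astype(float)
    h1 = original + rng.normal(0, 2, original.shape)         # good hypothesis
    h2 = original + rng.normal(0, 10, original.shape)        # noisier hypothesis
    fused = fuse_side_information([h1, h2], noise_variances=[4.0, 100.0])
    print(np.abs(fused - original).mean())                   # fused frame tracks the better hypothesis
```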

Relevance: 20.00%

Abstract:

In heterogeneous environments, the diversity of resources among devices may affect their ability to perform services with specific QoS constraints, driving peers to group themselves into a coalition for cooperative service execution. The dynamic selection of peers should be influenced by the user's QoS requirements as well as local computation availability, tailoring the provided service to the user's specific needs. However, complex dynamic real-time scenarios may preclude computing optimal service configurations before execution. An iterative refinement approach with the ability to trade off deliberation time for the quality of the solution is proposed. We stress the importance of quickly finding a good initial solution and propose heuristic evaluation functions that optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run.
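
A minimal sketch of such an anytime refinement loop, assuming a placeholder initial configuration, neighbourhood move and quality function, is shown below; it simply keeps the best configuration found before a deliberation deadline expires.

```python
# Hedged sketch of an anytime refinement loop: build a quick initial
# configuration, then keep improving it until a deliberation deadline,
# always retaining the best solution found so far. The toy problem,
# the neighbourhood move and the quality function are placeholders.
import random
import time

def anytime_refine(initial_solution, neighbour, quality, deadline_s):
    best, best_q = initial_solution, quality(initial_solution)
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:                            # trade deliberation time for quality
        candidate = neighbour(best)
        q = quality(candidate)
        if q > best_q:
            best, best_q = candidate, q
    return best, best_q

if __name__ == "__main__":
    # Toy problem: choose a subset of tasks maximising value under a capacity limit.
    values, costs, capacity = [8, 5, 7, 3, 9], [4, 3, 5, 2, 6], 10
    def quality(x):
        cost = sum(c for c, keep in zip(costs, x) if keep)
        return -1 if cost > capacity else sum(v for v, keep in zip(values, x) if keep)
    def neighbour(x):
        i = random.randrange(len(x))
        return x[:i] + (not x[i],) + x[i + 1:]               # flip one task in or out
    quick_initial_guess = (True, False, False, False, True)  # feasible starting configuration
    print(anytime_refine(quick_initial_guess, neighbour, quality, deadline_s=0.05))
```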

Relevance: 20.00%

Abstract:

Constrained and unconstrained nonlinear optimization problems often appear in many engineering areas. In some of these cases, derivative-based optimization methods cannot be used because the objective function is unknown, too complex, or non-smooth, and direct search methods may be the most suitable choice. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the computer where it is installed or remotely, through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution to the provided problem. From the point of view of researchers in optimization methods, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration, and the stopping criteria. This paper presents the features added to the API to allow users to access the iterative process data.
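
To illustrate the kind of iterative-process data being exposed (this is not the Java API itself), the following sketch implements a simple derivative-free coordinate search that records, at every iteration, the iteration number, the current best value and the step size, and reports which stopping criterion ended the run.

```python
# Hedged sketch (not the Java API described above): a derivative-free
# coordinate search that records the per-iteration data mentioned in the text,
# i.e. the iteration number, the current best value and the step size, and
# reports which stopping criterion ended the run.
def coordinate_search(f, x0, step=1.0, min_step=1e-6, max_iter=200):
    x = list(x0)
    fx = f(x)
    history = []
    for iteration in range(1, max_iter + 1):
        improved = False
        for i in range(len(x)):                              # poll along each coordinate
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                f_trial = f(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step /= 2.0                                      # shrink the mesh
        history.append({"iteration": iteration, "best_value": fx, "step": step})
        if step < min_step:
            return x, fx, history, "step size below tolerance"
    return x, fx, history, "maximum number of iterations reached"

if __name__ == "__main__":
    rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
    x, fx, history, reason = coordinate_search(rosenbrock, [-1.2, 1.0])
    print(len(history), "iterations |", reason, "| best value:", round(fx, 6))
```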

Relevance: 20.00%

Abstract:

Brain dopamine transporter imaging by single-photon emission computed tomography (SPECT) with 123I-FP-CIT has become an important tool in the diagnosis and evaluation of parkinsonian syndromes, since this radiopharmaceutical exhibits high affinity for the membrane transporters responsible for cellular reuptake of dopamine in the striatum. Although ordered-subset expectation maximization (OSEM) is the reconstruction method recommended in the literature, filtered back projection (FBP) is still used because of its fast processing, despite some disadvantages. The aim of this work is to investigate the influence of the FBP reconstruction parameters on the semiquantification of brain studies with 123I-FP-CIT, compared with the results obtained with the recommended OSEM reconstruction.
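
Semiquantification in such studies is commonly reported as a specific binding ratio relating mean counts in a striatal region to those in a non-specific reference region. A minimal sketch with illustrative count values; the study's question is how FBP and OSEM parameters change ratios of this kind.

```python
# Hedged sketch of the semiquantification step: a specific binding ratio (SBR)
# computed from mean counts in a striatal region and a non-specific reference
# region. The count values are illustrative only; the study asks how FBP and
# OSEM parameters change ratios of this kind.
def specific_binding_ratio(striatal_mean_counts, reference_mean_counts):
    return (striatal_mean_counts - reference_mean_counts) / reference_mean_counts

if __name__ == "__main__":
    for label, striatal_counts in (("reconstruction A", 260.0), ("reconstruction B", 300.0)):
        print(label, "SBR =", round(specific_binding_ratio(striatal_counts, 100.0), 2))
```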

Relevance: 20.00%

Abstract:

The acquisition of a myocardial perfusion image (MPI) is of great importance for the diagnosis of coronary artery disease, since it allows evaluation of which areas of the heart are not being properly perfused, at rest and under stress. This exam is strongly affected by photon attenuation, which creates image artifacts and affects quantification. The acquisition of a computed tomography (CT) image makes it possible to obtain an anatomical image that can be used to perform high-quality attenuation correction of the radiopharmaceutical distribution in the MPI image. Studies show that, when hybrid imaging is used to diagnose coronary artery disease, specificity increases when evaluating the perfusion of the right coronary artery (RCA). Using an iterative reconstruction algorithm with resolution recovery software, which balances image quality, administered activity and scanning time, we aim to evaluate the influence of attenuation correction on the MPI image and its effect on perfusion quantification and image quality.
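
The physical basis of CT-based attenuation correction is that the fraction of photons surviving along a ray falls off exponentially with the integral of the linear attenuation coefficient, so a CT-derived attenuation map yields a correction factor per ray. A deliberately simplified single-ray sketch, not the vendor's algorithm:

```python
# Hedged sketch of the physical basis of CT-based attenuation correction: the
# fraction of photons surviving along a ray is exp(-sum(mu * dl)), so a
# CT-derived attenuation map yields a correction factor per ray. This is a
# deliberately simplified single-ray illustration, not the vendor's algorithm.
import numpy as np

def attenuation_correction_factor(mu_per_cm_along_ray, step_cm):
    """mu_per_cm_along_ray: linear attenuation coefficients (1/cm) sampled along the ray."""
    path_integral = np.sum(mu_per_cm_along_ray) * step_cm
    transmission = np.exp(-path_integral)        # fraction of photons that survive
    return 1.0 / transmission                    # multiply the measured counts by this factor

if __name__ == "__main__":
    # 20 cm of soft tissue (mu ~ 0.15 /cm at 140 keV), sampled every 0.5 cm
    mu_map = np.full(40, 0.15)
    print(f"correction factor ~ {attenuation_correction_factor(mu_map, step_cm=0.5):.1f}")   # about 20x
```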

Relevance: 20.00%

Abstract:

The main goal of the present work is to use mineralogical data from the fine fractions (silt and clay) of Quaternary littoral deposits to define a more detailed vertical zonation and to discriminate the most significant morphoclimatic changes related to the sediment source and deposition areas. The analysis of the available mineralogical data reveals a vertical evolution of the mineral composition. The following aspects deserve particular mention: 1) the fine fractions (<38 µm) are composed of quartz and phyllosilicates associated with feldspars, prevailing over other minerals; however, in certain sections iron hydroxides and evaporitic minerals occur in significant amounts; 2) the clay fractions (<2 µm) show a general prevalence of illite associated with kaolinite, with oscillations, in relative terms, of the kaolinite and illite contents. Qualitative and quantitative lateral and vertical variations of the clay and non-clay minerals allow the discrimination of sedimentary sequences and the establishment of the rhythmicity and periodicity of the Quaternary morphoclimatic episodes that occurred at the Cortegaça and Maceda beaches. Each sedimentary sequence corresponds, in a first stage, to a littoral environment that became increasingly continental. The climate would have been mild to cold, at times with oscillations between humidity and aridity. Warmer and moister episodes alternated with cooler and drier ones.

Relevance: 20.00%

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations were developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, this library has been replaced by a fast iterative method in the P-HYCA-FAST and P-CHYCA-FAST implementations, which leads to very significant speedups towards meeting real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) GeForce GTX 590 and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
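
For context, the generic compressive-sensing building block that such GPU implementations accelerate can be sketched as random measurements y = Hx of a sparse signal recovered by iterative soft-thresholding (ISTA); this is not the HYCA/CHYCA algorithm itself, and the sizes, step and regularisation weight below are illustrative.

```python
# Hedged sketch of the generic compressive-sensing building block that such GPU
# implementations accelerate: random measurements y = H x of a sparse signal,
# recovered by iterative soft-thresholding (ISTA). This is not the HYCA/CHYCA
# algorithm; sizes, step and regularisation weight are illustrative.
import numpy as np

def ista(H, y, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(H, 2) ** 2                   # 1 / Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = x - step * (H.T @ (H @ x - y))                   # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding (sparsity prior)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 5                                     # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
    H = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
    y = H @ x_true
    x_hat = ista(H, y)
    print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```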

Relevance: 20.00%

Abstract:

Master's degree in Electrical and Computer Engineering - Autonomous Systems branch

Relevance: 20.00%

Abstract:

Upper eyelid tumours, particularly basal cell carcinomas, are relatively frequent. Surgical ablation of these lesions creates defects of variable complexity. Although several options are available for lower eyelid reconstruction, fewer surgical alternatives exist for upper eyelid reconstruction. Large defects of this region are usually reconstructed with two-step procedures. In 1997, Okada et al. described a horizontal V-Y myotarsocutaneous advancement flap for reconstruction of a large upper eyelid defect in a single operative stage. However, no further studies were published regarding the use of this particular flap in upper eyelid reconstruction. In addition, this flap is not described in most plastic surgery textbooks. The authors report here their experience of 16 cases of horizontal V-Y myotarsocutaneous advancement flaps used to reconstruct full-thickness defects of the upper eyelid after tumour excision. The tumour histological types were as follows: 12 basal cell carcinomas, 2 squamous cell carcinomas, 1 sebaceous cell carcinoma and 1 malignant melanoma. This technique allowed closure of defects of up to 60% of the eyelid width. None of the flaps suffered necrosis. The mean operative time was 30 min. No additional procedures were necessary, as good functional and cosmetic results were achieved in all cases. No recurrences were noted. In this series, the horizontal V-Y myotarsocutaneous advancement flap proved to be a technically simple, reliable and expeditious option for reconstruction of full-thickness upper eyelid defects (as wide as 60% of the eyelid width) in a single operative procedure. In the future this technique may become the preferred option for such defects.

Relevance: 20.00%

Abstract:

In cases of extensive damage to the foot, with significant bone loss, it is generally accepted that reconstruction must include bone flaps or grafts either in the emergency setting or subsequently. In this report, we describe the case of an 18-year-old student with an avulsion injury of the dorsum of his right foot. Consequently, he lost most of the soft tissue over the dorsum of the foot and the cuboid, navicular, and cuneiform bones. A latissimus dorsi free flap was used to reconstruct the defect. A functional pseudoarthrosis developed between the remaining bones of the foot, and the patient experienced satisfactory foot function after rehabilitation. For this reason, no additional reconstructive procedure was undertaken. This case suggests that it might be adequate to use the latissimus dorsi muscle flap more liberally than previously reported in the reconstruction of extensive defects of the dorsum of the foot, including cases with significant bone loss. This option could avoid the morbidity and inconvenience of a second surgery and the need to harvest a bone flap or graft.