18 results for standard batch algorithms

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

30.00%

Publisher:

Abstract:

Monte Carlo (MC) dose calculation algorithms have been widely used to verify the accuracy of intensity-modulated radiotherapy (IMRT) dose distributions computed by conventional algorithms, due to their ability to precisely account for the effects of tissue inhomogeneities and multileaf collimator characteristics. The two approaches differ, however, in how dose is calculated and reported: whereas dose from conventional methods is traditionally computed and reported as the water-equivalent dose (Dw), MC algorithms calculate and report dose to medium (Dm). In order to compare both methods consistently, the conversion of MC Dm into Dw is therefore necessary. This study aims to assess the effect of applying the conversion of MC-based Dm distributions to Dw for prostate IMRT plans generated for 6 MV photon beams. MC phantoms were created from the patient CT images using three different ramps to convert CT numbers into material and mass density: a conventional four-material ramp (CTCREATE) and two simplified CT conversion ramps: (1) air and water with variable densities and (2) air and water with unit density. MC simulations were performed using the BEAMnrc code for the treatment head simulation and the DOSXYZnrc code for the patient dose calculation. The conversion of Dm to Dw by scaling with the stopping power ratios of water to medium was also performed as a post-MC calculation step. The comparison of MC dose distributions calculated in the conventional and simplified (water with variable densities) phantoms showed that the effect of material composition on dose-volume histograms (DVH) was less than 1% for soft tissue and about 2.5% near and inside bone structures. The comparison of the MC distributions calculated in the two simplified water phantoms showed that the effect of material density on the DVH was less than 1% for all tissues. Additionally, MC dose distributions were compared with the predictions from an Eclipse treatment planning system (TPS), which employed a pencil beam convolution (PBC) algorithm with Modified Batho Power Law heterogeneity correction. Eclipse PBC and MC calculations (conventional and simplified phantoms) agreed well (<1%) for soft tissues. For the femoral heads, differences up to 3% were observed between the DVH for Eclipse PBC and MC calculated in conventional phantoms. The use of the CT conversion ramp of water with variable densities for MC simulations showed no dose discrepancies (0.5%) with the PBC algorithm. Moreover, converting Dm to Dw using mass stopping power ratios resulted in a significant shift (up to 6%) in the DVH for the femoral heads compared to the Eclipse PBC DVH. Our results show that, for prostate IMRT plans delivered with 6 MV photon beams, no conversion of MC dose from medium to water using stopping power ratios is needed. In contrast, MC dose calculation using water with variable density may be a simple way to avoid the problem found with the dose conversion method based on the stopping power ratio.
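
As a rough illustration of the post-MC conversion step referred to above (a minimal sketch, not the study's implementation; the water-to-medium stopping-power ratios below are placeholder values), the Dm to Dw scaling can be modelled per voxel as:

    import numpy as np

    # Minimal sketch of the post-MC Dm -> Dw conversion step, assuming a dose
    # grid and a medium label per voxel are available. The water-to-medium
    # mass stopping-power ratios below are illustrative placeholders only.
    STOPPING_POWER_RATIO_W_M = {
        "air": 1.0,
        "soft_tissue": 1.0,
        "bone": 1.1,
    }

    def dm_to_dw(dose_to_medium, medium_labels):
        """Scale a dose-to-medium grid into dose-to-water, voxel by voxel."""
        dose_to_water = np.empty_like(dose_to_medium, dtype=np.float64)
        for medium, ratio in STOPPING_POWER_RATIO_W_M.items():
            mask = (medium_labels == medium)
            dose_to_water[mask] = dose_to_medium[mask] * ratio
        return dose_to_water

    # Example: a tiny grid with two media
    dm = np.array([[2.0, 2.0], [1.8, 1.9]])
    labels = np.array([["soft_tissue", "soft_tissue"], ["bone", "bone"]])
    print(dm_to_dw(dm, labels))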

Relevance:

20.00%

Publisher:

Abstract:

The motivation for this work comes from the author's need to record the notes played on the guitar during improvisation. When improvising on the guitar, the musician often does not remember the notes played at the time. This work covers the development of an application for guitarists that records the notes played on an electric or classical guitar. The signal is acquired from the guitar and processed under real-time requirements for signal capture. The notes produced by the electric guitar, connected to the computer, are represented in tablature and/or score format. For this purpose, the application captures the signal coming from the electric guitar through the computer's sound card and uses frequency detection algorithms and note-duration estimation algorithms to build the record of the notes played. The application is developed from a multi-platform perspective, running on different Windows and Linux operating systems and using public-domain tools and libraries. The results obtained show that the guitar can be tuned with errors on the order of 2 Hz relative to the standard tuning frequencies. The tablature output gives satisfactory results, but they can be improved. To do so, it will be necessary to improve the implementation of the signal processing techniques as well as the inter-process communication, in order to solve the problems found in the tests performed.
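
The abstract does not name the frequency-detection method used; as a minimal sketch of the kind of processing involved, an autocorrelation-based pitch estimate on one captured frame could look like this (illustrative only):

    import numpy as np

    # Illustrative sketch only: a simple autocorrelation-based pitch estimate
    # for one frame of the guitar signal captured from the sound card.
    def estimate_pitch(frame, sample_rate, fmin=70.0, fmax=1200.0):
        frame = frame - np.mean(frame)
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        corr = corr / (len(frame) - np.arange(len(frame)))  # normalise by overlap
        lag_min = int(sample_rate / fmax)
        lag_max = int(sample_rate / fmin)
        lag = lag_min + np.argmax(corr[lag_min:lag_max])
        return sample_rate / lag

    # Example: a synthetic 110 Hz tone (open A string) sampled at 44.1 kHz
    sr = 44100
    t = np.arange(0, 0.05, 1.0 / sr)
    print(estimate_pitch(np.sin(2 * np.pi * 110.0 * t), sr))  # close to 110 Hz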

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this thesis is the theoretical and experimental investigation of the performance of a liquid-crystal-based polarimetric sensor for measuring glucose concentration. Recently, a number of polarimetric sensors based on liquid crystals have been proposed in the literature and have received considerable interest due to their unique characteristics. Indeed, compared with other electro-optic modulators, the liquid crystal operates at lower voltages, has lower power consumption and provides a larger rotation angle. In addition, this type of polarimeter can be built with small dimensions, an attractive feature for portable and compact devices. There are, however, some disadvantages, namely that the polarimeter's performance depends strongly on the type of liquid crystal and on the voltage applied to it, which poses challenges in choosing the optimal operating parameters. This thesis describes the development of the polarimetric sensor, including the integration of the optical and electronic components, the signal processing algorithms, and a graphical interface that facilitates the programming of several operating parameters and the calibration of the sensor. After optimizing the operating parameters, it was found that the device measures the glucose concentration of samples at 8 mg/ml with an error below 6% and a standard deviation of 0.008°. The results were obtained for a sample with an optical path length of only 1 cm.
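
As a rough back-of-the-envelope check of the measurement scale involved (not taken from the thesis), Biot's law relates the rotation angle to the glucose concentration; assuming a specific rotation of roughly +52.7 deg·ml/(g·dm) for D-glucose, the 8 mg/ml sample over a 1 cm path rotates the polarisation by only a few hundredths of a degree, which is why a resolution around 0.008° matters:

    # Back-of-the-envelope sketch (not from the thesis): Biot's law
    # alpha = [alpha] * L * c, with an assumed specific rotation for D-glucose.
    specific_rotation = 52.7        # deg * ml / (g * dm), approximate value
    path_length_dm = 0.1            # 1 cm optical path expressed in dm
    concentration_g_per_ml = 0.008  # 8 mg/ml sample

    alpha_deg = specific_rotation * path_length_dm * concentration_g_per_ml
    print(round(alpha_deg, 4))      # about 0.04 degrees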

Relevance:

20.00%

Publisher:

Abstract:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
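
As a highly simplified sketch of the general idea (not the regularised estimator proposed in the paper), side information for a WZ frame can be built by block matching between the two decoded key frames and averaging along halved motion vectors:

    import numpy as np

    # Highly simplified sketch of motion-compensated frame interpolation for
    # side information creation: full-search block matching between two decoded
    # key frames, then averaging of the matched block in the previous frame
    # (displaced by half the motion vector) with the co-located block in the
    # next frame. Frame sizes are assumed to be multiples of the block size;
    # the motion-field regularisation proposed in the paper is omitted.
    def interpolate_midframe(prev_frame, next_frame, block=8, search=8):
        h, w = prev_frame.shape
        side_info = np.zeros((h, w), dtype=np.float64)
        for by in range(0, h, block):
            for bx in range(0, w, block):
                target = next_frame[by:by + block, bx:bx + block].astype(np.float64)
                best = (np.inf, 0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue
                        cand = prev_frame[y:y + block, x:x + block].astype(np.float64)
                        cost = np.abs(cand - target).sum()
                        if cost < best[0]:
                            best = (cost, dy, dx)
                # Fetch the previous-frame block displaced by half the motion vector.
                hy = min(max(by + best[1] // 2, 0), h - block)
                hx = min(max(bx + best[2] // 2, 0), w - block)
                prev_block = prev_frame[hy:hy + block, hx:hx + block].astype(np.float64)
                side_info[by:by + block, bx:bx + block] = 0.5 * (prev_block + target)
        return side_info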

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Radiation Applied to Health Technologies. Area of specialization: Digital X-ray Imaging.

Relevance:

20.00%

Publisher:

Abstract:

We study the implications of the searches based on H -> tau(+)tau(-) by the ATLAS and CMS collaborations for the parameter space of the two-Higgs-doublet model (2HDM). In the 2HDM, the scalars can decay into a tau pair with a branching ratio larger than the SM one, leading to constraints on the 2HDM parameter space. We show that in model II, values of tan beta > 1.8 are definitively excluded if the pseudoscalar is in the mass range 110 GeV < m(A) < 145 GeV. We have also discussed the implications for the 2HDM of the recent dimuon search by the ATLAS collaboration for a CP-odd scalar in the mass range 4-12 GeV.

Relevance:

20.00%

Publisher:

Abstract:

A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled in terms of performance and hardware cost to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which presents a higher throughput per unit of area than other similar recently published designs targeting the H.264/AVC standard. Such results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x compared with pure software implementations of the transform algorithms, therefore allowing the computation, in real time, of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (4,320 x 7,680 @ 30 fps).
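
For reference, the core operation such an accelerator computes for a 4x4 block in H.264/AVC is the separable integer transform Y = Cf·X·Cf^T (the post-scaling is folded into quantisation and omitted here); a minimal software model:

    import numpy as np

    # Reference model of the 4x4 forward integer core transform of H.264/AVC,
    # i.e. the kind of operation the hardware accelerator computes: Y = Cf * X * Cf^T.
    CF = np.array([[1,  1,  1,  1],
                   [2,  1, -1, -2],
                   [1, -1, -1,  1],
                   [1, -2,  2, -1]])

    def forward_4x4(block):
        return CF @ block @ CF.T

    # Example on one 4x4 residual block
    x = np.arange(16).reshape(4, 4)
    print(forward_4x4(x))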

Relevance:

20.00%

Publisher:

Abstract:

This work aims at investigating the impact of treating breast cancer using different radiation therapy (RT) techniques – forwardly-planned intensity-modulated (f-IMRT), inversely-planned IMRT and dynamic conformal arc (DCART) RT – and their effects on whole-breast irradiation and on the undesirable irradiation of the surrounding healthy tissues. Two algorithms of the iPlan BrainLAB treatment planning system were compared: Pencil Beam Convolution (PBC) and commercial Monte Carlo (iMC). Seven left-sided breast patients submitted to breast-conserving surgery were enrolled in the study. For each patient, four RT techniques – f-IMRT, IMRT using 2-fields and 5-fields (IMRT2 and IMRT5, respectively) and DCART – were applied. The dose distributions in the planned target volume (PTV) and the dose to the organs at risk (OAR) were compared by analyzing dose–volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques in the PTV, the OAR and also in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue, right breast, right lung and heart than the tangential techniques. However, IMRT5 plans improved the distributions for the PTV, exhibiting better conformity and homogeneity in the target and reduced high-dose percentages in the ipsilateral OAR. DCART did not present advantages over any of the techniques investigated. Differences were also found when comparing the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the iMC algorithm predicted.
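
For context, the dose-volume histograms compared above reduce to a simple computation once per-voxel doses are available for a structure; a minimal sketch (not the planning-system implementation):

    import numpy as np

    # Minimal sketch of a cumulative dose-volume histogram (DVH): for each
    # dose level, the fraction of the structure's voxels receiving at least
    # that dose. Voxel doses for a structure (PTV or OAR) are assumed to be
    # available as a flat array.
    def cumulative_dvh(voxel_doses, n_bins=200):
        voxel_doses = np.asarray(voxel_doses, dtype=np.float64)
        dose_axis = np.linspace(0.0, voxel_doses.max(), n_bins)
        volume_fraction = [(voxel_doses >= d).mean() for d in dose_axis]
        return dose_axis, np.array(volume_fraction)

    # Example with made-up doses (Gy) for a small structure
    doses = np.array([48.0, 50.2, 50.5, 49.8, 51.0, 47.5])
    axis, frac = cumulative_dvh(doses, n_bins=5)
    print(axis, frac)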

Relevance:

20.00%

Publisher:

Abstract:

A new high-performance architecture for the computation of all the DCT operations adopted in the H.264/AVC and HEVC standards is proposed in this paper. In contrast to other dedicated transform cores, the presented multi-standard transform architecture is built on a completely configurable, scalable and unified structure, able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16 and 32×32 integer transforms adopted in HEVC. Experimental results obtained using a Xilinx Virtex-7 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which outperforms its most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows the computation, in real time, of all the transforms mentioned above for resolutions as high as 8k Ultra High Definition Television (UHDTV) (7680×4320 @ 30fps).
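
Functionally, all the transforms handled by such a unified datapath reduce to the same separable operation Y = C·X·C^T, differing only in the coefficient matrix and its size; a minimal software reference of that common kernel, shown here with the 4x4 Hadamard matrix of H.264/AVC (the HEVC matrices would simply be other inputs):

    import numpy as np

    # Common kernel behind all the transforms handled by the unified
    # architecture: a separable 2-D transform Y = C * X * C^T, parameterised
    # by the coefficient matrix C (its size selects 2x2/4x4/8x8/... operation).
    def separable_transform(block, coeff):
        return coeff @ block @ coeff.T

    # 4x4 Hadamard matrix used for DC coefficients in H.264/AVC.
    HADAMARD_4 = np.array([[1,  1,  1,  1],
                           [1,  1, -1, -1],
                           [1, -1, -1,  1],
                           [1, -1,  1, -1]])

    x = np.arange(16).reshape(4, 4)
    print(separable_transform(x, HADAMARD_4))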

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Electrical Engineering, branch of Automation and Industrial Electronics.

Relevance:

20.00%

Publisher:

Abstract:

The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity, while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimation of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both encoder and decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development. © 2014 IEEE.
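
The paper's specific correlation model is not detailed here; as an illustration only, a common choice in distributed coding is a zero-mean Laplacian model of the difference between the original and SI residues, shown below in the idealised form where both residues are available (at the decoder the parameter would have to be estimated without the original residue):

    import numpy as np

    # Illustrative only: Laplacian correlation model of the difference between
    # the original residue and the side-information residue, with scale
    # parameter alpha estimated from that difference's variance
    # (var = 2 / alpha^2 for a zero-mean Laplacian).
    def laplacian_alpha(original_residue, si_residue):
        diff = np.asarray(original_residue, dtype=np.float64) - np.asarray(si_residue, dtype=np.float64)
        variance = np.mean(diff ** 2)
        return np.sqrt(2.0 / variance) if variance > 0 else np.inf

    # Example with made-up residue blocks
    orig = np.array([4.0, -2.0, 1.0, 0.0])
    si = np.array([3.0, -1.0, 2.0, 0.5])
    print(round(laplacian_alpha(orig, si), 3))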

Relevance:

20.00%

Publisher:

Abstract:

3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most constructions are built from planar surfaces, recognizing them paves the way for automating the generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results obtained show the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud with only a minor deviation from the actual configurations. © 2014 IEEE.
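
The logarithmically proportional objective itself is not reproduced here; as a generic illustration of how a candidate plane can be scored against a point cloud inside a GA, an inlier-counting fitness could look like this (hypothetical, not the paper's function):

    import numpy as np

    # Generic illustration only: a candidate plane is encoded as a normal
    # (nx, ny, nz) and offset d, and its fitness is taken as the number of
    # cloud points lying within a distance threshold of that plane.
    def plane_fitness(points, normal, d, threshold=0.02):
        normal = np.asarray(normal, dtype=np.float64)
        normal = normal / np.linalg.norm(normal)
        distances = np.abs(points @ normal - d)
        return int(np.sum(distances < threshold))

    # Example: points near the plane z = 1
    pts = np.array([[0.0, 0.0, 1.01], [1.0, 2.0, 0.99], [3.0, 1.0, 2.5]])
    print(plane_fitness(pts, normal=[0.0, 0.0, 1.0], d=1.0))  # -> 2 inliers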

Relevance:

20.00%

Publisher:

Abstract:

Real structures can be thought of as assemblies of components such as plates, shells and beams. The latter type of component is very commonly found in structures like frames, which can involve a significant degree of complexity, or as a reinforcement element of plates or shells. To obtain the desired mechanical behavior of these components, or to improve their operating conditions when rehabilitating structures, one of the parameters that may be considered for that purpose, when possible, is the location of the supports. In the present work, a beam-type structure is considered and, for a set of cases concerning different numbers and types of supports as well as different load cases, the authors optimize the location of the supports in order to obtain minimum values of the maximum transverse deflection. The optimization processes are carried out using genetic algorithms. The results obtained clearly show the good performance of the proposed approach. © 2014 IEEE.

Relevance:

20.00%

Publisher:

Abstract:

Objective: To summarize all relevant findings in the published literature regarding the potential dose reduction, in relation to image quality, of Sinogram-Affirmed Iterative Reconstruction (SAFIRE) compared with Filtered Back Projection (FBP). Background: Computed Tomography (CT) is one of the most widely used radiographic modalities in clinical practice, providing high spatial and contrast resolution. However, it also delivers a relatively high radiation dose to the patient. Reconstructing raw data using Iterative Reconstruction (IR) algorithms has the potential to iteratively reduce image noise while maintaining or improving the image quality of low-dose standard FBP reconstructions. Nevertheless, long reconstruction times made IR impractical for clinical use until recently. Siemens Medical developed a new IR algorithm called SAFIRE, which uses up to 5 different strength levels and offers an alternative to conventional IR with a significant reduction in reconstruction time. Methods: The MEDLINE, ScienceDirect and CINAHL databases were used for gathering literature. Eleven articles were included in this review (from 2012 to July 2014). Discussion: This narrative review summarizes the results of eleven articles (covering studies on both patients and phantoms) and describes the strengths of SAFIRE for noise reduction in low-dose acquisitions while providing acceptable image quality. Conclusion: Even though the results differ slightly, the literature gathered for this review suggests that the dose in current CT protocols can be reduced by at least 50% while maintaining or improving image quality. There is, however, a lack of literature concerning the paediatric population (with increased radiation sensitivity). Further studies should also assess the impact of SAFIRE on diagnostic accuracy.