7 results for standard map
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Aims – The aim of this study was i) to evaluate the contribution of the diffusion-weighted imaging (DWI) sequence to the characterization of malignant breast lesions; ii) to assess whether this sequence should be incorporated into the standard breast MRI protocol; and iii) to correlate apparent diffusion coefficient (ADC) values with histological results. Methodology – The sample included 18 female patients, aged between 38 and 71 years, presenting with malignant breast lesions confirmed by histology. The DWI sequence was added to the standard breast MRI protocol in order to calculate the ADC values of the observed lesions. Results – The ADC values measured in ROIs placed at the center of the malignant lesions showed a mean and standard deviation of 0.89 ± 0.14 × 10⁻³ mm²/s. Using ADC values to characterize malignant breast lesions yielded a sensitivity of 100%. Conclusions – With a sensitivity of 100%, the DWI technique proved to be a useful method for the characterization of malignant breast lesions, and we therefore suggest its inclusion in the standard breast MRI protocol.
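The decision rule implied above (flag a lesion as malignant when the mean ADC inside a central ROI falls below a cutoff) can be sketched in a few lines. This is an illustration only: the ADC samples and the 1.2e-3 mm²/s threshold below are hypothetical, not values from the study.

```python
# Toy sketch of an ADC-threshold classifier for breast lesions.
# The cutoff and the sample values are hypothetical illustrations.
import statistics

ADC_CUTOFF = 1.2e-3  # mm^2/s; hypothetical decision threshold

def is_malignant(roi_adc_values):
    """Flag a lesion as malignant when its mean ROI ADC is below the cutoff."""
    return statistics.mean(roi_adc_values) < ADC_CUTOFF

# Hypothetical ADC samples from a central ROI (mm^2/s); their mean,
# about 0.88e-3, sits inside the 0.89 +/- 0.14 range reported above.
lesion_roi = [0.85e-3, 0.92e-3, 0.88e-3]
print(is_malignant(lesion_roi))  # True
```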
Abstract:
We study the implications of the searches based on H → τ⁺τ⁻ by the ATLAS and CMS collaborations for the parameter space of the two-Higgs-doublet model (2HDM). In the 2HDM, the scalars can decay into a tau pair with a branching ratio larger than the SM one, leading to constraints on the 2HDM parameter space. We show that in model II, values of tan β > 1.8 are definitively excluded if the pseudoscalar is in the mass range 110 GeV < m_A < 145 GeV. We also discuss the implications for the 2HDM of the recent dimuon search by the ATLAS collaboration for a CP-odd scalar in the mass range 4–12 GeV.
The use of non-standard CT conversion ramps for Monte Carlo verification of 6 MV prostate IMRT plans
Abstract:
Monte Carlo (MC) dose calculation algorithms have been widely used to verify the accuracy of intensity-modulated radiotherapy (IMRT) dose distributions computed by conventional algorithms, due to their ability to precisely account for the effects of tissue inhomogeneities and multileaf collimator characteristics. The two approaches differ, however, in how dose is calculated and reported: whereas dose from conventional methods is traditionally computed and reported as the water-equivalent dose (Dw), MC algorithms calculate and report dose to medium (Dm). To compare the two methods consistently, the conversion of MC Dm into Dw is therefore necessary. This study aims to assess the effect of applying the conversion of MC-based Dm distributions to Dw for prostate IMRT plans generated for 6 MV photon beams. MC phantoms were created from the patient CT images using three different ramps to convert CT numbers into material and mass density: a conventional four-material ramp (CTCREATE) and two simplified CT conversion ramps: (1) air and water with variable densities and (2) air and water with unit density. MC simulations were performed using the BEAMnrc code for the treatment head simulation and the DOSXYZnrc code for the patient dose calculation. The conversion of Dm to Dw by scaling with the stopping power ratios of water to medium was also performed in a post-MC calculation step. The comparison of MC dose distributions calculated in conventional and simplified (water with variable densities) phantoms showed that the effect of material composition on dose-volume histograms (DVHs) was less than 1% for soft tissue and about 2.5% near and inside bone structures. Comparing the MC distributions computed in the two simplified water phantoms showed that the effect of material density on the DVHs was less than 1% for all tissues.
Additionally, MC dose distributions were compared with the predictions of an Eclipse treatment planning system (TPS), which employed a pencil beam convolution (PBC) algorithm with Modified Batho Power Law heterogeneity correction. Eclipse PBC and MC calculations (conventional and simplified phantoms) agreed well (<1%) for soft tissues. For the femoral heads, differences of up to 3% were observed between the DVHs for Eclipse PBC and those for MC calculations in conventional phantoms. Using the CT conversion ramp of water with variable densities for the MC simulations showed negligible dose discrepancies (0.5%) with the PBC algorithm. Moreover, converting Dm to Dw using mass stopping power ratios resulted in a significant shift (up to 6%) in the DVH for the femoral heads compared to the Eclipse PBC one. Our results show that, for prostate IMRT plans delivered with 6 MV photon beams, no conversion of MC dose from medium to water using stopping power ratios is needed. In contrast, MC dose calculation using water with variable density may be a simple way to avoid the problem found with the dose conversion method based on the stopping power ratio.
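A simplified CT conversion ramp of the kind compared above can be sketched in a few lines. The HU threshold separating air from water and the linear HU-to-density relation used here are assumptions for illustration; they are not the ramp parameters of the study.

```python
# Minimal sketch of the two simplified CT conversion ramps described
# above; the threshold and the HU-to-density relation are illustrative
# assumptions, not the values used in the study.
AIR_THRESHOLD_HU = -800  # hypothetical cutoff separating air from water

def simplified_ramp(hu, variable_density=True):
    """Map a CT number (HU) to a (material, mass density in g/cm^3) pair."""
    if hu < AIR_THRESHOLD_HU:
        return ("air", 0.0012)           # approximate density of air
    if variable_density:
        # crude linear HU-to-density conversion, an assumption here
        return ("water", max(0.001, 1.0 + hu / 1000.0))
    return ("water", 1.0)                # unit-density variant

print(simplified_ramp(-1000))                        # ('air', 0.0012)
print(simplified_ramp(300))                          # ('water', 1.3)
print(simplified_ramp(300, variable_density=False))  # ('water', 1.0)
```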
Abstract:
A new high-performance architecture for the computation of all the DCT operations adopted in the H.264/AVC and HEVC standards is proposed in this paper. In contrast to other dedicated transform cores, the presented multi-standard transform architecture is based on a completely configurable, scalable and unified structure that is able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16 and 32×32 integer transforms adopted in HEVC. Experimental results obtained using a Xilinx Virtex-7 FPGA demonstrate the superior performance and hardware efficiency of the proposed structure, which outperforms the most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows the real-time computation of all the transforms mentioned above for resolutions as high as 8k Ultra High Definition Television (UHDTV) (7680×4320 @ 30 fps).
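As a point of reference for the kernels involved, the 4×4 forward integer core transform defined in H.264/AVC is Y = C·X·Cᵀ for a small integer matrix C. The plain-Python sketch below is a software reference for that one kernel, not a model of the proposed hardware architecture.

```python
# The 4x4 forward integer core transform from H.264/AVC (Y = C X C^T),
# one of the kernels the unified architecture computes. Software
# reference only; the paper's contribution is the hardware design.
C = [[1,  1,  1,  1],
     [2,  1, -1, -2],
     [1, -1, -1,  1],
     [1, -2,  2, -1]]

def matmul(a, b):
    """Multiply two 4x4 integer matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_4x4(block):
    """Apply the H.264 4x4 integer core transform to a residual block."""
    ct = [list(row) for row in zip(*C)]   # C transposed
    return matmul(matmul(C, block), ct)

# A constant (DC-only) block concentrates all energy in Y[0][0]
flat = [[1] * 4 for _ in range(4)]
print(forward_4x4(flat)[0][0])  # 16
```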
Abstract:
The growing heterogeneity of networks, devices and consumption conditions calls for flexible and adaptive video coding solutions. The compression power of the HEVC standard and the benefits of the distributed video coding paradigm allow the design of novel scalable coding solutions with improved error robustness and low encoding complexity, while still achieving competitive compression efficiency. In this context, this paper proposes a novel scalable video coding scheme using an HEVC Intra compliant base layer and a distributed coding approach in the enhancement layers (EL). This design inherits the HEVC compression efficiency while providing low encoding complexity at the enhancement layers. The temporal correlation is exploited at the decoder to create the EL side information (SI) residue, an estimate of the original residue. The EL encoder sends only the data that cannot be inferred at the decoder, thus exploiting the correlation between the original and SI residues; however, this correlation must be characterized with an accurate correlation model to obtain coding efficiency improvements. Therefore, this paper proposes a correlation modeling solution to be used at both the encoder and the decoder, without requiring a feedback channel. Experimental results confirm that the proposed scalable coding scheme has lower encoding complexity and provides BD-Rate savings of up to 3.43% in comparison with the HEVC Intra scalable extension under development.
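Correlation models of this kind are commonly Laplacian in the distributed video coding literature: the mismatch between the original residue and the SI residue is modelled as Laplacian noise with scale parameter α = sqrt(2/σ²). The toy estimator below illustrates that general idea only; it is an assumption here, not the paper's actual model, and the residual samples are made up.

```python
# Toy fit of a Laplacian scale parameter to the mismatch between
# original and side-information (SI) residues, a common correlation
# model in distributed video coding. Illustrative assumption only;
# the paper's feedback-free model may differ.
import math

def laplacian_alpha(original, side_info):
    """Estimate alpha = sqrt(2 / variance) of the (original - SI) noise."""
    diffs = [o - s for o, s in zip(original, side_info)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return math.sqrt(2.0 / var)

# Hypothetical co-located residual samples
orig = [5, -3, 2, 0, 7, -1]
si = [4, -2, 3, 1, 6, -2]
print(round(laplacian_alpha(orig, si), 3))  # 1.414 (i.e. sqrt(2))
```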
Abstract:
Motivated by the dark matter and baryon asymmetry problems, we analyze a complex singlet extension of the Standard Model with a Z2 symmetry (which provides a dark matter candidate). After a detailed two-loop calculation of the renormalization group equations for the new scalar sector, we study the radiative stability of the model up to a high energy scale (with the constraint that the 126 GeV Higgs boson found at the LHC is in the spectrum) and find that it requires the existence of a new scalar state mixing with the Higgs, with a mass larger than 140 GeV. This bound is not very sensitive to the cutoff scale as long as the latter is larger than 10^10 GeV. We then include all experimental and observational constraints/measurements from collider data, from dark matter direct detection experiments, and from the Planck satellite, and in addition enforce stability at least up to the grand unified theory scale, to find that the lower bound is raised to about 170 GeV, while the dark matter particle must be heavier than about 50 GeV.
Abstract:
The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method which derives a consensus partition from a collection of base clusterings obtained using different algorithms. From the partitions in the ensemble it collects a set of pairwise observations about the co-occurrence of objects in the same cluster, and it uses these co-occurrence statistics to derive a similarity matrix, referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for extracting a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model for the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving a consensus solution according to a MAP approach with Dirichlet priors defined for the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.
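The co-association matrix at the heart of EAC is simple to state in code: entry (i, j) is the fraction of ensemble partitions that place objects i and j in the same cluster. A minimal sketch, with a made-up three-partition ensemble:

```python
# Build the EAC co-association matrix from an ensemble of partitions,
# each given as a list of cluster labels for the same n objects.
def co_association(partitions, n):
    """Return the n x n matrix of pairwise co-clustering frequencies."""
    counts = [[0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    counts[i][j] += 1
    k = len(partitions)
    return [[counts[i][j] / k for j in range(n)] for i in range(n)]

# Hypothetical ensemble: three base clusterings of 4 objects
ensemble = [[0, 0, 1, 1],
            [0, 0, 0, 1],
            [1, 1, 0, 0]]
m = co_association(ensemble, 4)
print(m[0][1])  # 1.0: objects 0 and 1 are co-clustered in every partition
print(m[1][2])  # 1/3: objects 1 and 2 are co-clustered in one partition
```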