22 results for RM extended algorithm
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This paper presents an algorithm to efficiently generate the state-space of systems specified using the IOPT Petri-net modeling formalism. IOPT nets are a non-autonomous Petri-net class, based on Place-Transition nets with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics where, in a single execution step, all enabled transitions fire simultaneously. This increases the resulting state-space complexity and can cause an arc "explosion" effect: real-world applications with several million states can reach an arc count an order of magnitude higher, creating the need for high-performance state-space generation algorithms. The proposed algorithm applies a compilation approach, reading a PNML file containing one IOPT model and automatically generating an optimized C program that calculates the corresponding state-space.
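For illustration only, here is a minimal Python sketch of state-space generation under the maximal-step semantics described above, for a plain Place-Transition net; the IOPT-specific features (input/output signals, priorities) and the PNML-to-C compilation are omitted, and the toy net, names, and sizes are hypothetical:

```python
from collections import deque

# Toy net: transition -> (tokens consumed per place, tokens produced per place)
NET = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
    "t3": ({"p1": 1}, {"p3": 1}),
}

def enabled(marking):
    """Transitions individually enabled at this marking."""
    return [t for t, (pre, _) in NET.items()
            if all(marking.get(p, 0) >= n for p, n in pre.items())]

def feasible(marking, step):
    """Check the combined token demand of firing a set of transitions at once."""
    need = {}
    for t in step:
        for p, n in NET[t][0].items():
            need[p] = need.get(p, 0) + n
    return all(marking.get(p, 0) >= n for p, n in need.items())

def fire(marking, step):
    """Fire a whole step (set of transitions) simultaneously."""
    m = dict(marking)
    for t in step:
        pre, post = NET[t]
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
    return m

def maximal_steps(marking):
    """All maximal feasible subsets of the enabled transitions.
    Subset enumeration is exponential -- the arc 'explosion' the paper targets."""
    cand = enabled(marking)
    feas = []
    for mask in range(1, 1 << len(cand)):
        step = frozenset(t for i, t in enumerate(cand) if mask >> i & 1)
        if feasible(marking, step):
            feas.append(step)
    return [s for s in feas if not any(s < o for o in feas)]

def state_space(initial):
    """Breadth-first generation of the reachable markings under maximal steps."""
    key = lambda m: tuple(sorted(m.items()))
    seen, frontier, arcs = {key(initial)}, deque([initial]), []
    while frontier:
        m = frontier.popleft()
        for step in maximal_steps(m):
            m2 = fire(m, step)
            arcs.append((key(m), tuple(sorted(step)), key(m2)))
            if key(m2) not in seen:
                seen.add(key(m2))
                frontier.append(m2)
    return seen, arcs

states, arcs = state_space({"p1": 2})
print(len(states), "states,", len(arcs), "arcs")
```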
Abstract:
Diffusion-weighted MR imaging characterises the microscopic, random motion of water molecules in tissue, and its quantification through the ADC allows the cellularity and structure of the tissue to be assessed. The b-value corresponds to the diffusion-sensitisation factor, so images can be more or less diffusion-weighted. According to several authors, determining the most adequate b-values is important, since this parameter varies with the type of equipment used and can influence the diagnostic quality of the method.
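For context, the ADC quantification mentioned here usually rests on the standard mono-exponential diffusion model; the relation below is the textbook formula, not one given in the abstract:

```latex
% Signal attenuation as a function of the diffusion-sensitisation factor b:
% S(b) is the measured signal, S_0 the signal without diffusion weighting.
S(b) = S_0 \, e^{-b \cdot \mathrm{ADC}}
% Acquiring two b-values b_1 < b_2 yields the ADC estimate
\mathrm{ADC} = \frac{\ln\bigl(S(b_1)/S(b_2)\bigr)}{b_2 - b_1}
```

The trade-off between stronger diffusion weighting and lower signal-to-noise at high b is one reason the most adequate b-values depend on the equipment used.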
Abstract:
Objectives – To determine the sensitivity and specificity of Diffusion-Weighted Imaging (DWI) and T2 Fluid-Attenuated Inversion Recovery (FLAIR) sequences in the evaluation of white-matter (WM) lesions, and to verify to what extent they complement each other, in order to establish a set of good practices for routine cranioencephalic MRI. Methodology – Using a quantitative methodology, a retrospective analysis was carried out in which 30 patients were selected, 10 without pathology and 20 with pathology (2 with multiple sclerosis, 7 with leukoencephalopathy, 6 with microangiopathic disease, and 5 with undefined white-matter pathology). A sample of 60 images was obtained: 30 DWI-weighted images and 30 T2 FLAIR images. Using the Viewdex® software, three observers evaluated the set of images according to seven criteria: lesion visibility, detection, homogeneity, location, margins and dimensions, and diagnostic capability. From the results, sensitivity and specificity were computed using ROC curves, together with statistical analysis, namely the T-test, the Kappa concordance index, and Pearson's correlation coefficient between the variables under study. Results – The sensitivity and specificity values obtained for the T2 FLAIR sequence (0.915 and 0.038, respectively) were higher than those for DWI (0.08 and 0.100, respectively). No significant population variances were found. A high linear correlation between the variables was obtained, with an r value between 0.8 and 0.99. Considerable inter-observer variability was also found. Conclusions – Given the low sensitivity and specificity values obtained for DWI, it is suggested that it should be included in the routine cranial protocol as an aid to the differential diagnosis with other pathologies.
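As a small aside on the methodology, sensitivity and specificity reduce to simple counts over the observers' calls; the sketch below (with entirely hypothetical scores) shows the computation that underlies the ROC analysis mentioned:

```python
# truth: lesion actually present; called: observer marks the image as pathological
def sensitivity_specificity(truth, called):
    tp = sum(t and c for t, c in zip(truth, called))             # true positives
    tn = sum((not t) and (not c) for t, c in zip(truth, called)) # true negatives
    fn = sum(t and (not c) for t, c in zip(truth, called))       # missed lesions
    fp = sum((not t) and c for t, c in zip(truth, called))       # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical scores for the 20 pathological and 10 normal cases of the study
truth  = [True] * 20 + [False] * 10
called = [True] * 18 + [False] * 2 + [False] * 9 + [True]
sens, spec = sensitivity_specificity(truth, called)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```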
Abstract:
Master's degree in Radiotherapy.
Abstract:
Master's degree in Radiotherapy.
Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
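As a sketch of the mixing model DECA assumes, the snippet below generates synthetic hyperspectral data whose abundance fractions are Dirichlet-distributed, hence non-negative and summing to one; all sizes and parameters are hypothetical, and the GEM inference itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, p, pixels = 200, 3, 1000                 # hypothetical problem sizes

M = rng.uniform(0.0, 1.0, (bands, p))           # endmember signatures (columns)
A = rng.dirichlet([2.0, 1.0, 0.5], pixels).T    # abundances: >= 0, columns sum to 1
Y = M @ A + 0.001 * rng.standard_normal((bands, pixels))  # observed spectra + noise

# The acquisition constraints DECA enforces: non-negativity and constant sum
assert np.all(A >= 0) and np.allclose(A.sum(axis=0), 1.0)
print(Y.shape)   # (bands, pixels): the data matrix handed to the unmixing method
```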
Abstract:
Chapter in book proceedings with peer review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
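A minimal sketch of the two facts the algorithm exploits: each new endmember is taken as the extreme of a projection onto a direction orthogonal to the subspace of the vertices already found. This is a simplified illustration, not the published VCA (the SNR-dependent projection and dimensionality-reduction steps are omitted), and the shapes and data are hypothetical:

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Y: (bands, pixels) data matrix; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    bands = Y.shape[0]
    E = np.zeros((bands, p))                      # endmember estimates (columns)
    for i in range(p):
        w = rng.standard_normal(bands)            # random test direction
        if i > 0:                                 # project out vertices found so far
            Q, _ = np.linalg.qr(E[:, :i])
            w -= Q @ (Q.T @ w)
        f = w / np.linalg.norm(w)
        E[:, i] = Y[:, np.argmax(np.abs(f @ Y))]  # extreme pixel = new vertex
    return E

# Usage on synthetic non-negative data (50 bands, 500 pixels, 3 endmembers)
Y = np.abs(np.random.default_rng(1).standard_normal((50, 500)))
print(vca_like(Y, 3).shape)                       # -> (50, 3)
```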
Abstract:
The calculation of the dose is one of the key steps in radiotherapy planning [1-5]. This calculation should be as accurate as possible, which over the years has become feasible through the implementation of new dose-calculation algorithms in the treatment planning systems used in radiotherapy. When a breast tumour is irradiated, a precise dose distribution is fundamental to ensure coverage of the planning target volume (PTV) and to prevent skin complications. Some investigations using breast cases showed that the pencil beam convolution (PBC) algorithm overestimates the dose in the PTV and in the proximal region of the ipsilateral lung, but underestimates the dose in the distal region of the ipsilateral lung, when compared with the analytical anisotropic algorithm (AAA). With this study we aim to compare the performance of the PBC and AAA algorithms in breast tumours.
Abstract:
Conference - 16th International Symposium on Wireless Personal Multimedia Communications (WPMC) - Jun 24-27, 2013
Abstract:
Dissertation for obtaining the Master's degree in Informatics and Computer Engineering
Abstract:
Objective of the study: to compare the performance of the Pencil Beam Convolution (PBC) and the Analytical Anisotropic Algorithm (AAA) in 3D conformal radiotherapy treatment planning for breast tumours.
Abstract:
In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which work collaboratively on the acquired data to perform efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. This paper proposes extracting and coding binary descriptors so as to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the performance of the image retrieval task. © 2014 EURASIP.
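A minimal sketch of the two coding modes, assuming a simple XOR-based prediction (the paper's actual prediction and entropy-coding scheme is not reproduced here): Intra removes redundancy between neighbouring elements of one descriptor, while Inter removes redundancy between two descriptors of the same image, leaving sparse residues that an entropy coder compresses well.

```python
def intra_residual(desc):
    """Intra mode: XOR each bit with its predecessor inside one descriptor."""
    return [desc[0]] + [desc[i] ^ desc[i - 1] for i in range(1, len(desc))]

def inter_residual(desc, ref):
    """Inter mode: XOR a descriptor against another descriptor of the same image."""
    return [b ^ r for b, r in zip(desc, ref)]

d1 = [1, 1, 1, 0, 0, 1, 1, 1]   # two correlated binary descriptors (hypothetical)
d2 = [1, 1, 0, 0, 0, 1, 1, 1]
print(intra_residual(d1))        # [1, 0, 0, 1, 0, 1, 0, 0] -> few 1s to code
print(inter_residual(d2, d1))    # [0, 0, 1, 0, 0, 0, 0, 0] -> Inter wins here
```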
Abstract:
Master's degree in Radiation Applied to Health Technologies
Abstract:
The purpose of this paper is to discuss the linear solution of equality-constrained problems using the Frontal solution method without explicit assembling. Design/methodology/approach - Re-written frontal solution method with a priori pivot and front sequence. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this Frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints; a priori pivot and front sequences; no need for symbolic assembling; constraints treated at the core of the Frontal solver; use of OpenMP in the main Frontal loop, now quantified; availability of software.
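For orientation, the class of problems addressed can be written as a saddle-point (Lagrange-multiplier) system. The dense toy solve below shows that equality-constrained system itself; the paper's contribution is solving it inside a frontal solver, without explicit assembly, which this sketch does not attempt to reproduce:

```python
import numpy as np

K = np.array([[4.0, 1.0],          # toy symmetric positive-definite "stiffness"
              [1.0, 3.0]])
f = np.array([1.0, 2.0])           # right-hand side
C = np.array([[1.0, 1.0]])         # one equality constraint: x0 + x1 = g
g = np.array([1.0])

n, m = K.shape[0], C.shape[0]
KKT = np.block([[K, C.T],          # saddle-point (KKT) system
                [C, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([f, g]))
x, lam = sol[:n], sol[n:]          # primal solution and Lagrange multiplier
print("x =", x, "| constraint residual =", C @ x - g)
```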