24 results for algorithm Context

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance: 20.00%

Abstract:

We develop a new model of Absorptive Capacity that takes into account two variables, namely learning and knowledge, to explain how companies transform information into knowledge.

Relevance: 20.00%

Abstract:

Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, depending on the video content, it creates SI with rather significant motion compensated errors for some frame regions and rather small errors for others. In this paper, a low-complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements of up to 1.2 dB over a WZ-coding-mode-only solution.
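The per-block decision described above can be sketched as follows. This is a hypothetical illustration only: the paper's actual low-complexity rate estimators are treated here as opaque callables, and the function name is invented for the sketch.

```python
def select_coding_modes(blocks, estimate_intra_rate, estimate_wz_rate):
    """Pick the cheaper estimated coding mode per block: 'intra' or 'wz'."""
    modes = []
    for block in blocks:
        r_intra = estimate_intra_rate(block)  # estimated Intra-mode rate
        r_wz = estimate_wz_rate(block)        # estimated WZ-mode rate
        modes.append('intra' if r_intra < r_wz else 'wz')
    return modes
```

The decision rule itself is just a per-block rate comparison; the encoder complexity constraint lives entirely in how cheaply the two rates are estimated.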

Relevance: 20.00%

Abstract:

Background: Malaria, schistosomiasis and geohelminth infection are linked to maternal and child morbidity and mortality in sub-Saharan Africa. Knowing the prevalence levels of these infections is vital to guide governments towards the implementation of successful and cost-effective disease control initiatives. Methodology/Principal Findings: A cross-sectional study of 1,237 preschool children (0–5 year olds), 1,142 school-aged children (6–15 year olds) and 960 women (>15 year olds) was conducted to understand the distribution of malnutrition, anemia, malaria, schistosomiasis (intestinal and urinary) and geohelminths in a north-western province of Angola. We used a recent demographic surveillance system (DSS) database to select and recruit suitable households. Malnutrition was common among children (23.3% underweight, 9.9% wasting and 32.2% stunting), and anemia was found to be a severe public health problem (i.e., prevalence >40%). Malaria prevalence was highest among preschool children, reaching 20.2%. Microhematuria prevalence levels reached 10.0% of preschool children, 16.6% of school-aged children and 21.7% of mothers. Geohelminth infections were common, affecting 22.3% of preschool children, 31.6% of school-aged children and 28.0% of mothers. Conclusions: Here we report prevalence levels of malaria, schistosomiasis and geohelminths, all endemic in this poorly described area where a DSS has recently been established. Furthermore, we found evidence that the studied infections are associated with the observed levels of anemia and malnutrition, which justifies the implementation of integrated interventions for the control of these diseases and morbidities.

Relevance: 20.00%

Abstract:

This paper presents an algorithm to efficiently generate the state-space of systems specified using the IOPT Petri-net modeling formalism. IOPT nets are a non-autonomous Petri-net class, based on Place-Transition nets with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics where, in a single execution step, all enabled transitions fire simultaneously. This increases the resulting state-space complexity and can cause an arc "explosion" effect: real-world applications with several million states can reach a number of arcs an order of magnitude higher, leading to the need for high-performance state-space generator algorithms. The proposed algorithm applies a compilation approach that reads a PNML file containing one IOPT model and automatically generates an optimized C program to calculate the corresponding state-space.
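A minimal sketch of the maximal-step semantics for an ordinary Place-Transition net (not the paper's compiled C generator, and assuming the enabled transitions do not compete for the same tokens):

```python
def maximal_step(marking, transitions):
    """Fire, simultaneously, every transition enabled in `marking`.

    `marking` maps place -> token count; each transition is a (pre, post)
    pair of place -> arc-weight dicts. Conflicts (transitions competing
    for the same tokens) are assumed absent in this sketch.
    """
    fired = [(pre, post) for pre, post in transitions
             if all(marking.get(p, 0) >= w for p, w in pre.items())]
    new_marking = dict(marking)
    for pre, post in fired:
        for p, w in pre.items():
            new_marking[p] -= w                         # consume input tokens
        for p, w in post.items():
            new_marking[p] = new_marking.get(p, 0) + w  # produce output tokens
    return new_marking
```

With two independently enabled transitions, a single maximal step reaches in one arc a marking that interleaving semantics would only reach after two, which is exactly why the arc count grows faster than the state count.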

Relevance: 20.00%

Abstract:

Cancer is a national and international health care concern, and it is important to find strategies for early diagnosis as well as for the optimization of the various therapeutic options currently existing in Portugal. Cancer is the second leading cause of death in Portugal; this study was chosen because of the importance of radiotherapy in cancer treatment and because it is the therapy used in 40% of oncology patients. Radiation therapy has evolved at the technological level, allowing new treatment techniques that are more efficient and that also promote greater professional satisfaction. Hadrons are charged particles used in cancer therapy, and these particles can bring a paradigm shift in the therapeutic approach in radiotherapy. The technique in question is proton therapy, which has been shown to be more accurate, more efficacious and less toxic to surrounding tissue. Proton therapy may be a promising development in the field of oncology and in how treatment is given in radiotherapy. Although there is awareness of the benefits of proton therapy in oncology, it is also important to take into consideration the costs of this therapy, because they are considerably higher than those of conventional radiotherapy treatments. Given the lack of a proton therapy service in Portugal, this study is a documentary analysis of clinical records with the following objectives: to identify the number of cancer patients diagnosed in 2010 in Portugal and to calculate the estimated number of patients that could have been treated with proton therapy according to the Health Council of the Netherlands registration document.

Relevance: 20.00%

Abstract:

Dissertation for obtaining the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch

Relevance: 20.00%

Abstract:

Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
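A small sketch of why Dirichlet densities fit the abundance constraints: every Dirichlet draw is non-negative and sums to one. The Gamma-normalization sampler below is a standard construction for illustration, not part of DECA itself.

```python
import random

def sample_abundances(alphas, rng=None):
    """Draw one Dirichlet(alphas) sample by normalizing Gamma(a, 1) draws."""
    rng = rng or random.Random(0)
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]  # each draw >= 0
    total = sum(gammas)
    return [g / total for g in gammas]  # non-negative, sums to one
```

Modeling abundance fractions this way bakes the acquisition-process constraints (non-negativity, constant sum) directly into the prior, instead of enforcing them after the fact.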

Relevance: 20.00%

Abstract:

Chapter in book proceedings with peer review: First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.

Relevance: 20.00%

Abstract:

Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
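Fact (2) above can be checked numerically: an affine map applied to a convex combination of simplex vertices equals the same convex combination of the mapped vertices, so the image of a simplex is again a simplex. A small sketch (the matrix, offset and weights are arbitrary illustrative values):

```python
def affine(v, A, b):
    """Apply the affine map x -> A @ x + b to vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) + b[i]
            for i in range(len(A))]

def convex_comb(vertices, weights):
    """Weighted sum of vertices (weights are non-negative, sum to one)."""
    return [sum(w * v[i] for w, v in zip(weights, vertices))
            for i in range(len(vertices[0]))]

vertices = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # 2-simplex vertices
weights = [0.2, 0.3, 0.5]                        # a point inside the simplex
A, b = [[2.0, 1.0], [0.0, 3.0]], [1.0, -1.0]     # arbitrary affine map

# map the point, vs. take the same combination of the mapped vertices
lhs = affine(convex_comb(vertices, weights), A, b)
rhs = convex_comb([affine(v, A, b) for v in vertices], weights)
```

Because `lhs == rhs` for every weight vector summing to one, finding the endmembers reduces to finding the vertices of the transformed simplex, which is what VCA exploits.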

Relevance: 20.00%

Abstract:

The calculation of the dose is one of the key steps in radiotherapy planning [1-5]. This calculation should be as accurate as possible, and over the years it became feasible through the implementation of new dose-calculation algorithms in the treatment planning systems used in radiotherapy. When a breast tumour is irradiated, a precise dose distribution is fundamental to ensure planning target volume (PTV) coverage and prevent skin complications. Some investigations using breast cases showed that the pencil beam convolution (PBC) algorithm overestimates the dose in the PTV and in the proximal region of the ipsilateral lung, but underestimates the dose in the distal region of the ipsilateral lung, when compared with the analytical anisotropic algorithm (AAA). With this study we aim to compare the performance of the PBC and AAA algorithms in breast tumours.

Relevance: 20.00%

Abstract:

Conference - 16th International Symposium on Wireless Personal Multimedia Communications (WPMC), June 24-27, 2013

Relevance: 20.00%

Abstract:

Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are done simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion to select the number of clusters (Wallace and Bolton, 1986). For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the use of the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) using synthetic data. The results illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL in speed, which is especially relevant when dealing with large data sets.
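For contrast, the baseline this approach is compared against selects the number of clusters from a set of pre-estimated candidate models via BIC = -2 log L + p log n (Schwarz, 1978). A hypothetical sketch, assuming some `fit(data, k)` routine that returns the model's log-likelihood and number of free parameters:

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Schwarz's Bayesian Information Criterion; lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

def select_k_by_bic(data, fit, k_max):
    """Fit one candidate model per k and keep the k with the lowest BIC."""
    scores = {}
    for k in range(1, k_max + 1):
        ll, p = fit(data, k)          # assumed (log-likelihood, #params)
        scores[k] = bic(ll, p, len(data))
    return min(scores, key=scores.get)
```

The cost the abstract points at is visible in the loop: every candidate k requires a full model fit, whereas the proposed MML-based algorithm folds the selection into a single estimation run.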

Relevance: 20.00%

Abstract:

Dissertation for obtaining the degree of Master in Informatics and Computer Engineering