63 results for Data Retention


Relevance: 20.00%

Abstract:

In this work, the liver contour is semi-automatically segmented and quantified to help identify and diagnose diffuse liver disease. The features extracted from the liver contour are used jointly with clinical and laboratory data in the staging process. The classification results of a support vector machine, a Bayesian classifier, and a k-nearest neighbor classifier are compared. A population of 88 patients at five different stages of diffuse liver disease and a leave-one-out cross-validation strategy are used in the classification process. The best results are obtained with the k-nearest neighbor classifier, with an overall accuracy of 80.68%. The good performance of the proposed method shows it is a reliable indicator that can improve the staging of diffuse liver disease.
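As a concrete illustration of the evaluation protocol described above, the following minimal sketch runs a k-nearest neighbor classifier under leave-one-out cross-validation with scikit-learn; the feature matrix, the labels, and the choice of k are placeholders, not the study's actual data or settings.

# Minimal sketch: k-NN with leave-one-out cross-validation.
# Data, labels, and k below are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 10))       # 88 patients, 10 contour+clinical features (synthetic)
y = rng.integers(0, 5, size=88)     # 5 disease stages (synthetic labels)

knn = KNeighborsClassifier(n_neighbors=5)   # k is an assumption, not the paper's value
scores = cross_val_score(knn, X, y, cv=LeaveOneOut())  # one held-out patient per fold
print(f"leave-one-out accuracy: {scores.mean():.4f}")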

Relevance: 20.00%

Abstract:

Steatosis, also known as fatty liver, corresponds to an abnormal retention of lipids within the hepatic cells and reflects an impairment of the normal processes of synthesis and elimination of fat. Several causes may lead to this condition, namely obesity, diabetes, or alcoholism. In this paper, an automatic classification algorithm is proposed for the diagnosis of liver steatosis from ultrasound images. The features are selected to capture the same characteristics used by physicians in diagnosing the disease by visual inspection of the ultrasound images. The algorithm, designed in a Bayesian framework, computes two images: i) a despeckled one, containing the anatomic and echogenic information of the liver, and ii) an image containing only the speckle, used to compute the textural features. These images are computed from the estimated RF signal generated by the ultrasound probe, taking into account the dynamic range compression performed by the equipment. A Bayes classifier, trained with data manually classified by expert clinicians and used as ground truth, reaches an overall accuracy of 95% and a sensitivity of 100%. The main novelty of the method is the estimation of the RF and speckle images, which makes it possible to accurately compute textural features of the liver parenchyma that are relevant for the diagnosis.
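The sketch below illustrates the overall pipeline under simplifying assumptions: the paper estimates the RF and speckle images in a Bayesian framework, whereas here a simple inverse log-compression model and a median filter stand in for those steps, and all data and parameters are hypothetical.

# Illustrative stand-in for the pipeline above, not the paper's method:
# invert an assumed compression y = a*log(x) + b, split envelope into
# despeckled and speckle parts, classify texture stats with a Bayes classifier.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.naive_bayes import GaussianNB

def decompose(b_mode, a=50.0, b=1.0):
    envelope = np.exp((b_mode - b) / a)           # estimated RF envelope (assumed model)
    despeckled = median_filter(envelope, size=5)  # anatomic/echogenic component
    speckle = envelope / (despeckled + 1e-9)      # multiplicative speckle residual
    return despeckled, speckle

def texture_features(speckle):
    # First-order speckle statistics as simple texture descriptors.
    return [speckle.mean(), speckle.std(), ((speckle - speckle.mean()) ** 3).mean()]

# Hypothetical ROIs and clinician labels (0 = normal, 1 = steatosis).
rng = np.random.default_rng(1)
rois = [rng.gamma(2.0, 1.0, size=(64, 64)) for _ in range(40)]
X = np.array([texture_features(decompose(50.0 * np.log(r) + 1.0)[1]) for r in rois])
y = rng.integers(0, 2, size=40)

clf = GaussianNB().fit(X, y)
print("training accuracy (toy data):", clf.score(X, y))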

Relevance: 20.00%

Abstract:

Dissertation submitted to obtain the Master's degree in Informatics Engineering

Relevance: 20.00%

Abstract:

Project work submitted to obtain the Master's degree in Informatics and Computer Engineering

Relevance: 20.00%

Abstract:

Faced with the continual challenges that organizations confront as a consequence of high levels of competitiveness, a new management dynamic is demanded of them, in which human resources become their main differentiating element. In this context, strategic human resource management is essential: the institutionalization of a set of practices that turn human resources into a strategic asset and drive the execution of the organizational strategy. These practices include attracting and retaining talent, providing development opportunities, and offering good working conditions, in both quantitative and qualitative terms. And since each person is a unique being, endowed with characteristics of their own that are impossible to imitate, their capacity to be a source of competitive advantage should be recognized. Establishing a set of good practices is not enough to possess strategic human resources; it is essential to follow up on those practices through monitoring. In management, what cannot be measured cannot be managed. It is essential to make managers and human resource professionals aware of the need to create measurement systems and metrics that can gauge the contribution of human capital to the mission and strategy of organizations. The Balanced Scorecard is a management tool that enables, through the information provided by its indicators, the implementation of strategies in organizations. The goal is to ensure that the defined indicators are coherent with the overall strategy. This methodology thus has the merit of aligning (through quantitative indicators) human resource management with the long-term objectives of the organization. Qualitative indicators further allow organizations to measure levels of performance and motivation, factors that influence the organizational climate.

Relevance: 20.00%

Abstract:

A detailed analytic and numerical study of baryogenesis through leptogenesis is performed in the framework of the standard model of electroweak interactions extended by the addition of three right-handed neutrinos, leading to the seesaw mechanism. We analyze the connection between GUT-motivated relations for the quark and lepton mass matrices and the possibility of obtaining a viable leptogenesis scenario. In particular, we analyze whether the constraints imposed by SO(10) GUTs can be compatible with all the available solar, atmospheric, and reactor neutrino data and, simultaneously, be capable of producing the required baryon asymmetry via the leptogenesis mechanism. It is found that the Just-So(2) and SMA solar solutions lead to viable leptogenesis even for the simplest SO(10) GUT, while the LMA, LOW, and VO solar solutions would require a different hierarchy for the Dirac neutrino masses in order to generate the observed baryon asymmetry. Some implications for CP violation at low energies and for neutrinoless double beta decay are also considered.
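For context, the type I seesaw relation behind this framework is the standard textbook result (not a formula specific to this paper); in LaTeX:

% Type I seesaw: m_D is the Dirac neutrino mass matrix and M_R the heavy
% right-handed Majorana mass matrix, with M_R far above the electroweak scale.
\[
  m_\nu \simeq -\, m_D \, M_R^{-1} \, m_D^{T}
\]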

Relevance: 20.00%

Abstract:

Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio; in any case, some endmembers are always incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is also proposed.
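The dependence induced by the sum-to-one constraint is easy to check numerically. In the illustrative sketch below (the number of endmembers and the symmetric Dirichlet draw are arbitrary choices, not the paper's setup), abundance vectors sampled on the simplex show the negative correlations that violate the ICA independence assumption.

# Numerical check: simplex-constrained abundances are necessarily correlated.
import numpy as np

rng = np.random.default_rng(2)
p = 4                                        # number of endmembers (assumed)
S = rng.dirichlet(np.ones(p), size=100000)   # abundance vectors on the simplex

print(S.sum(axis=1)[:3])                     # each row sums to 1 by construction
print(np.corrcoef(S, rowvar=False).round(3))
# Off-diagonal entries are negative (about -1/(p-1) for a symmetric Dirichlet),
# confirming the sources are statistically dependent.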

Relevance: 20.00%

Abstract:

Chapter in Book Proceedings with Peer Review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.

Relevance: 20.00%

Abstract:

Chapter in Book Proceedings with Peer Review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.

Relevance: 20.00%

Abstract:

Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
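A minimal sketch of the geometric idea behind VCA follows. It keeps only the core iteration (project the data onto a direction orthogonal to the span of the endmembers found so far, and take the extreme pixel as the next endmember) and omits the SNR-dependent dimensionality reduction of the full algorithm; all data are synthetic.

# Sketch of the VCA iteration on toy data, assuming p is known.
import numpy as np

def vca_sketch(R, p, seed=0):
    # R: (bands, pixels) mixed spectra; p: number of endmembers to extract.
    rng = np.random.default_rng(seed)
    bands, n = R.shape
    E = np.zeros((bands, p))
    for i in range(p):
        w = rng.normal(size=bands)
        if i > 0:                           # remove the component in span(E[:, :i])
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        f = w / np.linalg.norm(w)
        E[:, i] = R[:, np.argmax(np.abs(f @ R))]   # extreme of the projection
    return E

# Toy usage: 3 random signatures mixed with simplex-constrained abundances.
rng = np.random.default_rng(3)
M = rng.uniform(0, 1, size=(50, 3))          # true signatures (50 bands, synthetic)
A = rng.dirichlet(np.ones(3), size=1000).T   # abundances, columns sum to 1
E = vca_sketch(M @ A, p=3)
print(E.shape)                               # (50, 3) estimated endmember signatures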

Relevance: 20.00%

Abstract:

Nickel-copper metallic foams were electrodeposited from an acidic electrolyte, using hydrogen bubble evolution as a dynamic template. Their morphology and chemical composition were studied by scanning electron microscopy and related to the deposition parameters (applied current density and deposition time). For high current densities (above 1 A cm⁻²) the nickel-copper deposits have a three-dimensional foam-like morphology with randomly distributed, nearly circular pores whose walls present an open dendritic structure. The nickel-copper foams are crystalline and composed of pure nickel and a copper-rich phase containing nickel in solid solution. The electrochemical behaviour of the material was studied by cyclic voltammetry and chronopotentiometry (charge-discharge curves) with a view to its application as a positive electrode for supercapacitors. Cyclic voltammograms showed that the Ni-Cu foams have a pseudocapacitive behaviour. The specific capacitance was calculated from charge-discharge data, and the best value (105 F g⁻¹ at 1 mA cm⁻²) was obtained for nickel-copper foams deposited at 1.8 A cm⁻² for 180 s. The cycling stability of these foams was also assessed: they retain 90% of their capacitance after 10,000 cycles at 10 mA cm⁻².
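For reference, the specific capacitance from galvanostatic charge-discharge data is conventionally computed as C = I·Δt/(m·ΔV). The short worked example below applies that formula with hypothetical numbers chosen only to land on the same order of magnitude as the value reported above; the abstract does not give the raw discharge data.

# Worked example of C = I * Δt / (m * ΔV); all numbers are hypothetical.
current = 1e-3      # discharge current I, in A (1 mA)
dt = 52.5           # discharge time Δt, in s (hypothetical)
mass = 5e-3         # active material mass m, in g (hypothetical)
dV = 0.1            # potential window ΔV, in V (hypothetical)

C_sp = current * dt / (mass * dV)                  # specific capacitance, F g^-1
print(f"specific capacitance: {C_sp:.0f} F g^-1")  # -> 105 F g^-1 with these numbers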

Relevance: 20.00%

Abstract:

The hand is one of the most important instruments of the human body, mainly due to its capacity for grip movements. Grip strength has been described as an important predictor of functional capacity. Several factors may influence it, such as gender, age, and anthropometric characteristics. Functional capacity refers to the ability to perform the daily activities that allow individuals to care for themselves and live with autonomy. The Composite Physical Function (CPF) scale is an evaluation tool for functional capacity that covers daily activities, self-care, sports activities, upper limb function, and gait capacity. In 2011, 15% of Portugal's population was young (0-14 years) and 19% was elderly (over 65 years). Considering this double-ageing phenomenon, it is important to understand the effect of grip strength in elderly individuals, taking their characteristics into account, given the need to maintain independence for as long as possible.

Relevance: 20.00%

Abstract:

Estimating gestational age (GA) from fetal skeletal remains is important in forensic contexts. For this purpose, forensic experts rely on the assessment of the dental calcification pattern and/or the study of the skeleton. In the latter, the length of long-bone diaphyses is one of the most widely used methods, yet the regression equations employed come from outdated works or are based on ultrasound data, whose measurements differ from those made directly on the bone. The main goal of this work is to derive regression equations for the Portuguese population based on measurements of the femur, tibia, and humerus diaphyses in postmortem radiographs. The sample consists of 80 fetuses of known GA. As this is a retrospective study, cases were selected on the basis of clinical and anatomopathological information, excluding those whose normal growth was actually or potentially compromised. The results confirmed a strong correlation between the length of the studied diaphyses and GA, with the femur showing the strongest correlation (r = 0.967; p < 0.01). It was thus possible to obtain a regression equation for each of the studied bones. In conclusion, the goals of the study were achieved, with regression equations obtained for the bones studied. In future work, the sample will be enlarged to validate and consolidate the results obtained here.
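A minimal sketch of the kind of regression derivation described above, using synthetic femur measurements; the study's actual data and coefficients are not reproduced here.

# Fit GA (weeks) against diaphysis length (mm) on hypothetical data points.
import numpy as np

femur_mm = np.array([18.0, 25.0, 33.0, 41.0, 52.0, 60.0])   # hypothetical lengths
ga_weeks = np.array([15.0, 18.0, 21.0, 24.0, 28.0, 31.0])   # hypothetical GA

slope, intercept = np.polyfit(femur_mm, ga_weeks, deg=1)    # least-squares line
r = np.corrcoef(femur_mm, ga_weeks)[0, 1]                   # Pearson correlation
print(f"GA (weeks) = {slope:.3f} * femur (mm) + {intercept:.3f}, r = {r:.3f}")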

Relevance: 20.00%

Abstract:

The aim of this paper is to develop models for experimental open-channel water delivery systems and to assess the use of three data-driven modeling tools toward that end. Water delivery canals are nonlinear dynamical systems and thus should be modeled to meet given operational requirements while capturing all relevant dynamics, including transport delays. Typically, the derivation of first-principles models for open-channel systems is based on the Saint-Venant equations for shallow water, which is a time-consuming task and demands specific expertise. The present paper proposes and assesses the use of three data-driven modeling tools: artificial neural networks, composite local linear models, and fuzzy systems. The canal of the Hydraulics and Canal Control Nucleus (Évora University, Portugal) is used as a benchmark: the models are identified using data collected from the experimental facility, and their performance is then assessed against suitable validation criteria. All models are compared with one another and against the experimental data to show the effectiveness of such tools in capturing all significant dynamics within the canal system and, therefore, in providing accurate nonlinear models that can be used for simulation or control. The models are available upon request to the authors.
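As an illustration of one of the three tools, the sketch below fits a small neural network to lagged input-output data (NARX-style), with a toy delayed first-order system standing in for the canal dynamics; the signals, lag orders, and network size are all assumptions, not the Évora canal data.

# NARX-style one-step-ahead model with a small MLP on synthetic canal data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n, delay = 2000, 5
u = rng.uniform(0, 1, size=n)             # gate opening (synthetic input)
y = np.zeros(n)
for t in range(1, n):                     # toy delayed first-order response
    y[t] = 0.9 * y[t - 1] + (0.1 * u[t - delay] if t >= delay else 0.0)

lags = 6                                  # regressor order (assumed)
X = np.column_stack([u[lags - k:n - k] for k in range(1, lags + 1)]
                    + [y[lags - k:n - k] for k in range(1, lags + 1)])
target = y[lags:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X, target)
print("one-step-ahead R^2:", model.score(X, target))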

Relevance: 20.00%

Abstract:

Research on the problem of feature selection for clustering continues to develop. It is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most of the proposed approaches focusing on numerical data. In this work, we propose an approach that simultaneously clusters categorical data and selects a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), in which a set of latent variables indicates the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover the ground truth. An application to real data from official statistics shows its usefulness.
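The following is a minimal sketch of the mixture model underlying this approach: EM for a finite mixture of per-feature multinomial (categorical) distributions. The feature-saliency latent variables and the minimum message length criterion of the paper are omitted, and the data are synthetic.

# EM for a mixture of per-feature categorical distributions (latent class model).
import numpy as np

def em_categorical_mixture(X, K, n_cats, n_iter=50, seed=0):
    # X: (n, d) integer-coded categorical data; K components; n_cats levels per feature.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                              # mixing proportions
    theta = rng.dirichlet(np.ones(n_cats), size=(K, d))   # theta[k, j, c]
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * prod_j theta[k, j, x_ij]
        log_r = np.tile(np.log(pi), (n, 1))
        for j in range(d):
            log_r += np.log(theta[:, j, X[:, j]]).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted counts for proportions and category probabilities
        pi = r.mean(axis=0) + 1e-12
        for j in range(d):
            counts = np.zeros((K, n_cats))
            for c in range(n_cats):
                counts[:, c] = r[X[:, j] == c].sum(axis=0)
            theta[:, j, :] = (counts + 1e-6) / (counts + 1e-6).sum(axis=1, keepdims=True)
    return pi, theta, r

# Toy usage on synthetic categorical data: 4 features, 3 levels each.
rng = np.random.default_rng(5)
X = rng.integers(0, 3, size=(300, 4))
pi, theta, r = em_categorical_mixture(X, K=2, n_cats=3)
print(pi.round(3), r.shape)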