16 results for Penalized likelihood

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

10.00%

Publisher:

Abstract:

This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. It was designed as a study of diagnostic performance. The diagnostic accuracy of five visual tests (near point of convergence, near point of accommodation, stereopsis, contrast sensitivity, and Amsler grid) was evaluated by means of ROC (receiver operating characteristic) curves, sensitivity, specificity, and positive and negative likelihood ratios (LR+/LR-). Visual acuity was used as the reference standard. A sample of 44 institutionalized elderly people, mean age 76.7 years (±9.32), was collected. The contrast sensitivity and stereopsis curves were the most accurate (areas under the curve of 0.814, p=0.001, 95% C.I. [0.653; 0.975], and 0.713, p=0.027, 95% C.I. [0.540; 0.887], respectively). The scores with the best diagnostic validity for the stereopsis test were 0.605 (sensitivity 0.87, specificity 0.54; LR+ 1.89, LR- 0.24) and 0.610 (sensitivity 0.81, specificity 0.54; LR+ 1.75, LR- 0.36). The score with the highest diagnostic validity for the contrast sensitivity test was 0.530 (sensitivity 0.94, specificity 0.69; LR+ 3.04, LR- 0.09). The contrast sensitivity and stereopsis tests proved clinically useful in detecting vision loss in the elderly.
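The LR+ and LR- figures reported above follow directly from sensitivity and specificity; a minimal sketch of the computation (the function name is ours):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)  # how much a positive result raises the odds of disease
    lr_neg = (1.0 - sensitivity) / specificity  # how much a negative result lowers them
    return lr_pos, lr_neg

# Contrast sensitivity test at the 0.530 cutoff reported above:
lr_pos, lr_neg = likelihood_ratios(0.94, 0.69)  # close to the reported LR+ 3.04 and LR- 0.09
```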

Abstract:

The purpose of this investigation is to explore the justifications given by students for the existence of dishonest behavior and to understand the extent to which those justifications might influence denouncing and cheating behavior. 1277 undergraduate students of two Portuguese public universities were surveyed about their own cheating behavior, their propensity to denounce it, and their "neutralizing attitudes". As predicted, "neutralizing attitudes" were negatively correlated with self-reported cheating behavior and positively correlated with reporting. The likelihood of copying is greater when the purpose is "helping a friend", "when the courses are more difficult", "to get higher marks/grades", and because "peers accept and tend to see copying practices as normal". The results support the notion that context is a very important influence on the decision to cheat. Environmental peer pressure and normalized attitudes towards academic dishonesty are the main influences on the propensity to cheat.

Abstract:

A primary tool for regional tsunami hazard assessment is a reliable historical and instrumental catalogue of events. Morocco, by its geographical situation, with two marine sides, stretching along the Atlantic coast to the west and along the Mediterranean coast to the north, is the country of Western Africa most exposed to the risk of tsunamis. Previous information on tsunami events affecting Morocco is included in the Iberian and/or Mediterranean lists of tsunami events, as is the case of the European GITEC Tsunami Catalogue, but there is a need to organize this information in a dataset and to assess the likelihood of claimed historical tsunamis in Morocco. Because Moroccan sources are scarce, this compilation relies on historical documentation from neighbouring countries (Portugal and Spain), and the compatibility between the new tsunami catalogue presented here and those that cover the same source areas is therefore also discussed.

Abstract:

Objective - We aimed to identify clinical and genetic [IL23 receptor (IL23R) single nucleotide polymorphism (SNP)] predictors of response to therapy in patients with ulcerative colitis. Patients and methods - A total of 174 patients with ulcerative colitis, 99 women and 75 men, were included. The mean age of the patients was 47±15 years and the mean disease duration was 11±9 years. The number of patients classified as responders (R) or nonresponders (NR) was as follows: 110 R and 53 NR to mesalazine (5-ASA), 28 R and 20 NR to azathioprine (AZT), and 18 R and 7 NR to infliximab. Clinical and demographic variables were recorded. Four SNPs were studied: IL23R G1142A, C2370A, G43045A, and G9T. Genotyping was performed by real-time PCR using TaqMan probes. Results - Older patients were more prone to respond to 5-ASA (P=0.004), whereas those with pancolitis were less likely to respond to this therapy (P=0.002). Patients with extraintestinal manifestations (EIMs) were less likely to respond to 5-ASA (P=0.001), AZT (P=0.03), and corticosteroids (P=0.06). Carriers of the mutant allele for IL23R SNPs had a significantly higher probability of developing EIMs (P<0.05) and of being refractory to 5-ASA (P<0.03), but a higher likelihood of responding to AZT (P=0.05). A significant synergism was observed between IL23R C2370A and EIMs with respect to nonresponse to 5-ASA (P=0.03). Conclusion - Besides extent of disease and age at disease onset, the presence of EIMs may be a marker of refractoriness to 5-ASA, corticosteroids, and AZT. IL23R SNPs are associated both with EIMs and with nonresponse to 5-ASA and corticosteroids.

Abstract:

Dissertation submitted to obtain the degree of Master in Computer Engineering (Engenharia Informática)

Abstract:

There are several hazards in histopathology laboratories, and their staff must ensure that their professional activity is set to the highest standards while complying with the best safety procedures. Formalin is one of the chemical hazards to which such professionals are routinely exposed. To reduce this contact, it has been suggested that 10% neutral buffered liquid formalin (FL) be replaced by 10% formalin gel (FG), given that the latter reduces the likelihood of spills and splashes and releases lower fume levels during handling, proving itself less harmful. However, it is mandatory to assess the effectiveness of FG as a fixative and to ensure that subsequent complementary techniques, such as immunohistochemistry (IHC), are not compromised. Two groups of 30 samples from human placenta were fixed with the FG and FL fixatives for different periods of time (12, 24, and 48 hours) and thereafter processed, embedded, and sectioned. IHC for six different antibodies was performed and the results were scored (0-100) using an algorithm that took into account immunostaining intensity, percentage of stained structures, non-specific immunostaining, contrast, and morphological preservation. Parametric and non-parametric statistical tests were used (alpha = 0.05). All results were similar for both fixatives, with global score means of 95.36±6.65 for FL and 96.06±5.80 for FG, without any statistically significant difference (P>0.05). The duration of fixation also had no statistical relevance (P>0.05). FG thus appears to be an effective alternative to FL.

Abstract:

Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers that are incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
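The violated independence assumption is easy to see numerically: abundance fractions that sum to one are necessarily negatively correlated. A small illustration (the Dirichlet distribution here is our choice for the sketch, not necessarily the paper's simulation model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Abundance fractions for 3 endmembers: non-negative and summing to one,
# as imposed by the acquisition physics.
abundances = rng.dirichlet([1.0, 1.0, 1.0], size=10_000)

# The sum-to-one constraint forces negative correlation between sources,
# so they cannot be statistically independent, as ICA/IFA assume.
corr = np.corrcoef(abundances[:, 0], abundances[:, 1])[0, 1]  # near -0.5 here
```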

Abstract:

Objectives – To demonstrate the potential of proton (1H) magnetic resonance spectroscopy in lumbar degenerative disc disease and to argue for the integration of this technique into routine clinical imaging for the precise classification of involution vs. degeneration of the L4-L5 and L5-S1 discs in patients with low back pain not attributable to a mechanical cause. Material and methods – The study included 102 lumbar intervertebral discs from 123 patients. Sixty-one L4-L5 discs, 41 L5-S1 discs, and 34 D12-L1 discs were studied. A 1.5 T magnetic resonance system and a single-voxel technique were used. The [Lac/Nacetyl] and [Nacetyl/(Lac+Lipids)] ratios were obtained, and the lipid resonance was used to assess the biochemistry of the disc in order to determine the state of involution vs. degeneration that predisposes it to instability and overload. The behaviour of the ratios and of the lipid content of the L4-L5-S1 discs, and their differences relative to D12-L1, were evaluated. The L4-L5, L5-S1, and D12-L1 discs were also compared on T2-weighted (T2W) images according to the adjusted (1-4) Pfirrmann classification. Results – The ratios and lipid values of the L4-L5-S1 discs showed statistically significant differences relative to the D12-L1 discs. The [Lac/Nacetyl] ratio at L4-L5-S1 was increased relative to D12-L1 (p=0.033 for discs with involution grade [1+2] and p=0.004 for discs with grade [3+4]). These results suggest that disc involution vs. degeneration at the higher grades entails a decrease of the lactate peak. The [Nacetyl/(Lac+Lip)] ratio discriminated involution grades [1+2] from [3+4] at the L4-L5 level (mean ratios 0.65 and 0.5, respectively, p=0.04). The mean [Nacetyl/(Lac+Lip)] ratio of the L4-L5 discs was 1.8 times higher than at D12-L1. The lipid spectrum at L4-L5-S1 in the higher grades did not show a constant prevalence with respect to the resonance frequencies. Conclusion – Proton (1H) spectroscopy of the intervertebral discs may find application in discriminating the grades of involution vs. degeneration and may represent an important semiological contribution supplementing conventional T2-weighted imaging. The lipid resonances of involuted or degenerated L4-L5 and L5-S1 discs should be evaluated relative to D12-L1, using that level as a reference, since it is considered stable and has a low likelihood of degeneration.

Abstract:

Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are done simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion (Wallace and Boulton, 1968) to select the number of clusters. For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) using synthetic data. The results illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL in speed, which is especially relevant when dealing with large data sets.
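For contrast with the baseline criteria mentioned above, here is a bare-bones sketch of the conventional route the paper improves on: fit a mixture of multinomial (latent class) distributions by EM for each candidate number of clusters and pick the best BIC. All names and the toy data are ours, and this is the pre-estimated-candidates strategy, not the paper's integrated MML algorithm:

```python
import numpy as np

def em_latent_class(X, n_cat, k, n_iter=200, n_restarts=5):
    """EM for a k-component mixture where each observation is a vector of
    categorical variables, conditionally independent given the component
    (a multinomial-mixture / latent class model). Returns the best
    log-likelihood over random restarts."""
    n, d = X.shape
    onehot = np.eye(n_cat)[X]                            # n x d x n_cat indicators
    best = -np.inf
    for seed in range(n_restarts):
        rng = np.random.default_rng(seed)
        weights = np.full(k, 1.0 / k)
        probs = rng.dirichlet(np.ones(n_cat), size=(k, d))   # k x d x n_cat
        for _ in range(n_iter):
            # E-step: log P(x_i, comp j) = log w_j + sum_v log p_jv(x_iv)
            logp = np.einsum('ndc,jdc->nj', onehot, np.log(probs)) + np.log(weights)
            m = logp.max(axis=1, keepdims=True)
            joint = np.exp(logp - m)
            total = joint.sum(axis=1, keepdims=True)
            resp = joint / total                         # responsibilities
            loglik = (np.log(total) + m).sum()
            # M-step: update mixing weights and category probabilities
            weights = resp.mean(axis=0)
            counts = np.einsum('nj,ndc->jdc', resp, onehot) + 1e-12
            probs = counts / counts.sum(axis=2, keepdims=True)
        best = max(best, loglik)
    return best

def bic(loglik, k, d, n_cat, n):
    n_params = (k - 1) + k * d * (n_cat - 1)
    return -2.0 * loglik + n_params * np.log(n)

# Toy data: two latent classes, 5 binary variables with opposite biases.
rng = np.random.default_rng(42)
X = np.vstack([rng.random((400, 5)) < 0.8,               # class 1: mostly ones
               rng.random((400, 5)) < 0.2]).astype(int)  # class 2: mostly zeros
scores = {k: bic(em_latent_class(X, 2, k), k, 5, 2, len(X)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)                     # BIC should select 2
```

Note that each candidate k here requires a full EM run; the integrated approach described in the abstract avoids exactly this repeated estimation.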

Abstract:

Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. In order to estimate the number of clusters, one often resorts to information criteria, such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968), and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data. They use an MML criterion to select the number of clusters and a variant of the EM algorithm to estimate the model parameters. This EM variant seamlessly integrates model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained with synthetic datasets are encouraging. The main advantage of the proposed approach, when compared to the above referred criteria, is the speed of execution, which is especially relevant when dealing with large data sets.

Abstract:

In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted to the data by maximum likelihood, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion. © 2014 Springer-Verlag Berlin Heidelberg.

Abstract:

We propose a blind method to detect interference in GNSS signals in which the algorithms require no knowledge of the features of the interference or of the channel noise. A sample covariance matrix is constructed from the received signal and its eigenvalues are computed. The generalized likelihood ratio test (GLRT) and the condition number test (CNT) are developed and compared in the detection of sinusoidal and chirp jamming signals. A computationally efficient decision threshold is proposed for the CNT.
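A toy version of the CNT statistic described above: build a sample covariance matrix from sliding windows of the received signal and take the ratio of extreme eigenvalues. The window length, jammer parameters, and noise model are our assumptions for the sketch, not the paper's setup:

```python
import numpy as np

def condition_number(signal, m=8):
    """Ratio of the largest to smallest eigenvalue of the m x m sample
    covariance matrix built from sliding windows of the received samples."""
    windows = np.lib.stride_tricks.sliding_window_view(signal, m)
    eigs = np.linalg.eigvalsh(np.cov(windows.T))   # eigenvalues, ascending
    return eigs[-1] / eigs[0]

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
noise = rng.standard_normal(n)    # the GNSS signal itself sits below the noise floor
jammed = noise + 3.0 * np.sin(2 * np.pi * 0.05 * t)   # sinusoidal jammer

# A jammer concentrates power in a few eigenvalues, inflating the ratio;
# the CNT compares this statistic against a decision threshold.
cn_clean = condition_number(noise)
cn_jammed = condition_number(jammed)
```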

Abstract:

In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters using Gaussian process regression for machine learning. The calibration is done by maximizing the likelihood of zero coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization uses the conjugate gradient method. The only prices needed for calibration are zero coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk-neutral measure.
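The zero coupon bond prices entering such a calibration have a well-known closed form under the Vasicek model; a sketch in our notation (parameter names are ours, and this is the standard affine formula, not the paper's GP machinery):

```python
import numpy as np

def vasicek_zcb_log_price(r0, tau, a, b, sigma):
    """Closed-form log price of a zero coupon bond maturing in tau years
    under the Vasicek short-rate model dr = a*(b - r) dt + sigma dW, with
    risk-neutral parameters a (mean-reversion speed), b (long-run level)
    and sigma (volatility); r0 is the current short rate."""
    B = (1.0 - np.exp(-a * tau)) / a
    log_A = (B - tau) * (a * a * b - 0.5 * sigma ** 2) / (a * a) \
        - sigma ** 2 * B ** 2 / (4.0 * a)
    return log_A - B * r0

# Log prices fall as the short rate rises, and behave like -r0 * tau
# for short maturities.
lp = vasicek_zcb_log_price(0.03, 5.0, 0.5, 0.04, 0.01)
```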


Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]; the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
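For the known-endmember case just described, the sum-to-one constrained least-squares estimate has a closed form via a Lagrange multiplier. A sketch on synthetic data (names, sizes, and noise level are ours; non-negativity, which fully constrained approaches also enforce, is deliberately omitted):

```python
import numpy as np

def scls_unmix(M, y):
    """Sum-to-one constrained least-squares abundances for one pixel.
    M: (bands x p) endmember signature matrix; y: (bands,) observed pixel.
    Non-negativity of the abundances is NOT enforced in this sketch."""
    C_inv = np.linalg.inv(M.T @ M)
    a_ls = C_inv @ M.T @ y                   # unconstrained least squares
    ones = np.ones(M.shape[1])
    lam = (1.0 - ones @ a_ls) / (ones @ C_inv @ ones)
    return a_ls + lam * (C_inv @ ones)       # shift onto the sum-to-one plane

rng = np.random.default_rng(0)
M = rng.random((50, 3))                      # 3 synthetic endmembers, 50 bands
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(50)
a_hat = scls_unmix(M, y)                     # sums to one, close to a_true
```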
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
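The dimensionality reduction step just mentioned can be sketched with a plain SVD projection on synthetic data (sizes and noise level are ours): with few endmembers, the spectra concentrate near a low-dimensional subspace, so a handful of principal directions capture almost all the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scene: 1000 pixels, 100 bands, 3 endmembers, so the spectra
# lie near a low-dimensional affine subspace plus noise.
M = rng.random((100, 3))                       # endmember signatures
A = rng.dirichlet(np.ones(3), size=1000)       # abundances, sum to one
Y = A @ M.T + 0.01 * rng.standard_normal((1000, 100))

# Project onto the top-k principal directions (SVD of the centered data).
Ym = Y - Y.mean(axis=0)
_, s, Vt = np.linalg.svd(Ym, full_matrices=False)
k = 3
Y_reduced = Ym @ Vt[:k].T                      # 1000 x 3 instead of 1000 x 100
explained = (s[:k] ** 2).sum() / (s ** 2).sum()  # fraction of variance kept
```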
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOGs with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.