847 results for multimodal biometrics
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
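The information channel between image regions and intensity histogram bins can be made concrete with a small sketch. The helper below (hypothetical names, not the authors' implementation) computes the mutual information I(R; B) from the joint counts of region labels and intensity bins, assuming a grayscale image with values in [0, 1); the split phase maximizes the gain in this quantity, and the merge phase minimizes its loss.

```python
import numpy as np

def mutual_information(regions, intensities, n_bins=8):
    """I(R; B) between a region labelling R of an image and its
    intensity histogram bins B (illustrative helper, not the
    authors' implementation; intensities assumed in [0, 1))."""
    bins = np.minimum((intensities * n_bins).astype(int), n_bins - 1)
    labels = np.unique(regions)
    joint = np.zeros((len(labels), n_bins))
    for i, r in enumerate(labels):
        joint[i] = np.bincount(bins[regions == r], minlength=n_bins)
    p = joint / joint.sum()                  # joint distribution p(r, b)
    pr = p.sum(axis=1, keepdims=True)        # marginal over regions
    pb = p.sum(axis=0, keepdims=True)        # marginal over bins
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pr @ pb)[nz])).sum())

# Splitting a region can only increase I(R; B): refining the partition
# never loses information about the intensity bins, which is why the
# split phase greedily maximizes the mutual information gain.
```

A partition whose regions perfectly separate the intensity bins attains I(R; B) equal to the bin entropy, while a single all-image region gives I(R; B) = 0.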
Abstract:
Psoriasis is a chronic dermatological condition that affects about 2% of the world population and is therefore considered a public health problem. Photochemotherapy, involving the oral or topical administration of psoralens combined with UVA radiation (PUVA), has been accepted as one of the most effective therapeutic methods for treating psoriasis. Despite its benefits, the real impact that the intense radiation exposure in particular may have, especially on non-lesional skin, is still the subject of some speculation, given the near-total absence of published results. The present study therefore seeks to evaluate objectively the effects of this therapy on the skin of patients with psoriasis vulgaris. The volunteers underwent biometric assessment, including quantification of the skin's barrier function and water dynamics (evaporimetry and epidermal capacitance) and evaluation of biomechanical behaviour (by cutometry). The results demonstrated the inverse relationship between TEWL (transepidermal water loss) and epidermal capacitance. Lesional areas hardly recover to baseline values, and even areas of healthy skin show alterations that were not entirely corrected during the study period, despite the improvement in signs and symptoms in all patients.
Abstract:
Chronic Kidney Disease (CKD) is insidious, progressive and irreversible in nature and a major cause of morbidity and mortality in cats. The natural behaviour of the feline species is compromised in the domestic environment, giving rise to stressful situations that play an important role in the pathogenesis of chronic disease. The literature suggests that continuous activation of the sympathetic nervous system triggers a series of physiological processes that ultimately result in renal fibrosis, thus contributing to the progression of CKD. This dissertation aims to evaluate that relationship. To that end, questionnaires were analysed to assess the living conditions of a sample of 139 cats, and haematological and biochemical panels were performed on a sub-sample to check for correlations. Although it was not possible to conclude that any single parameter can be identified as a direct cause of the development of CKD, a set of stress-inducing environmental factors can be identified as likely risk factors for the worsening of this disease and its transition to more advanced stages. Accordingly, the implementation of MEMO (Multimodal Environmental Modification) environmental enrichment strategies not only aims to improve the quality of life of these animals but may also prove key to the prevention and management of chronic diseases.
Abstract:
The Constitution defines the state as plurinational and recognises Nature as a subject of rights. Despite this, during 2009 the rights of nature and the Collective Rights of Peoples and Nationalities were violated in the following cases: the Manta-Manaos Multimodal Axis; the ITT initiative; the regulations and policies for industrial mining in the Cordillera del Cóndor; concessions and privileges for Catholic missions in the Amazon Region; the closure of the radio station La Voz de Arutam; the attack on the Peoples of the Yasuní; and the Sarayaku case. These cases show that governmental power remains tied to the pre-constituent model, and developmentalist and extractivist ambitions have filled the lives of the ancestral inhabitants with conflict.
Abstract:
The Manta-Manaos multimodal axis evokes the idea of commercial articulation between Ecuador and Brazil and, consequently, stirs positive feelings about South American regional integration. However, a more comprehensive analysis of the different facets of this megaproject reveals several scenarios and actors that must be considered if the Ecuadorian government intends to achieve genuine social, commercial and ecological development of the Amazon region, the area most affected by the corridor. The study explores two fundamental lines: the first is that attempts at commercial integration between Ecuador and the Latin American giant must be conceptualised from the economic configuration of both countries, recognising the true potential and limitations that exist. The second is that, as the route is currently planned, an environmental fracture of the Ecuadorian Amazon is imminent, since Ecuador lacks a comprehensive development plan that includes, above all, territorial reorganisation and the inclusion of the Amazonian population. Creating alternatives for an integration that satisfies Ecuador's commercial, environmental and human interests will depend on the government's capacity to include the relevant actors, be honest about the commercial scenarios, and promote proposals that effectively lead to Ecuador's strategic insertion in the world.
Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response that is integratively presented as a multi-modal interface, capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study in the context of the first phase of a PhD research programme in the area of optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest; and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between test statistics monitored as part of the sequential test. It can be difficult to quantify ρ and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of correlation from data collected as part of the trial. An adaptive approach is proposed and evaluated that makes use of these formulas and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
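The adaptive idea can be illustrated with a rough sketch, assuming paired per-patient efficacy and safety measurements (the paper's actual formulas are not reproduced here; all names are hypothetical): rather than fixing ρ at its lowest plausible value, it is estimated from the data accumulated at an interim analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def interim_correlation(efficacy, safety):
    """Estimate the correlation rho between the two monitored test
    statistics from accumulated paired responses (illustrative
    stand-in for the paper's formulas)."""
    return float(np.corrcoef(efficacy, safety)[0, 1])

# Toy data: a shared latent factor induces positive correlation
# between the efficacy and safety responses, so plugging in the
# lowest plausible rho = 0 misstates the joint distribution of the
# monitored statistics.
latent = rng.normal(size=500)
eff = latent + rng.normal(size=500)
saf = 0.8 * latent + rng.normal(size=500)
rho_hat = interim_correlation(eff, saf)
```

The estimate `rho_hat` would then replace the assumed value in the stopping boundary calculations at subsequent looks, which is the sense in which the design is adaptive.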
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follman, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification straightforward. Second, the question of trial power is also considered, enabling the determination of sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
This article describes an approach to optimal design of phase II clinical trials using Bayesian decision theory. The method proposed extends that suggested by Stallard (1998, Biometrics 54, 279–294) in which designs were obtained to maximize a gain function including the cost of drug development and the benefit from a successful therapy. Here, the approach is extended by the consideration of other potential therapies, the development of which is competing for the same limited resources. The resulting optimal designs are shown to have frequentist properties much more similar to those traditionally used in phase II trials.
Abstract:
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
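A toy Monte Carlo, under assumptions not taken from the paper (a single interim look, known unit variance, a naive re-test that is not one of the two compared procedures), illustrates why overrunning needs special handling: once a trial has stopped at the boundary, re-testing the updated statistic against the same unadjusted boundary changes the realised rejection rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_overrun(n_trials=20000, n=100, overrun=20, c=1.96):
    """Monte Carlo under the null hypothesis: each trial stops when
    the interim z-statistic crosses c, but `overrun` extra patients
    accrue afterwards.  Returns the interim stopping rate and the
    rate at which the naive re-test of the updated statistic still
    rejects (illustrative only, not the paper's procedures)."""
    x = rng.normal(size=(n_trials, n + overrun))
    z_interim = x[:, :n].sum(axis=1) / np.sqrt(n)
    z_final = x.sum(axis=1) / np.sqrt(n + overrun)
    stopped = np.abs(z_interim) > c          # crossed the boundary
    naive_reject = stopped & (np.abs(z_final) > c)
    return stopped.mean(), naive_reject.mean()
```

The gap between the two rates is the distortion that the adjustment methods compared in the article are designed to control.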
Abstract:
Bayesian decision procedures have recently been developed for dose escalation in phase I clinical trials concerning pharmacokinetic responses observed in healthy volunteers. This article describes how that general methodology was extended and evaluated for implementation in a specific phase I trial of a novel compound. At the time of writing, the study is ongoing, and it will be some time before the sponsor will wish to put the results into the public domain. This article is an account of how the study was designed in a way that should prove to be safe, accurate, and efficient whatever the true nature of the compound. The study involves the observation of two pharmacokinetic endpoints relating to the plasma concentration of the compound itself and of a metabolite as well as a safety endpoint relating to the occurrence of adverse events. Construction of the design and its evaluation via simulation are presented.
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
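A minimal sketch of the single-component case may help fix ideas (the article's focus is on mixtures; the helper names here are hypothetical). For a zero-truncated Poisson, the MLE solves λ/(1 − e^(−λ)) = x̄, the mean of the observed nonzero counts, and the Horvitz-Thompson population size estimate is n/(1 − e^(−λ̂)).

```python
import math

def fit_truncated_poisson(counts):
    """MLE of lambda for a zero-truncated Poisson, solving
    lambda / (1 - exp(-lambda)) = mean(counts) by bisection.
    (Single-component fit only; the article treats mixtures.)"""
    xbar = sum(counts) / len(counts)
    # The truncated mean exceeds lambda, so lambda < xbar brackets
    # the root; the left-hand side is increasing in lambda.
    lo, hi = 1e-9, xbar
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - math.exp(-mid)) < xbar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def horvitz_thompson_size(counts):
    """Horvitz-Thompson population size: N = n / (1 - p0), where
    p0 = exp(-lambda) is the estimated probability of a zero count."""
    lam = fit_truncated_poisson(counts)
    return len(counts) / (1.0 - math.exp(-lam))
```

The article's equivalence result says that fitting a mixture of truncated Poissons, rather than truncating a Poisson mixture, yields the same likelihood surface and hence the same Horvitz-Thompson estimate.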
Abstract:
For people with motion impairments, access to and independent control of a computer can be essential. Symptoms such as tremor and spasm, however, can make the typical keyboard and mouse arrangement for computer interaction difficult or even impossible to use. This paper describes three approaches to improving computer input effectiveness for people with motion impairments. The three approaches are: (1) to increase the number of interaction channels, (2) to enhance commonly existing interaction channels, and (3) to make more effective use of all the available information in an existing input channel. Experiments in multimodal input, haptic feedback, user modelling, and cursor control are discussed in the context of the three approaches. A haptically enhanced keyboard emulator with perceptive capability is proposed, combining approaches in a way that improves computer access for motion impaired users.
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
Abstract:
Brain activity can be measured with several non-invasive neuroimaging modalities, but each modality has inherent limitations with respect to resolution, contrast and interpretability. It is hoped that multimodal integration will address these limitations by using the complementary features of already available data. However, purely statistical integration can prove problematic owing to the disparate signal sources. As an alternative, we propose here an advanced neural population model implemented on an anatomically sound cortical mesh with freely adjustable connectivity, which features proper signal expression through a realistic head model for the electroencephalogram (EEG), as well as a haemodynamic model for functional magnetic resonance imaging based on blood oxygen level dependent contrast (fMRI BOLD). It hence allows simultaneous and realistic predictions of EEG and fMRI BOLD from the same underlying model of neural activity. As proof of principle, we investigate here the influence on simulated brain activity of strengthening visual connectivity. In the future we plan to fit multimodal data with this neural population model. This promises novel, model-based insights into the brain's activity in sleep, rest and task conditions.