931 results for Temporal information
Abstract:
In this paper, a novel approach for exploiting multitemporal remote sensing data, focused on real-time monitoring of agricultural crops, is presented. The methodology is defined in a dynamical system context using state-space techniques, which makes it possible to merge past temporal information with an update for each new acquisition. The dynamical system context also allows us to exploit classical tools from this domain to estimate relevant variables. A general methodology is proposed, and a particular instance is defined in this study based on polarimetric radar data to track the phenological stages of a set of crops. Model generation from empirical data through principal component analysis is presented, and an extended Kalman filter is adapted to perform phenological stage estimation. Results employing quad-pol Radarsat-2 data over three different cereals are analyzed. The potential of this methodology to retrieve vegetation variables in real time is shown.
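As a rough illustration of the filtering machinery this abstract refers to, here is a minimal extended Kalman filter predict/update step in Python. The transition function f, observation function h, their Jacobians, and the noise covariances are generic placeholders, not the paper's PCA-derived crop model.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter cycle: propagate the state estimate,
    then correct it with the newest acquisition z."""
    # Predict: advance the state (e.g. phenological stage) and its covariance
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: merge past temporal information with the new measurement
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Each new radar acquisition triggers one such step, which is what allows the estimate to be maintained in real time.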
Abstract:
Evolutionary change results from selection acting on genetic variation. For migration to be successful, many different aspects of an animal's physiology and behaviour need to function in a coordinated way. Changes in one migratory trait are therefore likely to be accompanied by changes in other migratory and life-history traits. At present, we have some knowledge of the pressures that operate at the various stages of migration, but we know very little about the extent of genetic variation in various aspects of the migratory syndrome. As a consequence, our ability to predict which species is capable of what kind of evolutionary change, and at which rate, is limited. Here, we review how our evolutionary understanding of migration may benefit from taking a quantitative-genetic approach and present a framework for studying the causes of phenotypic variation. We review past research, which has mainly studied single migratory traits in captive birds, and discuss how this work could be extended to study genetic variation in the wild and to account for genetic correlations and correlated selection. In the future, reaction-norm approaches may become very important, as they allow the study of genetic and environmental effects on phenotypic expression within a single framework, as well as of their interactions. We advocate making more use of repeated measurements on single individuals to study the causes of among-individual variation in the wild, as they are easier to obtain than data on relatives and can provide valuable information for identifying and selecting traits. This approach will be particularly informative if it involves systematic testing of individuals under different environmental conditions. We propose extending this research agenda by using optimality models to predict levels of variation and covariation among traits and constraints. This may help us to select traits in which we might expect genetic variation, and to identify the most informative environmental axes. We also recommend an expansion of the passerine model, as this model does not apply to birds, such as geese, in which cultural transmission of spatio-temporal information is an important determinant of migration patterns and their variation.
Abstract:
Attractor properties of a popular discrete-time neural network model are illustrated through numerical simulations. The most complex dynamics are found to occur within particular ranges of parameters controlling the symmetry and magnitude of the weight matrix. A small network model is observed to produce fixed points, limit cycles, mode-locking, the Ruelle-Takens route to chaos, and the period-doubling route to chaos. Training algorithms for tuning this dynamical behaviour are discussed. Training can be an easy or difficult task, depending on whether the problem requires the use of temporal information distributed over long time intervals. Such problems require training algorithms that can handle hidden nodes. The most prominent of these algorithms, backpropagation through time, solves the temporal credit assignment problem in a way that works only if the relevant information is distributed locally in time. The Moving Targets algorithm works in the more general case, but is computationally intensive and prone to local minima.
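For concreteness, a generic discrete-time recurrent network of the kind whose attractors are described here can be simulated in a few lines of Python; the update rule below is a common form of this model class, not necessarily the paper's exact equations.

```python
import numpy as np

def simulate(W, x0, n_steps, g=np.tanh):
    """Iterate x_{t+1} = g(W x_t) and return the trajectory."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(g(W @ xs[-1]))
    return np.asarray(xs)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
traj = simulate(4.0 * W, rng.standard_normal(3), 500)
# Sweeping the gain (here 4.0) and the degree of symmetry of W moves the
# long-run behaviour among fixed points, limit cycles and chaos.
```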
Abstract:
The solar power satellite has attracted attention as a clean, inexhaustible, large-scale base-load power supply. The following beam-control technology is used: a pilot signal is sent from the power receiving site and, after direction-of-arrival estimation, the beam is directed back to the earth along the same direction. A novel direction-finding algorithm based on a linear prediction technique for exploiting cyclostationary statistical information (spatial and temporal) is explored. Many modulated communication signals exhibit a cyclostationarity (or periodic correlation) property, corresponding to the underlying periodicity arising from carrier frequencies or baud rates. The problem was solved by using both cyclic second-order statistics and cyclic higher-order statistics. By evaluating the corresponding cyclic statistics of the received data at certain cycle frequencies, we can extract the cyclic correlations of only those signals with the same cycle frequency and null out the cyclic correlations of stationary additive noise and of all other co-channel interferences with different cycle frequencies. Thus, the signal detection capability can be significantly improved. The proposed algorithms employ cyclic higher-order statistics of the array output and suppress additive Gaussian noise of unknown spectral content, even when the noise shares common cycle frequencies with the non-Gaussian signals of interest. The proposed method fully exploits temporal information (multiple lags) and can also correctly estimate the direction of arrival of desired signals by suppressing undesired signals. Our approach was generalized to direction-of-arrival estimation of cyclostationary coherent signals. In this paper, we propose a new approach for exploiting cyclostationarity that appears more advanced than existing direction-finding algorithms.
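The core quantity in such algorithms is the cyclic correlation of the received data. A minimal estimator of the cyclic autocorrelation at a single cycle frequency and lag might look as follows (a textbook second-order estimator, not the paper's full higher-order, multi-lag algorithm):

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, lag, fs=1.0):
    """Estimate R_x^alpha(lag): signals sharing cycle frequency alpha add
    coherently, while stationary noise and interferers with different
    cycle frequencies average toward zero as the record length grows."""
    n = np.arange(len(x) - lag)
    products = x[n + lag] * np.conj(x[n])
    return np.mean(products * np.exp(-2j * np.pi * alpha * n / fs))
```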
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n grows enormously in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced-rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridges existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
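To make the latent structure side concrete, the following sketch builds the probability tensor of a small latent class (PARAFAC) model, illustrating the reduced-rank factorization that the chapter relates to log-linear sparsity; all dimensions are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, k = 3, 4, 2                      # variables, categories per variable, classes

lam = rng.dirichlet(np.ones(k))        # latent class weights
psi = [rng.dirichlet(np.ones(d), size=k).T for _ in range(p)]  # (d, k) per variable

# P(y_1..y_p) = sum_h lam[h] * prod_j psi_j[y_j, h]: a nonnegative rank-k tensor
tensor = np.zeros((d,) * p)
for h in range(k):
    t = psi[0][:, h]
    for j in range(1, p):
        t = np.multiply.outer(t, psi[j][:, h])
    tensor += lam[h] * t

assert np.isclose(tensor.sum(), 1.0)   # a valid joint pmf
```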
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
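The KL-optimal Gaussian approximation derived in Chapter 4 is specific to Diaconis-Ylvisaker priors and is not reproduced here; as a generic illustration of approximating a posterior by a Gaussian, a Laplace approximation (posterior mode plus inverse Hessian) can be sketched as follows.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approximation(neg_log_post, theta0):
    """Gaussian approximation centred at the posterior mode, with
    covariance given by BFGS's inverse-Hessian estimate there.
    (Related in spirit to, but not the same as, the KL-optimal
    Gaussian derived in Chapter 4.)"""
    res = minimize(neg_log_post, theta0, method="BFGS")
    return res.x, res.hess_inv
```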
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
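The basic data objects in this framework, the waiting times between threshold exceedances, are simple to compute; a minimal sketch is below (the inferential machinery built on top of them in Chapter 5 is not reproduced here).

```python
import numpy as np

def exceedance_waiting_times(x, u):
    """Waiting times (in samples) between exceedances of the high
    threshold u in a time-indexed series x."""
    exceed_idx = np.flatnonzero(np.asarray(x) > u)
    return np.diff(exceed_idx)
```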
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
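One of the approximations studied, a transition kernel built on random subsets of data, can be sketched as a random-walk Metropolis step whose log posterior is evaluated on a subsample and rescaled; the names and scaling convention below are a generic illustration rather than the chapter's exact construction.

```python
import numpy as np

def approx_mh(data, logpost_subset, x0, n_iter, subset_size, step, rng):
    """Random-walk Metropolis with a subsampled-data likelihood.
    logpost_subset(theta, subset, n_total) should upweight the subset
    log-likelihood by n_total / len(subset)."""
    n = len(data)
    x = x0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        idx = rng.choice(n, size=subset_size, replace=False)
        prop = x + step * rng.standard_normal()
        # evaluate both states on the same subset for comparability
        log_ratio = (logpost_subset(prop, data[idx], n)
                     - logpost_subset(x, data[idx], n))
        if np.log(rng.uniform()) < log_ratio:
            x = prop
        chain[t] = x
    return chain
```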
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly, whereas Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
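For reference, the truncated normal (Albert-Chib) data augmentation sampler for probit regression whose mixing is analyzed here can be written compactly; this sketch assumes a flat prior on the coefficients.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter, rng):
    """Albert-Chib Gibbs sampler: latent z_i ~ N(x_i'beta, 1), truncated to
    the half-line determined by y_i, then beta drawn from its Gaussian
    full conditional. With rare successes, chains like this mix slowly."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)    # truncation bounds for z - mu
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        mean = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(mean, XtX_inv)
        draws[t] = beta
    return draws
```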
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft-tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and at developing new DCE-MRI analysis methods for radiotherapy assessment, and is accordingly divided into two parts. The first part focuses on DCE-MRI temporal resolution, one of the key DCE-MRI technical factors, and proposes improvements to it; the second part explores the potential value of image heterogeneity analysis and of combining multiple PK models for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and simulated accelerated k-space acquisitions were generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully-sampled data. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data in reference to those generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method thus appears feasible and promising.
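A radial multi-ray undersampling grid of the kind described can be sketched as a binary k-space mask; the uniform angular spacing below is a simplification of the "special angular distribution" used in the study.

```python
import numpy as np

def radial_mask(shape, n_rays):
    """Binary k-space mask of n_rays full spokes through the centre;
    the acceleration factor is roughly mask.size / mask.sum()."""
    ny, nx = shape
    cy, cx = ny / 2.0, nx / 2.0
    mask = np.zeros(shape, dtype=bool)
    r = np.arange(-max(ny, nx), max(ny, nx))        # both halves of each spoke
    for theta in np.linspace(0.0, np.pi, n_rays, endpoint=False):
        yy = np.round(cy + r * np.sin(theta)).astype(int)
        xx = np.round(cx + r * np.cos(theta)).astype(int)
        keep = (yy >= 0) & (yy < ny) & (xx >= 0) & (xx < nx)
        mask[yy[keep], xx[keep]] = True
    return mask
```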
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for the PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is conventionally presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove the potential noise effect in the data, and it solves for the PK parameters as a linear problem in matrix form. In the computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, the calculation efficiency of the new method was superior to current methods by about two orders of magnitude (10^2). In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method can be used for accurate and efficient PK model fitting for high-temporal-resolution DCE-MRI.
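The proposed derivative-based, KZ-filtered solver is not reproduced here, but the underlying idea of recasting the Tofts model as a linear problem can be illustrated with the standard integral (Murase) linearization, which solves for Ktrans and kep by least squares:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t."""
    out = np.zeros(len(y))
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def fit_tofts_linear(t, Cp, Ct):
    """Fit Ct(t) = Ktrans * int_0^t Cp - kep * int_0^t Ct by linear least
    squares; Cp is the arterial input, Ct the tissue concentration curve.
    Returns (Ktrans, kep)."""
    A = np.column_stack([cumtrapz(Cp, t), -cumtrapz(Ct, t)])
    (ktrans, kep), *_ = np.linalg.lstsq(A, Ct, rcond=None)
    return ktrans, kep
```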
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodological developments along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is inspired by the rationale that radiotherapy-induced functional change could be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned before, selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, treatment/control group classification after the first treatment fraction improved over that using conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
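Of the heterogeneity measures mentioned, the simplest member of the fractal-dimension family is the box-counting dimension; a compact estimator for a binary map (e.g. a thresholded PK parameter map) is sketched below.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a non-empty 2-D binary
    mask: count occupied s-by-s boxes, then regress log N(s) on log(1/s)."""
    counts = []
    for s in sizes:
        ny, nx = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:ny, :nx].reshape(ny // s, s, nx // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```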
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version has been widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as of its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high-temporal-resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
A tenet of modern radiotherapy (RT) is to identify the treatment target accurately, following which the high-dose treatment volume may be expanded into the surrounding tissues in order to create the clinical and planning target volumes. Respiratory motion can induce errors in target volume delineation and dose delivery in radiation therapy for thoracic and abdominal cancers. Historically, radiotherapy treatment planning in the thoracic and abdominal regions has used 2D or 3D images acquired under uncoached free-breathing conditions, irrespective of whether the target tumor is moving or not. Once the gross target volume has been delineated, standard margins are commonly added in order to account for motion. However, these generic margins do not usually take the target motion trajectory into consideration, and may therefore under- or overestimate motion, with the subsequent risk of missing the target during treatment or irradiating excessive normal tissue; this introduces systematic errors into treatment planning and delivery. In clinical practice, four-dimensional (4D) imaging has become popular for RT motion management. It provides temporal information about tumor and organ-at-risk motion, and it permits patient-specific treatment planning. The most common contemporary imaging technique for identifying tumor motion is 4D computed tomography (4D-CT). However, CT has poor soft-tissue contrast and carries an ionizing radiation hazard. In the last decade, 4D magnetic resonance imaging (4D-MRI) has emerged as a tool to image respiratory motion, especially in the abdomen, because of its superior soft-tissue contrast. Recently, several 4D-MRI techniques have been proposed, including prospective and retrospective approaches. Nevertheless, 4D-MRI techniques face several challenges: 1) suboptimal and inconsistent tumor contrast with large inter-patient variation; 2) relatively low temporal-spatial resolution; 3) lack of a reliable respiratory surrogate. In this research work, novel 4D-MRI techniques applying MRI weightings not used in existing 4D-MRI techniques, including T2/T1-weighted, T2-weighted and diffusion-weighted MRI, were investigated. A result-driven retrospective phase sorting method was proposed and applied to both the image space and the k-space of MR imaging. Novel image-based respiratory surrogates were developed, improved and evaluated.
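A minimal form of retrospective phase sorting, the general strategy extended in this work, can be sketched as follows: peaks of a respiratory surrogate define breathing cycles, each dynamic frame receives a phase in [0, 1), and frames are grouped into phase bins. All names and the crude peak detector below are illustrative, not the proposed result-driven method.

```python
import numpy as np

def phase_sort(frame_times, resp, resp_times, n_bins=10):
    """Assign each acquired frame a respiratory phase and a phase bin."""
    # crude peak detection on the surrogate (assumes a smooth signal)
    peaks = np.flatnonzero((resp[1:-1] > resp[:-2]) & (resp[1:-1] >= resp[2:])) + 1
    pt = resp_times[peaks]
    phases = np.full(len(frame_times), np.nan)
    for i, t in enumerate(frame_times):
        k = np.searchsorted(pt, t) - 1
        if 0 <= k < len(pt) - 1:                  # inside a complete cycle
            phases[i] = (t - pt[k]) / (pt[k + 1] - pt[k])
    bins = np.clip(np.floor(phases * n_bins), 0, n_bins - 1)
    return phases, bins
```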
Abstract:
Brain injury due to lack of oxygen or impaired blood flow around the time of birth may cause long-term neurological dysfunction or death in severe cases. Treatment needs to be initiated as soon as possible and tailored to the nature of the injury to achieve the best outcomes. The electroencephalogram (EEG) currently provides the best insight into neurological activity. However, its interpretation presents a formidable challenge for neurophysiologists. Moreover, such expertise is not widely available, particularly around the clock in a typical busy Neonatal Intensive Care Unit (NICU). Therefore, an automated computerized system for detecting and grading the severity of brain injuries could be of great help to medical staff in diagnosing and then initiating on-time treatment. In this study, automated systems for the detection of neonatal seizures and for grading the severity of Hypoxic-Ischemic Encephalopathy (HIE) using EEG and Heart Rate (HR) signals are presented. It is well known that a great deal of contextual and temporal information is present in the EEG and HR signals when examined at longer time scales. Systems developed in the past exploited this information either at a very early stage of the system, before any intelligent block, or at a very late stage, where the presence of such information is much reduced. This work has focused particularly on the development of a system that can incorporate contextual information at the middle (classifier) level. This is achieved by using dynamic classifiers that are able to process sequences of feature vectors rather than only one feature vector at a time.
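A canonical example of such a dynamic classifier is a hidden Markov model, which scores a whole sequence of feature vectors via the forward algorithm; the sketch below is a generic illustration of the idea, not the specific classifiers developed in this work.

```python
import numpy as np
from scipy.special import logsumexp

def hmm_forward_loglik(log_pi, log_A, log_B):
    """Sequence log-likelihood under an HMM.
    log_pi: (k,)   log initial state probabilities
    log_A : (k, k) log transition matrix, log_A[i, j] = log P(j | i)
    log_B : (T, k) per-frame log-likelihood of each state"""
    alpha = log_pi + log_B[0]
    for t in range(1, len(log_B)):
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[t]
    return logsumexp(alpha)
```

Classifying a recording then amounts to comparing this quantity across class-specific models (e.g. seizure vs. non-seizure), so contextual information enters at the classifier level.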
Abstract:
This paper describes an approach for the detection of frontal faces in real time (20-35 Hz) for further processing. The approach makes use of a combination of previous-detection tracking and color for selecting areas of interest. Within those areas, facial features such as eyes, nose and mouth are then searched for based on geometric tests, appearance verification, and temporal and spatial coherence. The system uses very simple techniques applied in a cascade, combined and coordinated with temporal information to improve performance. This module is a component of a complete system designed for the detection, tracking and identification of individuals [1].
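The temporal-coherence idea, restricting the search to the neighbourhood of the last detection and falling back to a full scan when that fails, can be sketched as below; the two detector callables are placeholders for any frontal-face detector, not this system's exact cascade.

```python
def track_or_detect(frame, prev_box, detect_full, detect_in_roi, expand=1.5):
    """Search near the previous detection first (temporal coherence);
    fall back to a full-frame detection if that fails."""
    if prev_box is not None:
        x, y, w, h = prev_box
        cx, cy = x + w / 2.0, y + h / 2.0
        W, H = w * expand, h * expand
        roi = (max(int(cx - W / 2), 0), max(int(cy - H / 2), 0), int(W), int(H))
        box = detect_in_roi(frame, roi)
        if box is not None:
            return box
    return detect_full(frame)
```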
Abstract:
Temporal estimation in the range of seconds to a few minutes requires attentional resources for the accumulation of temporal information during the interval to be estimated (Brown, 2006; Buhusi & Meck, 2009; Zakay & Block, 2004). This has been demonstrated in the dual-task paradigm, where performing a concurrent task while estimating an interval leads to an interference effect, that is, a distortion of perceived duration resulting in temporal productions that are longer and more variable than if the interval were estimated alone (see Brown, 1997; 2010). An interference effect is also observed when an interruption is expected during the interval to be estimated, the lengthening being proportional to the duration of the wait before the interruption (Fortin & Massé, 2000). This effect led to the hypothesis that production with an interruption is underpinned by an attention-sharing mechanism similar to that at work in dual tasks (Fortin, 2003). To examine this hypothesis, two empirical studies were conducted in experimental contexts associated, respectively, with an increase and a decrease in the interference effect: aging (Chapter II) and cognitive training (Chapter III). In Chapter II, the production-with-interruption task is studied in young and older participants using functional near-infrared spectroscopy (fNIRS). The results show that expecting the interruption is associated with behavioural and functional costs similar to those of the dual task. At the behavioural level, a lengthening of productions proportional to the expected duration of the interruption is observed in all participants, but this effect is more pronounced in older participants than in younger ones. This result is consistent with observations from the dual-task paradigm (see Verhaegen, 2011 for a review). At the functional level, production with and without interruption is associated with activation of the right prefrontal cortex and of dorsolateral prefrontal regions known for their role in explicit (interval production) and implicit (preparatory processes) temporal estimation. Moreover, expecting the interruption is associated with increased prefrontal cortical activation in both hemispheres in all participants, including the ventrolateral prefrontal cortex associated with attentional control in dual tasks. Finally, the results show that older participants are characterized by bilateral cortical activation during production both with and without interruption. Within the framework of theories of cognitive aging (Park & Reuter-Lorenz, 2009), this suggests that age is associated with inefficient recruitment of attentional resources for interval production, which impairs the recruitment of additional resources to cope with the demands related to expecting the interruption. In Chapter III, the production-with-interruption task is studied by comparing the performance of participants assigned to one of two conditions of extensive practice (five successive sessions): dual-task production or production with interruption. Pre- and post-test sessions were also conducted to test transfer between conditions. The results show an interference effect and an effect of interference duration in both dual-task production and production with interruption.
These effects are, however, more pronounced in production with interruption and tend to increase across sessions, which is not observed in the dual task. This can be explained by the influence of preparatory processes during the pre-interruption period and during the interruption itself. Finally, the results do not reveal substantial transfer effects between conditions, since the effects of practice mainly concern temporal preparation, a process specific to production with interruption. Through the convergence afforded by the use of a single paradigm with distinct methodologies, this work deepens our knowledge of the attentional mechanisms associated with temporal estimation, and more specifically with production with interruption. The results support the hypothesis of attention sharing induced by expecting the interruption. Resources would be shared between explicit and implicit temporal estimation processes, an important distinction recently brought forward in research on time estimation (Coull, Davranche, Nazarian & Vidal, 2013). The involvement of processes that depend on common attentional resources for processing temporal information can account for the robust and systematic interference effect observed in the production-with-interruption task.
Abstract:
Understanding spatial patterns of land use and land cover is essential for studies addressing biodiversity, climate change and environmental modeling, as well as for the design and monitoring of land use policies. The aim of this study was to create a detailed map of the land use and land cover of the deforested areas of the Brazilian Legal Amazon up to 2008. Land uses within the deforested areas were mapped with Landsat-5/TM images analysed with techniques such as the linear spectral mixture model, threshold slicing and visual interpretation, aided by temporal information extracted from MODIS NDVI time series. The result is a high-spatial-resolution land use and land cover map of the entire Brazilian Legal Amazon for the year 2008, together with the corresponding calculation of the area occupied by the different land use classes. The results showed that the four Pasture classes covered 62% of the deforested areas of the Brazilian Legal Amazon, followed by Secondary Vegetation with 21%. The area occupied by Annual Agriculture covered less than 5% of the deforested areas; the remaining areas were distributed among six other land use classes. The maps generated by this project, called TerraClass, are available at INPE's web site (http://www.inpe.br/cra/projetos_pesquisas/terraclass2008.php).
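The MODIS time series enter through the NDVI, which is computed per pixel from the red and near-infrared bands; for reference:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```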
Abstract:
This paper investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to be highly robust in the presence of noise. The fusion structure for the audio and visual information is based on multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Recent work with multi-modal MSHMMs has been performed successfully for the task of speech recognition. The use of temporal lip information for speaker identification has been explored previously (T.J. Wark et al., 1998); however, that work was restricted to output fusion via single-stream HMMs. We present an extension to this previous work, and show that an MSHMM is a valid structure for multi-modal speaker identification.
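In an MSHMM, the streams are combined at the state level with exponent weights on the per-stream observation likelihoods; a minimal sketch of the fused score follows (the weight value is illustrative and would in practice be tuned, e.g. to the acoustic noise level).

```python
import numpy as np

def fuse_stream_scores(logp_audio, logp_video, w_audio=0.7):
    """Multi-stream HMM state log-likelihood:
    log b(o) = w_a * log b_a(o_audio) + (1 - w_a) * log b_v(o_video)."""
    return w_audio * np.asarray(logp_audio) + (1.0 - w_audio) * np.asarray(logp_video)

# Identification: score the utterance against each speaker's MSHMM using
# these fused state likelihoods and pick the speaker with the highest score.
```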
Abstract:
A method was developed for the relative radiometric calibration of a single multitemporal Landsat TM image, of several multitemporal images overlapping one another, and of several multitemporal images covering different geographic locations. The radiometrically calibrated difference images were used for detecting rapid changes in forest stands. A nonparametric kernel method was applied for change detection. The accuracy of the change detection was estimated by inspecting the image analysis results in the field. The change classification was applied to control the quality of the continuously updated forest stand information. The aim was to ensure that all man-made changes and any forest damage were correctly updated, including the attribute and stand delineation information. The image analysis results were compared with the registered treatments and the stand information base. Stands with discrepancies between these two information sources were recommended for field inspection.
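A generic nonparametric kernel scheme for scoring change in a calibrated difference image can be sketched as follows: estimate the density of "no-change" difference values with a Gaussian kernel, and flag pixels that are improbable under it. This is a stand-in illustration, not necessarily the exact kernel method used in the study.

```python
import numpy as np

def change_scores(diff, no_change_samples, bandwidth=0.1):
    """Score each pixel of a difference image for change: low kernel
    density under the no-change model yields a high score."""
    d = diff.ravel()[:, None]
    s = np.asarray(no_change_samples).ravel()[None, :]
    dens = np.exp(-0.5 * ((d - s) / bandwidth) ** 2).mean(axis=1)
    dens /= bandwidth * np.sqrt(2.0 * np.pi)
    return (-np.log(dens + 1e-300)).reshape(diff.shape)
```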