17 results for non-conscious cognitive processing (NCCP) time.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli that would otherwise go undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of fight-or-flight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike visual detection, visual orientation discrimination cannot be crossmodally enhanced by sound in the absence of a functional striate cortex. This finding demonstrates, on the one hand, that the striate cortex plays a causal role in the crossmodal enhancement of visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes that improve simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
Numerous studies show that temporal intervals are represented through a spatial code extending from left to right, in which short intervals are represented to the left of long ones. Moreover, this spatial arrangement of time can be influenced by manipulating spatial attention. The present thesis contributes to the current debate on the relationship between the spatial representation of time and spatial attention by using a technique that modulates spatial attention, namely Prismatic Adaptation (PA). The first part is devoted to the mechanisms underlying this relationship. We showed that shifting spatial attention with PA towards one side of space produces a distortion of the representation of temporal intervals consistent with the side of the attentional shift. This occurs with both visual and auditory stimuli, even though the auditory modality is not directly involved in the visuo-motor PA procedure. This result suggests that the spatial code used to represent time is a central mechanism that is influenced at high levels of spatial cognition. The thesis continues with an investigation of the cortical areas mediating the space-time interaction, using neuropsychological, neurophysiological and neuroimaging methods. In particular, we showed that areas in the right hemisphere are crucial for time processing, whereas areas in the left hemisphere are crucial for the PA procedure and for PA to affect temporal intervals. Finally, the thesis addresses disorders of the spatial representation of time. The results indicate that a spatial-attention deficit following right-hemisphere damage causes a deficit in the spatial representation of time, which negatively affects patients' everyday life. Particularly interesting are the results obtained with PA: a PA treatment that is effective in reducing the spatial-attention deficit also reduces the deficit in the spatial representation of time, improving patients' quality of life.
Abstract:
STUDY OBJECTIVE: Cyclic Alternating Pattern (CAP) is a fluctuation of the arousal level during NREM sleep and consists of the alternation between two phases: phase A (divided into three subtypes A1, A2, and A3) and phase B. A1 is thought to be generated by the frontal cortex and is characterized by the presence of K complexes or delta bursts; additionally, CAP A1 seems to play a role in the involvement of sleep slow-wave activity in cognitive processing. Our hypothesis was that a high overall CAP rate would have a negative influence on cognitive performance due to excessive fluctuation of the arousal level during NREM sleep. However, we also predicted that CAP A1 would be positively correlated with cognitive functions, especially those related to frontal lobe functioning. For this reason, the objective of our study was to correlate objective sleep parameters with cognitive behavioral measures in normal healthy adults. METHODS: 8 subjects (4 males, 4 females; mean age 27.75 years, range 23-34) were recruited for this study. Two nocturnal polysomnographies (nights 2 and 3 = N2 and N3) were carried out after a night of adaptation. A series of neuropsychological tests were performed by the subjects in the morning and afternoon of the second day (D2am; D2pm) and in the morning of the third day (D3am). Raw scores from the neuropsychological tests were used as dependent variables in the statistical analysis of the results. RESULTS: We computed a series of partial correlations between sleep microstructure parameters (CAP, A1, A2 and A3 rate) and a number of indices of cognitive functioning. CAP rate was positively correlated with visuospatial working memory (Corsi block test), the Trail Making Test Part A (planning and motor sequencing) and the retention of words from the Hopkins Verbal Learning Test (HVLT). Conversely, CAP rate was negatively correlated with visuospatial fluency (Ruff Figural Fluency Test). CAP A1 rate was correlated with many of the tests of neuropsychological functioning, such as verbal fluency (as measured by the COWAT), working memory (as measured by the Digit Span - Backward test), and both delayed recall and retention of the words from the HVLT. The same parameters were found to be negatively correlated with the CAP A2 subtype. CAP A3 rate was negatively correlated with the Trail Making Test Parts A and B. DISCUSSION: To our knowledge this is the first study indicating a role of CAP A1 and A2 in the behavioral cognitive performance of healthy adults. The results suggest that a high rate of CAP A1 might be related to an improvement, whereas a high rate of CAP A2 to a decline, in cognitive functions. Further studies need to be done to better determine the role of the overall CAP rate and CAP A3 on cognitive behavioral performance.
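Since the analysis above rests on partial correlations between sleep microstructure parameters and cognitive scores, the following is a minimal sketch of how a partial correlation (controlling for a covariate such as age) can be computed from regression residuals. The variable names and simulated numbers are purely illustrative assumptions, not the study's data.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Partial correlation of x and y controlling for the columns of `covars`,
    computed by correlating the residuals of two least-squares regressions."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical example: CAP A1 rate vs. a memory score, controlling for age.
rng = np.random.default_rng(0)
age = rng.uniform(23, 34, 8)
cap_a1 = rng.normal(30, 5, 8) + 0.5 * age
memory = rng.normal(50, 5, 8) + 0.8 * cap_a1
print(partial_corr(cap_a1, memory, age[:, None]))
```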
Abstract:
Emotional Intelligence (EI) has gained increasingly widespread popularity amongst both lay people and scientists, in a wide range of contexts and across several research areas. In spite of rigorous inquiry into its applications in educational, social, health and clinical settings, substantial disagreement exists regarding the definition of EI, with respect to both terminology and operationalizations. Nevertheless, there is consensus on a conceptual distinction between Trait EI, or trait emotional self-efficacy (Petrides & Furnham, 2001), and Ability EI, or cognitive-emotional ability (Mayer & Salovey, 1997). Trait EI is measured via self-report questionnaires, whereas Ability EI is assessed via maximum-performance tests. Moreover, EI is the broadest of the emotional constructs, and it subsumes various other constructs, such as Emotional Awareness (Lane & Schwartz, 1987). To date, EI research has focused primarily on adults, with fewer studies conducted with child samples. The aim of the present study is to investigate the role of different models of EI in childhood and early adolescence (N = 670; 353 females; Mage = 10.25 years; SD = 1.57). A further goal is to evaluate the relationship of each construct with personality, non-verbal cognitive intelligence, school performance, peer relationships, and affective disorders (anxiety and depression). Results show significant correlations between Trait EI and Emotional Awareness, whereas Trait and Ability EI appear to be independent constructs. We also found significant positive associations of age with Ability EI and Emotional Awareness (although with the addition of verbal productivity), while gender differences emerged in favour of females in all EI-related measures. The results provide evidence that Trait EI is partially determined by all of the Big Five personality dimensions, but is independent of cognitive ability. Finally, the present study highlights the role of EI in social interactions and school performance and, especially, a negative relationship between Trait EI and psychopathology.
Abstract:
Introduction. Manual handling of loads has recently been proposed as a possible determinant of retinal detachment. To support this hypothesis, the incidence rates of surgically treated idiopathic rhegmatogenous retinal detachment (RRD) were analysed among residents of Tuscany engaged in manual work, non-manual work and housework. Methods. Tuscany's hospital discharge records (SDO) also contain coded information on the general category of employment. Data were used from all patients resident in Tuscany with a discharge record issued by any Italian hospital in the period 1997-2009, with a principal diagnosis of RRD (ICD-9: 361.0-361.07 and 361.9) and with DRG 36 ("retinal procedures"). After excluding subjects who did not meet the eligibility criteria, the study population was restricted to subjects aged 25-59 years, who were subsequently classified as manual workers, non-manual workers or housewives. Results. A total of 1,946 cases were identified. Among men, manual workers had an age-standardised incidence rate 1.8 times higher than non-manual workers (17.4 [95% CI, 16.1-18.7] vs. 9.8 [95% CI, 8.8-10.8]). Among women, age-standardised incidence rates were 1.9 times higher in manual workers (11.1 [95% CI, 9.8-12.3]) and 1.7 times higher in housewives (9.5 [95% CI, 8.3-10.8]) than in non-manual workers (5.7 [95% CI, 4.8-6.6]). Conclusions. The study shows that manual workers are more affected by idiopathic RRD than non-manual workers. These results support the hypothesis that manual handling of loads, which is unlikely to be a task of non-manual occupations, may play a causal role in the genesis of the disease.
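As a companion to the standardised rates reported above, this is a minimal sketch of direct age standardisation with a normal-approximation confidence interval. The strata, counts and weights are hypothetical placeholders, not the Tuscan data.

```python
import numpy as np

# Hypothetical age strata: observed cases, person-years at risk, and
# standard-population weights (summing to 1). All values are illustrative.
cases   = np.array([12, 35, 60, 48])              # RRD cases per stratum
pyears  = np.array([1.2, 1.5, 1.4, 0.9]) * 1e5    # person-years per stratum
weights = np.array([0.30, 0.28, 0.24, 0.18])      # standard population weights

# Direct standardisation: weighted sum of stratum-specific rates (per 100,000).
strat_rates = cases / pyears * 1e5
std_rate = np.sum(weights * strat_rates)

# Normal-approximation 95% CI, treating stratum counts as Poisson.
var = np.sum(weights**2 * cases / pyears**2) * 1e10
lo, hi = std_rate - 1.96 * np.sqrt(var), std_rate + 1.96 * np.sqrt(var)
print(f"standardised rate: {std_rate:.1f} per 100,000 (95% CI {lo:.1f}-{hi:.1f})")
```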
Abstract:
Beamforming entails joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique which allows nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures. This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
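To make the phase-alignment problem concrete, here is a small simulation of the classic one-bit-feedback random-perturbation scheme for distributed beamforming. This is a well-known baseline, not one of the new algorithms proposed in the thesis, and all numbers (network size, perturbation range, channel model) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                               # number of sensor nodes
channel = np.exp(1j * rng.uniform(0, 2*np.pi, N))    # unknown channel phases
phases = np.zeros(N)                                 # transmit phase offsets
best_rss = 0.0

def rss(ph):
    # received signal strength when all nodes transmit a common symbol
    return np.abs(np.sum(channel * np.exp(1j * ph)))

for _ in range(3000):
    trial = phases + rng.uniform(-0.1, 0.1, N)   # small random perturbations
    if rss(trial) > best_rss:                    # 1-bit feedback: keep if better
        phases, best_rss = trial, rss(trial)

print(f"coherent gain achieved: {best_rss:.1f} of the ideal {N}")
```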
Abstract:
Cesarean Delivery (CD) rates are rising in many parts of the world. In order to define strategies to reduce them, it is important to explore the role of clinical and organizational factors. The objective of this thesis is to describe contemporary CD practice and to study clinical and organizational variables as determinants of CD in all women who gave birth between 2005 and June 2010 in the Emilia-Romagna region (Italy). All hospital discharge abstracts of women who delivered in the region between 2005 and mid-2010 were selected and linked with birth certificates. In addition to descriptive statistics, multilevel Poisson regression models and a classification tree were used in order to study the role of clinical and organizational variables (teaching or non-teaching hospital, birth volumes, time and day of delivery). A substantial inter-hospital variability in CD rate was found, and this was only partially explained by the considered variables. The most important risk factors for CD were: previous CD (RR 4.95; 95% CI: 4.85-5.05), cord prolapse (RR 3.51; 95% CI: 2.96-4.16), and malposition/malpresentation (RR 2.72; 95% CI: 2.66-2.77). Delivery between 7 pm and 7 am and during non-working days protected against CD in all subgroups, including those with a small number of elective CDs, while delivery at a teaching hospital and birth volumes were not statistically significant risk factors. The classification tree shows that previous CD and malposition/malpresentation are the most important variables discriminating between high and low risk of CD. These results indicate that other factors not considered here might explain CD variability, and they do not provide clear evidence that small hospitals perform poorly in terms of CD rate. Some strategies to reduce CD could be found by focusing on the differences in delivery practice between day and night and between working-day and non-working-day deliveries.
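For illustration, a risk-ratio analysis of this kind can be sketched as a single-level modified Poisson regression (Poisson family with robust standard errors). This is a simplified stand-in for the multilevel Poisson models actually used in the thesis, and the data frame, column names and coefficients below are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical delivery-level data with 0/1 indicators; names are illustrative.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "previous_cd": rng.integers(0, 2, n),
    "malpresentation": rng.integers(0, 2, n),
    "night_delivery": rng.integers(0, 2, n),
})
logit = -1.5 + 1.6*df.previous_cd + 1.0*df.malpresentation - 0.3*df.night_delivery
df["cd"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Modified Poisson regression (Poisson family + robust SEs) to estimate risk
# ratios for a binary outcome; the multilevel structure is omitted here.
fit = smf.glm("cd ~ previous_cd + malpresentation + night_delivery",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))        # risk ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals
```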
Abstract:
Emotional intelligence (EI) is a construct of considerable contemporary interest for the scientific psychology community. Of particular interest for the present thesis is the conundrum related to the representation of this construct when conceptualized as a trait (i.e., trait EI), which is in turn reflected in the current lack of agreement upon its constituent elements, posing significant challenges to research and clinical progress. Trait EI is defined as an umbrella personality-like construct reflecting emotion-related dispositions and self-perceptions. The Trait Emotional Intelligence Questionnaire (TEIQue) was chosen as the main measure, given its strong theoretical and psychometric basis, including superior predictive validity when compared to other trait EI measures. Studies 1 and 2 aimed at validating the Italian 153-item forms of the TEIQue for adolescents and adults. Analyses were conducted to investigate the structure of the questionnaire, its internal consistencies, and gender differences at the facet, factor, and global levels of both versions. Despite some low reliabilities, results from Studies 1 and 2 confirm the four-factor structure of the TEIQue. Study 3 investigated the utility of trait EI in a sample of adolescents with respect to internalizing conditions (i.e., symptoms of anxiety and depression) and academic performance (grades in mathematics and Italian language/literacy). Beyond trait EI, the concurrent effects of demographic variables, higher-order personality dimensions and non-verbal cognitive ability were controlled for. Study 4a and Study 4b addressed analogous research questions through a meta-analysis and new data on adults; in the latter case, the effects of demographics, emotion-regulation strategies, and the Big Five were controlled for. Overall, these studies showed the incremental utility of the TEIQue in different domains beyond relevant predictors. Analyses performed at the level of the four TEIQue factors consistently indicated that its predictive effects were mainly due to the factor Well-Being. Findings are discussed with reference to potential implications for theory and practice.
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation by means of data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals, neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the activity of these disparate areas is coordinated, so as to group together the characteristics of the same object (binding problem) and to keep segregated the properties belonging to different objects that are simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential theories is the so-called "assembly coding" postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network devoted to the representation of objects as a collection of sensory-motor features is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with the cortex deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
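The enhancement and inverse-effectiveness principles described above can be illustrated with a single sigmoidal unit. This is only a toy demonstration of those principles under assumed parameter values, not the multi-area SC model developed in the thesis.

```python
import numpy as np

def sigmoid(x, theta=6.0, slope=1.0):
    # static nonlinearity of a model SC-like neuron (parameters are illustrative)
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

def response(vis, aud, w_v=1.0, w_a=1.0):
    # firing rate to a visual, auditory, or combined input
    return sigmoid(w_v * vis + w_a * aud)

for vis, aud in [(2.0, 2.0), (7.0, 7.0)]:        # weak vs. strong stimulus pair
    uni = max(response(vis, 0.0), response(0.0, aud))  # best unisensory response
    multi = response(vis, aud)                          # multisensory response
    enh = 100.0 * (multi - uni) / uni                   # enhancement (%)
    print(f"inputs {vis}/{aud}: enhancement {enh:.0f}%")
# Weak inputs yield a much larger percentage enhancement than strong inputs,
# i.e. the inverse-effectiveness principle.
```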
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which a perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems of profile reconstruction and preservation of the temporal properties, and subsequently of the synchronization of different profiles in networks adopting an event-triggered communication system, are examined. These networks are characterized by the fact that a common knowledge of the global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the solution proposed for the basic master-slave case, based on phase-locked loops, is improved to cope with the other configurations.
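As a rough illustration of the phase-locked-loop idea behind master-slave synchronization, the following is a minimal discrete-time software PLL in which the slave corrects its rate from occasional phase-error observations. The loop gains, the update interval and the 50 Hz profile are illustrative assumptions and do not reproduce the thesis's design.

```python
import numpy as np

f_nom = 2 * np.pi * 50.0        # nominal profile frequency (rad/s)
dt = 0.001                      # simulation step (s)
kp, ki = 10.0, 5.0              # proportional / integral loop gains (illustrative)
master_phase, slave_phase = 0.0, 0.2   # slave starts 0.2 rad out of phase
slave_freq, integ = f_nom, 0.0

for k in range(2000):
    master_phase += f_nom * dt
    slave_phase += slave_freq * dt
    if k % 10 == 0:                               # event-triggered correction
        err = np.angle(np.exp(1j * (master_phase - slave_phase)))  # wrapped error
        integ += ki * err
        slave_freq = f_nom + kp * err + integ     # PI adjustment of the slave rate

print(f"residual phase error: {np.angle(np.exp(1j*(master_phase - slave_phase))):+.4f} rad")
```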
Abstract:
This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we take the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduce the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focus on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations are obtained by using fractional integrals and derivatives of distributed order. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), which is a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we remark several times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we are able to provide a characterization that does not depend on the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process and, in particular, that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation can be interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t). We develop several applications and derive the exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
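For reference, the standard textbook forms of the main objects mentioned above are summarized below; these are the classical definitions, not the distributed-order generalizations developed in the thesis.

```latex
% Fractional Brownian motion B_H(t), 0 < H < 1: Gaussian, self-similar,
% with stationary increments and covariance
\mathbb{E}\!\left[B_H(t)\,B_H(s)\right] \;=\; \tfrac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right).

% Fractional Gaussian noise X_k = B_H(k+1) - B_H(k): autocovariance
\gamma(k) \;=\; \tfrac{1}{2}\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right)
\;\sim\; H(2H-1)\,|k|^{2H-2}, \qquad k \to \infty,
% so that for H > 1/2 the autocovariances are not summable (long-range dependence).

% Time-fractional diffusion equation with Caputo derivative of order 0 < \beta < 1:
\frac{\partial^{\beta} u(x,t)}{\partial t^{\beta}} \;=\; D\,\frac{\partial^{2} u(x,t)}{\partial x^{2}}.
```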
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environment. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and augmenting system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them depending on the conditions. Under which conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear amongst mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
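To give a concrete flavour of defeasibility, here is a toy sketch in which a conclusion supported by a rule is withdrawn when new facts make a superior conflicting rule applicable. It is only a simplified illustration of the general idea, not the temporal modal defeasible logic formalized in the dissertation; all rule names and literals are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    premises: set       # literals that must all hold among the facts
    conclusion: str     # concluded literal (prefix "~" for negation)

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def holds(lit, facts, rules, superiority):
    """A literal holds if some applicable rule supports it and every applicable
    conflicting rule is overridden by a superior supporting rule."""
    support = [r for r in rules if r.conclusion == lit and r.premises <= facts]
    attack = [r for r in rules if r.conclusion == negate(lit) and r.premises <= facts]
    return bool(support) and all(
        any((s.name, a.name) in superiority for s in support) for a in attack
    )

rules = [
    Rule("r1", {"contract_signed"}, "obligation_to_pay"),
    Rule("r2", {"contract_signed", "force_majeure"}, "~obligation_to_pay"),
]
superiority = {("r2", "r1")}   # the exception overrides the general rule

print(holds("obligation_to_pay", {"contract_signed"}, rules, superiority))                  # True
print(holds("obligation_to_pay", {"contract_signed", "force_majeure"}, rules, superiority)) # False
```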
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high-performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software package was implemented. Thanks to this acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
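To give an idea of why triangular matrix inversion parallelizes well, here is a minimal NumPy sketch of the standard 2x2-block recursion; it is only a CPU illustration of the underlying identity, not the dual-GPU kernel or the compressed storage scheme presented in the thesis.

```python
import numpy as np

def invert_lower_triangular(L):
    """Blocked inversion of a lower-triangular matrix via the identity
    [[A,0],[C,B]]^{-1} = [[A^{-1},0],[-B^{-1} C A^{-1}, B^{-1}]]; the two
    diagonal blocks are independent sub-problems, which is what makes the
    kernel amenable to parallel (e.g. multi-GPU) execution."""
    n = L.shape[0]
    if n == 1:
        return np.array([[1.0 / L[0, 0]]])
    k = n // 2
    A, C, B = L[:k, :k], L[k:, :k], L[k:, k:]
    Ai = invert_lower_triangular(A)       # independent diagonal block
    Bi = invert_lower_triangular(B)       # independent diagonal block
    out = np.zeros_like(L, dtype=float)
    out[:k, :k] = Ai
    out[k:, k:] = Bi
    out[k:, :k] = -Bi @ C @ Ai            # off-diagonal block couples the halves
    return out

L = np.tril(np.random.rand(8, 8)) + 4 * np.eye(8)   # well-conditioned test matrix
print(np.allclose(invert_lower_triangular(L) @ L, np.eye(8)))   # True
```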
Abstract:
The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond diagnostic purposes. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface-ECG-derived rhythms at two different time scales: the discrete-event time scale, typical of beat-related features (Objective I), and the "continuous" time scale of separated sources in the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out, with the goal of assessing psychophysiological workload (PPW) in response to working-memory-engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. A strong direct correlation between the spectral concentration (SC) of the atrial wavefront and the temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC a shorter observation time is required to collect the spectral distribution, from which the fibrillatory rate is estimated. This could be time- and cost-effective in clinical decision-making. The results held for reduced lead sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal processing approach was maintained, with the goal of contributing to real-world applicability. An algorithm for automatic assessment of ambulatory ECG quality and an automatic ECG delineation algorithm were also designed and validated.
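As background for the HRV analysis mentioned in Objective I, here is a minimal frequency-domain HRV sketch: uniform resampling of RR intervals followed by a Welch spectrum and the standard LF/HF band powers. The simulated RR series and all parameter choices are illustrative assumptions, and this does not reproduce the joint time-frequency or non-linear indices proposed in the thesis.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical RR-interval series (s); in practice these come from R-peak detection.
rng = np.random.default_rng(0)
rr = 0.8 + 0.02 * rng.standard_normal(300)
t_beats = np.cumsum(rr)

# Resample the irregularly spaced RR series onto a uniform 4 Hz grid.
fs = 4.0
t_uniform = np.arange(t_beats[0], t_beats[-1], 1 / fs)
rr_uniform = np.interp(t_uniform, t_beats, rr)

# Welch PSD and the standard LF (0.04-0.15 Hz) / HF (0.15-0.40 Hz) band powers.
f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
print(f"LF/HF ratio: {lf / hf:.2f}")
```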
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still some difficulties in achieving an accurate quantification of perfusion parameters, due, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. Firstly, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; in any case, the slope method is always applicable in the clinical context. Then the variability related to TAC processing in the application of the slope method was analyzed. Results compared with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of a Standardized Perfusion Index (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI approach and the map approach provide related values of BFa, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, and the analysis and definition of a simplified technique to compute the SPV parameter, represent an improvement in the field of liver perfusion CT analysis.
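For reference, the maximum slope method mentioned above is conventionally written as the ratio between the peak gradient of the tissue TAC and the peak of the arterial input TAC; this is the textbook single-input form, whereas the dual-input variant used for the liver restricts the slope to the pre-portal phase.

```latex
% Maximum-slope estimate of arterial blood flow from the tissue TAC C_t(t)
% and the aortic input TAC C_a(t):
BF_a \;=\; \frac{\max_t \dfrac{dC_t(t)}{dt}}{\max_t C_a(t)}
```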