901 results for Computational model, Synaptic connections, Tactile perception, Weber’s illusion
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception that is called a Motion Boundary Contour System. This theory clarifies why parallel streams V1 -> V2 and V1 -> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The Motion Boundary Contour System consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a Motion Oriented Contrast Filter, or MOC Filter, for preprocessing moving images; and a Cooperative-Competitive Feedback Loop, or CC Loop, for generating emergent boundary segmentations of the filtered signals. The present article uses the MOC Filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 -> V2 -> MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the Motion Boundary Contour System design.
Abstract:
Evaluation of temperature distribution in cold rooms is an important consideration in the design of food storage solutions. Two common approaches used in both industry and academia to address this question are the deployment of wireless sensors and modelling with Computational Fluid Dynamics (CFD). However, for a real-world evaluation of temperature distribution in a cold room, both approaches have limitations: large-scale deployment of wireless sensors (to obtain a high-resolution picture of the temperature distribution) is economically unfeasible, while CFD modelling alone is usually not accurate enough to give reliable results. In this paper, we propose a model-based framework that combines wireless sensing with CFD modelling to achieve a satisfactory trade-off between the number of wireless sensors deployed and the accuracy of the temperature profile in cold rooms. A case study is presented to demonstrate the usability of the framework.
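To make the sensor-count/accuracy trade-off concrete, here is a minimal greedy-placement sketch, not the paper's actual framework: a synthetic field stands in for a CFD-predicted temperature map, and sensors are added at the worst-reconstructed grid point until an assumed tolerance of 0.5 degC is met.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for a CFD-predicted temperature field on a 2D grid (degC).
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 6, 30))
field = 2.0 + 1.5 * np.sin(gx / 3) + 0.8 * np.cos(gy / 2) + 0.05 * rng.normal(size=gx.shape)
pts = np.column_stack([gx.ravel(), gy.ravel()])
temps = field.ravel()

def reconstruct(sensor_idx):
    """Nearest-sensor reconstruction of the full field from a few readings."""
    d = np.linalg.norm(pts[:, None, :] - pts[sensor_idx][None], axis=2)
    return temps[sensor_idx][d.argmin(axis=1)]

sensors = [int(np.argmax(temps))]              # seed with the hottest grid point
while True:
    err = np.abs(reconstruct(sensors) - temps)
    if err.max() < 0.5 or len(sensors) >= 40:  # 0.5 degC tolerance (assumed)
        break
    sensors.append(int(err.argmax()))          # place next sensor at the worst error
print(f"{len(sensors)} sensors reach max reconstruction error {err.max():.2f} degC")
```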
Abstract:
In food and pharmaceutical cold storage, temperature distribution is considered a key factor. Inappropriate temperature distribution during the cooling process in cold rooms degrades product quality and therefore shortens shelf life. In practice, to maintain the temperature distribution at an appropriate level, a large amount of electrical energy has to be consumed to cool down the volume of space, based on the reading of a single temperature sensor placed in each cold room. However, how energy consumption and temperature distribution change over time is neither clear nor visible, and effective tools to visualise this phenomenon are lacking. In this poster, we present an initial solution that combines a visualisation tool with a Computational Fluid Dynamics (CFD) model to enable users to explore this phenomenon.
Abstract:
Both the emission properties and the evolution of the radio jets of Active Galactic Nuclei are dependent on the magnetic (B) fields that thread them. A number of observations of AGN jets suggest that the B fields they carry have a significant helical component, at least on parsec scales. This thesis uses a model, first proposed by Laing and then developed by Papageorgiou, to explore how well the observed properties of AGN jets can be reproduced by assuming a helical B field with three parameters: pitch angle, viewing angle and degree of entanglement. This model has been applied to multifrequency Very Long Baseline Interferometry (VLBI) observations of the AGN jets of Markarian 501 and M87, making it possible to derive values for the helical pitch angle, the viewing angle and the degree of entanglement for these jets. Faraday rotation measurements are another important tool for investigating the B fields of AGN jets. A helical B field component should result in a systematic gradient in the observed Faraday rotation across the jet. Real observed radio images have finite resolution; typical beam sizes for cm-wavelength VLBI observations are often comparable to or larger than the intrinsic jet widths, raising questions about how well resolved a jet must be in the transverse direction in order to reliably detect transverse Faraday-rotation structure. This thesis presents results of Monte Carlo simulations of Faraday rotation images designed to directly investigate this question, together with a detailed investigation into the probabilities of observing spurious Faraday rotation gradients as a result of random noise and finite resolution. These simulations clearly demonstrate the possibility of detecting transverse Faraday-rotation structures even when the intrinsic jet widths are appreciably smaller than the beam width.
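As a rough illustration of the resolution question (illustrative numbers only, not the thesis's Monte Carlo setup), the sketch below applies a Gaussian beam to an intensity-weighted linear RM gradient across a jet narrower than the beam, and compares the intrinsic and observed transverse RM spans.

```python
import numpy as np

# Transverse slice across the jet, in milliarcseconds (illustrative units).
x = np.linspace(-5, 5, 1001)
jet_fwhm, beam_fwhm = 1.0, 2.0                   # beam twice the intrinsic jet width
I = np.exp(-4 * np.log(2) * (x / jet_fwhm) ** 2) # Gaussian intensity profile
rm_true = 30.0 * x / jet_fwhm                    # linear intrinsic RM gradient, rad/m^2

def beam(fwhm, dx):
    s = fwhm / (2 * np.sqrt(2 * np.log(2)))
    k = np.exp(-0.5 * (np.arange(-4 * s, 4 * s, dx) / s) ** 2)
    return k / k.sum()

k = beam(beam_fwhm, x[1] - x[0])
I_obs = np.convolve(I, k, mode="same")
# Observed RM is the beam-convolved, intensity-weighted RM.
rm_obs = np.convolve(rm_true * I, k, mode="same") / I_obs

in_jet = np.abs(x) <= jet_fwhm / 2
in_beam = np.abs(x) <= beam_fwhm / 2
print(f"intrinsic RM span across the jet FWHM: {np.ptp(rm_true[in_jet]):.1f} rad/m^2")
print(f"observed RM span across one beam FWHM: {np.ptp(rm_obs[in_beam]):.1f} rad/m^2")
# The gradient is diluted but not erased: transverse RM structure can remain
# detectable even when the jet is narrower than the beam.
```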
Abstract:
The main objective of this thesis is a critical analysis of the evolution of criminal justice systems over the past decade, with special attention to the fight against transnational terrorism. It is evident, to any observer, that such threats and the associated risk that terrorism entails have changed significantly throughout the past decade. This perception has generated responses, often radical ones, from States, as they have committed themselves to guaranteeing the safety of their populations and to easing a growing sentiment of social panic. This thesis seeks to analyse the characteristics of this new threat and the responses that States have developed in the fight against terrorism since 9/11, responses which have called into question some of the essential principles and values of their own legal systems. In this sense, freedom and security are placed in perspective through an analysis of the specific antiterrorist legal reforms of five States: Israel, Portugal, Spain, the United Kingdom and the United States of America. In light of those antiterrorist reforms, it is then asked whether one can speak of the emergence of a new system of criminal justice (and of a process of convergence between common law and civil law systems), built upon a framework of control and preventive security and significantly different from traditional models. Finally, this research project has the fundamental objective of contributing to a better understanding of the economic, social and civilisational costs of those legal reforms with regard to human rights, the rule of law and democracy in modern States.
Abstract:
Alzheimer’s disease (AD) is an incurable neurodegenerative disorder, accounting for over 60% of all cases of dementia. The primary risk factor for AD is age; however, several genetic and environmental factors are also involved. The pathological characteristics of AD include extracellular deposition of the beta-amyloid peptide (Aβ) and intraneuronal accumulation of neurofibrillary tangles (NFTs) made of aggregated paired helical filaments (PHFs) of the hyperphosphorylated tau protein, along with synaptic loss and neuronal death. There are numerous biochemical mechanisms involved in AD pathogenesis; however, the reigning hypothesis points to toxic oligomeric Aβ species as the primary causative factor in a cascade of events leading to neuronal stress and dyshomeostasis that initiate abnormal regulation of tau. The insulin and IGF-1 receptors (IR, IGF-1R) are the primary activators of PI3-K/Akt through which they regulate cell growth, development, glucose metabolism, and learning and memory. Work in our lab and others shows increased Akt activity and phosphorylation of its downstream targets in AD brain, along with insulin and insulin-like growth factor-1 signalling (IIS) dysfunction. This is supported by studies of AD models in vivo and in vitro. Our group and others hypothesise that Aβ activates Akt through IIS to initiate a negative feedback mechanism that desensitises neurons to insulin/IGF-1, and sustains activation of Akt. In this study the functions of endogenous Akt, IR, and the insulin receptor substrate (IRS-1) were examined in relation to Aβ and tau pathology in the 3xTg-AD mouse model, which contains three mutant human transgenes associated with familial AD or dementia. The 3xTg-AD mouse develops Aβ and tau pathology in a spatiotemporal manner that best recapitulates the progression of AD in human brain. Western blotting and immunofluorescent microscopy techniques were utilised in vivo and in vitro to examine the relationship between IIS, Akt, and AD pathology. I first characterised in detail AD pathology in 3xTg-AD mice, where an age-related accumulation of intraneuronal Aβ and tau was observed in the hippocampal formation, amygdala, and entorhinal cortex, and at late stages (18 months), extracellular amyloid plaques and NFTs, primarily in the subiculum and the CA1 layer of the hippocampal formation. Akt activity, detected with an antibody to phosphoSer473-Akt, was increased in 3xTg-AD mice compared to age-matched non-transgenic mice (non-Tg), in direct correlation with the accumulation of Aβ and tau in neuronal somatodendritic compartments. Akt phosphorylates tau at residue Ser214 within a highly specific consensus sequence for Akt phosphorylation, and phosphoSer214-tau strongly decreases microtubule (MT) stabilisation by preventing tau-MT binding. PhosphoSer214-tau increased concomitantly, in the same age-related and region-specific fashion. Polarisation of tau phosphorylation was observed, where PHF-1 (tauSer396/404) and phosphoSer214-tau both appeared early in 3xTg-AD mice in distinct neuronal compartments: PHF-1 in axons, and phosphoSer214-tau in neuronal soma and dendrites. At 18 months, phosphoSer214-tau strongly colocalised with NFTs positive for the PHF-1 and AT8 (tauSer202/Thr205) phosphoepitopes. IR decreased with age in 3xTg-AD brain in comparison to age-matched non-Tg mice, and this was specific to brain regions containing Aβ, tau, and hyperactive Akt.
IRS-1 was similarly decreased, and both proteins showed altered subcellular distribution. Phosphorylation of IRS-1Ser312 is a strong indicator of IIS dysfunction and insulin resistance, and was increased in 3xTg-AD mice with age and in relation to pathology. Of particular note was our observation that aberrant IIS and Akt signalling in 3xTg-AD brain related to Aβ and tau pathology on a gross anatomical level, and specifically localised to the brain regions and circuitry of the perforant path. Finally, I conducted a preliminary study of the effects of synthetic Aβ oligomers on embryonic rat hippocampal neuronal cultures to support these results and those in the literature. Taken together, these novel findings provide evidence for IIS and Akt signal transduction dysfunction as the missing link between Aβ and tau pathogenesis, and contribute to the overall understanding of the biochemical mechanisms of AD.
Abstract:
BACKGROUND: Scale-invariant neuronal avalanches have been observed in cell cultures and slices as well as anesthetized and awake brains, suggesting that the brain operates near criticality, i.e. within a narrow margin between avalanche propagation and extinction. In theory, criticality provides many desirable features for the behaving brain, optimizing computational capabilities, information transmission, sensitivity to sensory stimuli and size of memory repertoires. However, a thorough characterization of neuronal avalanches in freely-behaving (FB) animals is still missing, thus raising doubts about their relevance for brain function. METHODOLOGY/PRINCIPAL FINDINGS: To address this issue, we employed chronically implanted multielectrode arrays (MEA) to record avalanches of action potentials (spikes) from the cerebral cortex and hippocampus of 14 rats, as they spontaneously traversed the wake-sleep cycle, explored novel objects or were subjected to anesthesia (AN). We then modeled spike avalanches to evaluate the impact of sparse MEA sampling on their statistics. We found that the size distributions of spike avalanches are well fit by lognormal distributions in FB animals, and by truncated power laws in the AN group. FB data surrogation markedly decreases the tail of the distribution, i.e. spike shuffling destroys the largest avalanches. The FB data are also characterized by multiple key features compatible with criticality in the temporal domain, such as 1/f spectra and long-term correlations as measured by detrended fluctuation analysis. These signatures are very stable across waking, slow-wave sleep and rapid-eye-movement sleep, but collapse during anesthesia. Likewise, waiting time distributions obey a single scaling function during all natural behavioral states, but not during anesthesia. Results are equivalent for neuronal ensembles recorded from visual and tactile areas of the cerebral cortex, as well as the hippocampus. CONCLUSIONS/SIGNIFICANCE: Altogether, the data provide a comprehensive link between behavior and brain criticality, revealing a unique scale-invariant regime of spike avalanches across all major behaviors.
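For readers unfamiliar with detrended fluctuation analysis, the sketch below is a minimal implementation (illustrative only, not the study's analysis pipeline): for a long-range-correlated signal the log-log slope alpha approaches 1, consistent with 1/f-type spectra, while white noise gives alpha near 0.5.

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: RMS fluctuation F(n) per window size n."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated (profile) signal
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        windows = profile[: n_windows * n].reshape(n_windows, n)
        x = np.arange(n)
        coeffs = np.polyfit(x, windows.T, deg=1)    # linear detrend per window
        trends = np.outer(coeffs[0], x) + coeffs[1][:, None]
        fluctuations.append(np.sqrt(np.mean((windows - trends) ** 2)))
    return np.asarray(fluctuations)

# White noise should give alpha ~ 0.5; long-range-correlated activity (as
# reported for the freely behaving animals) gives alpha closer to 1.
rng = np.random.default_rng(0)
sig = rng.standard_normal(100_000)
scales = np.unique(np.logspace(1.2, 3.5, 20).astype(int))
F = dfa(sig, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent alpha ~ {alpha:.2f}")
```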
Abstract:
Primates must navigate complex social landscapes in their daily lives: gathering information from and about others, competing with others for food and mates, and cooperating to obtain rewards as well. Gaze-following often provides important clues as to what others see, know, or will do; using information about social attention is thus crucial for primates to be competent social actors. However, the cognitive bases of the gaze-following behaviors that primates exhibit appear to vary widely across species. The ultimate challenge of such analyses will therefore be to understand why such different cognitive mechanisms have evolved across species.
Abstract:
A model of telescoping is proposed that assumes no systematic errors in dating. Rather, the overestimation of recent occurrences of events is based on the combination of three factors: (1) Retention is greater for recent events; (2) errors in dating, though unbiased, increase linearly with the time since the dated event; and (3) intrusions often occur from events outside the period being asked about, but such intrusions do not come from events that have not yet occurred. In Experiment 1, we found that recall for colloquia fell markedly over a 2-year interval, the magnitude of errors in psychologists' dating of the colloquia increased at a rate of 0.4 days per day of delay, and the direction of the dating error was toward the middle of the interval. In Experiment 2, the model used the retention function and dating errors from the first study to predict the distribution of the actual dates of colloquia recalled as being within a 5-month period. In Experiment 3, the findings of the first study were replicated with colloquia given by, instead of for, the subjects.
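The model's three ingredients are easy to simulate. The sketch below, with an assumed exponential retention function and the 0.4 days-per-day error slope from Experiment 1, shows how unbiased dating errors, better retention of recent events, and the impossibility of intrusions from the future combine to produce net forward telescoping.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
true_age = rng.uniform(0, 730, n)                    # days since each event
recalled = rng.random(n) < np.exp(-true_age / 300)   # (1) retention favours recent events
error = rng.normal(0.0, 0.4 * true_age)              # (2) unbiased error, sd ~ 0.4 d/day
dated_age = np.maximum(true_age + error, 0.0)        # (3) no intrusions from the future

window = 150                                          # "within the last 5 months"
truly_recent = recalled & (true_age <= window)
reported_recent = recalled & (dated_age <= window)
# Ratio > 1: more events are reported as recent than truly occurred recently.
print(reported_recent.sum() / truly_recent.sum())
```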
Abstract:
Huntington's disease (HD) is a neurodegenerative disease caused by the expansion of a poly-glutamine (poly-Q) stretch in the huntingtin (Htt) protein. Gain-of-function effects of mutant Htt have been extensively investigated as the major driver of neurodegeneration in HD. However, loss-of-function effects of poly-Q mutations recently emerged as potential drivers of disease pathophysiology. Early synaptic problems in the excitatory cortical and striatal connections have been reported in HD, but the role of the Htt protein in synaptic connectivity was unknown. Therefore, we investigated the role of Htt in synaptic connectivity in vivo by conditionally silencing Htt in the developing mouse cortex. When cortical Htt function was silenced, cortical and striatal excitatory synapses formed and matured at an accelerated pace through postnatal day 21 (P21). This exuberant synaptic connectivity was lost over time in the cortex, resulting in the deterioration of synapses by 5 weeks. Synaptic decline in the cortex was accompanied by layer- and region-specific reactive gliosis without cell loss. To determine whether the disease-causing poly-Q mutation in Htt affects synapse development, we next investigated the synaptic connectivity in a full-length knock-in mouse model of HD, the zQ175 mouse. Similar to the cortical conditional knock-outs, we found excessive excitatory synapse formation and maturation in the cortices of P21 zQ175 mice, which was lost by 5 weeks. Together, our findings reveal that cortical Htt is required for the correct establishment of cortical and striatal excitatory circuits, and this function of Htt is lost when the mutant Htt is present.
Abstract:
We introduce a dynamic directional model (DDM) for studying brain effective connectivity based on intracranial electrocorticographic (ECoG) time series. The DDM consists of two parts: a set of differential equations describing neuronal activity of brain components (state equations), and observation equations linking the underlying neuronal states to observed data. When applied to functional MRI or EEG data, DDMs usually have complex formulations and thus can accommodate only a few regions, due to limitations in spatial resolution and/or temporal resolution of these imaging modalities. In contrast, we formulate our model in the context of ECoG data. The combined high temporal and spatial resolution of ECoG data results in a much simpler DDM, allowing investigation of complex connections between many regions. To identify functionally segregated sub-networks, a biologically economical form of brain network, we propose the Potts model for the DDM parameters. The neuronal states of brain components are represented by cubic spline bases and the parameters are estimated by minimizing a log-likelihood criterion that combines the state and observation equations. The Potts model is converted to the Potts penalty in the penalized regression approach to achieve sparsity in parameter estimation, for which a fast iterative algorithm is developed. The methods are applied to an auditory ECoG dataset.
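A heavily simplified sketch of the two-part DDM idea follows (hypothetical dimensions; an l1-style soft threshold stands in for the paper's Potts penalty, and raw finite differences stand in for the cubic spline representation of the states):

```python
import numpy as np

rng = np.random.default_rng(2)
T, p, dt = 20_000, 6, 0.001                 # hypothetical: T samples, p ECoG channels
A_true = np.diag(np.full(p, -10.0))         # state equation: dx/dt = A x + noise
A_true[1, 0] = A_true[2, 1] = 25.0          # two directed connections
x = np.zeros((T, p))
for t in range(1, T):
    x[t] = x[t - 1] + dt * (A_true @ x[t - 1]) + np.sqrt(dt) * rng.normal(size=p)
y = x + 0.01 * rng.normal(size=(T, p))      # observation equation: y = x + noise

# Estimate A by regressing finite differences of y on y, then soft-threshold the
# result (a crude sparsity heuristic standing in for Potts-penalized estimation).
dydt = np.diff(y, axis=0) / dt
A_hat = np.linalg.lstsq(y[:-1], dydt, rcond=None)[0].T
A_sparse = np.sign(A_hat) * np.maximum(np.abs(A_hat) - 5.0, 0.0)
print(np.round(A_sparse, 1))                # the directed connections stand out
```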
Abstract:
Our media is saturated with claims of ``facts'' made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim ``cherry-picking''? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks --- reverse-engineering vague claims, and countering questionable claims --- as computational problems. Within the QRS based framework, we take one step further, and propose a problem along with efficient algorithms for finding high-quality claims of a given form from data, i.e. raising good questions, in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for ``one-of-the-few'' claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g. NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, efficiency of our algorithms, and usefulness of their results.
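The key insight, that perturbing a claim's parameters and watching its conclusion probes the claim's quality, can be illustrated with a toy robustness check (hypothetical data and measure, not the paper's algorithms):

```python
import numpy as np

# Hypothetical example: "the mean of the last w values exceeds a threshold" is a
# parameterized query; perturbing w probes the query response surface (QRS).
rng = np.random.default_rng(3)
series = rng.normal(10, 3, 365)            # e.g. a daily statistic for one year
series[-5:] += 8                           # a short recent spike

def claim_holds(w, threshold=16.0):
    return series[-w:].mean() >= threshold

w_claimed = 5
neighbours = range(max(2, w_claimed - 10), w_claimed + 11)
support = np.mean([claim_holds(w) for w in neighbours])
print(claim_holds(w_claimed), f"robustness over nearby windows: {support:.2f}")
# True at the claimed window but fragile under perturbation: a cherry-picking signal.
```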
Abstract:
Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TF) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often only provides partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.
We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
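As a toy version of this step (the independent-site occupancy model and all parameter values below are simplifying assumptions, not the dissertation's thermodynamic model), the sketch searches binding-energy parameters with a Metropolis-style random walk to maximize the correlation between predicted occupancy and an observed coverage track:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pos, n_factors = 500, 3
affinity = rng.random((n_factors, n_pos))          # sequence-derived affinities
true_E = np.array([1.0, -0.5, 0.3])                # hidden binding energies

def predicted_occupancy(E):
    w = np.exp(-E[:, None]) * affinity             # Boltzmann weight per site
    return (w / (1.0 + w)).sum(axis=0)             # independent-site occupancy

observed = predicted_occupancy(true_E) + 0.05 * rng.normal(size=n_pos)

def score(E):                                       # correlation-based objective
    return np.corrcoef(predicted_occupancy(E), observed)[0, 1]

E, s = np.zeros(n_factors), None
s = score(E)
for _ in range(5000):                               # Metropolis-style maximization
    E_new = E + 0.1 * rng.normal(size=n_factors)
    s_new = score(E_new)
    if s_new > s or rng.random() < np.exp(20 * (s_new - s)):
        E, s = E_new, s_new
print(np.round(E, 2), round(s, 3))                  # correlation climbs toward 1
```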
We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
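The flavour of Bayes-factor scoring can be sketched as a Poisson likelihood ratio between a nucleosomal cut-rate profile and a flat background; the oscillatory-plus-quadratic profile below is a hypothetical stand-in for the digestion pattern the method models.

```python
import numpy as np

L = 147                                      # nucleosome footprint in bp
x = np.arange(L)
# Hypothetical cut-rate profile: ~10 bp oscillation under a quadratic envelope.
profile = 1.0 + 0.6 * np.cos(2 * np.pi * x / 10.3) * (1 - ((x - 73) / 73) ** 2)
profile /= profile.mean()                    # normalized expected cut rates

def log_bayes_factor(cuts, rate):
    """Poisson log-likelihood ratio: nucleosome profile vs flat background."""
    lam_nuc, lam_bg = rate * profile, rate * np.ones(L)
    return np.sum(cuts * (np.log(lam_nuc) - np.log(lam_bg)) - (lam_nuc - lam_bg))

rng = np.random.default_rng(4)
cuts_on_nuc = rng.poisson(2.0 * profile)     # data simulated from the nucleosome model
cuts_on_bg = rng.poisson(2.0, L)             # data simulated from the background
print(log_bayes_factor(cuts_on_nuc, 2.0))    # positive: favours a nucleosome here
print(log_bayes_factor(cuts_on_bg, 2.0))     # negative: favours the background
```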
Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.
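A minimal stand-in for this pipeline is sketched below: a two-state hidden Markov model with Gaussian emissions fitted by exact EM, far simpler than the dissertation's SSM. The "bound"/"unbound" states and the two observed tracks (stand-ins for DNase-seq and MNase-seq signals) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 2000
states = np.zeros(T, dtype=int)              # hidden bound/unbound configuration
for t in range(1, T):
    states[t] = states[t - 1] if rng.random() < 0.95 else 1 - states[t - 1]
means_true = np.array([[0.0, 0.0], [2.0, 1.5]])
obs = means_true[states] + rng.normal(size=(T, 2))   # two observed data tracks

A = np.full((2, 2), 0.5)                     # transition matrix
mu = np.array([[-1.0, -1.0], [1.0, 1.0]])    # emission means
pi = np.array([0.5, 0.5])
for _ in range(50):                          # EM iterations
    # E-step: scaled forward-backward.
    B = np.exp(-0.5 * ((obs[:, None, :] - mu[None]) ** 2).sum(-1))
    alpha, beta, c = np.zeros((T, 2)), np.zeros((T, 2)), np.zeros(T)
    alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                     # posterior state probabilities
    xi = alpha[:-1, :, None] * A[None] * (B[1:] * beta[1:])[:, None, :] / c[1:, None, None]
    # M-step: re-estimate transitions, emission means, initial distribution.
    A = xi.sum(0); A /= A.sum(1, keepdims=True)
    mu = (gamma[:, :, None] * obs[:, None, :]).sum(0) / gamma.sum(0)[:, None]
    pi = gamma[0]
print(np.round(mu, 2))                       # approaches the true means (up to relabelling)
```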
This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.
Abstract:
One thing is (a) to develop a system that handles some task to one's satisfaction, and also has a universally recognized mirthful side to its output. Another thing is (b) to provide an analysis of why you are getting such a byproduct. Yet another thing is (c) to develop a model that incorporates reflection about some phenomenon in humor for its own sake. This paper selects Alibi in particular for discussion, going on to describe the preliminaries of Columbus. The former, which fits in (a), is a planner with an explanatory capability. It invents pretexts. It's no legal defense, but it is relevant to evidential thinking in AI & Law. Some of the output pretexts are mirthful. Not in the sense that they are silly: they are not. A key factor seems to be the very alacrity at explaining away detail after detail of globally damning evidence. I attempt a reanalysis of Alibi in respect of (b). As to Columbus, it fits instead in (c). We introduce here the basics of this (unimplemented) model, developed to account for a sample text in parody.
Abstract:
This paper describes a project aimed at making Computational Fluid Dynamics (CFD) based fire simulation accessible to members of the fire safety engineering community. Over the past few years, the practice of CFD-based fire simulation has begun the transition from the confines of the research laboratory to the desk of the fire safety engineer. To a certain extent, this move has been driven by the demands of performance-based building codes. However, while CFD modelling has many benefits over other forms of fire simulation, it requires a great deal of expertise on the user’s part to obtain reasonable simulation results. The project described in this paper, SMARTFIRE, aims to relieve some of this dependence on expertise so that users are less concerned with the details of CFD analysis and can concentrate on results. This aim is achieved by the use of an expert system component as part of the software suite, which takes some of the expertise burden away from the user. SMARTFIRE also makes use of the latest developments in CFD technology in order to make the CFD analysis more efficient. This paper describes the design considerations of the SMARTFIRE software, emphasising its open architecture, CFD engine and knowledge-based systems.