958 results for BRAIN NETWORKS


Relevance:

20.00%

Publisher:

Abstract:

The world of mapping has changed. Previously, only professional experts were responsible for map production, but today ordinary people without any training or experience can become map-makers. The number of online mapping sites and the number of volunteer mappers have increased significantly. Technological developments, such as satellite navigation systems, Web 2.0, broadband Internet connections, and smartphones, have played a key role in enabling the rise of volunteered geographic information (VGI). As opening governmental data to the public is a topical issue in many countries, the opening of high-quality geographical data has a central role in this study. The aim of this study is to investigate the quality of spatial data produced by volunteers by comparing it with map data produced by public authorities, to follow what occurs when spatial data are opened to users, and to become acquainted with the user profile of these volunteer mappers. A central part of this study is the OpenStreetMap project (OSM), whose aim is to create a map of the entire world through volunteer effort. Anyone can become an OpenStreetMap contributor, and the data created by the volunteers are free for anyone to use, without restrictive copyrights or licence charges. In this study OpenStreetMap is investigated from two viewpoints. In the first part of the study, the aim was to investigate the quality of volunteered geographic information. A pilot project was implemented by following what occurs when high-resolution aerial imagery is released freely to the OpenStreetMap contributors. The quality of VGI was investigated by comparing the OSM datasets with the map data of the National Land Survey of Finland (NLS). The quality of OpenStreetMap data was assessed by inspecting the positional accuracy and completeness of the road datasets, as well as the differences in attributes between the studied datasets. The OSM community was also analysed, and the development of the OpenStreetMap map data was investigated by visual analysis. The aim of the second part of the study was to analyse the user profile of OpenStreetMap contributors and to investigate how the contributors act when collecting data and editing OpenStreetMap. A further aim was to investigate what motivates users to map and how they perceive the quality of volunteered geographic information. The second part of the study was implemented by conducting a web survey of the OpenStreetMap contributors. The results of the study show that the quality of OpenStreetMap data, compared with the data of the National Land Survey of Finland, can be regarded as good. OpenStreetMap differs from the map of the National Land Survey especially in its degree of uncertainty; for example, the completeness and uniformity of the map are not known. The results also reveal that opening spatial data notably increased the amount of data in the study area, and that both the positional accuracy and the completeness improved significantly. The study confirms earlier findings that only a few contributors have created the majority of the data in OpenStreetMap. The survey of OpenStreetMap users revealed that data are most often collected on foot or by bicycle using a GPS device, or by editing the map with the help of aerial imagery.
According to the responses, the users take part in the OpenStreetMap project because they want to make maps better and to produce maps that contain up-to-date information which cannot be found on any other maps. Almost all of the users make use of the maps themselves, the most popular methods being downloading the map to a navigator or a mobile device. The users regard the quality of OpenStreetMap as good, especially because of the up-to-dateness and accuracy of the map.
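One common way to carry out the kind of road-network comparison described above is a buffer-overlap analysis. The following sketch is only an illustration, not the procedure used in the thesis; the coordinates, road geometries and the 10-metre tolerance are assumptions.

```python
# Minimal sketch of a buffer-overlap completeness check between a volunteered
# road dataset (e.g. OSM) and a reference dataset (e.g. NLS): completeness is
# the share of reference road length that lies within a tolerance buffer
# around the volunteered roads. Requires shapely; all geometries and the
# tolerance are illustrative assumptions.
from shapely.geometry import LineString
from shapely.ops import unary_union

osm_roads = [LineString([(0, 0), (100, 2)]), LineString([(100, 2), (200, 0)])]
nls_roads = [LineString([(0, 1), (200, 1)]), LineString([(0, 50), (50, 80)])]

tolerance_m = 10.0
osm_buffer = unary_union([road.buffer(tolerance_m) for road in osm_roads])

reference_length = sum(road.length for road in nls_roads)
matched_length = sum(road.intersection(osm_buffer).length for road in nls_roads)

completeness = matched_length / reference_length
print(f"Completeness within {tolerance_m} m: {completeness:.1%}")
```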

Relevance:

20.00%

Publisher:

Abstract:

Fast excitatory transmission between neurons in the central nervous system is mainly mediated by L-glutamate acting on ligand-gated (ionotropic) receptors. These are further categorized according to their pharmacological properties into AMPA (2-amino-3-(5-methyl-3-oxo-1,2-oxazol-4-yl)propanoic acid), NMDA (N-methyl-D-aspartic acid) and kainate (KAR) subclasses. In the rat and the mouse hippocampus, the development of glutamatergic transmission is most dynamic during the first postnatal weeks. This coincides with the declining developmental expression of GluK1 subunit-containing KARs. However, the function of KARs during early development of the brain is poorly understood. The present study reveals novel types of tonically active KARs (hereafter referred to as tKARs) which play a central role in the functional development of the hippocampal CA3-CA1 network. The study shows for the first time how concomitant pre- and postsynaptic KAR function contributes to the development of the CA3-CA1 circuitry by regulating transmitter release and interneuron excitability. Moreover, the tKAR-dependent regulation of transmitter release provides a novel mechanism for silencing and unsilencing early synapses and thus shaping early synaptic connectivity. The role of GluK1-containing KARs was studied in area CA3 of the neonatal hippocampus. The data demonstrate that presynaptic KARs in excitatory synapses onto both pyramidal cells and interneurons are tonically activated by ambient glutamate and that they regulate glutamate release differentially, depending on the target cell type. At synapses onto pyramidal cells these tKARs inhibit glutamate release in a G-protein-dependent manner; in contrast, at synapses onto interneurons, tKARs facilitate glutamate release. At the network level these mechanisms act together, upregulating the activity of GABAergic microcircuits and promoting endogenous hippocampal network oscillations. By virtue of this, tKARs are likely to have an instrumental role in the functional development of the hippocampal circuitry. The next step was to investigate the role of GluK1-containing receptors in the regulation of interneuron excitability. The spontaneous firing of interneurons in the CA3 stratum lucidum decreases markedly during development. The shift involves tKARs that inhibit the medium-duration afterhyperpolarization (mAHP) in these neurons during the first postnatal week. This promotes burst spiking of interneurons and thereby increases GABAergic activity in the network, synergistically with the tKAR-mediated facilitation of their excitatory drive. During development the amplitude of the evoked medium afterhyperpolarizing current (ImAHP) increases dramatically due to the decoupling of tKAR activation from ImAHP modulation. These changes take place at the same time as the endogenous network oscillations disappear. These tKAR-driven mechanisms in the CA3 area regulate both GABAergic and glutamatergic transmission and thus gate the feedforward excitatory drive to area CA1. Here, presynaptic tKARs in synapses onto CA1 pyramidal cells suppress glutamate release and enable strong facilitation in response to high-frequency input. Therefore, CA1 synapses are finely tuned to high-frequency transmission, an activity pattern that is common in the neonatal CA3-CA1 circuitry both in vivo and in vitro. The tKAR-regulated release probability acts as a novel presynaptic silencing mechanism that can be unsilenced in response to Hebbian activity.
The present results shed new light on the mechanisms modulating the early network activity that paves the way for the oscillations underlying cognitive tasks such as learning and memory. Kainate receptor antagonists are already being developed for therapeutic use, for instance against pain and migraine. Given these modulatory actions, tKARs also represent an attractive candidate for the therapeutic treatment of developmentally related complications such as learning disabilities.

Relevance:

20.00%

Publisher:

Abstract:

We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will have an arbitrarily large fraction of nodes in order to enable approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of nodes. This construction may also prove useful in other communication settings on the random geometric graph.
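For readers unfamiliar with the PAC terminology, the guarantee can be restated in the usual (ε, δ) form; the notation below is assumed for illustration and is not taken from the paper.

```latex
% Hedged restatement of the PAC guarantee in standard (epsilon, delta) form;
% the symbols are assumptions, not notation quoted from the paper.
% p_j       : true fraction of the n nodes whose observation falls in bin j
% \hat{p}_j : histogram value computed by the network
\[
  \Pr\Bigl[\, \max_{j} \bigl|\hat{p}_j - p_j\bigr| \le \epsilon \,\Bigr] \ge 1 - \delta ,
\]
% with one such histogram sample delivered every \Theta(1) time units
% (refresh rate) after an initial delay of \Theta(\sqrt{n}) time units.
```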

Relevance:

20.00%

Publisher:

Abstract:

We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
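The link between maximal edge packings and 2-approximate vertex covers can be illustrated with a small centralised sketch; this shows only the underlying primal-dual idea, not the distributed O(Δ + log* W) algorithm of the paper.

```python
# Centralised sketch of the primal-dual fact behind the abstract: greedily
# grow an edge packing y (the sum of y_e at each vertex stays <= the vertex
# weight); the vertices whose weight is fully used ("saturated") form a vertex
# cover of weight at most twice the optimum. The graph below is made up.

def edge_packing_vertex_cover(weights, edges):
    residual = dict(weights)               # remaining capacity of each vertex
    packing = {}
    for u, v in edges:
        y = min(residual[u], residual[v])  # largest feasible increase for this edge
        packing[(u, v)] = y
        residual[u] -= y
        residual[v] -= y
    # Every edge now has at least one saturated endpoint, so this is a cover.
    cover = {v for v, r in residual.items() if r == 0}
    return packing, cover

weights = {"a": 2, "b": 1, "c": 3, "d": 1}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
packing, cover = edge_packing_vertex_cover(weights, edges)
print(cover)   # a valid vertex cover; total weight <= 2 * optimum
```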

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
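As a point of reference, a max-min linear program of the kind mentioned above is usually written in the following standard form; the notation is assumed here and not quoted from the thesis.

```latex
% Standard form of a max-min linear program (notation assumed): maximise the
% worst-off objective \omega subject to packing constraints, with all
% coefficients of A and C nonnegative.
\[
  \begin{aligned}
    \text{maximise}\quad   & \omega \\
    \text{subject to}\quad & A\,x \le \mathbf{1}
        && \text{(e.g. the energy budgets of the sensor nodes)} \\
                           & C\,x \ge \omega\,\mathbf{1}
        && \text{(every objective, e.g. every sink's data rate, is at least } \omega\text{)} \\
                           & x \ge 0 .
  \end{aligned}
\]
```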

Relevance:

20.00%

Publisher:

Abstract:

The use of energy harvesting (EH) nodes as cooperative relays is a promising and emerging solution in wireless systems such as wireless sensor networks. It harnesses the spatial diversity of a multi-relay network and addresses the vexing problem of a relay's batteries getting drained in forwarding information to the destination. We consider a cooperative system in which EH nodes volunteer to serve as amplify-and-forward relays whenever they have sufficient energy for transmission. For a general class of stationary and ergodic EH processes, we introduce the notion of energy constrained and energy unconstrained relays and analytically characterize the symbol error rate of the system. Further insight is gained by an asymptotic analysis that considers the cases where the signal-to-noise ratio or the number of relays is large. Our analysis quantifies how the energy usage at an EH relay and, consequently, its availability for relaying depend not only on the relay's energy harvesting process, but also on its transmit power setting and the other relays in the system. The optimal static transmit power setting at the EH relays is also determined. Altogether, our results demonstrate how a system that uses EH relays differs in significant ways from one that uses conventional cooperative relays.
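The distinction between energy constrained and energy unconstrained relays can be illustrated with a small Monte Carlo sketch; the Bernoulli harvesting process and all parameter values below are illustrative stand-ins for the general stationary and ergodic EH processes considered in the paper.

```python
# Illustrative sketch (not the paper's analysis): a relay harvests energy as a
# Bernoulli process and volunteers to forward only in slots where its battery
# holds at least the transmit energy E_tx. The long-run availability shows
# whether the relay is energy constrained (harvest rate below usage rate) or
# energy unconstrained.
import random

def relay_availability(p_harvest, e_harvest, e_tx, slots=100_000, seed=1):
    rng = random.Random(seed)
    battery, available = 0.0, 0
    for _ in range(slots):
        if rng.random() < p_harvest:      # energy arrival in this slot
            battery += e_harvest
        if battery >= e_tx:               # enough energy: relay volunteers
            available += 1
            battery -= e_tx
    return available / slots

print(relay_availability(p_harvest=0.3, e_harvest=1.0, e_tx=1.0))  # constrained, ~0.3
print(relay_availability(p_harvest=0.9, e_harvest=2.0, e_tx=1.0))  # unconstrained, ~1.0
```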

Relevance:

20.00%

Publisher:

Abstract:

In rapid parallel magnetic resonance imaging, the problem of image reconstruction is challenging. Here, a novel image reconstruction technique for data acquired along any general trajectory in a neural network framework, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. Here, the transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine learning tools to learn the transformation, in order to obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain and does not require explicit coil sensitivity estimation. It is also independent of the sampling trajectory used and can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory. It is found that higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques are presented. CRAUNN has been found to perform on par with the state-of-the-art techniques. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
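The observation that CRAUNN builds on, namely that the aliasing caused by a fixed undersampling pattern does not depend on the image content, can be checked with a short numpy sketch; the image, mask and acceleration factor below are assumptions chosen only for illustration.

```python
# Numpy sketch of the observation CRAUNN exploits: for a fixed undersampling
# mask, the aliasing transformation is the same linear fold-over regardless of
# image content, so it can be learnt from densely sampled low-frequency data
# and then undone on full acquisitions. Shown here for uniform 4x Cartesian
# undersampling, where the aliased image is the average of 4 shifted replicas.
import numpy as np

rng = np.random.default_rng(0)
n, R = 128, 4
image = rng.standard_normal((n, n))          # stand-in for a coil image

kspace = np.fft.fft2(image)
mask = (np.arange(n) % R == 0)               # keep every R-th phase-encode line
aliased = np.fft.ifft2(kspace * mask[:, None]).real

# The same fold-over written directly in image space:
folded = np.mean([np.roll(image, k * n // R, axis=0) for k in range(R)], axis=0)

print(np.allclose(aliased, folded))          # True: aliasing depends only on the mask
```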

Relevance:

20.00%

Publisher:

Abstract:

The three-dimensional structure of a protein is formed and maintained by the noncovalent interactions among the amino acid residues of the polypeptide chain. These interactions can be represented collectively in the form of a network. So far, such networks have been investigated by considering connections based on distances between the amino acid residues. Here we present a method of constructing the structure network based on interaction energies among the amino acid residues in the protein. We have investigated the properties of such protein energy-based networks (PENs) and have shown correlations to protein structural features, such as the clusters of residues involved in stability and the formation of secondary and super-secondary structural units. Further, we demonstrate that the analysis of PENs in terms of parameters such as hubs and shortest paths can provide a variety of biologically important information, such as the residues crucial for stabilizing the folded units and the paths of communication between distal residues in the protein. Finally, the energy regimes for different levels of stabilization in the protein structure have clearly emerged from the PEN analysis.
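A minimal sketch of such an energy-based network is given below; the residues, interaction energies and the choice of 1/|energy| as a path length are illustrative assumptions, not the conventions of the paper.

```python
# Minimal sketch of a protein energy network (PEN): nodes are residues, edges
# are weighted by pairwise interaction energy. The residues and energies are
# made up, and using 1/|energy| as a path length is one plausible convention,
# not necessarily the paper's. Requires networkx.
import networkx as nx

interaction_energies = {          # kcal/mol, illustrative values only
    ("ASP10", "ARG45"): -5.2,
    ("ARG45", "GLU80"): -3.1,
    ("GLU80", "LYS12"): -2.4,
    ("ASP10", "LYS12"): -0.6,
    ("ARG45", "LYS12"): -1.8,
}

pen = nx.Graph()
for (res_a, res_b), energy in interaction_energies.items():
    strength = abs(energy)
    pen.add_edge(res_a, res_b, strength=strength, length=1.0 / strength)

# Hubs: residues with the largest total interaction strength.
hubs = sorted(pen.nodes, key=lambda r: pen.degree(r, weight="strength"), reverse=True)
print("hubs:", hubs[:2])

# Communication path between two distal residues, favouring strong interactions.
print(nx.shortest_path(pen, "ASP10", "GLU80", weight="length"))
```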

Relevance:

20.00%

Publisher:

Abstract:

Traumatic brain injury (TBI) affects people of all ages and is a cause of long-term disability. In recent years, the epidemiological patterns of TBI have been changing. TBI is a heterogeneous disorder with different forms of presentation and a highly individual outcome regarding functioning and health-related quality of life (HRQoL). The meaning of disability differs from person to person based on the individual's personality, value system, past experience, and the purpose he or she sees in life. An understanding of all these viewpoints is needed in comprehensive rehabilitation. This study examines the epidemiology of TBI in Finland as well as functioning and HRQoL after TBI, and compares subjective and objective assessments of outcome. The frame of reference is the International Classification of Functioning, Disability and Health (ICF). The subjects of Study I represent the population of Finnish TBI patients who experienced their first TBI between 1991 and 2005. The 55 Finnish subjects of Studies II and IV participated in the first wave of the international Quality of Life after Brain Injury (QOLIBRI) validation study. The 795 subjects from six language areas in Study III formed the second wave of the QOLIBRI validation study. The average annual incidence of hospitalised TBI patients in Finland during the years 1991-2005 was 101 per 100,000 among patients who had TBI as the primary diagnosis and did not have a previous TBI in their medical history. Males (59.2%) were at considerably higher risk of sustaining a TBI than females. The most common external cause of injury in all age groups was falls. The number of TBI patients ≥ 70 years of age increased by 59.4%, while the number of inhabitants older than 70 years in the population of Finland increased by 30.3% during the same time period. The functioning of a sample of 55 persons with TBI was assessed by extracting information from the patients' medical documents using the ICF checklist. The most common problems were found in the ICF components of Body Functions (b) and Activities and Participation (d). HRQoL was assessed with the QOLIBRI, which showed the highest levels of satisfaction on the Emotions, Physical Problems and Daily Life and Autonomy scales. The highest scores were obtained by the youngest participants, by participants living independently without the help of other people, and by people who were working. The relationship between functional outcome and HRQoL was not straightforward. The procedure of linking the QOLIBRI and the GOSE to the ICF showed that these two outcome measures cover the relevant domains of TBI patients' functioning. The QOLIBRI provides the patient's subjective view, while the GOSE summarises the objective elements of functioning. Our study indicates that there are certain domains of functioning that are traditionally not sufficiently documented but are important for the HRQoL of persons with TBI. This was especially the case in the domains of interpersonal relationships, social and leisure activities, the self, and the environment. Rehabilitation aims to optimize functioning and to minimize the experience of disability among people with health conditions, and it needs to be based on a comprehensive understanding of human functioning. As an integrative model, the ICF may serve as a frame of reference in achieving such an understanding.

Relevance:

20.00%

Publisher:

Abstract:

Brain size and architecture exhibit great evolutionary and ontogenetic variation. Yet population variation (within a single species) in brain size and architecture, and brain plasticity induced by ecologically relevant biotic factors, have been largely overlooked. Here, I address the following questions: (i) do locally adapted populations differ in brain size and architecture, (ii) can the biotic environment induce brain plasticity, and (iii) do locally adapted populations differ in levels of brain plasticity? In the first two chapters I report large variation in both absolute and relative brain size, as well as in the relative sizes of brain parts, among divergent nine-spined stickleback (Pungitius pungitius) populations. Some traits show habitat-dependent divergence, implying that natural selection is responsible for the observed patterns. Namely, marine sticklebacks have relatively larger bulbi olfactorii (chemosensory centre) and telencephala (involved in learning) than pond sticklebacks. Further, I demonstrate the importance of common-garden studies in drawing firm evolutionary conclusions. In the following three chapters I show how the social environment and perceived predation risk shape brain development. In common frog (Rana temporaria) tadpoles, I demonstrate that under the highest per capita predation risk, tadpoles develop smaller brains than in less risky situations, while high tadpole density results in an enlarged tectum opticum (visual brain centre). Visual contact with conspecifics induces enlarged tecta optica in nine-spined sticklebacks, whereas when only olfactory cues from conspecifics are available, the bulbus olfactorius becomes enlarged. Perceived predation risk results in smaller hypothalami (complex function) in sticklebacks. Further, group living has a negative effect on relative brain size in the competition-adapted pond sticklebacks, but not in the predation-adapted marine sticklebacks. Perceived predation risk induces enlargement of the bulbus olfactorius in pond sticklebacks, but not in marine sticklebacks, which have larger bulbi olfactorii than pond fish regardless of predation. In sum, my studies demonstrate how applying a microevolutionary approach can help us to understand the enormous variation observed in the brains of wild animals, a point of view that I highlight in the closing review chapter of my thesis.

Relevance:

20.00%

Publisher:

Abstract:

We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised so as to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve accuracy at least as good as that of the best competing classifiers, while using significantly fewer computational resources.
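For context, the two scoring criteria being contrasted are conventionally defined as follows; the notation is assumed for illustration, and the exact form of the f̂CLL approximation is not reproduced here.

```latex
% Standard scoring criteria for a Bayesian network classifier B on data
% D = {(c^{(t)}, x^{(t)})}_{t=1..N}; notation assumed, not quoted from the paper.
\[
  \mathrm{LL}(B \mid D) = \sum_{t=1}^{N} \log P_B\bigl(c^{(t)}, x^{(t)}\bigr),
  \qquad
  \mathrm{CLL}(B \mid D) = \sum_{t=1}^{N} \log
    \frac{P_B\bigl(c^{(t)}, x^{(t)}\bigr)}{\sum_{c'} P_B\bigl(c', x^{(t)}\bigr)} .
\]
% LL decomposes over the network structure, whereas CLL does not; the
% factorized criterion is a decomposable approximation of CLL, which is why it
% can be optimised with the same time and space complexity as LL.
```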

Relevance:

20.00%

Publisher:

Abstract:

Standard-cell design methodology is an important technique in semicustom VLSI design. It lends itself to the easy automation of the crucial layout part, and many algorithms have been proposed in the recent literature for the efficient placement of standard cells. While many studies have identified the Kernighan-Lin bipartitioning method as being superior to most others, it must be admitted that the behaviour of the method is erratic and that it is strongly dependent on the initial partition. This paper proposes a novel algorithm for overcoming some of the deficiencies of the Kernighan-Lin method. The approach is based on an analogy between the placement problem and neural networks, and, by using some of the organizing principles of these nets, an attempt is made to improve the behaviour of the bipartitioning scheme. The results have been encouraging, and the approach seems promising for other NP-complete problems in circuit layout.
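For reference, the quantities that a Kernighan-Lin pass is built around, the per-cell D-values and the pairwise swap gain, can be sketched as follows; the net-list and costs are made up, and this is not the neural-network-inspired variant proposed in the paper.

```python
# Sketch of the bookkeeping at the heart of Kernighan-Lin bipartitioning (the
# baseline the paper seeks to improve): D(v) = external - internal cut cost,
# and the gain of swapping cells a and b across the cut.
def d_value(cell, part_of, costs):
    external = internal = 0
    for (u, v), c in costs.items():
        if cell in (u, v):
            other = v if u == cell else u
            if part_of[other] == part_of[cell]:
                internal += c
            else:
                external += c
    return external - internal

def swap_gain(a, b, part_of, costs):
    # gain = D(a) + D(b) - 2*c(a, b); a positive gain means the cut improves
    return (d_value(a, part_of, costs) + d_value(b, part_of, costs)
            - 2 * costs.get((a, b), costs.get((b, a), 0)))

costs = {("c1", "c2"): 2, ("c1", "c3"): 1, ("c2", "c4"): 3, ("c3", "c4"): 1}
part_of = {"c1": 0, "c2": 0, "c3": 1, "c4": 1}
print(swap_gain("c2", "c3", part_of, costs))   # 1: swapping c2 and c3 reduces the cut by 1
```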

Relevance:

20.00%

Publisher:

Abstract:

Many next-generation distributed applications, such as grid computing, require a single source to communicate with a group of destinations. Traditionally, such applications are implemented using multicast communication. A typical multicast session requires creating a shortest-path tree to a fixed number of destinations. The fundamental issue in multicasting data to a fixed set of destinations is receiver blocking: if one of the destinations is not reachable, the entire multicast request (say, a grid task request) may fail. Manycasting is a generalized variation of multicasting that provides the freedom to choose the best subset of destinations from a larger set of candidate destinations. We propose an impairment-aware algorithm to provide a manycasting service in the optical layer, specifically in optical burst switching (OBS). We compare the performance of our proposed manycasting algorithm with traditional multicasting and with multicasting with overprovisioning. Our results show a significant reduction in blocking probability when optical-layer manycasting is implemented.
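The core idea of manycasting, serving any k of the m candidate destinations rather than a fixed set, can be sketched as follows; the per-destination scores are an illustrative stand-in for the impairment and blocking estimates used by the proposed algorithm, which is not reproduced here.

```python
# Sketch of the manycasting idea: the source only needs to reach k of the m
# candidate destinations, so it ranks candidates by a quality estimate and
# drops the worst ones. The scores (lower is better, e.g. estimated impairment
# or blocking probability on the path) are illustrative only.
def choose_manycast_destinations(candidates, k):
    """candidates: dict mapping destination -> estimated path cost."""
    if k > len(candidates):
        raise ValueError("fewer candidates than required destinations")
    ranked = sorted(candidates, key=candidates.get)
    return ranked[:k]

candidates = {"d1": 0.02, "d2": 0.15, "d3": 0.01, "d4": 0.40, "d5": 0.08}
print(choose_manycast_destinations(candidates, k=3))   # ['d3', 'd1', 'd5']
```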

Relevance:

20.00%

Publisher:

Abstract:

The blood-brain barrier (BBB) is a unique barrier that strictly regulates the entry of endogenous substrates and xenobiotics into the brain. This is due to its tight junctions and the array of transporters and metabolic enzymes that it expresses. Determining brain concentrations in vivo is difficult, laborious and expensive, which is why there is interest in developing predictive tools of brain distribution. Predicting brain concentrations is important even in early drug development, to ensure the efficacy of drugs targeted at the central nervous system (CNS) and the safety of non-CNS drugs. The literature review covers the most common current in vitro, in vivo and in silico methods of studying transport into the brain, concentrating on transporter effects. The consequences of efflux mediated by p-glycoprotein, the most widely characterized transporter expressed at the BBB, are also discussed. The aim of the experimental study was to build a pharmacokinetic (PK) model to describe p-glycoprotein substrate drug concentrations in the brain using commonly measured in vivo parameters of brain distribution. The possibility of replacing in vivo parameter values with their in vitro counterparts was also studied. All data for the study were taken from the literature. A simple two-compartment PK model was built using the Stella™ software. Brain concentrations of morphine, loperamide and quinidine were simulated and compared with published studies. The correlation of in vitro measured efflux ratios (ER) between different studies was evaluated, as was the correlation between in vitro and in vivo measured ER. A Stella™ model was also constructed to simulate an in vitro transcellular monolayer experiment, in order to study the sensitivity of the measured ER to changes in passive permeability and Michaelis-Menten kinetic parameter values. Interspecies differences between rats and mice were investigated with regard to brain permeability and drug binding in brain tissue. The PK brain model captured the concentration-time profiles of all three compounds in both brain and plasma and performed fairly well for morphine, but it underestimated brain concentrations for quinidine and overestimated them for loperamide. Because the ratio of concentrations in brain and blood depends on the ER, the variable values reported for this parameter, and their inaccuracy, could be one explanation for the failure of the predictions. Validation of the model with more compounds is needed to draw further conclusions. In vitro ER showed variable correlation between studies, indicating variability due to experimental factors such as test concentration, but overall the differences were small. Good correlation between in vitro and in vivo ER at low concentrations supports the possibility of using in vitro ER in the PK model. The in vitro simulation illustrated that, in the simulation setting, efflux is significant only at low passive permeability, which highlights the fact that the cell model used to measure ER must have low enough paracellular permeability to correctly mimic the in vivo situation.
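A minimal two-compartment sketch of the kind of model described above is given below; the rate structure, the use of a single efflux-ratio multiplier, and all parameter values are illustrative assumptions and not the thesis's Stella™ model.

```python
# Minimal sketch of a two-compartment plasma-brain PK model of the kind the
# abstract describes: passive exchange across the BBB, with the brain-to-plasma
# rate scaled up by the efflux ratio (ER) to mimic p-glycoprotein efflux, and
# first-order elimination from plasma. All rate constants and the dose are
# illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k_el = 0.3        # 1/h, elimination from plasma
k_in = 0.1        # 1/h, passive plasma -> brain transfer
ER = 5.0          # efflux ratio; brain -> plasma rate = ER * k_in

def pk_model(t, y):
    a_plasma, a_brain = y                    # drug amounts in each compartment
    dplasma = -k_el * a_plasma - k_in * a_plasma + ER * k_in * a_brain
    dbrain = k_in * a_plasma - ER * k_in * a_brain
    return [dplasma, dbrain]

sol = solve_ivp(pk_model, t_span=(0, 24), y0=[100.0, 0.0],   # 100-unit IV bolus
                t_eval=np.linspace(0, 24, 49))
kp = sol.y[1][-1] / sol.y[0][-1]             # brain-to-plasma amount ratio at 24 h
print(f"brain/plasma ratio at 24 h: {kp:.2f} (suppressed by efflux, ER = {ER})")
```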