880 results for Hydrologic connectivity


Relevance:

10.00%

Publisher:

Abstract:

One of the major concerns in an Intelligent Transportation System (ITS) scenario, such as that found on a long-distance train service, is the provision of efficient communication services that satisfy users' expectations and fulfill even highly demanding application requirements, such as safety-oriented services. In an ITS scenario, it is common to have a significant number of onboard devices that form a cluster of nodes (a mobile network) demanding connectivity to outside networks. This demand has to be satisfied without service disruption; consequently, the mobility of the mobile network has to be managed. Due to the nature of mobile networks, efficient and lightweight protocols are desired in the ITS context to ensure adequate service performance. However, security is also a key factor in this scenario. Since mobility management is essential for providing communications, the protocol that manages this mobility has to be protected. Furthermore, there are safety-oriented services in this scenario, so user application data should also be protected. Nevertheless, providing security is expensive in terms of efficiency. Based on these considerations, we have developed a solution for managing network mobility in ITS scenarios: the NeMHIP protocol. This approach provides secure management of network mobility in an efficient manner. In this article, we present the protocol and the strategy developed to keep its security and efficiency at satisfactory levels. We also present the analytical models developed to quantitatively analyze the efficiency of the protocol. More specifically, we have developed models for assessing it in terms of signaling cost, which demonstrate that NeMHIP generates up to 73.47% less signaling than other relevant approaches. Therefore, the results obtained demonstrate that NeMHIP is the most efficient and secure solution for providing communications in mobile network scenarios such as an ITS context.
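To make the kind of comparison reported above concrete, the sketch below computes a percentage signaling reduction from per-handover message counts and sizes. All numbers, the "baseline" and "lean" schemes, and the helper function are hypothetical illustrations; this is not NeMHIP's analytical model.

```python
# Illustrative signaling-cost comparison (hypothetical numbers, not NeMHIP's model).

def signaling_cost(handovers, messages_per_handover, avg_message_bytes):
    """Total signaling volume in bytes generated over a sequence of handovers."""
    return handovers * messages_per_handover * avg_message_bytes

handovers = 50  # assumed: access-network changes during one long-distance train trip

# Hypothetical per-handover signaling for a heavier baseline scheme and a leaner scheme.
baseline = signaling_cost(handovers, messages_per_handover=8, avg_message_bytes=120)
lean = signaling_cost(handovers, messages_per_handover=3, avg_message_bytes=90)

reduction = 100.0 * (baseline - lean) / baseline
print(f"baseline: {baseline} B, lean scheme: {lean} B, reduction: {reduction:.2f}%")
```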

Relevance:

10.00%

Publisher:

Abstract:

The EC (entorhinal cortex) is fundamental for cognitive and mnesic functions. Thus, damage to this area appears as a key element in the progression of AD (Alzheimer's disease), resulting in memory deficits arising from neuronal and synaptic alterations as well as glial malfunction. In this paper, we have performed an in-depth analysis of astroglial morphology in the EC by measuring the surface and volume of GFAP (glial fibrillary acidic protein) profiles in a triple transgenic mouse model of AD (3xTg-AD). We found a significant reduction in both the surface and the volume of GFAP-labelled profiles in 3xTg-AD animals from very early ages (1 month) when compared with non-Tg (non-transgenic) controls (48 and 54% reduction, respectively), which was sustained for up to 12 months (33 and 45% reduction, respectively). The appearance of Aβ (amyloid β-peptide) depositions at 12 months of age did not trigger astroglial hypertrophy, nor did it result in a close association of astrocytes with senile plaques. Our results suggest that the progressive cognitive deterioration in AD can be associated with an early reduction of astrocytic arborization and shrinkage of the astroglial domain, which may affect synaptic connectivity within the EC and between the EC and other brain regions. In addition, the EC seems to be particularly vulnerable to AD pathology because of the absence of evident astrogliosis in response to Aβ accumulation. Thus, targeting astroglial atrophy may represent a therapeutic strategy that might slow down the progression of AD.

Relevance:

10.00%

Publisher:

Abstract:

The Alliance for Coastal Technologies (ACT) convened a workshop on Evaluating Approaches and Technologies for Monitoring Organic Contaminants in the Aquatic Environment in Ann Arbor, MI on July 21-23, 2006. The primary objectives of this workshop were to: 1) identify the priority management information needs relative to organic contaminant loading; 2) explore the most appropriate approaches to estimating mass loading; and 3) evaluate the current status of sensor technology. To meet these objectives, a mixture of leading research scientists, resource managers, and industry representatives was brought together for a focused two-day workshop. The workshop featured four plenary talks followed by breakout sessions in which arranged groups of participants were charged to respond to a series of focused discussion questions. At present, there are major concerns about the inadequacies of approaches and technologies for quantifying mass emissions and detecting organic contaminants in order to protect municipal water supplies and receiving waters. Managers use estimates of land-based contaminant loadings to rivers, lakes, and oceans to assess relative risk among various contaminant sources, determine compliance with regulatory standards, and define progress in source reduction. However, accurately quantifying contaminant loading remains a major challenge. Loading occurs over a range of hydrologic conditions, requiring measurement technologies that can accommodate a broad range of ambient conditions. In addition, in situ chemical sensors that provide a means for acquiring continuous concentration measurements are still under development, particularly for organic contaminants that typically occur at low concentrations. Better approaches and strategies for estimating contaminant loading, including evaluations of both sampling design and sensor technologies, need to be identified. The following general recommendations were made in an effort to advance future organic contaminant monitoring:

1. Improve the understanding of material balance in aquatic systems and of the relationship between potential surrogate measures (e.g., DOC, chlorophyll, particle size distribution) and target constituents.

2. Develop continuous real-time sensors to be used by managers as screening measures and triggers for more intensive monitoring.

3. Pursue surrogate measures and indicators of organic pollutant contamination, such as CDOM, turbidity, or non-equilibrium partitioning.

4. Develop continuous field-deployable sensors for PCBs, PAHs, pyrethroids, and emerging contaminants of concern, and develop strategies that couple sampling approaches with tools that incorporate sensor synergy (i.e., measure appropriate surrogates along with the dissolved organics to allow full mass emission estimation).

[PDF contains 20 pages]

Relevance:

10.00%

Publisher:

Abstract:

On a hillslope, overland flow first generates sheet erosion and then, with increasing flux, rill erosion. Sheet (interrill) erosion and rill erosion are commonly observed to coexist on hillslopes, and both the intensities and the incidences of rill and interrill erosion differ greatly. In this paper, a two-dimensional rill and interrill erosion model is developed to simulate the details of the soil erosion process on hillslopes. The hillslope is treated as a combination of a two-dimensional interrill area and a one-dimensional rill, and the rill process, the interrill process, and their interaction are each modeled. Thus, the process of sheet flow replenishing the rill flow with water and sediment can be simulated in detail, which may yield more realistic results for rill erosion. The model was verified with two data sets and showed good agreement. Using this model, the characteristics of soil erosion on hillslopes are investigated. The results indicate that (1) the proposed model is capable of describing the complex process of interrill and rill erosion on hillslopes; (2) the spatial distribution of erosion is simulated on a simplified two-dimensional hillslope, which shows that the distribution of interrill erosion may contribute to rill development; and (3) the quantity of soil eroded first increases rapidly with slope gradient and then declines, so that a critical slope gradient exists, at about 15-20 degrees for the accumulated erosion amount.
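The sketch below illustrates, under heavy simplifying assumptions, the coupling the abstract describes: interrill (sheet) flow delivering water and sediment laterally into a one-dimensional rill, with deposition whenever the load exceeds a simple transport capacity. The coefficients, erosion laws, and geometry are all hypothetical and are not the paper's governing equations.

```python
# Minimal steady-state sketch of coupled interrill (sheet) and rill erosion along a 1-D rill.
# Hypothetical coefficients; illustrative only, not the paper's model.
import numpy as np

n_cells = 100          # rill discretized into 1 m cells
dx = 1.0               # cell length (m)
slope = np.tan(np.radians(10.0))   # hillslope gradient
rain = 2.0e-5          # rainfall excess rate (m/s)
interrill_width = 0.5  # interrill area draining to each side of the rill (m)

# Interrill (sheet) erosion delivers water and sediment laterally into the rill.
k_interrill = 0.8                                        # hypothetical interrill erodibility
lateral_q = 2 * interrill_width * rain                   # water inflow per unit rill length (m^2/s)
lateral_sed = k_interrill * rain * slope * 2 * interrill_width  # sediment inflow (kg/s per m)

q = np.zeros(n_cells)      # rill discharge per unit width (m^2/s)
load = np.zeros(n_cells)   # sediment load carried by the rill (kg/s)
k_rill, k_tc = 50.0, 2.0e4 # hypothetical rill erodibility and transport-capacity coefficients

for i in range(n_cells):
    q_up = q[i - 1] if i > 0 else 0.0
    load_up = load[i - 1] if i > 0 else 0.0
    q[i] = q_up + lateral_q * dx                  # accumulate flow downslope
    capacity = k_tc * q[i] * slope                # simple transport capacity (kg/s)
    supply = load_up + lateral_sed * dx           # upstream load plus sheet-flow replenishment
    detach = k_rill * q[i] * slope * dx           # potential rill detachment (kg/s)
    load[i] = min(capacity, supply + detach)      # deposition if capacity is exceeded

print(f"Sediment yield at rill outlet: {load[-1]:.3f} kg/s")
```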

Relevance:

10.00%

Publisher:

Abstract:

In 2008, the Center for Watershed Protection (CWP) surveyed seventy-three coastal plain communities to determine their current practices and their needs for watershed planning and low impact development (LID). The survey found that communities varied in watershed planning effectiveness and needed better stormwater management, land use planning, and watershed management communication. While technical capacity is improving, stormwater programs are understaffed, and innovative site designs may be prohibited under current regulations. In addition, unique site constraints (e.g., sandy soils, low relief, tidal influence, vulnerability to coastal hazards) and a lack of local examples are common obstacles to LID along the coast (Vandiver and Hernandez, 2009). LID stormwater practices are an innovative approach to stormwater management that provides an alternative to structural stormwater practices, reduces runoff, and maintains or restores hydrology. The term LID typically refers to the systematic application of small, distributed practices that replicate pre-development hydrologic functions. Examples of LID practices include downspout disconnection, rain gardens, bioretention areas, dry wells, and vegetated filter strips. In coastal communities, LID practices have not yet become widely accepted or applied. The geographic focus for the project is the Atlantic and Gulf coastal plain province, which includes nearly 250,000 square miles in portions of fifteen states from New Jersey to Texas (Figure 1). This project builds on CWP’s “Coastal Plain Watershed Network: Adapting, Testing, and Transferring Effective Tools to Protect Coastal Plain Watersheds,” which developed a coastal land cover model, conducted a coastal plain community needs survey (results are available at http://www.cwp.org/#survey), created a coastal watershed network, and adapted the 8 Tools for Watershed Protection Framework for coastal areas. (PDF contains 4 pages)

Relevance:

10.00%

Publisher:

Abstract:

Congress established a legal imperative to restore the quality of our surface waters when it enacted the Clean Water Act in 1972. The act requires that existing uses of coastal waters, such as swimming and shellfishing, be protected and restored. Enforcement of this mandate is frequently measured in terms of the ability to swim and harvest shellfish in tidal creeks, rivers, sounds, bays, and ocean beaches. Public-health agencies carry out comprehensive water-quality sampling programs to check for bacterial contamination in coastal areas where swimming and shellfishing occur. Advisories that restrict swimming and shellfishing are issued when sampling indicates that bacteria concentrations exceed federal health standards. These actions place the affected coastal waters on the U.S. Environmental Protection Agency's (EPA) list of impaired waters, which triggers a federal mandate to prepare a Total Maximum Daily Load (TMDL) analysis that should result in management plans to restore degraded waters to their designated uses. When coastal waters become polluted, most people assume that improper sewage treatment is to blame. However, water-quality studies conducted over the past several decades have shown that improper sewage treatment is a relatively minor source of this impairment. In states like North Carolina, it is estimated that about 80 percent of the pollution flowing into coastal waters is carried there by contaminated surface runoff. Studies show this runoff is the result of significant hydrologic modifications of the natural coastal landscape. Virtually no surface runoff occurred when the coastal landscape was in its natural state in places such as North Carolina; most rainfall soaked into the ground, evaporated, or was used by vegetation. Surface runoff is largely an artificial condition created when land uses harden and drain the landscape surfaces. Roofs, parking lots, roads, fields, and even yards all dramatically change the natural hydrology of these coastal lands and generate huge amounts of runoff that flow over the land's surface into nearby waterways. (PDF contains 3 pages)

Relevance:

10.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions, respectively—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to resting-state fMRI data from a number of healthy subjects.
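A minimal sketch of the kind of topology-recovery experiment described above, using scikit-learn's GraphicalLasso on synthetic node signals; the five-node chain, sample size, regularization, and threshold are assumptions made only for illustration, not the dissertation's circuit model or its modified algorithm.

```python
# Sketch: recover a sparse conditional-dependence graph from simulated node signals
# using the graphical lasso. Synthetic example; not the dissertation's circuit model.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Assumed ground-truth precision (inverse covariance) matrix for a 5-node chain 1-2-3-4-5.
n = 5
precision = np.eye(n) * 2.0
for i in range(n - 1):
    precision[i, i + 1] = precision[i + 1, i] = -0.8

cov = np.linalg.inv(precision)
samples = rng.multivariate_normal(np.zeros(n), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(samples)
est = model.precision_

# Edges are where the estimated off-diagonal precision entries are (numerically) nonzero.
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if abs(est[i, j]) > 1e-2]
print("Recovered edges:", edges)   # should approximately recover (0,1), (1,2), (2,3), (3,4)
```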

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
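The sketch below simulates a single-source, single-link fluid model in which the link reacts to the buffer's departure rate rather than the raw source rate, which is the modeling issue highlighted above. The utility function, gains, and pricing rule are illustrative assumptions, not the dissertation's model or its stability condition.

```python
# Sketch of a single-source, single-link fluid model with explicit queue (buffer)
# dynamics. All parameters are illustrative assumptions, not the dissertation's model.

C = 10.0       # link capacity (packets per ms)
w = 5.0        # source utility weight, U(x) = w * log(x)
kappa = 1.0    # source adaptation gain
dt = 0.01      # time step (ms)
steps = 50000  # simulate 500 ms

x, q = 1.0, 0.0   # source sending rate and link backlog

for _ in range(steps):
    departure = C if q > 0 else min(x, C)           # link drains at capacity when backlogged
    q = max(q + (x - departure) * dt, 0.0)          # buffer (queue) dynamics
    price = q / C                                   # queueing-delay price fed back to the source
    x = max(x + kappa * (w / x - price) * dt, 1e-3) # primal rate update toward U'(x) = price

print(f"rate -> {x:.2f} (capacity {C}), queueing delay -> {q / C:.2f} ms")
```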

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
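As a toy illustration of the "over-delivery" idea (relaxing nodal balance equalities to inequalities), the sketch below solves a hypothetical 3-bus DC-OPF-style problem with cvxpy. It is a simplified stand-in for intuition only; the dissertation's result concerns an SDP relaxation of the full AC OPF problem, and all network data and costs here are invented.

```python
# Toy 3-bus DC-OPF-style problem in which nodal balance is relaxed to an inequality
# ("over-delivery"): each bus may receive at least its demand. Hypothetical data.
import cvxpy as cp
import numpy as np

# Edges (from-bus, to-bus, susceptance) and nodal demands (MW).
edges = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]
demand = np.array([0.0, 60.0, 40.0])
gen_cost = np.array([1.0, 2.0, 5.0])   # linear generation costs ($/MWh)
gen_cap = np.array([100.0, 30.0, 30.0])

theta = cp.Variable(3)                 # bus voltage angles
g = cp.Variable(3, nonneg=True)        # generation at each bus

# Net DC power flow out of each bus over its incident lines.
outflow = []
for i in range(3):
    expr = 0
    for a, b, bij in edges:
        if a == i:
            expr += bij * (theta[a] - theta[b])
        elif b == i:
            expr += bij * (theta[b] - theta[a])
    outflow.append(expr)

# "Over-delivery": balance relaxed from equality to >= demand at every bus.
constraints = [g[i] - outflow[i] >= demand[i] for i in range(3)]
constraints += [g <= gen_cap, theta[0] == 0]

prob = cp.Problem(cp.Minimize(gen_cost @ g), constraints)
prob.solve()
print("cost:", round(prob.value, 1), "generation:", np.round(g.value, 1))
```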

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

10.00%

Publisher:

Abstract:

The paper attributes the decline in information provision in Nigeria to poor library development, which in turn can be attributed to poor funding. The consequence is that current journals and books are not available in Nigerian fisheries libraries. Information, which can be regarded as the first factor of production on which other factors like land, labour, and capital depend, can only be provided at the right time when libraries are better funded. For now, if there is to be an increase in fish production, poverty alleviation, and food security in Nigeria, our fisheries scientists and policy makers will have to rely on international sources of information, taking advantage of Internet connectivity. Some such sources discussed in this paper are ASFA, AGORA, FAO DOAJ, FISHBASE, IAMSLIC, INASP, INASP-PERI, INASP-AJOL, ODINAFRICA, SIFAR, WAS, and ABASFR. However, reliance on international sources must not come at the total neglect of harnessing Nigerian fisheries information. For the Nigerian Fisheries and Aquatic Sciences Database being developed by NIFFR to attain an international status like those enumerated above, scientists and publishers are requested to take the trouble of depositing copies of their publications with NIFFR for inclusion in the database.

Relevance:

10.00%

Publisher:

Abstract:

Cells exhibit a diverse repertoire of dynamic behaviors. These dynamic functions are implemented by circuits of interacting biomolecules. Although these regulatory networks function deterministically by executing specific programs in response to extracellular signals, molecular interactions are inherently governed by stochastic fluctuations. This molecular noise can manifest as cell-to-cell phenotypic heterogeneity in a well-mixed environment. Single-cell variability may seem like a design flaw, but the coexistence of diverse phenotypes in an isogenic population of cells can also serve a biological function by increasing the probability that individual cells survive an abrupt change in environmental conditions. Decades of extensive molecular and biochemical characterization have revealed the connectivity and mechanisms that constitute regulatory networks. We are now confronted with the challenge of integrating this information to link the structure of these circuits to systems-level properties such as cellular decision making. To investigate cellular decision making, we used the well-studied galactose gene-regulatory network in Saccharomyces cerevisiae. We analyzed the mechanism and dynamics underlying the coexistence of two stable states, on and off, for pathway activity. We demonstrate that this bimodality in pathway activity originates from two positive feedback loops that trigger bistability in the network. By measuring the dynamics of single cells in a mixed-sugar environment, we observe that the bimodality in gene expression is a transient phenomenon. Our experiments indicate that early pathway activation in a cohort of cells prior to galactose metabolism can accelerate galactose consumption and provide a transient increase in growth rate. Together these results provide important insights into strategies implemented by cells that may have been evolutionarily advantageous in competitive environments.
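A minimal sketch of how positive feedback produces the bistable on/off behavior described above: a single species activates its own production through a Hill function, and different initial conditions settle into different stable states. The one-variable model and its parameters are illustrative assumptions, not the actual two-loop GAL network model.

```python
# Toy bistable positive-feedback model: X activates its own production (Hill kinetics)
# and is degraded at a constant rate. Hypothetical parameters; illustrative only.
import numpy as np
from scipy.integrate import odeint

alpha, beta, K, n, gamma = 0.05, 1.0, 0.5, 4, 1.0

def dxdt(x, t):
    # basal + positive-feedback production, minus first-order degradation
    return alpha + beta * x**n / (K**n + x**n) - gamma * x

t = np.linspace(0, 50, 500)
low_start = odeint(dxdt, 0.05, t)[-1, 0]   # starts below threshold -> settles in the "off" state
high_start = odeint(dxdt, 1.5, t)[-1, 0]   # starts above threshold -> settles in the "on" state
print(f"off state ~ {low_start:.2f}, on state ~ {high_start:.2f}")
```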

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, we explore the density of microglia in the cerebral and cerebellar cortices of individuals with autism to investigate the hypothesis that neuroinflammation is involved in autism. We report an increase in microglial density in two disparate cortical regions, the frontal insular cortex and the visual cortex, in individuals with autism (Tetreault et al., 2012). Our results imply that there is a global increase in microglial density and neuroinflammation in the cerebral cortex of individuals with autism.

We expanded our cerebellar study to additional neurodevelopmental disorders that exhibit behaviors similar to autism spectrum disorder and have known cerebellar pathology. We subsequently found a more than threefold increase in microglial density specific to the molecular layer of the cerebellum, the region of the Purkinje and parallel fiber synapses, in individuals with autism and Rett syndrome. Moreover, we report that not only is there an increase in microglial density in the molecular layer, but the microglial cell bodies are also significantly larger in perimeter and area in individuals with autism spectrum disorder and Rett syndrome than in controls, which implies that the microglia are activated. Additionally, an individual with Angelman syndrome and the sibling of an individual with autism had microglial densities similar to those of the individuals with autism and Rett syndrome. By contrast, an individual with Joubert syndrome, a developmental hypoplasia of the cerebellar vermis, had a normal density of microglia, indicating that pathology in the cerebellum does not necessarily result in increased microglial densities. We also found a significant decrease in Purkinje cells specific to the cerebellar vermis in individuals with autism.

These findings indicate the importance of investigating the Purkinje synapses in autism and suggest that the relationship between the microglia and these synapses is of great utility for understanding the pathology of autism. Together, these data provide further evidence for the neuroinflammation hypothesis of autism and a basis for future investigation of neuroinflammation in autism. In particular, investigating how microglia modify synaptic connectivity in the cerebellum may provide key insights for developing therapeutics for autism spectrum disorder.

Relevance:

10.00%

Publisher:

Abstract:

Humans are particularly adept at modifying their behavior in accordance with changing environmental demands. Through various mechanisms of cognitive control, individuals are able to tailor actions to fit complex short- and long-term goals. The research described in this thesis uses functional magnetic resonance imaging to characterize the neural correlates of cognitive control at two levels of complexity: response inhibition and self-control in intertemporal choice. First, we examined changes in neural response associated with increased experience and skill in response inhibition; successful response inhibition was associated with decreased neural response over time in the right ventrolateral prefrontal cortex, a region widely implicated in cognitive control, providing evidence for increased neural efficiency with learned automaticity. We also examined a more abstract form of cognitive control using intertemporal choice. In two experiments, we identified putative neural substrates for individual differences in temporal discounting, the tendency to prefer immediate over delayed rewards. Using dynamic causal models, we characterized the neural circuit between the ventromedial prefrontal cortex, an area involved in valuation, and the dorsolateral prefrontal cortex, a region implicated in self-control in intertemporal and dietary choice, and found that connectivity from the dorsolateral prefrontal cortex to the ventromedial prefrontal cortex increases at the time of choice, particularly when delayed rewards are chosen. Moreover, estimates of the strength of this connectivity predicted out-of-sample individual rates of temporal discounting, suggesting a neurocomputational mechanism for variation in the ability to delay gratification. Next, we interrogated the hypothesis that individual differences in temporal discounting are in part explained by the ability to imagine future reward outcomes. Using a novel paradigm, we imaged neural responses during the imagining of primary rewards and identified negative correlations between activity in regions associated with the processing of real and imagined rewards (the lateral orbitofrontal cortex and the ventromedial prefrontal cortex, respectively) and the individual temporal discounting parameters estimated in the previous experiment. These data suggest that individuals who are better able to represent reward outcomes neurally are less susceptible to temporal discounting. Together, these findings provide further insight into the role of the prefrontal cortex in implementing cognitive control and propose neurobiological substrates for individual variation.
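For context on the quantity being estimated, the sketch below uses the standard hyperbolic discounting model, V = A / (1 + kD), to simulate intertemporal choices and recover the discount parameter k by a simple grid search over the choice likelihood. The model form, amounts, delays, and noise level are assumptions for illustration, not the thesis's fitting procedure.

```python
# Sketch of the standard hyperbolic discounting model and a toy parameter fit.
# Parameter values and the grid-search fit are illustrative assumptions.
import numpy as np

def subjective_value(amount, delay, k):
    return amount / (1.0 + k * delay)

# Simulated subject with true k = 0.03 (per day) choosing between $20 now and
# $50 after various delays, with logistic (softmax) choice noise.
rng = np.random.default_rng(1)
true_k, temp = 0.03, 0.5
delays = np.arange(5, 200, 5)
p_delayed = 1 / (1 + np.exp(-(subjective_value(50, delays, true_k) - 20) / temp))
chose_delayed = rng.random(delays.size) < p_delayed

# Recover k by maximizing the choice log-likelihood over a grid.
ks = np.linspace(0.001, 0.2, 400)
loglik = []
for k in ks:
    p = 1 / (1 + np.exp(-(subjective_value(50, delays, k) - 20) / temp))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    loglik.append(np.sum(np.where(chose_delayed, np.log(p), np.log(1 - p))))
print(f"true k = {true_k}, estimated k = {ks[int(np.argmax(loglik))]:.3f}")
```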

Relevance:

10.00%

Publisher:

Abstract:

Assembling a nervous system requires exquisite specificity in the construction of neuronal connectivity. One method by which such specificity is implemented is the presence of chemical cues within the tissues, differentiating one region from another, and the presence of receptors for those cues on the surface of neurons and their axons that are navigating within this cellular environment.

Connections from one part of the nervous system to another often take the form of a topographic mapping. One widely studied model system that involves such a mapping is the vertebrate retinotectal projection: the set of connections between the eye and the optic tectum of the midbrain, which is the primary visual center in non-mammals and is homologous to the superior colliculus in mammals. In this projection the two-dimensional surface of the retina is mapped smoothly onto the two-dimensional surface of the tectum, such that light from neighboring points in visual space excites neighboring cells in the brain. This mapping is implemented at least in part via differential chemical cues in different regions of the tectum.

The Eph family of receptor tyrosine kinases and their cell-surface ligands, the ephrins, have been implicated in a wide variety of processes, generally involving cellular movement in response to extracellular cues. In particular, they possess expression patterns (complementary gradients of receptor in the retina and ligand in the tectum) and in vitro and in vivo activities and phenotypes (repulsive guidance of axons and defective mapping in mutants, respectively) consistent with the long-sought retinotectal chemical mapping cues.

The tadpole of Xenopus laevis, the South African clawed frog, is advantageous for in vivo retinotectal studies because of its transparency and manipulability. However, neither the expression patterns nor the retinotectal roles of these proteins have been well characterized in this system. We report here comprehensive descriptions in swimming stage tadpoles of the messenger RNA expression patterns of eleven known Xenopus Eph and ephrin genes, including xephrin-A3, which is novel, and xEphB2, whose expression pattern has not previously been published in detail. We also report the results of in vivo protein injection perturbation studies on Xenopus retinotectal topography, which were negative, and of in vitro axonal guidance assays, which suggest a previously unrecognized attractive activity of ephrins at low concentrations on retinal ganglion cell axons. This raises the possibility that these axons find their correct targets in part by seeking out a preferred concentration of ligands appropriate to their individual receptor expression levels, rather than by being repelled to greater or lesser degrees by the ephrins but attracted by some as-yet-unknown cue(s).
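The closing suggestion above, that axons may seek a preferred ligand concentration matched to their receptor level, can be illustrated with a toy set-point model: complementary exponential gradients of receptor (retina) and ligand (tectum) yield a smooth topographic map when each axon settles where the receptor-ligand signal reaches a fixed target. The gradients, set point, and rule below are assumptions for illustration only, not the thesis's data or a validated model.

```python
# Toy "preferred concentration" (set-point) mapping sketch; illustrative assumptions only.
import numpy as np

tectum = np.linspace(0.0, 1.0, 101)        # normalized tectal axis
ligand = np.exp(2.0 * tectum)              # assumed exponential ephrin gradient in the tectum
retina = np.linspace(0.0, 1.0, 11)         # normalized retinal axis
receptor = np.exp(2.0 * (1.0 - retina))    # complementary Eph receptor gradient in the retina

set_point = np.exp(2.0)                    # preferred receptor*ligand "signal" level
for r, rec in zip(retina, receptor):
    # Each axon settles where its receptor-ligand product is closest to the set point.
    target = tectum[np.argmin(np.abs(rec * ligand - set_point))]
    print(f"retinal position {r:.1f} -> tectal position {target:.2f}")
```

With these complementary gradients the set-point rule reproduces a smooth topographic map (retinal position r lands at tectal position r), which is the qualitative behavior the hypothesis requires.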

Relevance:

10.00%

Publisher:

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as outer-product rules, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times, no guarantee of working on all inputs, and a requirement of full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
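A minimal sketch of a Winner-Take-All computation implemented with Hopfield-style continuous dynamics and mutual inhibition, in the spirit of the circuits described above; the weights, gain, and update rule are illustrative assumptions rather than the thesis's proven circuit design.

```python
# Toy Winner-Take-All via mutual inhibition and continuous (Hopfield-style) dynamics.
# Weights and gains are illustrative assumptions, not the thesis's circuit.
import numpy as np

def winner_take_all(inputs, steps=2000, dt=0.01, gain=8.0, inhibition=2.0):
    """Continuous mutual-inhibition network; the unit with the largest input should win."""
    u = np.zeros(inputs.size)                          # internal unit states
    v = np.zeros(inputs.size)
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-gain * u))            # sigmoid unit outputs
        net = inputs + v - inhibition * (v.sum() - v)  # self-excitation minus inhibition from others
        u += dt * (-u + net)                           # leaky integration toward the net input
    return v

out = winner_take_all(np.array([0.3, 0.9, 0.5, 0.1]))
print(np.round(out, 2))   # expect the unit with the largest input (index 1) near 1, others near 0
```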

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.

Relevance:

10.00%

Publisher:

Abstract:

The visual system is a remarkable platform that evolved to solve difficult computational problems such as the detection, recognition, and classification of objects. Of great interest is the face-processing network, a sub-system buried deep in the temporal lobe and dedicated to analyzing a specific type of object: faces. In this thesis, I focus on the problem of face detection by the face-processing network. Insights gained from years of developing computer-vision algorithms for this task suggest that it may be efficiently and effectively solved by the detection and integration of local contrast features. Does the brain use a similar strategy? To answer this question, I embark on a journey that takes me through the development and optimization of dedicated tools for targeting and perturbing deep brain structures. Data collected using MR-guided electrophysiology in early face-processing regions revealed strong selectivity for contrast features, similar to those used by artificial systems. While individual cells were tuned for only a small subset of features, the population as a whole encoded the full spectrum of features that are predictive of the presence of a face in an image. Together with additional evidence, my results suggest a possible computational mechanism for face detection in early face-processing regions. To move from correlation to causation, I focus on adopting an emergent technology for perturbing brain activity using light: optogenetics. While this technique has the potential to overcome problems associated with the de facto method of brain stimulation (electrical microstimulation), many open questions remain about its applicability and effectiveness for perturbing the non-human primate (NHP) brain. In a set of experiments, I use viral vectors to deliver genetically encoded optogenetic constructs to the frontal eye field and to face-selective regions in NHPs, and I examine their effects side by side with electrical microstimulation to assess their effectiveness in perturbing neural activity as well as behavior. The results suggest that cells are robustly and strongly modulated upon light delivery and that such perturbation can modulate and even initiate motor behavior, thus paving the way for future explorations that may apply these tools to study connectivity and information flow in the face-processing network.
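The sketch below illustrates the local-contrast-feature idea mentioned above: split a face-sized window into coarse regions and count how many pairwise luminance-contrast relations (e.g., eyes darker than forehead) are satisfied. The region layout and the particular relations are hypothetical and are not the thesis's stimulus set or the recorded neurons' measured tuning.

```python
# Toy face-detection score based on local contrast features between coarse regions.
# Region layout and relations are illustrative assumptions.
import numpy as np

def region_means(img):
    """Mean luminance in a hypothetical 3x3 grid of face regions (img: 2-D array)."""
    h, w = img.shape
    return {(r, c): img[r*h//3:(r+1)*h//3, c*w//3:(c+1)*w//3].mean()
            for r in range(3) for c in range(3)}

# Hypothetical contrast relations as (darker region, brighter region) pairs.
RELATIONS = [((1, 0), (0, 1)), ((1, 2), (0, 1)),   # left/right "eye" darker than "forehead"
             ((1, 0), (1, 1)), ((1, 2), (1, 1)),   # "eyes" darker than central "nose" region
             ((2, 1), (1, 1))]                     # "mouth" darker than "nose" region

def face_score(img):
    m = region_means(img)
    return sum(m[dark] < m[bright] for dark, bright in RELATIONS)

# Toy image: bright forehead/cheeks with darker eye and mouth patches.
img = np.full((90, 90), 200.0)
img[30:60, 0:30] = img[30:60, 60:90] = 80.0    # "eyes"
img[60:90, 30:60] = 100.0                      # "mouth"
print("contrast relations satisfied:", face_score(img), "of", len(RELATIONS))
```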

Relevance:

10.00%

Publisher:

Abstract:

In this study, we evaluated the composition and abundance of amphibian and reptile species in a fragmented Atlantic Forest landscape in the municipality of Cachoeiras de Macacu, Rio de Janeiro State. We sampled the herpetofauna of the region in the continuous forest area of the Reserva Ecológica de Guapiaçu (REGUA), in 12 surrounding fragments of different sizes and degrees of isolation, and in pasture areas (the matrix). Sampling was carried out using visual active searches and pitfall traps, complemented by occasional encounters. In total, we recorded 55 species of anuran amphibians belonging to 12 families; among the reptiles, we recorded 26 species: one amphisbaenian, one caiman, nine lizards, and 15 snakes. Among the amphibians there was a dominance of species of the family Hylidae, which represented more than half of the total species found in the study, whereas among the reptiles, snake species of the family Dipsadidae predominated. Considering only the records obtained with the standardized methods, the continuous forest area of REGUA had a lower species richness of amphibians (N = 30) and lizards (N = 4) than the set of fragments (N = 36 and N = 8), but higher than that found in the matrix (N = 25 and N = 1). However, for the amphibians, more than one third of the species (N = 11) that occurred in the continuous forest did not occur in the fragments or in the matrix, which suggests that these species may be more sensitive to habitat alteration. The greater species richness found in the set of fragments can be partially explained by the fact that many amphibian and lizard species typical of open areas have their occurrence favored by the relatively less closed conditions of the fragments. When we evaluated the effect of landscape metrics, we observed different responses between amphibians and lizards: whereas for amphibians there was a tendency for more distant fragments to have lower species richness and fewer reproductive modes associated with those species, for lizards the area of the fragments appears to be an important variable structuring the communities. However, given particularities of the physiological and ecological characteristics of amphibians and lizards, it is possible that other factors explain the differing distributions of the species. In general, the sampled matrix areas appeared to be hostile to forest species of both amphibians and lizards. In addition, for amphibians, the presence of breeding habitats may be a crucial factor for the occurrence of some species. As found in other studies, maintaining the diversity of amphibians and lizards in the fragmented landscape treated here requires preserving the large forest block and increasing its connectivity with the fragments, which could allow the continuous area to serve as a source of dispersers for the forest remnants.