866 results for: categorization IT PFC computational neuroscience model HMAX


Relevance: 100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Publisher:

Abstract:

This study used a multi-analytical approach that combined traditional microbiological methods for the cultivation and isolation of heterotrophic bacteria in the laboratory with molecular identification of the isolates and physicochemical analysis of environmental samples. The model chosen for data integration was informed by computational neuroscience and composed of three modules: (i) microbiological parameters, comprising taxonomic data obtained from partial sequencing of the 16S rRNA gene of 80 colonies of heterotrophic bacteria isolated by the plating method on PCA medium. Colonies were isolated from water samples taken from the Atibaia and Jaguarí rivers at the intake point for water used in effluent treatment, upstream of the discharge of treated effluent from the Paulínia refinery (REPLAN/Petrobras) in the municipality of Paulínia-SP, from the outlet of the biological treatment plant with a stabilization pond, and from raw refinery wastewater; (ii) chemical parameters, comprising measurements of dissolved oxygen (DO), chemical oxygen demand (COD), biochemical oxygen demand (BOD), chloride, acidity (as CaCO3), alkalinity, ammonia, nitrite, nitrate, dissolved ions, sulfides, and oils and greases; and (iii) physical parameters, comprising pH, conductivity, temperature, transparency, settleable solids, suspended and soluble solids, volatile matter, remaining fixed matter (RFM), apparent color, and turbidity. The results revealed interesting theoretical relationships involving two families of bacteria (Carnobacteriaceae and Aeromonadaceae). Carnobacteriaceae showed positive theoretical relationships with COD, BOD, nitrate, chloride, temperature, conductivity and apparent color, and a negative theoretical relationship with DO. Positive theoretical relationships were found between Aeromonadaceae and DO and nitrate, while this bacterial family showed negative theoretical...
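As an illustration of how such taxon-parameter relationships can be screened, here is a minimal sketch using Spearman rank correlation; the data and column names are hypothetical placeholders, not the study's dataset, and the study itself integrates the modules through its neuroscience-inspired model rather than plain correlation.

```python
# A minimal sketch (hypothetical data, not the study's dataset) of screening
# taxon-parameter relationships such as those reported for Carnobacteriaceae
# and Aeromonadaceae, using Spearman rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_samples = 12
abundance = {                                   # relative abundance per family
    "Carnobacteriaceae": rng.random(n_samples),
    "Aeromonadaceae": rng.random(n_samples),
}
params = {                                      # physicochemical measurements
    "DO": rng.random(n_samples) * 8,            # mg/L
    "COD": rng.random(n_samples) * 300,         # mg/L
    "nitrate": rng.random(n_samples) * 10,      # mg/L
}

for family, a in abundance.items():
    for name, p in params.items():
        rho, pval = spearmanr(a, p)             # rank correlation and p-value
        print(f"{family} vs {name}: rho={rho:+.2f} (p={pval:.2f})")
```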

Relevance: 100.00%

Publisher:

Abstract:

The objective of this study was to develop a suitable experimental model of natural Mycobacterium bovis infection in white-tailed deer (Odocoileus virginianus), describe the distribution and character of tuberculous lesions, and to examine possible routes of disease transmission. In October 1997, 10 mature female white-tailed deer were inoculated by intratonsilar instillation of 2 × 10^3 (low dose) or 2 × 10^5 (high dose) colony forming units (CFU) of M. bovis. In January 1998, deer were euthanatized, examined, and tissues were collected 84 to 87 days post inoculation. Possible routes of disease transmission were evaluated by culture of nasal, oral, tonsilar, and rectal swabs at various times during the study. Gross and microscopic lesions consistent with tuberculosis were most commonly seen in medial retropharyngeal lymph nodes and lung in both dosage groups. Other tissues containing tuberculous lesions included tonsil, trachea, liver, and kidney as well as lateral retropharyngeal, mandibular, parotid, tracheobronchial, mediastinal, hepatic, mesenteric, superficial cervical, and iliac lymph nodes. Mycobacterium bovis was isolated from tonsilar swabs from 8 of 9 deer from both dosage groups at least once 14 to 87 days after inoculation. Mycobacterium bovis was isolated from oral swabs 63 and 80 days after inoculation from one of three deer in the low dose group and none of four deer in the high dose group. Similarly, M. bovis was isolated from nasal swabs 80 and 85 days after inoculation in one of three deer from the low dose group and 63 and 80 days after inoculation from two of four deer in the high dose group. Intratonsilar inoculation with M. bovis results in lesions similar to those seen in naturally infected white-tailed deer; therefore, it represents a suitable model of natural infection. These results also indicate that M. bovis persists in tonsilar crypts for prolonged periods and can be shed in saliva and nasal secretions. These infected fluids represent a likely route of disease transmission to other animals or humans.

Relevance: 100.00%

Publisher:

Abstract:

The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena, for example in diseases such as epilepsy that affect neuronal networks, and in information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks, where inhibition is clearly absent. A recent model showed that contained activity can be achieved with no need for inhibition processes, provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, through which containment of activity is achieved by incorporating decaying activity into a random-walk mechanism that is preferential to node activity. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely random, Barabási-Albert and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if the integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent, restrained network activation can occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread to the entire neuronal network, even when no special topological organization exists.
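The following is a minimal sketch, under our own simplified reading of the mechanism described above: walkers reinforce the activity of the nodes they visit, activity decays geometrically at every step, and transitions are weighted by neighbour activity. All function names and parameter values are hypothetical, and `networkx` is used only to build example graphs.

```python
# Minimal sketch (not the authors' code) of activity-preferential random
# walks with decay: walkers move toward active neighbours, deposit activity,
# and all activity decays each step, so the total settles to a bounded level.
import random

import networkx as nx

def simulate(graph, decay=0.9, steps=1000, n_walkers=5, seed=0):
    rng = random.Random(seed)
    activity = {node: 1.0 for node in graph}        # initial activity
    walkers = rng.sample(list(graph), n_walkers)    # walker positions
    totals = []
    for _ in range(steps):
        for i, node in enumerate(walkers):
            nbrs = list(graph[node])
            if not nbrs:                            # skip isolated nodes
                continue
            # move preferentially toward active neighbours (epsilon avoids
            # a zero total weight after long decay)
            weights = [activity[n] + 1e-12 for n in nbrs]
            walkers[i] = rng.choices(nbrs, weights=weights)[0]
            activity[walkers[i]] += 1.0             # reinforce visited node
        for n in activity:                          # activity decays everywhere
            activity[n] *= decay
        totals.append(sum(activity.values()))
    return totals

# For a suitable decay coefficient, total activity neither explodes nor dies.
trace = simulate(nx.erdos_renyi_graph(200, 0.05), decay=0.9)
```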

Relevance: 100.00%

Publisher:

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:

• Portable mobile devices have modest sizes and weights, and therefore limited resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.

• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. While Systems-on-Chip are more and more programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating-system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)

Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more importantly, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks on the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.

This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction gap and the optimality gap, by formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, by developing novel mapping algorithms that deterministically find optimal solutions, and by implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor

Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel-matrix driving circuits, and is typically proportional to the panel area. As a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, offsetting the perceived quality degradation with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.

Thesis overview

The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
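To make the backlight-compensation idea concrete, here is a minimal sketch under the usual simplifying assumptions (perceived luminance proportional to backlight level times normalized pixel value, power roughly proportional to backlight level). It illustrates the general technique only, not the dissertation's hardware-assisted implementation, and the function names are ours.

```python
# Minimal sketch of backlight scaling with pixel compensation: the backlight
# is dimmed by a factor b in (0, 1], and pixel values are boosted by 1/b to
# preserve perceived luminance, clipping the few pixels that saturate.
import numpy as np

def compensate(frame: np.ndarray, b: float) -> np.ndarray:
    """frame: uint8 grayscale image; b: backlight scale in (0, 1]."""
    boosted = frame.astype(np.float32) / b        # offset the dimmed backlight
    return np.clip(boosted, 0, 255).astype(np.uint8)

def pick_backlight(frame: np.ndarray, quantile: float = 0.99) -> float:
    # Dim as far as possible while keeping `quantile` of pixels unclipped.
    return max(float(np.quantile(frame, quantile)) / 255.0, 0.1)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
b = pick_backlight(frame)      # backlight level; power savings scale with 1-b
out = compensate(frame, b)     # frame to display with backlight scaled to b
```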

Relevance: 100.00%

Publisher:

Abstract:

The role of mitochondrial dysfunction in cancer has long been a subject of great interest. In this study, such dysfunction has been examined with regards to thyroid oncocytoma, a rare form of cancer, accounting for less than 5% of all thyroid cancers. A peculiar characteristic of thyroid oncocytic cells is the presence of an abnormally large number of mitochondria in the cytoplasm. Such mitochondrial hyperplasia has also been observed in cells derived from patients suffering from mitochondrial encephalomyopathies, where mutations in the mitochondrial DNA (mtDNA) encoding the respiratory complexes result in oxidative phosphorylation dysfunction. An increase in the number of mitochondria occurs in the latter in order to compensate for the respiratory deficiency. This fact spurred the investigation into the presence of analogous mutations in thyroid oncocytic cells. In this study, the only available cell model of thyroid oncocytoma was utilised, the XTC-1 cell line, established from an oncocytic thyroid metastasis to the breast. In order to assess the energetic efficiency of these cells, they were incubated in a medium lacking glucose and supplemented instead with galactose. When subjected to such conditions, glycolysis is effectively inhibited and the cells are forced to use the mitochondria for energy production. Cell viability experiments revealed that XTC-1 cells were unable to survive in galactose medium. This was in marked contrast to the TPC-1 control cell line, a thyroid tumour cell line which does not display the oncocytic phenotype. In agreement with these findings, subsequent experiments assessing the levels of cellular ATP over incubation time in galactose medium showed a drastic and continual decrease in ATP levels only in the XTC-1 cell line. Furthermore, experiments on digitonin-permeabilised cells revealed that the respiratory dysfunction in the latter was due to a defect in complex I of the respiratory chain. Subsequent experiments using cybrids demonstrated that this defect could be attributed to the mitochondrially-encoded subunits of complex I as opposed to the nuclear-encoded subunits. Confirmation came with mtDNA sequencing, which detected the presence of a novel mutation in the ND1 subunit of complex I. In addition, a mutation in the cytochrome b subunit of complex III of the respiratory chain was detected. The fact that XTC-1 cells are unable to survive when incubated in galactose medium is consistent with the fact that many cancers are largely dependent on glycolysis for energy production. Indeed, numerous studies have shown that glycolytic inhibitors are able to induce apoptosis in various cancer cell lines. Subsequent experiments were therefore performed in order to identify the mode of XTC-1 cell death when subjected to the metabolic stress imposed by the forced use of the mitochondria for energy production. Cell shrinkage and mitochondrial fragmentation were observed in the dying cells, which would indicate an apoptotic type of cell death. Analysis of additional parameters however revealed a lack of both DNA fragmentation and caspase activation, thus excluding a classical apoptotic type of cell death. Interestingly, cleavage of the actin component of the cytoskeleton was observed, implicating the action of proteases in this mode of cell demise. However, experiments employing protease inhibitors failed to identify the specific protease involved.
It has been reported in the literature that overexpression of Bcl-2 is able to rescue cells presenting a respiratory deficiency. As the XTC-1 cell line is not only respiration-deficient but also exhibits a marked decrease in Bcl-2 expression, it is a perfect model with which to study the relationship between Bcl-2 and oxidative phosphorylation in respiratory-deficient cells. Contrary to the reported literature studies on various cell lines harbouring defects in the respiratory chain, Bcl-2 overexpression was not shown to increase cell survival or rescue the energetic dysfunction in XTC-1 cells. Interestingly however, it had a noticeable impact on cell adhesion and morphology. Whereas XTC-1 cells shrank and detached from the growth surface under conditions of metabolic stress, Bcl-2-overexpressing XTC-1 cells appeared much healthier and were up to 45% more adherent. The target of Bcl-2 in this setting appeared to be the actin cytoskeleton, as the cleavage observed in XTC-1 cells expressing only endogenous levels of Bcl-2 was inhibited in Bcl-2-overexpressing cells. Thus, although unable to rescue XTC-1 cells in terms of cell viability, Bcl-2 is somehow able to stabilise the cytoskeleton, resulting in modifications in cell morphology and adhesion. The mitochondrial respiratory deficiency observed in cancer cells is thought not only to cause an increased dependency on glycolysis but also to blunt cellular responses to anticancer agents. Several therapeutic agents were thus assessed for their death-inducing ability in XTC-1 cells. Cell viability experiments clearly showed that the cells were more resistant to stimuli which generate reactive oxygen species (tert-butylhydroperoxide) and to mitochondrial calcium-mediated apoptotic stimuli (C6-ceramide), as opposed to stimuli inflicting DNA damage (cisplatin) and damage to protein kinases (staurosporine). Various studies in the literature have reported that the peroxisome proliferator-activated receptor coactivator 1 (PGC-1α), which plays a fundamental role in mitochondrial biogenesis, is also involved in protecting cells against apoptosis caused by the former two types of stimuli. In accordance with these observations, real-time PCR experiments showed that XTC-1 cells express higher mRNA levels of this coactivator than do the control cells, implicating its importance in drug resistance. In conclusion, this study has revealed that XTC-1 cells, like many cancer cell lines, are characterised by a reduced energetic efficiency due to mitochondrial dysfunction. Said dysfunction has been attributed to mutations in respiratory genes encoded by the mitochondrial genome. Although the mechanism of cell demise in conditions of metabolic stress is unclear, the potential of targeting thyroid oncocytic cancers using glycolytic inhibitors has been illustrated. In addition, the discovery of mtDNA mutations in XTC-1 cells has enabled the use of this cell line as a model with which to study the relationship between Bcl-2 overexpression and oxidative phosphorylation in cells harbouring mtDNA mutations and also to investigate the significance of such mutations in establishing resistance to apoptotic stimuli.

Relevance: 100.00%

Publisher:

Abstract:

Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because different syntaxes are adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments, e.g. in experiments with tuple-based coordination within Semantic Web middleware. However, such approaches appear to be designed to tackle coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem suitable as coordination media.
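To fix ideas, here is a minimal sketch of a Linda-style tuple space extended with a programmable reaction hook, the ingredient that distinguishes a tuple centre from a plain tuple space. The class and method names are ours, not the thesis' actual infrastructure, and the purely syntactic wildcard matching marks exactly the point a semantic tuple centre would replace with ontology-based matching.

```python
# Minimal sketch (hypothetical, not the thesis' implementation) of a tuple
# space whose behaviour can be programmed to react to interaction events.
from threading import Condition

class TupleCentre:
    def __init__(self):
        self._tuples, self._cv = [], Condition()
        self._reactions = []                     # callables run on each `out`

    def on_out(self, reaction):                  # program the centre's behaviour
        self._reactions.append(reaction)

    def out(self, tup):                          # emit a tuple
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()
        for react in self._reactions:
            react(self, tup)

    def rd(self, template):                      # blocking, non-destructive read
        with self._cv:
            while True:
                for t in self._tuples:
                    if self._matches(template, t):
                        return t
                self._cv.wait()

    @staticmethod
    def _matches(template, tup):
        # None is a wildcard; a semantic tuple centre would replace this
        # purely syntactic test with semantic (ontology-based) matching.
        return len(template) == len(tup) and all(
            a is None or a == b for a, b in zip(template, tup))

tc = TupleCentre()
# Reaction: whenever ('temp', celsius) is emitted, also emit a fahrenheit tuple.
tc.on_out(lambda c, t: t[0] == 'temp' and c.out(('temp_f', t[1] * 9 / 5 + 32)))
tc.out(('temp', 25))
print(tc.rd(('temp_f', None)))                   # -> ('temp_f', 77.0)
```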

Relevance: 100.00%

Publisher:

Abstract:

This work focuses on the study of saltwater intrusion in coastal aquifers and, in particular, on the development of conceptual schemes to evaluate the risk associated with it. Saltwater intrusion depends on different natural and anthropic factors, both showing strongly aleatory behaviour, which should be considered for optimal management of the territory and of water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide an accurate description of the model response without a large computational burden. When the assumptions of classical analytical models are not respected, as often happens in applications to real case studies, such as the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. A model can then be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
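As a toy illustration of casting a sharp-interface model in a probabilistic frame, the sketch below samples uncertain inputs and propagates them through the classical Ghyben-Herzberg relation for the interface depth below sea level, z = h·ρf/(ρs − ρf). The parameter distributions and the critical depth are hypothetical, not those of the study area, and the study's actual workflow uses Polynomial Chaos Expansion rather than brute-force Monte Carlo.

```python
# Minimal sketch: Monte Carlo propagation of uncertain inputs through the
# Ghyben-Herzberg sharp-interface relation z = h * rho_f / (rho_s - rho_f),
# where h is the freshwater head above sea level. All distributions and the
# critical depth are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
h = rng.lognormal(mean=0.0, sigma=0.4, size=n)   # head above sea level [m]
rho_f = 1000.0                                   # freshwater density [kg/m^3]
rho_s = rng.normal(1025.0, 2.0, size=n)          # seawater density [kg/m^3]

z = h * rho_f / (rho_s - rho_f)    # interface depth below sea level [m]

z_crit = 30.0                      # e.g. depth of a well screen [m]
risk = np.mean(z < z_crit)         # probability the interface rises above it
print(f"P(interface shallower than {z_crit} m) = {risk:.3f}")
```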

Relevance: 100.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with tons of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation tries to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, the answers are based on an exhaustive comparison between two very different deep learning techniques on this task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate, during the learning process, concepts like time, context and attention that are typical of the human brain. In the end, the thesis sets out to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
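For concreteness, here is a minimal PyTorch sketch of the CNN side of such a comparison; the architecture, input sizes and names are generic placeholders, not the networks actually evaluated in the thesis.

```python
# A minimal, generic CNN for object recognition (illustrative only, not the
# model used in the thesis): two conv/pool stages and a linear classifier.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))        # a batch of 4 RGB 32x32 images
print(logits.shape)                              # torch.Size([4, 10])
```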

Relevance: 100.00%

Publisher:

Abstract:

The striatum, the major input nucleus of the basal ganglia, is numerically dominated by a single class of principal neurons, the GABAergic spiny projection neuron (SPN), which has been extensively studied both in vitro and in vivo. Much less is known about the sparsely distributed interneurons, principally the cholinergic interneuron (CIN) and the GABAergic fast-spiking interneuron (FSI). Here, we summarize results from two recent studies on these interneurons where we used in vivo intracellular recording techniques in urethane-anaesthetized rats (Schulz et al., J Neurosci 31[31], 2011; J Physiol, in press). Interneurons were identified by their characteristic responses to intracellular current steps and spike waveforms. Spontaneous spiking contained a high proportion (~45%) of short inter-spike intervals (ISI) of <30 ms in FSIs, but virtually none in CINs. Spiking patterns in CINs covered a broad spectrum ranging from regular tonic spiking to phasic activity, despite very similar unimodal membrane potential distributions across neurons. In general, phasic spiking activity occurred in phase with the slow ECoG waves, whereas CINs exhibiting tonic regular spiking were little affected by afferent network activity. In contrast, FSIs exhibited transitions between Down and Up states very similar to SPNs. Compared to SPNs, the FSI Up state membrane potential was noisier, and power spectra exhibited significantly larger power at frequencies in the gamma range (55-95 Hz). Cortically evoked inputs had faster dynamics in FSIs than in SPNs, and the membrane potential preceding spontaneous spike discharge exhibited short and steep trajectories, suggesting that fast input components controlled spike output in FSIs. Intrinsic resonance mechanisms may have further enhanced the sensitivity of FSIs to fast oscillatory inputs. Induction of an activated ECoG state by local ejection of bicuculline into the superior colliculus resulted in increased spike frequency in both interneuron classes without changing the overall distribution of ISIs. This manipulation also made CINs responsive to a light flashed into the contralateral eye. Typically, the response consisted of an excitation at short latency followed by a pause in spike firing, via an underlying depolarization-hyperpolarization membrane sequence. These results highlight the differential sensitivity of striatal interneurons to afferent synaptic signals and support a model where CINs modulate the striatal network in response to salient sensory bottom-up signals, while FSIs serve gating of top-down signals from the cortex during action selection and reward-related learning.
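As a small illustration of the ISI statistic quoted above (the fraction of intervals shorter than 30 ms), the sketch below computes it for a synthetic spike train; the data are random placeholders, not the recordings from the study.

```python
# Compute the fraction of inter-spike intervals (ISIs) shorter than 30 ms
# for a synthetic spike train (illustrative data only).
import numpy as np

rng = np.random.default_rng(7)
spike_times_s = np.sort(rng.uniform(0.0, 60.0, size=600))  # 600 spikes in 60 s

isis_ms = np.diff(spike_times_s) * 1000.0                  # ISIs in milliseconds
short_fraction = np.mean(isis_ms < 30.0)
print(f"fraction of ISIs < 30 ms: {short_fraction:.2f}")
```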

Relevance: 100.00%

Publisher:

Abstract:

File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies which focus on one or more of the security principles: confidentiality, integrity and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly, by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., the time of day. Access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable, with predicates on the attributes of the subject, the object and the system; the file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model: when the firewall system is restricted to a specified part of the system, all other resources are unaffected, which enables relatively smooth adoption. This, together with the fact that the model is familiar to system administrators, should facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found it easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
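Here is a minimal sketch of the "situation" idea: access decisions computed from predicates over subject, object and system attributes such as the time of day. The rule syntax, names and example rules are hypothetical illustrations, not the SUSE Linux prototype described in the abstract.

```python
# Minimal sketch (hypothetical rule syntax, not the actual prototype) of
# situation-based access decisions: each rule is a predicate over the whole
# request (subject, object and system attributes) plus the action it mandates.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable

@dataclass
class Request:
    user: str                                 # subject attribute
    path: str                                 # object attribute
    op: str                                   # "read" | "write" | ...
    now: datetime = field(default_factory=datetime.now)  # system attribute

Rule = tuple[Callable[[Request], bool], str]  # (predicate, "allow"/"deny")

rules: list[Rule] = [
    # deny writes to /etc outside working hours, whoever asks
    (lambda r: r.path.startswith("/etc") and r.op == "write"
               and not 9 <= r.now.hour < 17, "deny"),
    # the backup daemon may read anything
    (lambda r: r.user == "backup" and r.op == "read", "allow"),
]

def decide(req: Request, default: str = "deny") -> str:
    for predicate, action in rules:           # first matching rule wins
        if predicate(req):
            return action
    return default                            # default-deny fallback

print(decide(Request("backup", "/var/log/syslog", "read")))  # -> allow
print(decide(Request("alice", "/etc/passwd", "write")))      # -> deny
```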

Relevance: 100.00%

Publisher:

Abstract:

Radio frequency electromagnetic fields (RF-EMF) in our daily life are produced by numerous sources such as fixed-site transmitters (e.g. mobile phone base stations) and indoor devices (e.g. cordless phones). The objective of this study was to develop a prediction model that can be used to estimate mean RF-EMF exposure from different sources for a large study population in epidemiological research. We collected personal RF-EMF exposure measurements from 166 volunteers from Basel, Switzerland, by means of portable exposure meters carried for one week. For a validation study, we repeated the exposure measurements of 31 study participants, on average 21 weeks after the measurements of the first week. These second measurements were not used for the model development. We used two data sources as exposure predictors: 1) a questionnaire on potentially exposure-relevant characteristics and behaviors, and 2) RF-EMF from fixed-site transmitters (mobile phone base stations, broadcast transmitters) at the participants' place of residence, modeled with a geospatial propagation model. The relevant exposure predictors, identified by means of multiple regression analysis, were the modeled RF-EMF at the participants' home from the propagation model, housing characteristics, ownership of communication devices (wireless LAN, mobile and cordless phones) and behavioral aspects such as the amount of time spent on public transport. The proportion of variance explained (R²) by the final model was 0.52. The analysis of the agreement between calculated and measured RF-EMF showed a sensitivity of 0.56 and a specificity of 0.95 (cut-off: 90th percentile). In the validation study, the sensitivity and specificity of the model were 0.67 and 0.96, respectively. We demonstrated that it is feasible to model personal RF-EMF exposure. Most importantly, our validation study suggests that the model can be used to assess average exposure over several months.
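To illustrate the agreement check reported above, the sketch below classifies "highly exposed" participants as those above the 90th percentile and computes sensitivity and specificity of predicted versus measured exposure; the data are synthetic placeholders, not the Basel measurements.

```python
# Sensitivity/specificity of a predicted-vs-measured exposure classification
# at a 90th-percentile cut-off, on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
measured = rng.lognormal(0.0, 0.5, size=166)               # personal measurements
predicted = measured * rng.lognormal(0.0, 0.4, size=166)   # an imperfect model

meas_high = measured >= np.percentile(measured, 90)        # true "high" class
pred_high = predicted >= np.percentile(predicted, 90)      # predicted "high"

sensitivity = np.mean(pred_high[meas_high])                # TP / (TP + FN)
specificity = np.mean(~pred_high[~meas_high])              # TN / (TN + FP)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```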