940 results for Essential-state models


Relevance: 30.00%

Abstract:

The presence of gap junction coupling among neurons of the central nervous system has been appreciated for some time now. In recent years there has been an upsurge of interest from the mathematical community in understanding the contribution of these direct electrical connections between cells to large-scale brain rhythms. Here we analyze a class of exactly soluble single-neuron models, capable of producing realistic action potential shapes, that can be used as the basis for understanding dynamics at the network level. This work focuses on planar piecewise-linear models that can mimic the firing response of several different cell types. Under constant current injection the periodic response and the phase response curve (PRC) are calculated in closed form. A simple formula for the stability of a periodic orbit is found using Floquet theory. From the calculated PRC and the periodic orbit, a phase interaction function is constructed that allows the investigation of phase-locked network states using the theory of weakly coupled oscillators. For large networks with global gap junction connectivity we develop a theory of strong coupling instabilities of the homogeneous, synchronous, and splay states. For a piecewise-linear caricature of the Morris-Lecar model, with oscillations arising from a homoclinic bifurcation, we show that large-amplitude oscillations in the mean membrane potential are organized around such unstable orbits.
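
For intuition, here is a minimal numerical sketch (Python) of the kind of object being computed: it estimates the PRC of a McKean-style planar piecewise-linear model by direct perturbation rather than reproducing the paper's closed-form analysis, and all parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters; chosen so the fixed point sits on the middle
# branch of the piecewise-linear nullcline and the model oscillates.
A, I, B, GAMMA, DT = 0.25, 0.5, 0.01, 0.5, 0.01

def f(v):
    # Piecewise-linear caricature of a cubic voltage nullcline (McKean).
    if v < A / 2:
        return -v
    if v < (1 + A) / 2:
        return v - A
    return 1.0 - v

def step(v, w):
    # One forward-Euler step of the planar model under constant current I.
    return v + DT * (f(v) - w + I), w + DT * B * (v - GAMMA * w)

def time_to_spike(v, w, thresh=0.5, t_max=1e4):
    # Time to the next upward threshold crossing, plus the state there.
    t, prev = 0.0, v
    while t < t_max:
        v, w = step(v, w)
        t += DT
        if prev < thresh <= v:
            return t, v, w
        prev = v
    raise RuntimeError("no spike found")

# Relax onto the limit cycle, align phase 0 with a spike, measure the period.
v, w = 0.0, 0.0
for _ in range(100000):
    v, w = step(v, w)
_, v0, w0 = time_to_spike(v, w)
T, _, _ = time_to_spike(v0, w0)

# First-order PRC: kick the voltage by eps at phase theta and compare the
# next spike time with the unperturbed remainder (1 - theta) * T.
eps = 1e-3
for theta in np.linspace(0.0, 0.9, 10):
    vp, wp = v0, w0
    for _ in range(int(theta * T / DT)):
        vp, wp = step(vp, wp)
    t_pert, _, _ = time_to_spike(vp + eps, wp)
    # Phase shift per unit perturbation, normalized by the period.
    print(f"theta={theta:.1f}  PRC~{((1 - theta) * T - t_pert) / (eps * T):+.3f}")
```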

Relevance: 30.00%

Abstract:

This study identifies and compares the competing policy stories of key actors involved in the Ecuadorian education reform under President Rafael Correa from 2007 to 2015. By revealing these competing policy stories, the study generates insights into the political and technical aspects of education reform in a context where state capacity had been eroded by decades of neoliberal policies. Since the elections in 2007, President Correa has focused much of his political effort and capital on reconstituting the state's authority and capacity not only to formulate but also to implement public policies. The concentration of power, combined with a capacity-building agenda, allowed the Correa government to advance an ambitious, comprehensive education reform with substantive results in equity and quality. At the same time, the concentration of power has undermined the more inclusive and participatory approach that is essential for deepening and sustaining the reform. This study underscores both the limits and the importance of state control over education; the inevitable conflicts and complexities associated with education reforms that focus on quality; and the limits and importance of participation in reform. Finally, it examines the analytical benefits of understanding governance, participation, and quality as socially constructed concepts tied to normative and ideological interests.

Relevance: 30.00%

Abstract:

A computer vision system that has to interact in natural language needs to understand the visual appearance of interactions between objects along with the appearance of the objects themselves. Relationships between objects are frequently mentioned in queries for tasks like semantic image retrieval, image captioning, visual question answering, and natural language object detection; hence, it is essential to model context between objects to solve these tasks. In the first part of this thesis, we present a technique for detecting an object mentioned in a natural language query. Specifically, we work with referring expressions, which are sentences that identify a particular object instance in an image. In many referring expressions, an object is described in relation to another object using prepositions, comparative adjectives, action verbs, etc. Our proposed technique can identify both the referred object and the context object mentioned in such expressions. Context is also useful for incrementally understanding scenes and videos. In the second part of this thesis, we propose techniques for searching for objects in an image and for events in a video. Our proposed incremental algorithms use the context from previously explored regions to prioritize the regions to explore next. The advantage of incremental understanding is that it restricts the computation time and/or resources spent on various detection tasks. Our first proposed technique shows how to learn context in indoor scenes in an implicit manner and use it to search for objects. The second shows how explicitly written context rules for one-on-one basketball can be used to sequentially detect events in a game.
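
To make the prioritization idea concrete, here is a schematic Python sketch of context-driven incremental search; the scores and the boost function stand in for the learned models in the thesis and are purely illustrative.

```python
import heapq

def incremental_search(neighbors, score, context_boost, budget):
    """Explore regions in priority order, letting each explored region
    update the priority of its unexplored neighbors.

    neighbors: dict region_id -> list of adjacent region ids
    score: dict region_id -> initial detector score (mutated in place)
    context_boost: fn(explored_id, neighbor_id) -> additive score update
    """
    heap = [(-s, r) for r, s in score.items()]   # max-heap via negation
    heapq.heapify(heap)
    explored, order = set(), []
    while heap and len(order) < budget:
        neg_s, r = heapq.heappop(heap)
        if r in explored or -neg_s < score[r]:   # already done, or stale entry
            continue
        explored.add(r)
        order.append(r)
        for n in neighbors[r]:
            if n not in explored:
                score[n] += context_boost(r, n)  # context from explored region
                heapq.heappush(heap, (-score[n], n))
    return order

# Toy usage: three regions in a row; exploring one boosts its neighbors.
print(incremental_search({0: [1], 1: [0, 2], 2: [1]},
                         {0: 0.2, 1: 0.5, 2: 0.1},
                         lambda a, b: 0.3, budget=3))   # -> [1, 0, 2]
```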

Relevance: 30.00%

Abstract:

Nowadays, the latest generation of computers provides the high performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for robots to move through their environments. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars, and contact sensors have typically been used in mobile robotic architectures. Color cameras, however, are an important sensor because we want robots to sense and move through different environments using the same information that humans use. Color cameras are cheap and flexible, but considerable work is needed to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but robots now have access to powerful architectures that can be exploited for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is obtained, the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene-mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have primarily been computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs under the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, so many points belong to planar surfaces. Our proposed method obtains good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we demonstrate the capabilities of GPU technologies by obtaining a high-performance implementation of Virtual Digitizing, a common CAD/CAM technique.
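
To illustrate the plane-based compression idea (a sketch under assumed parameters, not the thesis implementation), the dominant plane of a point cloud can be detected with RANSAC, after which its many inlier points can be stored as a single plane equation plus their 2D footprint:

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.01, seed=0):
    # Repeatedly fit a plane through 3 random points and keep the model
    # with the most inliers (points within `tol` of the plane).
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        inliers = np.abs((points - p0) @ normal) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, normal @ p0), inliers
    return best_model, best_inliers

# Toy cloud: a dense, slightly noisy plane plus scattered outliers.
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (1000, 2)), rng.normal(0, 0.002, 1000)]
cloud = np.vstack([plane_pts, rng.uniform(-1, 1, (100, 3))])
(n, d), inliers = ransac_plane(cloud)
print(f"plane n={n.round(2)}, d={d:.3f}; {inliers.sum()}/{len(cloud)} points compressible")
```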

Relevance: 30.00%

Abstract:

The purpose of this paper is to contribute to the debate on corporate governance models in European transition economies. The paper consists of four parts. After a historical overview of the evolution of corporate governance, the introduction presents various understandings of the corporate governance function and describes current issues in corporate governance. Part two deals with governance systems in the (mainly domestically) privatized former state-owned companies of Central European transition countries, covering the main types of company ownership structures, the relationships between governance and management functions, and the deficiencies of existing governance systems. Part three is dedicated to the analysis of factors that determine the efficiency of the relationship between the corporate governance and management functions in Central European transition economies. It deals with the question of why the German (continental European) governance model is usually the preferred choice and why the chosen models underperform. In the conclusion, the author offers suggestions on how the Central European transition countries should improve their corporate governance in the future.

Relevance: 30.00%

Abstract:

Entangled quantum states can be given a separable decomposition if we relax the restriction that the local operators be quantum states. Motivated by the construction of classical simulations and local hidden variable models, we construct "smallest" local sets of operators that achieve this. In other words, given an arbitrary bipartite quantum state, we construct convex sets of local operators that allow for a separable decomposition but that cannot be made smaller while continuing to do so. We then consider two further variants of the problem in which the local state spaces are required to contain the local quantum states, and obtain solutions for a variety of cases, including a region of pure states around the maximally entangled state. The methods involve calculating certain forms of cross norm. Two of the variants of the problem have a strong relationship to theorems on ensemble decompositions of positive operators, and our results thereby give those theorems an added interpretation. The results generalise those obtained in our previous work on this topic [New J. Phys. 17, 093047 (2015)].
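
For orientation, the central notion can be sketched with standard definitions (the notation here is generic and may differ from the paper's). A bipartite state $\rho$ admits a separable decomposition over local operator sets $S_A$ and $S_B$ when

```latex
\[
  \rho \;=\; \sum_i p_i \, A_i \otimes B_i ,
  \qquad A_i \in S_A,\quad B_i \in S_B,\quad p_i \ge 0,\quad \sum_i p_i = 1 .
\]
```

Every state admits such a decomposition once $S_A$ and $S_B$ are made large enough; the question addressed is how small these convex sets can be while a decomposition of this form still exists.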

Relevance: 30.00%

Abstract:

We estimate a dynamic model of mortgage default for a cohort of Colombian debtors between 1997 and 2004. We use the estimated model to study the effects on default of a class of policies that affected the evolution of mortgage balances in Colombia during the 1990s. We propose a framework for estimating dynamic behavioral models that accounts for the presence of unobserved state variables correlated across individuals and across time periods. We extend the standard literature on the structural estimation of dynamic models by incorporating an unobserved common correlated shock that affects all individuals' static payoffs as well as the dynamic continuation payoffs associated with different decisions. Given a standard parametric specification of the dynamic problem, we show that the aggregate shocks are identified from the variation in observed aggregate behavior. The shocks and their transition are separately identified, provided there is enough cross-sectional variation of the observed states.
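
Schematically, the class of models described can be written as the following dynamic discrete-choice problem (a generic formulation, not necessarily the paper's exact notation):

```latex
\[
  V(x_{it}, \eta_t) \;=\; \max_{d \in \{\text{pay},\,\text{default}\}}
  \Big\{ u_d(x_{it}, \eta_t)
  \;+\; \beta\, \mathbb{E}\big[\, V(x_{i,t+1}, \eta_{t+1}) \,\big|\, x_{it}, \eta_t, d \,\big] \Big\},
\]
```

where $x_{it}$ collects the observed individual states (such as the mortgage balance) and $\eta_t$ is the serially correlated aggregate shock, common across debtors, that enters both the static payoffs and the continuation values.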

Relevance: 30.00%

Abstract:

Impulse control disorders (ICDs) are a common side effect of dopaminergic treatment in patients with Parkinson's disease and are more strongly associated with dopamine agonists than with levodopa. Reliable animal models are essential for understanding their pathophysiology. Using the variable delay-to-signal (VDS) paradigm, impulsivity was evaluated in bilaterally parkinsonian rats treated with pramipexole (PPX). In this test, rats must poke their snout into a nose-poke port signaled by a light (presented at variable delays); a correct response triggers the delivery of a food reward. Once rats reached a stable baseline performance, a partial bilateral dopaminergic lesion was induced with 6-OHDA in the dorsolateral striatum (AP: +1 mm, L: ±3.4 mm, V: -4.7 mm, from Bregma). Rats undertook the VDS test under five conditions: the basal state, after the 6-OHDA-induced lesion, under each of two doses of PPX (0.25 mg/kg and 3 mg/kg; Latin-square design), and on the day after the last dose of PPX. Only the acute administration of 3 mg/kg of PPX significantly raised the number of premature responses, indicating increased impulsive behavior, in parkinsonian but not in sham rats. Both doses of PPX significantly decreased response accuracy (correct responses / total responses) and increased incorrect and perseverative responses (compulsive behavior) in both parkinsonian and sham PPX-treated groups compared with saline-treated groups. In conclusion, PPX induced an attention deficit (loss of accuracy) as well as compulsive behavior in control and parkinsonian rats, but increased impulsivity only in the parkinsonian animals. This model could constitute a valid tool for investigating the pathophysiology of ICDs.
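
As a concrete reading of the outcome measures, here is a minimal Python sketch of how VDS trials might be scored (the trial encoding and field names are assumptions, not the authors' analysis code):

```python
def vds_metrics(trials):
    """trials: list of dicts with key 'response' in
    {'premature', 'correct', 'incorrect', 'perseverative', 'omission'}."""
    counts = {k: 0 for k in ('premature', 'correct', 'incorrect',
                             'perseverative', 'omission')}
    for t in trials:
        counts[t['response']] += 1
    responded = counts['correct'] + counts['incorrect']
    return {
        'premature': counts['premature'],          # impulsivity index
        'perseverative': counts['perseverative'],  # compulsivity index
        # Accuracy as defined in the abstract: correct / total responses.
        'accuracy': counts['correct'] / responded if responded else float('nan'),
    }

# Toy session: mostly correct, with a few premature and perseverative pokes.
session = [{'response': r} for r in
           ['correct'] * 8 + ['incorrect'] * 2 + ['premature'] * 3
           + ['perseverative'] * 2]
print(vds_metrics(session))   # accuracy = 8 / 10
```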

Relevance: 30.00%

Abstract:

Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on, and threats to, forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly, and as a consequence spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest-based k-Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods and thereby generate forest inventory information across large spatial extents. Forest inventory data collected by the Forest Inventory and Analysis (FIA) program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for part of the Lake States and species-specific site index maps for the entire Lake States region. Targeting small-area applications of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method called variable plot sampling to derive a standing volume map of the Ford Forest at Michigan Tech in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and with extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or to estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
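
The following Python sketch illustrates the RF-kNN imputation idea: random-forest leaf co-occurrence serves as a proximity measure, and each target is imputed from its k most similar reference plots. The predictors, response, and hyperparameters are illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_knn_impute(X_ref, y_ref, X_target, k=5, seed=0):
    rf = RandomForestRegressor(n_estimators=300, random_state=seed)
    rf.fit(X_ref, y_ref)
    leaves_ref = rf.apply(X_ref)        # (n_ref, n_trees) leaf indices
    leaves_tgt = rf.apply(X_target)     # (n_tgt, n_trees)
    # Proximity = fraction of trees in which two samples share a leaf.
    prox = (leaves_tgt[:, None, :] == leaves_ref[None, :, :]).mean(axis=2)
    nn = np.argsort(-prox, axis=1)[:, :k]   # k most-proximate reference plots
    return y_ref[nn].mean(axis=1)           # impute as the k-neighbor mean

# Toy usage: spectral/geospatial predictors -> biomass for unsampled pixels.
rng = np.random.default_rng(0)
X_ref = rng.normal(size=(200, 6))
y_ref = 50 + 10 * X_ref[:, 0] + rng.normal(0, 1, 200)
X_tgt = rng.normal(size=(10, 6))
print(rf_knn_impute(X_ref, y_ref, X_tgt))
```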

Relevance: 30.00%

Abstract:

Computer simulation programs are essential tools for scientists and engineers seeking to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and riddled with unknowns that are discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments that have been conducted.
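
The flavor of the first part can be sketched in a few lines of Python (with ad hoc predicates standing in for FOLCSL, whose actual syntax the abstract does not give): a checker walks the event trace and reports every event that violates an invariant.

```python
def check_invariants(trace, invariants):
    """trace: iterable of dict events.
    invariants: list of (name, predicate), predicate(event, history) -> bool."""
    history, violations = [], []
    for i, event in enumerate(trace):
        for name, pred in invariants:
            if not pred(event, history):
                violations.append((i, name, event))
        history.append(event)
    return violations

# Toy micro-architecture trace: an instruction must be fetched before it commits.
trace = [{'op': 'fetch', 'id': 1}, {'op': 'commit', 'id': 1},
         {'op': 'commit', 'id': 2}]                 # id 2 never fetched
fetched_before_commit = (
    'fetched-before-commit',
    lambda e, h: e['op'] != 'commit'
    or any(p['op'] == 'fetch' and p['id'] == e['id'] for p in h),
)
print(check_invariants(trace, [fetched_before_commit]))   # flags event 2
```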