975 results for Constrained network mapping


Relevance: 30.00%

Publisher:

Abstract:

This paper presented a novel approach to developing car-following models using reactive agent techniques for mapping perceptions to actions. The results showed that the model outperformed the Gipps and psychophysical families of car-following models. The standing of this work is highlighted by its acceptance and publication in the proceedings of the International IEEE Conference on Intelligent Transportation Systems (ITS), now recognised as the premier international conference on ITS. The paper acceptance rate for this conference was 67 percent. The standing of this paper is also evidenced by its listing in international databases such as Ei Inspec and IEEE Xplore, as well as in Google Scholar. Dr Dia co-authored this paper with his PhD student Sakda Panwai.
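The core idea of a reactive agent — mapping perceptions directly to actions — can be illustrated with a minimal sketch. The rule and its gain constants below are purely illustrative and are not the model from the paper, which was developed and validated against real traffic data:

```python
def reactive_acceleration(gap_m, closing_speed_ms,
                          desired_gap_m=20.0, k_gap=0.1,
                          k_speed=0.5, a_max=2.0):
    """Reactive car-following rule: map the follower's perception
    (spacing to the leader, relative speed = leader minus follower)
    straight to an action (acceleration), clipped to a plausible bound.
    All constants are illustrative, not calibrated values."""
    accel = k_gap * (gap_m - desired_gap_m) + k_speed * closing_speed_ms
    return max(-a_max, min(a_max, accel))

# Too close and closing fast -> brake; too far behind -> accelerate.
brake = reactive_acceleration(gap_m=10.0, closing_speed_ms=-3.0)
speed_up = reactive_acceleration(gap_m=35.0, closing_speed_ms=0.0)
```

The appeal of this formulation is that the perception-to-action mapping is explicit and can be replaced by a learned component without changing the simulation loop around it.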

Relevance: 30.00%

Publisher:

Abstract:

This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an `interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
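The GTM's central construction — a grid of latent points mapped through a smooth nonlinearity into data space, each carrying a Gaussian — can be sketched in a few lines. The hand-picked mapping `y(x)` and noise level below are illustrative stand-ins for the RBF network and parameters that the GTM actually learns by EM:

```python
import math

# Latent grid: K points on a one-dimensional latent space in [-1, 1].
K = 10
latent = [-1 + 2 * k / (K - 1) for k in range(K)]

# A smooth nonlinear mapping y(x) from latent to 2-D data space.
# In the GTM this is a learned RBF network; the curve here is a
# hand-picked stand-in.
def y(x):
    return (x, math.sin(math.pi * x))

centres = [y(x) for x in latent]
beta = 50.0  # inverse noise variance of the Gaussian components (illustrative)

def responsibility(t):
    """Posterior over latent grid points given data point t: the
    quantity the GTM uses to project data into latent space for
    visualisation."""
    w = [math.exp(-0.5 * beta * ((t[0] - c[0]) ** 2 + (t[1] - c[1]) ** 2))
         for c in centres]
    s = sum(w)
    return [wi / s for wi in w]
```

A data point is then visualised at, for example, the mean or mode of this posterior over the latent grid.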

Relevance: 30.00%

Publisher:

Abstract:

We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, which often arise in the solution of inverse plant models. In order to obtain a complete description of the inverse model, more general multi-component distributions must be modeled. In this paper we test whether our proposed sampling approach can be used when considering arbitrary conditional probability distributions. These arbitrary distributions will be modeled by a mixture density network. Importance sampling provides a structured and principled approach to constrain the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution will be demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
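Why a mixture is needed for a multi-valued inverse can be shown with a toy plant y = u², whose inverse has two branches (u = ±√y) — exactly the situation where a single Gaussian fails. The two-component mixture below is a hand-written stand-in for an MDN's conditional output, and the candidate-selection loop is a simplified illustration of sampling-based search, not the paper's controller:

```python
import random

random.seed(0)

# Two-component Gaussian mixture standing in for the conditional
# density p(u | y_target) that a mixture density network would output:
# (weight, mean, standard deviation) per component.
mixture = [(0.5, -1.0, 0.3), (0.5, 1.0, 0.3)]

def sample_mixture():
    """Draw one sample: pick a component by weight, then sample it."""
    r, acc = random.random(), 0.0
    for w, mu, sd in mixture:
        acc += w
        if r <= acc:
            return random.gauss(mu, sd)
    return random.gauss(mixture[-1][1], mixture[-1][2])

def plant(u):
    """Toy plant y = u**2: the inverse is multi-valued (u = +1 and
    u = -1 both give y = 1), so a single-Gaussian inverse model fails."""
    return u * u

def best_control(y_target, n=200):
    """Keep the sampled candidate whose plant output is closest to the
    target -- a simplified stand-in for the sampling-based search."""
    candidates = [sample_mixture() for _ in range(n)]
    return min(candidates, key=lambda u: (plant(u) - y_target) ** 2)

u_star = best_control(1.0)
```

Because the mixture places mass on both inverse branches, the sampler can reach the target from either side, which a unimodal Gaussian centred at the (useless) average u = 0 cannot.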

Relevance: 30.00%

Publisher:

Abstract:

Substantial behavioural and neuropsychological evidence has been amassed to support the dual-route model of morphological processing, which distinguishes between a rule-based system for regular items (walk–walked, call–called) and an associative system for the irregular items (go–went). Some neural-network models attempt to explain the neuropsychological and brain-mapping dissociations in terms of single-system associative processing. We show that there are problems in the accounts of homogeneous networks in the light of recent brain-mapping evidence of systematic double-dissociation. We also examine the superior capabilities of more internally differentiated connectionist models, which, under certain conditions, display systematic double-dissociations. It appears that the more differentiation models show, the more easily they account for dissociation patterns, yet without implementing symbolic computations.

Relevance: 30.00%

Publisher:

Abstract:

Neuronal network oscillations are a unifying phenomenon in neuroscience research, with comparable measurements across scales and species. Cortical oscillations are of central importance in the characterization of neuronal network function in health and disease and are influential in effective drug development. Whilst animal in vitro and in vivo electrophysiology is able to characterize pharmacologically induced modulations in neuronal activity, present human counterparts have spatial and temporal limitations. Consequently, the potential applications for a human equivalent are extensive. Here, we demonstrate a novel implementation of contemporary neuroimaging methods called pharmaco-magnetoencephalography. This approach determines the spatial profile of neuronal network oscillatory power change across the cortex following drug administration and reconstructs the time course of these modulations at focal regions of interest. As a proof of concept, we characterize the nonspecific GABAergic modulator diazepam, which has a broad range of therapeutic applications. We demonstrate that diazepam variously modulates θ (4–7 Hz), α (7–14 Hz), β (15–25 Hz), and γ (30–80 Hz) frequency oscillations in specific regions of the cortex, with a pharmacodynamic profile consistent with that of drug uptake. We examine the relevance of these results with regard to the spatial and temporal observations from other modalities and the various therapeutic consequences of diazepam and discuss the potential applications of such an approach in terms of drug development and translational neuroscience.

Relevance: 30.00%

Publisher:

Abstract:

Satellite-borne scatterometers are used to measure backscattered micro-wave radiation from the ocean surface. This data may be used to infer surface wind vectors where no direct measurements exist. Inherent in this data are outliers owing to aberrations on the water surface and measurement errors within the equipment. We present two techniques for identifying outliers using neural networks; the outliers may then be removed to improve models derived from the data. Firstly, the generative topographic mapping (GTM) is used to create a probability density model; data with low probability under the model may be classed as outliers. In the second part of the paper, a sensor model with input-dependent noise is used and outliers are identified based on their probability under this model. GTM was successfully modified to incorporate prior knowledge of the shape of the observation manifold; however, GTM could not learn the double-skinned nature of the observation manifold. To learn this double-skinned manifold necessitated the use of a sensor model which imposes strong constraints on the mapping. The results using GTM with a fixed noise level suggested the noise level may vary as a function of wind speed. This was confirmed by experiments using a sensor model with input-dependent noise, where the variation in noise is most sensitive to the wind speed input. Both models successfully identified gross outliers with the largest differences between models occurring at low wind speeds. © 2003 Elsevier Science Ltd. All rights reserved.
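The density-based outlier idea — fit a probability model, then flag points that are improbable under it — can be shown in miniature with a single Gaussian standing in for the GTM or sensor model, and an illustrative log-density threshold:

```python
import math

def gauss_logpdf(x, mu, sd):
    """Log-density of x under a Gaussian N(mu, sd**2)."""
    return -0.5 * math.log(2 * math.pi * sd * sd) - 0.5 * ((x - mu) / sd) ** 2

# Toy one-dimensional measurements; 10.0 plays the gross outlier.
data = [0.1, -0.2, 0.05, 0.3, -0.1, 10.0]
mu = sum(data) / len(data)
sd = (sum((x - mu) ** 2 for x in data) / len(data)) ** 0.5

# Points improbable under the fitted model are flagged as outliers;
# the threshold value is illustrative, not from the paper.
threshold = -3.5
outliers = [x for x in data if gauss_logpdf(x, mu, sd) < threshold]
```

The paper's models replace the single Gaussian with a density over the full observation manifold (GTM) and with input-dependent noise (the sensor model), but the flagging step is the same: low probability under the model means outlier.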

Relevance: 30.00%

Publisher:

Abstract:

The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data-exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with measurements of varying spatial resolution, with the interpolation of averages over larger areas, and with providing information on the interpolation error to the end user. In addition, monitoring network optimisation is addressed in a non-automatic context.
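The flavour of automatic interpolation at arbitrary locations can be conveyed with inverse-distance weighting — a deliberately simple stand-in for the geostatistical (kriging-based) methods that INTAMAP actually extends, and one that provides no interpolation-error estimate:

```python
def idw(x, y, obs, power=2.0):
    """Inverse-distance-weighted estimate of a variable at (x, y)
    from observations obs = [(xi, yi, value), ...]. A simple stand-in
    for geostatistical interpolation; no error estimate is produced."""
    num = den = 0.0
    for xi, yi, v in obs:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v  # exact hit: return the measurement itself
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

Kriging improves on this by modelling spatial correlation explicitly, which is what allows INTAMAP to report the interpolation error alongside the estimate.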

Relevance: 30.00%

Publisher:

Abstract:

The application of high-power voltage-source converters (VSCs) to multiterminal dc networks is attracting research interest. The development of VSC-based dc networks is constrained by the lack of operational experience, the immaturity of appropriate protective devices, and the lack of appropriate fault analysis techniques. VSCs are vulnerable to dc-cable short-circuit and ground faults due to the high discharge current from the dc-link capacitance. However, faults occurring along the interconnecting dc cables are most likely to threaten system operation. In this paper, cable faults in VSC-based dc networks are analyzed in detail with the identification and definition of the most serious stages of the fault that need to be avoided. A fault location method is proposed because this is a prerequisite for an effective design of a fault protection scheme. It is demonstrated that it is relatively easy to evaluate the distance to a short-circuit fault using voltage reference comparison. For the more difficult challenge of locating ground faults, a method of estimating both the ground resistance and the distance to the fault is proposed by analyzing the initial stage of the fault transient. Analysis of the proposed method is provided and is based on simulation results, with a range of fault resistances, distances, and operational conditions considered.
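The principle behind short-circuit distance estimation — that the measured voltage drop is proportional to the cable resistance, and hence the distance, between converter and fault — reduces to a one-line relation. This sketch assumes a bolted (zero-resistance) fault, a uniform per-kilometre cable resistance, and a steady fault current; it is a simplification of the voltage-reference-comparison method, not the paper's algorithm:

```python
def distance_to_fault_km(v_drop, i_fault, r_ohm_per_km):
    """Under the stated assumptions, V = I * r * d along the cable,
    so the distance to a bolted fault is d = V / (I * r)."""
    return v_drop / (i_fault * r_ohm_per_km)

# Hypothetical numbers: 50 V drop, 1 kA fault current, 0.01 ohm/km.
d = distance_to_fault_km(50.0, 1000.0, 0.01)
```

Ground faults are harder precisely because the unknown ground resistance adds a second unknown to this relation, which is why the paper estimates both quantities from the initial fault transient.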

Relevance: 30.00%

Publisher:

Abstract:

Neuroimaging studies have consistently shown that working memory (WM) tasks engage a distributed neural network that primarily includes the dorsolateral prefrontal cortex, the parietal cortex, and the anterior cingulate cortex. The current challenge is to provide a mechanistic account of the changes observed in regional activity. To achieve this, we characterized neuroplastic responses in effective connectivity between these regions at increasing WM loads using dynamic causal modeling of functional magnetic resonance imaging data obtained from healthy individuals during a verbal n-back task. Our data demonstrate that increasing memory load was associated with (a) right-hemisphere dominance, (b) increasing forward (i.e., posterior to anterior) effective connectivity within the WM network, and (c) reduction in individual variability in WM network architecture resulting in the right-hemisphere forward model reaching an exceedance probability of 99% in the most demanding condition. Our results provide direct empirical support that task difficulty, in our case WM load, is a significant moderator of short-term plasticity, complementing existing theories of task-related reduction in variability in neural networks. Hum Brain Mapp, 2013. © 2013 Wiley Periodicals, Inc.

Relevance: 30.00%

Publisher:

Abstract:

Government agencies use information technology extensively to collect business data for regulatory purposes. Data communication standards form part of the infrastructure with which businesses must conform to survive. We examine the development of, and emerging competition between, two open business reporting data standards adopted by government bodies in France; EDIFACT (incumbent) and XBRL (challenger). The research explores whether an incumbent may be displaced in a setting in which the contention is unresolved. We apply Latour’s (1992) translation map to trace the enrolments and detours in the battle. We find that regulators play an important role as allies in the development of the standards. The antecedent networks in which the standards are located embed strong beliefs that become barriers to collaboration and fuel the battle. One of the key differentiating attitudes is whether speed is more important than legitimacy. The failure of collaboration encourages competition. The newness of XBRL’s technology just as regulators need to respond to an economic crisis and its adoption by French regulators not using EDIFACT create an opportunity for the challenger to make significant network gains over the longer term. ANT also highlights the importance of the preservation of key components of EDIFACT in ebXML.

Relevance: 30.00%

Publisher:

Abstract:

This PhD thesis analyses networks of knowledge flows, focusing on the role of indirect ties in the knowledge transfer, knowledge accumulation and knowledge creation process. It extends and improves existing methods for mapping networks of knowledge flows in two different applications and contributes to two streams of research. To support the underlying idea of this thesis, which is finding an alternative method to rank indirect network ties to shed new light on the dynamics of knowledge transfer, we apply Ordered Weighted Averaging (OWA) to two different network contexts. Knowledge flows in patent citation networks and in a company supply chain network are analysed using Social Network Analysis (SNA) and the OWA operator. The OWA is used here for the first time (i) to rank indirect citations in patent networks, providing new insight into their role in transferring knowledge among network nodes, and to analyse a long chain of patent generations over 13 years; and (ii) to rank indirect relations in a company supply chain network, to shed light on the role of indirectly connected individuals involved in the knowledge transfer and creation processes and to contribute to the literature on knowledge management in a supply chain. In doing so, indirect ties are measured and their role as means of knowledge transfer is shown. Thus, this thesis represents a first attempt to bridge the OWA and SNA fields and to show that the two methods can be used together to enrich the understanding of the role of indirectly connected nodes in a network. More specifically, the OWA scores enrich our understanding of knowledge evolution over time within complex networks. Future research can show the usefulness of the OWA operator in different complex networks, such as online social networks consisting of thousands of nodes.
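The OWA operator at the heart of the ranking method is simple to state: sort the inputs, then apply position-based weights. A minimal implementation of the standard operator follows (the thesis's application of it to citation chains and supply-chain ties builds on this basic form):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: sort the inputs in descending
    order, then take the weighted sum with position-based (rather
    than source-based) weights. Weights must sum to 1."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

By choosing the weight vector, OWA interpolates between the maximum, the arithmetic mean, and the minimum of its inputs — the flexibility that makes it useful for ranking chains of indirect ties.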

Relevance: 30.00%

Publisher:

Abstract:

Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring, and smart environments. Many WSNs have mission-critical tasks, such as military applications, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external or internal. In an external attack, the attacking node is not an authorized participant of the sensor network. Cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, eliminates all such preventive efforts. Knowing the probability of node compromise helps systems detect and defend against it. Although some approaches can detect and defend against node compromise, few of them can estimate its probability. Hence, we develop basic uniform, basic gradient, intelligent uniform and intelligent gradient models for node compromise distribution, using probability theory to adapt to different application environments. These models allow systems to estimate the probability of node compromise. Applying these models in system security designs can improve system security and decrease overheads in nearly every security area.
Moreover, based on these models, we design a novel secure routing algorithm to defend against the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node-compromise detection mechanism. The routing paths in our algorithm detour around nodes that have been detected as compromised or that have high probabilities of being compromised. Simulation results show that our algorithm effectively protects routing paths from node compromise, whether detected or not.
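The uniform and gradient compromise-distribution models can be caricatured as simple probability fields over the deployment area. The functional forms below (a constant field, and exponential decay with distance from a point of interest) are illustrative choices, not the thesis's exact definitions; `path_safety` shows how such a model could feed a detouring routing metric:

```python
import math

def compromise_prob_uniform(p0):
    """Uniform model: every node shares the same compromise
    probability regardless of its position."""
    return lambda x, y: p0

def compromise_prob_gradient(p_centre, decay, cx=0.0, cy=0.0):
    """Gradient model: compromise probability is highest near a point
    of interest (e.g. the base station at (cx, cy)) and decays with
    distance. The exponential form is an illustrative assumption."""
    def p(x, y):
        d = math.hypot(x - cx, y - cy)
        return p_centre * math.exp(-decay * d)
    return p

def path_safety(path, p):
    """Probability that no node on the path is compromised, assuming
    independent compromise events -- a candidate routing metric."""
    prod = 1.0
    for x, y in path:
        prod *= 1.0 - p(x, y)
    return prod
```

A routing algorithm can then prefer, among candidate paths, the one with the highest `path_safety`, which naturally detours around high-probability regions.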

Relevance: 30.00%

Publisher:

Abstract:

The estimation of pavement layer moduli through the use of an artificial neural network is a new concept which provides a less strenuous strategy for backcalculation procedures. Artificial Neural Networks are biologically inspired models of the human nervous system. They are specifically designed to carry out a mapping characteristic. This study demonstrates how an artificial neural network uses non-destructive pavement test data in determining flexible pavement layer moduli. The input parameters include plate loadings, corresponding sensor deflections, temperature of pavement surface, pavement layer thicknesses and independently deduced pavement layer moduli.
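The "mapping characteristic" the abstract refers to is simply a learned function from measured inputs to layer moduli; a one-hidden-layer forward pass makes this concrete. The weights below are placeholders — in the study they would be fitted to the non-destructive test data described (plate loads, sensor deflections, surface temperature, layer thicknesses):

```python
import math

def mlp_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer network with tanh units: a learned mapping
    from test inputs to a pavement layer modulus. Weight shapes:
    w_hidden is a list of per-unit input-weight lists."""
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Placeholder weights purely for illustration.
modulus = mlp_forward([0.5, 0.2], [[1.0, -1.0], [0.5, 0.5]],
                      [0.0, 0.1], [2.0, 1.0], 3.0)
```

Once trained, evaluating this forward pass replaces the iterative optimisation of conventional backcalculation, which is where the reduction in effort comes from.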

Relevance: 30.00%

Publisher:

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
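The claim that absorption times of a CTMC follow a phase-type distribution can be made concrete with a Gillespie-style simulation: draw an exponential holding time in each transient state, jump according to the relative rates, and record the time to absorption. The three-state rate structure below is an arbitrary toy, not an actual RET geometry from the dissertation:

```python
import random

random.seed(1)

# Rate structure of a 3-state CTMC: states 0 and 1 are transient,
# state 2 is absorbing (standing in for photon emission). Each entry
# is a list of (destination, rate) pairs.
rates = {
    0: [(1, 2.0), (2, 0.5)],
    1: [(0, 1.0), (2, 3.0)],
    2: [],  # absorbing
}

def sample_absorption_time(start=0):
    """Simulate one trajectory to absorption. The resulting time
    follows a phase-type distribution determined by the chain."""
    t, state = 0.0, start
    while rates[state]:
        total = sum(r for _, r in rates[state])
        t += random.expovariate(total)        # exponential holding time
        u, acc = random.random() * total, 0.0
        for dest, r in rates[state]:          # jump by relative rates
            acc += r
            if u <= acc:
                state = dest
                break
    return t

samples = [sample_absorption_time() for _ in range(2000)]
```

For this toy chain the expected absorption time from state 0 works out to 0.75 (solving E0 = 0.4 + 0.8·E1, E1 = 0.25 + 0.25·E0), so the empirical mean of the samples should settle near that value.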

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.