26 results for Network-based routing
Abstract:
This paper presented a novel approach to developing car-following models using reactive agent techniques for mapping perceptions to actions. The results showed that the model outperformed the Gipps and psychophysical families of car-following models. The standing of this work is highlighted by its acceptance and publication in the proceedings of the International IEEE Conference on Intelligent Transportation Systems (ITS), now recognised as the premier international conference on ITS. The paper acceptance rate for this conference was 67 percent. The standing of this paper is also evidenced by its listing in international databases such as Ei Inspec and IEEE Xplore, as well as in Google Scholar. Dr Dia co-authored this paper with his PhD student Sakda Panwai.
Abstract:
A specialised reconfigurable architecture for telecommunication base-band processing is augmented with testing resources. The routing network is linked via virtual wire hardware modules to reduce the area occupied by connecting buses. The number of switches within the routing matrices is also minimised, which increases throughput without sacrificing flexibility. The testing algorithm was developed to systematically search for faults in the processing modules and the flexible high-speed routing network within the architecture. The testing algorithm starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches. The local switch matrix is also tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. This algorithm scans all possible paths within the interconnection network exhaustively and reports all faults. Strategies can be inserted to bypass minor faults.
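The staged test order described above (shared memory, then controller, then route-through switches, local switches, local memory, and finally the processing modules) can be sketched as a simple driver loop. This is a minimal illustrative sketch; the stage names, `test` interface, and mock architecture are assumptions for illustration, not the paper's actual test harness.

```python
# Hypothetical sketch of the hierarchical test sequence: each stage is
# tested in order and all detected faults are collected and reported.

TEST_STAGES = (
    "shared_memory",
    "master_controller",
    "route_through_switches",
    "local_switches",
    "local_memory",
    "processing_modules",
)

def run_test_sequence(arch):
    """Run the test stages in order; return the list of reported faults."""
    faults = []
    for stage in TEST_STAGES:
        faults.extend(arch.test(stage))
    return faults

class MockArchitecture:
    """Toy stand-in for the real hardware: every stage passes except an
    optionally seeded faulty stage."""
    def __init__(self, faulty_stage=None):
        self.faulty_stage = faulty_stage

    def test(self, stage):
        return [f"fault in {stage}"] if stage == self.faulty_stage else []

faults = run_test_sequence(MockArchitecture(faulty_stage="local_switches"))
print(faults)  # ['fault in local_switches']
```

Because every stage is visited regardless of earlier failures, the driver reports all faults rather than stopping at the first, matching the exhaustive-scan behaviour described in the abstract.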
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
Abstract:
The conventional analysis for the estimation of the tortuosity factor for transport in porous media is modified here to account for the effect of pore aspect ratio. Structural models of the porous medium are also constructed for calculating the aspect ratio as a function of porosity. Comparison of the model predictions with the extensive data of Currie (1960) for the effective diffusivity of hydrogen in packed beds shows good agreement with a network model of randomly oriented intersecting pores for porosities up to about 50 percent, which is the region of practical interest. The predictions based on this network model are also found to be in better agreement with the data of Currie than earlier expressions developed for unconsolidated and grainy media.
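The quantity being predicted above is the effective diffusivity, which in the standard textbook form relates bulk diffusivity, porosity, and tortuosity. The sketch below shows only this baseline relation, not the paper's modified aspect-ratio analysis; the numerical values are illustrative, not Currie's data.

```python
def effective_diffusivity(d_bulk, porosity, tortuosity):
    """Textbook relation D_eff = (epsilon / tau) * D_bulk, where epsilon is
    porosity and tau the tortuosity factor. The paper modifies how tau is
    estimated (via pore aspect ratio); this baseline formula is unchanged."""
    return porosity * d_bulk / tortuosity

# Illustrative numbers only: a gas with bulk diffusivity 0.7 cm^2/s in a
# bed of porosity 0.4 and tortuosity 2 has D_eff = 0.14 cm^2/s.
d_eff = effective_diffusivity(0.7, 0.4, 2.0)
```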
Abstract:
Spatial data is now used extensively in the Web environment, providing online customised maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large sizes and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by applications is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine. That is, the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
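The core server-side decision described above, serving the coarsest stored representation that still satisfies the application's accuracy requirement, can be sketched as a simple lookup. The level/error mapping and selection rule below are illustrative assumptions, not the paper's actual engine.

```python
def choose_resolution(levels, tolerance):
    """Pick the coarsest stored resolution level whose maximum geometric
    error is within the application's tolerance. `levels` maps level number
    to max error; lower level numbers are coarser (hypothetical scheme).
    Falls back to the finest level when no stored level meets the tolerance."""
    for level in sorted(levels):            # coarse to fine
        if levels[level] <= tolerance:
            return level
    return max(levels)                      # finest available

# Hypothetical pyramid: level 0 is coarsest (error 100 m), level 2 finest.
pyramid = {0: 100.0, 1: 20.0, 2: 5.0}
level = choose_resolution(pyramid, tolerance=25.0)  # -> 1
```

Serving a coarser level means less geometry to process and transmit, which is exactly the cost reduction the abstract claims for low-resolution requests.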
Abstract:
Individuals with Autism Spectrum Disorder (ASD) are generally thought to have impaired attentional and executive function, upon which all their cognitive and behavioural functions are based. Mental rotation is a recognized visuo-spatial task, involving spatial working memory, known to involve activation in the fronto-parietal networks. To elucidate the functioning of fronto-parietal networks in ASD, the aim of this study was to use fMRI techniques with a mental rotation task to characterize the underlying functional neural system. Sixteen male participants (seven with high-functioning autism or Asperger's syndrome; nine age- and performance-IQ-matched controls) underwent fMRI. Participants were presented with 18 baseline and 18 rotation trials, with stimuli rotated 3-dimensionally (45°-180°). Data were acquired on a 3-Tesla scanner. The posterior parietal cortex, the most widely accepted area reported to be involved in the processing of visuo-spatial information, was found to be activated in both groups; however, the ASD group showed decreased activation in highly interconnected cortical and subcortical frontal structures, including lateral and medial Brodmann area 6, the frontal eye fields, the caudate, the dorsolateral prefrontal cortex and the anterior cingulate. The suggested connectivity between these regions indicates that one or more circuits are impaired as a result of the disorder. In future, it is hoped that the possible point of origin of this dysfunction can be identified, or indeed whether the entire network is dysfunctional.
Abstract:
Castells argues that society is being reconstituted according to the global logic of networks. This paper discusses the ways in which a globalised network logic transforms the nature of young people's transitions from school to work. Furthermore, the paper explores the ways in which this network logic has restructured the manner in which youth transitions are managed via the emergence of a Vocational Education and Training (VET) agenda in Australian post-compulsory secondary schooling. It also notes the implications of the emergence of the 'network society' for locality generally and for selected localities specific to the research upon which this paper is based. It suggests that schools represent nodes in a range of VET and other networks, and shows how schools and other agencies in particular localities mobilise their expertise to construct such networks. These networks are networked, funded and regulated at various levels: regionally, nationally and globally. But they are also facilitated by personal networking opportunities and capacities. The paper also points to the ways in which the 'reflexivity chances' of young people are shaped by this network logic, a situation that generates new forms of responsibility for schools and teachers with regard to the management of youth transitions.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
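The basic mechanism of importance sampling with an exponential change of measure can be illustrated on a much simpler model than the tandem network: a single birth-death walk (an M/M/1-style queue embedded at jumps), where the classic tilt swaps the arrival and service rates. This is only a sketch of the generic technique, with assumed rates; it is not the paper's buffer-dependent change of measure.

```python
import random

def overflow_prob_is(lam, mu, L, n_runs=20000, seed=1):
    """Estimate P(level reaches L before 0, starting from 1) for a random
    walk with up-probability lam/(lam+mu), using the classic exponential
    change of measure that swaps lam and mu. Each sampled path is weighted
    by its likelihood ratio so the estimator stays unbiased."""
    rng = random.Random(seed)
    p = lam / (lam + mu)       # original up-step probability
    p_tilt = mu / (lam + mu)   # tilted (swapped) up-step probability
    total = 0.0
    for _ in range(n_runs):
        x, lr = 1, 1.0
        while 0 < x < L:
            if rng.random() < p_tilt:
                x += 1
                lr *= p / p_tilt            # likelihood ratio, up step
            else:
                x -= 1
                lr *= (1 - p) / (1 - p_tilt)  # likelihood ratio, down step
        if x == L:
            total += lr
    return total / n_runs

# For lam=1, mu=2 the exact gambler's-ruin answer from state 1 is
# 1 / (2**L - 1), so the estimate can be checked directly.
est = overflow_prob_is(1.0, 2.0, 10)
```

Under the swapped measure the walk drifts upward, so overflow paths are common and each carries a small, nearly constant likelihood ratio; this is what keeps the relative error bounded, in contrast to crude simulation where overflows are almost never observed.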
Abstract:
Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less magical, and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature real-world problem relating to the lengths of paths in a network.
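For orientation, the one-qubit Deutsch problem mentioned in (a) can be simulated classically in a few lines using the standard textbook circuit (Hadamard, phase oracle, Hadamard). This sketch shows the conventional oracle-based version, not the paper's ROM-based encoding or its NMR implementation.

```python
import math

def deutsch_constant(f):
    """Classically simulate the one-qubit Deutsch circuit with a phase
    oracle |x> -> (-1)^f(x) |x>. Measuring |0> at the end means f is
    constant; |1> means f is balanced."""
    h = 1.0 / math.sqrt(2.0)
    amp = [h, h]                                   # H applied to |0>
    amp = [(-1) ** f(0) * amp[0],                  # phase oracle
           (-1) ** f(1) * amp[1]]
    amp = [h * (amp[0] + amp[1]),                  # second Hadamard
           h * (amp[0] - amp[1])]
    return abs(amp[0]) ** 2 > 0.5                  # True iff f is constant
```

A single query to the oracle suffices, whereas any classical algorithm must evaluate f at both inputs, which is the point of the Deutsch problem.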
Abstract:
A simple percolation theory-based method for determination of the pore network connectivity using liquid phase adsorption isotherm data combined with a density functional theory (DFT)-based pore size distribution is presented in this article. The liquid phase adsorption experiments have been performed using eight different esters as adsorbates and microporous-mesoporous activated carbons Filtrasorb-400, Norit ROW 0.8 and Norit ROX 0.8 as adsorbents. The density functional theory (DFT)-based pore size distributions of the carbons were obtained using DFT analysis of argon adsorption data. The mean micropore network coordination numbers, Z, of the carbons were determined based on DR characteristic plots and fitted saturation capacities using percolation theory. Based on this method, the critical molecular sizes of the model compounds used in this study were also obtained. The incorporation of percolation concepts in the prediction of multicomponent adsorption equilibria is also investigated, and found to improve the performance of the ideal adsorbed solution theory (IAST) model for the large molecules utilized in this study. (C) 2002 Elsevier Science B.V. All rights reserved.
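The role of the coordination number Z in such an analysis can be illustrated with a rough bond-percolation argument: a molecule can traverse the network only if the fraction of pores it fits through exceeds the percolation threshold, often approximated for 3D lattices by Z * p_c ≈ 1.5. This sketch uses that rule of thumb with made-up pore sizes; it is not the paper's DR-plot fitting procedure.

```python
def accessible(pore_sizes, molecule_size, z):
    """Rough percolation check: return (p, traversable) where p is the
    fraction of pores at least as wide as the molecule, and traversable
    indicates whether p exceeds the approximate bond percolation threshold
    p_c ~ 1.5 / Z for a network of coordination number Z (illustrative)."""
    p = sum(1 for d in pore_sizes if d >= molecule_size) / len(pore_sizes)
    p_c = 1.5 / z
    return p, p > p_c

# Hypothetical pore width distribution (nm) and two probe molecules.
pores = [0.5, 0.7, 1.0, 1.2]
p_small, ok_small = accessible(pores, 0.8, z=4)   # fits half the pores
p_large, ok_large = accessible(pores, 1.5, z=4)   # fits none
```

This captures why, in the abstract, large molecules benefit most from incorporating percolation concepts: once the accessible pore fraction drops near the threshold, connectivity rather than capacity limits uptake.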
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
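The baseline that such schemes improve upon is crude Monte Carlo: sample each edge up or down independently and count the samples in which the terminals remain connected. The sketch below shows this baseline only (on a toy two-edge network), since for highly reliable networks its cost explodes, which is precisely what motivates the evolution-model and hybrid schemes in the article.

```python
import random

def crude_mc_reliability(nodes, edges, p_up, s, t, n_samples=5000, seed=0):
    """Crude Monte Carlo estimate of two-terminal reliability: each edge
    is up independently with probability p_up; the estimate is the fraction
    of samples in which s and t are still connected (checked by DFS)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        adj = {v: [] for v in nodes}
        for u, v in edges:
            if rng.random() < p_up:     # edge survives this sample
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / n_samples

# Toy check: two parallel a-b edges with p_up = 0.9 have exact
# reliability 1 - (1 - 0.9)**2 = 0.99.
est = crude_mc_reliability({"a", "b"}, [("a", "b"), ("a", "b")], 0.9, "a", "b")
```

When reliability approaches 1, failures are the rare events, so the variance of this estimator relative to the unreliability becomes enormous; the graph-evolution and hybrid partitioning techniques in the article exist to avoid exactly that.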