829 results for NETWORK MODEL
Abstract:
This paper describes results obtained using the modified Kanerva model to perform word recognition in continuous speech after being trained on the multi-speaker Alvey 'Hotel' speech corpus. Theoretical discoveries have recently enabled us to increase the speed of execution of part of the model by two orders of magnitude over that previously reported by Prager & Fallside. The memory required for the operation of the model has been similarly reduced. The recognition accuracy reaches 95% without syntactic constraints when tested on different data from seven trained speakers. Real-time simulation of a model with 9,734 active units is now possible in both training and recognition modes using the Alvey PARSIFAL transputer array. The modified Kanerva model is a static network consisting of a fixed nonlinear mapping (location matching) followed by a single layer of conventional adaptive links. A section of preprocessed speech is transformed by the nonlinear mapping to a high-dimensional representation. From this intermediate representation a simple linear mapping is able to perform complex pattern discrimination to form the output, indicating the nature of the speech features present in the input window.
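As a rough illustration of the architecture described above, the following Python sketch implements a toy version of the two stages: a fixed, untrained location-matching stage that activates the units whose random binary addresses lie within a Hamming radius of the input window, followed by a single layer of adaptive links read out linearly. The sizes, the binary input encoding and the Hamming-radius rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# A minimal sketch of the modified Kanerva model's forward pass, assuming binary
# input frames, random fixed "location" addresses and a Hamming-distance matching
# rule; all sizes and the matching rule are simplifying assumptions.

rng = np.random.default_rng(0)

n_input = 256        # bits in one preprocessed speech window (assumed)
n_locations = 1024   # active units (the paper simulates 9,734)
n_outputs = 48       # speech-feature labels (assumed)
radius = 115         # Hamming radius for location matching (assumed)

# Fixed, untrained stage: random binary location addresses.
locations = rng.integers(0, 2, size=(n_locations, n_input))

# Trainable stage: a single layer of conventional adaptive links.
W = np.zeros((n_outputs, n_locations))

def location_match(x):
    """Fixed nonlinear mapping: which locations lie within the Hamming radius."""
    dist = np.sum(locations != x, axis=1)
    return (dist <= radius).astype(float)

def forward(x):
    """High-dimensional intermediate representation, then a linear read-out."""
    h = location_match(x)
    return W @ h

x = rng.integers(0, 2, size=n_input)   # one input window
scores = forward(x)                     # one score per speech feature
```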
Abstract:
A model of the auditory periphery assembled from analog network submodels of all the relevant anatomical structures is described. There is bidirectional coupling between networks representing the outer ear, middle ear and cochlea. A simple voltage source representation of the outer hair cells provides level-dependent basilar membrane curves. The networks are translated into efficient computational modules by means of wave digital filtering. A feedback unit regulates the average firing rate at the output of an inner hair cell module via a simplified modelling of the dynamics of the descending paths to the peripheral ear. This leads to a digital model of the entire auditory periphery with applications to both speech and hearing research.
Abstract:
A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to be able to train rapidly on connected speech data and recognize further speech data with a label error rate of 0.68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern discrimination properties. Kanerva presented his theory of a self-propagating search in 1984, and showed theoretically that large-scale versions of his model would have powerful pattern matching properties. This paper describes how the design for the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form may be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. In order to recognize speech features in different contexts it is necessary for a network to be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem, and the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. A second alternative is used in the modified Kanerva model. A non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable. A single-layer network may then be used to perform the recognition. The advantage of this solution over the other using multi-layer networks lies in the greater power and speed of the single-layer network training algorithm. © 1989.
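The speed advantage claimed above comes from training only the single output layer on top of the fixed high-dimensional representation. A minimal Python sketch of that idea is given below, using the delta rule on randomly generated sparse features; the learning rate, dimensions and labels are illustrative assumptions, and the paper's actual update rule may differ.

```python
import numpy as np

# A minimal sketch of training only the single adaptive output layer on top of a
# fixed high-dimensional representation, using the delta rule. All sizes, the
# learning rate and the random features are illustrative assumptions.

rng = np.random.default_rng(1)

n_hidden, n_outputs, n_frames = 1024, 48, 5000
H = (rng.random((n_frames, n_hidden)) < 0.05).astype(float)  # sparse fixed features
labels = rng.integers(0, n_outputs, size=n_frames)
Y = np.eye(n_outputs)[labels]                                # one-hot targets

W = np.zeros((n_outputs, n_hidden))
lr = 0.1

for epoch in range(5):
    for h, y in zip(H, Y):
        err = y - W @ h          # single-layer error: no back-propagation needed
        W += lr * np.outer(err, h)

accuracy = np.mean(np.argmax(H @ W.T, axis=1) == labels)
print(accuracy)
```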
Abstract:
The development of product design has increasingly become a collaborative process. Conflicts often arise in the design process because of interactions among multiple actors, so the resolution of conflict situations is a critical element of collaborative design. In this paper, a methodology based on a process model is proposed to support conflict management. This methodology deals mainly with identifying the conflict-resolution team and evaluating the impact of the chosen solution. The proposed process model provides traceability of the design process and identifies the network of data dependencies, which makes it possible to identify the actors involved in conflict resolution as well as to evaluate the impact of the selected solution. Copyright © 2006 IFAC.
Abstract:
Supply chain tracking information is one of the main levers for achieving operational efficiency. RFID technology and the EPC Network can deliver serial-level product information that was previously unavailable. However, these technologies still fall short of managers' full visibility requirements, since they provide information about product location only at specific time instants. This paper proposes a model that uses the data provided by the EPC Network to deliver enhanced tracking information to the final user. Following a Bayesian approach, the model produces realistic ongoing estimates of the current and future location of products across a supply network, taking into account the characteristics of the product behavior and the configuration of the data collection points. These estimates can then be used to optimize operational decisions that depend on product availability at different locations. The enhancement of tracking information quality is highlighted through an example. © 2009 IFAC.
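To make the Bayesian idea concrete, the sketch below implements a toy discrete forward filter over a four-location supply network: the belief about a product's current location is propagated by a transition matrix each period and conditioned on EPC read events (or the absence of a read) at the data collection points. The network layout, transition probabilities and reader reliability are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy Bayesian tracking over a small supply network: a discrete forward filter
# keeps a belief over the product's current location and updates it on each
# observation. All probabilities and the layout below are made-up assumptions.

locations = ["factory", "warehouse", "distribution_centre", "store"]
readers = {"factory", "distribution_centre", "store"}   # data collection points

# Per-period movement probabilities between locations (rows sum to 1).
T = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 1.0],
])

p_read = 0.9                              # chance a reader detects a present tag
belief = np.array([1.0, 0.0, 0.0, 0.0])   # the product starts at the factory

def predict(belief):
    """Propagate the location belief one period forward through the network."""
    return belief @ T

def update(belief, read_at):
    """Condition the belief on one period's observation (a read, or no read at all)."""
    if read_at is None:
        like = np.array([1.0 - p_read if loc in readers else 1.0 for loc in locations])
    else:
        like = np.array([p_read if loc == read_at else 0.01 for loc in locations])
    post = belief * like
    return post / post.sum()

# Example: one period with no read anywhere, then a read at the distribution centre.
belief = update(predict(belief), None)
belief = update(predict(belief), "distribution_centre")
print(dict(zip(locations, belief.round(3))))
```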
Abstract:
Purpose - In recent years there has been increasing interest in Product Service Systems (PSSs) as a business model for selling integrated product and service offerings. To date, there has been extensive research into the benefits of PSS to manufacturers and their customers, but there has been limited research into the effect of PSS on the upstream supply chain. This paper seeks to address this gap in the research. Design/methodology/approach - The study uses a case-based approach, which is appropriate for exploratory research of this type. In-depth interviews were conducted with key personnel in a focal firm and two members of its supply chain, and the results were analysed to identify emergent themes. Findings - The research has identified differences in supplier behaviour dependent on their role in PSS delivery and their relationship with the PSS provider. In particular, it suggests that for a successful partnership it is important to align the objectives between PSS provider and suppliers. Originality/value - This research provides a detailed investigation into a PSS supply chain and highlights the complexity of roles and relationships among the organizations within it. It will be of value to other PSS researchers and organizations transitioning to the delivery of PSS. © Emerald Group Publishing Limited.
Abstract:
This paper provides a case study on the deepest excavation carried out so far in the construction of the metro network in Shanghai, which typically features soft ground. The excavation is 38 m deep, with retaining walls 65 m deep braced by 9 levels of concrete props. To obtain a quick and rough prediction, two centrifuge model tests were conducted: one for the 'standard' section with greenfield surroundings and the other with an adjacent piled building. The tests were carried out in a run-stop-excavation-run style, in which excavation was conducted manually. By analyzing the lateral wall displacement, ground deformation, bending moment and earth pressure, the test results are shown to be reasonably convincing, and the design and construction were validated. Such industry-orientated centrifuge modelling was shown to be useful in understanding the performance of geotechnical processes, especially when engineers lack relevant field experience. © 2010 Taylor & Francis Group, London.
Abstract:
This paper presents an efficient algorithm for robust network reconstruction of Linear Time-Invariant (LTI) systems in the presence of noise, estimation errors and unmodelled nonlinearities. The method here builds on previous work [1] on robust reconstruction to provide a practical implementation with polynomial computational complexity. Following the same experimental protocol, the algorithm obtains a set of structurally-related candidate solutions spanning every level of sparsity. We prove the existence of a magnitude bound on the noise, which if satisfied, guarantees that one of these structures is the correct solution. A problem-specific model-selection procedure then selects a single solution from this set and provides a measure of confidence in that solution. Extensive simulations quantify the expected performance for different levels of noise and show that significantly more noise can be tolerated in comparison to the original method. © 2012 IEEE.
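The sketch below illustrates, in a generic and simplified form, the pattern the abstract describes: one candidate structure per sparsity level, followed by a model-selection step that picks a single solution. Greedy least-squares selection and a BIC score are used purely as stand-ins; they are not the paper's robust reconstruction algorithm or its confidence measure.

```python
import numpy as np

# Generic sketch: build one candidate set of incoming links per sparsity level,
# then let a model-selection score choose one. The data, greedy selection and BIC
# are illustrative stand-ins, not the paper's method.

rng = np.random.default_rng(2)

n_samples, n_nodes = 200, 6
X = rng.normal(size=(n_samples, n_nodes))
true_support = [0, 3]                       # this node is driven by nodes 0 and 3
y = X[:, true_support] @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=n_samples)

def fit(support):
    """Least-squares fit restricted to a candidate set of incoming links."""
    w, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    resid = y - X[:, support] @ w
    return w, float(resid @ resid)

# One candidate structure per sparsity level, grown greedily.
candidates, support = [], []
for _ in range(n_nodes):
    best = min((j for j in range(n_nodes) if j not in support),
               key=lambda j: fit(support + [j])[1])
    support = support + [best]
    candidates.append(list(support))

def bic(support):
    _, rss = fit(support)
    return n_samples * np.log(rss / n_samples) + len(support) * np.log(n_samples)

chosen = min(candidates, key=bic)
print("selected incoming links:", sorted(chosen))
```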
Abstract:
The fundamental aim of clustering algorithms is to partition data points. We consider tasks where the discovered partition is allowed to vary with some covariate such as space or time. One approach would be to use fragmentation-coagulation processes, but these, being Markov processes, are restricted to linear or tree structured covariate spaces. We define a partition-valued process on an arbitrary covariate space using Gaussian processes. We use the process to construct a multitask clustering model which partitions datapoints in a similar way across multiple data sources, and a time series model of network data which allows cluster assignments to vary over time. We describe sampling algorithms for inference and apply our method to defining cancer subtypes based on different types of cellular characteristics, finding regulatory modules from gene expression data from multiple human populations, and discovering time varying community structure in a social network.
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple sub-systems that may even be developed at different sites. Cross system adaptation, in which model adaptation is performed using the outputs from another sub-system, can be used as an alternative to hypothesis level combination schemes such as ROVER. Normally cross adaptation is only performed on the acoustic models. However, there are many other levels in LVCSR systems' modelling hierarchy where complementary features may be exploited, for example, the sub-word and the word level, to further improve cross adaptation based system combination. It is thus interesting to also cross adapt language models (LMs) to capture these additional useful features. In this paper cross adaptation is applied to three forms of language models, a multi-level LM that models both syllable and word sequences, a word level neural network LM, and the linear combination of the two. Significant error rate reductions of 4.0-7.1% relative were obtained over ROVER and acoustic model only cross adaptation when combining a range of Chinese LVCSR sub-systems used in the 2010 and 2011 DARPA GALE evaluations. © 2012 Elsevier Ltd. All rights reserved.
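For the simplest of the three configurations, the linear combination of two language models, a toy Python sketch is given below: per-word probabilities from the two component models are interpolated with a fixed weight and compared by perplexity. The component models are stubbed as hard-coded probability lists and the interpolation weight is an arbitrary choice, not a value from the paper.

```python
import math

# Toy linear interpolation of two language models at the word level. The two
# component models are stubbed as fixed per-word probabilities (made up), and the
# weight lam is an arbitrary illustrative choice.

def interpolate(p_multilevel, p_nnlm, lam=0.5):
    """Linear combination of two LM probability streams for the same word sequence."""
    return [lam * p1 + (1.0 - lam) * p2 for p1, p2 in zip(p_multilevel, p_nnlm)]

def perplexity(probs):
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# Probabilities each LM assigns to the words of one hypothesised sentence.
p_multilevel = [0.20, 0.05, 0.10, 0.30]
p_nnlm       = [0.15, 0.08, 0.12, 0.25]

print(perplexity(p_multilevel),
      perplexity(p_nnlm),
      perplexity(interpolate(p_multilevel, p_nnlm)))
```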
Abstract:
This paper presents ongoing work on data collection and collation from a large number of laboratory cement-stabilization projects worldwide. The aim is to employ Artificial Neural Networks (ANN) to establish relationships between the variables that define the properties of cement-stabilized soils and the two parameters determined by the Unconfined Compression Test: the Unconfined Compressive Strength (UCS) and the stiffness E50 calculated from UCS results. Bayesian predictive neural network models are developed to predict the UCS values of cement-stabilized inorganic clays/silts, as well as sands, as a function of selected soil mix variables, such as grain size distribution, water content, cement content and curing time. A model which can predict the stiffness values of cement-stabilized clays/silts is also developed and compared to the UCS model. The UCS model results emulate known trends better and provide more accurate estimates than the results from the E50 stiffness model. © 2013 American Society of Civil Engineers.
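As a stand-in for the prediction task described above, the sketch below maps a few scaled soil-mix variables to UCS and returns a predictive mean and standard deviation. Bayesian linear regression with fixed prior and noise precisions is used here only for illustration; the paper's models are Bayesian neural networks, and the feature set and synthetic data are assumptions.

```python
import numpy as np

# Illustrative stand-in: predict UCS from soil-mix variables (fines fraction,
# water content, cement content, curing time, all scaled) with a predictive mean
# and variance. Bayesian linear regression replaces the paper's Bayesian neural
# networks purely for brevity; the data below are synthetic.

rng = np.random.default_rng(3)

X = rng.random((200, 4))                                   # scaled mix variables
ucs = 2.0 * X[:, 2] + 0.8 * X[:, 3] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

alpha, beta = 1.0, 100.0                                   # prior and noise precisions
Phi = np.hstack([np.ones((len(X), 1)), X])                 # add a bias column

S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi  # posterior precision
S = np.linalg.inv(S_inv)
m = beta * S @ Phi.T @ ucs                                 # posterior mean weights

def predict(x_new):
    """Predictive mean and standard deviation of UCS for one new mix design."""
    phi = np.concatenate([[1.0], x_new])
    mean = phi @ m
    var = 1.0 / beta + phi @ S @ phi
    return mean, np.sqrt(var)

print(predict(np.array([0.5, 0.2, 0.15, 0.8])))
```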
Abstract:
A recurrent artificial neural network was used for 0- and 7-days-ahead forecasting of daily spring phytoplankton bloom dynamics in Xiangxi Bay of the Three Gorges Reservoir, with meteorological, hydrological, and limnological parameters as input variables. Daily data from a depth of 0.5 m were used to train the model, and data from a depth of 2.0 m were used to validate the calibrated model. The trained model achieved reasonable accuracy in predicting the daily dynamics of chlorophyll a in both 0- and 7-days-ahead forecasting. In 0-day-ahead forecasting, the R² values between observed and predicted data were 0.85 for training and 0.89 for validation. In 7-days-ahead forecasting, the R² values for training and validation were 0.68 and 0.66, respectively. Sensitivity analysis indicated that most ecological relationships between chlorophyll a and the input environmental variables in the 0- and 7-days-ahead models were reasonable. In the 0-day model, Secchi depth, water temperature, and dissolved silicate were the most important factors influencing the daily dynamics of chlorophyll a, and in the 7-days-ahead model, chlorophyll a was sensitive to most environmental variables except water level, DO, and NH3N.
Abstract:
The paper demonstrates the nonstationarity of algal population behaviors by analyzing the historical populations of Nostocales spp. in the River Darling, Australia. Freshwater ecosystems are more likely to be nonstationary than stationary. Nonstationarity implies that only the near past behavior of the system can forecast its near future. However, nonstationarity was not considered seriously in previous research efforts on modeling and predicting algal population behavior. Therefore, the moving-window technique was incorporated into a radial basis function neural network (RBFNN) approach to deal with nonstationarity when modeling and forecasting the population behavior of Nostocales spp. in the River Darling. The results showed that the RBFNN model could predict the timing and magnitude of algal blooms of Nostocales spp. with high accuracy. Moreover, a combined model based on individual RBFNN models was implemented, which showed superiority over the individual RBFNN models. Hence, the combined model is recommended for modeling and forecasting phytoplankton populations, especially for forecasting.
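A toy version of the moving-window idea is sketched below: at each forecast origin an RBF network is fitted only to the most recent window of a synthetic series, so older, possibly nonstationary behavior is discarded, and the fitted network predicts one step ahead. The series, window length, number of centres and kernel width are illustrative choices rather than the paper's settings.

```python
import numpy as np

# Moving-window RBF network forecasting on a synthetic series: only the most
# recent `window` points are used to fit the network before each one-step-ahead
# prediction. All settings below are illustrative assumptions.

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)

lags, window, n_centres, width = 5, 100, 10, 1.0

def rbf_features(X, centres):
    """Gaussian basis activations of lag vectors with respect to the centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def forecast_next(history):
    """Fit an RBF network on the last `window` points and predict one step ahead."""
    recent = history[-window:]
    X = np.array([recent[i:i + lags] for i in range(len(recent) - lags)])
    y = recent[lags:]
    centres = X[rng.choice(len(X), n_centres, replace=False)]
    Phi = rbf_features(X, centres)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # linear output layer
    x_new = history[-lags:][None, :]
    return (rbf_features(x_new, centres) @ w).item()

print(forecast_next(series[:300]), series[300])
```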
Abstract:
A radial basis function neural network was employed to model the abundance of cyanobacteria. The trained network could predict, with high accuracy, the populations of two bloom-forming algal taxa, Nostocales spp. and Anabaena spp., in the River Darling, Australia. To elucidate the population dynamics of both Nostocales spp. and Anabaena spp., sensitivity analysis was performed, with the following results: total Kjeldahl nitrogen had a very strong influence on the abundance of the two algal taxa; electrical conductivity had a very strong negative relationship with the populations of the two algal species; and flow was identified as one dominant factor influencing algal blooms, after a scatter plot revealed that high flow could significantly reduce the algal biomass of both Nostocales spp. and Anabaena spp. Other variables, such as turbidity, color, and pH, were less important in determining the abundance and succession of the algal blooms.
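The kind of sensitivity analysis described above can be sketched as follows: each input of a trained model is swept across its observed range while the other inputs are held at their means, and the spread of the predicted abundance is recorded per variable. The "trained model" in the sketch is a stand-in function with made-up coefficients, not the paper's RBF network, and the variable list and data are illustrative assumptions.

```python
import numpy as np

# Perturbation-style sensitivity analysis: vary one input across its range while
# holding the others at their mean and record the response of the prediction.
# The model and data below are made-up stand-ins for a trained network.

variables = ["TKN", "conductivity", "flow", "turbidity", "color", "pH"]

def trained_model(x):
    # Stand-in for a trained network: abundance rises with TKN, falls with
    # conductivity and flow, and barely responds to the remaining inputs.
    return 2.0 * x[0] - 1.5 * x[1] - 1.0 * x[2] + 0.1 * x[3] + 0.05 * x[4] + 0.05 * x[5]

rng = np.random.default_rng(5)
data = rng.random((500, len(variables)))        # scaled historical inputs

for i, name in enumerate(variables):
    grid = np.linspace(data[:, i].min(), data[:, i].max(), 50)
    base = data.mean(axis=0)
    responses = []
    for v in grid:
        x = base.copy()
        x[i] = v
        responses.append(trained_model(x))
    print(f"{name}: response range = {max(responses) - min(responses):.2f}")
```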
Abstract:
A new method to measure reciprocal four-port structures, using a 16-term error model, is presented. The measurement is based on five two-port calibration standards connected to two of the ports, while the network analyzer is connected to the two remaining ports. Least-squares-fit data reduction techniques are used to lower error sensitivity. The effect of connectors is de-embedded using closed-form equations. © 2007 Wiley Periodicals, Inc.