959 results for Polynomial distributed lag models


Relevance: 30.00%

Abstract:

Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
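The four-stage pipeline described above can be illustrated with a minimal sketch (not the authors' implementation): a simple band-pass filterbank stands in for peripheral frequency selectivity, and per-channel autocorrelations are summed into an SACF whose major peaks estimate the candidate F0 periods.

```python
# Minimal sketch of stages (i)-(iii) of an SACF pitch model (illustrative only;
# the filterbank design, channel count and peak picking are simplifying assumptions).
import numpy as np
from scipy.signal import butter, lfilter

def sacf(signal, fs, n_channels=16, f_lo=100.0, f_hi=4000.0, max_lag_s=0.02):
    """Summary autocorrelation function of a half-wave-rectified filterbank output."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
    max_lag = int(max_lag_s * fs)
    summary = np.zeros(max_lag)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        chan = lfilter(b, a, signal)
        chan = np.maximum(chan, 0.0)                     # (i) half-wave rectification
        ac = np.correlate(chan, chan, mode="full")[len(chan) - 1:]  # (ii) periodicity
        summary += ac[:max_lag] / (ac[0] + 1e-12)        # normalise and sum channels
    return summary                                       # (iii) pick the two largest peaks externally

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    mix = np.sign(np.sin(2 * np.pi * 120 * t)) + np.sign(np.sin(2 * np.pi * 170 * t))
    s = sacf(mix, fs)
    print("strongest lag (samples):", int(np.argmax(s[fs // 400:]) + fs // 400))
```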

Relevance: 30.00%

Abstract:

Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
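As background to the PSO discussion above, a minimal global-best PSO loop is sketched below; the abstract does not give the rules of the dispersive PSO variant, so the anti-stagnation step shown here is a purely hypothetical illustration of one way to counter premature convergence, not the thesis' method.

```python
# Minimal global-best PSO sketch (standard algorithm; the dispersion step is a
# hypothetical illustration, not the dispersive PSO rules from the thesis).
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49,
        bounds=(-5.0, 5.0), disperse_if_radius_below=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
        # Hypothetical anti-stagnation step: if the swarm has collapsed onto the
        # global best, scatter a few particles to restore diversity.
        if np.mean(np.linalg.norm(x - g, axis=1)) < disperse_if_radius_below:
            x[: n_particles // 4] = rng.uniform(lo, hi, (n_particles // 4, dim))
    return g, float(pbest_f.min())

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))
    best, best_f = pso(sphere, seed=0)
    print(best, best_f)
```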

Relevance: 30.00%

Abstract:

The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both the software engineers and machines to understand the content of formal models and supports more effective software design in terms of understanding, sharing and reusing in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. In this paper, a methodology with tool support is proposed to automatically derive ontological metadata from formal software models and semantically describe them.

Relevance: 30.00%

Abstract:

Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with the given system during the actual execution time of that system. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, representing a fundamentally new approach to resolving interoperability problems in the complex distributed systems of today. In this context, the runtime models are used to capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour. This is supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models coupled with consideration of the supportive algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.
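To make the idea concrete, the toy sketch below shows runtime models of two networked systems' interfaces and a trivially synthesised mediator that maps operations between them; all names and the simple key-matching "reasoning" are hypothetical simplifications of the ontology-driven middleware synthesis described above.

```python
# Toy illustration (hypothetical names and logic): runtime interface models plus a
# trivially synthesised mediator, standing in for ontology-driven middleware synthesis.
from dataclasses import dataclass, field

@dataclass
class InterfaceModel:
    system: str
    operations: dict = field(default_factory=dict)   # shared concept -> concrete operation name

def synthesise_mediator(client: InterfaceModel, service: InterfaceModel):
    """Map client operations to service operations that realise the same concept."""
    shared = client.operations.keys() & service.operations.keys()
    return {client.operations[c]: service.operations[c] for c in shared}

if __name__ == "__main__":
    a = InterfaceModel("PhotoAppClient", {"upload": "putImage", "list": "getImages"})
    b = InterfaceModel("LegacyPhotoService", {"upload": "SubmitPicture", "list": "EnumeratePictures"})
    print(synthesise_mediator(a, b))  # {'putImage': 'SubmitPicture', 'getImages': 'EnumeratePictures'}
```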

Relevance: 30.00%

Abstract:

Solid tumours display a complex drug resistance phenotype that involves inherent and acquired mechanisms. Multicellular resistance is an inherent feature of solid tumours and is known to present significant barriers to drug permeation in tumours. Given this barrier, do acquired resistance mechanisms such as P-glycoprotein (P-gp) contribute significantly to resistance? To address this question, the multicellular tumour spheroid (MCTS) model was used to examine the influence of P-gp on drug distribution in solid tissue. Tumour spheroids (TS) were generated from either drug-sensitive MCF7WT cells or a drug-resistant, P-gp-expressing derivative MCF7Adr. Confocal microscopy was used to measure time courses and distribution patterns of three fluorescent compounds: calcein-AM, rhodamine 123 and BODIPY-taxol. These compounds were chosen because they are all substrates for P-gp-mediated transport, exhibit high fluorescence and are chemically dissimilar. For example, BODIPY-taxol and rhodamine 123 showed high accumulation and distributed extensively throughout the TSWT, whereas calcein-AM accumulation was restricted to the outermost layers. The presence of P-gp in TSAdr resulted in negligible accumulation, regardless of the compound. Moreover, the inhibition of P-gp by nicardipine restored intracellular accumulation and distribution patterns to levels observed in TSWT. The results demonstrate the effectiveness of P-gp in modulating drug distribution in solid tumour models. However, the penetration of agents throughout the tissue is strongly determined by the physico-chemical properties of the individual compounds.

Relevance: 30.00%

Abstract:

The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances of up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance that are comparable to, and even exceed, those of similar conventional fibre lasers. The key features of the generated radiation of random distributed feedback fibre lasers include: a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated both in single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and prospects. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for the discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength, narrow-band generation, and random fibre lasers operating in different spectral bands in the 1-1.6 μm range. We then discuss existing and future applications of random fibre lasers, including telecommunication and distributed long-reach sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory. We outline two key models governing the generation of random fibre lasers: the average power balance model and the model based on the nonlinear Schrödinger equation. The recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science. Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers.
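For orientation, the two model classes mentioned above can be written schematically as follows; the exact coefficients and boundary conditions vary between configurations, so this is only a generic sketch of the standard forms used for Raman fibre lasers, not the specific equations of the review.

```latex
% Generic average power balance model for forward/backward Stokes waves P_s^{\pm}(z)
% pumped by P_p^{\pm}(z), with Raman gain g_R, losses \alpha, and a small Rayleigh
% backscattering coefficient \varepsilon (coefficients are configuration dependent):
\begin{align}
  \pm\frac{dP_s^{\pm}}{dz} &= g_R P_p P_s^{\pm} - \alpha_s P_s^{\pm} + \varepsilon\, P_s^{\mp},\\
  \pm\frac{dP_p^{\pm}}{dz} &= -\frac{\nu_p}{\nu_s}\, g_R P_p^{\pm}\bigl(P_s^{+}+P_s^{-}\bigr) - \alpha_p P_p^{\pm}.
\end{align}
% Field-resolved modelling uses a nonlinear Schr\"odinger-type equation for the slowly
% varying envelope A(z,t), with dispersion \beta_2, Kerr nonlinearity \gamma and gain g(z):
\begin{equation}
  \frac{\partial A}{\partial z}
  = -\frac{i\beta_2}{2}\frac{\partial^2 A}{\partial t^2}
  + i\gamma\lvert A\rvert^2 A
  + \frac{g(z)}{2}A.
\end{equation}
```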

Relevance: 30.00%

Abstract:

As a discipline, supply chain management (SCM) has traditionally been primarily concerned with the procurement, processing, movement and sale of physical goods. However an important class of products has emerged - digital products - which cannot be described as physical as they do not obey commonly understood physical laws. They do not possess mass or volume, and they require no energy in their manufacture or distribution. With the Internet, they can be distributed at speeds unimaginable in the physical world, and every copy produced is a 100% perfect duplicate of the original version. Furthermore, the ease with which digital products can be replicated has few analogues in the physical world. This paper assesses the effect of non-physicality on one such product, software, in relation to the practice of SCM. It explores the challenges that arise when managing the software supply chain and how practitioners are addressing these challenges. Using a two-pronged exploratory approach that examines the literature around software management as well as direct interviews with software distribution practitioners, a number of key challenges associated with software supply chains are uncovered, along with responses to these challenges. This paper proposes a new model for software supply chains that takes into account the non-physicality of the product being delivered. Central to this model is the replacement of physical flows with flows of intellectual property, the growing importance of innovation over duplication and the increased centrality of the customer in the entire process. Hybrid physical/digital supply chains are discussed and a framework for practitioners concerned with software supply chains is presented.

Relevance: 30.00%

Abstract:

We present a complex neural network model of user behavior in distributed systems. The model reflects both dynamical and statistical features of user behavior and consists of three components: an on-line model, an off-line model and a change detection module. The on-line model reflects dynamical features by predicting user actions on the basis of previous ones. The off-line model is based on the analysis of statistical parameters of user behavior. In both cases neural networks are used to reveal uncharacteristic activity of users. The change detection module is intended for trend analysis in user behavior. The efficiency of the complex model is verified on real data from users of the Space Research Institute of NASU-NSAU.
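A minimal stand-in for the on-line component might look as follows: a small network predicts the next action from a window of previous actions, and an action is flagged as uncharacteristic when its predicted probability falls below a threshold. The architecture, window size and threshold here are illustrative assumptions, not those of the paper.

```python
# Illustrative on-line user-behaviour model (architecture and threshold are assumptions):
# predict the next action from the previous k actions; a low predicted probability for the
# action actually observed is treated as uncharacteristic activity.
import torch
import torch.nn as nn

class NextActionModel(nn.Module):
    def __init__(self, n_actions: int, window: int = 5, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(n_actions, 16)
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * window, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, prev_actions):              # prev_actions: (batch, window) int64
        return self.net(self.embed(prev_actions))

def is_uncharacteristic(model, prev_actions, observed, threshold=0.05):
    """Flag the observed action if the model assigns it probability below `threshold`."""
    with torch.no_grad():
        probs = torch.softmax(model(prev_actions), dim=-1)
    return probs[0, observed].item() < threshold

if __name__ == "__main__":
    model = NextActionModel(n_actions=10)
    history = torch.tensor([[1, 4, 4, 2, 7]])     # ids of the last five user actions
    print(is_uncharacteristic(model, history, observed=3))
```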

Relevance: 30.00%

Abstract:

Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the software development process, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models and supports more effective software design in terms of understanding, sharing and reusing in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology with tool support to automatically derive ontological metadata from formal software models and semantically describe them. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic querying of software models and other artefacts.
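A minimal sketch of what such ontological metadata might look like for a formal model, using rdflib over a hypothetical vocabulary (the namespace, class and property names below, and the classic BirthdayBook schema, are illustrative rather than the ontology developed in the paper):

```python
# Illustrative only: emitting RDF metadata for a (hypothetical) Z schema using rdflib.
# The namespace and property names are assumptions, not the paper's actual ontology.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

FM = Namespace("http://example.org/formal-models#")

g = Graph()
g.bind("fm", FM)

schema = URIRef("http://example.org/models/BirthdayBook")
g.add((schema, RDF.type, FM.ZSchema))
g.add((schema, FM.declaresVariable, Literal("known : P NAME")))
g.add((schema, FM.declaresVariable, Literal("birthday : NAME -|-> DATE")))
g.add((schema, FM.hasInvariant, Literal("known = dom birthday")))
g.add((schema, FM.documentedBy, URIRef("http://example.org/docs/birthdaybook.pdf")))

print(g.serialize(format="turtle"))   # metadata that semantic queries can run against
```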

Relevance: 30.00%

Abstract:

For the facilities of a large-scale gas-transport company (GTC), a unified evolutionary approach is proposed that covers the basic design concepts, up-to-date technologies, models, methods and tools used in the design, adoption, maintenance and development phases of multilevel automated distributed control systems (ADCS). As the methodological basis of the suggested approach, three basic concepts were worked out, containing the main methodological principles and conceptual provisions for creating distributed control systems at three levels: the lower level (automated control of technological processes based on up-to-date SCADA systems), the middle level (operative-dispatch production control based on MES systems) and the upper level (business process control based on integrated ERP systems).

Relevance: 30.00%

Abstract:

In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
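The initialization step can be pictured with the following minimal sketch: a one-layer convolutional encoder turns a WordNet gloss (as a sequence of word embeddings) into a sentence-level vector used as the initial embedding of the corresponding sense. The dimensions and layer choices are illustrative, not the paper's actual architecture.

```python
# Illustrative gloss encoder (dimensions and layers are assumptions, not the paper's
# architecture): a 1-D convolution over gloss word embeddings, max-pooled into a
# sentence-level vector used to initialise the word sense embedding.
import torch
import torch.nn as nn

class GlossEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100, kernel: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=kernel, padding=kernel // 2)

    def forward(self, gloss_ids):                  # (batch, gloss_len) int64 token ids
        x = self.embed(gloss_ids).transpose(1, 2)  # (batch, emb_dim, gloss_len)
        x = torch.relu(self.conv(x))
        return x.max(dim=2).values                 # (batch, emb_dim) sense initialisation

if __name__ == "__main__":
    enc = GlossEncoder(vocab_size=5000)
    gloss = torch.randint(0, 5000, (1, 12))        # token ids of one WordNet gloss
    sense_init = enc(gloss)
    print(sense_init.shape)                        # torch.Size([1, 100])
```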

Relevance: 30.00%

Abstract:

Mathematics Subject Classification: 65C05, 60G50, 39A10, 92C37

Relevance: 30.00%

Abstract:

In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task, 6 out of 13 subtasks in the analogical reasoning task, and give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of our proposed distributed representation learning model.
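The second stage, context clustering, can be sketched as follows: each occurrence's context vector is assigned to the nearest sense embedding (seeded from the gloss-based initialization) and the chosen sense centroid is nudged towards the context. The update rule and learning rate here are simplifying assumptions, not the paper's exact procedure.

```python
# Illustrative context clustering step (update rule and learning rate are assumptions):
# each context vector is assigned to the closest sense embedding, which is then moved
# towards that context, refining the gloss-initialised sense representations.
import numpy as np

def assign_and_update(sense_embs, context_vec, lr=0.05):
    """sense_embs: (n_senses, dim); context_vec: (dim,). Returns the chosen sense index."""
    sims = sense_embs @ context_vec / (
        np.linalg.norm(sense_embs, axis=1) * np.linalg.norm(context_vec) + 1e-12
    )
    k = int(np.argmax(sims))                             # nearest sense by cosine similarity
    sense_embs[k] += lr * (context_vec - sense_embs[k])  # move the centroid towards the context
    return k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    senses = rng.normal(size=(3, 100))                   # e.g. three senses of one word
    ctx = rng.normal(size=100)                           # averaged context word embeddings
    print("assigned sense:", assign_and_update(senses, ctx))
```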

Relevance: 30.00%

Abstract:

Distributed representations (DR) of cortical channels are pervasive in models of spatio-temporal vision. A central idea that underpins current innovations of DR stems from the extension of 1-D phase into 2-D images. Neurophysiological evidence, however, provides tenuous support for a quadrature representation in the visual cortex, since even-phase visual units are associated with broader orientation tuning than odd-phase visual units (J. Neurophys., 88, 455-463, 2002). We demonstrate that applying the steering theorems to a 2-D definition of phase afforded by the Riesz Transform (IEEE Trans. Sig. Proc., 49, 3136-3144), extended to include a Scale Transform, allows one to smoothly interpolate across 2-D phase and pass from circularly symmetric to orientation-tuned visual units, and from more narrowly tuned odd-symmetric units to even ones. Steering across 2-D phase and scale can be orthogonalized via a linearizing transformation. Using the tilt after-effect as an example, we argue that effects of visual adaptation can be better explained via an orthogonal rather than a channel-specific representation of visual units, because the orthogonal representation can explicitly account for isotropic and cross-orientation adaptation effects, and both direct and indirect tilt after-effects can be explained from it.
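A compact way to compute the 2-D phase representation referred to above is via the FFT-domain Riesz transform; the sketch below builds the monogenic signal (image plus its two Riesz components) and derives local phase and orientation. It omits the steering and scale-transform machinery of the full model and assumes a roughly band-pass, zero-mean input.

```python
# Minimal FFT-domain Riesz transform sketch: builds the monogenic signal and derives
# local phase/orientation. The steering and scale-transform machinery of the full
# model is omitted; this only illustrates the underlying 2-D phase definition.
import numpy as np

def monogenic(image):
    f = np.fft.fft2(image)
    u = np.fft.fftfreq(image.shape[0])[:, None]        # vertical frequencies
    v = np.fft.fftfreq(image.shape[1])[None, :]        # horizontal frequencies
    mag = np.sqrt(u ** 2 + v ** 2)
    mag[0, 0] = 1.0                                     # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(f * (-1j * u / mag)))    # first Riesz component
    r2 = np.real(np.fft.ifft2(f * (-1j * v / mag)))    # second Riesz component
    amplitude = np.sqrt(image ** 2 + r1 ** 2 + r2 ** 2)
    phase = np.arctan2(np.hypot(r1, r2), image)        # 2-D local phase
    orientation = np.arctan2(r2, r1)                   # local orientation
    return amplitude, phase, orientation

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    grating = np.cos(2 * np.pi * 0.1 * x)              # simple oriented test pattern
    amp, ph, ori = monogenic(grating)
    print(amp.shape, float(ph.mean()), float(ori.mean()))
```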

Relevance: 30.00%

Abstract:

Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved by using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine the shipboard power system's behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage and the harmonic behavior of components. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for fault detection in shipboard power systems, given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be utilized to evaluate and predict the future behavior of the NPS components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
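As a rough illustration of wavelet-based feature extraction from a current signal (the wavelet family, decomposition depth and energy features below are assumptions, not the developed methodology), the signal can be decomposed with a discrete wavelet transform and summarised by the relative energy in each sub-band:

```python
# Rough illustration of wavelet feature extraction from a current signal; the wavelet,
# level and energy features are assumptions, not the methodology developed in the work.
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Relative energy per sub-band of a multilevel DWT, usable as a fault/harmonic signature."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

if __name__ == "__main__":
    fs = 5000
    t = np.arange(0, 0.2, 1 / fs)
    current = np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)  # fundamental + 5th harmonic
    current[500:520] += 1.5                                                    # crude fault-like transient
    print(np.round(wavelet_energy_features(current), 3))
```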