58 results for Multi rate processing
Abstract:
We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, as often arises in the solution of inverse plant models. In order to obtain a complete description of the inverse model, a more general multicomponent distribution must be modeled. In this paper we test whether our proposed sampling approach can be used when considering arbitrary conditional probability distributions. These arbitrary distributions will be modeled by a mixture density network. Importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution will be demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
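To illustrate the idea, the sketch below draws control candidates from a conditional Gaussian mixture of the kind a mixture density network would output, and weights them by a cost. The mixture parameters, the toy plant y = tanh(u) and the cost function are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MDN output for one plant state: a 3-component
# Gaussian mixture over the control input (weights, means, std devs).
weights = np.array([0.5, 0.3, 0.2])
means   = np.array([-1.0, 0.4, 1.2])
stds    = np.array([0.3, 0.2, 0.25])

def sample_mixture(n):
    """Draw n control candidates from the conditional mixture."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], stds[comp])

def cost(u, target=0.7):
    """Placeholder control cost: squared tracking error of a toy plant y = tanh(u)."""
    return (np.tanh(u) - target) ** 2

# Importance sampling: weight each candidate by exp(-cost/T) and
# keep the best / form a weighted estimate of the control law.
u = sample_mixture(500)
w = np.exp(-cost(u) / 0.01)
u_star = u[np.argmax(w)]            # best sampled control
u_mean = np.sum(w * u) / np.sum(w)  # importance-weighted estimate
print(u_star, u_mean)
```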
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated both in static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite size effects, most results are obtained for large population sizes. However, we do analyse the influence of population size on the performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
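The paper's own selection and specialisation rules are not reproduced here; the sketch below shows the classic response-threshold rule from the social-insect literature (engagement probability s²/(s² + θ²), with thresholds lowered by task performance), which is the family of mechanisms the abstract describes. All parameter values are illustrative.

```python
import random

random.seed(1)

def engage_prob(stimulus, threshold):
    # Classic response-threshold rule: P = s^2 / (s^2 + theta^2).
    return stimulus**2 / (stimulus**2 + threshold**2)

# One agent, two mail types; thresholds adapt so the agent specialises.
thresholds = {"A": 0.5, "B": 0.5}
stimuli    = {"A": 0.8, "B": 0.3}   # e.g. amount of waiting mail of each type
LEARN, FORGET = 0.1, 0.05

for step in range(100):
    for mail, theta in thresholds.items():
        if random.random() < engage_prob(stimuli[mail], theta):
            # Performing a task lowers its threshold (specialisation) ...
            thresholds[mail] = max(0.01, theta - LEARN)
        else:
            # ... while idle thresholds drift back up (forgetting).
            thresholds[mail] = min(1.0, theta + FORGET)

print(thresholds)   # the agent ends up specialised on the busier mail type
```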
Abstract:
Developmental learning disabilities such as dyslexia and dyscalculia have a high rate of co-occurrence in pediatric populations, suggesting that they share underlying cognitive and neurophysiological mechanisms. Dyslexia and other developmental disorders with a strong heritable component have been associated with reduced sensitivity to coherent motion stimuli, an index of visual temporal processing on a millisecond time-scale. Here we examined whether deficits in sensitivity to visual motion are evident in children who have poor mathematics skills relative to other children of the same age. We obtained psychophysical thresholds for visual coherent motion and a control task from two groups of children who differed in their performance on a test of mathematics achievement. Children with math skills in the lowest 10% in their cohort were less sensitive than age-matched controls to coherent motion, but they had statistically equivalent thresholds to controls on a coherent form control measure. Children with mathematics difficulties therefore tend to present a pattern of visual processing deficit similar to those reported previously in other developmental disorders. We speculate that reduced sensitivity to temporally defined stimuli such as coherent motion represents a common processing deficit apparent across a range of commonly co-occurring developmental disorders.
Abstract:
One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit to receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability and the level of sidetone heard by the handset user is by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A complete theory has been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range. Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power it is not a panacea, and attention must be directed towards the physical acoustic isolation between loudspeaker and microphone.
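The abstract does not name the exact adaptation algorithm; a normalised LMS (NLMS) canceller is a standard choice for this loudspeaker-to-microphone role, and the sketch below shows the idea on a toy acoustic path. The path, filter length and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown acoustic path from loudspeaker to microphone (toy impulse response).
path = np.array([0.6, 0.3, -0.2, 0.1])

N = 16                      # adaptive-filter length
w = np.zeros(N)             # filter weights
mu, eps = 0.5, 1e-6         # NLMS step size and regulariser

x = rng.standard_normal(5000)          # loudspeaker (far-end) signal
d = np.convolve(x, path)[:len(x)]      # microphone pickup via the acoustic path

for n in range(N - 1, len(x)):
    xn = x[n - N + 1:n + 1][::-1]      # x[n], x[n-1], ..., x[n-N+1]
    e = d[n] - w @ xn                  # residual echo after cancellation
    w += mu * e * xn / (xn @ xn + eps) # NLMS weight update

print(np.round(w[:4], 3))  # leading taps converge towards the true path
```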
Abstract:
Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with describing homogeneous texture in terms of a signal model and with developing a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can adequately be represented by their Fourier amplitude spectra only, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, the above model leads to a stationary random process. Results indicate that this assumption is valid for those textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions based on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of the output of each filter taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial frequency domains, thus providing high spatial and class resolutions. The outputs of these filters are processed using a multi-resolution classifier which applies a clustering algorithm on the data at a low spatial resolution and then performs a boundary estimation operation in which processing is carried out over a range of spatial resolutions. Results demonstrate a high performance, in terms of the classification error, for a range of synthetic and natural textures.
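A minimal sketch of the envelope-of-quadrature-filter stage: one-sided Gaussian bandpass filters stand in for the thesis's optimally concentrated filters, and a per-pixel max-energy rule replaces the multi-resolution classifier. The synthetic textures and filter parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic textures side by side: same amplitude statistics,
# different dominant horizontal spatial frequency.
n = 128
xx = np.arange(n)
left  = np.sin(2 * np.pi * 0.10 * xx)[None, :] + 0.3 * rng.standard_normal((n, n))
right = np.sin(2 * np.pi * 0.30 * xx)[None, :] + 0.3 * rng.standard_normal((n, n))
img = np.hstack([left[:, :n // 2], right[:, n // 2:]])

def quadrature_envelope(image, f0, bw=0.05):
    """Envelope of a horizontal bandpass quadrature (analytic) filter at f0."""
    F = np.fft.fft2(image)
    fx = np.fft.fftfreq(image.shape[1])
    # One-sided Gaussian band around +f0: the inverse FFT is then an
    # analytic (quadrature) signal, whose magnitude is the envelope.
    H = np.exp(-0.5 * ((fx - f0) / bw) ** 2)
    return np.abs(np.fft.ifft2(F * H[None, :]))

e1 = quadrature_envelope(img, 0.10)
e2 = quadrature_envelope(img, 0.30)
# Classify each pixel by which band carries more envelope energy.
labels = (e2 > e1).astype(int)
print(labels[:, :n // 2].mean(), labels[:, n // 2:].mean())  # ~0 vs ~1
```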
Abstract:
With the extensive use of pulse modulation methods in telecommunications, much work has been done in the search for a better utilisation of the transmission channel. The present research is an extension of these investigations. A new modulation method, 'Variable Time-Scale Information Processing' (VTSIP), is proposed. The basic principles of this system have been established, and the main advantages and disadvantages investigated. With the proposed system, comparison circuits detect the instants at which the input signal voltage crosses predetermined amplitude levels. The time intervals between these occurrences are measured digitally and the results are temporarily stored before being transmitted. After reception, an inverse process enables the original signal to be reconstituted. The advantage of this system is that the irregularities in the rate of information contained in the input signal are smoothed out before transmission, allowing the use of a smaller transmission bandwidth. A disadvantage of the system is the time delay necessarily introduced by the storage process. Another disadvantage is a type of distortion caused by the finite store capacity. A simulation of the system has been made using a standard speech signal, to make some assessment of this distortion. It is concluded that the new system should be an improvement on existing pulse transmission systems, allowing the use of a smaller transmission bandwidth, but introducing a time delay.
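A minimal sketch of the level-crossing measurement stage described above, assuming uniformly spaced amplitude levels; the storage buffer, transmission and reconstruction stages are omitted.

```python
import numpy as np

# Predetermined amplitude levels monitored by the comparison circuits.
LEVELS = np.linspace(-1.0, 1.0, 9)

def level_crossing_encode(signal):
    """Record (sample-interval, level-index) pairs at each level crossing."""
    events, last = [], 0
    for n in range(1, len(signal)):
        lo, hi = sorted((signal[n - 1], signal[n]))
        for i, lv in enumerate(LEVELS):
            if lo < lv <= hi:                  # crossed this level
                events.append((n - last, i))   # digitally measured interval
                last = n
    return events

# Toy input: the encoder's output rate follows the signal's activity,
# so a buffer can smooth it out before transmission.
t = np.arange(0, 1, 1 / 8000)
sig = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)
events = level_crossing_encode(sig)
print(len(events), events[:5])
```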
Abstract:
The research is concerned with the thermochemical characterisation of straws and high yielding perennial grasses. Crops selected for this study include wheat straw (Triticum aestivum), rape straw (Brassica napus), reed canary grass (Phalaris arundinacea) and switch grass (Panicum virgatum). Thermogravimetric analysis (TGA) was used to examine the distribution of char and volatiles during pyrolysis up to 900 °C. Utilising multi-heating rate thermogravimetric data, the Friedman iso-conversional kinetic method was used to determine pyrolysis kinetic parameters. Light and medium volatile decomposition products were investigated using pyrolysis–gas chromatography–mass spectrometry (Py–GC–MS) up to 520 °C. The 22 highest yielding identifiable cellulose, hemicellulose and lignin biomass markers were semi-quantified from peak areas in the GC chromatograms. Notable differences can be seen in the content of butanedioic acid, dimethyl ester (a hemicellulose decomposition product), 2-methoxy-4-vinylphenol (a lignin marker) and levoglucosan (an intermediate pyrolytic decomposition product of cellulose) when comparing perennial grasses with straws. From the results presented in this study, perennial grasses such as switch grass have the most attractive properties for fast pyrolysis processing, because of the observed high volatile yield of 82.23%, heating value of 19.64 MJ/kg and relatively low inorganic content.
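The Friedman iso-conversional method fits, at each fixed conversion α, the relation ln(dα/dt) = ln(A f(α)) − Eₐ/(RT) across heating rates, so the slope of ln(dα/dt) against 1/T gives −Eₐ/R. A sketch on synthetic multi-rate data with a known activation energy (all reaction parameters are illustrative, not the paper's data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def friedman_Ea(runs, alpha_grid=np.linspace(0.1, 0.8, 8)):
    """Friedman iso-conversional analysis.

    runs: list of (T, alpha, dalpha_dt) arrays, one per heating rate.
    At fixed conversion, ln(dalpha/dt) = ln(A f(alpha)) - Ea/(R T),
    so regressing ln(dalpha/dt) on 1/T across runs gives slope -Ea/R.
    """
    Ea = []
    for a in alpha_grid:
        inv_T, ln_rate = [], []
        for T, alpha, rate in runs:
            # Interpolate each run to the same conversion level.
            inv_T.append(1.0 / np.interp(a, alpha, T))
            ln_rate.append(np.log(np.interp(a, alpha, rate)))
        slope = np.polyfit(inv_T, ln_rate, 1)[0]
        Ea.append(-slope * R)
    return alpha_grid, np.array(Ea)

# Synthetic check: first-order reaction with known Ea = 150 kJ/mol
# at three heating rates, as in multi-rate TGA.
A_true, Ea_true = 1e12, 150e3
runs = []
for beta in (5.0, 10.0, 20.0):          # heating rates, K/min
    T = np.linspace(450, 900, 2000)     # temperature grid, K
    k = A_true * np.exp(-Ea_true / (R * T)) / (beta / 60.0)   # dalpha/dT per K
    alpha = 1 - np.exp(-np.cumsum(k) * (T[1] - T[0]))
    rate = A_true * np.exp(-Ea_true / (R * T)) * (1 - alpha)  # dalpha/dt, 1/s
    runs.append((T, alpha, rate))

grid, Ea = friedman_Ea(runs)
print(np.round(Ea / 1e3))  # ~150 kJ/mol across conversions
```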
Abstract:
We experimentally investigate a multi-parameter optimization of the conditions for generation of triangular pulses in normal dispersion fiber. We find that triangular pulses suitable for all-optical processing applications can be generated for a wide range of input pulse chirps, but that triangular pulse quality and stability are improved with increased input pulse chirp.
Abstract:
The primary objective of this research was to examine the concepts of the chemical modification of polymer blends by reactive processing using interlinking agents (multi-functional, activated vinyl compounds; trimethylolpropane triacrylate {TRIS} and divinylbenzene {DVB}) to target in-situ interpolymer formation between the immiscible polymers in PS/EPDM blends via peroxide-initiated free radical reactions during melt mixing. From a comprehensive survey of previous studies of compatibility enhancement in polystyrene blends, it was recognised that reactive processing offers opportunities for technological success that have not yet been fully realised; learning from this study is expected to assist in the development and application of this potential. In an experimental-scale operation for the simultaneous melt blending and reactive processing of both polymers, involving manual injection of precise reactive agent/free radical initiator mixtures directly into molten polymer within an internal mixer, torque changes were distinct, quantifiable and rationalised by ongoing physical and chemical effects. The EPDM content of PS/EPDM blends was the prime determinant of torque increases on addition of TRIS, itself liable to self-polymerisation at high additions, with little indication of PS reaction in the initial reactively processed blends with TRIS, though blend compatibility, judged by visual assessment of morphology by SEM, was nevertheless improved. Suitable operating windows were defined for the optimisation of reactive blending, for use once routes to encourage PS reaction could be identified. The effectiveness of PS modification by reactive processing with interlinking agents was increased by the selection of process conditions to target specific reaction routes, assessed by spectroscopy (FT-IR and NMR) and thermal analysis (DSC) coupled with dichloromethane extraction and fractionation of PS. Initiator concentration was crucial in balancing desired PS modification against interlinking agent self-polymerisation, most particularly with TRIS. Pre-addition of initiator to PS was beneficial in enhancing TRIS binding to PS and minimising modifier polymerisation, believed to arise from the direct formation of polystyryl radicals for addition to active unsaturation in TRIS. DVB was found to be a "compatible" modifier for PS, but its efficacy was not quantified. Application of these routes for PS reaction in PS/EPDM blends was successful for the in-situ formation of interpolymer (shown by sequential solvent extraction combined with FT-IR and DSC analysis); the predominant outcome depended on the degree of reaction of each component, with optimum "between-phase" interpolymer formed under conditions selected to equalise the differing component reactivities and avoid competitive processes. This was achieved by combined addition of TRIS+DVB at optimum initiator concentrations with initiator pre-addition to PS. Improvements in blend compatibility (shown by tensile testing, SEM and thermal analysis) accompanied significant interpolymer formation in all cases, though physical benefits did not always follow; morphology and other reactive effects were also important factors. Interpolymer formed by specific "between-phase" reaction of the blend components and interlinking agent was vital for realising positive performance on compatibilisation by the chemical modification of polymer blends through reactive processing.
Abstract:
In series I and II of this study ([Chua et al., 2010a] and [Chua et al., 2010b]), we discussed the time scales of granule–granule collision, droplet–granule collision and droplet spreading in Fluidized Bed Melt Granulation (FBMG). In this third part, we consider the rate at which the binder solidifies. A simple analytical solution, based on the classical formulation for conduction across a semi-infinite slab, was used to obtain a generalized equation for the binder solidification time. A multi-physics simulation package (Comsol) was used to predict the binder solidification time for various operating conditions usually considered in FBMG. The simulation results were validated with experimental temperature data obtained with a high speed infrared camera during solidification of ‘macroscopic’ (mm scale) droplets. For the range of microscopic droplet sizes and operating conditions considered for a FBMG process, the binder solidification time was found to fall approximately between 10⁻³ and 10⁻¹ s. This is the slowest of the four major FBMG microscopic events discussed in this series, the others being granule–granule collision, granule–droplet collision and droplet spreading.
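An order-of-magnitude version of the conduction argument: the thermal penetration depth in a semi-infinite slab grows as √(αt), so a binder layer of thickness d solidifies on a time scale of roughly d²/α. The property values below are assumed, wax-like figures rather than the paper's data, but they place micron-scale droplets in the reported 10⁻³ to 10⁻¹ s window.

```python
# Order-of-magnitude solidification time from conduction across a
# semi-infinite slab: penetration depth ~ sqrt(alpha * t), so
# t ~ d^2 / alpha (order-one prefactors absorbed into the estimate).

# Illustrative binder properties (assumed, typical of a wax-like binder).
k   = 0.2      # thermal conductivity, W/(m K)
rho = 900.0    # density, kg/m^3
cp  = 2000.0   # specific heat, J/(kg K)
alpha = k / (rho * cp)   # thermal diffusivity, m^2/s

for d_um in (10, 30, 100):           # binder layer scales, micrometres
    d = d_um * 1e-6
    t = d ** 2 / alpha
    print(f"d = {d_um:4d} um  ->  t ~ {t:.1e} s")
# Micron-scale droplets land in the ~1e-3 to 1e-1 s range, consistent
# with the window reported for FBMG droplets.
```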
Abstract:
A technique is presented for the development of a high precision and resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
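As a sketch of working in the spectral domain, the snippet below converts geoid heights to gravity anomalies using the flat-earth wavenumber relation Δg(k) = 2πγ|k|N(k), a simplified stand-in for the full Stokes / Vening Meinesz spherical transforms used in the thesis; the grid spacing and test undulation are illustrative.

```python
import numpy as np

def geoid_to_gravity(N_grid, dx, gamma=9.81):
    """Flat-earth spectral conversion of geoid heights to gravity anomalies.

    Uses the wavenumber-domain relation dg(k) = 2*pi*gamma*|k|*N(k),
    with |k| in cycles/m. Returns gravity anomalies in mGal.
    """
    ny, nx = N_grid.shape
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    kmag = np.hypot(*np.meshgrid(kx, ky))         # |k|, cycles per metre
    dg = np.fft.ifft2(2 * np.pi * gamma * kmag * np.fft.fft2(N_grid)).real
    return dg * 1e5                               # m/s^2 -> mGal

# Toy check: a single-wavelength geoid undulation of 1 m amplitude.
x = np.arange(256) * 5000.0                       # 5 km grid spacing
N = np.sin(2 * np.pi * x / 200e3)[None, :].repeat(256, axis=0)  # 200 km wave
print(geoid_to_gravity(N, 5000.0).max())          # peak anomaly, ~31 mGal
```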
Abstract:
To ensure state synchronization of signaling operations, many signaling protocol designs choose to establish "soft" state that expires if it is not refreshed. Approaches to refreshing state in multi-hop signaling systems can be classified as either end-to-end (E2E) or hop-by-hop (HbH). Although both state refresh approaches have been widely used in practical signaling protocols, the design tradeoffs between state synchronization and signaling cost have not yet been fully investigated. In this paper, we investigate this issue from the perspectives of state refresh and state removal. We propose simple but effective Markov chain models for both approaches and obtain closed-form solutions which depict the state refresh performance in terms of state consistency and refresh message rate, as well as the state removal performance in terms of state removal delay. Simulations verify the analytical models. It is observed that the HbH approach yields much better state synchronization, at the cost of higher signaling overhead, than the E2E approach. While the state refresh performance can be improved by increasing the values of the state refresh and timeout timers, the state removal delay then increases considerably for both the E2E and HbH approaches. The analysis here sheds light on the design of signaling protocols and the configuration of timers to adapt to changing network conditions.
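The paper's closed-form Markov chain results are not reproduced here; the Monte Carlo sketch below illustrates the qualitative tradeoff with hypothetical loss, refresh and timeout parameters: an E2E refresh lost at one hop is lost for all downstream hops at once, while HbH refreshes fail independently per hop.

```python
import random

random.seed(0)

def simulate(mode, hops=5, p_loss=0.1, refresh=3, timeout=10, steps=100000):
    """Fraction of time all hops hold fresh ('consistent') soft state."""
    age = [0] * hops
    consistent = 0
    for t in range(steps):
        if t % refresh == 0:                      # refresh timer fires
            if mode == "E2E":
                for i in range(hops):
                    if random.random() < p_loss:
                        break                     # lost; downstream hops miss it
                    age[i] = 0
            else:                                 # HbH
                for i in range(hops):
                    if random.random() >= p_loss:
                        age[i] = 0
        age = [a + 1 for a in age]
        consistent += all(a <= timeout for a in age)
    return consistent / steps

# HbH shows noticeably better state consistency than E2E,
# at the price of one refresh message per hop.
print("E2E:", simulate("E2E"), " HbH:", simulate("HbH"))
```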
Abstract:
The development of new all-optical technologies for data processing and signal manipulation is a field of growing importance with a strong potential for numerous applications in diverse areas of modern science. Nonlinear phenomena occurring in optical fibres have many attractive features and great, but not yet fully explored, potential in signal processing. Here, we review recent progress on the use of fibre nonlinearities for the generation and shaping of optical pulses and on the applications of advanced pulse shapes in all-optical signal processing. Amongst other topics, we will discuss ultrahigh repetition rate pulse sources, the generation of parabolic shaped pulses in active and passive fibres, the generation of pulses with triangular temporal profiles, and coherent supercontinuum sources. The signal processing applications will span optical regeneration, linear distortion compensation, optical decision at the receiver in optical communication systems, spectral and temporal signal doubling, and frequency conversion. © Copyright 2012 Sonia Boscolo and Christophe Finot.
Abstract:
Nonlinear phenomena occurring in optical fibres have many attractive features and great, but not yet fully explored, potential in signal processing. Here, we review recent progress on the use of fibre nonlinearities for the generation and shaping of optical pulses, and on the applications of advanced pulse waveforms in all-optical signal processing. Among other topics, we will discuss ultrahigh repetition-rate pulse sources, the generation of parabolic-shaped pulses in active and passive fibres, the generation of pulses with triangular temporal profiles, and coherent supercontinuum sources. The signal processing applications will span optical regeneration, linear distortion compensation, optical decision at the receiver in optical communication systems, spectral and temporal signal doubling, and frequency conversion. © 2012 IEEE.
Abstract:
Two reactive comonomers, divinyl benzene (DVB) and trimethylolpropane triacrylate (TRIS), were evaluated for their role in effecting the melt free radical grafting reaction of the monomer glycidyl methacrylate (GMA) onto polypropylene (PP). The characteristics of the GMA-grafting systems in the presence and absence of DVB or TRIS were examined and compared in terms of the yield of the grafting reaction and the extent of the main side reactions, namely homopolymerisation of GMA (poly-GMA) and polymer degradation, using different chemical compositions of the reactive systems and processing conditions. In the absence of the comonomers, i.e. in a conventional system, high initiator concentrations of peroxides were typically required to achieve the highest possible GMA grafting levels, which were found to be generally low. Concomitantly, both poly-GMA formation and degradation of the polymer by chain scission take place with increasing initiator amounts. On the other hand, the presence of a small amount of the comonomers, DVB or TRIS, in the GMA-grafting system was shown to bring about a significant increase in the grafting level, paralleled by a large reduction in poly-GMA and PP degradation. In the presence of these highly reactive comonomers, the optimum grafting system requires a much lower concentration of the peroxide initiator and, consequently, would lead to the much lower degree of polymer degradation observed in these systems. The differences in the effects of the presence of DVB and that of TRIS in the grafting systems on the rate of the GMA-grafting and homopolymerisation reactions, and the extent of PP degradation (through melt flow changes), were compared and contrasted with a conventional GMA-grafting system.