934 results for Processing Speed
Abstract:
Processing networks are a variant of the standard linear programming network model that is especially useful for optimizing industrial energy/environment systems. Modelling advantages include an intuitive diagrammatic representation and the ability to incorporate all forms of energy and pollutants in a single integrated linear network model. Further advantages include faster solution and algorithms that support model formulation. The paper explores their use in modelling the energy and pollution control systems in large industrial plants. The pollution control options in an ethylene production plant are analyzed as an example. PROFLOW, a computer tool for the formulation, analysis, and solution of processing network models, is introduced.
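A minimal sketch of the underlying modelling idea (not the paper's PROFLOW tool): a "processing node" with fixed input/output ratios embedded in an ordinary linear program via scipy.optimize.linprog. All coefficients below are invented for illustration.

```python
# Sketch: a processing node converts fuel into steam and SO2 at fixed ratios;
# SO2 must be treated or emitted within a cap. Coefficients are assumptions.
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [fuel_processed, so2_treated, so2_emitted]
c = np.array([2.0, 5.0, 0.0])           # fuel + processing cost, treatment cost

# Processing node: each unit of fuel yields 0.8 units of steam and 0.05 units of SO2.
A_ub = np.array([
    [-0.8, 0.0, 0.0],                   # steam must meet demand: 0.8*fuel >= 100
    [ 0.0, 0.0, 1.0],                   # emission cap: so2_emitted <= 3
])
b_ub = np.array([-100.0, 3.0])

A_eq = np.array([[0.05, -1.0, -1.0]])   # SO2 balance: 0.05*fuel = treated + emitted
b_eq = np.array([0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
print(res.x, res.fun)                   # optimal flows and total cost
```

With these toy numbers the optimum processes 125 units of fuel, emits SO2 up to the cap of 3, and treats the remaining 3.25 units.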
Abstract:
Based on data processing technologies for interferential spectrometers, a real-time, on-chip data processing system for an interferential imaging spectrometer was studied, built on a high-capacity, high-speed field-programmable gate array (FPGA) device. The system integrates both interferogram sampling and spectrum rebuilding on a single FPGA chip and accomplishes both in real time, with advantages such as small volume, fast speed, and high reliability. It establishes a good technical foundation for applying imaging spectrometers to real-time target detection and recognition.
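The spectrum-rebuilding step the FPGA performs amounts to a Fourier transform of the sampled interferogram. The floating-point sketch below shows only that underlying computation (apodization and the fixed-point pipeline are omitted); the signal parameters are illustrative assumptions.

```python
# Sketch: rebuild a spectrum from a sampled interferogram via an FFT.
import numpy as np

n = 1024
d = 1.0 / n                                        # OPD sampling step (arb. units)
opd = np.arange(n) * d
# Synthetic interferogram: two spectral lines -> a sum of cosines plus noise.
interferogram = (np.cos(2 * np.pi * 60 * opd)
                 + 0.5 * np.cos(2 * np.pi * 90 * opd)
                 + 0.05 * np.random.randn(n))

spectrum = np.abs(np.fft.rfft(interferogram))      # spectrum rebuilding via FFT
wavenumber = np.fft.rfftfreq(n, d=d)               # spectral axis
print(sorted(wavenumber[np.argsort(spectrum)[-2:]]))   # recovered lines: ~[60, 90]
```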
Abstract:
Timing data is infrequently reported in aphasiological literature and time taken is only a minor factor, where it is considered at all, in existing aphasia assessments. This is not surprising because reaction times are difficult to obtain manually, but it is a pity, because speed data should be indispensable in assessing the severity of language processing disorders and in evaluating the effects of treatment. This paper argues that reporting accuracy data without discussing speed of performance gives an incomplete and potentially misleading picture of any cognitive function. Moreover, in deciding how to treat, when to continue treatment and when to cease therapy, clinicians should have regard to both parameters: Speed and accuracy of performance. Crerar, Ellis and Dean (1996) reported a study in which the written sentence comprehension of 14 long-term agrammatic subjects was assessed and treated using a computer-based microworld. Some statistically significant and durable treatment effects were obtained after a short amount of focused therapy. Only accuracy data were reported in that (already long) paper, and interestingly, although it has been a widely read study, neither referees nor subsequent readers seemed to miss "the other side of the coin": How these participants compared with controls for their speed of processing and what effect treatment had on speed. This paper considers both aspects of the data and presents a tentative way of combining treatment effects on both accuracy and speed of performance in a single indicator. Looking at rehabilitation this way gives us a rather different perspective on which individuals benefited most from the intervention. It also demonstrates that while some subjects are capable of utilising metalinguistic skills to achieve normal accuracy scores even many years post-stroke, there is little prospect of reducing the time taken to within the normal range. Without considering speed of processing, the extent of this residual functional impairment can be overlooked.
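The abstract does not spell out the combined indicator, so as a stand-in the sketch below uses the classic inverse-efficiency score (mean correct reaction time divided by proportion correct), which folds speed and accuracy into one number; the trial data and the function name are invented for illustration.

```python
# Sketch: inverse-efficiency score (IES). Lower is better; an accuracy gain
# bought with much slower responses shows little or no improvement.
def inverse_efficiency(reaction_times_s, correct_flags):
    """Mean RT on correct trials divided by proportion correct."""
    correct_rts = [rt for rt, ok in zip(reaction_times_s, correct_flags) if ok]
    accuracy = sum(correct_flags) / len(correct_flags)
    return (sum(correct_rts) / len(correct_rts)) / accuracy

pre  = inverse_efficiency([4.2, 5.1, 3.8, 6.0], [1, 0, 1, 1])   # invented trials
post = inverse_efficiency([3.0, 2.8, 3.4, 2.9], [1, 1, 1, 0])
print(f"IES pre={pre:.2f}s  post={post:.2f}s")   # one scale for treatment effects
```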
Abstract:
We investigate adaptive buffer management techniques for approximate evaluation of sliding window joins over multiple data streams. In many applications, data stream processing systems have limited memory or have to deal with very high speed data streams. In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins contain a much smaller number of tuples than the sliding windows do. A stream buffer management policy is therefore needed. We show that the buffer replacement policy is an important determinant of the quality of the produced results. To that end, we propose GreedyDual-Join (GDJ), an adaptive and locality-aware buffering technique for managing these buffers. GDJ exploits the temporal correlations (at both long and short time scales) that we found to be prevalent in many real data streams. We note that our algorithm is readily applicable to multiple data streams and multiple joins and requires almost no additional system resources. We report results of an experimental study using both synthetic and real-world data sets. Our results demonstrate the superiority and flexibility of our approach when contrasted with other recently proposed techniques.
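The abstract does not give GDJ's exact bookkeeping, so the sketch below is a generic GreedyDual-style replacement skeleton under that assumption: each buffered tuple carries a priority seeded by an inflation value L plus its observed match benefit, a join match renews a tuple's priority, and the lowest-priority tuple is evicted on overflow. Class and method names are illustrative.

```python
# Sketch: GreedyDual-style buffer replacement with lazy heap deletion.
import heapq

class GreedyDualBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.L = 0.0                        # inflation value ("ages" old tuples)
        self.heap = []                      # (priority, seq, tuple_key)
        self.live = {}                      # tuple_key -> current priority
        self.seq = 0

    def _evict(self):
        while self.heap:
            prio, _, key = heapq.heappop(self.heap)
            if self.live.get(key) == prio:  # skip stale heap entries
                del self.live[key]
                self.L = prio               # classic GreedyDual: raise the floor
                return

    def insert(self, key, benefit=1.0):
        if len(self.live) >= self.capacity:
            self._evict()
        self._set(key, self.L + benefit)

    def on_match(self, key, benefit=1.0):
        if key in self.live:                # a join match renews the priority
            self._set(key, self.L + benefit)

    def _set(self, key, prio):
        self.live[key] = prio
        self.seq += 1
        heapq.heappush(self.heap, (prio, self.seq, key))

buf = GreedyDualBuffer(capacity=2)
for k in ["a", "b", "c"]:
    buf.insert(k)
print(sorted(buf.live))                     # "a", never matched, was evicted
```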
Abstract:
This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: Skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm-deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control. These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
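The core VITE equations are commonly written as dV/dt = alpha(-V + T - P) and dP/dt = G[V]+, where T is the target, P the present position command, V the difference vector, and G the volitional GO signal. The sketch below integrates them with Euler steps to show the speed-rescaling property; parameter values are illustrative assumptions, not fitted to data.

```python
# Sketch of the VITE circuit: the GO signal G rescales movement speed
# while leaving the endpoint unchanged.
import numpy as np

def vite_trajectory(target, go_signal, alpha=25.0, dt=0.001, steps=2000):
    present, diff = 0.0, 0.0
    path = np.empty(steps)
    for i in range(steps):
        diff += dt * alpha * (-diff + target - present)   # difference vector V
        present += dt * go_signal * max(diff, 0.0)        # outflow position P
        path[i] = present
    return path

slow = vite_trajectory(10.0, go_signal=2.0)
fast = vite_trajectory(10.0, go_signal=8.0)
print(slow[-1], fast[-1])   # both converge on the target; G only sets the speed
```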
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using pulses with a width of 3 ps from mode-locked laser sources was utilized to accurately measure the carrier dynamics in the device(s) under test. The research work falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics. The principal aim was to identify the optimum operating conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
Abstract:
Particle degradation can be a significant issue in particulate solids handling and processing, particularly in pneumatic conveying systems, in which high-speed impact is usually the main contributory factor leading to changes in particle size distribution (compared with the virgin material). However, other factors may strongly influence particle breakage as well, such as particle concentration, bend geometry, and hardness of the pipe material. Because of such complex influences, it is often very difficult to predict particle degradation accurately and rapidly for industrial processes. In this article, a general method for evaluating particle degradation due to high-speed impacts is described, in which the breakage properties of particles are quantified using what are known as "breakage matrices". Rather than a pilot-size test facility, a bench-scale degradation tester has been used, and some advantages of the bench-scale tester are briefly explored. Breakage experiments on adipic acid have been carried out for a range of impact velocities in four particle size categories, and particle breakage matrices of adipic acid have subsequently been established for these impact velocities. The experimental results show that the breakage-matrix approach is an effective and straightforward method for evaluating particle degradation due to high-speed impacts. The possibility of applying the approach to a pneumatic conveying system is also explored through a simulation example.
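Computationally, the breakage-matrix idea reduces to a matrix-vector product: the outlet size distribution equals a velocity-specific breakage matrix applied to the inlet distribution. The matrix below is an invented illustration, not measured adipic-acid data.

```python
# Sketch: predict an outlet particle-size distribution from a breakage matrix.
import numpy as np

# Size classes, coarse -> fine; each column sums to 1 (mass conservation).
B_high_speed = np.array([
    [0.55, 0.00, 0.00],   # fraction of each inlet class remaining coarse
    [0.30, 0.70, 0.00],   # fraction broken into the middle class
    [0.15, 0.30, 1.00],   # fraction broken into fines
])

inlet = np.array([0.6, 0.3, 0.1])    # inlet mass fractions per size class
outlet = B_high_speed @ inlet        # predicted distribution after one impact
print(outlet, outlet.sum())          # sums to 1: mass is conserved
```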
Abstract:
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus while the adaptation stimulus necessarily determines perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
Abstract:
A variation of the least mean squares (LMS) algorithm, called the delayed LMS (DLMS) algorithm, is ideally suited to achieving highly pipelined, adaptive digital filter implementations. The paper presents an efficient method of determining the delays in the DLMS filter and then transferring these delays using retiming in order to achieve fully pipelined circuit architectures for FPGA implementation. The method has been used to derive a series of retimed delayed LMS (RDLMS) architectures, which considerably reduce the number of delays and the convergence time and give superior performance in terms of throughput rate when compared to previous work. Three circuit architectures and three hardware-shared versions are presented, which have been implemented using Virtex-II FPGA technology, resulting in a throughput rate of 182 Msample/s.
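Behaviourally, the delayed-LMS update is w <- w + mu * e[n-D] * x[n-D]: the coefficient update uses an error and regressor that are D samples old, reflecting the pipeline latency. The software sketch below models that update; the signal, channel, and filter sizes are illustrative assumptions.

```python
# Sketch: delayed LMS (DLMS) adaptive filter identifying an unknown channel.
import numpy as np

def dlms(x, desired, num_taps=8, mu=0.01, delay=4):
    w = np.zeros(num_taps)
    pending = []                                  # (error, regressor) pairs in flight
    y = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]      # regressor [x[n], ..., x[n-7]]
        y[n] = w @ xn
        pending.append((desired[n] - y[n], xn))
        if len(pending) > delay:                  # the update arrives D samples late
            e_d, x_d = pending.pop(0)
            w += mu * e_d * x_d
    return w, y

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])   # unknown channel
d = np.convolve(x, h)[:len(x)]                    # desired signal
w, _ = dlms(x, d)
print(np.round(w, 2))                             # approaches h for small mu
```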
Abstract:
BACKGROUND:
Tissue microarrays (TMAs) are a valuable platform for tissue-based translational research and the discovery of tissue biomarkers. The digitised TMA slides, or TMA virtual slides, are ultra-large digital images and can contain several hundred samples. The processing of such slides is time-consuming, bottlenecking a potentially high-throughput platform.
METHODS:
A High Performance Computing (HPC) platform for the rapid analysis of TMA virtual slides is presented in this study. Using an HP high-performance cluster and a centralised dynamic load-balancing approach, the simultaneous analysis of multiple tissue cores was established. This was evaluated on non-small cell lung cancer TMAs for complex analysis of tissue pattern and immunohistochemical positivity.
RESULTS:
The automated processing of a single TMA virtual slide containing 230 patient samples can be sped up significantly, by a factor of roughly 22, bringing the analysis time down to one minute. Over 90 TMAs can also be analysed simultaneously, greatly speeding up multiplex biomarker experiments.
CONCLUSIONS:
The methodologies developed in this paper provide, for the first time, a genuine high-throughput analysis platform for TMA biomarker discovery that will significantly enhance the reliability and speed of biomarker research. This will have widespread implications for translational tissue-based research.
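The centralised dynamic load-balancing pattern described above can be sketched as a shared work queue from which idle workers pull the next tissue core, so a slow core never stalls the rest of the slide. The analyse_core body below is a placeholder standing in for the real image-analysis routine; names and numbers are illustrative.

```python
# Sketch: master/worker dynamic load balancing over the cores of one slide.
from multiprocessing import Pool

def analyse_core(core_id):
    # Placeholder for segmentation + immunohistochemical scoring of one core.
    return core_id, f"score-for-core-{core_id}"

if __name__ == "__main__":
    core_ids = range(230)                 # e.g. 230 patient samples per slide
    with Pool(processes=22) as pool:      # ~22x speed-up with 22 busy workers
        # imap_unordered hands each finished worker the next pending core.
        for core_id, score in pool.imap_unordered(analyse_core, core_ids):
            pass                          # collect/store the per-core result
```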
Abstract:
The fabrication and performance of the first bit-level systolic correlator array is described. The application of systolic array concepts at the bit level provides a simple and extremely powerful method for implementing high-performance digital processing functions. The resulting structure is highly regular, facilitating yield enhancement through fault-tolerant redundancy techniques, and is therefore ideally suited to implementation as a VLSI chip. The CMOS/SOS chip operates at 35 MHz, is fully cascadable and performs 64-stage correlation for a 1-bit reference and 4-bit data.
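As a behavioural model of what the chip computes (not of its systolic structure), the sketch below correlates a 1-bit reference against a 4-bit data stream over 64 stages; the data are random placeholders.

```python
# Sketch: 64-stage correlation of a 1-bit reference with 4-bit data samples.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.integers(0, 2, 64)      # 1-bit reference, 64 stages
data = rng.integers(0, 16, 1024)        # 4-bit data stream

# Each output is the sum of 64 consecutive samples gated by the reference bits.
corr = np.correlate(data, reference, mode="valid")
print(corr[:5])
```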
Abstract:
Melt viscosity is one of the main factors affecting product quality in extrusion processes, particularly with regard to recycled polymers. However, due to the wide variability in the physical properties of recycled feedstock, it is difficult to maintain the melt viscosity during extrusion of polymer blends and to obtain good-quality product without generating scrap. This research investigates the application of ultrasound and temperature control in an automatic extruder controller able to maintain constant melt viscosity from variable recycled polymer feedstock during extrusion processing. An ultrasonic modulation system has been developed and fitted to the extruder prior to the die to convey ultrasonic energy from a high-power ultrasonic generator to the polymer melt. Two separate control loops have been developed to run simultaneously in one controller: the first loop controls the ultrasonic energy or temperature to maintain constant die pressure; the second loop controls the extruder screw speed to maintain constant throughput at the extruder die. The time response and energy consumption of the control methods in real-time experiments are also investigated and reported in this paper.
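A hedged sketch of the two simultaneous loops: one PI controller trims ultrasonic power to hold die pressure (and hence melt viscosity), while a second trims screw speed to hold throughput. The plant responses and gains below are invented for illustration, not the authors' tuned controller.

```python
# Sketch: two PI loops running side by side in one controller.
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

pressure_loop = PI(kp=0.8, ki=0.4, dt=0.1)   # output: ultrasonic power trim
speed_loop = PI(kp=0.5, ki=0.2, dt=0.1)      # output: screw-speed trim

die_pressure, throughput = 95.0, 18.0        # toy initial plant state
for _ in range(200):
    power = pressure_loop.update(100.0, die_pressure)
    rpm_trim = speed_loop.update(20.0, throughput)
    die_pressure += 0.2 * power              # crude first-order plant responses
    throughput += 0.2 * rpm_trim
print(round(die_pressure, 1), round(throughput, 1))   # both settle at setpoint
```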
Abstract:
Graphene, due to its outstanding properties, has become the topic of much research activity in recent years. Much of that work has been on a laboratory scale; however, if we are to introduce graphene into real product applications, it is necessary to examine how the material behaves under industrial processing conditions. In this paper the melt processing of polyamide 6/graphene nanoplatelet composites via twin-screw extrusion is investigated and structure–property relationships are examined for mechanical and electrical properties. Graphene nanoplatelets (GNPs) with two aspect ratios (700 and 1000) were used in order to examine the influence of particle dimensions on composite properties. It was found that the introduction of GNPs had a nucleating effect on polyamide 6 (PA6) crystallization and substantially increased crystallinity, by up to 120% for a 20 wt% loading in PA6. A small increase in crystallinity was observed when the extruder screw speed was increased from 50 rpm to 200 rpm, which could be attributed to better dispersion and more nucleation sites for crystallization. A maximum enhancement of 412% in Young's modulus was achieved at 20 wt% loading of GNPs. This is the highest reported enhancement in modulus achieved to date for a melt-mixed thermoplastic/GNP composite. A further result of importance here is that the modulus continued to increase as the loading of GNPs increased, even at 20 wt% loading, and the results are in excellent agreement with theoretical predictions for modulus enhancement. Electrical percolation was achieved between 10 and 15 wt% loading for both aspect ratios of GNPs, with an increase in conductivity of approximately six orders of magnitude compared with the unfilled PA6.
Abstract:
In this paper, the processing and characterization of polyamide 6 (PA6)/graphite nanoplatelet (GNP) composites is reported. PA6/GNP composites were prepared by melt-mixing using an industrial, co-rotating, intermeshing, twin-screw extruder. A bespoke screw configuration, designed in-house to enhance nanoparticle dispersion in a polymer matrix, was used. The effects of GNP type (xGnP® M-5 and xGnP® C-500), GNP content, and extruder screw speed on the bulk properties of the PA6/GNP nanocomposites were investigated. Results show a considerable improvement in the thermal and mechanical properties of PA6/GNP composites compared with the unfilled PA6 polymer. An increase in crystallinity (%Xc) with increasing GNP content, and a change in the shape of the crystallization exotherms (broadening) and melting endotherms, both suggest a change in crystal type and perfection. An increase in tensile modulus of as much as 376% and 412% was observed for PA6/M-5 xGnP® and PA6/C-500 xGnP® composites, respectively, at filler contents of 20 wt%. The enhancement of Young's modulus and yield stress can be attributed to the reinforcing effect of the GNPs and their uniform dispersion in the PA6 matrix. The rheological response of the composites resembles that of a 'pseudo-solid' rather than a molten liquid, and analysis of the rheological data indicates that a percolation threshold was reached at GNP contents of between 10 and 15 wt%. The electrical conductivity of the composites also increased with increasing GNP content, with an addition of 15 wt% GNPs resulting in a six order-of-magnitude increase in conductivity. The electrical percolation thresholds of all composites were between 10 and 15 wt%.