79 results for Large detector-systems performance
Abstract:
The memory subsystem is a major contributor to the performance, power, and area of complex SoCs used in feature-rich multimedia products. Hence, the memory architecture of an embedded DSP is complex and usually custom designed, with multiple banks of single-ported or dual-ported on-chip scratch-pad memory and multiple banks of off-chip memory. Building software for such large, complex memories, with many of the software components delivered as individually optimized software IPs, is a big challenge. In order to obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully in the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA), with performance and power as the objectives, and produces a set of solution points that is attractive from a platform-design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
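The abstract does not describe MODLEX at the implementation level; the following is a minimal illustrative sketch, under assumed cost models, of how a genetic algorithm might place data buffers into memory banks while trading cycles against energy. All bank names, access costs, and GA settings here are hypothetical.

```python
# Illustrative sketch only: a tiny multi-objective GA that places data buffers
# into memory banks while trading cycles against energy. Bank names, access
# costs and GA settings are hypothetical, not taken from the MODLEX framework.
import random

BANKS = ["sram_sp0", "sram_dp0", "ext_dram"]            # candidate memories
ACCESS_CYCLES = {"sram_sp0": 1, "sram_dp0": 1, "ext_dram": 10}
ACCESS_ENERGY = {"sram_sp0": 1.0, "sram_dp0": 1.4, "ext_dram": 6.0}
BUFFER_ACCESSES = [5000, 12000, 800, 30000, 2500]       # accesses per buffer

def cycles_cost(layout):
    return sum(n * ACCESS_CYCLES[b] for n, b in zip(BUFFER_ACCESSES, layout))

def energy_cost(layout):
    return sum(n * ACCESS_ENERGY[b] for n, b in zip(BUFFER_ACCESSES, layout))

def dominates(a, b):
    return (cycles_cost(a) <= cycles_cost(b) and energy_cost(a) <= energy_cost(b)
            and (cycles_cost(a) < cycles_cost(b) or energy_cost(a) < energy_cost(b)))

def evolve(pop_size=40, generations=100):
    pop = [[random.choice(BANKS) for _ in BUFFER_ACCESSES] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            cut = random.randrange(len(p1))
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            if random.random() < 0.2:                    # point mutation
                child[random.randrange(len(child))] = random.choice(BANKS)
            children.append(child)
        # crude selection: rank by an equal-weight sum of the two objectives
        pop = sorted(pop + children,
                     key=lambda l: cycles_cost(l) + energy_cost(l))[:pop_size]
    # report only the non-dominated (Pareto) layouts
    return [l for l in pop if not any(dominates(o, l) for o in pop if o != l)]

if __name__ == "__main__":
    for layout in evolve():
        print(layout, cycles_cost(layout), energy_cost(layout))
```

In a real flow, the two objectives would be kept separate during selection (e.g., NSGA-II-style non-dominated sorting) rather than collapsed into the weighted sum used in this toy selection step.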
Abstract:
Based on dynamic inversion, a relatively straightforward approach is presented in this paper for nonlinear flight control design of high-performance aircraft, which does not require the normal and lateral acceleration commands to be first transferred to body rates before computing the required control inputs. This leads to a substantial improvement in the tracking response. Promising results are obtained from six-degree-of-freedom simulation studies of an F-16 aircraft, and they are found to be superior to an existing approach (also based on dynamic inversion). The new approach has two potential benefits, namely reduced oscillatory response (including elimination of non-minimum-phase behavior) and reduced control magnitude. Next, a model-following neuron-adaptive design is used to augment the nominal design in order to assure robust performance in the presence of parameter inaccuracies in the model. Note that in this approach the model update takes place adaptively online, so it is philosophically similar to indirect adaptive control. However, unlike a typical indirect adaptive control approach, there is no need to update the individual parameters explicitly. Instead, the inaccuracy in the system output dynamics is captured directly and then used to modify the control. This leads to faster adaptation, which helps stabilize the unstable plant more quickly. The robustness study, based on a large number of simulations, shows that the adaptive design has a good degree of robustness with respect to the expected parameter inaccuracies in the model.
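As background for the approach described above, a generic dynamic-inversion control law for output dynamics that are affine in the control has the following form; this is the textbook construction, not the paper's specific F-16 design, and the gain K and pseudo-control nu are placeholders.

```latex
% Output dynamics assumed affine in the control: \dot{y} = f_y(x) + g_y(x)\,u.
% Imposing first-order tracking-error dynamics via a pseudo-control \nu:
\begin{aligned}
  \nu &= \dot{y}_{\mathrm{des}} + K\,\bigl(y_{\mathrm{des}} - y\bigr),\\
  u   &= g_y(x)^{-1}\,\bigl(\nu - f_y(x)\bigr),
\end{aligned}
% so the tracking error e = y_{\mathrm{des}} - y obeys \dot{e} + K e = 0.
```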
Abstract:
Modeling the performance behavior of parallel applications in order to predict execution times for larger problem sizes and numbers of processors has been an active area of research for several years. Existing curve-fitting strategies for performance modeling use data from experiments conducted under uniform loading conditions; hence the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times under any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
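The abstract does not specify the rational-polynomial form used; the sketch below fits a simple ratio of low-order polynomials in problem size and load, with hypothetical measurement arrays, just to show the mechanics of this kind of model.

```python
# Illustrative sketch: fitting a low-order rational polynomial to predict
# execution time from problem size n and a load metric l. The functional
# form and the sample data are hypothetical, not the paper's model.
import numpy as np
from scipy.optimize import curve_fit

def rational_model(X, a0, a1, a2, b1, b2):
    n, l = X
    num = a0 + a1 * n + a2 * n * l
    den = 1.0 + b1 * l + b2 * n
    return num / den

# hypothetical measurements: (problem size, load average) -> runtime in seconds
sizes = np.array([1000, 1000, 2000, 2000, 4000, 4000, 8000, 8000], dtype=float)
loads = np.array([0.1, 1.5, 0.1, 1.5, 0.2, 2.0, 0.2, 2.0])
times = np.array([1.1, 2.3, 2.4, 4.9, 5.2, 11.5, 11.0, 24.0])

params, _ = curve_fit(rational_model, (sizes, loads), times,
                      p0=np.ones(5), maxfev=10000)
predicted = rational_model((np.array([16000.0]), np.array([1.0])), *params)
print("predicted runtime:", predicted[0])
```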
Abstract:
This paper presents the design and performance analysis of a detector based on suprathreshold stochastic resonance (SSR) for the detection of deterministic signals in heavy-tailed non-Gaussian noise. The detector consists of a matched filter preceded by an SSR system which acts as a preprocessor. The SSR system is composed of an array of 2-level quantizers with independent and identically distributed (i.i.d.) noise added to the input of each quantizer. The standard deviation σ of the quantizer noise is chosen to maximize the detection probability for a given false alarm probability. In the case of a weak signal, the optimum σ also minimizes the mean-square difference between the output of the quantizer array and the output of the nonlinear transformation of the locally optimum detector. The optimum σ depends only on the probability density functions (pdfs) of the input noise and quantizer noise for weak signals, and also on the signal amplitude and the false alarm probability for non-weak signals. Improvement in detector performance stems primarily from quantization and to a lesser extent from the optimization of quantizer noise. For most input noise pdfs, the performance of the SSR detector is very close to that of the optimum detector. (C) 2012 Elsevier B.V. All rights reserved.
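The following is a minimal sketch of the detector structure described above (a quantizer array with added i.i.d. noise whose summed output feeds a matched filter). The signal, noise model, and σ value are illustrative; in the paper, σ is optimized for a given false-alarm probability rather than fixed as here.

```python
# Minimal sketch of a suprathreshold-stochastic-resonance (SSR) front end
# followed by a matched filter. Signal, noise model and sigma are illustrative;
# in the paper sigma is optimized for a given false-alarm probability.
import numpy as np

rng = np.random.default_rng(0)

def ssr_preprocess(x, n_quantizers=64, sigma=0.5):
    # Each 2-level quantizer sees the input plus independent noise;
    # the array output is the (normalized) count of quantizers above threshold.
    noise = sigma * rng.standard_normal((n_quantizers, x.size))
    return np.mean(np.sign(x[None, :] + noise), axis=0)

def matched_filter_statistic(y, template):
    return float(np.dot(y, template))

# toy example: weak known signal in heavy-tailed (Laplacian) noise
template = np.sin(2 * np.pi * 0.05 * np.arange(200))
signal = 0.2 * template
noise = rng.laplace(scale=1.0, size=200)

stat_h1 = matched_filter_statistic(ssr_preprocess(signal + noise), template)
stat_h0 = matched_filter_statistic(ssr_preprocess(noise), template)
print("statistic under H1:", stat_h1, " under H0:", stat_h0)
```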
Abstract:
In the present paper, the ultrasonic strain sensing performance of a large-area piezoceramic coating with interdigital transducer (IDT) electrodes is studied. The piezoceramic coating is prepared using a slurry coating technique and the piezoelectric phase is achieved by poling under a DC field. To study the sensing performance of the piezoceramic coating with IDT electrodes for strain induced by guided waves, the piezoceramic coating is fabricated on the surface of a beam specimen at one end and ultrasonic guided waves are launched with a piezoelectric wafer bonded at the other end. A wide frequency band of operation is often needed for the effective implementation of such sensors in the Structural Health Monitoring (SHM) of various structures and for different types of damage. A wider frequency band of operation is achieved in the present study by varying the number of IDT electrodes contributing to the voltage generated by the induced dynamic strain. In the present work, the fabricated piezoceramic coatings with IDT electrodes have been characterized for dynamic strain sensing applications using the guided wave technique at various frequencies. The strain level of the launched guided wave is varied by varying the magnitude of the input voltage sent to the actuator. The variation in sensitivity with the strain level of the guided wave is studied for combinations of different numbers of IDT electrodes. The piezoelectric coefficient e11 is determined at different frequencies and at different strain levels using the guided wave technique.
Abstract:
The presence of software bloat in large flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort intensive. To enable such efforts to be directed where there is a substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECPower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy-proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between resource pressure caused by bloat and its energy efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact and instead provides a full systems approach for reasoning about its implications.
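The abstract introduces "equi-performance power reduction" without giving a formula; one plausible reading, sketched below, compares power at matched throughput by interpolating each configuration's power-versus-throughput curve. The data points and the interpolation choice are hypothetical, not the paper's measurements or definition.

```python
# Hedged sketch of an "equi-performance power reduction" comparison:
# interpolate power vs. throughput for a bloated and a bloat-reduced build,
# then compare power at the same delivered throughput. Data are made up.
import numpy as np

# (throughput in ssj_ops, power in watts) -- hypothetical measurement points
bloated_tput  = np.array([100e3, 200e3, 300e3, 400e3])
bloated_power = np.array([180.0, 220.0, 270.0, 330.0])
reduced_tput  = np.array([100e3, 200e3, 300e3, 400e3, 500e3])
reduced_power = np.array([170.0, 200.0, 235.0, 280.0, 340.0])

def power_at(tput, tputs, powers):
    return float(np.interp(tput, tputs, powers))

target = 300e3  # operate both builds at the same throughput
p_bloated = power_at(target, bloated_tput, bloated_power)
p_reduced = power_at(target, reduced_tput, reduced_power)
print("equi-performance power reduction: %.1f%%"
      % (100.0 * (p_bloated - p_reduced) / p_bloated))
```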
Abstract:
We develop a communication-theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) for the channel considering several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio for maximizing SNR. The read-channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer detector indicates approximately 5.5 dB of SNR gain over uncoded data.
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Seismic hazard analysis based on the probabilistic method clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for liquefaction analysis – starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard, through the performance-based method to evaluate the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from the two methods is presented. Over an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from the PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was carried out based on data from 450 boreholes in the study area. The results obtained with the CPT data match well with those from the corresponding analysis with SPT data.
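In performance-based liquefaction frameworks of the kind referred to above, the annual rate at which the factor of safety against liquefaction falls below a target value is typically obtained by summing over the ground-motion hazard from the PSHA. A generic form is shown below; the notation is illustrative rather than the article's own.

```latex
% Generic performance-based liquefaction hazard sum (notation illustrative):
% \Lambda_{FS^{*}} is the mean annual rate that the factor of safety FS_L
% drops below a target FS^{*}; \Delta\lambda_{a_j,m_i} is the incremental
% hazard contribution of acceleration a_j and magnitude m_i from the PSHA.
\Lambda_{FS^{*}} \;=\; \sum_{i=1}^{N_m}\sum_{j=1}^{N_a}
  P\!\left[\,FS_L < FS^{*} \,\middle|\, a_j, m_i\,\right]\,
  \Delta\lambda_{a_j, m_i},
\qquad
T_R \;=\; \frac{1}{\Lambda_{FS^{*}}}
```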
Abstract:
Shock-Boundary Layer Interaction (SBLI) often occurs in supersonic/hypersonic flow fields. Especially when accompanied by separation (termed strong interaction), SBLI phenomena largely affect the performance of the systems in which they occur, such as scramjet intakes, and therefore often demand control of the interaction. Experiments on the strong interaction between an impinging shock wave and the boundary layer on a flat plate at Mach 5.96 are carried out in the IISc hypersonic shock tunnel HST-2. The experiments are performed at a moderate flow total enthalpy of 1.3 MJ/kg and a freestream Reynolds number of 4 million/m. The strong shock generated by a wedge (or shock generator) at a large angle of 30.96 degrees to the freestream is made to impinge on the flat plate at 95 mm (inviscid estimate) from the leading edge, due to which a large separation bubble of length (75 mm) comparable to the distance of shock impingement from the leading edge is generated. The experimental simulation of such a large separation bubble, with separation occurring close to the leading edge, and its control using boundary layer bleed (suction and tangential blowing) at the location of separation are demonstrated within the short test time of the shock tunnel (~600 μs) from time-resolved schlieren flow visualizations and surface pressure measurements. By means of suction – with a mass flow rate one order of magnitude lower than the mass flow defect in the boundary layer – a reduction in separation length of 13.33% was observed. By injecting an array of (nearly) tangential jets in the direction of the mainstream (from the bottom of the plate) at the location of separation – with a momentum flow rate one order of magnitude lower than the boundary layer momentum flow defect – a 20% reduction in separation length was observed, although the flow field was apparently unsteady. (C) 2014 Elsevier Masson SAS. All rights reserved.
Abstract:
The performance of metal hydride based solid sorption cooling systems depends on the driving pressure differential and the rate of hydrogen transfer between coupled metal hydride beds during the cooling and regeneration processes. Conventionally, the mid-plateau pressure difference obtained from 'static' equilibrium PCT data is used for the thermodynamic analysis. It is well known that the processes are 'dynamic', because the pressure and temperature, and hence the concentration of the hydride beds, are continuously changing. Keeping this in mind, the La0.9Ce0.1Ni5 - LaNi4.7Al0.3 pair of metal hydrides suitable for solid sorption cooling systems was characterised using both static and dynamic methods. It was found that the PCT characteristics, and the resulting enthalpy (ΔH) and entropy (ΔS) values, were significantly different for static and dynamic modes of measurement. In the present study, the solid sorption metal hydride cooling system is analysed taking into account the actual variation in the pressure difference (ΔP) and the dynamic enthalpy values. Compared to the 'static' property based analysis, a significant decrease in the driving potentials and the transferable amount of hydrogen is observed, leading to a decrease in cooling capacity of 57.8% and in coefficient of performance of 31.9% when dynamic PCT data at a flow rate of 80 ml/min are considered. Copyright 2014 (C) Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
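For context, the equilibrium plateau pressures that enter this kind of analysis are commonly related to ΔH and ΔS through the van't Hoff equation; the generic statement below is not specific to the alloy pair or to the static/dynamic values reported above.

```latex
% van't Hoff relation linking plateau pressure to reaction enthalpy and entropy
% (P_0 is a reference pressure, R the gas constant, T the bed temperature):
\ln\!\left(\frac{P_{\mathrm{eq}}}{P_0}\right)
  \;=\; -\,\frac{\Delta H}{R\,T} \;+\; \frac{\Delta S}{R}
% The driving potential between coupled beds is then
% \Delta P = P_{\mathrm{eq,desorbing}} - P_{\mathrm{eq,absorbing}}.
```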
Abstract:
Spatial modulation (SM) is attractive for multi-antenna wireless communications. SM uses multiple transmit antenna elements but only one transmit radio frequency (RF) chain. In SM, in addition to the information bits conveyed through conventional modulation symbols (e.g., QAM), the index of the active transmit antenna also conveys information bits. In this paper, we establish that SM has a significant signal-to-noise ratio (SNR) advantage over conventional modulation in large-scale multiuser multiple-input multiple-output (MIMO) systems. Our new contribution in this paper addresses the key issue of large-dimension signal processing at the base station (BS) receiver (e.g., signal detection) in large-scale multiuser SM-MIMO systems, where each user is equipped with multiple transmit antennas (e.g., 2 or 4 antennas) but only one transmit RF chain, and the BS is equipped with tens to hundreds of (e.g., 128) receive antennas. Specifically, we propose two novel algorithms for the detection of large-scale SM-MIMO signals at the BS; one is based on message passing and the other is based on local search. The proposed algorithms achieve very good performance and scale well. For the same spectral efficiency, multiuser SM-MIMO outperforms conventional multiuser MIMO (recently referred to as massive MIMO) by several dB. The SNR advantage of SM-MIMO over massive MIMO can be attributed to two factors: (i) because of the spatial index bits, SM-MIMO can use a lower-order QAM alphabet than massive MIMO to achieve the same spectral efficiency, and (ii) for the same spectral efficiency and QAM size, massive MIMO needs more spatial streams per user, which leads to increased spatial interference.
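The per-user rate of SM combines antenna-index bits and modulation bits, which is why it can use a smaller QAM alphabet at the same spectral efficiency. The small sketch below shows only this generic bookkeeping, not the paper's message-passing or local-search detectors.

```python
# Generic spatial-modulation bookkeeping (not the paper's detection algorithms):
# per channel use, an SM user sends log2(n_tx) antenna-index bits plus
# log2(M) modulation bits, so it matches a conventional-modulation rate
# with a smaller QAM alphabet.
from math import log2

def sm_bits_per_use(n_tx_antennas, qam_size):
    return log2(n_tx_antennas) + log2(qam_size)

def conventional_bits_per_use(qam_size):
    return log2(qam_size)

# Example: 4 transmit antennas with 4-QAM gives the same 4 bits/channel use
# per user as conventional single-antenna 16-QAM.
assert sm_bits_per_use(4, 4) == conventional_bits_per_use(16) == 4.0
print("SM (4 antennas, 4-QAM):", sm_bits_per_use(4, 4), "bits/use")
print("Conventional 16-QAM   :", conventional_bits_per_use(16), "bits/use")
```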
Abstract:
The detection efficiency of a gaseous photomultiplier depends on the photocathode quantum efficiency and the extraction efficiency of photoelectrons into the gas. In this paper we study the performance of a UV photon detector with P10 gas, in which the extraction efficiency can reach values close to those in vacuum-operated devices. Simulations have been carried out to compare the percentage of photoelectrons backscattered in P10 gas and in the widely used neon-based gas mixture. The performance study has been carried out using a single-stage thick gas electron multiplier (THGEM). The electron pulses and electron spectra are recorded under various operating conditions. Secondary effects prevailing in UV photon detectors, such as photon feedback, are discussed, and their effect on the electron spectrum under different operating conditions is analyzed. (C) 2014 Chinese Laser Press
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial because of large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter), a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
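The abstract describes SPSA-based primal descent combined with dual ascent on the Lagrange multiplier; the sketch below shows only the core first-order SPSA gradient estimate with a simple projection. The cost, constraint, and projection are hypothetical stand-ins, whereas the paper estimates costs from simulations of the service system and uses a smooth projection onto the discrete staffing set.

```python
# Minimal SPSA sketch for a constrained staffing problem. The cost, constraint
# and projection used here are hypothetical stand-ins; the paper estimates
# costs from simulations of the service system and uses a smooth projection
# onto the discrete set of staffing levels.
import numpy as np

rng = np.random.default_rng(1)

def cost(theta):            # stand-in single-stage cost (e.g., idle workers)
    return float(np.sum((theta - np.array([12.0, 7.0, 20.0])) ** 2))

def constraint(theta):      # stand-in SLA constraint, required <= 0
    return 35.0 - float(np.sum(theta))

def project(theta):         # crude projection onto a feasible staffing range
    return np.clip(theta, 1.0, 50.0)

theta = np.array([5.0, 5.0, 5.0])   # continuous-valued staffing parameter
lam = 0.0                           # Lagrange multiplier
for k in range(1, 501):
    a_k, c_k = 0.5 / k, 1.0 / k ** 0.25
    delta = rng.choice([-1.0, 1.0], size=theta.size)      # Rademacher perturbation

    def lagrangian(t):
        return cost(t) + lam * constraint(t)

    g_hat = (lagrangian(project(theta + c_k * delta))
             - lagrangian(project(theta - c_k * delta))) / (2.0 * c_k * delta)
    theta = project(theta - a_k * g_hat)                   # primal descent
    lam = max(0.0, lam + a_k * constraint(theta))          # dual ascent

print("staffing levels:", np.rint(theta), "multiplier:", lam)
```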
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where no more than one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels, and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue among all scheduling algorithms that base their decision on the current (or any finite past history of) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue in the absence of any channel state information is large-deviations optimal.
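For reference, the sketch below illustrates the two-step Max-Exp idea: choose an observable subset from the current queue lengths, then serve a flow within it using one common form of the Exponential rule (the exact parameters and normalization vary across the literature, and the subset-selection heuristic here is illustrative rather than the paper's specification).

```python
# Illustrative sketch of the Max-Exp idea: pick an observable subset using
# current queue lengths, then apply (one common form of) the Exponential rule
# within that subset. Constants and the subset-selection heuristic are
# illustrative, not the paper's exact specification.
import math
import random

def pick_subset(subsets, queues):
    # heuristic: observe the subset containing the longest current queue
    return max(subsets, key=lambda s: max(queues[i] for i in s))

def exponential_rule(subset, queues, rates, a=1.0, beta=1.0, eta=0.5):
    qbar = sum(queues[i] for i in subset) / len(subset)
    score = lambda i: rates[i] * math.exp(a * queues[i] / (beta + qbar ** eta))
    return max(subset, key=score)

queues = [7, 2, 11, 4]                       # current queue lengths
subsets = [(0, 1), (2, 3)]                   # fixed collection of observable subsets
chosen = pick_subset(subsets, queues)
rates = {i: random.uniform(0.5, 2.0) for i in chosen}   # observed only for the subset
served = exponential_rule(chosen, queues, rates)
print("observed subset:", chosen, "serve flow:", served)
```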