327 results for level scheme
Abstract:
Emerging embedded applications are based on evolving standards (e.g., MPEG-2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected at runtime to form a pattern that closely matches the communication pattern of that particular substructure. The advantage is that the CEs so bound together are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantage of an ASIC together with the reconfigurability and programmability of an FPGA or instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set-extensible processors determine, at design time, a sequence of instructions that occurs repeatedly within the application and create custom instructions to speed up its execution. We extend this scheme further: a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (not limited to frequently occurring instruction sequences). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing feature of the majority of emerging embedded applications is stream processing. To reduce the overhead of data transfer between custom instructions, direct communication paths are employed among them. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by a factor of log(n) for an n-point FFT compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area compared to an ASIC. The REDEFINE implementation consumes 0.1x the power of an FPGA implementation, and the configuration overhead of the FPGA implementation is 1,000x more than that of REDEFINE.
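To make the producer-consumer grouping idea concrete, here is a minimal Python sketch, with hypothetical names and a deliberately simple heuristic; it is not the REDEFINE compiler itself, only an illustration of clustering a dataflow graph's operations into "custom instructions" by following producer-consumer edges rather than frequency of occurrence:

from collections import defaultdict

def group_by_producer_consumer(ops, edges, max_group_size=4):
    # ops: list of operation ids; edges: (producer, consumer) pairs.
    # Greedily merges a producer with its sole consumer so that the
    # intermediate value stays inside one composition instead of
    # passing through storage.
    consumers = defaultdict(list)
    for p, c in edges:
        consumers[p].append(c)
    group_of = {op: {op} for op in ops}        # each op starts alone
    for p, c in edges:
        gp, gc = group_of[p], group_of[c]
        if gp is not gc and len(consumers[p]) == 1 \
                and len(gp) + len(gc) <= max_group_size:
            gp |= gc                           # fuse producer-consumer chain
            for op in gc:
                group_of[op] = gp
    seen, groups = set(), []
    for g in group_of.values():
        key = frozenset(g)
        if key not in seen:
            seen.add(key)
            groups.append(sorted(g))
    return groups

print(group_by_producer_consumer(["a", "b", "c", "d"],
                                 [("a", "b"), ("b", "c"), ("d", "c")]))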
Abstract:
Partitional clustering algorithms, which partition the dataset into a pre-defined number of clusters, can be broadly classified into two types: algorithms that explicitly take the number of clusters as input, and algorithms that take the expected size of a cluster as input. In this paper, we propose a variant of the k-means algorithm and prove that it is more efficient than the standard k-means algorithm. An important contribution of this paper is the establishment, through the analysis of our algorithm, of a relation between the number of clusters and the size of the clusters in a dataset. We also demonstrate that integrating this algorithm as a pre-processing step in classification algorithms reduces their running time.
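As a rough illustration of how the two input conventions relate, the following hedged sketch wraps ordinary Lloyd-style k-means so that it accepts an expected cluster size s and derives k from n/s; the derivation rule and all names are assumptions made for illustration, not the paper's algorithm:

import numpy as np

def kmeans_by_cluster_size(X, expected_size, n_iter=50, seed=0):
    n = len(X)
    k = max(1, round(n / expected_size))       # assumed size-to-count relation
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(n, size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)             # assign to nearest center
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
labels, centers = kmeans_by_cluster_size(X, expected_size=100)   # k becomes 2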
Abstract:
We consider the problem of transmitting correlated discrete-alphabet sources over a Gaussian Multiple Access Channel (GMAC). A distributed bit-to-Gaussian mapping is proposed which yields jointly Gaussian codewords. Where feasible, this guarantees lossless transmission, or lossy transmission within given distortion limits. The technique can be extended to systems with side information at the encoders and decoder.
Abstract:
We propose a self-regularized pseudo-time marching scheme to solve the ill-posed, nonlinear inverse problem associated with diffuse propagation of coherent light in a tissue-like object. In particular, in the context of diffuse correlation tomography (DCT), we consider the recovery of mechanical property distributions from partial and noisy boundary measurements of light intensity autocorrelation. We prove the existence of a minimizer for the Newton algorithm after establishing the existence of weak solutions for the forward equation of light amplitude autocorrelation and its Fréchet derivative and adjoint. The asymptotic stability of the solution of the ordinary differential equation obtained through the introduction of the pseudo-time is also analyzed. We show that the asymptotic solution obtained through pseudo-time marching converges to the optimal solution provided the Hessian of the forward equation is positive definite in a neighborhood of that solution. The superior noise tolerance and regularization-insensitive nature of the pseudo-dynamic strategy are demonstrated through numerical simulations in the context of both DCT and diffuse optical tomography. (C) 2010 Optical Society of America.
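The pseudo-time idea can be sketched generically: treat the (regularized) Gauss-Newton direction as the right-hand side of an ODE in pseudo-time and integrate it with explicit Euler steps. The sketch below uses a toy exponential-fit problem, not the paper's DCT forward operators, and all names are illustrative:

import numpy as np

def pseudo_time_march(residual, jacobian, x0, dt=0.1, steps=200, reg=1e-8):
    # dx/dt = -(J^T J + reg*I)^{-1} J^T r(x); the small pseudo-time step dt
    # itself acts as a regularizer, and the march stops at a stationary point
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        r, J = residual(x), jacobian(x)
        dx = -np.linalg.solve(J.T @ J + reg * np.eye(len(x)), J.T @ r)
        x = x + dt * dx
    return x

# toy usage: recover (a, b) from noiseless samples of y = a*exp(-b*t)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-3.0 * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)], axis=1)
print(pseudo_time_march(res, jac, [1.0, 1.0]))   # approaches (2.0, 3.0)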
Abstract:
Pricing is an effective tool for controlling congestion and achieving quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in a network of nodes with a single service class and multiple queues, and we present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for the individual queues at each node. The pricing policy used depends on a weighted average queue length at each node, which helps reduce frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. Our numerical results show a considerable performance improvement over a recently proposed related scheme in terms of both throughput and delay. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent over that scheme in all cases studied (over all routes).
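A minimal sketch of the RED-flavoured mechanism, assuming a hypothetical set of thresholds and price levels: the price is selected from a small set of levels based on an exponentially weighted moving average of the queue length, so transient bursts do not trigger frequent price changes:

def make_price_controller(levels, weight=0.02):
    # levels: list of (avg_queue_threshold, price), sorted by threshold;
    # the thresholds and prices here are hypothetical, not the paper's values
    avg = 0.0
    def price(queue_len):
        nonlocal avg
        avg = (1.0 - weight) * avg + weight * queue_len   # EWMA, as in RED
        chosen = levels[0][1]
        for threshold, p in levels:
            if avg >= threshold:
                chosen = p                                # highest level reached
        return chosen
    return price

price = make_price_controller([(0, 1.0), (10, 2.0), (25, 4.0)])
for q in [0, 5, 30, 30, 30]:
    print(price(q))        # price rises only as the *average* queue builds up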
Abstract:
Non-standard finite difference methods (NSFDMs), introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994], are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws through a novel utilization of the decoupled equations in characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced an NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy for this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, acting like a filter. Relaxation schemes, introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276], are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with better accuracy than the upwind relaxation scheme, as demonstrated by the test cases and by comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
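The composite strategy itself can be sketched for Burgers' equation, with a Richtmyer Lax-Wendroff step standing in for the accurate basic scheme (it is not Mickens' NSFDM) and a locally relaxed Rusanov-type step as the filter; the local relaxation parameter is taken from the three-point stencil, in the spirit described above:

import numpy as np

f = lambda u: 0.5 * u * u                      # Burgers flux

def basic_step(u, dt, dx):
    # two-step Richtmyer Lax-Wendroff: accurate, conservative, low dissipation
    up = np.roll(u, -1)
    u_half = 0.5 * (u + up) - 0.5 * dt / dx * (f(up) - f(u))
    flux = f(u_half)                           # flux at i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

def relaxed_step(u, dt, dx):
    # Rusanov-type step; lam is chosen locally on the three-point stencil
    up = np.roll(u, -1)
    lam = np.maximum(np.abs(u), np.abs(up))    # local relaxation parameter
    flux = 0.5 * (f(u) + f(up)) - 0.5 * lam * (up - u)
    return u - dt / dx * (flux - np.roll(flux, 1))

def composite(u, dt, dx, nsteps, k=4):
    # every k-th step the relaxed scheme filters the accurate scheme
    for i in range(nsteps):
        step = relaxed_step if (i + 1) % k == 0 else basic_step
        u = step(u, dt, dx)
    return u

x = np.linspace(-1.0, 1.0, 200, endpoint=False)
u0 = np.where(x < 0, -0.5, 1.0)                # expansion with a sonic point
u = composite(u0, dt=0.002, dx=x[1] - x[0], nsteps=500)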
Abstract:
The sea level pressure (SLP) variability on 30-60 day intraseasonal timescales is investigated using 25 years of reanalysis data, addressing two issues. The first concerns the non-zero zonal mean component of SLP near the equator and its meridional connections, and the second concerns the fast eastward propagation (EP) speed of SLP compared to that of the zonal wind. It is shown that the entire globe resonates with high-amplitude wave activity during some periods, which may last from a few to several months, followed by lull periods of varying duration. SLP variations in the tropical belt are highly coherent from 25°S to 25°N, uncorrelated with variations in mid-latitudes, and again significantly correlated but with opposite phase around 60°S and 65°N. Near the equator (8°S-8°N), the zonal mean contributes significantly to the total variance in SLP, and after its removal, SLP shows a dominant zonal wavenumber-one structure with a periodicity of 40 days and EP speeds comparable to those of the zonal winds in the Indian Ocean. SLP from many of the atmospheric and coupled general circulation models shows similar behaviour in the meridional direction, although their propagation characteristics in the tropical belt differ widely.
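The diagnostic described here reduces to: remove the zonal mean on a latitude circle, project onto zonal wavenumber one, and track its phase to estimate the eastward propagation speed. A hedged sketch with synthetic data (function names are illustrative, not from the paper):

import numpy as np

def wavenumber_one_speed(slp, dt_days=1.0):
    # slp: array (time, longitude) of 30-60 day filtered SLP on one latitude
    anomaly = slp - slp.mean(axis=1, keepdims=True)       # remove zonal mean
    k1 = np.fft.rfft(anomaly, axis=1)[:, 1]               # zonal wavenumber one
    phase = np.unwrap(np.angle(k1))
    return np.abs(k1), -np.degrees(np.gradient(phase, dt_days))  # eastward > 0

# synthetic check: an eastward wavenumber-one wave with a 40-day period
lons = np.deg2rad(np.arange(0.0, 360.0, 2.5))
t = np.arange(120.0)[:, None]
slp = np.cos(lons[None, :] - 2.0 * np.pi * t / 40.0)
amp, speed = wavenumber_one_speed(slp)
print(speed.mean())                            # ~9 degrees of longitude per day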
Abstract:
STOAT has been used extensively for the dynamic simulation of an activated sludge-based wastewater treatment plant at the Titagarh Sewage Treatment Plant, near Kolkata, India. Some alternative schemes were suggested, and the different schemes were compared for the removal of Total Suspended Solids (TSS), b-COD, ammonia, nitrates, etc. A combination of the IAWQ#1 module with the Takacs module gave the best results for the existing scenarios of the Titagarh Sewage Treatment Plant. The modified Bardenpho process was found most effective for reducing the mean b-COD level, to as low as 31.4 mg/l, although the mean TSS level remained as high as 100.98 mg/l, compared with the mean levels of TSS (92.62 mg/l) and b-COD (92.0 mg/l) in the existing plant. Scheme 2 gave a better outcome for the mean TSS level, bringing it down to 0.4 mg/l, but a higher mean b-COD level of 54.89 mg/l. The Final Scheme could reduce the mean TSS level to 2.9 mg/l and the mean b-COD level to as low as 38.8 mg/l, and it appears to be technically viable with respect to the overall effluent quality of the plant. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing, and there is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the points of view of area, power, and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
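A behavioral sketch of why double sampling can report false errors while a voting scheme need not; the timing model is deliberately toy-level (signals reduced to a few sampled values) and is only an illustration of the comparison made above:

def double_sampling_error(v_main, v_shadow):
    # error flagged whenever the main and delayed (shadow) samples disagree;
    # a glitch corrupting only one sample is indistinguishable from a real
    # error, hence the false positives
    return v_main != v_shadow

def triple_sampling_error(samples):
    # three samples plus a majority vote: an isolated corrupted sample is
    # outvoted, so an error is flagged only when the first (speculatively
    # used) sample differs from the voted value
    voted = max(set(samples), key=samples.count)
    return samples[0] != voted

print(double_sampling_error(1, 0))         # True (may be a false alarm)
print(triple_sampling_error([1, 1, 0]))    # False (glitch on one sample outvoted)
print(triple_sampling_error([0, 1, 1]))    # True (first sample was corrupted)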
Abstract:
The methylotrophic yeast Pichia pastoris is widely used for the production of recombinant glycoproteins. With the aim of generating biologically active 15N-labeled glycohormones for conformational studies focused on unravelling their NMR structures in solution, the P. pastoris strains GS115 and X-33 were explored for the expression of human chorionic gonadotropin (phCG) and human follicle-stimulating hormone (phFSH). In agreement with recent investigations of the N-glycosylation of phCG produced in P. pastoris GS115 using ammonia/glycerol-methanol as nitrogen/carbon sources, the N-glycosylation pattern of phCG synthesized using NH4Cl/glucose–glycerol–methanol comprised neutral and charged, phosphorylated high-mannose-type N-glycans (Man8–15GlcNAc2). However, the changed culturing protocol yielded much larger amounts of glycoprotein material, which is important for an economically realistic approach to the intended NMR research. In the context of these studies, attention was also paid to the site-specific N-glycosylation of phCG produced in P. pastoris GS115. In contrast to the rather simple N-glycosylation pattern of phCG expressed in the GS115 strain, phCG and phFSH expressed in the X-33 strain revealed, besides neutral high-mannose-type N-glycans, high concentrations of neutral hypermannose-type N-glycans (Man(up-to-30)GlcNAc2). The latter finding made the X-33 strain less suitable for generating 15N-labeled material. Therefore, 15N-phCG was expressed in the GS115 strain using the new optimized protocol. The 15N enrichment was evaluated by 15N-HSQC NMR spectroscopy and GLC-EI/MS. Circular dichroism studies indicated that 15N-phCG/GS115 had the same folding as urinary hCG. Furthermore, 15N-phCG/GS115 was found to be similar to the unlabeled protein in every respect, as judged by radioimmunoassay, radioreceptor assays, and in vitro bioassays.
Abstract:
In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict every central pixel of the image from its four neighboring pixels. The prediction scheme generates a predicted-error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. In the quantization phase, we use a modified SPIHT algorithm to achieve efficiency in memory requirements; the memory constraint plays a vital role in wireless and bandwidth-limited applications. A single reusable list is used instead of the three continuously growing linked lists of SPIHT, and the method is error resilient. Performance is measured in terms of PSNR and memory requirements, and the algorithm shows good compression performance and significant savings in memory. (C) 2006 Elsevier B.V. All rights reserved.
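A small sketch of the prediction phase, assuming the four-neighbor predictor is their average (the abstract does not fix the predictor, so that choice and the names are assumptions); the coder then operates on the resulting error image:

import numpy as np

def four_neighbor_error_image(img):
    img = img.astype(np.int32)
    pred = img.copy()
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return img - pred      # error image passed to the wavelet/sub-band coder

err = four_neighbor_error_image(np.arange(25).reshape(5, 5))
print(err)                 # smooth regions predict well: small interior errors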
Abstract:
A novel dodecagonal space vector structure for induction motor drives is presented in this paper. It consists of two dodecagons, with the radius of the outer one twice that of the inner one. Compared with existing dodecagonal space vector structures, the proposed topology lowers the switching frequency of the inverters and halves the device ratings while achieving the same PWM output voltage quality. At the same time, the other benefits of the existing dodecagonal space vector structure are retained, including the extension of the linear modulation range and the elimination of all 6n±1 harmonics (n odd) from the phase voltage. The proposed structure is realized by feeding an open-end winding induction motor with two conventional three-level inverters. A detailed calculation of the PWM timings for switching the space vector points is also presented. Simulation and experimental results indicate the suitability of the proposed idea for high-power drives.
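The dwell-time calculation follows the usual space-vector sine rule, with the sector angle changed from the hexagonal 60 degrees to 30 degrees. A sketch, assuming equal-magnitude vertices on one dodecagon (the full two-dodecagon timing calculation in the paper is more involved):

import math

def dodecagonal_dwell_times(v_ref, angle_deg, v_vertex, T_s):
    # returns (T1, T2, T0): dwell times on the sector's two boundary vectors
    # and the leftover time in one switching period T_s
    sector = int(angle_deg // 30.0)            # twelve sectors of 30 degrees
    alpha = math.radians(angle_deg - 30.0 * sector)
    m = v_ref / v_vertex                       # modulation depth
    s30 = math.sin(math.radians(30.0))
    t1 = T_s * m * math.sin(math.radians(30.0) - alpha) / s30
    t2 = T_s * m * math.sin(alpha) / s30
    return t1, t2, T_s - t1 - t2

print(dodecagonal_dwell_times(0.8, 47.0, 1.0, T_s=1.0))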
Abstract:
In this paper, we study the behaviour of the slotted Aloha multiple access scheme with a finite number of users under different traffic loads, and we optimize the retransmission probability q_r for various settings, cost objectives, and policies. First, we formulate the problem as a parameter optimization problem and use efficient smoothed functional algorithms for finding the optimal retransmission probability. Next, we propose two classes of multi-level closed-loop feedback policies (in which the retransmission probability q_r depends on the current system state) and apply the above algorithms for finding an optimal policy within each class. One policy class depends on the number of backlogged nodes in the system, while the other depends on the number of time slots since the last successful transmission. The latter policies are more realistic, as it is difficult to keep track of the number of backlogged nodes at each instant. We investigate the effect of increasing the number of levels in the feedback policies. We also investigate the effect of using different cost functions (with and without penalization) in our algorithms and the corresponding changes in throughput and delay. Both of our algorithms use two-timescale stochastic approximation; one uses a single simulation of the system, while the other uses two. The two-simulation algorithm is seen to perform better than the other algorithm, optimal multi-level closed-loop policies are seen to perform better than optimal open-loop policies, and performance improves further when more levels are used in the feedback policies.
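A one-simulation smoothed functional recursion for q_r can be sketched as below, with a toy slotted-Aloha cost in place of a full system simulation (with N backlogged nodes, a slot succeeds with probability N*q*(1-q)^(N-1)); step sizes and constants are illustrative assumptions, not the paper's tuned algorithm:

import numpy as np

def sf_optimize_qr(simulate_cost, q0=0.5, beta=0.05, iters=5000, seed=1):
    rng = np.random.default_rng(seed)
    q, grad = q0, 0.0
    for n in range(1, iters + 1):
        eta = rng.standard_normal()            # Gaussian smoothing perturbation
        cost = simulate_cost(float(np.clip(q + beta * eta, 0.01, 0.99)))
        a, b = 1.0 / n, 1.0 / n ** 0.6         # two timescales (a slower than b)
        grad += b * (eta * cost / beta - grad)  # fast: smoothed gradient estimate
        q = float(np.clip(q - a * grad, 0.01, 0.99))  # slow: parameter update
    return q

# toy cost: penalize low success probability for N backlogged nodes
N = 10
cost = lambda q: 1.0 / (N * q * (1 - q) ** (N - 1) + 0.05)
print(sf_optimize_qr(cost))                    # should drift toward q ~ 1/N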
Abstract:
Bandwidth allocation for multimedia applications during network congestion and failure poses technical challenges due to the bursty and delay-sensitive nature of these applications. The growth of multimedia services on the Internet and the development of agent technology have led us to investigate new techniques for resolving bandwidth issues in multimedia communications. Agent technology is emerging as a flexible and promising solution for network resource management and QoS (Quality of Service) control in a distributed environment. In this paper, we propose an adaptive bandwidth allocation scheme for multimedia applications that deploys static and mobile agents. It is a run-time allocation scheme that functions at the network nodes. The technique adaptively finds an alternate patch-up route for every congested/failed link and reallocates the bandwidth for the affected multimedia applications. The designed method has been tested (analytically and through simulation) with various network sizes and conditions, and the results are presented to assess the performance and effectiveness of the approach. This work also demonstrates some of the benefits of agent-based schemes in providing flexibility, adaptability, software reusability, and maintainability. (C) 2004 Elsevier Inc. All rights reserved.
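The patch-up step can be illustrated as a residual-bandwidth-constrained search between the failed link's endpoints. In the paper this logic is carried by static and mobile agents at the nodes; here it is a plain function with hypothetical names, only to show the idea:

from collections import deque

def patchup_route(adj, bw, failed, demand):
    # adj: node -> neighbours; bw: (u, v) -> residual bandwidth;
    # failed: the congested/failed link; demand: bandwidth to reallocate
    src, dst = failed
    parent, seen, q = {src: None}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                           # rebuild the detour
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            edge = (u, v) if (u, v) in bw else (v, u)
            if v not in seen and edge != failed and bw.get(edge, 0) >= demand:
                seen.add(v)
                parent[v] = u
                q.append(v)
    return None                                # no feasible patch-up route

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
bw = {("A", "B"): 5, ("A", "C"): 8, ("B", "D"): 5, ("C", "D"): 8}
print(patchup_route(adj, bw, failed=("A", "B"), demand=4))   # ['A', 'C', 'D', 'B']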