151 results for unified framework
at Indian Institute of Science - Bangalore - India
Abstract:
The memory subsystem is a major contributor to the performance, power, and area of complex SoCs used in feature-rich multimedia products. Hence, the memory architecture of the embedded DSP is complex and usually custom designed with multiple banks of single-ported or dual-ported on-chip scratch-pad memory and multiple banks of off-chip memory. Building software for such large, complex memories, with many of the software components being individually optimized software IPs, is a big challenge. To obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully in the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA) with performance and power as the objectives and presents a set of solution points that is attractive from a platform design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
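The performance/power trade-off described above can be illustrated with a toy Pareto-front computation. The memory types, their costs, and the buffer access counts below are hypothetical, and exhaustive enumeration stands in for the paper's genetic algorithm; this is a sketch of the multi-objective idea, not MODLEX itself.

```python
import itertools

# Hypothetical memory types: (access_cycles, energy_per_access)
MEMORY = {"SPRAM": (1, 5.0), "DPRAM": (1, 8.0), "EXT": (10, 2.0)}
# Hypothetical data buffers: name -> number of accesses
BUFFERS = {"coeffs": 1000, "samples": 4000, "scratch": 2000}

def evaluate(layout):
    """Total cycles and energy for a buffer-to-memory assignment."""
    cycles = sum(MEMORY[layout[b]][0] * n for b, n in BUFFERS.items())
    energy = sum(MEMORY[layout[b]][1] * n for b, n in BUFFERS.items())
    return cycles, energy

def pareto_front(points):
    """Keep the points not weakly dominated by any distinct point."""
    return sorted({p for p in points
                   if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                              for q in points)})

layouts = [dict(zip(BUFFERS, combo))
           for combo in itertools.product(MEMORY, repeat=len(BUFFERS))]
points = [evaluate(layout) for layout in layouts]
for cycles, energy in pareto_front(points):
    print(cycles, energy)
```

The printed set of non-dominated (cycles, energy) points is what a platform designer would choose among; a GA such as the one in the paper approximates this front without enumerating every layout.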
Abstract:
A computational study of convergence acceleration for Euler and Navier-Stokes computations with upwind schemes has been conducted in a unified framework. It involves the flux-vector splitting algorithms due to Steger-Warming and Van Leer, the flux-difference splitting algorithms due to Roe and Osher, and the hybrid algorithms AUSM (Advection Upstream Splitting Method) and HUS (Hybrid Upwind Splitting). Implicit time integration with line Gauss-Seidel relaxation and multigrid are among the procedures which have been systematically investigated on an individual as well as cumulative basis. The upwind schemes have been tested in various implicit-explicit operator combinations so that the optimal combination can be determined, based on extensive computations of two-dimensional flows in subsonic, transonic, supersonic, and hypersonic regimes. In this study, the performance of these implicit time-integration procedures has been systematically compared with that of a multigrid-accelerated explicit Runge-Kutta method. It has been demonstrated that a multigrid method employed in conjunction with an implicit time-integration scheme yields convergence distinctly superior to that of either acceleration procedure alone, provided that effective smoothers, identified in this investigation, are prescribed in the implicit operator.
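As a minimal sketch of one of the schemes named above, the Van Leer flux-vector splitting for the 1D Euler equations can be written down directly from its standard subsonic formulas; the flow state used below is illustrative, and the splitting's defining consistency property F = F+ + F- is what the sketch checks.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def euler_flux(rho, u, p):
    """Exact 1D Euler flux (mass, momentum, energy)."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u
    return (rho * u, rho * u * u + p, u * (E + p))

def van_leer_split(rho, u, p):
    """Van Leer flux-vector splitting F = F+ + F- (standard formulas)."""
    a = math.sqrt(GAMMA * p / rho)   # speed of sound
    M = u / a                        # Mach number
    if M >= 1.0:
        return euler_flux(rho, u, p), (0.0, 0.0, 0.0)
    if M <= -1.0:
        return (0.0, 0.0, 0.0), euler_flux(rho, u, p)
    def half(s):  # s = +1.0 for F+, -1.0 for F-
        f1 = s * rho * a * (M + s) ** 2 / 4.0
        f2 = f1 * ((GAMMA - 1.0) * u + s * 2.0 * a) / GAMMA
        f3 = f1 * ((GAMMA - 1.0) * u + s * 2.0 * a) ** 2 \
             / (2.0 * (GAMMA ** 2 - 1.0))
        return (f1, f2, f3)
    return half(+1.0), half(-1.0)
```

In an upwind scheme the interface flux is assembled as F+(left state) + F-(right state); the subsonic branch above recovers the exact flux when the two halves are summed on a single state.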
Abstract:
This paper presents an overview of the issues in precisely defining, specifying, and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its lifecycle, the reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered to varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability, and safety.
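Reliability-growth modeling of the kind surveyed above can be sketched with one classical example, the Goel-Okumoto NHPP model; this particular model and the parameter values are an illustration chosen here, not necessarily the paper's own formulation.

```python
import math

def go_mean_failures(a, b, t):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b * t)),
    where a is the expected total number of faults and b the per-fault
    detection rate."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(a, b, t):
    """Expected number of faults still latent at time t."""
    return a - go_mean_failures(a, b, t)

# Hypothetical parameters: 100 total faults, detection rate 0.1 per week.
print(remaining_faults(100.0, 0.1, 0.0))   # all faults latent at t = 0
print(remaining_faults(100.0, 0.1, 10.0))  # fewer remain after 10 weeks
```

Fitting a and b to observed failure times during development is what yields the "estimation of the number of errors remaining" mentioned in the abstract.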
Abstract:
We study power dissipation for systems of multiple quantum wires meeting at a junction, in terms of a current-splitting matrix M describing the junction. We present a unified framework for studying dissipation in wires with either interacting electrons (i.e., Tomonaga-Luttinger liquid wires with Fermi-liquid leads) or noninteracting electrons. We show that for a given matrix M, the eigenvalues of MM^T characterize the dissipation, and the eigenvectors identify the combinations of bias voltages which need to be applied to the different wires in order to maximize the dissipation associated with the junction. We use our analysis to propose and study some microscopic models of a dissipative junction which employ the edge states of a quantum Hall liquid. These models realize some specific forms of the M matrix whose entries depend on the tunneling amplitudes between the different edges.
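The eigen-analysis of MM^T described above is straightforward to carry out numerically; the three-wire current-splitting matrix below is illustrative, not one of the paper's microscopic models.

```python
import numpy as np

# Hypothetical current-splitting matrix for a three-wire junction:
# incoming current on each wire is split equally between the other two.
M = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

S = M @ M.T                       # symmetric, positive semidefinite
eigvals, eigvecs = np.linalg.eigh(S)

# Per the abstract's framework: eigenvalues characterize dissipation,
# and eigenvectors give the bias-voltage combinations that extremize it.
print("eigenvalues:", eigvals)
print("max-dissipation bias combination:", eigvecs[:, -1])
```

Because MM^T is symmetric positive semidefinite, its eigenvalues are real and nonnegative, and `eigh` returns them in ascending order, so the last column of `eigvecs` is the maximizing voltage combination.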
Abstract:
We provide a new unified framework, called "multiple correlated informants - single recipient" communication, to address variations of the traditional Distributed Source Coding (DSC) problem. Different combinations of assumptions about the communication scenario and the objectives of communication result in different variations of the DSC problem. For each of these variations, the complexities of communication and of computing the optimal solution are determined by the combination of the underlying assumptions. In the proposed framework, we address the asymmetric, interactive, and lossless variant of the DSC problem with various objectives of communication and provide optimal solutions for them. We also consider both the worst-case and average-case scenarios.
Abstract:
The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state-space methods/ESPRIT and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state-space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with both DOA estimation using a uniformly spaced linear sensor array and time-series frequency estimation problems.
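A minimal MUSIC DOA sketch for a uniform linear array helps fix the setting of the abstract; the array size, snapshot count, noise level, and source angles below are assumptions for illustration (plain sample covariance, without the forward-backward smoothing or weighting analyzed in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_snapshots = 8, 2, 400
d = 0.5                                  # sensor spacing in wavelengths
true_doas = np.deg2rad([-20.0, 30.0])    # illustrative source angles

def steering(theta):
    """ULA steering vector for angle theta (radians)."""
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

# Simulated snapshots: two uncorrelated sources plus white noise.
A = np.column_stack([steering(t) for t in true_doas])
S = (rng.standard_normal((n_sources, n_snapshots))
     + 1j * rng.standard_normal((n_sources, n_snapshots)))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = A @ S + noise
R = X @ X.conj().T / n_snapshots         # sample covariance

_, eigvecs = np.linalg.eigh(R)           # ascending eigenvalue order
En = eigvecs[:, : n_sensors - n_sources] # noise subspace

# MUSIC pseudospectrum over a fine angle grid; peaks mark the DOAs.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])
peaks = [i for i in range(1, len(grid) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
peaks.sort(key=lambda i: spec[i], reverse=True)
doa_est = sorted(float(np.rad2deg(grid[i])) for i in peaks[:n_sources])
print(doa_est)
```

Replacing `R` with its spatially smoothed forward-backward counterpart is the modification whose asymptotic effect the paper quantifies.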
Abstract:
Model Reference Adaptive Control (MRAC) of a wide repertoire of stable Linear Time Invariant (LTI) systems is addressed here. Not even an upper bound on the order of the finite-dimensional system is assumed to be available. Further, the unknown plant is permitted to have both minimum-phase and nonminimum-phase zeros. The goal is model following with reference to a completely specified reference model excited by a class of piecewise continuous bounded signals. The problem is approached by taking recourse to the time-moments representation of an LTI system. The treatment here is confined to Single-Input Single-Output (SISO) systems. The adaptive controller is built upon an on-line scheme for time-moment estimation of a system given no more than its input and output. As a first step, a cascade compensator is devised. The primary contribution lies in developing a unified framework to eventually address with more finesse the problem of adaptive control of a large family of plants allowed to be minimum or nonminimum phase. Thus, the scheme presented in this paper is confined to laying the basis for more refined compensators (cascade, feedback, and both), initially for SISO systems and progressively for Multi-Input Multi-Output (MIMO) systems. Simulations are presented.
Abstract:
This paper presents a unified framework using the unit cube for the measurement, representation, and usage of the range of motion (ROM) of body joints with multiple degrees of freedom (d.o.f.), to be used for digital human models (DHM). Traditional goniometry needs skill and knowledge; it is intrusive and has limited applicability for multi-d.o.f. joints. Measurements using motion capture systems often involve complicated mathematics which itself needs validation. In this paper we use change of orientation as the measure of rotation; this definition does not require the identification of any fixed axis of rotation. A two-d.o.f. joint ROM can be represented as a Gaussian map. The spherical polygon representation of ROM, though popular, remains inaccurate, vulnerable to singularities of the parametric sphere, and difficult to use for point classification. The unit cube representation overcomes these difficulties. In the work presented here, electromagnetic trackers have been used to measure the relative orientation of a body segment of interest with respect to another body segment. The orientation is then mapped onto a surface-gridded cube. As the body segment is moved, the grid cells visited are identified and visualized. Using the visual display as feedback, the subject is instructed to cover as many grid cells as he can. In this way we get a connected patch of contiguous grid cells. The boundary of this patch represents the active ROM of the joint concerned. The tracker data is later converted into the motion of a direction aligned with the axis of the segment and a rotation about this axis. The direction identifies the grid cells on the cube, and the rotation about the axis is represented as a range and visualized using color codes. Thus the present methodology provides a simple, intuitive, and accurate determination and representation of ROM for joints with up to 3 d.o.f. Basic results are presented for the shoulder.
The measurement scheme to be used for the wrist and neck, and the approach for estimating the statistical distribution of ROM for a given population, are also discussed.
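The direction-to-grid-cell step described above can be sketched as a projection of a unit direction onto a surface-gridded cube; the grid resolution and the gnomonic-style projection below are assumptions for illustration, not the paper's exact construction.

```python
def cube_cell(direction, n=16):
    """Map a 3D direction onto a face and (row, col) grid cell of a
    surface-gridded unit cube, by scaling the direction until its
    largest component touches a cube face (gnomonic-style projection)."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)
    if m == 0:
        raise ValueError("zero direction")
    x, y, z = x / m, y / m, z / m        # project onto the cube surface
    if m == ax:
        face, u, v = ("+x" if x > 0 else "-x"), y, z
    elif m == ay:
        face, u, v = ("+y" if y > 0 else "-y"), x, z
    else:
        face, u, v = ("+z" if z > 0 else "-z"), x, y
    # u, v lie in [-1, 1]; quantize to grid indices 0 .. n-1.
    col = min(int((u + 1.0) / 2.0 * n), n - 1)
    row = min(int((v + 1.0) / 2.0 * n), n - 1)
    return face, row, col

print(cube_cell((0.0, 0.0, 1.0)))   # segment axis pointing straight "up"
```

Marking each visited cell as the segment moves then yields the connected patch whose boundary is the active ROM.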
Abstract:
This review summarizes theoretical progress in the field of active matter, placing it in the context of recent experiments. This approach offers a unified framework for the mechanical and statistical properties of living matter: biofilaments and molecular motors in vitro or in vivo, collections of motile microorganisms, animal flocks, and chemical or mechanical imitations. A major goal of this review is to integrate several approaches proposed in the literature, from semimicroscopic to phenomenological. In particular, first considered are "dry" systems, defined as those where momentum is not conserved due to friction with a substrate or an embedding porous medium. The differences and similarities between two types of orientationally ordered states, the nematic and the polar, are clarified. Next, the active hydrodynamics of suspensions, or "wet" systems, is discussed, and the relation with and difference from the dry case, as well as various large-scale instabilities of these nonequilibrium states of matter, are highlighted. Various semimicroscopic derivations of the continuum theory are discussed and connected, highlighting the unifying and generic nature of the continuum model. Throughout the review, the experimental relevance of these theories for describing bacterial swarms and suspensions, the cytoskeleton of living cells, and vibrated granular material is discussed. Promising extensions toward greater realism in specific contexts from cell biology to animal behavior are suggested, and remarks are given on some exotic active-matter analogs. Last, the outlook for a quantitative understanding of active matter, through the interplay of detailed theory with controlled experiments on simplified systems, with living or artificial constituents, is summarized.
Abstract:
The grain size of monolayer large-area graphene is key to its performance. Microstructural design for the desired grain size requires a fundamental understanding of graphene nucleation and growth. The two levers that can be used to control these aspects are the defect density, whose population can be controlled by annealing, and the gas-phase supersaturation for activation of nucleation at the defect sites. We observe that defects on the copper surface, namely dislocations, grain boundaries, triple points, and rolling marks, initiate nucleation of graphene. We show that among these defects dislocations are the most potent nucleation sites, as they get activated at the lowest supersaturation. As an illustration, we tailor the defect density and supersaturation to change the domain size of graphene from <1 μm² to >100 μm². Growth data reported in the literature have been summarized on a supersaturation plot, and a regime for defect-dominated growth has been identified. In this growth regime, we demonstrate spatial control over nucleation at intentionally introduced defects, paving the way for patterned growth of graphene. Our results provide a unified framework for understanding the role of defects in graphene nucleation and can be used as a guideline for controlled growth of graphene.
Abstract:
Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. With the requirement to process packets at line rates, high-performance routers need to forward millions of packets every second, with each packet needing up to seven memory accesses. Earlier work shows that a single cache for the nodes of a trie can reduce the number of external memory accesses. It is observed that the locality characteristics of the level-one nodes of a trie are significantly different from those of lower-level nodes. Hence, we propose a heterogeneously segmented cache architecture (HSCA) which uses separate caches for level-one and lower-level nodes, each with carefully chosen sizes. Besides reducing misses, segmenting the cache allows us to focus on optimizing the more frequently accessed level-one node segment. We find that due to the nonuniform distribution of nodes among cache sets, the level-one nodes cache is susceptible to high conflict misses. We reduce conflict misses by introducing a novel two-level mapping-based cache placement framework. We also propose an elegant way to fit the modified placement function into the cache organization with minimal increase in access time. Further, we propose an attribute-preserving trace generation methodology which emulates real traces and can generate traces with varying locality. Performance results reveal that our HSCA scheme results in a 32 percent speedup in average memory access time over a unified nodes cache. Also, HSCA outperforms IHARC, a cache for lookup results, with as high as a 10-fold speedup in average memory access time. Two-level mapping further enhances the performance of the base HSCA by up to 13 percent, leading to an overall improvement of up to 40 percent over the unified scheme.
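The two-level mapping idea (first hash addresses into bins, then route bins to cache sets through a table chosen to even out a skewed distribution) can be sketched abstractly; the hash, table-construction heuristic, and sizes below are hypothetical stand-ins, not the paper's actual placement function.

```python
def build_two_level_map(keys, n_bins, n_sets):
    """Level 1: key -> bin via modulo hash. Level 2: bin -> set via a
    table built greedily so that heavily populated bins land on
    lightly loaded sets (an illustrative load-balancing heuristic)."""
    counts = [0] * n_bins
    for k in keys:
        counts[k % n_bins] += 1
    load = [0] * n_sets
    table = [0] * n_bins
    for b in sorted(range(n_bins), key=lambda i: -counts[i]):
        s = min(range(n_sets), key=lambda i: load[i])  # least-loaded set
        table[b] = s
        load[s] += counts[b]
    return table

def placement(key, table, n_bins):
    """Cache set chosen for a key under the two-level mapping."""
    return table[key % n_bins]

# Skewed key distribution: direct modulo mapping would pile every key
# onto one set; the two-level map spreads them out.
keys = [i * 32 for i in range(256)]
table = build_two_level_map(keys, n_bins=64, n_sets=16)
print(sorted({placement(k, table, 64) for k in keys}))
```

With these keys, direct `key % 16` placement sends all 256 keys to set 0, while the two-level map splits them across sets, which is the conflict-miss reduction the abstract describes in spirit.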
Abstract:
Frequent episode discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of frequencies of episodes have been proposed along with different algorithms for episode discovery. In this paper, we present a unified view of all the apriori-based discovery methods for serial episodes under these different notions of frequencies. Specifically, we present a unified view of the various frequency counting algorithms. We propose a generic counting algorithm such that all current algorithms are special cases of it. This unified view allows one to gain insights into different frequencies, and we present quantitative relationships among different frequencies. Our unified view also helps in obtaining correctness proofs for various counting algorithms as we show here. It also aids in understanding and obtaining the anti-monotonicity properties satisfied by the various frequencies, the properties exploited by the candidate generation step of any apriori-based method. We also point out how our unified view of counting helps to consider generalization of the algorithm to count episodes with general partial orders.
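One of the frequency notions such frameworks unify, the non-overlapped occurrence count of a serial episode, admits a simple one-pass greedy counter; this sketch illustrates that single notion and is not the paper's generic algorithm.

```python
def count_nonoverlapped(sequence, episode):
    """Count non-overlapped occurrences of a serial episode: track one
    partial match at a time, and restart after each complete match.
    Greedy leftmost matching maximizes the non-overlapped count."""
    count, i = 0, 0
    for event in sequence:
        if event == episode[i]:
            i += 1
            if i == len(episode):   # episode completed
                count += 1
                i = 0               # restart; occurrences may not overlap
    return count

print(count_nonoverlapped("ABCABCABABC", "ABC"))
```

Other notions in the literature (windows-based, minimal occurrences, non-interleaved) change only the bookkeeping around the partial matches, which is what makes a single generic counting algorithm possible.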
Abstract:
We revisit the issue of considering stochasticity of Grassmannian coordinates in N = 1 superspace, which was analyzed previously by Kobakhidze et al. In this stochastic supersymmetry (SUSY) framework, the soft SUSY-breaking terms of the minimal supersymmetric Standard Model (MSSM), such as the bilinear Higgs mixing, the trilinear coupling, and the gaugino mass parameters, are all proportional to a single mass parameter ξ, a measure of supersymmetry breaking arising out of stochasticity. While a nonvanishing trilinear coupling at the high scale is a natural outcome of the framework, and a favorable signature for obtaining the lighter Higgs boson mass m_h at 125 GeV, the model produces tachyonic sleptons, or staus that turn out to be too light. The previous analyses took Λ, the scale at which input parameters are given, to be larger than the gauge coupling unification scale M_G in order to generate acceptable scalar masses radiatively at the electroweak scale. Still, this was inadequate for obtaining m_h at 125 GeV. We find that a Higgs boson at 125 GeV is readily achievable, provided we are ready to accommodate a nonvanishing scalar-mass soft SUSY-breaking term, similar to what is done in minimal anomaly-mediated SUSY breaking (AMSB), in contrast to a pure AMSB setup. Thus, the model can easily accommodate the Higgs data, LHC limits on squark masses, WMAP data for the dark matter relic density, flavor physics constraints, and XENON100 data. In contrast to the previous analyses, we consider Λ = M_G, thus avoiding any ambiguities of post-grand-unified-theory physics. The idea of stochastic superspace can easily be generalized to various scenarios beyond the MSSM. DOI: 10.1103/PhysRevD.87.035022
Abstract:
Two inorganic-organic hybrid framework iron phosphate-oxalates, I, [N2C4H12]0.5[Fe2(HPO4)(C2O4)1.5], and II, [Fe2(OH2)PO4(C2O4)0.5], have been synthesized by hydrothermal means and their structures determined by X-ray crystallography. Crystal data: compound I, monoclinic, space group P2(1)/c (No. 14), a = 7.569(2) Å, b = 7.821(2) Å, c = 18.033(4) Å, β = 98.8(1)°, V = 1055.0(4) Å³, Z = 4, M = 382.8, D_calc = 2.41 g cm⁻³, Mo Kα, R_F = 0.02; compound II, monoclinic, space group P2(1)/c (No. 14), a = 10.240(1) Å, b = 6.375(3) Å, c = 9.955(1) Å, β = 117.3(1)°, V = 577.4(1) Å³, Z = 4, M = 268.7, D_calc = 3.09 g cm⁻³, Mo Kα, R_F = 0.03. These materials contain a high proportion of three-coordinated oxygens and [Fe2O9] dimeric units, besides other interesting structural features. The connectivity of the Fe2O9 units is entirely different in the two materials, resulting in the formation of a continuous Fe-O-Fe chain in II. The phosphate-oxalate containing the amine, I, forms well-defined channels. Magnetic susceptibility measurements show Fe(II) to be in the high-spin state (t2g)^4(eg)^2 in II, and in the intermediate-spin state (t2g)^5(eg)^1 in I.
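The reported cell volumes can be cross-checked against the monoclinic relation V = a·b·c·sin β, using the lattice parameters quoted in the crystal data above:

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume of a monoclinic lattice: V = a * b * c * sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Lattice parameters from the crystal data (lengths in Angstrom, beta in deg).
v1 = monoclinic_volume(7.569, 7.821, 18.033, 98.8)    # compound I
v2 = monoclinic_volume(10.240, 6.375, 9.955, 117.3)   # compound II
print(round(v1, 1), round(v2, 1))
```

Both computed volumes agree with the reported 1055.0(4) Å³ and 577.4(1) Å³ to within the quoted uncertainties and rounding of the cell parameters.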