105 results for MOA framework
Abstract:
Today's SoCs are complex designs with multiple embedded processors, memory subsystems, and application-specific peripherals. The memory architecture of an embedded SoC strongly influences the power and performance of the entire system, and the memory subsystem constitutes a major part (typically up to 70%) of the silicon area of a current SoC. In this article, we address on-chip memory architecture exploration for DSP processors whose memory is organized as multiple banks, where banks can be single- or dual-ported and of non-uniform sizes. We propose two different methods for physical memory architecture exploration and identify the strengths and applicability of each in a systematic way. Both methods address memory architecture exploration for a given target application by considering the application's data access characteristics, and both generate a set of Pareto-optimal design points that are interesting from power, performance, and VLSI area perspectives. To the best of our knowledge, this is the first comprehensive work on memory space exploration at the physical memory level that integrates data layout and memory exploration to address the system objectives from both the hardware design and the application software development perspectives. Further, we propose an automatic framework that explores the design space and identifies hundreds of Pareto-optimal design points within a few hours on a standard desktop machine.
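As a rough illustration of the final filtering step only, the minimal sketch below keeps the Pareto-optimal points among a set of already-evaluated (power, cycles, area) candidates; the candidate tuples and objective values are hypothetical placeholders, not the paper's exploration methods or cost models.

```python
# Minimal sketch: filtering Pareto-optimal (power, cycles, area) design points from a set
# of evaluated memory-bank configurations. The cost tuples are placeholders; the paper's
# actual exploration and data-layout models are not shown.
from typing import List, Tuple

def dominates(a: Tuple[float, float, float], b: Tuple[float, float, float]) -> bool:
    """True if design a is no worse than b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs: List[Tuple[float, float, float]]) -> List[Tuple[float, float, float]]:
    """Keep only the non-dominated (Pareto-optimal) design points."""
    return [d for d in designs if not any(dominates(o, d) for o in designs if o != d)]

# Example: (power in mW, execution cycles, area in mm^2) for a few candidate configurations.
candidates = [(120.0, 1.0e6, 2.1), (100.0, 1.2e6, 1.8), (150.0, 0.9e6, 2.5), (130.0, 1.1e6, 2.6)]
print(pareto_front(candidates))   # the last candidate is dominated and is dropped
```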
Abstract:
Existing approaches to digital halftoning of images are based primarily on thresholding. We propose a general framework for image halftoning in which some function of the output halftone tracks another function of the input gray-tone image. This approach is shown to unify most existing algorithms and to provide useful insights. Further, the new interpretation allows us to remedy problems in existing algorithms such as error diffusion, and subsequently to achieve halftones having superior quality. The very general nature of the proposed method is an advantage, since it offers a wide choice of the three filters and the update rule. An interesting product of this framework is that equally good, or better, halftones can be obtained by thresholding a noise process instead of the image itself.
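For context, classical error diffusion is one instance of this "output tracks input" view: the quantization error is fed forward so that the local average of the halftone follows the input gray level. The sketch below implements plain Floyd-Steinberg error diffusion only; the paper's general three-filter formulation and update rule are not reproduced here.

```python
# Minimal sketch: Floyd-Steinberg error diffusion, one instance of the general
# "a function of the output halftone tracks a function of the input gray-tone" view.
import numpy as np

def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
    """Halftone a gray-tone image in [0, 1]; the quantization error is diffused to
    unprocessed neighbours so the local output average tracks the input."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

print(floyd_steinberg(np.full((4, 4), 0.25)).mean())  # close to the input gray level 0.25
```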
Abstract:
Rathour RK, Narayanan R. Influence fields: a quantitative framework for representation and analysis of active dendrites. J Neurophysiol 107: 2313-2334, 2012. First published January 18, 2012; doi:10.1152/jn.00846.2011.
Neuronal dendrites express numerous voltage-gated ion channels (VGICs), typically with spatial gradients in their densities and properties. Dendritic VGICs, their gradients, and their plasticity endow neurons with information processing capabilities that are higher than those of neurons with passive dendrites. Despite this, frameworks that incorporate dendritic VGICs and their plasticity into neurophysiological and learning theory models have been few and far between. Here, we develop a generalized quantitative framework to analyze the extent of influence of a spatially localized VGIC conductance on different physiological properties along the entire stretch of a neuron. Employing this framework, we show that the extent of influence of a VGIC conductance is largely independent of the conductance magnitude but is heavily dependent on the specific physiological property and background conductances. Morphologically, our analyses demonstrate that the influences of different VGIC conductances located on an oblique dendrite are confined within that oblique dendrite, thus providing further credence to the postulate that dendritic branches act as independent computational units. Furthermore, distinguishing between active and passive propagation of signals within a neuron, we demonstrate that the influence of a VGIC conductance is spatially confined only when propagation is active. Finally, we reconstruct functional gradients from VGIC conductance gradients using influence fields and demonstrate that the cumulative contribution of VGIC conductances in adjacent compartments plays a critical role in determining physiological properties at a given location. We suggest that our framework provides a quantitative basis for unraveling the roles of dendritic VGICs and their plasticity in neural coding, learning, and homeostasis.
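As a loose illustration of the influence-field notion (not the authors' implementation), the sketch below quantifies, compartment by compartment, how much a localized conductance changes a measured property and reads off the spatial extent of that influence; the toy exponential attenuation profile stands in for a full multicompartmental simulation.

```python
# Minimal sketch of the influence-field idea: measure a physiological property along the
# neuron with and without a localized conductance, normalize the difference, and read off
# the spatial extent of influence. The toy profile below is illustrative only.
import numpy as np

def influence_field(prop_with: np.ndarray, prop_without: np.ndarray) -> np.ndarray:
    """Normalized change in a physiological property at each compartment."""
    diff = np.abs(prop_with - prop_without)
    return diff / diff.max() if diff.max() > 0 else diff

def extent_of_influence(field: np.ndarray, dx_um: float, threshold: float = 0.1) -> float:
    """Length of dendrite (in micrometers) over which the influence exceeds a threshold."""
    return float(np.count_nonzero(field >= threshold) * dx_um)

# Toy example: a conductance inserted at compartment 100 of a 200-compartment cable,
# whose effect on input resistance decays roughly exponentially with distance.
x = np.arange(200)
baseline = np.full(200, 50.0)                          # input resistance (MOhm), no VGIC
perturbed = baseline - 10.0 * np.exp(-np.abs(x - 100) / 20.0)
field = influence_field(perturbed, baseline)
print(extent_of_influence(field, dx_um=1.0))           # approximate spatial spread in um
```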
Abstract:
Online remote visualization and steering of critical weather applications like cyclone tracking are essential for effective and timely analysis by the geographically distributed climate science community. A steering framework for controlling high-performance simulations of critical weather events needs to take into account both the steering inputs of the scientists and the criticality needs of the application, including a minimum progress rate of the simulations and continuous visualization of significant events. In this work, we have developed INST, an integrated user-driven and automated steering framework for simulations, online remote visualization, and analysis of critical weather applications. INST gives the user control over various application parameters, including the region of interest, the resolution of the simulation, and the frequency of data output for visualization. Unlike existing efforts, our framework considers both the steering inputs and the criticality of the application, namely the minimum progress rate needed for the application, as well as various resource constraints, including storage space and network bandwidth, to decide the best possible parameter values for simulation and visualization.
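A minimal sketch of constraint-driven parameter selection of this general kind is given below: it picks the finest simulation resolution and the highest visualization frequency that still satisfy a minimum progress rate and a bandwidth budget. The progress and bandwidth models are hypothetical placeholders, not the framework's actual decision algorithm.

```python
# Minimal sketch: choosing simulation/visualization parameters under criticality and
# resource constraints. The cost models below are toy placeholders.
from itertools import product

resolutions_km = [12, 6, 3]        # candidate grid resolutions (smaller = finer)
viz_freq_per_hr = [1, 2, 4]        # candidate visualization frequencies

def progress_rate(res_km: float) -> float:
    """Simulated hours per wall-clock hour; finer grids progress more slowly (toy model)."""
    return res_km / 3.0

def bandwidth_mbps(res_km: float, freq: float) -> float:
    """Average network bandwidth needed to ship visualization frames (toy model)."""
    frame_mb = 400.0 / res_km ** 2
    return frame_mb * freq * 8 / 3600.0

def best_parameters(min_progress: float, max_bw_mbps: float):
    feasible = [(r, f) for r, f in product(resolutions_km, viz_freq_per_hr)
                if progress_rate(r) >= min_progress and bandwidth_mbps(r, f) <= max_bw_mbps]
    # Prefer the finest resolution, then the highest visualization frequency.
    return min(feasible, key=lambda p: (p[0], -p[1])) if feasible else None

print(best_parameters(min_progress=1.0, max_bw_mbps=1.0))
```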
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, in which the number of processors on which an application executes can be changed during execution, can exploit their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration, and rescheduling, together with runtime decisions for dynamically selecting the fault tolerance action at different points of the application's execution so as to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
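A minimal sketch of cost-model-driven action selection along these lines follows; the overheads, failure probability, and loss estimates are hypothetical placeholders, not AdFT's cost models.

```python
# Minimal sketch: runtime selection of a fault tolerance action by comparing expected costs.
def expected_cost(action: str, mtbf_s: float, remaining_work_s: float) -> float:
    overhead_s = {"checkpoint": 60.0, "live_migrate": 180.0, "reschedule": 600.0}[action]
    p_fail = min(1.0, remaining_work_s / mtbf_s)      # rough chance of a failure before completion
    lost_if_fail_s = {"checkpoint": 600.0, "live_migrate": 300.0, "reschedule": 120.0}[action]
    return overhead_s + p_fail * lost_if_fail_s

def choose_action(mtbf_s: float, remaining_work_s: float) -> str:
    """Pick the action with the lowest expected cost at this point of the execution."""
    return min(("checkpoint", "live_migrate", "reschedule"),
               key=lambda a: expected_cost(a, mtbf_s, remaining_work_s))

print(choose_action(mtbf_s=3600.0, remaining_work_s=7200.0))    # frequent failures -> live_migrate
print(choose_action(mtbf_s=864000.0, remaining_work_s=7200.0))  # rare failures -> cheap checkpoint
```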
Abstract:
We revisit the issue of considering stochasticity of Grassmannian coordinates in N = 1 superspace, which was analyzed previously by Kobakhidze et al. In this stochastic supersymmetry (SUSY) framework, the soft SUSY-breaking terms of the minimal supersymmetric Standard Model (MSSM), such as the bilinear Higgs mixing, the trilinear coupling, and the gaugino mass parameters, are all proportional to a single mass parameter ξ, a measure of supersymmetry breaking arising out of stochasticity. While a nonvanishing trilinear coupling at the high scale, a favorable feature for obtaining the lighter Higgs boson mass m_h at 125 GeV, is a natural outcome of the framework, the model produces tachyonic sleptons, or staus that turn out to be too light. The previous analyses took Λ, the scale at which the input parameters are given, to be larger than the gauge coupling unification scale M_G in order to generate acceptable scalar masses radiatively at the electroweak scale. Still, this was inadequate for obtaining m_h at 125 GeV. We find that a Higgs boson at 125 GeV is readily achievable, provided we are prepared to accommodate a nonvanishing scalar-mass soft SUSY-breaking term, similar to what is done in minimal anomaly-mediated SUSY breaking (AMSB), in contrast to a pure AMSB setup. Thus, the model can easily accommodate the Higgs data, LHC limits on squark masses, WMAP data for the dark matter relic density, flavor physics constraints, and XENON100 data. In contrast to the previous analyses, we consider Λ = M_G, thus avoiding any ambiguities of post-grand-unified-theory physics. The idea of stochastic superspace can easily be generalized to various scenarios beyond the MSSM. DOI: 10.1103/PhysRevD.87.035022
Abstract:
We consider the design of a linear equalizer with a finite number of coefficients for a classical linear intersymbol-interference channel with additive Gaussian noise, focusing on channel estimation. Previous literature has shown that Minimum Bit Error Rate (MBER) based detection outperforms Minimum Mean Squared Error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel within the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE-based channel estimation when used with either MMSE or MBER detection.
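As a loose illustration of judging a channel estimate by a bit-error criterion rather than squared error (this is not the authors' MBER estimation algorithm), the sketch below forms a least-squares channel estimate from a BPSK training sequence and then keeps, among randomly perturbed candidates, the estimate that produces the fewest detected training bit errors.

```python
# Minimal sketch: least-squares channel estimation from BPSK training, followed by a crude
# bit-error-count refinement. Illustrative only; the channel, noise level, and candidate
# search are toy choices.
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.8, 0.5, 0.2])                  # ISI channel taps (assumed for the demo)
bits = rng.integers(0, 2, 200)
x = 2.0 * bits - 1.0                                # BPSK training symbols
y = np.convolve(x, h_true, mode="full")[:len(x)] + 0.1 * rng.standard_normal(len(x))

# Least-squares estimate: y ~ X h, with X built from delayed copies of the training symbols.
L = len(h_true)
X = np.column_stack([np.concatenate([np.zeros(k), x[:len(x) - k]]) for k in range(L)])
h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

def training_bit_errors(h_est: np.ndarray) -> int:
    """Detect the training bits with a matched-filter/sign detector under h_est."""
    z = np.convolve(y, h_est[::-1], mode="full")[L - 1:L - 1 + len(x)]
    return int(np.count_nonzero((z >= 0) != (x >= 0)))

# BER-style refinement: keep the perturbed estimate with the fewest training bit errors.
candidates = [h_ls] + [h_ls + 0.05 * rng.standard_normal(L) for _ in range(50)]
h_ber = min(candidates, key=training_bit_errors)
print(np.round(h_ls, 3), np.round(h_ber, 3), training_bit_errors(h_ber))
```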
Abstract:
The objective of this paper is to empirically evaluate a framework for designing – GEMS of SAPPhIRE as req-sol – to check whether it supports design for variety and novelty. A set of observational studies is designed in which three teams of two designers each solve three different design problems in the following order: without any support, using the framework, and using a combination of the framework and a catalogue. Results from the studies reveal that both the variety and the novelty of the concept space increase with the use of the framework, or of the framework and the catalogue. However, the number of concepts and the time taken by the designers decrease with the use of the framework, and of the framework and the catalogue. Based on the results and interview sessions with the designers, an interactive, computer-supported framework for designing is proposed as future work.
Abstract:
Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance, and head pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework incorporates the reliability of different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under the relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic multi-view head-pose dataset with ground truth, publicly available with this paper.
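A minimal sketch of pose lookup under a weighted distance over face-region features, with the weights rescaled by per-region reliability at a new position, is shown below; all features, weights, and reliabilities are synthetic and illustrative, not the paper's learned quantities or adaptation method.

```python
# Minimal sketch: nearest-neighbour head-pose estimation under a weighted distance over
# face-region features, with weights down-scaled for regions that become unreliable
# (e.g. blurred) when the target moves to a new position.
import numpy as np

def weighted_nn_pose(query, train_feats, train_poses, weights) -> float:
    """Return the pose (deg) of the training example closest under the weighted distance."""
    d = np.sqrt((((train_feats - query) ** 2) * weights).sum(axis=1))
    return float(train_poses[np.argmin(d)])

def adapt_weights(weights, reliability):
    """Rescale per-region weights by their reliability at the new position and renormalize."""
    w = weights * reliability
    return w / w.sum()

rng = np.random.default_rng(1)
train_feats = rng.standard_normal((50, 4))          # 4 synthetic face-region features per example
train_poses = rng.uniform(-90, 90, 50)              # synthetic pan angles (deg)
weights = np.full(4, 0.25)
reliability = np.array([1.0, 0.9, 0.3, 0.2])        # regions 3-4 unreliable at the new position
query = train_feats[7] + 0.05 * rng.standard_normal(4)
print(weighted_nn_pose(query, train_feats, train_poses, adapt_weights(weights, reliability)))
```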
Abstract:
We report the mechanical properties of a framework structure, {[Cu2F(HF)(HF2)(pyz)4](SbF6)2}n (pyz = pyrazine), in which [Cu(pyz)2]2+ layers are pillared by HF2- anions containing the exceptionally strong F–H···F hydrogen bonds. Nanoindentation studies on single crystals clearly demonstrate that such bonds are extremely robust and mechanically comparable with the coordination bonds in this system.
Abstract:
Using a Girsanov change of measure, we propose novel variations within a particle-filtering algorithm, as applied to the inverse problem of state and parameter estimation of nonlinear dynamical systems of engineering interest, toward weakly correcting for the linearization or integration errors that almost invariably occur while numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, the correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes: one employing resampling and the other using a gain-weighted innovation term added to the drift field of the process dynamics, thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and also to yield reduced mean square errors vis-a-vis those obtained through the parent filtering schemes.
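For orientation, the sketch below is a plain bootstrap particle filter for a scalar nonlinear model in which an extra multiplicative factor enters the importance weights alongside the measurement likelihood; that factor is only a placeholder for the Girsanov/Radon-Nikodym correction discussed above, and the rejection-sampling and gain-weighted variants are not shown.

```python
# Minimal bootstrap particle filter sketch with a placeholder correction factor.
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    return np.sin(x)                                 # toy nonlinear process drift

def correction(x_prev, x_new, dt):
    # Placeholder for the Girsanov / Radon-Nikodym correction factor; identically 1 here.
    return np.ones_like(x_new)

T, dt, n_p = 50, 0.1, 500
x_true, xs, ys = 0.5, [], []
for _ in range(T):                                   # simulate truth and noisy observations
    x_true = x_true + drift(x_true) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal()
    xs.append(x_true)
    ys.append(x_true + 0.1 * rng.standard_normal())

particles = rng.standard_normal(n_p)
estimates = []
for y in ys:
    prev = particles
    particles = prev + drift(prev) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(n_p)
    w = np.exp(-0.5 * ((y - particles) / 0.1) ** 2) * correction(prev, particles, dt)
    w /= w.sum()
    estimates.append(float(np.sum(w * particles)))
    particles = rng.choice(particles, size=n_p, p=w)  # multinomial resampling
print(np.mean(np.abs(np.array(estimates) - np.array(xs))))  # mean absolute tracking error
```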
Abstract:
The phenomenon of fatigue is commonly observed in the majority of concrete structures, and it is important to model it mathematically in order to predict their remaining life. An energy approach is adopted in this research using the framework of thermodynamics, wherein the dissipative phenomenon is described by a dissipation potential. An analytical expression is derived for the dissipation potential using the concepts of dimensional analysis and self-similarity to describe a fatigue crack propagation model for concrete. This model is validated using available experimental results. Through a sensitivity analysis, the hierarchy of importance of the different parameters is highlighted.
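For orientation only, a generic Paris-type crack-growth law is the kind of relation such dimensional-analysis and self-similarity arguments typically yield; the block below is illustrative and is not the dissipation potential or the specific model derived in the paper.

```latex
% Generic Paris-type fatigue crack-growth law (illustrative only):
\frac{\mathrm{d}a}{\mathrm{d}N} = C\,(\Delta K)^{m},
\qquad \Delta K = Y\,\Delta\sigma\,\sqrt{\pi a}
```

Here a is the crack length, N the number of load cycles, ΔK the range of the stress intensity factor, Δσ the applied stress range, and C, m, Y empirical constants.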
Abstract:
The key requirements for enabling real-time remote healthcare services on a mobile platform, in the present-day heterogeneous wireless access network environment, are uninterrupted and continuous access to online patient vital medical data, monitoring of the patient's physical condition through video streaming, and so on. For an application, this continuity has to be sufficiently transparent both from a performance perspective and from a Quality of Experience (QoE) perspective. While mobility protocols (MIPv6, HIP, SCTP, DSMIP, PMIP, and SIP) strive to provide both and do so, the limited availability or non-availability (deployment) of these protocols on provider networks and server-side infrastructure has impeded the adoption of mobility on end-user platforms. Add to this the cumbersome OS configuration procedures required to enable mobility protocol support on end-user devices, and the user's enthusiasm to add this support is lost. Considering the lack of proper mobility implementations that meet the remote healthcare requirements above, we propose SeaMo+, which comprises a light-weight application-layer framework, termed the Virtual Real-time Multimedia Service (VRMS), for mobile devices to provide uninterrupted real-time access to multimedia information for the mobile user. VRMS is easy to configure, platform independent, and does not require additional network infrastructure, unlike other existing schemes. We illustrate the working of SeaMo+ in two realistic remote patient monitoring application scenarios.
Abstract:
A joint analysis-synthesis framework is developed for the compressive sensing (CS) recovery of speech signals. The signal is assumed to be sparse in the residual domain, with the linear prediction filter used as the sparsifying transformation. Importantly, this transform is not known a priori, since estimating the predictor filter requires knowledge of the signal. Two prediction filters, a comb filter for the pitch and an all-pole formant filter, are needed to induce maximum sparsity. An iterative method is proposed for the estimation of both prediction filters and of the signal itself. The formant prediction filter is used as the synthesis transform, while the pitch filter is used to model the periodicity in the residual excitation signal in the analysis mode. Significant improvement in the LLR measure is seen over previously reported formant filter estimation.
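As one building block, the sketch below estimates an all-pole (formant) prediction filter from a speech frame via the autocorrelation method and the Levinson-Durbin recursion, the kind of filter used as the synthesis transform above; the paper's joint iterative estimation of the pitch filter, formant filter, and signal from compressive measurements is not reproduced here, and the test frame is synthetic.

```python
# Minimal sketch: autocorrelation-method LPC via the Levinson-Durbin recursion.
import numpy as np

def lpc(frame: np.ndarray, order: int) -> np.ndarray:
    """Return a[1..order] such that x[n] is predicted as sum_k a[k]*x[n-k]
    (prediction-error filter A(z) = 1 - sum_k a_k z^-k)."""
    r = np.array([frame[:len(frame) - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    err = r[0]
    for i in range(1, order + 1):
        k = (r[i] - a[1:i] @ r[i - 1:0:-1]) / err    # reflection coefficient
        a[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k                            # remaining prediction error energy
    return a[1:]

# Toy usage: a two-formant-like synthetic frame, Hamming-windowed as in standard LPC analysis.
fs = 8000
t = np.arange(240) / fs
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
frame = frame * np.hamming(len(frame)) + 1e-3 * np.random.default_rng(0).standard_normal(len(frame))
print(np.round(lpc(frame, order=10), 3))
```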