70 results for l1-regularized LSP
Abstract:
The inherent temporal locality in memory accesses is filtered out by the L1 cache. As a consequence, an L2 cache with LRU replacement incurs significantly more misses than the optimal replacement policy (OPT). We propose to narrow this gap through a novel replacement strategy that mimics the replacement decisions of OPT. The L2 cache is logically divided into two components, a Shepherd Cache (SC) with a simple FIFO replacement and a Main Cache (MC) with an emulation of optimal replacement. The SC plays the dual role of caching lines and guiding the replacement decisions in MC. Our proposed organization can cover 40% of the gap between OPT and LRU for a 2MB cache, resulting in 7% overall speedup. Comparison with the dynamic insertion policy, a victim buffer, a V-Way cache and an LRU-based fully associative cache demonstrates that our scheme performs better than all these strategies.
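The LRU-OPT gap this abstract targets is easy to reproduce on a toy trace. The sketch below is illustrative only (it is not the Shepherd/Main Cache organization itself): it counts misses under LRU and under Belady's OPT, the offline policy the proposed scheme emulates. The trace and capacity are made up for illustration.

```python
from collections import OrderedDict

def lru_misses(trace, capacity):
    """Count misses for an LRU cache of the given capacity."""
    cache, misses = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)          # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict the least recently used line
            cache[addr] = None
    return misses

def opt_misses(trace, capacity):
    """Count misses for Belady's OPT: evict the line reused farthest in the future."""
    cache, misses = set(), 0
    for i, addr in enumerate(trace):
        if addr in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            def next_use(line):
                # index of the next reference to this line (infinity if never reused)
                for j in range(i + 1, len(trace)):
                    if trace[j] == line:
                        return j
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(addr)
    return misses

trace = ['a', 'b', 'c', 'd', 'a', 'b', 'e', 'a', 'b', 'c', 'd', 'e']
print(lru_misses(trace, 3), opt_misses(trace, 3))  # → 10 7
```

Even on this tiny trace, OPT's foresight saves 3 of LRU's 10 misses; the paper's SC/MC scheme tries to approximate that foresight online.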
Abstract:
Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. The efficiency of a cache for this application critically depends on the placement function to reduce conflict misses. Traditional placement functions use a one-level mapping that naively partitions trie nodes into cache sets. However, as a significant percentage of trie nodes are not useful, these schemes suffer from a non-uniform distribution of useful nodes across sets. This in turn results in increased conflict misses. Newer organizations such as variable-associativity caches achieve flexibility in placement at the expense of increased hit latency, which makes them unsuitable for L1 caches. We propose a novel two-level mapping framework that retains the hit latency of one-level mapping yet incurs fewer conflict misses. This is achieved by introducing a second-level mapping which reorganizes the nodes in the naive initial partitions into refined partitions with a near-uniform distribution of nodes. Further, as this remapping is accomplished by simply adapting the index bits to a given routing table, the hit latency is not affected. We propose three new schemes which result in up to a 16% reduction in the number of misses and a 13% speedup in memory access time. In comparison, an XOR-based placement scheme, known to perform extremely well for general-purpose architectures, can obtain up to a 2% speedup in memory access time.
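A minimal illustration of why the placement function matters: the sketch below compares naive modulo indexing with an XOR-folded index (the baseline schemes the abstract mentions, not the paper's table-adaptive two-level remapping). The strided addresses and 8-set cache are invented for illustration.

```python
def naive_index(addr, sets=8):
    """One-level mapping: set index from the low address bits."""
    return addr % sets

def xor_index(addr, sets=8):
    """XOR-based placement: fold upper address bits into the set index."""
    return (addr ^ (addr >> 3)) % sets

# strided "useful node" addresses, a pattern that defeats modulo placement
nodes = [16 * k for k in range(32)]

def load(index_fn):
    """Per-set occupancy of the useful nodes under a placement function."""
    counts = [0] * 8
    for a in nodes:
        counts[index_fn(a)] += 1
    return counts

print(load(naive_index))  # → [32, 0, 0, 0, 0, 0, 0, 0]  (every node collides in set 0)
print(load(xor_index))    # → [8, 0, 8, 0, 8, 0, 8, 0]   (spread over the even sets)
```

The XOR scheme reduces the worst-case set load from 32 to 8 but is still far from uniform here, which is the kind of residual non-uniformity a routing-table-adaptive second-level mapping can remove.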
Abstract:
A critical review of the possibilities of measuring the partial pressure of sulfur using solid state galvanic cells based on AgI, CaS, beta-alumina, CaO-ZrO2, Na2SO4-I and doped solid electrolytes with auxiliary electrodes is presented. Some of these systems have inherent limitations when exposed to environments containing both oxygen and sulfur. Electrode polarization due to electronic conduction in the solid electrolyte is a significant factor limiting the accuracy of isothermal cells. The electrochemical flux of the conducting ion through the electrolyte can be minimized by proper cell design. Nonisothermal cells with temperature-compensated reference electrodes have a number of advantages over their isothermal counterparts.
Abstract:
The interaction of guar gum with the hydrophobic solids talc, mica and graphite has been investigated through adsorption, electrokinetic and flotation experiments. The adsorption densities of guar gum onto the above hydrophobic minerals show that they are more or less independent of pH. The adsorption isotherms of guar gum onto talc, mica and graphite indicate that the adsorption densities increase with increasing guar gum concentration, and all the isotherms follow the L1 type according to the Giles classification. The magnitude of the adsorption density of guar gum onto the above minerals may be arranged in the following sequence: talc > graphite > mica. The effect of particle size on the adsorption density of guar gum onto these minerals has indicated that higher adsorption takes place in the coarser size fraction, consequent to an increase in the surface face-to-edge ratio. In the case of the talc and mica samples pretreated with EDTA and the leached graphite sample, a decrease in the adsorption density of guar gum is observed, due to a reduction in the metallic adsorption sites. The adsorption densities of guar gum increase with decreasing sample weight for all three minerals. Electrokinetic measurements have indicated that the isoelectric points (iep) of these minerals lie between pH 2-3. Addition of guar gum decreases the negative electrophoretic mobility values in proportion to the guar gum concentration without any observable shift in the iep values, resembling the influence of an indifferent electrolyte. The flotation recovery is diminished in the presence of guar gum for all three minerals. The magnitude of depression follows the same sequence as observed in the adsorption studies. The floatability of the EDTA-treated talc and mica samples as well as the leached graphite sample is enhanced, complementing the adsorption data. Possible mechanisms of interaction between the hydrophobic minerals and guar gum are discussed.
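The L-type isotherms of the Giles classification mentioned above are commonly fitted with the Langmuir form. The sketch below uses hypothetical q_max and K values purely to illustrate the monotone, saturating shape reported for guar gum adsorption; it is not a fit to the paper's data.

```python
def langmuir(c, q_max, K):
    """Langmuir (Giles L-type) adsorption density at equilibrium concentration c."""
    return q_max * K * c / (1 + K * c)

# hypothetical parameters for illustration only
concs = [0.1 * i for i in range(1, 11)]
densities = [langmuir(c, q_max=2.0, K=5.0) for c in concs]
# density rises monotonically with concentration and saturates below q_max
```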
Abstract:
Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT) while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on the circular harmonics (CH) (Fourier basis functions) and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as mu_a^b = 0.01 mm^-1 and mu_s'^b = 1.0 mm^-1, respectively. We also assume mu_a = 0.02 mm^-1 within the inhomogeneity (for the single-inhomogeneity case) and mu_a = 0.02 and 0.03 mm^-1 (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those through a deterministic and explicitly regularized Gauss-Newton algorithm.
We have also estimated the unknown mu(a) from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. Conclusions: The PD-EnKF, which exhibits little sensitivity against variations in the fictitiously introduced noise processes, is also proven to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of shape based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
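The derivative-free character of the EnKF update can be shown in one dimension. The sketch below is a generic scalar ensemble Kalman analysis step with perturbed observations, repeatedly assimilated in pseudo-time; it is not the paper's PD-EnKF with circular-harmonic shape parameters, and the quadratic forward map and all numbers are invented for illustration.

```python
import random
random.seed(0)

def enkf_update(ensemble, obs, obs_noise_std, forward):
    """One derivative-free EnKF analysis step for a scalar parameter.
    forward(theta) maps the parameter to a predicted measurement; no Jacobian needed."""
    n = len(ensemble)
    preds = [forward(t) for t in ensemble]
    t_mean = sum(ensemble) / n
    p_mean = sum(preds) / n
    # sample cross- and auto-covariances replace linearization
    cov_tp = sum((t - t_mean) * (p - p_mean) for t, p in zip(ensemble, preds)) / (n - 1)
    cov_pp = sum((p - p_mean) ** 2 for p in preds) / (n - 1)
    gain = cov_tp / (cov_pp + obs_noise_std ** 2)
    # perturbed observations keep the analysis ensemble spread consistent
    return [t + gain * (obs + random.gauss(0, obs_noise_std) - p)
            for t, p in zip(ensemble, preds)]

forward = lambda theta: theta ** 2          # toy nonlinear measurement model
true_theta = 2.0
obs = forward(true_theta)
ensemble = [random.gauss(1.5, 0.3) for _ in range(200)]
for _ in range(20):                          # pseudo-time recursion on the same data
    ensemble = enkf_update(ensemble, obs, 0.05, forward)
est = sum(ensemble) / len(ensemble)          # converges near true_theta = 2.0
```

The same ensemble machinery scales to a vector of CH coefficients; only the forward map (here a toy square) becomes the diffusion-model photon fluence computation.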
Abstract:
Gabor's analytic signal (AS) is a unique complex signal corresponding to a real signal, but in general, it admits infinitely-many combinations of amplitude and frequency modulations (AM and FM, respectively). The standard approach is to enforce a non-negativity constraint on the AM, but this results in discontinuities in the corresponding phase modulation (PM), and hence, an FM with discontinuities particularly when the underlying AM-FM signal is over-modulated. In this letter, we analyze the phase discontinuities and propose a technique to compute smooth AM and FM from the AS, by relaxing the non-negativity constraint on the AM. The proposed technique is effective at handling over-modulated signals. We present simulation results to support the theoretical calculations.
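The underlying construction (analytic signal via a one-sided spectrum, with AM and FM read off from magnitude and phase) can be sketched as follows. The O(n^2) DFT and the pure tone are only for illustration; the over-modulated case the letter actually addresses is not handled here.

```python
import cmath, math

def analytic_signal(x):
    """Analytic signal via the DFT: suppress negative frequencies (Gabor/Hilbert)."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if 0 < k < n // 2:
            X[k] *= 2          # double the positive frequencies
        elif k > n // 2:
            X[k] = 0           # zero the negative frequencies
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

n = 64
x = [math.cos(2 * math.pi * 8 * t / n) for t in range(n)]   # unit-amplitude tone
z = analytic_signal(x)
am = [abs(v) for v in z]                                    # AM: ~1 everywhere
# FM: wrapped first difference of the phase, ~2*pi*8/n rad/sample
fm = [(cmath.phase(z[t + 1]) - cmath.phase(z[t]) + math.pi) % (2 * math.pi) - math.pi
      for t in range(n - 1)]
```

For an over-modulated signal the non-negative AM = |z| develops kinks and the phase difference jumps, which is exactly the discontinuity the letter removes by letting the AM change sign.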
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.106015]
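The role of the Tikhonov parameter is easy to demonstrate on a small ill-conditioned least-squares problem. The sketch below is generic (a hand-rolled 2-parameter normal-equations solve with made-up numbers), not the paper's MRM or GCV selection rule; it only shows the fidelity-versus-stability trade-off that parameter selection must resolve.

```python
def matT(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def tikhonov(J, b, lam):
    """x = (J^T J + lam*I)^(-1) J^T b for a 2-parameter problem (Cramer's rule)."""
    JtJ = matmul(matT(J), J)
    a11, a12 = JtJ[0][0] + lam, JtJ[0][1]
    a21, a22 = JtJ[1][0], JtJ[1][1] + lam
    r1, r2 = matvec(matT(J), b)
    det = a11 * a22 - a12 * a21
    return [(a22 * r1 - a12 * r2) / det, (a11 * r2 - a21 * r1) / det]

J = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]   # nearly rank-deficient Jacobian
x_true = [1.0, 2.0]
b = [3.01, 2.992, 2.998]                        # J @ x_true plus small noise

err = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, x_true))
x_naive = tikhonov(J, b, 0.0)    # noise is amplified by the ill-conditioning
x_reg = tikhonov(J, b, 1e-2)     # a moderate lambda stabilizes the estimate
```

Here lam = 0 returns an estimate far from x_true while lam = 1e-2 is close; automated schemes such as the MRM-based method search for this sweet spot from the data alone.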
Abstract:
A series of macrobicyclic dizinc(II) complexes [Zn2(L1-2)B](ClO4)4 (1-6) have been synthesized and characterized (L1-2 are polyaza macrobicyclic binucleating ligands, and B is the N,N-donor heterocyclic base, viz. 2,2'-bipyridine (bipy) or 1,10-phenanthroline (phen)). The DNA and protein binding, DNA hydrolysis and anticancer activity of these complexes were investigated. The interactions of complexes 1-6 with calf thymus DNA were studied by spectroscopic techniques, including absorption, fluorescence and CD spectroscopy. The DNA binding constants of the complexes were found to range from 2.80 x 10^5 to 5.25 x 10^5 M^-1, and the binding affinities are in the following order: 3 > 6 > 2 > 5 > 1 > 4. All the dizinc(II) complexes 1-6 are found to effectively promote the hydrolytic cleavage of plasmid pBR322 DNA under anaerobic and aerobic conditions. Kinetic data for DNA hydrolysis promoted by 3 and 6 under physiological conditions give observed rate constants (k_obs) of 5.56 +/- 0.1 and 5.12 +/- 0.2 h^-1, respectively, showing a 10^7-fold rate acceleration over the uncatalyzed reaction of dsDNA. Remarkably, the macrobicyclic dizinc(II) complexes 1-6 bind and cleave bovine serum albumin (BSA), and effectively promote the caspase-3 and caspase-9 dependent deaths of HeLa and BeWo cancer cells. The cytotoxicity of the complexes was further confirmed by lactate dehydrogenase enzyme levels in cancer cell lysate and content media.
Abstract:
We address the problem of identifying the constituent sources in a single-sensor mixture signal consisting of contributions from multiple simultaneously active sources. We propose a generic framework for mixture signal analysis based on a latent variable approach. The basic idea of the approach is to detect known sources represented as stochastic models, in a single-channel mixture signal without performing signal separation. A given mixture signal is modeled as a convex combination of known source models and the weights of the models are estimated using the mixture signal. We show experimentally that these weights indicate the presence/absence of the respective sources. The performance of the proposed approach is illustrated through mixture speech data in a reverberant enclosure. For the task of identifying the constituent speakers using data from a single microphone, the proposed approach is able to identify the dominant source with up to 8 simultaneously active background sources in a room with RT60 = 250 ms, using models obtained from clean speech data for a Source to Interference Ratio (SIR) greater than 2 dB.
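The weight-estimation idea (fit only the convex-combination weights of fixed, known source models to the observed mixture) can be sketched with EM in one dimension. The Gaussian "source models" and data below are fabricated stand-ins for the paper's trained speaker models and single-channel features.

```python
import math, random
random.seed(1)

def gauss_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def estimate_weights(data, models, iters=50):
    """EM for the convex-combination weights of fixed, known source models."""
    k = len(models)
    w = [1.0 / k] * k
    for _ in range(iters):
        counts = [0.0] * k
        for x in data:
            post = [w[j] * models[j](x) for j in range(k)]   # latent-source posteriors
            z = sum(post)
            for j in range(k):
                counts[j] += post[j] / z
        w = [c / len(data) for c in counts]                  # weights stay convex
    return w

# three known "source" densities; the third source is absent from the mixture
models = [lambda x: gauss_pdf(x, -2, 1),
          lambda x: gauss_pdf(x, 2, 1),
          lambda x: gauss_pdf(x, 6, 1)]
data = [random.gauss(-2, 1) for _ in range(300)] + [random.gauss(2, 1) for _ in range(100)]
w = estimate_weights(data, models)
# w ≈ [0.75, 0.25, ~0]: the near-zero weight flags the absent source
```

As in the abstract, no separation is performed: the estimated weights alone indicate which sources are present and which dominate.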
Abstract:
We report here the synthesis and characterization of a few phenolate-based ligands bearing tert-amino substituents and their Zn(II) and Cu(II) metal complexes. Three mono/binuclear Zn(II) and Cu(II) complexes [Zn(L1)(H2O)].CH3OH.H2O (1) (H2L1 = 6,6'-(((2-dimethylamino)ethylazanediyl)bis(methylene))bis(2,4-dimethylphenol)), [Zn2(L2)2] (2) (H2L2 = 2,2'-(((2-dimethylamino)ethyl)azanediyl)bis(methylene)bis(4-methylphenol)) and [Cu2(L3)2].CH2Cl2 (3) (H2L3 = 6,6'-(((2-(diethylamino)ethyl)azanediyl)bis(methylene))bis(2,4-dimethylphenol)) were synthesized by using three symmetrical tetradentate ligands containing N2O2 donor sites. These complexes are characterized by a variety of techniques, including elemental analysis, mass spectrometry, 1H and 13C NMR spectroscopy and single-crystal X-ray analysis. The new complexes have been tested for phosphotriesterase (PTE) activity with the help of 31P NMR spectroscopy. The 31P NMR studies show that the mononuclear complex [Zn(L1)(H2O)].CH3OH.H2O (1) can hydrolyse the phosphotriester p-nitrophenyl diphenylphosphate (PNPDPP) more efficiently than the binuclear complexes [Zn2(L2)2] (2) and [Cu2(L3)2].CH2Cl2 (3). The mononuclear Zn(II) complex (1), having one coordinated water molecule, exhibits significant PTE activity, which may be due to the generation of a Zn(II)-bound hydroxide ion during the hydrolysis reactions in CHES buffer at pH 9.0.
Abstract:
Ranking problems have become increasingly important in machine learning and data mining in recent years, with applications ranging from information retrieval and recommender systems to computational biology and drug discovery. In this paper, we describe a new ranking algorithm that directly maximizes the number of relevant objects retrieved at the absolute top of the list. The algorithm is a support vector style algorithm, but due to the different objective, it no longer leads to a quadratic programming problem. Instead, the dual optimization problem involves l1,∞ constraints; we solve this dual problem using the recent l1,∞ projection method of Quattoni et al. (2009). Our algorithm can be viewed as an l∞-norm extreme of the lp-norm based algorithm of Rudin (2009) (albeit in a support vector setting rather than a boosting setting); thus we refer to the algorithm as the ‘Infinite Push’. Experiments on real-world data sets confirm the algorithm’s focus on accuracy at the absolute top of the list.
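The quantity the ‘Infinite Push’ maximizes, relevant items ranked above the absolute top of the list, can be measured in a few lines. This is only the evaluation metric implied by the objective, not the support-vector solver or the l1,∞ projection; the scores and labels below are fabricated for illustration, and at least one irrelevant item is assumed.

```python
def positives_at_top(scores, labels):
    """Count relevant items (label 1) ranked above the highest-scoring
    irrelevant item (label 0), the quantity the Infinite Push objective targets."""
    top_neg = max(s for s, y in zip(scores, labels) if y == 0)
    return sum(1 for s, y in zip(scores, labels) if y == 1 and s > top_neg)

scores = [0.9, 0.8, 0.75, 0.6, 0.5, 0.4]
labels = [1, 1, 0, 1, 0, 1]
print(positives_at_top(scores, labels))  # → 2: only items above the best negative count
```

Note how unforgiving the metric is: the relevant items scored 0.6 and 0.4 contribute nothing because a single negative outranks them, which is exactly why the objective differs from pairwise ranking losses.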
Abstract:
We address the classical problem of delta feature computation, and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained from short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, which are computed as temporal derivatives of the original features, are used. Typically, the delta features are computed smoothly using local least-squares (LS) polynomial fitting on each feature vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can also be viewed as the modulation filters proposed by Hermansky.
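The local least-squares delta computation the letter reinterprets is the standard regression formula, which is indeed a fixed antisymmetric FIR (first-order Savitzky-Golay derivative) filter. A minimal sketch with half-width M = 2 and edge replication (a common convention, not necessarily the paper's exact setup):

```python
def delta(frames, M=2):
    """Delta features by local least-squares slope; equivalently an antisymmetric
    FIR (first-order Savitzky-Golay) filter of half-width M over one trajectory."""
    denom = 2 * sum(n * n for n in range(1, M + 1))
    T = len(frames)
    out = []
    for t in range(T):
        # replicate edge frames, as is conventional at trajectory boundaries
        acc = sum(n * (frames[min(t + n, T - 1)] - frames[max(t - n, 0)])
                  for n in range(1, M + 1))
        out.append(acc / denom)
    return out

c = [0.0, 1.0, 2.0, 3.0, 4.0]   # a linearly increasing coefficient trajectory
print(delta(c))                  # → [0.5, 0.8, 1.0, 0.8, 0.5]; the central slope is exact
```

Because the weights n/denom are fixed, the whole computation is a single convolution per coefficient trajectory, which is the filtering view that yields the latency saving.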
Abstract:
We propose a novel method of constructing Dispersion Matrices (DM) for Coherent Space-Time Shift Keying (CSTSK) relying on arbitrary PSK signal sets by exploiting codes from division algebras. We show that classic codes from Cyclic Division Algebras (CDA) may be interpreted as DMs conceived for PSK signal sets. Hence various benefits of CDA codes, such as their ability to achieve full diversity, are inherited by CSTSK. We demonstrate that the proposed CDA-based DMs are capable of achieving a lower symbol error ratio than the existing DMs generated using the capacity as their optimization objective function, for both perfect and imperfect channel estimation.
Abstract:
In this letter, we propose a reduced-complexity implementation of the partial interference cancellation group decoder with successive interference cancellation (PIC-GD-SIC) by employing the theory of displacement structures. The proposed algorithm exploits the block-Toeplitz structure of the effective matrix and chooses an ordering of the groups such that the zero-forcing matrices associated with the various groups are obtained through Schur recursions without any approximations. We show using an example that the proposed implementation offers a significantly reduced computational complexity compared to the direct approach without any loss in performance.
Abstract:
Real-time object tracking is a critical task in many computer vision applications. Achieving rapid and robust tracking while handling changes in object pose and size, varying illumination and partial occlusion is challenging given the limited computational resources. In this paper we propose a real-time object tracker in an l1 framework that addresses these issues. In the proposed approach, dictionaries containing templates of overlapping object fragments are created. The candidate fragments are sparsely represented in the dictionary fragment space by solving the l1-regularized least-squares problem. The nonzero coefficients indicate the relative motion between the target and candidate fragments along with a fidelity measure. The final object motion is obtained by fusing the reliable motion information. The dictionary is updated based on the object likelihood map. The proposed tracking algorithm is tested on various challenging videos and found to outperform earlier approaches.
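The core l1-regularized least-squares step can be sketched with plain iterative shrinkage-thresholding (ISTA). The tiny "fragment dictionary" below is fabricated, and this is only the sparse-coding ingredient, not the full tracker with motion fusion and dictionary updates.

```python
def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def ista(A, y, lam, step, iters=1000):
    """ISTA for min_x ||A x - y||^2 + lam * ||x||_1 (A given as a list of rows)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step on the quadratic term, then shrink toward zero
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# a dictionary of three "fragment templates" (columns); y matches the second one
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.0, 0.0, 0.5],
     [1.0, 1.0, 0.0]]
y = [0.0, 1.0, 0.0, 1.0]
x = ista(A, y, lam=0.1, step=0.1)
# x ≈ [0, 0.975, 0]: the single nonzero coefficient picks out the matching fragment
```

In the tracker, the index of each nonzero coefficient identifies which dictionary fragment (and hence which candidate displacement) explains the observed fragment, and its magnitude serves as the fidelity measure.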