43 results for students that use drugs


Relevance: 100.00%

Abstract:

Most existing WCET estimation methods directly estimate execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Estimating ET directly may lead to a highly pessimistic estimate, since these methods may implicitly combine the worst-case IC with the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI-versus-IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, the CPI-versus-IC relationship is either random or CPI remains constant with varying IC; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method results in tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
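
As a concrete illustration of the estimation step, here is a minimal Python sketch that fits CPI = f(IC) from profiling runs and evaluates SWIC * f(SWIC). It assumes a simple linear fit, takes the ILP-derived SWIC as a given input, and uses made-up profiling data:

```python
# Minimal sketch, assuming a linear CPI-versus-IC relationship; the
# ILP-derived worst-case instruction count (SWIC) is a given input here.
import numpy as np

def estimate_wcet(ic_samples, cpi_samples, swic):
    """Fit CPI = f(IC) from profiling runs and return SWIC * f(SWIC)."""
    a, b = np.polyfit(ic_samples, cpi_samples, deg=1)  # f(IC) = a*IC + b
    return swic * (a * swic + b)

# Made-up (IC, CPI) pairs from many benchmark inputs, showing an
# inverse CPI-versus-IC trend:
ic = np.array([1.0e6, 1.2e6, 1.5e6, 2.0e6])
cpi = np.array([1.40, 1.38, 1.35, 1.30])
print(estimate_wcet(ic, cpi, swic=2.5e6))  # estimated worst-case cycles
```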

Relevance: 100.00%

Abstract:

Data prefetchers identify and exploit any regularity present in the history (training) stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve the performance of prefetchers. In addition to empirical evaluation, we use the information-theoretic metric of entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they also corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta-correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and that the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all L2 accesses as the prefetcher history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic.
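
To make the entropy measurement concrete, here is a minimal Python sketch that computes the Shannon entropy of the address-delta distribution of a training stream; the streams below are hypothetical, and lower entropy indicates a more regular (more prefetchable) history:

```python
# Minimal sketch: entropy of successive address deltas in a stream.
from collections import Counter
import math

def delta_entropy(addresses):
    """Shannon entropy (bits) of the successive address deltas."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    counts = Counter(deltas)
    n = len(deltas)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

primary_misses = [0, 64, 256, 320, 512, 576]          # filtered stream
extended_history = [0, 64, 128, 192, 256, 320, 384]   # misses and hits
print(delta_entropy(primary_misses))    # ~0.97 bits: less regular
print(delta_entropy(extended_history))  # 0.0 bits: perfectly regular
```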

Relevance: 100.00%

Abstract:

This paper presents an improved hierarchical clustering algorithm for the land cover mapping problem using a quasi-random distribution. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo-random or quasi-random distribution is used to split the data into a number of cluster centers satisfying the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO, which depends heavily on the initial distribution of particles in the search space, has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random distribution against the pseudo-random distribution in NPSO for splitting the data set. The Faure method is used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC), and NPSO with a pseudo-random distribution, is compared with the proposed approach, NPSO with a quasi-random (Faure) distribution. These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained, we conclude that using a quasi-random sequence with NPSO in the hierarchical clustering algorithm yields more accurate data classification.
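
As a rough illustration of quasi-random particle seeding, here is a minimal Python sketch using the Halton sequence, a radical-inverse low-discrepancy construction related to the Faure method used in the paper; Halton is shown only because it is compact to implement:

```python
# Minimal sketch: low-discrepancy (quasi-random) points for particle
# initialization. The paper uses the Faure method; Halton substitutes
# here purely for brevity of illustration.
def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    result, scale = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * scale
        i //= base
        scale /= base
    return result

def halton_points(n, bases=(2, 3)):
    """n low-discrepancy points in the unit square [0, 1)^2."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)]

# Initial particle positions for NPSO, to be scaled to the data bounds:
particles = halton_points(32)
```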

Relevance: 100.00%

Abstract:

Introduction: Cytochromes P450 (P450) and associated monooxygenases are a family of heme proteins involved in the metabolism of endogenous compounds (arachidonic acid, eicosanoids, and prostaglandins) as well as xenobiotics, including drugs and environmental chemicals. The liver is the major organ involved in P450-mediated metabolism, and the hepatic enzymes have been well characterized. Extrahepatic organs, such as the lung, kidney, and brain, also have the capability for biotransformation through P450 enzymes. The brain, including the human brain, expresses P450 enzymes that metabolize xenobiotics and endogenous compounds. Areas covered: An overview of P450-mediated metabolism in the brain is presented, focusing on the distinct differences seen in the expression of P450 enzymes, the generation of unique P450 enzymes in the brain through alternative splicing, and their consequences for the metabolism of psychoactive drugs and inflammatory mediators, such as leukotrienes, thus modulating the inflammatory response. Expert opinion: The brain possesses unique P450s that metabolize drugs and endogenous compounds through pathways markedly different from those seen in the liver, indicating that direct extrapolation from liver to brain is not appropriate. It is therefore necessary to characterize the unique brain P450s and their ability to metabolize xenobiotics and endogenous compounds to better understand the functions of this important class of enzymes in the brain, especially the human brain.

Relevance: 100.00%

Abstract:

Arterial walls have a regular and lamellar organization of elastin, present as concentric fenestrated networks in the media. In contrast, elastin networks are longitudinally oriented in layers adjacent to the media. In a previous model exploring the biomechanics of arterial elastin, we proposed a microstructurally motivated strain energy function modeled using orthotropic material symmetry. Using mechanical experiments, we showed that the neo-Hookean term had a dominant contribution to the overall form of the strain energy function, whereas the invariants corresponding to the two fiber families had smaller contributions. To extend these investigations, we use biaxial force-controlled experiments to quantify regional variations in the anisotropy and nonlinearity of elastin isolated from bovine aortic tissue proximal and distal to the heart. Results from this study show that tissue nonlinearity increases significantly distal to the heart as compared to proximally located regions. Distally located samples also show a trend toward increased anisotropy, with the circumferential direction stiffer than the longitudinal, as compared to the isotropic and relatively linear response of proximally located elastin samples. These results are consistent with the underlying tissue histology: proximally located samples had higher optical density, greater fiber thickness, and a trend toward lower tortuosity in their elastin fibers, as compared to the thinner and highly undulating elastin fibers isolated from distally located samples. Our studies suggest that it is important to consider elastin fiber orientations in investigations that use microstructure-based models to describe the contributions of elastin and collagen to arterial mechanics.
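
For reference, a common way to write a microstructurally motivated orthotropic strain energy function of the kind the abstract refers to is a neo-Hookean matrix term plus exponential terms for two fiber families (a Holzapfel-type form; this is only a sketch, and the paper's exact function and constants may differ):

```latex
% Sketch: neo-Hookean matrix term plus two fiber-family terms.
% I_1 is the first strain invariant, I_4 and I_6 are the squared
% stretches along the two fiber directions, and \mu, k_1, k_2 are
% material constants.
W = \frac{\mu}{2}\,(I_1 - 3)
  + \sum_{i \in \{4,6\}} \frac{k_1}{2 k_2}
    \left[ \exp\!\left( k_2 (I_i - 1)^2 \right) - 1 \right]
```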

Relevance: 100.00%

Abstract:

Visualizing symmetric patterns in data often helps domain scientists make important observations and gain insights about the underlying experiment. Detecting symmetry in scalar fields is a nascent area of research, and existing methods are either not robust in the presence of noise or are computationally costly. We propose a data structure called the augmented extremum graph and use it to design a novel symmetry detection method based on robust estimation of distances. The augmented extremum graph captures both topological and geometric information about the scalar field and enables robust and computationally efficient detection of symmetry. We apply the proposed method to detect symmetries in cryo-electron microscopy datasets, and the experiments demonstrate that the algorithm is capable of detecting symmetry even in the presence of significant noise. We also describe novel applications that use the detected symmetry to enhance the visualization of scalar field data and facilitate its exploration.
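
As a rough illustration of the kind of data structure involved, here is a minimal Python sketch of an extremum graph whose arcs are augmented with geometric samples so that distances between extrema can be estimated; the classes and construction are illustrative, not the paper's algorithm:

```python
# Minimal sketch: extrema as nodes, arcs augmented with sampled geometry.
from dataclasses import dataclass, field

@dataclass
class ExtremumNode:
    position: tuple   # location of the extremum in the domain
    value: float      # scalar field value at the extremum

@dataclass
class AugmentedArc:
    ends: tuple                                    # ids of the two extrema
    geometry: list = field(default_factory=list)   # sampled points on arc

    def length(self):
        """Approximate the arc length from the stored geometry."""
        return sum(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for p, q in zip(self.geometry, self.geometry[1:])
        )

arc = AugmentedArc(ends=(0, 1), geometry=[(0, 0), (1, 0), (1, 2)])
print(arc.length())  # 3.0: geometric distance along the arc
```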

Relevance: 100.00%

Abstract:

A Finite Feedback Scheme (FFS) for a quasi-static MIMO block fading channel with finite N-ary delay-free, noise-free feedback consists of N Space-Time Block Codes (STBCs) at the transmitter, one corresponding to each possible value of the feedback, and a function at the receiver that generates the N-ary feedback. A number of FFSs are available in the literature that provably attain full diversity. However, there is no known full-diversity criterion that applies universally to all FFSs. In this paper a universal necessary condition for any FFS to achieve full diversity is given, and based on this criterion the notion of Feedback-Transmission duration optimal (FT-optimal) FFSs is introduced: schemes that use the minimum amount of feedback N for a given transmission duration T, and the minimum T for a given N, to achieve full diversity. When there is no feedback (N = 1), an FT-optimal scheme consists of a single STBC, and the proposed condition reduces to the well-known necessary and sufficient condition for an STBC to achieve full diversity. Also, a sufficient criterion for full diversity is given for FFSs in which the component STBC yielding the largest minimum Euclidean distance is chosen; using this criterion, full-rate (N_t complex symbols per channel use) full-diversity FT-optimal schemes are constructed for all N_t > 1. These are the first full-rate full-diversity FFSs reported in the literature for T < N_t. Simulation results show that the new schemes have the best error performance among all known FFSs.
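
To illustrate the selection criterion mentioned above, here is a minimal Python sketch in which the receiver feeds back the index of the component STBC whose codebook, after passing through the channel, has the largest minimum pairwise Euclidean distance; the codebooks and channel are placeholders:

```python
# Minimal sketch: choose the component STBC maximizing the minimum
# pairwise Euclidean distance at the receiver. `stbcs` is a list of
# codebooks, each a list of (N_t x T) codeword matrices; H is N_r x N_t.
import numpy as np
from itertools import combinations

def min_distance(codewords, H):
    """Smallest pairwise distance between received codewords."""
    received = [H @ X for X in codewords]
    return min(np.linalg.norm(a - b) for a, b in combinations(received, 2))

def feedback_index(stbcs, H):
    """N-ary feedback: index of the best component STBC for this H."""
    return max(range(len(stbcs)), key=lambda i: min_distance(stbcs[i], H))
```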

Relevance: 100.00%

Abstract:

We consider the problem of characterizing the minimum average delay, or equivalently the minimum average queue length, of message symbols randomly arriving at the transmitter queue of a point-to-point link that dynamically selects an (n, k) block code from a given collection. The system is modeled by a discrete-time queue with an IID batch arrival process and batch service. We obtain a lower bound on the minimum average queue length, which is the optimal value of a linear program, using only the mean (λ) and variance (σ²) of the batch arrivals. For a finite collection of (n, k) codes, the minimum achievable average queue length is shown to be Θ(1/ε) as ε ↓ 0, where ε is the difference between the maximum code rate and λ. We obtain a sufficient condition for code rate selection policies to achieve this optimal growth rate. A simple family of policies that each use only one block code, as well as two other heuristic policies, is shown to be weakly optimal in the sense of achieving the 1/ε growth rate. An appropriate selection from the family of single-code policies is also shown to achieve the optimal coefficient σ²/2 of the 1/ε growth rate. We numerically compare the performance of the heuristic policies with the minimum achievable average queue length and the lower bound. For a countable collection of (n, k) codes, the optimal average queue length is shown to be Ω(1/ε). We illustrate the selectivity among policies of the growth-rate optimality criterion for both finite and countable collections of (n, k) block codes.
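
A minimal simulation sketch (in Python) of the modeled system may help fix ideas: a discrete-time queue with IID batch arrivals served by a single fixed (n, k) block code, so k message symbols leave the queue once every n slots. The arrival law and parameters are illustrative, not the paper's setup; the run shows the average queue length growing as the code rate k/n approaches the load λ:

```python
# Minimal sketch: discrete-time queue, IID batch arrivals, batch service
# of k symbols once every n slots (one (n, k) codeword per n slots).
import random

def avg_queue_length(n, k, lam, slots=200_000, seed=1):
    rng = random.Random(seed)
    q, total = 0, 0
    for t in range(slots):
        # Binomial(4, lam/4) batch arrivals: mean lam symbols per slot
        q += sum(rng.random() < lam / 4 for _ in range(4))
        if t % n == n - 1:        # one codeword completes every n slots
            q = max(q - k, 0)     # k message symbols are served
        total += q
    return total / slots

for k in (6, 5):  # code rates 0.6 and 0.5 against a load of 0.45
    print(k / 10, avg_queue_length(n=10, k=k, lam=0.45))
```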

Relevance: 100.00%

Abstract:

Two different soft-chemical, self-assembly-based solution approaches are employed to grow zinc oxide (ZnO) nanorods with controlled texture. The methods used involve seeding and growth on a substrate. Nanorods with various aspect ratios (1-5) and diameters (15-65 nm) are grown. Whether highly oriented rods are obtained is determined by the way the substrate is mounted within the chemical bath. Furthermore, a preheat and centrifugation step is essential for optimizing the growth solution. In the best samples, we obtain ZnO nanorods that are almost entirely oriented in the (002) direction; this is desirable since the electron mobility of ZnO is highest along this crystallographic axis. When used as the buffer layer of inverted organic photovoltaics (I-OPVs), these one-dimensional (1D) nanostructures offer (a) direct paths for charge transport and (b) a high interfacial area for electron collection. The morphological, structural, and optical properties of the ZnO nanorods are studied using scanning electron microscopy, X-ray diffraction, and ultraviolet-visible (UV-vis) absorption spectroscopy. Furthermore, the surface chemical features of the ZnO films are studied using X-ray photoelectron spectroscopy and contact angle measurements. Using the as-grown ZnO, inverted OPVs are fabricated and characterized. To improve device performance, the ZnO nanorods are subjected to UV-ozone irradiation. UV-ozone-treated ZnO nanorods show (i) improved optical transmission, (ii) increased wetting by the active organic components, and (iii) an increased concentration of Zn-O surface bonds. These observations correlate well with improved device performance. The devices fabricated using these optimized buffer layers have an efficiency of approximately 3.2% and a fill factor of 0.50, comparable to the best reported I-OPVs that use a P3HT:PCBM active layer.

Relevance: 100.00%

Abstract:

Several statistical downscaling models have been developed over the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and simulations of the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3). Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling model to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Using MLR to link the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) than the spatial interpolation technique used in earlier studies.
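
As an illustration of the SVM-based downscaling step, here is a minimal Python sketch that reduces the gridded predictors with PCA and regresses station-scale monthly rainfall with support vector regression; the arrays are placeholders, and the paper's preprocessing and fuzzy c-means step are omitted:

```python
# Minimal sketch: PCA for dimensionality reduction followed by SVR.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 48))  # monthly large-scale predictors (placeholder)
y = rng.gamma(2.0, 50.0, 360)   # monthly rainfall at one station (placeholder)

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVR(C=10.0))
model.fit(X[:300], y[:300])
print(model.predict(X[300:305]))  # downscaled rainfall estimates
```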

Relevance: 100.00%

Abstract:

We consider a server serving a time-slotted queued system of multiple packet-based flows, where at most one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe the instantaneous service rates of only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in that subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue, among all scheduling algorithms that base their decision on the current (or any finite past history of) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case where the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue in the absence of any channel state information is large-deviations optimal.
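
To make the per-subset scheduling step concrete, here is a minimal Python sketch of the Exponential rule applied after the rates of the sampled subset are observed; the exponent below is one standard variant of the rule, and the paper's exact constants may differ:

```python
# Minimal sketch: within the sampled subset, the Exponential rule favors
# flows whose queues are long relative to the subset average.
import math

def exponential_rule(queues, rates, subset):
    """Return the flow in `subset` chosen by the Exponential rule."""
    qbar = sum(queues[i] for i in subset) / len(subset)
    def weight(i):
        return rates[i] * math.exp(queues[i] / (1.0 + math.sqrt(qbar)))
    return max(subset, key=weight)

# Flow 2 wins despite its lower rate because its queue is much longer:
print(exponential_rule(queues=[3, 1, 40], rates=[5.0, 9.0, 2.5],
                       subset=[1, 2]))  # prints 2
```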

Relevance: 100.00%

Abstract:

In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. We refer to this as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computation, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm against standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction in training time without significantly affecting denoising performance.
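
The split-and-merge idea maps naturally onto off-the-shelf components; here is a minimal Python sketch using scikit-learn, where the clusterer, the dictionary learner, and all sizes are illustrative stand-ins for the paper's implementation:

```python
# Minimal sketch: cluster the data, learn a local dictionary per
# cluster, then learn a global dictionary over the pooled local atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge(X, n_clusters=4, local_atoms=32, global_atoms=64):
    labels = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(X)
    local = []
    for c in range(n_clusters):  # split: one dictionary per cluster
        dl = MiniBatchDictionaryLearning(n_components=local_atoms)
        local.append(dl.fit(X[labels == c]).components_)
    atoms = np.vstack(local)     # merge: relearn over the local atoms
    merger = MiniBatchDictionaryLearning(n_components=global_atoms)
    return merger.fit(atoms).components_

D = split_and_merge(np.random.randn(2000, 64))
print(D.shape)  # (64, 64): global_atoms x signal dimension
```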

Relevance: 100.00%

Abstract:

In this work, we address the issue of modeling squeeze film damping in nontrivial geometries that are not amenable to analytical solutions. The design and analysis of microelectromechanical systems (MEMS) resonators, especially those that use plate-like two-dimensional structures, require the structural dynamic response over the entire range of frequencies of interest. This response calculation typically involves the analysis of squeeze film effects and acoustic radiation losses. The acoustic analysis of vibrating plates is a very well understood problem that is routinely carried out using equivalent electrical circuits that employ lumped parameters (LP) for the acoustic impedance. Here, we present a method to use the same circuit with the same elements to account for the squeeze film effects as well, by establishing an equivalence between the parameters of the two domains through a rescaled relationship between the acoustic impedance and the squeeze film impedance. Our analysis is based on a simple observation: the squeeze film impedance rescaled by a factor of jω, where ω is the frequency of oscillation, qualitatively mimics the acoustic impedance over a large frequency range. We present a method to curve-fit the numerically simulated stiffness and damping coefficients obtained using finite element analysis (FEA). A significant advantage of the proposed method is that it is applicable to any geometry, trivial or nontrivial. It requires only a limited number of finite element method (FEM) runs within the frequency range of interest, reducing the computational cost while still modeling the behavior accurately over the entire range. We demonstrate the method using one trivial and one nontrivial geometry.
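
The curve-fitting step can be illustrated with a minimal Python sketch: fit a smooth frequency-dependent model to a few FEM-simulated damping samples so the coefficient can be evaluated anywhere in the range. The single-pole model form and the sample values are assumptions for illustration, not the paper's data:

```python
# Minimal sketch: fit a smooth model to sparse FEM damping samples.
import numpy as np
from scipy.optimize import curve_fit

def damping_model(omega, c0, wc):
    """Single-pole roll-off, a plausible squeeze-film damping shape."""
    return c0 / (1.0 + (omega / wc) ** 2)

# A handful of FEM samples (placeholder values): frequency vs. damping.
omega = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
c_fem = np.array([9.8e-6, 9.1e-6, 7.4e-6, 1.1e-6, 3.0e-7])

(c0, wc), _ = curve_fit(damping_model, omega, c_fem, p0=[1e-5, 2e4])
print(damping_model(2.5e4, c0, wc))  # damping anywhere in the range
```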