991 results for Fléchier, Esprit, 1632-1710.


Relevance:

10.00%

Publisher:

Abstract:

Conditional branches frequently exhibit similar behavior (bias, time-varying behavior, etc.), a property that can be used to improve branch prediction accuracy. Branch clustering constructs groups or clusters of branches with similar behavior and applies a different branch prediction technique to each branch cluster. We revisit the topic of branch clustering with the aim of generalizing it. We investigate several methods for recording cluster information, the most effective being to store the information in the branch target buffer. We also investigate alternative methods of using the branch cluster identification in the branch predictor. With these improvements we arrive at a branch clustering technique that obtains higher accuracy than previous approaches presented in the literature for the gshare predictor. Furthermore, we evaluate our branch clustering technique on a wide range of predictors to show the general applicability of the method. Branch clustering improves the accuracy of the local-history (PAg) predictor, the path-based perceptron and the PPM-like predictor, one of the 2004 CBP finalists.
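
To make the baseline concrete, the sketch below shows a conventional gshare predictor extended with one pattern table per branch cluster. How a branch obtains its cluster identifier (in the paper, from information stored in the branch target buffer) is abstracted into an argument, and the class name, table size and counter widths are illustrative assumptions, not the configuration evaluated in the paper.

```python
class ClusteredGshare:
    """Gshare predictor with one table of 2-bit saturating counters per branch cluster."""

    def __init__(self, history_bits=12, n_clusters=4):
        self.mask = (1 << history_bits) - 1
        self.ghr = 0                                        # global history register
        # one pattern table per cluster, counters initialised to weakly taken
        self.tables = [[2] * (1 << history_bits) for _ in range(n_clusters)]

    def _index(self, pc):
        return (pc ^ self.ghr) & self.mask                  # classic gshare index: PC xor history

    def predict(self, pc, cluster):
        return self.tables[cluster][self._index(pc)] >= 2   # True = predict taken

    def update(self, pc, cluster, taken):
        idx = self._index(pc)
        ctr = self.tables[cluster][idx]
        self.tables[cluster][idx] = min(3, ctr + 1) if taken else max(0, ctr - 1)
        self.ghr = ((self.ghr << 1) | int(taken)) & self.mask
```

Folding the cluster identifier into the index of a single shared table, rather than keeping separate tables, is another way of using it, in the spirit of the alternative methods mentioned above.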

Relevance:

10.00%

Publisher:

Abstract:

Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct-mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
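
As an illustration of the kind of randomization function described above, the sketch below computes a set index in which each index bit is the XOR of a small subset of block-address bits, selected by a per-bit mask. The specific masks are made-up examples with two inputs per XOR; they are not functions generated by the paper's search algorithm.

```python
def parity(x):
    """Return the XOR (parity) of all bits set in x."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def randomized_index(block_addr, bit_masks):
    """Each output index bit i is the XOR of the block-address bits selected by bit_masks[i]."""
    idx = 0
    for i, mask in enumerate(bit_masks):
        idx |= parity(block_addr & mask) << i
    return idx

# Illustrative masks for a 16-set cache (m = 4): each index bit XORs two address bits.
masks = [0b000011, 0b000110, 0b011000, 0b110000]
print(randomized_index(0b101101, masks))
```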

Relevance:

10.00%

Publisher:

Abstract:

Changes to software requirements not only pose a risk to the successful delivery of software applications but also provide opportunity for improved usability and value. Increased understanding of the causes and consequences of change can support requirements management and also make progress towards the goal of change anticipation. This paper presents the results of two case studies that address objectives arising from that ultimate goal. The first case study evaluated the potential of a change source taxonomy containing the elements ‘market’, ‘organisation’, ‘vision’, ‘specification’, and ‘solution’ to provide a meaningful basis for change classification and measurement. The second case study investigated whether the requirements attributes of novelty, complexity, and dependency correlated with requirements volatility. While insufficiency of data in the first case study precluded an investigation of changes arising due to the change source of ‘market’, for the remainder of the change sources, results indicate a significant difference in cost, value to the customer and management considerations. Findings show that higher cost and value changes arose more often from ‘organisation’ and ‘vision’ sources; these changes also generally involved the co-operation of more stakeholder groups and were considered to be less controllable than changes arising from the ‘specification’ or ‘solution’ sources. Results from the second case study indicate that only ‘requirements dependency’ is consistently correlated with volatility and that changes coming from each change source affect different groups of requirements. We conclude that the taxonomy can provide a meaningful means of change classification, but that a single requirement attribute is insufficient for change prediction. A theoretical causal account of requirements change is drawn from the implications of the combined results of the two case studies.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new algorithm for learning the structure of a special type of Bayesian network. The conditional phase-type (C-Ph) distribution is a Bayesian network that models the probabilistic causal relationships between a skewed continuous variable, modelled by the Coxian phase-type distribution (a special type of Markov model), and a set of interacting discrete variables. The algorithm takes a dataset as input and produces the structure, parameters and graphical representations of the fitted C-Ph distribution as output. The algorithm, which uses a greedy-search technique and has been implemented in MATLAB, is evaluated using a simulated data set consisting of 20,000 cases. The results show that the original C-Ph distribution is recovered, and the fit of the network to the data is discussed.
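
For readers unfamiliar with the continuous component, the sketch below simulates a Coxian phase-type distribution: from each transient phase the process either exits (is absorbed) or moves on to the next phase, each governed by an exponential rate. It illustrates only the Coxian part of the C-Ph distribution, not the structure-learning algorithm itself, and the rate values are invented for illustration.

```python
import random

def sample_coxian(lambdas, mus):
    """Draw one sample from a Coxian phase-type distribution.

    lambdas[i]: rate of moving from phase i to phase i + 1 (unused for the last phase).
    mus[i]:     rate of absorption (exit) from phase i.
    """
    t, phase, n = 0.0, 0, len(mus)
    while True:
        onward = lambdas[phase] if phase < n - 1 else 0.0
        total = onward + mus[phase]
        t += random.expovariate(total)            # holding time in the current phase
        if random.random() < mus[phase] / total:
            return t                              # absorbed: the sample is the total time
        phase += 1                                # otherwise move on to the next phase

# Example: a three-phase Coxian with made-up rates.
samples = [sample_coxian([1.5, 0.8, 0.0], [0.4, 0.6, 1.2]) for _ in range(10000)]
print(sum(samples) / len(samples))
```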

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a novel framework for visual tracking of human body parts is introduced. The approach presented demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera by using a limb-tracking system based on a 2-D articulated model and a double-tracking strategy. Its key contribution is that the 2-D model is constrained only by biomechanical knowledge about human bipedal motion, rather than by constraints linked to a specific activity or camera view. These characteristics make our approach suitable for real visual surveillance applications. Experiments on a set of indoor and outdoor sequences demonstrate the effectiveness of our method in tracking human lower body parts. Moreover, a detailed comparison with current tracking methods is presented.

Relevance:

10.00%

Publisher:

Abstract:

The anionic speciation of chlorostannate(II) ionic liquids, prepared by mixing 1-alkyl-3-methylimidazolium chloride and tin(II) chloride in various molar ratios, χ(SnCl₂), was investigated in both the solid and liquid states. The room-temperature ionic liquids were investigated by ¹¹⁹Sn NMR spectroscopy, X-ray photoelectron spectroscopy, and viscometry. Crystalline samples were studied using Raman spectroscopy, single-crystal X-ray crystallography, and differential scanning calorimetry. Both liquid and solid systems (crystallized from the melt) contained [SnCl₃]⁻ in equilibrium with Cl⁻ when χ(SnCl₂) < 0.50, [SnCl₃]⁻ in equilibrium with [Sn₂Cl₅]⁻ when χ(SnCl₂) > 0.50, and only [SnCl₃]⁻ when χ(SnCl₂) = 0.50. Tin(II) chloride was found to precipitate when χ(SnCl₂) > 0.63. No evidence was detected for the existence of [SnCl₄]⁻ across the entire range of χ(SnCl₂), although such anions have been reported in the literature for chlorostannate(II) organic salts crystallized from organic solvents. Furthermore, the Lewis acidity of the chlorostannate(II)-based systems, expressed by their Gutmann acceptor number, has been determined as a function of the composition, χ(SnCl₂), revealing a Lewis acidity for χ(SnCl₂) > 0.50 samples comparable to that of the analogous zinc(II)-based systems. A change in the Lewis basicity of the anion was estimated using ¹H NMR spectroscopy, by comparison of the measured chemical shifts of the C-2 hydrogen in the imidazolium ring. Finally, compositions containing free chloride anions (χ(SnCl₂) < 0.50) were found to oxidize slowly in air to form a chlorostannate(IV) ionic liquid containing the [SnCl₆]²⁻ anion.

Relevance:

10.00%

Publisher:

Abstract:

An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations in which the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to the decision. It proposes an elicitation methodology in which the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a single indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
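
A minimal sketch of the hybrid propagation step is given below: the aleatory input is sampled by Monte Carlo, the epistemic input is kept as an interval, and the output is a probability interval for exceeding a threshold. It assumes, for simplicity, that the model is monotone in the epistemic parameter so that evaluating the two interval endpoints bounds the output; the function names and the example numbers are illustrative, not taken from the paper.

```python
import random

def hybrid_propagation(f, sample_aleatory, epistemic_interval, threshold, n=10000):
    """Monte Carlo over the aleatory input, interval analysis over the epistemic one.

    Returns an interval [P_low, P_high] for P(f(x, e) > threshold), assuming f is
    monotone in the epistemic parameter e (a common simplification).
    """
    lo_count = hi_count = 0
    e_lo, e_hi = epistemic_interval
    for _ in range(n):
        x = sample_aleatory()
        y_vals = (f(x, e_lo), f(x, e_hi))
        y_lo, y_hi = min(y_vals), max(y_vals)
        if y_lo > threshold:     # exceeds the threshold for every epistemic value
            lo_count += 1
        if y_hi > threshold:     # exceeds the threshold for some epistemic value
            hi_count += 1
    return lo_count / n, hi_count / n

# Example: a lognormal load (aleatory) divided by a capacity factor only known
# to lie in [0.8, 1.2] (epistemic); the result is a probability interval.
p_low, p_high = hybrid_propagation(
    f=lambda x, e: x / e,
    sample_aleatory=lambda: random.lognormvariate(0.0, 0.5),
    epistemic_interval=(0.8, 1.2),
    threshold=2.0,
)
print(p_low, p_high)
```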

Relevance:

10.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations, as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
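
As a small aside on the redundant number systems mentioned above, the sketch below shows carry-save addition: three operands are reduced to a redundant (sum, carry) pair without propagating carries across the word, which is what makes deeply pipelined arithmetic attractive in hardware. This is only the arithmetic identity; the msb-first pipelined filter designs discussed in the paper are hardware structures not captured here.

```python
def carry_save_add(a, b, c):
    """Reduce a + b + c to a redundant (sum, carry) pair: no carry ripples across the word."""
    s = a ^ b ^ c                                 # bitwise sum without carries
    carry = ((a & b) | (a & c) | (b & c)) << 1    # majority of each bit triple, shifted up
    return s, carry

# The redundant pair always satisfies a + b + c == s + carry.
a, b, c = 0b1011, 0b1101, 0b0111
s, carry = carry_save_add(a, b, c)
assert a + b + c == s + carry
```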

Relevance:

10.00%

Publisher:

Abstract:

This paper presents single-chip FPGA implementations of the Advanced Encryption Standard (AES) algorithm, Rijndael. In particular, the designs utilise look-up tables to implement the entire Rijndael Round function. A comparison is provided between these designs and similar existing implementations. Hardware implementations of encryption algorithms prove much faster than equivalent software implementations, and since there is a need to perform encryption on data in real time, speed is very important. In particular, Field Programmable Gate Arrays (FPGAs) are well suited to encryption implementations due to their flexibility and an architecture that can be exploited to accommodate typical encryption transformations. In this paper, a Look-Up Table (LUT) methodology is introduced in which complex and slow operations are replaced by simple LUTs. A LUT-based, fully pipelined Rijndael implementation is described which has a pre-placement performance of 12 Gbits/sec; this is a factor of 1.2 faster than an alternative design in which look-up tables are utilised to implement only one of the Round function transformations, and 6 times faster than other previous single-chip implementations. Iterative Rijndael implementations based on the look-up-table design approach are also discussed and prove faster than typical iterative implementations.
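
The following sketch illustrates the general LUT idea on a single, well-known piece of AES, MixColumns, by replacing GF(2⁸) multiplications with precomputed tables. It is only a software illustration of the principle of trading computation for look-ups; the paper's designs implement the complete Round function as look-up tables in FPGA hardware.

```python
def xtime(x):
    """Multiply by 2 in GF(2^8) using the AES polynomial 0x11B."""
    x <<= 1
    return (x ^ 0x1B) & 0xFF if x & 0x100 else x

# Precomputed look-up tables replace the GF(2^8) arithmetic at run time.
MUL2 = [xtime(x) for x in range(256)]
MUL3 = [xtime(x) ^ x for x in range(256)]

def mix_single_column(col):
    """AES MixColumns on one 4-byte column using only table look-ups and XORs."""
    a0, a1, a2, a3 = col
    return [
        MUL2[a0] ^ MUL3[a1] ^ a2 ^ a3,
        a0 ^ MUL2[a1] ^ MUL3[a2] ^ a3,
        a0 ^ a1 ^ MUL2[a2] ^ MUL3[a3],
        MUL3[a0] ^ a1 ^ a2 ^ MUL2[a3],
    ]

# Standard MixColumns test vector.
assert mix_single_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]
```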

Relevance:

10.00%

Publisher:

Abstract:

Web databases are now pervasive. Such a database can be accessed only via its query interface, usually an HTML query form. Extracting Web query interfaces, which creates a formal representation of a query form by extracting the set of query conditions in it, is a critical step in data integration across multiple Web databases. This paper presents a novel approach to extracting Web query interfaces. In this approach, a generic set of query condition rules is created to define query conditions that are semantically equivalent to SQL search conditions. Query condition rules represent the semantic roles that labels and form elements play in query conditions, and how they are hierarchically grouped into the constructs of query conditions. To group labels and form elements in a query form, we exploit both their structural proximity in the hierarchy of structures in the query form, which is captured by the tree of nested tags in the HTML code of the form, and their semantic similarity, which is captured by the various short texts used in labels, form elements and their properties. We have implemented the proposed approach, and our experimental results show that it is highly effective.
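
A toy sketch of the grouping step is shown below: it walks the HTML of a form and pairs each form element with the nearest preceding text node as a candidate label, a crude stand-in for the structural-proximity and semantic grouping described above. The form markup and field names are invented for illustration; the actual query condition rules of the approach are far richer.

```python
from html.parser import HTMLParser

class QueryFormParser(HTMLParser):
    """Pair each form element with the nearest preceding text node as its candidate label."""

    def __init__(self):
        super().__init__()
        self.last_text = ""
        self.conditions = []    # (candidate label, field name, field type) triples

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            a = dict(attrs)
            self.conditions.append((self.last_text, a.get("name", ""), a.get("type", tag)))

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.last_text = text

parser = QueryFormParser()
parser.feed('<form>Title <input name="title" type="text">'
            '<p>Price from <input name="min" type="text"> to <input name="max" type="text"></p></form>')
print(parser.conditions)
# [('Title', 'title', 'text'), ('Price from', 'min', 'text'), ('to', 'max', 'text')]
```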

Relevance:

10.00%

Publisher:

Abstract:

Objective: To investigate association of scavenger receptor class B, member 1 (SCARB1) genetic variants with serum carotenoid levels of lutein (L) and zeaxanthin (Z) and macular pigment optical density (MPOD).
Design: A cross-sectional study of healthy adults aged 20 to 70.
Participants: We recruited 302 participants after local advertisement.
Methods: We measured MPOD by customized heterochromatic flicker photometry. Fasting blood samples were taken for serum L and Z measurement by high-performance liquid chromatography and lipoprotein analysis by spectrophotometric assay. Forty-seven single nucleotide polymorphisms (SNPs) across SCARB1 were genotyped using Sequenom technology. Association analyses were performed using PLINK to compare allele and haplotype means, with adjustment for potential confounding and correction for multiple comparisons by permutation testing. Replication analysis was performed in the TwinsUK and Carotenoids in Age-Related Eye Disease Study (CAREDS) cohorts.
Main Outcome Measures: Odds ratios for MPOD area, serum L and Z concentrations associated with genetic variations in SCARB1 and interactions between SCARB1 and gender.
Results: After multiple regression analysis with adjustment for age, body mass index, gender, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, smoking, and dietary L and Z levels, 5 SNPs were significantly associated with serum L concentration and 1 SNP with MPOD (P<0.01). Only the association between rs11057841 and serum L withstood correction for multiple comparisons by permutation testing (P<0.01) and replicated in the TwinsUK cohort (P = 0.014). Independent replication was also observed in the CAREDS cohort with rs10846744 (P = 2×10⁻⁴), an SNP in high linkage disequilibrium with rs11057841 (r² = 0.93). No interactions by gender were found. Haplotype analysis revealed no stronger association than obtained with single SNP analyses.
Conclusions: Our study has identified an association between rs11057841 and serum L concentration (24% increase per T allele) in healthy subjects, independent of potential confounding factors. Our data support further evaluation of the role of SCARB1 in the transport of macular pigment and the possible modulation of age-related macular degeneration risk through combating the effects of oxidative stress within the retina.
Financial Disclosure(s): Proprietary or commercial disclosures may be found after the references. Ophthalmology 2013;120:1632–1640 © 2013 by the American Academy of Ophthalmology.

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates sub-integer implementations of the adaptive Gaussian mixture model (GMM) for background/foreground segmentation, to allow deployment of the method on low-cost/low-power processors that lack a Floating Point Unit (FPU). We propose two novel integer computer arithmetic techniques to update the Gaussian parameters. Specifically, the mean value and the variance of each Gaussian are updated by a redefined and generalised "round" operation that emulates the original updating rules for a large set of learning rates. Weights are represented by counters that are updated following stochastic rules to allow a wider range of learning rates, and the weight trend is approximated by a line or a staircase. We demonstrate that the memory footprint and computational cost of the GMM are significantly reduced, without significantly affecting the performance of background/foreground segmentation.
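
The sketch below conveys the flavour of an integer-only mean update, mean += alpha * (pixel - mean), with alpha restricted to a power of two so that the multiplication becomes a rounding shift. It is only meant to illustrate the idea of emulating the floating-point update with integer arithmetic; the paper's redefined "round" operation and the stochastic weight counters are more general than this.

```python
def update_mean_int(mean, pixel, shift):
    """Integer-only Gaussian mean update with learning rate alpha = 2**-shift.

    The difference (pixel - mean) is scaled by a rounding right shift
    (round half away from zero), so no floating-point unit is required.
    """
    diff = pixel - mean
    half = 1 << (shift - 1)
    step = (diff + half) >> shift if diff >= 0 else -((-diff + half) >> shift)
    return mean + step

# A constant pixel value pulls the integer mean towards it step by step.
mean = 100
for _ in range(30):
    mean = update_mean_int(mean, 140, shift=3)   # alpha = 1/8
print(mean)   # prints 137: stalls just below 140 once the step rounds to zero
```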