130 results for Fléchier, Esprit, 1632-1710.


Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new algorithm for learning the structure of a special type of Bayesian network. The conditional phase-type (C-Ph) distribution is a Bayesian network that models the probabilistic causal relationships between a skewed continuous variable, modelled by the Coxian phase-type distribution (a special type of Markov model), and a set of interacting discrete variables. The algorithm takes a dataset as input and produces the structure, parameters and graphical representations of the fit of the C-Ph distribution as output. The algorithm, which uses a greedy-search technique and has been implemented in MATLAB, is evaluated using a simulated data set consisting of 20,000 cases. The results show that the original C-Ph distribution is recaptured, and the fit of the network to the data is discussed.
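As an illustration of the continuous component involved, the following is a minimal Python sketch (not the authors' MATLAB implementation) of evaluating the density of a Coxian phase-type distribution; the phase structure and rates in the example are arbitrary assumptions.

```python
# A minimal sketch of the Coxian phase-type density, the continuous component of the
# C-Ph model. Phase i either advances to phase i+1 at rate lam[i] or absorbs at rate mu[i].
import numpy as np
from scipy.linalg import expm

def coxian_density(x, lam, mu):
    """Density f(x) = alpha * expm(T*x) * t0 of a Coxian phase-type distribution.

    lam : progression rates lam_1..lam_{n-1} (the last phase has no progression rate)
    mu  : absorption (exit) rates mu_1..mu_n
    """
    n = len(mu)
    T = np.zeros((n, n))                      # sub-generator over the transient phases
    for i in range(n):
        T[i, i] = -(mu[i] + (lam[i] if i < n - 1 else 0.0))
        if i < n - 1:
            T[i, i + 1] = lam[i]
    t0 = -T @ np.ones(n)                      # exit-rate vector
    alpha = np.zeros(n)
    alpha[0] = 1.0                            # the process always starts in phase 1
    return float(alpha @ expm(T * x) @ t0)

# Example: a 2-phase Coxian evaluated at x = 1.5 (illustrative rates)
print(coxian_density(1.5, lam=[0.8], mu=[0.3, 1.2]))
```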

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a novel framework for visual tracking of human body parts is introduced. The approach presented demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera, using a limb-tracking system based on a 2-D articulated model and a double-tracking strategy. Its key contribution is that the 2-D model is constrained only by biomechanical knowledge about human bipedal motion, instead of relying on constraints that are linked to a specific activity or camera view. These characteristics make our approach suitable for real visual surveillance applications. Experiments on a set of indoor and outdoor sequences demonstrate the effectiveness of our method in tracking human lower body parts. Moreover, a detailed comparison with current tracking methods is presented.
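For readers unfamiliar with 2-D articulated limb models, the sketch below shows the general idea of a leg chain constrained only by generic joint-angle limits; it is a hypothetical Python illustration, not the authors' tracker, and the segment lengths and angle limits are assumed values.

```python
# A toy 2-D articulated leg: segments are chained by joint angles, and the only
# constraints applied are rough biomechanical angle limits (not activity-specific ones).
import math

SEGMENTS = [("thigh", 0.45), ("shank", 0.42)]                  # lengths in metres (assumed)
ANGLE_LIMITS = {"thigh": (-0.8, 2.1), "shank": (-2.3, 0.0)}    # radians, rough hip/knee ranges

def limb_points(hip_xy, angles):
    """Forward kinematics: 2-D joint positions of the leg chain in image coordinates."""
    x, y = hip_xy
    theta = 0.0                                # measured from the downward vertical
    points = [(x, y)]
    for (name, length), ang in zip(SEGMENTS, angles):
        lo, hi = ANGLE_LIMITS[name]
        theta += min(max(ang, lo), hi)         # clamp to the biomechanical limits
        x += length * math.sin(theta)
        y += length * math.cos(theta)          # image y grows downwards
        points.append((x, y))
    return points

# Hip at the origin, slight hip flexion and knee bend
print(limb_points((0.0, 0.0), [0.3, -0.6]))
```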

Relevance:

10.00%

Publisher:

Abstract:

The anionic speciation of chlorostannate(II) ionic liquids, prepared by mixing 1-alkyl-3-methylimidazolium chloride and tin(II) chloride in various molar ratios, χ(SnCl₂), was investigated in both the solid and liquid states. The room-temperature ionic liquids were investigated by ¹¹⁹Sn NMR spectroscopy, X-ray photoelectron spectroscopy, and viscometry. Crystalline samples were studied using Raman spectroscopy, single-crystal X-ray crystallography, and differential scanning calorimetry. Both liquid and solid systems (crystallised from the melt) contained [SnCl₃]⁻ in equilibrium with Cl⁻ when χ(SnCl₂) < 0.50, [SnCl₃]⁻ in equilibrium with [Sn₂Cl₅]⁻ when χ(SnCl₂) > 0.50, and only [SnCl₃]⁻ when χ(SnCl₂) = 0.50. Tin(II) chloride was found to precipitate when χ(SnCl₂) > 0.63. No evidence was detected for the existence of [SnCl₄]⁻ across the entire range of χ(SnCl₂), although such anions have been reported in the literature for chlorostannate(II) organic salts crystallised from organic solvents. Furthermore, the Lewis acidity of the chlorostannate(II)-based systems, expressed by their Gutmann acceptor number, has been determined as a function of the composition, χ(SnCl₂), revealing, for χ(SnCl₂) > 0.50 samples, Lewis acidity comparable to that of the analogous zinc(II)-based systems. The change in the Lewis basicity of the anion was estimated using ¹H NMR spectroscopy, by comparing the measured chemical shifts of the C-2 hydrogen of the imidazolium ring. Finally, compositions containing free chloride anions (χ(SnCl₂) < 0.50) were found to oxidise slowly in air to form a chlorostannate(IV) ionic liquid containing the [SnCl₆]²⁻ anion.
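The composition-dependent speciation reported above can be restated compactly as the following equilibria (a LaTeX summary of the findings already stated, not additional data):

```latex
\begin{align*}
\chi(\mathrm{SnCl_2}) < 0.50 &: \quad \mathrm{SnCl_2} + \mathrm{Cl^-} \;\rightleftharpoons\; [\mathrm{SnCl_3}]^- \quad \text{(excess } \mathrm{Cl^-} \text{ remains)} \\
\chi(\mathrm{SnCl_2}) = 0.50 &: \quad \text{only } [\mathrm{SnCl_3}]^- \\
\chi(\mathrm{SnCl_2}) > 0.50 &: \quad [\mathrm{SnCl_3}]^- + \mathrm{SnCl_2} \;\rightleftharpoons\; [\mathrm{Sn_2Cl_5}]^-
\end{align*}
```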

Relevance:

10.00%

Publisher:

Abstract:

An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to the final decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in the form of probability intervals, may be too complex for a decision-maker to interpret; we therefore propose to compute a unique indicator of the likelihood of risk, called the confidence index, which explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
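The propagation step described above, blending Monte Carlo simulation with interval analysis, can be sketched as follows. This is a hypothetical Python illustration under a toy, monotone model g, not the paper's methodology; the distribution, the epistemic interval and the threshold are all assumed.

```python
# A minimal sketch of hybrid uncertainty propagation: the aleatory input is sampled by
# Monte Carlo, while the epistemic input is kept as an interval and propagated by taking
# the min/max of the model over its endpoints, yielding a probability interval.
import random

def g(a, e):
    """Toy risk model combining an aleatory input a and an epistemic input e (assumed)."""
    return a * e + 0.5 * a

def hybrid_propagation(n_samples=10_000, epistemic_interval=(0.8, 1.2), threshold=2.0):
    random.seed(0)
    lower_hits = upper_hits = 0
    for _ in range(n_samples):
        a = random.lognormvariate(0.0, 0.4)      # aleatory variable: lognormal (assumed)
        # For this monotone model, the extremes of g over the interval sit at its endpoints.
        values = [g(a, e) for e in epistemic_interval]
        lower_hits += min(values) > threshold    # event occurs for every epistemic value
        upper_hits += max(values) > threshold    # event occurs for some epistemic value
    # The result brackets P(g > threshold) by a [lower, upper] probability interval.
    return lower_hits / n_samples, upper_hits / n_samples

print(hybrid_propagation())
```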

Relevance:

10.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
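As a reminder of the redundant arithmetic mentioned above, the following Python sketch shows a plain carry-save addition of three operands, in which carries are saved rather than propagated; it is a generic textbook illustration, not any of the chip designs discussed.

```python
# Carry-save arithmetic: three operands are reduced to a (sum, carry) pair per bit
# position without propagating carries, so the delay per stage is independent of
# the word length; one final carry-propagate add resolves the result.
def carry_save_add(x, y, z, width=8):
    """Reduce three `width`-bit integers to a redundant (sum, carry) pair."""
    s = c = 0
    for i in range(width):
        a, b, d = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
        s |= (a ^ b ^ d) << i                           # per-bit sum, no carry propagation
        c |= ((a & b) | (a & d) | (b & d)) << (i + 1)   # carries saved, shifted one place left
    return s, c

x, y, z = 23, 45, 17
s, c = carry_save_add(x, y, z)
assert s + c == x + y + z          # a single conventional add recovers the true sum
print(s, c, s + c)
```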

Relevance:

10.00%

Publisher:

Abstract:

This paper presents single-chip FPGA implementations of the Advanced Encryption Standard (AES) algorithm, Rijndael. In particular, the designs utilise look-up tables to implement the entire Rijndael Round function. A comparison is provided between these designs and similar existing implementations. Hardware implementations of encryption algorithms prove much faster than equivalent software implementations, and since there is a need to perform encryption on data in real time, speed is very important. Field Programmable Gate Arrays (FPGAs) are particularly well suited to encryption implementations due to their flexibility and an architecture that can be exploited to accommodate typical encryption transformations. In this paper, a Look-Up Table (LUT) methodology is introduced in which complex and slow operations are replaced by simple LUTs. A LUT-based fully pipelined Rijndael implementation is described which has a pre-placement performance of 12 Gbits/sec; this is 1.2 times faster than an alternative design in which look-up tables are utilised to implement only one of the Round function transformations, and 6 times faster than other previous single-chip implementations. Iterative Rijndael implementations based on the look-up table design approach are also discussed and prove faster than typical iterative implementations.
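The LUT idea can be conveyed in software terms with the Python sketch below, in which the GF(2^8) multiplications of the MixColumns transformation are replaced by precomputed 256-entry tables. This covers only one Round-function transformation and is not the paper's design, which maps the entire Round function into look-up tables on the FPGA.

```python
# Replace slow GF(2^8) multiplications by 2 and 3 (used in AES MixColumns) with
# precomputed lookup tables, so the transformation reduces to table lookups and XORs.

def xtime(b):
    """Multiply by x (i.e. by 2) in GF(2^8) with the AES reduction polynomial 0x11B."""
    return ((b << 1) & 0xFF) ^ (0x1B if b & 0x80 else 0)

MUL2 = [xtime(b) for b in range(256)]        # LUT for multiplication by 2
MUL3 = [xtime(b) ^ b for b in range(256)]    # LUT for multiplication by 3

def mix_column(col):
    """AES MixColumns on one 4-byte column, using only lookups and XOR."""
    a0, a1, a2, a3 = col
    return [
        MUL2[a0] ^ MUL3[a1] ^ a2 ^ a3,
        a0 ^ MUL2[a1] ^ MUL3[a2] ^ a3,
        a0 ^ a1 ^ MUL2[a2] ^ MUL3[a3],
        MUL3[a0] ^ a1 ^ a2 ^ MUL2[a3],
    ]

# Standard MixColumns test column: [db, 13, 53, 45] -> [8e, 4d, a1, bc]
assert mix_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]
```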

Relevance:

10.00%

Publisher:

Abstract:

Web databases are now pervasive. Such a database can be accessed only via its query interface, usually an HTML query form. Extracting Web query interfaces, i.e. creating a formal representation of a query form by extracting the set of query conditions in it, is a critical step in data integration across multiple Web databases. This paper presents a novel approach to extracting Web query interfaces. In this approach, a generic set of query condition rules is created to define query conditions that are semantically equivalent to SQL search conditions. Query condition rules represent the semantic roles that labels and form elements play in query conditions, and how they are hierarchically grouped into the constructs of query conditions. To group labels and form elements in a query form, we explore both their structural proximity in the hierarchy of structures in the query form, which is captured by the tree of nested tags in the HTML code of the form, and their semantic similarity, which is captured by the various short texts used in labels, form elements and their properties. We have implemented the proposed approach, and our experimental results show that it is highly effective.
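A first, purely structural step of such an extraction, pairing label text with the form element that follows it in the HTML tag stream, might look like the Python sketch below. It is a hypothetical illustration, not the paper's rule-based system; semantic matching of the extracted labels against query condition rules would come afterwards, and the sample form is invented.

```python
# Walk the HTML of a query form and pair the most recent label text with each form
# element encountered, i.e. a crude grouping by structural proximity.
from html.parser import HTMLParser

class FormSketch(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pending_label = None      # most recent text seen before a form element
        self.conditions = []           # (label, element name, element type)

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.pending_label = text

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            attrs = dict(attrs)
            self.conditions.append(
                (self.pending_label, attrs.get("name"), attrs.get("type", tag))
            )

html_form = """
<form action="/search">
  <label>Title</label> <input type="text" name="title">
  <label>Max price</label> <input type="number" name="price_max">
  <label>Category</label> <select name="category"><option>Books</option></select>
</form>
"""
parser = FormSketch()
parser.feed(html_form)
print(parser.conditions)
```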

Relevance:

10.00%

Publisher:

Abstract:

Objective: To investigate the association of scavenger receptor class B, member 1 (SCARB1) genetic variants with serum levels of the carotenoids lutein (L) and zeaxanthin (Z) and with macular pigment optical density (MPOD).
Design: A cross-sectional study of healthy adults aged 20 to 70.
Participants: We recruited 302 participants through local advertisement.
Methods: We measured MPOD by customized heterochromatic flicker photometry. Fasting blood samples were taken for serum L and Z measurement by high-performance liquid chromatography and lipoprotein analysis by spectrophotometric assay. Forty-seven single nucleotide polymorphisms (SNPs) across SCARB1 were genotyped using Sequenom technology. Association analyses were performed using PLINK to compare allele and haplotype means, with adjustment for potential confounding and correction for multiple comparisons by permutation testing. Replication analysis was performed in the TwinsUK and Carotenoids in Age-Related Eye Disease Study (CAREDS) cohorts.
Main Outcome Measures: Odds ratios for MPOD area, serum L and Z concentrations associated with genetic variations in SCARB1 and interactions between SCARB1 and gender.
Results: After multiple regression analysis with adjustment for age, body mass index, gender, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, smoking, and dietary L and Z levels, 5 SNPs were significantly associated with serum L concentration and 1 SNP with MPOD (P < 0.01). Only the association between rs11057841 and serum L withstood correction for multiple comparisons by permutation testing (P < 0.01) and replicated in the TwinsUK cohort (P = 0.014). Independent replication was also observed in the CAREDS cohort with rs10846744 (P = 2×10⁻⁴), an SNP in high linkage disequilibrium with rs11057841 (r² = 0.93). No interactions by gender were found. Haplotype analysis revealed no stronger association than obtained with single-SNP analyses.
Conclusions: Our study has identified an association between rs11057841 and serum L concentration (24% increase per T allele) in healthy subjects, independent of potential confounding factors. Our data support further evaluation of the role of SCARB1 in the transport of macular pigment and the possible modulation of age-related macular degeneration risk through combating the effects of oxidative stress within the retina.
Financial Disclosure(s): Proprietary or commercial disclosures may be found after the references. Ophthalmology 2013;120:1632–1640 © 2013 by the American Academy of Ophthalmology.
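The association testing described in the Methods can be illustrated, in simplified form, by the Python sketch below: an allele-dosage regression with covariate adjustment and a permutation-based p-value. The data are simulated, the covariates are only a subset of those used in the study, and this is not the PLINK analysis itself.

```python
# Toy allele-dosage association test: regress serum lutein on minor-allele count with
# covariate adjustment, then correct the p-value by permuting the genotype labels.
import numpy as np

rng = np.random.default_rng(1)
n = 300
allele_count = rng.integers(0, 3, n)                  # 0/1/2 copies of the minor allele
age = rng.normal(45, 12, n)
bmi = rng.normal(26, 4, n)
lutein = 0.05 * allele_count + 0.002 * age + rng.normal(0, 0.1, n)   # simulated effect

def allele_beta(y, g, covars):
    X = np.column_stack([np.ones_like(y), g, covars])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                                     # coefficient on the allele count

covars = np.column_stack([age, bmi])
observed = allele_beta(lutein, allele_count, covars)
perms = [allele_beta(lutein, rng.permutation(allele_count), covars) for _ in range(2000)]
p_perm = np.mean([abs(b) >= abs(observed) for b in perms])
print(f"beta per allele = {observed:.3f}, permutation p = {p_perm:.4f}")
```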

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates sub-integer implementations of the adaptive Gaussian mixture model (GMM) for background/foreground segmentation, to allow the deployment of the method on low-cost/low-power processors that lack a Floating Point Unit (FPU). We propose two novel integer computer arithmetic techniques to update the Gaussian parameters. Specifically, the mean value and the variance of each Gaussian are updated by a redefined and generalised "round" operation that emulates the original updating rules for a large set of learning rates. Weights are represented by counters that are updated following stochastic rules to allow a wider range of learning rates, and the weight trend is approximated by a line or a staircase. We demonstrate that the memory footprint and computational cost of the GMM are significantly reduced, without significantly affecting the performance of background/foreground segmentation.
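The flavour of such an FPU-free update can be conveyed by the Python sketch below, which emulates the floating-point mean update mean += alpha*(x - mean) for a power-of-two learning rate using only integer operations with round-to-nearest. This is a generic illustration, not the paper's redefined "round" operation, and it does not reproduce the stochastic weight counters.

```python
# Integer-only update of a Gaussian mean with learning rate alpha = 2**-k.
def update_mean_int(mean, x, k):
    """Emulate mean += 2**-k * (x - mean) with a rounded (round-to-nearest) shift."""
    diff = x - mean
    half = 1 << (k - 1)                                   # rounding bias
    step = (diff + half) >> k if diff >= 0 else -((-diff + half) >> k)
    return mean + step

mean = 120
for x in (130, 131, 129, 135):            # incoming pixel values (illustrative)
    mean = update_mean_int(mean, x, k=4)  # alpha = 1/16
    print(mean)
```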

Relevance:

10.00%

Publisher:

Abstract:

Structured parallel programming is recognised as a viable and effective means of tackling parallel programming problems. Recently, a set of simple and powerful parallel building blocks (RISC-pb²l) has been proposed to support the modelling and implementation of parallel frameworks. In this work we demonstrate how that same parallel building block set may be used to model both general-purpose parallel programming abstractions, not usually listed in classical skeleton sets, and more specialised domain-specific parallel patterns. We show how an implementation of RISC-pb²l can be realised via the FastFlow framework and present experimental evidence of the feasibility and efficiency of the approach.
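The compositional idea behind such building blocks can be sketched in Python (rather than the C++/FastFlow setting of the paper) as two tiny combinators, a pipeline and a task farm, that are composed into an application; the blocks and the toy workload below are illustrative only.

```python
# Two reusable parallel building blocks composed into a small application:
# a farm replicates a worker over a process pool, a pipeline chains stages.
from multiprocessing import Pool

def pipeline(*stages):
    """Compose stages sequentially: the output of one stage feeds the next."""
    def run(items):
        for stage in stages:
            items = stage(items)
        return items
    return run

def farm(worker, n_workers=4):
    """Replicate a worker over a pool of processes (task-farm building block)."""
    def run(items):
        with Pool(n_workers) as pool:
            return pool.map(worker, items)
    return run

def square(x):            # worker functions must be top-level so they can be pickled
    return x * x

def keep_even(items):     # a sequential filtering stage
    return [x for x in items if x % 2 == 0]

if __name__ == "__main__":
    app = pipeline(farm(square), keep_even)
    print(app(range(10)))
```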

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
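As an illustration of the kind of cost model involved, a generic task-farm model (not necessarily the specific model used for Skel) predicts the service time and speedup for a farm with n_w workers from the per-item times of the emitter, the replicated worker, and the collector:

```latex
T_{\mathrm{farm}}(n_w) \;\approx\; \max\!\left(T_{\mathrm{emit}},\ \frac{T_{\mathrm{worker}}}{n_w},\ T_{\mathrm{collect}}\right),
\qquad
\mathrm{speedup}(n_w) \;\approx\; \frac{T_{\mathrm{worker}}}{T_{\mathrm{farm}}(n_w)}
```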