989 results for Standard map


Relevance: 20.00%

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. First, it is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure). These proofs extend previous complexity results for the problem. Inapproximability results are also derived for trees if the number of states per variable is not bounded. Although the problem is shown to be hard and inapproximable even in very simple scenarios, a new exact algorithm is described that is empirically fast in networks of bounded treewidth and bounded number of states per variable. The same algorithm is used as the basis of a Fully Polynomial Time Approximation Scheme for MAP under such assumptions. Approximation schemes were generally thought to be impossible for this problem, but we show otherwise for classes of networks that are important in practice. The algorithms are extensively tested on well-known networks as well as randomly generated cases to demonstrate their effectiveness.
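
To make the query concrete, here is a minimal brute-force sketch of a partial MAP query on a toy three-node chain A -> B -> C with binary states: the evidence variable is clamped, the non-queried variable is summed out, and the queried variable is maximised. This illustrates the problem definition only, not the paper's algorithm; all probabilities are invented.

```cpp
// Brute-force partial MAP on a toy chain A -> B -> C (binary states).
#include <cstdio>

int main() {
    double pA[2]      = {0.6, 0.4};                 // P(A)
    double pB_A[2][2] = {{0.7, 0.3}, {0.2, 0.8}};   // P(B|A)
    double pC_B[2][2] = {{0.9, 0.1}, {0.4, 0.6}};   // P(C|B)

    const int evidenceC = 1;  // observed evidence: C = 1
    // MAP query over {A}: maximise P(A=a, C=1) = sum_b P(a)P(b|a)P(C=1|b)
    int bestA = -1;
    double bestScore = -1.0;
    for (int a = 0; a < 2; ++a) {
        double score = 0.0;
        for (int b = 0; b < 2; ++b)  // B is neither queried nor observed,
            score += pA[a] * pB_A[a][b] * pC_B[b][evidenceC];  // so sum it out
        if (score > bestScore) { bestScore = score; bestA = a; }
    }
    printf("MAP state of A given C=1: A=%d (score %.4f)\n", bestA, bestScore);
    return 0;
}
```

Brute force enumerates every joint configuration, which is exactly what becomes intractable in the hard cases established above.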

Relevance: 20.00%

Abstract:

This paper strengthens the NP-hardness result for the (partial) maximum a posteriori (MAP) problem in Bayesian networks whose topology is a tree (every variable has at most one parent) and whose variables have cardinality at most three. MAP is the problem of querying the most probable state configuration of some (not necessarily all) of the network variables given evidence. It is demonstrated that the problem remains hard even in such simple networks.

Relevance: 20.00%

Abstract:

This paper presents a new anytime algorithm for the marginal MAP problem in graphical models of bounded treewidth. We show asymptotic convergence and theoretical error bounds for any fixed step. Experiments show that it compares well to a state-of-the-art systematic search algorithm.
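
The abstract does not spell out the algorithm, but the anytime contract it describes (a usable answer plus an error bound after any fixed amount of work) can be illustrated generically. In the sketch below, a toy marginal MAP instance is scanned one configuration at a time; the best incumbent is a lower bound, and since the scores are non-negative and (with no evidence) sum to one, the unscanned mass yields an upper bound, so the gap bounds the error at any fixed step. This is a generic pattern with invented numbers, not the paper's method.

```cpp
// Anytime lower/upper bounds on a toy marginal MAP instance:
// chain A -> B -> C (binary), maximise over {A,C}, sum out B.
#include <cstdio>
#include <algorithm>

int main() {
    double pA[2]      = {0.6, 0.4};
    double pB_A[2][2] = {{0.7, 0.3}, {0.2, 0.8}};
    double pC_B[2][2] = {{0.9, 0.1}, {0.4, 0.6}};

    double incumbent = 0.0, scanned = 0.0;
    for (int a = 0; a < 2; ++a)
        for (int c = 0; c < 2; ++c) {
            double score = 0.0;
            for (int b = 0; b < 2; ++b)  // marginalise B
                score += pA[a] * pB_A[a][b] * pC_B[b][c];
            scanned  += score;                        // mass accounted for so far
            incumbent = std::max(incumbent, score);   // anytime lower bound
            // No unscanned score can exceed the remaining mass (scores sum to 1).
            double upper = std::max(incumbent, 1.0 - scanned);
            printf("after (A=%d,C=%d): LB=%.3f UB=%.3f gap=%.3f\n",
                   a, c, incumbent, upper, upper - incumbent);
        }
    return 0;
}
```

Here the gap closes after two steps because the instance is tiny; the point is only that a valid answer and an error bound are available after any prefix of the work.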

Relevance: 20.00%

Abstract:

The X-linked lymphoproliferative syndrome (XLP) is an inherited immunodeficiency to Epstein-Barr virus infection that has been mapped to chromosome Xq25. Molecular analysis of XLP patients from ten different families identified a small interstitial constitutional deletion in one patient (XLP-D). This deletion, initially defined by a single marker, DF83, known to map to the interval Xq24-q26.1, is nested within a previously reported and much larger deletion in another XLP patient (XLP-739). A cosmid minilibrary was constructed from a single mega-YAC and used to establish a contig encompassing the whole XLP-D deletion and a portion of the XLP-739 deletion. Based on this contig, the size of the XLP-D deletion can be estimated at 130 kb. The identification of this minimal deletion, within which at least a portion of the XLP gene is likely to reside, should greatly facilitate efforts to isolate the gene.

Relevance: 20.00%

Abstract:

The southern industrial rivers (Aire, Calder, Don and Trent) feeding the Humber estuary were routinely monitored for a range of chlorinated micro-organic contaminants at least once a week over a 1.5-year period. Environmental Quality Standards (EQSs) for inland waters were set by the European Economic Community for a limited number (18) of problematic contaminants. The results of the monitoring program for seven classes of chlorinated pollutants on the EQS list are presented in this study. All compounds were detected frequently, with the exception of hexachlorobutadiene (only one detectable measurement out of 280 individual samples). In general, the rivers fell into two classes with respect to their contamination patterns. The Aire and Calder carried higher concentrations of micropollutants than the Don and Trent, with the exception of hexachlorobenzene (HCB). For Σ hexachlorocyclohexane (HCH) isomers (α + γ) and for dieldrin, a number of samples (~5%) exceeded their EQS for both the Aire and Calder. Often, ΣHCH concentrations were just below the EQS level. Levels of p,p'-DDT on occasions approached the EQS for these two rivers, but only one sample (out of 140) exceeded the EQS. No compounds exceeded their EQS levels on the Don and Trent. Analysis of the γ-HCH/α-HCH ratio indicated that the source of HCH for the Don and Trent catchments was primarily lindane (γ-HCH) and, to a lesser extent, technical HCH (a mixture of HCH isomers dominated by α-HCH), while the source(s) for the Aire and Calder had a much higher contribution from technical HCH.
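
The final sentence rests on a simple composition argument: lindane is essentially pure γ-HCH, whereas technical HCH is dominated by the α isomer, so a high γ/α ratio points to lindane as the source. A toy classifier along these lines is sketched below; the cutoff values are illustrative assumptions, not values taken from the study.

```cpp
// Illustrative gamma/alpha HCH ratio source apportionment (assumed cutoffs).
#include <cstdio>

const char* likelySource(double gammaHCH, double alphaHCH) {
    double ratio = gammaHCH / alphaHCH;
    if (ratio > 3.0) return "primarily lindane (gamma-HCH)";      // assumed cutoff
    if (ratio < 1.0) return "strong technical-HCH contribution";  // assumed cutoff
    return "mixed lindane / technical HCH";
}

int main() {
    // Invented concentrations (same units for both isomers).
    printf("Don/Trent-like sample   (g=8, a=1): %s\n", likelySource(8.0, 1.0));
    printf("Aire/Calder-like sample (g=3, a=2): %s\n", likelySource(3.0, 2.0));
    return 0;
}
```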

Relevance: 20.00%

Abstract:

This paper describes an investigation of various shroud bleed slot configurations of a centrifugal compressor using CFD with a manual multi-block structured grid generation method. The compressor under investigation is used in a turbocharger application for a heavy-duty diesel engine of approximately 400 hp. The baseline numerical model has been developed and validated against experimental performance measurements. The influence of the bleed slot flow field across a range of operating conditions between surge and choke has been analysed in detail. The impact of the returning bleed flow on the incidence at the impeller blade leading edge, due to its mixing with the main through-flow, has also been studied. From the baseline geometry, a number of modifications to the bleed slot width have been proposed, and a detailed comparison of the flow characteristics performed. The impact of slot variations on the inlet incidence angle has been investigated, highlighting the improvement in surge and choked-flow capability. In addition, the role of the bleed slot in stabilizing the blade passage flow near surge, by drawing the tip and over-tip vortex flow into the slot, has been considered.

Relevance: 20.00%

Abstract:

As data analytics grows in importance, it is also quickly becoming one of the dominant application domains that require parallel processing. This paper investigates the applicability of OpenMP, the dominant shared-memory parallel programming model in high-performance computing, to the domain of data analytics. We contrast the performance and programmability of key data analytics benchmarks against Phoenix++, a state-of-the-art shared-memory map/reduce programming system. Our study shows that OpenMP outperforms the Phoenix++ system by a large margin for several benchmarks. In other cases, however, the programming model lacks support for this application domain.
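
For readers unfamiliar with how a map/reduce-style analytics kernel looks when written directly in OpenMP, here is a sketch of a histogram kernel (a classic shared-memory map/reduce benchmark) expressed as a plain parallel loop with an OpenMP array-section reduction; it requires OpenMP 4.5 or later. The kernel and data are illustrative, not taken from the paper's benchmark suite.

```cpp
// Histogram with an OpenMP array-section reduction (OpenMP >= 4.5).
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    std::vector<unsigned char> pixels(1 << 24);
    for (auto &p : pixels) p = std::rand() & 0xFF;  // synthetic input

    long hist[256] = {0};
    // Each thread accumulates into a private copy of hist;
    // the copies are summed when the loop completes.
    #pragma omp parallel for reduction(+ : hist[:256])
    for (long i = 0; i < (long)pixels.size(); ++i)
        hist[pixels[i]]++;

    printf("bin 0 count = %ld\n", hist[0]);
    return 0;
}
```

The same kernel in a map/reduce framework would be phrased as a map (emit one count per element) and a reduce (sum the counts per bin); the OpenMP version expresses it directly as a loop over the data.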

Relevance: 20.00%

Abstract:

Introduction
Mild cognitive impairment (MCI) has clinical value in its ability to predict later dementia. A better understanding of cognitive profiles can further help delineate who is most at risk of conversion to dementia. We aimed to (1) examine to what extent the usual MCI subtyping using core criteria corresponds to empirically defined clusters of patients (latent profile analysis [LPA] of continuous neuropsychological data) and (2) compare the two methods of subtyping memory clinic participants in their prediction of conversion to dementia.

Methods
Memory clinic participants (MCI, n = 139) and age-matched controls (n = 98) were recruited. Participants had a full cognitive assessment, and results were grouped (1) according to traditional MCI subtypes and (2) using LPA. MCI participants were followed over approximately 2 years after their initial assessment to monitor for conversion to dementia.

Results
Groups were well matched for age and education. Controls performed significantly better than MCI participants on all cognitive measures. With the traditional analysis, most MCI participants were in the amnestic multidomain subgroup (46.8%) and this group was most at risk of conversion to dementia (63%). From the LPA, a three-profile solution fit the data best. Profile 3 was the largest group (40.3%), the most cognitively impaired, and most at risk of conversion to dementia (68% of the group).

Discussion
LPA provides a useful adjunct in delineating MCI participants most at risk of conversion to dementia and adds confidence to standard categories of clinical inference.
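
LPA on continuous neuropsychological scores is, in essence, a finite mixture model: with Gaussian components and a given number of profiles it coincides with a Gaussian mixture fitted by expectation-maximisation, with the number of profiles chosen by model fit. Below is a deliberately minimal one-dimensional, two-profile EM sketch on invented scores; the study itself fitted multivariate profiles and selected a three-profile solution.

```cpp
// Minimal 1-D, two-profile Gaussian mixture fitted by EM (invented data).
#include <cstdio>
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

double normalPdf(double x, double mu, double var) {
    return std::exp(-(x - mu) * (x - mu) / (2.0 * var)) /
           std::sqrt(2.0 * PI * var);
}

int main() {
    // Invented memory scores: a lower-performing and a higher-performing group.
    std::vector<double> x = {10, 11, 12, 13, 14, 24, 25, 26, 27, 28};
    double w[2] = {0.5, 0.5}, mu[2] = {12, 20}, var[2] = {4, 4};

    for (int it = 0; it < 50; ++it) {
        double n[2] = {0, 0}, sum[2] = {0, 0}, sq[2] = {0, 0};
        for (double xi : x) {            // E-step: profile responsibilities
            double p0 = w[0] * normalPdf(xi, mu[0], var[0]);
            double p1 = w[1] * normalPdf(xi, mu[1], var[1]);
            double r0 = p0 / (p0 + p1), r1 = 1.0 - r0;
            n[0] += r0;        n[1] += r1;
            sum[0] += r0 * xi; sum[1] += r1 * xi;
            sq[0] += r0 * xi * xi; sq[1] += r1 * xi * xi;
        }
        for (int k = 0; k < 2; ++k) {    // M-step: update mixture parameters
            mu[k]  = sum[k] / n[k];
            var[k] = sq[k] / n[k] - mu[k] * mu[k];
            w[k]   = n[k] / x.size();
        }
    }
    printf("profile means: %.1f and %.1f\n", mu[0], mu[1]);
    return 0;
}
```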

Relevance: 20.00%

Abstract:

The area and power consumption of low-density parity-check (LDPC) decoders are typically dominated by embedded memories. To alleviate these high memory costs, this paper exploits the fact that all internal memories of an LDPC decoder are frequently updated with new data. These memory access statistics are exploited by replacing all static standard-cell based memories (SCMs) of a prior-art LDPC decoder implementation with dynamic SCMs (D-SCMs), which are designed to retain data just long enough to guarantee reliable operation. The use of D-SCMs leads to a 44% reduction in the silicon area of the LDPC decoder compared to the use of static SCMs. The low-power LDPC decoder architecture with refresh-free D-SCMs was implemented in a 90 nm CMOS process, and silicon measurements show full functionality and an information-bit throughput of up to 600 Mbps (as required by the IEEE 802.11n standard).

Relevance: 20.00%

Abstract:

Single-component geochemical maps are the most basic representation of spatial elemental distributions and are commonly used in environmental and exploration geochemistry. However, the compositional nature of geochemical data imposes several limitations on how the data should be presented. The problems relate to the constant-sum problem (closure) and to the inherently multivariate, relative information conveyed by compositional data. Well known, for instance, is the tendency of all heavy metals to show lower values in soils with significant contributions of diluting elements (e.g., the quartz dilution effect), or the contrary effect, apparent enrichment in many elements due to removal of potassium during weathering. The validity of classical single-component maps is thus investigated, and reasonable alternatives that honour the compositional character of geochemical concentrations are presented. The first recommended method relies on knowledge-driven log-ratios, chosen to highlight certain geochemical relations or to filter known artefacts (e.g., dilution with SiO2 or volatiles); this is similar to the classical approach of normalising to a single element. The second approach uses so-called log-contrasts, which employ suitable statistical methods (such as classification techniques, regression analysis, principal component analysis, clustering of variables, etc.) to extract potentially interesting geochemical summaries. The caution from this work is that, if a compositional approach is not used, it becomes difficult to guarantee that any identified pattern, trend or anomaly is not an artefact of the constant-sum constraint. In summary, the authors recommend a chain of enquiry that involves searching for the appropriate statistical method to answer the required geological or geochemical question whilst maintaining the integrity of the compositional nature of the data. The required log-ratio transformations should be applied first, followed by the chosen statistical method. Interpreting the results may require a closer working relationship between statisticians, data analysts and geochemists.
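
The quartz-dilution effect and the log-ratio remedy can be shown in a few lines. In the invented example below, sample B is sample A diluted 1:1 with pure quartz: the raw Pb value halves, suggesting a spurious difference between the samples, while the log-ratio of Pb to a reference element is unchanged, because uniform scaling under closure cancels inside the ratio. The element choices and numbers are assumptions for illustration, not data from the paper.

```cpp
// Quartz dilution vs. a knowledge-driven log-ratio (invented data).
#include <cstdio>
#include <cmath>

int main() {
    // Sample A, and sample B = A diluted 1:1 with pure quartz (SiO2):
    // dilution scales every non-quartz component by the same factor.
    double pbA = 40.0, tiA = 4000.0;   // ppm Pb, ppm Ti in sample A
    double pbB = 20.0, tiB = 2000.0;   // ppm Pb, ppm Ti in sample B

    // A raw single-component map sees B as half as "contaminated" as A...
    printf("raw Pb:    A = %.0f ppm, B = %.0f ppm\n", pbA, pbB);

    // ...but the log-ratio to a reference element is identical, because
    // the common dilution factor cancels in the ratio.
    printf("ln(Pb/Ti): A = %.3f, B = %.3f\n",
           std::log(pbA / tiA), std::log(pbB / tiB));
    return 0;
}
```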