54 results for "Apolipoprotéine AI"


Relevance: 10.00%

Abstract:

Eight Duroc × (Landrace × Large White) male pigs housed at a stocking rate of 0.50 m²/pig were subjected to a higher stocking rate of 0.25 m²/pig (higher density, HD) for two 4-day periods over 26 days. Serum and plasma samples were examined using biochemical and proteomic techniques to identify potential biomarkers for monitoring stress due to HD housing. HD-housed pigs showed significant differences (P < 0.001) in total cholesterol and low-density lipoprotein-associated cholesterol, as well as in concentrations of the pig major acute phase protein (Pig-MAP) (P = 0.002). No differences were observed in serum cortisol or in other acute phase proteins such as haptoglobin, C-reactive protein or apolipoprotein A-I. HD individuals also showed an imbalance in redox homeostasis, detected as an increase in the level of oxidized proteins measured as total plasma carbonyl protein content (P < 0.001), with a compensatory increase in the activity of the antioxidant enzyme glutathione peroxidase (P = 0.012). Comparison of the serum proteomes yielded a new potential stress biomarker, identified as actin by mass spectrometry. Cluster analysis of the results indicated that individuals segregated into two groups with different response patterns, suggesting that the stress response depends on individual susceptibility.
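The abstract does not describe the cluster analysis that separated the pigs into two response groups. As a purely illustrative sketch (the data, the choice of k-means, and k = 2 are assumptions, not the paper's method), a minimal pure-Python clustering of biomarker vectors might look like:

```python
import random

def kmeans(points, k=2, iters=50, seed=0):
    """Plain k-means on lists of floats: repeatedly assign each point
    to its nearest centroid, then move each centroid to the mean of
    its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            groups[j].append(p)
        # Recompute centroids; keep the old one if a group is empty.
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else centroids[j]
            for j, g in enumerate(groups)
        ]
    return centroids, groups
```

Applied to hypothetical one-dimensional marker values such as `[[0.0], [0.1], [5.0], [5.1]]`, the two well-separated pairs end up in different groups, mirroring the two response patterns reported above.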

Relevance: 10.00%

Abstract:

There is currently extensive theoretical work on inconsistencies in logic-based systems. Recently, algorithms for identifying inconsistent clauses in a single conjunctive formula have demonstrated that practical application of this work is possible. However, these algorithms have not been extended to full knowledge-base systems, nor applied to real-world knowledge. To address these issues, we propose a new algorithm for finding the inconsistencies in a knowledge base using existing algorithms for finding inconsistent clauses in a formula. An implementation of this algorithm is then presented as an automated tool for finding inconsistencies in a knowledge base and measuring the inconsistency of formulae. Finally, we look at a case study of a network security rule set for exploit detection (QRadar) and suggest how these automated tools can be applied.
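The abstract leaves the algorithms themselves unspecified. To make the core idea concrete, here is a small sketch (not the paper's implementation; the clause encoding and brute-force truth-table check are assumptions standing in for a real SAT solver) of finding minimal unsatisfiable subsets of clauses, the "inconsistent clauses" referred to above:

```python
from itertools import combinations, product

def satisfiable(clauses):
    """Clauses are frozensets of integer literals (positive literal =
    variable true, negative = negated).  Checks satisfiability by
    enumerating all assignments; feasible only for tiny formulae."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product((False, True), repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any((l > 0) == assign[abs(l)] for l in c) for c in clauses):
            return True
    return False

def minimal_inconsistent_subsets(clauses):
    """Enumerate minimal unsatisfiable subsets (MUSes) by growing the
    subset size; a candidate is skipped if it already contains a
    previously found MUS, so only minimal ones are reported."""
    muses = []
    for k in range(1, len(clauses) + 1):
        for sub in combinations(range(len(clauses)), k):
            if any(m <= set(sub) for m in muses):
                continue
            if not satisfiable([clauses[i] for i in sub]):
                muses.append(set(sub))
    return muses
```

For example, with `[frozenset({1}), frozenset({-1}), frozenset({2})]` the only MUS is the pair of contradictory unit clauses `{0, 1}`; counting or weighting such subsets is one common way of "measuring the inconsistency of formulae".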

Relevance: 10.00%

Abstract:

This paper describes an experimental investigation of the pressure dip phenomenon in a conical pile of granular solids. The roles of different deposition parameters, such as the pouring rate, pouring height and deposition jet size, in pressure dip formation were studied. Test results confirmed that the pressure dip is a robust phenomenon in a pile formed by top deposition. When the deposition jet radius is significantly smaller than the final pile radius (i.e. concentrated deposition), a dip developed in the centre, as shown in previous studies. However, when the deposition jet radius is comparable to the final pile radius (i.e. diffuse deposition), the location of the dip moved towards the edge of the deposition jet, with a local maximum pressure developed in the centre. For concentrated deposition, an increase in the pouring rate may deepen the dip and reduce its width, while an increase in the pouring height had only a negligible effect in the studied range. The results suggest that the pressure dip is closely related to the initial location, intensity and form of downslope flows. © 2013 Elsevier Inc. All rights reserved.

Relevance: 10.00%

Abstract:

Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and the introduction of alien species. Existing global databases of species' threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project, and avert, future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of all described species, and more than 1% of the described species within many taxonomic groups, including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still growing, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems - http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015.

Relevance: 10.00%

Abstract:

A credal network is a graphical tool for representation and manipulation of uncertainty, where probability values may be imprecise or indeterminate. A credal network associates a directed acyclic graph with a collection of sets of probability measures; in this context, inference is the computation of tight lower and upper bounds for conditional probabilities. In this paper we present new algorithms for inference in credal networks based on multilinear programming techniques. Experiments indicate that these new algorithms have better performance than existing ones, in the sense that they can produce more accurate results in larger networks.
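The abstract defines inference as computing tight lower and upper bounds for conditional probabilities. As a toy illustration only (not the paper's multilinear-programming algorithms), consider a two-node network A → B with binary variables where each credal set is an interval; since the posterior is monotone in each parameter here, the bounds are attained at interval endpoints and a brute-force vertex enumeration suffices:

```python
from itertools import product

def credal_posterior_bounds(pa_interval, pb_given_a_intervals):
    """Lower/upper bounds on P(A=1 | B=1) for a tiny credal network
    A -> B.  pa_interval is (lo, hi) for P(A=1);
    pb_given_a_intervals maps a in {0, 1} to (lo, hi) for P(B=1|A=a).
    Enumerates every combination of interval endpoints (vertices)."""
    lo, hi = 1.0, 0.0
    for pa, pb1, pb0 in product(pa_interval,
                                pb_given_a_intervals[1],
                                pb_given_a_intervals[0]):
        num = pa * pb1                      # P(A=1, B=1)
        den = num + (1.0 - pa) * pb0        # P(B=1)
        if den > 0:
            post = num / den
            lo, hi = min(lo, post), max(hi, post)
    return lo, hi
```

With point-valued (degenerate) intervals the two bounds collapse to the ordinary Bayesian posterior; widening any interval widens the bounds. Exact vertex enumeration scales exponentially, which is precisely why the paper pursues multilinear programming for larger networks.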

Relevance: 10.00%

Abstract:

Combination rules proposed so far in the Dempster-Shafer theory of evidence, especially Dempster's rule, rely on a basic assumption: the pieces of evidence being combined are on a par, i.e. they play the same role. When a source of evidence is less reliable than another, it can be discounted, after which a symmetric combination operation is still used. In the case of revision, by contrast, the idea is to let an agent's prior knowledge be altered by some input information. The change problem is thus intrinsically asymmetric: assuming the input information is reliable, it should be retained, whilst the prior information should be changed minimally to that effect. Although belief revision is already an important subfield of artificial intelligence, it has so far been little addressed in evidence theory. In this paper, we define the notion of revision for the theory of evidence and propose several different revision rules, called the inner and outer revisions, and a modified adaptive outer revision, which better corresponds to the idea of revision. Properties of these revision rules are also investigated.
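For readers unfamiliar with the rules under discussion, here is a minimal sketch of Dempster's rule of combination (with its conflict normalisation) and of classical discounting of a less reliable source. The encoding of mass functions as dicts from frozensets to masses is an illustrative assumption, not the paper's notation:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of every pair of focal sets,
    assign the product to their intersection, and renormalise by the
    mass lost to empty intersections (the conflict)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

def discount(m, alpha, frame):
    """Shafer discounting: keep a fraction alpha of each mass and move
    the remaining 1 - alpha to the whole frame (total ignorance),
    reflecting partial reliability of the source."""
    out = {s: alpha * v for s, v in m.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out
```

Note how symmetric the combination is: `dempster_combine(m1, m2)` equals `dempster_combine(m2, m1)`, which is exactly the "on a par" assumption that revision, being asymmetric between prior and input, has to abandon.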