Abstract:
Belief merging is an important but difficult problem in Artificial Intelligence, especially when the sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic that is powerful for handling inconsistency and dealing with uncertainty. These operators typically produce a possibilistic knowledge base, i.e., a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize the basic postulates for propositional merging given in [11]; these postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the knowledge base that results from our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish its advantage over existing semantic merging operators in the propositional case.
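The lexicographic policy at the heart of such an operator can be made concrete. Below is a minimal sketch, assuming formulas are encoded as Python predicates over truth assignments and that violations are aggregated by counting them per weight level, highest weight first; the paper's operator and its postulates are more general than this toy.

```python
# Minimal sketch: lexicographic selection of models when merging weighted bases.
from itertools import product

def lex_merge(bases, constraint, atoms):
    """bases: list of possibilistic bases, each a list of (predicate, weight).
    constraint: predicate encoding the integrity constraints.
    Returns the assignments (models of the merged classical base)."""
    levels = sorted({w for base in bases for _, w in base}, reverse=True)
    best_vec, best_models = None, []
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if not constraint(v):
            continue
        # Count violated formulas per weight level, highest level first;
        # Python tuple comparison is lexicographic, so fewer violations at
        # a higher weight always dominates.
        vec = tuple(sum(1 for base in bases for f, w in base
                        if w == level and not f(v))
                    for level in levels)
        if best_vec is None or vec < best_vec:
            best_vec, best_models = vec, [v]
        elif vec == best_vec:
            best_models.append(v)
    return best_models

# Two toy bases with conflicting weighted beliefs over atoms p, q.
bases = [[(lambda v: v['p'], 0.9), (lambda v: v['q'], 0.5)],
         [(lambda v: not v['q'], 0.7)]]
print(lex_merge(bases, lambda v: True, ['p', 'q']))  # -> [{'p': True, 'q': False}]
```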
Abstract:
In this article, we focus on the analysis of competitive gene set methods for detecting the statistical significance of pathways from gene expression data. Our main result is to demonstrate that some of the most frequently used gene set methods, GSEA, GSEArot and GAGE, are so severely influenced by the filtering of the data that the analysis is no longer reconcilable with the principles of statistical inference, in the worst case rendering the obtained results meaningless. A possible consequence is that these methods can increase their power through the addition of unrelated data and noise. Our results are obtained within a bootstrapping framework that allows a rigorous assessment of the robustness of results and enables power estimates. Our results indicate that when using competitive gene set methods, it is imperative to apply a stringent gene filtering criterion. However, even appropriate gene filtering is not enough to ensure the statistical soundness of GSEA, GSEArot and GAGE for expression data from chips that do not provide genome-scale coverage of the expression values of all mRNAs. For this reason, we strongly advise against using GSEA, GSEArot and GAGE on such data sets in biomedical and clinical studies.
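To make the filtering critique concrete, here is a minimal sketch of a competitive test inside a bootstrap loop, assuming a simple mean-|t| set statistic (the actual GSEA, GSEArot and GAGE statistics are more elaborate). The shared feature is the competitive null, which draws random gene sets from the measured background, so filtering the background, or padding it with unrelated genes, changes the null distribution itself.

```python
# Minimal sketch: a competitive gene set test with bootstrap power estimation.
import numpy as np

rng = np.random.default_rng(0)

def competitive_pvalue(t_stats, set_idx, n_perm=200):
    """Compare the set's mean |t| against random same-size sets drawn from
    the background of measured genes (the competitive null)."""
    observed = np.abs(t_stats[set_idx]).mean()
    null = np.array([np.abs(rng.choice(t_stats, size=len(set_idx),
                                       replace=False)).mean()
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

def bootstrap_power(expr, labels, set_idx, n_boot=50, alpha=0.05):
    """Resample samples (columns) with replacement; report how often the
    competitive test rejects at level alpha."""
    hits = 0
    for _ in range(n_boot):
        idx = rng.choice(expr.shape[1], size=expr.shape[1], replace=True)
        x = expr[:, idx][:, labels[idx] == 0]
        y = expr[:, idx][:, labels[idx] == 1]
        t = (x.mean(1) - y.mean(1)) / np.sqrt(x.var(1) / x.shape[1]
                                              + y.var(1) / y.shape[1])
        hits += competitive_pvalue(t, set_idx) < alpha
    return hits / n_boot

# Toy data: 500 background genes, 20 samples, first 25 genes form the set.
expr = rng.normal(size=(500, 20))
labels = np.repeat([0, 1], 10)
print(bootstrap_power(expr, labels, set_idx=np.arange(25)))
```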
Abstract:
With the rapid expansion of the internet and the increasing demand on Web servers, many techniques have been developed to overcome the performance limitations of server hardware. Web server mirroring is one such technique: a number of servers carrying the same "mirrored" set of services are deployed, and client requests are distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored web servers. The architecture was designed adopting the NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
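The request-distribution idea underneath any such architecture can be sketched in a few lines; the mirror host names and the least-connections policy below are illustrative placeholders, not elements of the reference architecture itself.

```python
# Minimal sketch: a dispatcher spreading requests over mirrored servers.
import itertools

class Dispatcher:
    def __init__(self, mirrors):
        self.mirrors = list(mirrors)
        self.active = {m: 0 for m in self.mirrors}  # open connections per mirror
        self._rr = itertools.cycle(self.mirrors)    # round-robin iterator

    def dispatch(self, request, policy="least_connections"):
        """Pick a mirror for the request and record the open connection."""
        if policy == "least_connections":
            mirror = min(self.mirrors, key=lambda m: self.active[m])
        else:  # plain round robin
            mirror = next(self._rr)
        self.active[mirror] += 1
        return mirror  # the front end would forward the request here

    def complete(self, mirror):
        self.active[mirror] -= 1  # connection closed

d = Dispatcher(["mirror-a.example.com", "mirror-b.example.com",
                "mirror-c.example.com"])
for i in range(6):
    print(d.dispatch(f"GET /page/{i}"))  # cycles evenly while loads are equal
```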
Abstract:
In spite of the controversy they have generated, neutral models provide ecologists with powerful tools for making dynamic predictions about beta-diversity in ecological communities. By noting when and how these predictions are or are not met, ecologists can gain insight into the assembly rules operating in nature. This is particularly valuable for groups of organisms that are challenging to study under natural conditions (e.g., bacteria and fungi). Here, we focused on arbuscular mycorrhizal fungal (AMF) communities and performed an extensive literature search, synthesizing 19 data sets that met the minimal requirements for constructing a null hypothesis in terms of the community dissimilarity expected under neutral dynamics. To this end, we calculated the first estimates of neutral parameters for several AMF communities from different ecosystems. Communities were shown either to be consistent with neutrality or to diverge or converge with respect to the levels of compositional dissimilarity expected under neutrality. These data support the hypothesis that divergence occurs in systems where the effect of limited dispersal is overwhelmed by anthropogenic disturbance or extreme biological and environmental heterogeneity, whereas communities converge when systems have the potential for niche divergence within a relatively homogeneous set of environmental conditions. In the cases consistent with neutrality, the sampling designs may have covered relatively homogeneous environments in which dispersal limitation overwhelmed the minor differences among AMF taxa that would otherwise lead to environmental filtering. Using neutral models, we showed for the first time for a soil microbial group the conditions under which different assembly processes may produce different patterns of beta-diversity. Our synthesis is an important step in showing how applying general ecological theories to a model microbial taxon can shed light on the assembly and ecological dynamics of communities.
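The logic of a neutral null for community dissimilarity can be caricatured as follows; the immigration probability m, the community sizes and the Bray-Curtis metric are toy stand-ins for the neutral parameters and dissimilarity measures actually fitted in such analyses.

```python
# Minimal sketch: dissimilarity expected among neutrally assembled communities.
import numpy as np

rng = np.random.default_rng(1)

def neutral_local(meta_freqs, n_individuals, m):
    """Assemble a local community: each individual immigrates from the
    metacommunity with probability m, else copies a local resident."""
    local = []
    for _ in range(n_individuals):
        if not local or rng.random() < m:
            local.append(rng.choice(len(meta_freqs), p=meta_freqs))
        else:
            local.append(rng.choice(local))
    return np.bincount(local, minlength=len(meta_freqs))

def bray_curtis(a, b):
    return 1 - 2 * np.minimum(a, b).sum() / (a.sum() + b.sum())

meta = rng.dirichlet(np.ones(30))  # 30 hypothetical AMF taxa
null = [bray_curtis(neutral_local(meta, 200, m=0.1),
                    neutral_local(meta, 200, m=0.1)) for _ in range(200)]
# Observed dissimilarity above this band suggests divergence (disturbance,
# strong heterogeneity); below it suggests convergence (niche filtering).
print(np.percentile(null, [2.5, 97.5]))
```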
Abstract:
Recently, two fast selective encryption methods for context-adaptive variable-length coding and context-adaptive binary arithmetic coding in H.264/AVC were proposed by Shahid et al. In this paper, we demonstrate that these two methods are not as efficient as encrypting only the sign bits of nonzero coefficients. Experimental results show that without encrypting the sign bits of nonzero coefficients, the two methods cannot provide a perceptual scrambling effect. If a much stronger scrambling effect is required, intra-prediction modes and the sign bits of motion vectors can be encrypted together with the sign bits of nonzero coefficients. For practical applications, the encryption scheme should be customized to the user's requirements on perceptual scrambling and computational cost. We therefore propose a tunable encryption scheme for H.264/AVC that combines these three methods, together with a simple control mechanism for adjusting the control factors, which simplifies implementation and reduces computational cost. Experimental results show that the scheme provides different scrambling levels by adjusting three control factors, with no or very little impact on compression performance. The proposed scheme runs in real time and its computational cost is minimal. We also discuss the security of the scheme: it is secure against the replacement attack when all three control factors are set to one.
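The cheapest of the three combined methods, sign-bit scrambling, can be sketched as follows; the SHA-256 counter-mode keystream is a placeholder for a proper cipher, and a control factor in the full scheme would govern how many of these flips (and which syntax elements) are applied.

```python
# Minimal sketch: keyed sign-bit scrambling of nonzero coefficients.
import hashlib

def keystream_bits(key: bytes, n: int):
    """Deterministic pseudorandom bits from a keyed hash in counter mode
    (a placeholder for a real cipher keystream)."""
    bits, counter = [], 0
    while len(bits) < n:
        digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for byte in digest:
            bits.extend((byte >> i) & 1 for i in range(8))
        counter += 1
    return bits[:n]

def scramble_signs(coeffs, key: bytes):
    """Flip the sign of each nonzero coefficient whose key bit is 1.
    Applying the same call again undoes it, so decryption is identical."""
    nonzero = [i for i, c in enumerate(coeffs) if c != 0]
    bits = keystream_bits(key, len(nonzero))
    out = list(coeffs)
    for bit, i in zip(bits, nonzero):
        if bit:
            out[i] = -out[i]
    return out

block = [12, -3, 0, 0, 5, 0, -1, 2]  # toy quantized coefficients
enc = scramble_signs(block, b"session-key")
print(enc, scramble_signs(enc, b"session-key") == block)  # round-trips: True
```

Because only sign bits change, the coded stream stays format-compliant and the bit rate is essentially unchanged, which is what keeps the computational and compression overhead of this family of methods minimal.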
Abstract:
Landfills are the primary option for waste disposal all over the world. Most landfill sites are old and were not engineered to prevent contamination of the underlying soil and groundwater by toxic leachate. Pollutants from landfill leachate have an accumulative, detrimental effect on ecology and food chains, leading to carcinogenic effects, acute toxicity and genotoxicity in human beings. Managing this highly toxic leachate is a challenging problem for regulatory authorities, who have set specific limits on contaminant concentrations in treated leachate prior to its disposal into the environment, to ensure minimal environmental impact. Leachate management proceeds in stages: monitoring of leachate formation and flow into the environment, identification of the associated hazards, and treatment prior to disposal. This review focuses on: (i) leachate composition, (ii) plume migration, (iii) contaminant fate, (iv) leachate plume monitoring techniques, (v) risk assessment techniques, hazard rating methods and mathematical modeling, and (vi) recent innovations in leachate treatment technologies. Because of seasonal fluctuations in leachate composition, flow rate and volume, however, management approaches cannot be stereotyped: every scenario is unique, and the strategy must vary accordingly. This paper lays out the options that inform the choice of the best management strategy for a given site.
Abstract:
Reliable detection of JAK2-V617F is critical for the accurate diagnosis of myeloproliferative neoplasms (MPNs); in addition, sensitive mutation-specific assays can be applied to monitor disease response. However, there has been no consistent approach to JAK2-V617F detection, and assays vary markedly in performance, which affects clinical utility. We therefore established a network of 12 laboratories from seven countries to systematically evaluate nine different DNA-based quantitative PCR (qPCR) assays, including those in widespread clinical use. Seven quality-control rounds involving over 21,500 qPCR reactions were undertaken using centrally distributed cell-line dilutions and plasmid controls. The two best-performing assays were tested on normal blood samples (n=100) to evaluate assay specificity, followed by analysis of serial samples from 28 patients transplanted for JAK2-V617F-positive disease. The most sensitive assay, which performed consistently across a range of qPCR platforms, predicted outcome following transplant, detecting the mutant allele a median of 22 weeks (range 6-85 weeks) before relapse. Four of seven patients achieved molecular remission following donor lymphocyte infusion, indicative of a graft-versus-MPN effect. This study has established a robust, reliable assay for sensitive JAK2-V617F detection, suitable for assessing response in clinical trials, predicting outcome, and guiding the management of patients undergoing allogeneic transplantation.
Abstract:
This paper uses fuzzy-set ideal type analysis to assess the conformity of European leave regulations to four theoretical ideal-typical divisions of labour: male breadwinner, caregiver parity, universal breadwinner and universal caregiver. In contrast to the majority of previous studies, this analysis focuses on the extent to which leave regulations promote gender equality in the family and the transformation of traditional gender roles. The results demonstrate, first, that European countries cluster into five models that only partly coincide with geographical proximity. Second, none of the countries considered constitutes a universal caregiver model, while the male breadwinner ideal continues to provide the normative reference point for parental leave regulations in a large number of European states. Finally, we witness a growing emphasis at the national and EU levels on the universal breadwinner ideal, which leaves gender inequality in unpaid work unproblematized.
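The mechanics of fuzzy-set ideal type analysis reduce to calibrated set memberships combined with fuzzy operators: intersection as min, negation as 1 - x. The two dimensions below (CARE and EQUAL) and the mapping of their four combinations onto the ideal types are hypothetical stand-ins for the article's actual calibration of leave regulations.

```python
# Minimal sketch: scoring countries against four ideal types with fuzzy logic.
countries = {  # hypothetical calibrated memberships in [0, 1]
    "A": {"CARE": 0.8, "EQUAL": 0.2},
    "B": {"CARE": 0.7, "EQUAL": 0.9},
    "C": {"CARE": 0.2, "EQUAL": 0.3},
}

def ideal_types(CARE, EQUAL):
    """CARE: leave supports full-time parental care at home.
    EQUAL: leave pushes fathers into an equal share of caregiving.
    Intersection = min, negation = 1 - x (standard fuzzy operators)."""
    return {
        "male breadwinner":      min(CARE, 1 - EQUAL),
        "caregiver parity":      min(CARE, EQUAL),
        "universal breadwinner": min(1 - CARE, 1 - EQUAL),
        "universal caregiver":   min(1 - CARE, EQUAL),
    }

for name, dims in countries.items():
    scores = ideal_types(**dims)
    best = max(scores, key=scores.get)
    # Membership above 0.5 marks conformity to the ideal type.
    print(name, best, round(scores[best], 2))
```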
Abstract:
There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these measures are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to establish their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on the related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms for finding MISes based on a MUS generalization approach and a new, optimized MUS transformation approach. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus on randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances, and that they represent a practical and intuitive tool for inconsistency handling.
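The MIS side of the pipeline can be sketched in miniature. The brute-force consistency check below is exponential and stands in for the MUS-based algorithms that make mimus practical on large instances; with the MISes in hand, the simplest measure is MI(K), the number of minimal inconsistent subsets.

```python
# Minimal sketch: enumerate minimal inconsistent subsets by truth-table checks.
from itertools import combinations, product

def consistent(formulas, atoms):
    """True iff some truth assignment satisfies every formula."""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(atoms)))

def mises(kb, atoms):
    found = []
    for r in range(1, len(kb) + 1):  # increasing size guarantees minimality
        for subset in combinations(range(len(kb)), r):
            if any(set(m) <= set(subset) for m in found):
                continue  # contains a known MIS, so it cannot be minimal
            if not consistent([kb[i] for i in subset], atoms):
                found.append(subset)
    return found

atoms = ["p", "q"]
kb = [lambda v: v["p"],                     # p
      lambda v: not v["p"],                 # not p
      lambda v: v["q"],                     # q
      lambda v: not v["p"] or not v["q"]]   # not p or not q
ms = mises(kb, atoms)
print(len(ms), ms)  # MI(K) = 2: MISes {p, not p} and {p, q, not p or not q}
```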