127 results for Fuzzy Sets
Abstract:
We undertake a detailed study of the sets of multiplicity in a second countable locally compact group G and of their operator versions. We establish a symbolic calculus for normal completely bounded maps from the space B(L²(G)) of bounded linear operators on L²(G) into the von Neumann algebra VN(G) of G, and use it to show that a closed subset E ⊆ G is a set of multiplicity if and only if the set E* = {(s,t) ∈ G × G : ts⁻¹ ∈ E} is a set of operator multiplicity. Analogous results are established for M₁-sets and M₀-sets. We show that the property of being a set of multiplicity is preserved under various operations, including taking direct products, and establish an Inverse Image Theorem for such sets. We characterise the sets of finite width that are also sets of operator multiplicity, and show that every compact operator supported on a set of finite width can be approximated by sums of rank-one operators supported on the same set. We show that, if G satisfies a mild approximation condition, pointwise multiplication by a given measurable function ψ : G → ℂ defines a closable multiplier on the reduced C*-algebra C*_r(G) of G if and only if Schur multiplication by the function N(ψ) : G × G → ℂ, given by N(ψ)(s,t) = ψ(ts⁻¹), is a closable operator when viewed as a densely defined linear map on the space of compact operators on L²(G). Similar results are obtained for multipliers on VN(G).
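To make the construction of N(ψ) and of E* concrete, here is a minimal sketch for the finite cyclic group ℤ/n, written additively so that ts⁻¹ becomes t − s. The choice of group, the function names, and the sample set E are assumptions for illustration only; the abstract concerns general second countable locally compact groups.

```python
# Illustrative only: G = Z/n (additive), so the abstract's ts^{-1} is t - s mod n.
# N(psi)(s, t) = psi(t - s) lifts a function on G to a kernel on G x G;
# Schur multiplication is the entrywise product with that kernel.
import numpy as np

n = 6  # order of the cyclic group Z/n

def N(psi):
    """Return the n x n matrix with entries N(psi)[s, t] = psi((t - s) mod n)."""
    return np.array([[psi((t - s) % n) for t in range(n)] for s in range(n)])

def schur_multiply(psi, A):
    """Schur (entrywise) multiplication of the operator A by the kernel N(psi)."""
    return N(psi) * A

# When psi is the indicator of a subset E of G, N(psi) is the indicator
# of E* = {(s, t) : t - s in E}.
E = {1, 2}  # hypothetical subset, for illustration
psi = lambda x: 1.0 if x in E else 0.0
A = np.random.default_rng(0).standard_normal((n, n))
print(schur_multiply(psi, A))
```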
Abstract:
Kuznetsov independence of variables X and Y means that, for any pair of bounded functions f(X) and g(Y), E[f(X)g(Y)] = E[f(X)] × E[g(Y)], where E[·] denotes interval-valued expectation and × denotes interval multiplication. We present properties of Kuznetsov independence for several variables, and connect it with other concepts of independence in the literature; in particular we show that strong extensions are always included in sets of probability distributions whose lower and upper expectations satisfy Kuznetsov independence. We introduce an algorithm that computes lower expectations subject to judgments of Kuznetsov independence by mixing column generation techniques with nonlinear programming. Finally, we define a concept of conditional Kuznetsov independence, and study its graphoid properties.
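As a concrete illustration of the interval arithmetic involved, the sketch below implements standard interval multiplication and evaluates the right-hand side of the Kuznetsov condition for two hypothetical interval expectations. The intervals and names are illustrative, not taken from the paper.

```python
# Standard interval multiplication: [a, b] x [c, d] is the interval spanned
# by the four endpoint products. The Kuznetsov condition requires the
# interval E[f(X)g(Y)] to equal E[f(X)] x E[g(Y)] computed this way.
from itertools import product

def interval_multiply(I, J):
    """Interval product of I = (a, b) and J = (c, d)."""
    endpoint_products = [x * y for x, y in product(I, J)]
    return (min(endpoint_products), max(endpoint_products))

# Hypothetical interval expectations E[f(X)] = [-1, 2] and E[g(Y)] = [3, 4]:
Ef, Eg = (-1.0, 2.0), (3.0, 4.0)
print(interval_multiply(Ef, Eg))  # (-4.0, 8.0)
```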
Abstract:
Purpose of review: Appropriate selection and definition of outcome measures are essential for clinical trials to be maximally informative. Core outcome sets (agreed, standardized collections of outcomes to be measured and reported in all trials in a specific clinical area) were developed in response to established inconsistencies in trial outcome selection. This review discusses the rationale for, and methods of, core outcome set development, as well as current initiatives in critical care.
Recent findings: Recent systematic reviews of reported outcomes and measurement instruments relevant to the critically ill highlight inconsistencies in outcome selection, definition, and measurement, thus establishing the need for core outcome sets. Current critical care initiatives include development of core outcome sets for trials aimed at reducing mechanical ventilation duration; rehabilitation following critical illness; long-term outcomes in acute respiratory failure; and epidemic and pandemic studies of severe acute respiratory infection.
Summary: Development and utilization of core outcome sets for studies relevant to the critically ill are in their infancy compared with other specialties. Nevertheless, core outcome set development frameworks and guidelines are available, several sets are at various stages of development, and there is strong support from international investigator-led collaborations, including the International Forum for Acute Care Trialists.
Abstract:
This paper proposes an efficient learning mechanism for building fuzzy rule-based systems through the construction of sparse least-squares support vector machines (LS-SVMs). In addition to significantly reduced computational complexity in model training, the resultant LS-SVM-based fuzzy system is sparser while offering satisfactory generalization capability over unseen data. It is well known that LS-SVMs hold a computational advantage over conventional SVMs in model training; however, model sparseness is lost, which is the main drawback of LS-SVMs and remains an open problem. To tackle the nonsparseness issue, a new regression alternative to the Lagrangian solution of the LS-SVM is first presented. A novel efficient learning mechanism is then proposed to extract a sparse set of support vectors for generating fuzzy IF-THEN rules. This mechanism works in a stepwise subset-selection manner, with a forward expansion phase and a backward exclusion phase in each selection step. The implementation is computationally very efficient thanks to a few key techniques that avoid matrix inversion and thereby accelerate the training process; this efficiency is confirmed by a detailed computational complexity analysis. As a result, the proposed approach not only achieves sparseness in the resultant LS-SVM-based fuzzy systems but also significantly reduces the computational effort of model training. Three experimental examples demonstrate the effectiveness and efficiency of the proposed learning mechanism and the sparseness of the obtained LS-SVM-based fuzzy systems, in comparison with other SVM-based learning techniques.
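The forward-expansion/backward-exclusion pattern can be illustrated with a simplified greedy kernel-column selection. The sketch below is only an analogy for the selection loop: it uses a plain least-squares fit rather than the paper's LS-SVM formulation, and it does not reproduce the inverse-free update techniques. All names, kernels, and parameter values are illustrative.

```python
# Sketch of stepwise subset selection: forward expansion adds the candidate
# support vector that most reduces the squared error; backward exclusion then
# drops earlier picks whose removal does not hurt the fit. Not the paper's
# algorithm; a simplified analogy of the selection pattern.
import numpy as np

def rbf(X, Z, gamma=1.0):
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def fit_residual(K_sel, y):
    """Least-squares fit of y on the selected kernel columns; return SSE."""
    w, *_ = np.linalg.lstsq(K_sel, y, rcond=None)
    r = y - K_sel @ w
    return float(r @ r)

def select_support_vectors(X, y, n_select=5, tol=1e-6):
    K = rbf(X, X)
    selected = []
    for _ in range(n_select):
        # Forward expansion: add the column giving the largest error reduction.
        best, best_err = None, np.inf
        for j in range(len(X)):
            if j in selected:
                continue
            err = fit_residual(K[:, selected + [j]], y)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
        # Backward exclusion: drop any earlier pick that is now redundant.
        for j in list(selected[:-1]):
            rest = [i for i in selected if i != j]
            if fit_residual(K[:, rest], y) <= best_err + tol:
                selected.remove(j)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
print(select_support_vectors(X, y))
```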
Abstract:
We initiate the study of sets of p-multiplicity in locally compact groups and their operator versions. We show that a closed subset E of a second countable locally compact group G is a set of p-multiplicity if and only if E* = {(s,t) : ts⁻¹ ∈ E} is a set of operator p-multiplicity. We exhibit examples of sets of p-multiplicity, establish preservation properties for unions and direct products, and prove a p-version of the Stone–von Neumann Theorem.
Abstract:
BACKGROUND: Core outcome sets can increase the efficiency and value of research and, as a result, a growing number of studies aim to develop core outcome sets (COS). However, the credibility of a COS depends both on the use of sound methodology in its development and on clear and transparent reporting of the processes adopted. To date, there is no guideline for reporting COS studies. The aim of this programme of research is to develop a reporting guideline for studies developing COS and to highlight some of the important methodological considerations in the process.
METHODS/DESIGN: The study will include an item-generation stage for the reporting guideline; the resulting items will then be used in a Delphi study. The Delphi study is anticipated to include two rounds. The first round will ask stakeholders to score the items listed and to add any new items they think are relevant. In the second round, participants will be shown the distribution of scores for each stakeholder group separately and asked to re-score. A final consensus meeting will be held with an expert panel and stakeholder representatives to review the guideline item list. Following the consensus meeting, a reporting guideline will be drafted, and review and testing will be undertaken until the guideline is finalised. The final outcome will be the COS-STAR (Core Outcome Set-STAndards for Reporting) guideline for studies developing COS and a supporting explanatory document.
DISCUSSION: To assess the credibility and usefulness of a COS, readers of a COS development report need complete, clear and transparent information on its methodology and proposed core set of outcomes. The COS-STAR guideline will potentially benefit all stakeholders in COS development: COS developers, COS users (e.g. trialists and systematic reviewers), journal editors, policy-makers and patient groups.
Abstract:
Some reasons for registering trials might be considered self-serving, such as satisfying the requirements of a journal in which the researchers wish to publish their eventual findings, or publicising the trial to boost recruitment. Registry entries also help others, including systematic reviewers, to know about ongoing or unpublished studies, and contribute to reducing research waste by making clear what studies are under way. Other sources of research waste include inconsistency in outcome measurement across trials in the same area, missing data on important outcomes from some trials, and selective reporting of outcomes. One way to reduce this waste is through the use of core outcome sets: standardised sets of outcomes for research in specific areas of health and social care. These do not restrict the outcomes that may be measured, but provide the minimum to include if a trial is to be of the most use to potential users. We propose that trial registries, such as ISRCTN, encourage researchers to note their use of a core outcome set in their entry. This will help people searching for trials and those worried about selective reporting in closed trials. Trial registries can facilitate these efforts to make new trials as useful as possible and to reduce waste. The outcomes section in the entry could prompt the researcher to consider using a core outcome set, and facilitate the specification of that core outcome set and its component outcomes through linking to the original core outcome set. In doing this, registries will contribute to the global effort to ensure that trials answer important uncertainties, can be brought together in systematic reviews, and better serve their ultimate aim of improving health and well-being through improving health and social care.
Abstract:
Although visual surveillance has emerged as an effective technology for public security, privacy has become an issue of great concern in the transmission and distribution of surveillance videos. For example, personal facial images should not be browsed without permission. To cope with this issue, face image scrambling has emerged as a simple solution for privacy-related applications. Consequently, online facial biometric verification needs to be carried out in the scrambled domain, bringing a new challenge to face classification. In this paper, we investigate face verification in the scrambled domain and propose a novel scheme to handle this challenge. In our proposed method, to make feature extraction from scrambled face images robust, a biased random-subspace sampling scheme is applied to construct fuzzy decision trees from randomly selected features, and a fuzzy forest decision is then obtained by combining all fuzzy tree decisions using fuzzy memberships. In our experiments, we first estimated the optimal parameters for the construction of the random forest, and then applied the optimized model to benchmark tests on three publicly available face datasets. The experimental results validate that our proposed scheme copes robustly with the challenging tests in the scrambled domain and achieves improved accuracy across all tests, making our method a promising candidate for emerging privacy-related facial biometric applications.
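The random-subspace ensemble idea can be sketched as follows: train decision trees on randomly sampled feature subsets and combine their per-class membership scores by averaging. The paper's biased subspace sampling and fuzzy decision trees are replaced here by plain uniform sampling and scikit-learn's probability estimates, so this is an analogy of the combination scheme, not the paper's method; all names and parameters are illustrative.

```python
# Random-subspace ensemble sketch: each tree sees a random feature subset;
# predictions are soft-voted by averaging class-membership scores, a crude
# stand-in for combining fuzzy memberships across fuzzy trees.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_subspace_ensemble(X, y, n_trees=25, subspace_dim=8, seed=0):
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_trees):
        feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)
        tree = DecisionTreeClassifier(random_state=0).fit(X[:, feats], y)
        ensemble.append((feats, tree))
    return ensemble

def predict_memberships(ensemble, X):
    """Average per-class membership scores over all trees (soft voting)."""
    probs = [tree.predict_proba(X[:, feats]) for feats, tree in ensemble]
    return np.mean(probs, axis=0)

# Toy usage: random "scrambled" feature vectors stand in for face images.
rng = np.random.default_rng(1)
X, y = rng.standard_normal((200, 64)), rng.integers(0, 2, 200)
model = random_subspace_ensemble(X, y)
print(predict_memberships(model, X[:5]))
```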