32 results for Random-set theory
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.
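A minimal numerical sketch of the point-versus-set distinction described above (the data-generating process and all numbers are invented for illustration): when some outcomes are unobserved, the population mean is identified only up to an interval, the simplest example of an identified set.

```python
import numpy as np

# Hypothetical illustration: worst-case (Manski-style) bounds for a mean
# when some outcomes are missing. The identified object is an interval
# (a set), not a point.
rng = np.random.default_rng(0)
y = rng.uniform(0, 1, size=1000)      # outcomes known to lie in [0, 1]
observed = rng.random(1000) < 0.7     # roughly 70% of outcomes observed

p = observed.mean()                   # observation probability
y_obs_mean = y[observed].mean()       # mean among observed outcomes
# Missing outcomes could be anywhere in [0, 1], so:
lower = y_obs_mean * p + 0.0 * (1 - p)
upper = y_obs_mean * p + 1.0 * (1 - p)
print(f"identified set for E[y]: [{lower:.3f}, {upper:.3f}]")
```

The width of the interval equals the missing-data share, so more missingness means a larger identified set.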
Relative predicativity and dependent recursion in second-order set theory and higher-order theories
Abstract:
This article reports that a certain robustness of the notions of predicativity and of autonomous progression breaks down if, as the given infinite totality, we choose a mathematical entity other than the traditional ω. Namely, the equivalence between the normal transfinite recursion scheme and the new dependent transfinite recursion scheme, which does hold in the context of subsystems of second-order number theory, fails in the context of subsystems of second-order set theory where the universe V of sets is treated as the given totality (and likewise in the contexts of (n+3)-th order number or set theories, where the class of all (n+2)-th order objects is treated as the given totality).
Abstract:
We discuss several ontological properties of explicit mathematics and operational set theory: global choice, decidable classes, totality and extensionality of operations, function spaces, class and set formation via formulas that contain the definedness predicate and applications.
Abstract:
We introduce a version of operational set theory, OST−, without a choice operation, which has a machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke–Platek set theory KP with infinity are pairwise Π1 equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make "a detour via intuitionistic theories". The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new way of constructing models for classical theories and could be called the third kind of model construction ever used that is non-trivial at the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability models.
Abstract:
Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Now, estimating such a level set—and not solely its volume—and quantifying uncertainties on it are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
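A toy sketch of the excursion-set idea, with an invented posterior mean and standard deviation standing in for a fitted Gaussian process (this is not the paper's estimator): thresholding the coverage function p(x) = P(f(x) > T) at 1/2 gives a simple plug-in set estimate of the level set.

```python
import numpy as np
from scipy.stats import norm

# Assumed toy posterior: mean sin(2*pi*x) and constant std dev 0.2
# stand in for a Kriging/GP posterior over [0, 1].
x = np.linspace(0, 1, 201)
mean = np.sin(2 * np.pi * x)
sd = 0.2 * np.ones_like(x)
T = 0.5                                      # exceedance threshold

# Coverage function of the random excursion set {x : f(x) > T}:
coverage = norm.sf(T, loc=mean, scale=sd)    # P(f(x) > T) pointwise
# Plug-in set estimate: points more likely in than out of the set.
est_set = coverage >= 0.5
print("estimated level-set fraction:", est_set.mean())
```

The coverage function itself also quantifies uncertainty: values near 0.5 mark regions where set membership is still ambiguous and further evaluations would be most informative.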
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown efficient as a base for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob'ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob'ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
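A minimal sketch of the Vorob'ev expectation of a random set, using synthetic binary realisations on a grid as stand-ins for conditionally simulated dominated regions (all data here are invented; the real method works with GP conditional simulations in objective space):

```python
import numpy as np

# Each realisation of the random set is the region {x <= c} for a
# random cutoff c, mimicking a simulated "dominated" region.
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 100)
realisations = np.array([grid <= rng.uniform(0.3, 0.7) for _ in range(200)])

coverage = realisations.mean(axis=0)   # pointwise coverage function p(x)
mean_measure = realisations.mean()     # expected measure E|X| of the set

# Vorob'ev expectation: threshold coverage at the smallest beta whose
# upper-level set has measure <= E|X|.
for beta in np.sort(np.unique(coverage)):
    if (coverage >= beta).mean() <= mean_measure:
        break
vorobev_expectation = coverage >= beta
print("Vorob'ev set fraction:", vorobev_expectation.mean(),
      "expected measure:", mean_measure)
```

By construction the Vorob'ev expectation's measure matches the expected measure of the random set up to grid resolution; the Vorob'ev deviation would then average the measure of the symmetric difference between realisations and this expectation.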
Abstract:
Aims: Periodic leg movements in sleep (PLMS) are a frequent finding in polysomnography. Most patients with restless legs syndrome (RLS) display PLMS. However, since PLMS are also often recorded in healthy elderly subjects, the clinical significance of PLMS is still discussed controversially. Leg movements seen concurrently with arousals in obstructive sleep apnoea (OSA) may also appear periodically. Quantitative assessment of the periodicity of LM/PLM as measured by inter-movement intervals (IMI) is difficult. This is mainly due to influencing factors such as sleep architecture and sleep stage, medication, inter- and intra-patient variability, and the arbitrary amplitude and sequence criteria, which tend to broaden the IMI distributions or even make them multi-modal. Methods: Here a statistical method is presented that eliminates such effects from the raw data before the statistics of IMI are analysed. Rather than studying the absolute size of IMI (measured in seconds), we focus on the shape of their distribution (suitably normalized IMI). To this end we employ methods developed in Random Matrix Theory (RMT). Patients: The periodicity of leg movements (LM) of four patient groups (10 to 15 patients each) showing LM without PLMS (group 1), OSA without PLMS (group 2), PLMS and OSA (group 3), and PLMS without OSA (group 4) is compared. Results: The IMI of patients without PLMS (groups 1 and 2) and with PLMS (groups 3 and 4) are statistically different. In patients without PLMS the distribution of normalized IMI closely resembles that of random events. In contrast, the IMI of PLMS patients show features of periodic systems (e.g. a pendulum) when studied in this normalized manner. Conclusions: For quantifying PLMS periodicity, proper normalization of the IMI is crucial. Without this procedure, important features are hidden when grouping LM/PLM over whole nights or across patients. The clinical significance of PLMS might be elucidated when random LM are properly separated from LM that show features of periodic systems.
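An illustrative sketch of why normalization matters, with invented interval data (this is a simplification, not the paper's RMT pipeline): after dividing each recording's inter-movement intervals by their mean, random (Poisson-like) and periodic (pendulum-like) processes separate cleanly by the spread of the normalized distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented IMI data, in seconds:
random_imi = rng.exponential(30.0, size=500)        # random, Poisson-like events
periodic_imi = 25.0 + rng.normal(0, 2.0, size=500)  # near-periodic events

def normalised(imi):
    # Normalize so every recording has mean interval 1; only the
    # *shape* of the distribution remains.
    return imi / imi.mean()

# After normalisation both have mean 1, but the spread differs sharply:
print("random IMI, normalized std:  ", normalised(random_imi).std())
print("periodic IMI, normalized std:", normalised(periodic_imi).std())
```

For an exponential distribution the normalized standard deviation is close to 1, while for a near-periodic process it stays near zero, which is the kind of shape difference the abstract's groups 1/2 versus 3/4 exhibit.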
Abstract:
A research project has recently been launched to clarify how systems in second-order set theory extending NBG (as well as those in (n+3)-th order number theory extending the so-called Bernays–Gödel expansion of full (n+2)-th order number theory, etc.) differ from systems in second-order number theory extending ACA₀. In this article, we establish the equivalence between Δ¹₀-LFP and Δ¹₀-FP, which assert the existence of a least and of a (not necessarily least) fixed point, respectively, for positive elementary operators (and, more generally, between Δⁿ⁺²₀-LFP and Δⁿ⁺²₀-FP). Our proof also shows the equivalence between ID₁ and ÎD₁, both of which are defined in the standard way but with the starting theory PA replaced by ZFC (or by full (n+2)-th order number theory with global well-ordering).
Abstract:
An Internet portal accessible at www.gdb.unibe.ch has been set up to automatically generate color-coded similarity maps of the ChEMBL database in relation to up to two sets of active compounds taken from the enhanced Directory of Useful Decoys (eDUD), a random set of molecules, or up to two sets of user-defined reference molecules. These maps visualize the relationships between the selected compounds and ChEMBL in six different high-dimensional chemical spaces, namely MQN (42-D molecular quantum numbers), SMIfp (34-D SMILES fingerprint), APfp (20-D shape fingerprint), Xfp (55-D pharmacophore fingerprint), Sfp (1024-bit substructure fingerprint), and ECfp4 (1024-bit extended connectivity fingerprint). The maps are supplied in the form of Java-based desktop applications called "similarity mapplets" allowing interactive content browsing and linked to a "Multifingerprint Browser for ChEMBL" (also accessible directly at www.gdb.unibe.ch) to perform nearest-neighbor searches. One can obtain six similarity mapplets of ChEMBL relative to random reference compounds, 606 similarity mapplets relative to single eDUD active sets, 30,300 similarity mapplets relative to pairs of eDUD active sets, and any number of similarity mapplets relative to user-defined reference sets to help visualize the structural diversity of compound series in drug optimization projects and their relationship to other known bioactive compounds.
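A hypothetical sketch of the nearest-neighbor search underlying such a browser, with mock count vectors standing in for real MQN fingerprints (the actual 42 MQN descriptors are computed from molecular structure, which is omitted here):

```python
import numpy as np

# Mock database: 1000 molecules, each a 42-D vector of small integer
# counts imitating MQN (molecular quantum numbers) fingerprints.
rng = np.random.default_rng(3)
database = rng.integers(0, 20, size=(1000, 42))
query = rng.integers(0, 20, size=42)     # mock reference molecule

# City-block (Manhattan) distance between count fingerprints:
dists = np.abs(database - query).sum(axis=1)
nearest = np.argsort(dists)[:5]          # indices of the 5 nearest neighbors
print("5 nearest indices:", nearest)
print("their distances:  ", dists[nearest])
```

The same pattern applies to the bit fingerprints (Sfp, ECfp4) with a Tanimoto-style similarity in place of the city-block distance.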
Abstract:
Although the recycling of municipal wastewater can play an important role in water supply security and ecosystem protection, the percentage of wastewater recycled is generally low and strikingly variable. Previous research has employed detailed case studies to examine the factors that contribute to recycling success but usually lacks a comparative perspective across cases. In this study, 25 water utilities in New South Wales, Australia, were compared using fuzzy-set Qualitative Comparative Analysis (fsQCA). This research method applies binary logic and set theory to identify the minimal combinations of conditions that are necessary and/or sufficient for an outcome to occur within the set of cases analyzed. The influence of six factors (rainfall, population density, coastal or inland location, proximity to users, cost recovery, and revenue for water supply services) was examined for two outcomes, agricultural use and "heavy" (i.e., commercial/municipal/industrial) use. Each outcome was explained by two different pathways, illustrating that different combinations of conditions are associated with the same outcome. Generally, while economic factors are crucial for heavy use, factors relating to water stress and geographical proximity matter most for agricultural reuse. These results suggest that policies to promote wastewater reuse may be most effective if they target uses that are most feasible for utilities and correspond to the local context. This work also makes a methodological contribution by illustrating the potential utility of fsQCA for understanding the complex drivers of performance in water recycling.
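A toy sketch of the set-theoretic consistency scores at the heart of fsQCA (the membership values and condition names below are invented; the actual study analyzed 25 NSW utilities across six conditions):

```python
import numpy as np

# Fuzzy membership scores in [0, 1] for six hypothetical utilities:
cost_recovery = np.array([0.9, 0.8, 0.2, 0.7, 0.1, 0.6])   # condition X
heavy_use     = np.array([0.8, 0.9, 0.1, 0.6, 0.3, 0.7])   # outcome Y

overlap = np.minimum(cost_recovery, heavy_use).sum()

# Consistency of "X is sufficient for Y": how much of X lies within Y.
sufficiency = overlap / cost_recovery.sum()
# Consistency of "X is necessary for Y": how much of Y lies within X.
necessity = overlap / heavy_use.sum()
print(f"sufficiency consistency: {sufficiency:.2f}")
print(f"necessity consistency:   {necessity:.2f}")
```

fsQCA computes such scores for every combination of conditions and then minimizes the sufficient combinations into the "pathways" the abstract refers to.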