806 results for UMD SPACES
Abstract:
In this paper, inspired by two very different, successful metric theories, namely the real-valued viewpoint of Lowen's approach spaces and the probabilistic setting of Kramosil and Michalek's fuzzy metric spaces, we present a family of spaces, called fuzzy approach spaces, that are appropriate for handling both measurement conceptions at the same time. To do so, we study the underlying metric interrelationships between the above-mentioned theories, obtaining six postulates that allow us to consider such spaces within a single category. As a result, the natural way in which metric spaces can be embedded in both classes leads to a commutative categorical scheme. Each postulate is interpreted in the context of the study of the evolution of fuzzy systems. First properties of fuzzy approach spaces are introduced, including a topology. Finally, we describe a fixed point theorem in the setting of fuzzy approach spaces that can be particularized to the previously existing measure spaces.
Abstract:
We describe a method to explore the configurational phase space of chemical systems. It is based on the nested sampling algorithm recently proposed by Skilling (AIP Conf. Proc. 2004, 395; J. Bayesian Anal. 2006, 1, 833) and allows us to explore the entire potential energy surface (PES) efficiently in an unbiased way. The algorithm has two parameters which directly control the trade-off between the resolution with which the space is explored and the computational cost. We demonstrate the use of nested sampling on Lennard-Jones (LJ) clusters. Nested sampling provides a straightforward approximation for the partition function; thus, evaluating expectation values of arbitrary smooth operators at arbitrary temperatures becomes a simple postprocessing step. Access to absolute free energies allows us to determine the temperature-density phase diagram for LJ cluster stability. Even for relatively small clusters, the efficiency gain over parallel tempering in calculating the heat capacity is an order of magnitude or more. Furthermore, by analyzing the topology of the resulting samples, we are able to visualize the PES in a new and illuminating way. We identify a discretely valued order parameter with basins and suprabasins of the PES, allowing a straightforward and unambiguous definition of macroscopic states of an atomistic system and the evaluation of the associated free energies.
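A minimal sketch of the basic nested sampling loop described above, written in Python for a toy two-dimensional double-well potential. The potential, box size, number of live points K and iteration count are illustrative assumptions, and the constrained-sampling step is a crude random walk rather than the authors' scheme for atomistic systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "potential energy surface": a double well plus a harmonic direction
# (purely illustrative, not an LJ cluster).
def energy(x):
    return (x[0]**2 - 1.0)**2 + 0.5 * x[1]**2

L = 3.0          # configurations drawn uniformly from the box [-L, L]^2
K = 200          # number of live points: controls resolution
n_iter = 2000    # number of iterations: controls cost

# Initialise live points uniformly over the configuration space.
live = rng.uniform(-L, L, size=(K, 2))
live_E = np.array([energy(x) for x in live])

E_levels = []                         # decreasing sequence of energy ceilings
for _ in range(n_iter):
    worst = int(np.argmax(live_E))    # live point with the highest energy
    E_max = live_E[worst]
    E_levels.append(E_max)
    # Replace it by a point sampled uniformly subject to E(x) < E_max:
    # clone a surviving live point and run a short constrained random walk.
    idx = int(rng.integers(K - 1))
    if idx >= worst:
        idx += 1
    x = live[idx].copy()
    for _ in range(20):
        trial = x + rng.normal(scale=0.2, size=2)
        if np.all(np.abs(trial) <= L) and energy(trial) < E_max:
            x = trial
    live[worst], live_E[worst] = x, energy(x)

# Prior (configuration-space) volume shrinks by roughly K/(K+1) per iteration.
V0 = (2.0 * L)**2
V = V0 * (K / (K + 1.0))**np.arange(1, n_iter + 1)
dV = np.concatenate(([V0 - V[0]], -np.diff(V)))
E_levels = np.array(E_levels)

def partition_function(beta):
    # Z(beta) ~ sum_i dV_i exp(-beta E_i); the leftover live-point
    # contribution is neglected in this sketch.
    return float(np.sum(dV * np.exp(-beta * E_levels)))

for T in (0.1, 0.5, 1.0):
    print(f"T = {T}: Z ~ {partition_function(1.0 / T):.4g}")
```

The two parameters K and n_iter stand in for the resolution/cost controls mentioned in the abstract; evaluating Z at any temperature is a post-processing step over the recorded energy levels.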
Abstract:
A pivotal problem in Bayesian nonparametrics is the construction of prior distributions on the space M(V) of probability measures on a given domain V. In principle, such distributions on the infinite-dimensional space M(V) can be constructed from their finite-dimensional marginals---the most prominent example being the construction of the Dirichlet process from finite-dimensional Dirichlet distributions. This approach is both intuitive and applicable to the construction of arbitrary distributions on M(V), but also hamstrung by a number of technical difficulties. We show how these difficulties can be resolved if the domain V is a Polish topological space, and give a representation theorem directly applicable to the construction of any probability distribution on M(V) whose first moment measure is well-defined. The proof draws on a projective limit theorem of Bochner, and on properties of set functions on Polish spaces to establish countable additivity of the resulting random probabilities.
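As a hedged numerical illustration of the finite-dimensional-marginal idea (not the paper's projective-limit argument), the sketch below checks the Dirichlet-process consistency property that aggregating a finite-dimensional Dirichlet marginal over a coarser partition again yields the correct Dirichlet marginal; the base measure, concentration parameter and partitions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 5.0                       # concentration parameter (arbitrary choice)

# Base measure H = Uniform[0, 1]; fine partition of [0, 1] into 6 cells.
fine_edges = np.linspace(0.0, 1.0, 7)
fine_mass = np.diff(fine_edges)   # H-mass of each fine cell

# Finite-dimensional marginal of the DP on the fine partition:
#   (P(A_1), ..., P(A_6)) ~ Dirichlet(alpha * H(A_1), ..., alpha * H(A_6)).
samples_fine = rng.dirichlet(alpha * fine_mass, size=100_000)

# Coarsen: merge cells {1,2,3} and {4,5,6} into two blocks.
coarse_from_fine = np.stack([samples_fine[:, :3].sum(axis=1),
                             samples_fine[:, 3:].sum(axis=1)], axis=1)

# Direct finite-dimensional marginal on the coarse partition.
coarse_mass = np.array([fine_mass[:3].sum(), fine_mass[3:].sum()])
samples_coarse = rng.dirichlet(alpha * coarse_mass, size=100_000)

# Projective consistency: both routes should give the same distribution
# (compared here via a few moments).
print("mean (aggregated):", coarse_from_fine.mean(axis=0))
print("mean (direct)    :", samples_coarse.mean(axis=0))
print("var  (aggregated):", coarse_from_fine.var(axis=0))
print("var  (direct)    :", samples_coarse.var(axis=0))
```

It is exactly this kind of consistency across partitions that the projective limit theorem turns into a countably additive random probability measure on M(V).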
Abstract:
The contribution described in this paper is an algorithm for learning nonlinear, reference-tracking control policies given no prior knowledge of the dynamical system and limited interaction with the system through the learning process. Concepts from the fields of reinforcement learning, Bayesian statistics and classical control have been brought together in the formulation of this algorithm, which can be viewed as a form of indirect self-tuning regulator. On the task of reference tracking using a simulated inverted pendulum, it was shown to yield generally improved performance over the best controller derived from the standard linear quadratic method, using only 30 s of total interaction with the system. Finally, the algorithm was shown to work on the simulated double pendulum, demonstrating its ability to solve nontrivial control tasks. © 2011 IEEE.
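A deliberately simplified sketch of the "indirect self-tuning regulator" viewpoint, assuming a linear least-squares model and an LQR redesign step in place of the paper's Bayesian nonlinear model; all system matrices, noise levels and schedules below are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(2)

# Unknown "true" plant: a discrete-time linear system standing in for the pendulum.
A_true = np.array([[1.0, 0.1], [0.2, 0.95]])
B_true = np.array([[0.0], [0.1]])

def step(x, u):
    return A_true @ x + B_true @ u + 0.01 * rng.normal(size=2)

Qw, Rw = np.eye(2), 0.1 * np.eye(1)              # LQR design weights
x = np.array([1.0, 0.0])
X, U, Xn = [], [], []                            # interaction data
K = np.zeros((1, 2))                             # initial (zero) feedback gain

for t in range(300):
    u = -K @ x + 0.1 * rng.normal(size=1)        # current policy plus exploration noise
    x_next = step(x, u)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next

    # "Indirect" step: re-identify the model and redesign the controller periodically.
    if (t + 1) % 50 == 0:
        Z = np.hstack([np.array(X), np.array(U)])            # regressors [x_t, u_t]
        theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
        A_hat, B_hat = theta[:2].T, theta[2:].T
        P = solve_discrete_are(A_hat, B_hat, Qw, Rw)
        K = np.linalg.solve(Rw + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

print("identified A:\n", A_hat)
print("final LQR gain:", K)
```

The paper's contribution replaces the linear least-squares model with a learned probabilistic model and handles nonlinear reference tracking; the loop structure (collect data, update model, redesign controller) is the part this sketch illustrates.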
Abstract:
The airflow and thermal stratification produced by a localised heat source located at floor level in a closed room is of considerable practical interest and is commonly referred to as a 'filling box'. In rooms with low aspect ratios H/R ≲ 1 (room height H to characteristic horizontal dimension R), the thermal plume spreads laterally on reaching the ceiling and a descending horizontal 'front' forms, separating a stably stratified, warm upper region from cooler air below. The stratification is well predicted for H/R ≲ 1 by the original filling box model of Baines and Turner (J. Fluid Mech. 37 (1968) 51). This model represents a somewhat idealised situation of a plume rising from a point source of buoyancy alone; in particular, the momentum flux at the source is zero. In practical situations, real sources of heating and cooling in a ventilation system often include initial fluxes of both buoyancy and momentum, e.g. where a heating system vents warm air into a space. This paper describes laboratory experiments to determine the dependence of the 'front' formation and stratification on the source momentum and buoyancy fluxes of a single source, and on the location and relative strengths of two sources from which momentum and buoyancy fluxes were supplied separately. For a single source with a non-zero input of momentum, the descent of the front is more rapid than for the case of zero source momentum flux and becomes faster with increasing momentum input. Increasing the source momentum flux effectively increases the height of the enclosure, and leads to enhanced overturning motions and finally to complete mixing for highly momentum-driven flows. Stratified flows may be maintained by reducing the aspect ratio of the enclosure. At these low aspect ratios, different long-time behaviour is observed depending on the nature of the heat input. A constant heat flux always produces a stratified interior at large times. On the other hand, a constant temperature supply ultimately produces a well-mixed space at the supply temperature. For separate sources of momentum and buoyancy, the developing stratification is shown to be strongly dependent on the separation of the sources and their relative strengths. Even at small separation distances the stratification initially exhibits horizontal inhomogeneity, with localised regions of warm fluid (from the buoyancy source) and cool fluid. This inhomogeneity is less pronounced as the strength of one source is increased relative to the other. Regardless of the strengths of the sources, a constant buoyancy flux source dominates after sufficiently large times, although the strength of the momentum source determines whether the enclosure is initially well mixed (strong momentum source) or stably stratified (weak momentum source). © 2001 Elsevier Science Ltd. All rights reserved.
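For reference, the baseline Baines and Turner model invoked above reduces to a pure-plume volume flux driving the descent of the first front. A standard top-hat form (with entrainment coefficient α, source buoyancy flux F, and box plan area S = πR²; the numerical constants depend on the profile convention used) is:

```latex
% Pure-plume volume flux at height z above a point source of buoyancy flux F
% (top-hat profiles, entrainment coefficient \alpha):
Q(z) = \frac{6\alpha}{5}\left(\frac{9\alpha}{10}\right)^{1/3} \pi^{2/3} F^{1/3} z^{5/3}
% First-front descent in a filling box of plan area S = \pi R^2
% (Baines & Turner 1968), with h the front height above the source:
S\,\frac{\mathrm{d}h}{\mathrm{d}t} = -\,Q(h)
```

A non-zero source momentum flux, as studied in the experiments above, adds a forced-plume contribution to Q(z) and hence speeds up the front descent.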
Abstract:
Convergence analysis of consensus algorithms is revisited in the light of the Hilbert distance. The Lyapunov function used in the early analysis by Tsitsiklis is shown to be the Hilbert distance to consensus in log coordinates. Birkhoff's theorem, which proves contraction of the Hilbert metric for any positive homogeneous monotone map, provides an early yet general convergence result for consensus algorithms. Because Birkhoff's theorem holds in arbitrary cones, we extend consensus algorithms to the cone of positive definite matrices. The proposed generalization finds applications in the convergence analysis of quantum stochastic maps, which are a generalization of stochastic maps to non-commutative probability spaces. ©2010 IEEE.
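A small numerical sketch of the contraction property invoked above (the classical vector case only, not the positive-definite-matrix extension): for a consensus iteration x ← Ax with an entrywise-positive row-stochastic A, the Hilbert distance to consensus contracts at least as fast as Birkhoff's ratio tanh(Δ(A)/4), where Δ(A) is the projective diameter of A. The matrix and initial condition are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)

def hilbert_metric(x, y):
    # Hilbert projective metric between entrywise-positive vectors.
    r = x / y
    return float(np.log(r.max() / r.min()))

n = 5
A = rng.uniform(0.1, 1.0, size=(n, n))
A /= A.sum(axis=1, keepdims=True)            # entrywise-positive, row-stochastic

# Birkhoff contraction ratio tanh(Delta(A)/4), Delta = projective diameter of A.
ratios = [A[i, k] * A[j, l] / (A[j, k] * A[i, l])
          for i in range(n) for j in range(n) for k in range(n) for l in range(n)]
contraction_bound = np.tanh(np.log(max(ratios)) / 4.0)
print("Birkhoff bound on the contraction ratio:", round(float(contraction_bound), 3))

x = rng.uniform(0.5, 2.0, size=n)            # positive initial condition
ones = np.ones(n)                            # consensus direction (A @ ones == ones)
for t in range(8):
    d_before = hilbert_metric(x, ones)       # Tsitsiklis' Lyapunov function in log coordinates
    x = A @ x
    d_after = hilbert_metric(x, ones)
    print(f"step {t}: distance to consensus = {d_after:.3e}, "
          f"observed ratio = {d_after / d_before:.3f}")
```

Since A is row-stochastic, the all-ones vector is fixed, so each step contracts the Hilbert distance to consensus by at least the printed Birkhoff bound.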
Abstract:
State-of-the-art speech recognisers are usually based on hidden Markov models (HMMs). They model a hidden symbol sequence with a Markov process, with the observations independent given that sequence. These assumptions yield efficient algorithms, but limit the power of the model. An alternative model that allows a wide range of features, including word- and phone-level features, is a log-linear model. To handle, for example, word-level variable-length features, the original feature vectors must be segmented into words. Thus, decoding must jointly find the optimal segmentation of the utterance into words and the optimal word sequence. Features must therefore be extracted for each possible segment of audio. For many types of features, this becomes slow. In this paper, long-span features are derived from the likelihoods of word HMMs. Derivatives of the log-likelihoods, which break the Markov assumption, are appended. Previously, decoding with this model took cubic time in the length of the sequence, and longer for higher-order derivatives. This paper shows how to decode in quadratic time. © 2013 IEEE.
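The quadratic-time joint decoding over segmentations can be sketched as a generic semi-Markov Viterbi recursion; the segment scorer below is a toy placeholder, not the paper's HMM-likelihood and derivative features:

```python
import numpy as np

def segmental_viterbi(segment_score, T, words):
    """Decode jointly over segmentations of T frames and word labels.

    segment_score(w, s, e) returns the score of hypothesising word w on frames
    [s, e); the DP considers each of the O(T^2) segments once, so total work is
    O(T^2 * |words|) when segment scores are available in constant time.
    """
    best = np.full(T + 1, -np.inf)           # best score of any segmentation of frames [0, t)
    best[0] = 0.0
    back = [None] * (T + 1)                  # backpointers: (segment start, word)
    for e in range(1, T + 1):
        for s in range(e):
            for w in words:
                score = best[s] + segment_score(w, s, e)
                if score > best[e]:
                    best[e], back[e] = score, (s, w)
    # Trace back the optimal joint segmentation and word sequence.
    hyp, t = [], T
    while t > 0:
        s, w = back[t]
        hyp.append((w, s, t))
        t = s
    return best[T], hyp[::-1]

# Toy usage: a made-up scorer that prefers 3-frame segments labelled "a".
toy_score = lambda w, s, e: -abs((e - s) - 3) + (0.1 if w == "a" else 0.0)
print(segmental_viterbi(toy_score, T=9, words=["a", "b"]))
```

The point of the paper is that its long-span HMM-likelihood features (and their derivatives) can be arranged so that each segment score is cheap, keeping the overall decode quadratic rather than cubic.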
Abstract:
Cubic boron nitride (c-BN) films were deposited on Si(001) substrates in an ion-beam-assisted deposition (IBAD) system under various conditions, and the growth parameter spaces and optical properties of the c-BN films have been investigated systematically. The results indicate that suitable ion bombardment is necessary for the growth of c-BN films, and that a well-defined parameter space can be established by using the P/a parameter. The refractive index of the BN films remains constant at 1.8 for c-BN contents below 50%, while for films with a higher cubic-phase fraction it increases with the c-BN content, from 1.8 at χc = 50% to 2.1 at χc = 90%. Furthermore, the relationship between n and ρ for BN films can be described by the Anderson-Schreiber equation, with the overlap field parameter γ determined to be 2.05.
Abstract:
The storage of photoexcited electron-hole pairs is realized experimentally and described theoretically by transferring electrons in both real space and k-space through resonant Γ-X transfer in an AlAs/GaAs heterostructure. This is evidenced by a distinctive capacitance jump and hysteresis in the measured capacitance-voltage curves. Our structure may be used as a photonic memory cell with a long storage time and fast retrieval of photons.
Abstract:
In a recent seminal paper, Gibson and Wexler (1993) take important steps toward formalizing the notion of language learning in a (finite) space whose grammars are characterized by a finite number of parameters. They introduce the Triggering Learning Algorithm (TLA) and show that even in a finite space, convergence may be a problem due to local maxima. In this paper we explicitly formalize learning in a finite parameter space as a Markov structure whose states are parameter settings. We show that this captures the dynamics of the TLA completely and allows us to explicitly compute the rates of convergence for the TLA and other variants, e.g. a random walk. Also included in the paper are a corrected version of GW's central convergence proof, a list of "problem states" in addition to local maxima, and batch and PAC-style learning bounds for the model.
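The Markov-chain formulation lends itself to a direct computation of convergence rates: treating the target grammar as an absorbing state, the expected number of examples to convergence follows from the fundamental matrix. The transition matrix below is a toy example over four parameter settings, not Gibson and Wexler's three-parameter space:

```python
import numpy as np

# Toy transition matrix over four parameter settings; state 3 is the target
# grammar and is absorbing. The numbers are invented for illustration.
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.0, 0.1, 0.6, 0.3],
    [0.0, 0.0, 0.0, 1.0],
])

transient = [0, 1, 2]
Q = P[np.ix_(transient, transient)]              # transitions among non-target states
N = np.linalg.inv(np.eye(len(transient)) - Q)    # fundamental matrix (I - Q)^{-1}

# Expected number of input examples before reaching the target grammar,
# for each starting parameter setting.
print("expected examples to convergence:", N @ np.ones(len(transient)))

# Probability of having converged after t examples, starting from setting 0.
for t in (10, 50, 200):
    print(f"P(converged by t = {t}) = {np.linalg.matrix_power(P, t)[0, 3]:.4f}")
```

A local maximum of the TLA would show up here as a second absorbing (or closed) set of states, from which the target grammar is unreachable.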
Abstract:
Eckerdal, A., McCartney, R., Moström, J. E., Sanders, K., Thomas, L., and Zander, C. 2007. From Limen to Lumen: computing students in liminal spaces. In Proceedings of the Third International Workshop on Computing Education Research (Atlanta, Georgia, USA, September 15-16, 2007). ICER '07. ACM, New York, NY, 123-132.
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
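To make the Ising-prior construction concrete, here is a hedged sketch (not the authors' implementation) of a Gibbs sweep over inclusion indicators γ for linear regression, with a chain-graph Ising prior p(γ) ∝ exp(a Σ_i γ_i + b Σ_{(i,j)∈E} γ_i γ_j) and a Zellner g-prior marginal likelihood; the simulated data and the values of a, b and g are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data: n subjects, p covariates on a linear-chain graph; the first
# three covariates are truly active. All numbers are arbitrary.
n, p, g = 100, 20, 100.0
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(size=n)
y = y - y.mean()

a, b = -2.0, 1.0                                  # Ising prior: sparsity / smoothing
edges = [(i, i + 1) for i in range(p - 1)]        # linear-chain graph on covariates
nbrs = [[] for _ in range(p)]
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def log_marginal(gamma):
    # Zellner g-prior marginal likelihood (up to a constant), sigma^2 integrated out.
    idx = np.flatnonzero(gamma)
    if idx.size == 0:
        return -0.5 * n * np.log(y @ y)
    Xg = X[:, idx]
    yhat = Xg @ np.linalg.lstsq(Xg, y, rcond=None)[0]
    return (-0.5 * idx.size * np.log(1.0 + g)
            - 0.5 * n * np.log(y @ y - (g / (1.0 + g)) * (y @ yhat)))

gamma = np.zeros(p, dtype=int)
counts = np.zeros(p)
n_iter, burn = 500, 100
for it in range(n_iter):
    for i in range(p):                            # one Gibbs sweep over the indicators
        g1, g0 = gamma.copy(), gamma.copy()
        g1[i], g0[i] = 1, 0
        prior_odds = a + b * sum(gamma[j] for j in nbrs[i])   # Ising full conditional
        log_odds = prior_odds + log_marginal(g1) - log_marginal(g0)
        prob = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30.0, 30.0)))
        gamma[i] = int(rng.random() < prob)
    if it >= burn:
        counts += gamma

print("posterior inclusion probabilities:", np.round(counts / (n_iter - burn), 2))
```

The coupling parameter b is where the phase-transition issue discussed in the abstract enters: for large b on a dense graph, the Ising prior concentrates on nearly all-in or all-out configurations and the sampler mixes poorly.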