92 results for Euclidean plane


Relevance:

10.00%

Publisher:

Abstract:

A collection of spherical obstacles in the unit ball in Euclidean space is said to be avoidable for Brownian motion if there is a positive probability that Brownian motion diffusing from some point in the ball will avoid all the obstacles and reach the boundary of the ball. The centres of the spherical obstacles are generated according to a Poisson point process, while the radius of an obstacle is a deterministic function. If avoidable configurations are generated with positive probability, Lundh calls this percolation diffusion. An integral condition for percolation diffusion is derived in terms of the intensity of the point process and the function that determines the radii of the obstacles.
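The avoidability question lends itself to a crude Monte Carlo illustration. The sketch below (all parameter values and the radius function are invented; the paper's integral condition is not implemented) samples obstacle centres from a Poisson process on the unit disc and checks whether a discretized Brownian path started at the origin reaches the boundary before hitting an obstacle:

```python
import math
import random

def poisson_sample(lam, rng):
    # Draw from Poisson(lam) by multiplying uniforms (Knuth's method).
    L = math.exp(-lam)
    k, p = 0, rng.random()
    while p > L:
        p *= rng.random()
        k += 1
    return k

def escapes(mean_obstacles, radius_fn, step, max_steps, rng):
    # Obstacle centres: a Poisson number of uniform points on the unit disc.
    n = poisson_sample(mean_obstacles, rng)
    centres = []
    while len(centres) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y < 1.0:
            centres.append((x, y, radius_fn(math.hypot(x, y))))
    # Gaussian random walk from the origin as a Brownian surrogate.
    px, py = 0.0, 0.0
    for _ in range(max_steps):
        px += rng.gauss(0.0, step)
        py += rng.gauss(0.0, step)
        if px * px + py * py >= 1.0:
            return True          # reached the boundary: configuration avoided
        for cx, cy, r in centres:
            if (px - cx) ** 2 + (py - cy) ** 2 < r * r:
                return False     # absorbed by an obstacle
    return False                 # gave up: count as not escaping

rng = random.Random(1)
radius = lambda d: 0.05 * (1.0 - d)   # invented: obstacles shrink near the boundary
trials = 40
estimate = sum(escapes(25.0, radius, 0.03, 20_000, rng) for _ in range(trials)) / trials
print(estimate)
```

The estimate is only a rough frequency over a few walks; the paper's result replaces such simulation by an integral criterion on the intensity and the radius function.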

Relevance:

10.00%

Publisher:

Abstract:

Floods are currently the most recurrent natural disasters and those that cause the greatest damage and the most casualties worldwide. The occupation of flood-prone areas along river channels is the main cause of these disasters. This article describes the construction of hydrological models as a mechanism for flood prediction and land-use management. The basins of the Riera de Santa Coloma (Catalonia) and the San Francisco river (Guatemala) have been studied with the HEC-HMS and HEC-RAS programs, whose capabilities as land-management tools are assessed. The effect of urbanization on flood risk has been analysed for the Riera de Santa Coloma, on the basis of the forecasts of the Pla d'Ordenament Urbanístic Municipal (the municipal urban-development plan). The flood-prone areas resulting from extreme precipitation episodes on the San Francisco river have been determined for the storms Stan (2005) and Agatha (2010).

Relevance:

10.00%

Publisher:

Abstract:

Report on the scientific sojourn at the University of Bern, Switzerland, from March to June 2008. Writer identification consists in determining the writer of a piece of handwriting from a set of known writers. Even though a substantial number of compositions contain handwritten text within the music scores, the aim of this work is to use only the music notation to determine the author. Two approaches for writer identification in old handwritten music scores have been developed. The proposed methods extract features from every music line, and also from a texture image of music symbols. First, the music sheet is preprocessed to obtain a binarized music score without the staff lines. Classification is then performed using a k-NN classifier based on the Euclidean distance. The proposed method has been tested on a database of old music scores from the 17th to the 19th centuries, achieving encouraging identification rates.
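The final classification step, a k-NN majority vote under the Euclidean distance, is easy to sketch. The feature vectors and writer labels below are invented; extracting real features from music lines or texture images is the hard part and is out of scope here:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # Majority vote among the k training vectors nearest to `query`
    # under the Euclidean distance.
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors (hypothetical): each writer forms a small cluster.
train = [((0.1, 0.2), "writer A"), ((0.2, 0.1), "writer A"),
         ((0.9, 0.8), "writer B"), ((0.8, 0.9), "writer B")]
print(knn_predict(train, (0.15, 0.15)))  # -> writer A
```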

Relevance:

10.00%

Publisher:

Abstract:

This paper examines the conditions allowing the formation of aeropolitan areas, i.e. large industrial areas with a high concentration of commercial activities in the proximity of selected airports. We assume that firms deliver their production by plane, and that land competition takes place among service operators, firms and farmers. Service operators supply facilities that firms can absorb. Our framework identifies a unique land equilibrium characterized by the spatial sequence Airport - Industrial park - Rural area (A-I-R). Aerotropolis-type configurations are associated with the level of transport costs and the degree of intensity of facilities. Keywords: aerotropolis; facilities; bid-rent function. JEL Classification Numbers: L29; L90; R14.

Relevance:

10.00%

Publisher:

Abstract:

To measure the transmission and reflection coefficients, S21 and S11, of different materials or flat samples, a free-space measurement system operating in the W band (75–110 GHz) is used. From these parameters, S21 and S11, the complex relative dielectric permittivity (εr) and the complex relative magnetic permeability (μr) can be computed by a procedure called NRW (Nicolson-Ross-Weir). The measurement system consists of two horn antennas, one transmitting and one receiving, two mirrors that produce a plane wave for measuring the material properties, and a computer or device that calculates the results. The system requires calibration to obtain optimal results. It can be simulated in idealized form with software called ADS (Advanced Design System) in order to study and compare thicknesses, relative dielectric permittivities and relative magnetic permeabilities of materials as a function of frequency.
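A minimal sketch of an NRW-style extraction for a slab in free space, assuming a thin sample so that the principal branch of the logarithm applies. The frequency, thickness and material values below are made up, and the real system's calibration is ignored; the forward model is included only for a round-trip check:

```python
import cmath
import math

C = 299_792_458.0  # speed of light, m/s

def nrw_extract(s11, s21, f, d):
    # Reflection coefficient Gamma from the NRW quadratic; keep the
    # physical root with |Gamma| <= 1.
    x = (s11**2 - s21**2 + 1) / (2 * s11)
    gamma = x + cmath.sqrt(x**2 - 1)
    if abs(gamma) > 1:
        gamma = x - cmath.sqrt(x**2 - 1)
    # Transmission factor through the slab.
    t = (s11 + s21 - gamma) / (1 - (s11 + s21) * gamma)
    lam0 = C / f
    root = 1j * lam0 * cmath.log(t) / (2 * math.pi * d)  # = sqrt(er * mur)
    z = (1 + gamma) / (1 - gamma)                        # = sqrt(mur / er)
    return root / z, root * z                            # er, mur

def forward(er, mur, f, d):
    # S-parameters of a homogeneous slab in free space.
    k0 = 2 * math.pi * f / C
    z = cmath.sqrt(mur / er)
    g1 = (z - 1) / (z + 1)
    t = cmath.exp(-1j * k0 * d * cmath.sqrt(er * mur))
    den = 1 - g1**2 * t**2
    return g1 * (1 - t**2) / den, t * (1 - g1**2) / den

# Hypothetical W-band measurement: 94 GHz, 0.5 mm slab, er = 2.5 - 0.1j.
s11, s21 = forward(2.5 - 0.1j, 1.0, 94e9, 0.5e-3)
er, mur = nrw_extract(s11, s21, 94e9, 0.5e-3)
print(er, mur)  # should recover roughly 2.5-0.1j and 1.0
```

For electrically thick samples the logarithm's branch must be tracked across frequency, which the real procedure has to handle and this sketch does not.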

Relevance:

10.00%

Publisher:

Abstract:

The theory of compositional data analysis is often focused on the composition alone. In practical applications, however, a composition is often treated together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a coloured set of ternary diagrams might be a good idea for a first look at the data, but it quickly hides important aspects if the composition has many parts or takes extreme values. On the other hand, coloured scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used.

Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which, on the one hand, show the compositional geometry and, on the other hand, remain comprehensible to a non-expert analyst and readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation to complementary or explanatory data of categorical, real, ratio or again compositional scales.

This contribution shows that a few basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) suffice to build appropriate procedures for all these combinations of scales. This has fundamental implications for their software implementation and for how they might be taught to analysts who are not already experts in multivariate analysis.
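A balance of the kind used in such displays is a one-line computation: the log-ratio of the geometric means of two groups of parts, scaled so the result is an orthonormal coordinate. The part names below are hypothetical:

```python
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def balance(comp, num, den):
    # Balance between two groups of parts of a composition:
    # sqrt(r*s/(r+s)) * ln( gmean(numerator parts) / gmean(denominator parts) ).
    r, s = len(num), len(den)
    g1 = geometric_mean([comp[p] for p in num])
    g2 = geometric_mean([comp[p] for p in den])
    return math.sqrt(r * s / (r + s)) * math.log(g1 / g2)

# Hypothetical 3-part composition (closed to 1).
x = {"sand": 0.3, "silt": 0.5, "clay": 0.2}
print(balance(x, ["sand", "silt"], ["clay"]))
```

Choosing which parts go in the numerator and denominator of each balance is exactly the careful axis selection the text argues for.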

Relevance:

10.00%

Publisher:

Abstract:

Self-organizing maps (Kohonen, 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm uses the Euclidean metric in the adaptation of the model vectors, thus rendering, in theory, the whole methodology incompatible with non-Euclidean geometries.

In this contribution we explore the two main aspects of the problem:
1. Whether the conventional approach using the Euclidean metric can yield valid results with compositional data.
2. Whether a modification of the conventional approach, replacing vector addition and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution.

Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved".

Key words: self-organizing map; artificial neural networks; compositional data
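The modified adaptation rule can be sketched with the two canonical simplex operators. The update below, m ← m ⊕ (η ⊙ (x ⊖ m)), is a plausible form of the modification discussed, not necessarily the authors' exact implementation:

```python
import math

def closure(x):
    # Rescale a vector of positive parts so it sums to 1.
    s = sum(x)
    return [v / s for v in x]

def perturb(x, y):
    # Perturbation: component-wise product, then closure.
    return closure([a * b for a, b in zip(x, y)])

def power(x, alpha):
    # Powering: component-wise power, then closure.
    return closure([a ** alpha for a in x])

def som_update(model, data, lr):
    # One SOM adaptation step in Aitchison geometry:
    # m <- m (+) lr (.) (x (-) m), where (-) perturbs by the inverse.
    diff = perturb(data, [1.0 / m for m in model])
    return perturb(model, power(diff, lr))

m = closure([1, 2, 3])
x = closure([2, 2, 2])
print(som_update(m, x, 0.5))
```

With learning rate 1 the model vector lands exactly on the data point, just as the Euclidean update m + 1·(x − m) would.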

Relevance:

10.00%

Publisher:

Abstract:

This article defines new three-dimensional indices for describing molecules, based on parameters derived from Molecular Similarity Theory and on the Euclidean distances between atoms together with the effective atomic charges. These indices, called 3D indices, have been applied to the study of the structure-property relationships of a family of hydrocarbons, and they describe three properties of the family (boiling point, melting point and density) much more accurately than the classical 2D indices.
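One hypothetical descriptor of the distance-and-charge type described above is a sum over atom pairs of q_i·q_j / d_ij, with d_ij the Euclidean inter-atomic distance; the exact functional form used in the article may well differ:

```python
import math

def charge_distance_index(coords, charges):
    # Sum q_i * q_j / d_ij over all atom pairs; d_ij is the
    # Euclidean distance between atoms i and j.
    total = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            total += charges[i] * charges[j] / d
    return total

# Toy "molecule": three atoms with made-up coordinates and effective charges.
coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
charges = [-0.4, 0.2, 0.2]
print(charge_distance_index(coords, charges))
```

A single number like this (or a small vector of such numbers) is what gets regressed against boiling point, melting point or density in a structure-property study.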

Relevance:

10.00%

Publisher:

Abstract:

Observations in daily practice are sometimes registered as positive values larger than a given threshold α. The sample space is in this case the interval (α, +∞), α ≥ 0, which can be structured as a real Euclidean space in different ways. This fact opens the door to alternative statistical models that depend not only on the assumed distribution function, but also on the metric considered appropriate, i.e. the way differences, and thus variability, are measured.
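One natural way to give (α, +∞) a Euclidean structure is through the transform u = ln(x − α), under which differences become relative rather than absolute. A minimal sketch of the induced distance:

```python
import math

def log_distance(x, y, alpha=0.0):
    # Distance on (alpha, +inf) induced by u = ln(x - alpha):
    # the absolute difference of the transformed values.
    return abs(math.log(x - alpha) - math.log(y - alpha))

# Under this metric, 1 and 10 are exactly as far apart as 10 and 100.
print(log_distance(1, 10), log_distance(10, 100))
```

Choosing this metric instead of the usual |x − y| is precisely the kind of modelling decision the abstract refers to.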

Relevance:

10.00%

Publisher:

Abstract:

Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and was completed recently, when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine the partition being refined until the probability density represents a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities, obtained by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
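For reference, one standard form of this structure is the Aitchison inner product on the D-part simplex, and the analogous inner product on densities on an interval (a, b) to which the generalization leads (notation follows the usual presentations, not necessarily this abstract's):

```latex
\langle x, y \rangle_a
  = \frac{1}{2D} \sum_{i=1}^{D} \sum_{j=1}^{D}
    \ln\frac{x_i}{x_j}\,\ln\frac{y_i}{y_j},
\qquad
\langle f, g \rangle
  = \frac{1}{2(b-a)} \int_a^b\!\!\int_a^b
    \ln\frac{f(t)}{f(s)}\,\ln\frac{g(t)}{g(s)}\,dt\,ds .
```

The double sum over part indices becomes a double integral over the support, which is exactly the "infinitely many parts" refinement described above.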

Relevance:

10.00%

Publisher:

Abstract:

The simplex, the sample space of compositional data, can be structured as a real Euclidean space. This fact allows one to work with the coefficients with respect to an orthonormal basis. To these coefficients we apply standard real analysis; in particular, we define two different laws of probability through the density function and study their main properties.

Relevance:

10.00%

Publisher:

Abstract:

R, from http://www.r-project.org/, is 'GNU S', a language and environment for statistical computing and graphics, in which many classical and modern statistical techniques have been implemented; many more are supplied as packages. There are 8 standard packages, and many more are available through the CRAN family of Internet sites, http://cran.r-project.org.

We started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for:
- operations on compositions: perturbation and power multiplication, subcomposition with or without residuals, centering of the data, computing Aitchison, Euclidean and Bhattacharyya distances, compositional Kullback-Leibler divergence, etc.;
- graphical presentation of compositions in ternary diagrams and tetrahedrons with additional features: barycenter, geometric mean of the data set, percentile lines, marking and coloring of subsets of the data set, their geometric means, annotation of individual data in the set, etc.;
- dealing with zeros and missing values in compositional data sets, with R procedures for the simple and multiplicative replacement strategies;
- time series analysis of compositional data.

We present the current status of MixeR development and illustrate its use on selected data sets.
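MixeR is an R package, but the Aitchison distance it computes is easy to illustrate in any language. A Python sketch of the standard formula, the Euclidean distance between centred log-ratio (clr) transforms:

```python
import math

def clr(x):
    # Centred log-ratio transform: log of each part over the geometric mean.
    g = math.exp(sum(math.log(v) for v in x) / len(x))
    return [math.log(v / g) for v in x]

def aitchison_distance(x, y):
    # Euclidean distance between the clr-transformed compositions.
    return math.dist(clr(x), clr(y))

print(aitchison_distance([0.1, 0.3, 0.6], [0.2, 0.3, 0.5]))
```

Because the clr transform cancels any overall scaling, the distance is the same whether or not the compositions are closed to 1, which is the property that makes it the natural metric for this data.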

Relevance:

10.00%

Publisher:

Abstract:

This paper focuses on the problem of realizing a plane-to-plane virtual link between a camera attached to the end-effector of a robot and a planar object. In order to make the system independent of the appearance of the object surface, a structured-light emitter is attached to the camera so that four laser pointers are projected onto the object. In a previous paper we showed that such a system has good performance and nice characteristics, like partial decoupling near the desired state and robustness against misalignment of the emitter and the camera (J. Pages et al., 2004). However, no analytical results concerning the global asymptotic stability of the system were obtained, owing to the high complexity of the visual features used. In this work we present a better set of visual features which improves the properties of the features in (J. Pages et al., 2004) and for which it is possible to prove global asymptotic stability.

Relevance:

10.00%

Publisher:

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying fixed or dynamic sets of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, which are necessary for building quantitative strategies. We also contrast these models with real market data, sampled at one-minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is on the other hand fairly simple, yet it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to models involving stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations.

A model for forecasting any economic or financial magnitude can be properly defined with scientific rigour and yet lack any economic value, making it useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of the technical models are more volatile than their counterparts from the market-neutral strategies, and that calibration must be done at high sampling frequency to track the current market situation continuously. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
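The Euclidean-distance method of pair selection mentioned above is the simplest of the pairs-trading models: normalize each price path to start at 1 and trade the pair whose paths stay closest. The tickers and prices below are invented, and the trading rule itself (open on divergence, close on convergence) is omitted:

```python
import math

def normalize(prices):
    # Price path rescaled to start at 1, so levels are comparable.
    return [p / prices[0] for p in prices]

def pair_distance(a, b):
    # Euclidean distance between the normalized price paths.
    return math.dist(normalize(a), normalize(b))

# Made-up daily closes for three hypothetical tickers.
prices = {
    "AAA": [100.0, 101.0, 102.0, 101.0, 103.0],
    "BBB": [50.0, 50.4, 51.1, 50.6, 51.6],
    "CCC": [100.0, 98.0, 96.0, 99.0, 94.0],
}
tickers = list(prices)
best = min(((a, b) for i, a in enumerate(tickers) for b in tickers[i + 1:]),
           key=lambda pair: pair_distance(prices[pair[0]], prices[pair[1]]))
print(best)  # the most similar pair would be traded as a mean-reverting spread
```

The co-integration and Ornstein-Uhlenbeck formulations replace this raw distance with a statistical model of the spread, but the selection logic is the same.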