101 results for Skidding distances
Abstract:
Bimodal dispersal probability distributions with characteristic distances differing by several orders of magnitude have been derived and favorably compared to observations by Nathan [Nature (London) 418, 409 (2002)]. For such bimodal kernels, we show that two-dimensional molecular dynamics computer simulations are unable to yield accurate front speeds. Analytically, the usual continuous-space random walks (CSRWs) are applied to two dimensions. We also introduce discrete-space random walks and use them to check the CSRW results (because of the inefficiency of the numerical simulations). The physical results reported are shown to predict front speeds high enough to possibly explain Reid's paradox of rapid tree migration. We also show that, for a time-ordered evolution equation, fronts are always slower in two dimensions than in one dimension and that this difference is important both for unimodal and for bimodal kernels.
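For readers unfamiliar with how a dispersal kernel sets a front speed, the sketch below gives the standard linearized front-speed formula for a one-dimensional integro-difference model. It is textbook background stated under the usual pulled-front assumptions, not necessarily the exact time-ordered CSRW expressions derived in the paper.

```latex
% One-dimensional integro-difference model with per-generation growth R_0 and
% dispersal kernel k(z):  n_{t+1}(x) = R_0 \int k(x-y)\, n_t(y)\, dy.
% With the kernel's moment generating function M(s) = \int k(z)\, e^{s z}\, dz,
% the linearized (pulled) front speed is
c \;=\; \min_{s>0} \frac{1}{s}\,\ln\!\bigl[R_0\, M(s)\bigr].
% For a bimodal kernel k = p\,k_1 + (1-p)\,k_2 with characteristic distances
% differing by orders of magnitude, M(s) = p\,M_1(s) + (1-p)\,M_2(s), so even a
% small long-distance fraction p can raise c dramatically: the mechanism
% usually invoked to explain Reid's paradox of rapid tree migration.
```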
Abstract:
Various measures based on distances in reconstructed state spaces have been introduced to detect directional couplings from time series. These measures can, however, be biased by asymmetries in the dynamics' structure, noise color, or noise level, which are ubiquitous in experimental signals. Using theoretical reasoning and results from model systems, we identify the various sources of bias and show that most of them can be eliminated by an appropriate normalization. We furthermore diminish the remaining biases by introducing a measure based on ranks of distances. This rank-based measure outperforms existing distance-based measures in both sensitivity and specificity for directional couplings. Our findings are therefore relevant for the reliable detection of directional couplings from experimental signals.
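As a concrete illustration of the distance- and rank-based idea, here is a minimal Python sketch of a rank-of-distances interdependence measure in reconstructed state spaces, run on a toy pair of unidirectionally coupled logistic maps. The embedding parameters, the normalisation, and the coupled-map example are illustrative choices, not the authors' exact definitions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def delay_embed(x, dim=3, tau=2):
    """Time-delay embedding of a scalar series into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def rank_measure(x, y, dim=3, tau=2, k=5, theiler=10):
    """Rank-of-distances interdependence in reconstructed state spaces
    (a simplified sketch, not the authors' exact measure).  Values near 0
    mean that X-neighbourhoods tell us nothing about Y; values near 1 mean
    that the nearest neighbours of a point in X also have the lowest
    distance ranks in Y."""
    X, Y = delay_embed(x, dim, tau), delay_embed(y, dim, tau)
    dX, dY = cdist(X, X), cdist(Y, Y)
    n, scores = len(X), []
    for i in range(n):
        valid = np.where(np.abs(np.arange(n) - i) > theiler)[0]   # Theiler window
        if len(valid) <= k:
            continue
        nn_x = valid[np.argsort(dX[i, valid])[:k]]    # k nearest neighbours of i in X
        ranks_y = np.empty(n)
        ranks_y[valid[np.argsort(dY[i, valid])]] = np.arange(len(valid))
        expected = (len(valid) - 1) / 2.0             # mean rank under independence
        scores.append((expected - ranks_y[nn_x].mean()) / expected)
    return float(np.mean(scores))

# Toy system: x drives y through unidirectionally coupled logistic maps.
eps, n = 0.3, 1500
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.3
for t in range(n - 1):
    x[t + 1] = 3.8 * x[t] * (1 - x[t])
    y[t + 1] = 3.8 * ((1 - eps) * y[t] * (1 - y[t]) + eps * x[t] * (1 - x[t]))
print("measure computed from X to Y:", rank_measure(x, y))
print("measure computed from Y to X:", rank_measure(y, x))
```

The asymmetry between the two values is what such measures exploit when inferring the direction of coupling; the paper's contribution is a normalisation and rank construction that keeps this asymmetry from being driven by noise or structural differences between the systems.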
Abstract:
In this paper we propose a metaheuristic to solve a new version of the Maximum Capture Problem (MCP). In the original MCP, market capture is determined by shorter traveling distances or shorter traveling times; in this new version, not only the traveling time but also the waiting time affects the market share. This problem is hard to solve using standard optimization techniques. Metaheuristics are shown to offer accurate results within acceptable computing times.
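To make the problem statement concrete, below is a small Python sketch of a capture model in which customers choose by travel plus waiting time, solved with a plain swap-based local search. The instance is synthetic, the waiting times are treated as exogenous site attributes (in the actual problem they would depend on congestion), and the local search is only a stand-in for the metaheuristic proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance (all values synthetic): demand points, candidate sites for the
# entering firm, and one pre-existing competitor facility.
n_dem, n_cand, p = 60, 12, 3
demand_xy = rng.uniform(0, 10, (n_dem, 2))
demand_w = rng.integers(1, 20, n_dem)            # demand weight of each point
cand_xy = rng.uniform(0, 10, (n_cand, 2))
comp_xy = np.array([[5.0, 5.0]])
wait_cand = rng.uniform(0.0, 2.0, n_cand)        # waiting time at each candidate site
wait_comp = 1.0                                  # waiting time at the competitor

# Total disutility = travel time (here simply distance) + waiting time.
t_cand = np.linalg.norm(demand_xy[:, None] - cand_xy[None], axis=-1) + wait_cand
t_comp = np.linalg.norm(demand_xy[:, None] - comp_xy[None], axis=-1).min(axis=1) + wait_comp

def capture(open_sites):
    """Demand captured by the entrant: each customer patronises the facility
    with the lowest travel + waiting time (exact ties split evenly)."""
    best_own = t_cand[:, sorted(open_sites)].min(axis=1)
    share = np.where(best_own < t_comp, 1.0, np.where(best_own == t_comp, 0.5, 0.0))
    return float((demand_w * share).sum())

def local_search(p, iters=300):
    """Plain swap-based local search (a stand-in for the paper's metaheuristic)."""
    current = set(rng.choice(n_cand, size=p, replace=False).tolist())
    best_val = capture(current)
    for _ in range(iters):
        i = rng.choice(sorted(current))
        j = rng.choice(sorted(set(range(n_cand)) - current))
        trial = (current - {i}) | {j}
        val = capture(trial)
        if val > best_val:                       # keep improving swaps only
            current, best_val = trial, val
    return sorted(current), best_val

sites, captured = local_search(p)
print("open sites:", sites, "captured demand:", captured)
```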
Abstract:
Graphical displays which show inter-sample distances are important for the interpretation and presentation of multivariate data. Except when the displays are two-dimensional, however, they are often difficult to visualize as a whole. A device, based on multidimensional unfolding, is described for presenting some intrinsically high-dimensional displays in fewer, usually two, dimensions. This goal is achieved by representing each sample by a pair of points, say $R_i$ and $r_i$, so that a theoretical distance between the $i$-th and $j$-th samples is represented twice, once by the distance between $R_i$ and $r_j$ and once by the distance between $R_j$ and $r_i$. Self-distances between $R_i$ and $r_i$ need not be zero. The mathematical conditions for unfolding to exhibit symmetry are established. Algorithms for finding approximate fits, not constrained to be symmetric, are discussed and some examples are given.
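A minimal numerical illustration of the unfolding idea (each sample represented by a pair of points, self-distances left free) is sketched below in Python. It uses a generic least-squares fit rather than the specific algorithms discussed in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist, pdist, squareform

def unfold(Delta, ndim=2, seed=0):
    """Multidimensional unfolding: place two points per sample (R_i and r_i)
    so that dist(R_i, r_j) approximates Delta[i, j] for i != j.
    Self-distances dist(R_i, r_i) are left free, as in the abstract."""
    n = Delta.shape[0]
    rng = np.random.default_rng(seed)
    mask = ~np.eye(n, dtype=bool)               # off-diagonal entries only

    def stress(params):
        R = params[:n * ndim].reshape(n, ndim)
        r = params[n * ndim:].reshape(n, ndim)
        D = cdist(R, r)
        return ((D[mask] - Delta[mask]) ** 2).sum()

    x0 = rng.standard_normal(2 * n * ndim) * Delta[mask].mean()
    res = minimize(stress, x0, method="L-BFGS-B")
    R = res.x[:n * ndim].reshape(n, ndim)
    r = res.x[n * ndim:].reshape(n, ndim)
    return R, r, res.fun

# Toy usage: Euclidean distances between random 5-D points, unfolded into 2-D.
rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 5))
Delta = squareform(pdist(Y))
R, r, final_stress = unfold(Delta)
print("raw stress of the unfolded configuration:", final_stress)
```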
Abstract:
This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables as well as preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters which are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing a matrix and displaying its rows and columns in biplots.
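A rough Python sketch of the two-step idea follows: estimate variable weights that best reproduce a set of target distances, then obtain row and column coordinates from the SVD of the weighted, centred matrix. The generic least-squares optimiser and the toy data are illustrative; the paper's own estimation procedure and choice of distances differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def fit_variable_weights(X, target_d):
    """Estimate nonnegative variable weights w so that the weighted Euclidean
    distances d_ik = sqrt(sum_j w_j (x_ij - x_kj)^2) between the rows of X fit
    the target distances as closely as possible (least squares)."""
    sq_diff = np.stack([pdist(X[:, [j]], "sqeuclidean") for j in range(X.shape[1])], axis=1)

    def stress(log_w):                          # optimise log-weights to keep w > 0
        d = np.sqrt(sq_diff @ np.exp(log_w))
        return ((d - target_d) ** 2).sum()

    res = minimize(stress, np.zeros(X.shape[1]), method="L-BFGS-B")
    return np.exp(res.x)

def weighted_biplot_coords(X, w, ndim=2):
    """Row and column coordinates from the SVD of the weighted, centred matrix
    (the classical principal-axes path once the weights are known)."""
    Xc = (X - X.mean(axis=0)) * np.sqrt(w)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U[:, :ndim] * s[:ndim]               # principal row coordinates
    cols = Vt[:ndim].T                          # standard column coordinates
    return rows, cols

# Toy usage: distances generated with known weights should be roughly recovered.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 4))
true_w = np.array([4.0, 1.0, 0.25, 1.0])
target_d = np.sqrt(pdist(X * np.sqrt(true_w), "sqeuclidean"))
w_hat = fit_variable_weights(X, target_d)
print("estimated weights:", np.round(w_hat, 2))
rows, cols = weighted_biplot_coords(X, w_hat)
```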
Abstract:
Subcompositional coherence is a fundamental property of Aitchison's approach to compositional data analysis, and is the principal justification for using ratios of components. We maintain, however, that lack of subcompositional coherence, that is, incoherence, can be measured in an attempt to evaluate whether any given technique is close enough, for all practical purposes, to being subcompositionally coherent. This opens up the field to alternative methods, which might be better suited to cope with problems such as data zeros and outliers, while being only slightly incoherent. The measure that we propose is based on the distance measure between components. We show that the two-part subcompositions, which appear to be the most sensitive to subcompositional incoherence, can be used to establish a distance matrix which can be directly compared with the pairwise distances in the full composition. The closeness of these two matrices can be quantified using a stress measure that is common in multidimensional scaling, providing a measure of subcompositional incoherence. The approach is illustrated using power-transformed correspondence analysis, which has already been shown to converge to log-ratio analysis as the power transform tends to zero.
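The comparison logic can be sketched in a few lines of Python: compute a distance between every pair of components from the full composition, recompute it from the closed two-part subcomposition, and summarise the discrepancy with a normalised stress. The chi-square inter-column distance of ordinary correspondence analysis is used here purely for illustration; the paper works with power-transformed CA and its own stress definition.

```python
import numpy as np
from itertools import combinations

def chi2_col_dist(X):
    """Chi-square distances between the columns of a nonnegative table X
    (the inter-column distance used in correspondence analysis)."""
    P = X / X.sum()
    r = P.sum(axis=1)                   # row masses
    c = P.sum(axis=0)                   # column masses
    profiles = P / c                    # column profiles (each column sums to 1)
    J = X.shape[1]
    D = np.zeros((J, J))
    for j, k in combinations(range(J), 2):
        D[j, k] = D[k, j] = np.sqrt((((profiles[:, j] - profiles[:, k]) ** 2) / r).sum())
    return D

def incoherence_stress(X):
    """Compare each inter-component distance computed on the full composition
    with the same distance computed on the closed two-part subcomposition,
    and summarise the discrepancy with a normalised stress."""
    J = X.shape[1]
    D_full = chi2_col_dist(X)
    D_sub = np.zeros_like(D_full)
    for j, k in combinations(range(J), 2):
        sub = X[:, [j, k]]
        sub = sub / sub.sum(axis=1, keepdims=True)   # close the 2-part subcomposition
        D_sub[j, k] = D_sub[k, j] = chi2_col_dist(sub)[0, 1]
    return np.sqrt(((D_sub - D_full) ** 2).sum() / (D_full ** 2).sum())

# Toy compositional data (rows are samples, columns are parts, rows sum to 1).
rng = np.random.default_rng(0)
X = rng.dirichlet(alpha=[2.0, 5.0, 1.0, 3.0], size=50)
print("subcompositional incoherence (stress):", incoherence_stress(X))
```

A perfectly coherent technique would give a stress of zero, because the distance between two components would not change when all other components are removed.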
Abstract:
When the behaviour of a specific hypothesis test statistic is studied by a Monte Carlo experiment, the usual way to describe its quality is by giving the empirical level of the test. As an alternative to this procedure, we use the empirical distribution of the obtained p-values and exploit its information both graphically and numerically.
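A minimal Monte Carlo illustration in Python: under a correctly sized test and a true null hypothesis the p-values should be Uniform(0,1), so inspecting their whole empirical distribution reveals more than the rejection rate at a few nominal levels. The one-sample t-test setting below is a toy choice, not the experiments of the paper.

```python
import numpy as np
from scipy import stats

# Simulate the test many times under a true null and collect the p-values.
rng = np.random.default_rng(0)
n_rep, n = 5000, 30
pvals = np.array([stats.ttest_1samp(rng.standard_normal(n), 0.0).pvalue
                  for _ in range(n_rep)])

# Conventional summary: empirical level at a few nominal sizes ...
for alpha in (0.01, 0.05, 0.10):
    print(f"empirical level at {alpha:.2f}: {np.mean(pvals <= alpha):.3f}")

# ... versus the whole empirical distribution: under a correct null the ECDF
# of the p-values should track the diagonal of the unit square.
grid = np.linspace(0, 1, 101)
ecdf = np.array([(pvals <= t).mean() for t in grid])
print("max |ECDF(t) - t|:", np.abs(ecdf - grid).max())
print("Kolmogorov-Smirnov test against U(0,1), p =", stats.kstest(pvals, "uniform").pvalue)
```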
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
Abstract:
We present Strömgren uvby and Hβ photometry for a set of 575 northern main-sequence A-type stars, most of them belonging to the Hipparcos Input Catalogue, with V from 5 mag to 10 mag and with known radial velocities. These observations enlarge the catalogue we began to compile some years ago to more than 1500 stars. Our catalogue includes kinematic and astrophysical data for each star. Our future goal is to perform an accurate analysis of the kinematical behaviour of these stars in the solar neighbourhood.
Abstract:
In recent years, several authors have revised the calibrations used to compute physical parameters (T_eff, M_V, log g, [Fe/H]) from intrinsic colours in the uvby-Hβ photometric system. For reddened stars, these intrinsic colours can be computed through the standard relations among colour indices for each of the regions defined by Strömgren (1966) on the HR diagram. We present a discussion of the coherence of these calibrations for main-sequence stars. Stars from open clusters are used to carry out this analysis. Assuming that individual reddening values and distances should be similar for all the members of a given open cluster, systematic differences among the calibrations used in each of the photometric regions might arise when comparing mean reddening values and distances for the members of each region. To classify the stars into Strömgren's regions we extended the algorithm presented by Figueras et al. (1991) to a wider range of spectral types and luminosity classes. The observational ZAMS is compared with the theoretical ZAMS from stellar evolutionary models in the relevant effective-temperature range. The discrepancies are also discussed.
Abstract:
A method of making a multiple matched filter which allows the recognition of different characters in successive planes under simple conditions is proposed. The filter is generated by recording on the same plate the Fourier transforms of the different patterns to be recognized, each of which is affected by a different spherical phase factor because the patterns have been placed at different distances from the lens. This is demonstrated by experiments with a triple filter which allows satisfactory recognition of three characters.
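For readers more familiar with the digital counterpart, the Python sketch below performs matched-filter character recognition by multiplying the scene spectrum with the conjugate Fourier transform of each template (FFT cross-correlation). The glyphs and scene are synthetic, and the sketch does not reproduce the paper's optical multiplexing, in which different spherical phase factors make each character's correlation peak appear in a different output plane.

```python
import numpy as np

def make_glyph(char, size=16):
    """Crude binary glyphs used as stand-in 'characters' (purely synthetic)."""
    g = np.zeros((size, size))
    if char == "T":
        g[2:4, 3:13] = 1          # top bar
        g[2:13, 7:9] = 1          # stem
    elif char == "L":
        g[2:13, 3:5] = 1          # vertical bar
        g[11:13, 3:12] = 1        # bottom bar
    elif char == "X":
        d = np.eye(12)
        g[2:14, 2:14] = np.maximum(d, np.fliplr(d))   # two diagonals
    return g

def matched_filter_response(scene, template):
    """FFT-based matched filtering: multiply the scene spectrum by the
    conjugate template spectrum, then transform back (cross-correlation)."""
    F_scene = np.fft.fft2(scene)
    F_temp = np.fft.fft2(template, s=scene.shape)
    return np.real(np.fft.ifft2(F_scene * np.conj(F_temp)))

# Synthetic scene containing the three characters at known positions.
scene = np.zeros((128, 128))
positions = {"T": (10, 10), "L": (60, 40), "X": (90, 100)}
for ch, (r, c) in positions.items():
    scene[r:r + 16, c:c + 16] = make_glyph(ch)

for ch in "TLX":
    resp = matched_filter_response(scene, make_glyph(ch))
    peak = np.unravel_index(np.argmax(resp), resp.shape)
    print(ch, "correlation peak at", peak, "; true top-left corner:", positions[ch])
```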
Abstract:
Commuting refers to the fact that an important fraction of workers in developed countries do not reside close to their workplaces but at long distances from them, so they have to travel to their jobs and then back home daily. Although most workers hold a job in the same municipality where they live or in a neighbouring one, an important fraction of workers face long daily trips to get to their workplace and then back home. Even if we divide Catalonia (Spain) into small aggregations of municipalities, trying to make them as close to local labour markets as possible, we find that some of them have a positive commuting balance, attracting many workers from other areas and providing local jobs for almost all their resident workers. On the other side, other zones seem to be mostly residential, so an important fraction of their resident workers hold jobs in different local labour markets. Which variables influence an area's role as an attraction pole or a residential zone? In previous papers (Artís et al., 1998a, 2000; Romaní, 1999) we brought out the main individual variables that influence commuting by analysing a sample of Catalan workers and their commuting decisions. In this paper we analyse the territorial variables that influence commuting, using data for aggregate commuting flows in Catalonia from the 1991 and 1996 Spanish Population Censuses. These variables influence commuting in two different ways: a zone with a dense, well-developed economic structure will have a high density of jobs, so labour demand cannot be fulfilled with resident workers and spills over local boundaries. On the other side, this economic activity has side effects such as pollution, congestion or high land prices, which make these areas less desirable to live in. Workers who can afford it may prefer to live in less populated, less congested zones, where they can find cheaper land, larger homes and a better quality of life. The penalty of this decision is an increased commuting time. Our aim in this paper is to highlight the influence of the local economic structure and amenities endowment on the workplace-residence location decision. A place-to-place logit commuting model is estimated for 1991 and 1996 in order to find the economic and amenity variables with the highest influence on commuting decisions. From these models, we can outline a first approximation to the evolution of these variables over the 1986-1996 period. Data have been obtained from the aggregate travel-flow matrices of the 1986, 1991 and 1996 Spanish Population Censuses.
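A place-to-place logit on aggregate flows can be sketched as a grouped binomial regression: for each ordered pair of zones, the share of origin residents commuting to the destination is explained by the separation between the zones and by destination characteristics. The Python sketch below uses entirely synthetic data and invented variable names (dist_km, job_density, land_price) as a generic stand-in for the paper's specification and census data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic place-to-place data: resident workers at the origin, how many of
# them commute to each destination, and pair/destination characteristics.
rng = np.random.default_rng(3)
n_zones = 15
pairs = [(i, j) for i in range(n_zones) for j in range(n_zones) if i != j]
df = pd.DataFrame(pairs, columns=["orig", "dest"])
df["dist_km"] = rng.uniform(2, 80, len(df))
df["job_density"] = rng.lognormal(3.0, 0.6, n_zones)[df["dest"]]
df["land_price"] = rng.lognormal(0.0, 0.4, n_zones)[df["dest"]]
df["workers"] = rng.integers(200, 2000, len(df))

# Generate commuting counts from a "true" logit so the example is coherent.
eta = -1.0 - 0.06 * df["dist_km"] + 0.5 * np.log(df["job_density"]) - 0.4 * np.log(df["land_price"])
df["commuters"] = rng.binomial(df["workers"], 1 / (1 + np.exp(-eta)))

# Grouped (place-to-place) logit: share of origin workers commuting to each
# destination, explained by separation and destination attributes.
y = np.column_stack([df["commuters"], df["workers"] - df["commuters"]])
X = sm.add_constant(df[["dist_km"]].assign(log_jobs=np.log(df["job_density"]),
                                           log_price=np.log(df["land_price"])))
model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.summary().tables[1])
```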
Abstract:
In this paper we examine whether access to markets had a significant influence on the migration choices of Spanish internal migrants in the inter-war years. We perform a structural test of a New Economic Geography model that focuses on the forward linkage connecting workers' location choices with the geography of industrial production, one of the centripetal forces that drive agglomeration in NEG models. The results highlight the presence of this forward linkage in the Spanish economy of the inter-war period. That is, we prove the existence of a direct relation between workers' localization decisions and the market potential of the host regions. In addition, the direct estimation of the values associated with key parameters in the NEG model allows us to simulate the migratory flows derived from different scenarios of the relative size of regions and the distances between them. We show that in Spain the power of attraction of the agglomerations grew as they increased in size, but the high elasticity estimated for the migration costs reduced the intensity of the migratory flows. This could help to explain the apparently low intensity of internal migration in Spain until its upsurge during the 1920s. It also explains the geography of migration in Spain during this period, which hardly affected the regions furthest from the large industrial agglomerations (i.e., regions such as Andalusia, Extremadura and Castile-La Mancha) but had an intense effect on the provinces nearest to the principal centres of industrial development.
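For orientation, the reduced-form "market potential" often used as shorthand in this literature is the Harris (1954) form shown below. The paper itself estimates a structural NEG specification in which the forward linkage operates through real wages, so this is background rather than the estimated equation.

```latex
% Harris (1954) market potential of region i: incomes Y_j of all regions,
% discounted by the bilateral distances d_{ij},
MP_i \;=\; \sum_{j} \frac{Y_j}{d_{ij}} .
% In NEG models the forward linkage works through the price index: a larger
% accessible market lowers the local cost of living and raises real wages,
% which in turn attracts migrants.
```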
Abstract:
Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which then become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are related to the response in a nonlinear way, their selection with the usual F test is not possible. In this work we propose a solution to this predictor-selection problem by defining generalized statistical tests and adapting a nonparametric bootstrap method for the estimation of p-values. A numerical example with automobile insurance data is included.
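A compact Python sketch of the whole pipeline follows: distances from the predictors, latent variables by classical scaling, OLS on the latent variables, and a residual-bootstrap p-value for dropping one original predictor. The test statistic and bootstrap scheme below are one simple possibility, not the generalized statistics proposed in the work.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def latent_from_distances(D, k):
    """Classical scaling (principal coordinates): latent variables from a
    distance matrix via double centring and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

def db_regression_r2(X, y, k=5):
    """Distance-based regression: distances -> latent variables -> OLS.
    Returns the R^2 and the fitted values."""
    D = squareform(pdist(X))                    # Euclidean distances on the predictors
    Z = np.column_stack([np.ones(len(y)), latent_from_distances(D, k)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted = Z @ beta
    r2 = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return r2, fitted

def bootstrap_pvalue(X, y, drop, k=5, B=499, seed=0):
    """Does predictor column `drop` matter?  Statistic: gain in R^2 when the
    column is included; its null distribution is approximated by a residual
    bootstrap under the reduced (column-dropped) model."""
    rng = np.random.default_rng(seed)
    X_red = np.delete(X, drop, axis=1)
    r2_full, _ = db_regression_r2(X, y, k)
    r2_red, fitted_red = db_regression_r2(X_red, y, k)
    t_obs = r2_full - r2_red
    resid = y - fitted_red
    t_boot = np.empty(B)
    for b in range(B):                          # resample residuals under the reduced fit
        y_b = fitted_red + rng.choice(resid, size=len(y), replace=True)
        t_boot[b] = db_regression_r2(X, y_b, k)[0] - db_regression_r2(X_red, y_b, k)[0]
    return (1 + (t_boot >= t_obs).sum()) / (B + 1)

# Toy usage: y depends on x0 and x1 (nonlinearly) but not on x2.
rng = np.random.default_rng(4)
X = rng.standard_normal((80, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.3 * rng.standard_normal(80)
print("bootstrap p-value for x2 (irrelevant):", bootstrap_pvalue(X, y, drop=2))
print("bootstrap p-value for x0 (relevant):  ", bootstrap_pvalue(X, y, drop=0))
```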