125 results for Scaling Strategies
Abstract:
This document presents an integrated analysis of the performance of Catalonia, based on how energy consumption (measured at the societal level for Catalan society) is used within both the productive sectors of the economy and the household sector to generate added value and jobs, and to guarantee a given material standard of living to the population. The trends found in Catalonia are compared with those of other European countries in order to contextualize Catalonia's performance with respect to societies that have followed different paths of economic development. The first part of the document applies the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to provide this integrated analysis of Catalan society across different scales (starting from the specific sectors of the Catalan economy as an Autonomous Community and scaling up to an intra-regional comparison with the European Union 14) and across different dimensions of analysis of energy consumption coupled with added-value generation. Within the scope of this study, we observe the various trajectories of change in the metabolic pattern of Catalonia and the EU14 countries in the Paid Work sectors (namely the Agricultural Sector, the Productive Sector, and the Services and Government Sector), also in comparison with changes in the household sector. The flow intensities of exosomatic energy and of the added value generated in each specific sector are defined per hour of human activity, and are thus characterized as the Exosomatic Metabolic Rate (MJ/hour) and the Economic Labour Productivity (€/hour) across multiple levels. The second part of the document explores the possible application of the MuSIASEM approach to land-use analysis, using a multi-level matrix of land-use categories.
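The two flow intensities named above are simple ratios per hour of human activity. The sketch below shows the bookkeeping only; the sector figures are invented placeholders, not data from the study.

```python
# Illustrative computation of the two MuSIASEM flow intensities:
# Exosomatic Metabolic Rate (EMR, MJ/hour) and Economic Labour
# Productivity (ELP, EUR/hour). All numbers are hypothetical.

sectors = {
    # sector: (energy use in MJ, added value in EUR, human activity in hours)
    "Agriculture": (4.0e9, 1.2e9, 8.0e7),
    "Productive":  (9.0e10, 4.5e10, 9.0e8),
    "Services":    (3.0e10, 6.0e10, 1.5e9),
}

def flow_intensities(energy_mj, value_eur, hours):
    """Return (EMR in MJ/h, ELP in EUR/h) for one sector."""
    return energy_mj / hours, value_eur / hours

for name, (e, v, h) in sectors.items():
    emr, elp = flow_intensities(e, v, h)
    print(f"{name}: EMR = {emr:.1f} MJ/h, ELP = {elp:.1f} EUR/h")
```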
Abstract:
The Keller-Segel system has been widely proposed as a model for bacterial waves driven by chemotactic processes. Current experiments on E. coli have shown the precise structure of traveling pulses. We present here an alternative mathematical description of traveling pulses at the macroscopic scale. This modeling task is complemented with numerical simulations in accordance with the experimental observations. Our model is derived from an accurate kinetic description of the mesoscopic run-and-tumble process performed by bacteria, and can account for recent experimental observations with E. coli. Qualitative agreements include the asymmetry of the pulse and the transition in collective behaviour (clustered motion versus dispersion). In addition, we capture quantitatively the main characteristics of the pulse, such as its speed and the relative size of its tails. This work opens several experimental and theoretical perspectives. Coefficients at the macroscopic level are derived from considerations at the cellular scale: for instance, the stiffness of the signal-integration process turns out to have a strong effect on collective motion. Furthermore, the bottom-up scaling allows us to perform preliminary mathematical analysis and to write efficient numerical schemes. This model is intended as a predictive tool for the investigation of bacterial collective motion.
Abstract:
During the four years of the fellowship (2006 – 2009), a database of osteological measurements of the appendicular skeleton of numerous species of the order Carnivora has been consolidated. Specifically, 364 individuals from 126 species were measured. The specimens belonged to the collections of the Phyletisches Museum (Jena, Germany), the Museum für Naturkunde (Berlin, Germany), the Museu de Ciències Naturals de la Ciutadella (Barcelona, Spain), the Muséum National d'Histoire Naturelle (Paris, France), and the Museo Nacional de Ciencias Naturales (Madrid, Spain). With these data, three articles on the morphology of certain elements of the appendicular skeleton in carnivores have been prepared, two of which are currently under review for scientific publication. Two of them, "Scapula, habitat and locomotion in Carnivora" and "Size and shape in the carnivore scapula", relate scapular morphology to factors such as the animal's size, its type of locomotion, and the habitat in which it lives; the first through multivariate methodology (functional analysis) and the second using the new techniques of geometric morphometrics. The third article, "Scaling and mechanics in the carnivore calcaneus: A comparison of natural and artificial selection", evaluates the effect of different types of selection, natural versus artificial, on the morphology of the calcaneus and its influence on the biomechanics of this bone. Finally, an experimental study on the search for stability during arboreal locomotion has also been carried out; its results have given rise to the article "The search for stability on narrow supports: An experimental study in cats and dogs", which is also currently under review.
Abstract:
Energy consumption is an increasingly important aspect of microprocessor design. This work experiments with a power-management technique, dynamic voltage and frequency scaling (DVFS), to determine how effective it is when executing programs with different workloads, whether compute-intensive or memory-intensive. In addition, the experiments have been extended to several execution cores, making it possible to check to what extent the characteristics of execution on a multicore architecture affect the performance of this technique.
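The rationale behind DVFS can be sketched with the standard first-order model of CMOS switching power (the capacitance and the two operating points below are illustrative placeholders, not values from the study):

```python
# Sketch of why DVFS saves power: dynamic switching power scales as
# P = C * V^2 * f, so lowering frequency together with supply voltage
# cuts power roughly cubically. All numbers are hypothetical.

def dynamic_power(capacitance, voltage, frequency):
    """Switching power of CMOS logic: P = C * V^2 * f (watts)."""
    return capacitance * voltage**2 * frequency

c = 1.0e-9                                  # effective switched capacitance (F)
p_full = dynamic_power(c, 1.2, 3.0e9)       # nominal operating point
p_dvfs = dynamic_power(c, 0.9, 1.5e9)       # scaled-down operating point

print(round(p_full, 3), "W ->", round(p_dvfs, 3), "W")
```

Whether the slower, lower-power run also saves *energy* depends on how much longer it takes, which is exactly what distinguishes compute-bound from memory-bound workloads.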
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
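One common way to respect the compositional nature of densities before functional PCA is a centred log-ratio (clr) style transform on a discretized grid. The sketch below uses synthetic Gaussian densities, not the household-income data of the study:

```python
import numpy as np

# Sketch: dimensionality reduction of density functions via a clr-type
# log transform followed by functional PCA on a discretized grid.
# The densities are synthetic and purely illustrative.

grid = np.linspace(-4, 4, 101)
rng = np.random.default_rng(0)

def gaussian_density(mu, sigma, x):
    d = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return d / (d.sum() * (x[1] - x[0]))    # renormalize on the finite grid

# Sample of 20 densities with varying mean and spread
densities = np.array([gaussian_density(rng.normal(0, 1),
                                       1.0 + 0.3 * rng.random(), grid)
                      for _ in range(20)])

# Centred log-ratio style transform: log density minus its mean over the
# grid, which works with relative (compositional) information only.
clr = np.log(densities)
clr -= clr.mean(axis=1, keepdims=True)

# Functional PCA = ordinary PCA on the discretized clr curves
centered = clr - clr.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T                    # principal component scores
explained = s**2 / np.sum(s**2)

print("share of variance on PC1:", round(explained[0], 3))
```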
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
Abstract:
The general expansion of operators is defined as a linear combination of projectors, and its generalized application to the calculation of molecular integrals is presented. As a numerical example, it is applied to the calculation of electron-repulsion integrals between four s-type functions centred at different points; both the results of the calculation and the definition of scaling with respect to a reference value are shown, the latter of which will facilitate the process of optimizing the expansion for arbitrary parameters. Results that closely fit the exact value are given.
Abstract:
We propose to analyze shapes as "compositions" of distances in Aitchison geometry, as an alternative and complementary tool to classical shape analysis, especially when size is non-informative.
Shapes are typically described by the location of user-chosen landmarks. However, the shape, considered as invariant under scaling, translation, mirroring and rotation, does not uniquely define the location of landmarks. A simple approach is to use distances between landmarks instead of the locations of the landmarks themselves. Distances are positive numbers defined up to joint scaling, a mathematical structure quite similar to that of compositions. The shape fixes only the ratios of distances. Perturbations correspond to relative changes in the size of subshapes and in aspect ratios. The power transform increases the expression of the shape by increasing distance ratios. In analogy to subcompositional consistency, results should not depend too much on the choice of distances, because different subsets of the pairwise distances of landmarks uniquely define the shape.
Various compositional analysis tools can be applied to sets of distances, directly or after minor modifications concerning the singularity of the covariance matrix, and yield results with direct interpretations in terms of shape changes. The remaining problem is that not all sets of distances correspond to a valid shape. Nevertheless, interpolated or predicted shapes can be back-transformed by multidimensional scaling (when all pairwise distances are used) or free geodetic adjustment (when sufficiently many distances are used).
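The key property, that pairwise distances closed to unit sum are invariant under rotation, translation and joint scaling, can be checked directly. A minimal sketch with an invented triangle of landmarks:

```python
import numpy as np
from itertools import combinations

# Sketch: closing the pairwise landmark distances to sum 1 (as with
# compositions) removes size and yields a descriptor invariant under
# rotation, translation and scaling. The landmarks are illustrative.

def closed_distances(landmarks):
    """Pairwise distances of a landmark configuration, closed to sum 1."""
    d = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                  for i, j in combinations(range(len(landmarks)), 2)])
    return d / d.sum()

triangle = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])

# Same triangle, scaled by 2.5, rotated by 30 degrees and translated
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
transformed = 2.5 * triangle @ rot.T + np.array([1.0, -2.0])

print(closed_distances(triangle))
print(closed_distances(transformed))    # identical up to rounding
```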
Abstract:
Colour image segmentation based on the hue component presents some problems due to the physical process of image formation. One of these problems is colour clipping, which appears when at least one of the sensor components is saturated. We have designed a system, which works for a trained set of colours, to recover the chromatic information of those pixels in which colour has been clipped. The chromatic correction method is based on the fact that hue and saturation are invariant under uniform scaling of the three RGB components. The proposed method has been validated by means of a specific colour image processing board that has allowed its execution in real time. We show experimental results of the application of our method.
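The invariance the correction method relies on is easy to verify with the standard RGB-to-HSV conversion (the colour values below are arbitrary):

```python
import colorsys

# Hue and saturation (HSV) are unchanged when all three RGB components
# are scaled by the same factor, provided no channel clips.

def hue_saturation(r, g, b):
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return h, s

r, g, b = 0.75, 0.5, 0.25
k = 0.5                                   # uniform scaling, no clipping
print(hue_saturation(r, g, b))
print(hue_saturation(k * r, k * g, k * b))   # same hue and saturation
```

Clipping breaks exactly this property: once a saturated channel is flattened at the sensor maximum, the scaling is no longer uniform and the measured hue shifts, which is what the trained correction has to undo.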
Abstract:
Quantitative linguistics has provided us with a number of empirical laws that characterise the evolution of languages and competition amongst them. In terms of language usage, one of the most influential results is Zipf’s law of word frequencies. Zipf’s law appears to be universal, and may not even be unique to human language. However, there is ongoing controversy over whether Zipf’s law is a good indicator of complexity. Here we present an alternative approach that puts Zipf’s law in the context of critical phenomena (the cornerstone of complexity in physics) and establishes the presence of a large-scale “attraction” between successive repetitions of words. Moreover, this phenomenon is scale-invariant and universal – the pattern is independent of word frequency and is observed in texts by different authors and written in different languages. There is evidence, however, that the shape of the scaling relation changes for words that play a key role in the text, implying the existence of different “universality classes” in the repetition of words. These behaviours exhibit striking parallels with complex catastrophic phenomena.
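Under Zipf's law the frequency of the r-th most frequent word falls off roughly as 1/r, so the product rank × count stays approximately constant. The toy text below only illustrates the bookkeeping; real tests of the law need large corpora:

```python
from collections import Counter

# Toy rank-frequency table. Under Zipf's law, rank * count would be
# roughly constant; a few sentences are far too small to show that.

text = ("the cat sat on the mat and the dog sat on the log "
        "while the cat and the dog watched the log").split()

ranked = Counter(text).most_common()
for rank, (word, count) in enumerate(ranked, start=1):
    print(rank, word, count)
```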
Abstract:
It has long been stated that there are profound analogies between fracture experiments and earthquakes; however, few works attempt a complete characterization of the parallels between these very separate phenomena. We study the acoustic emission events produced during the compression of Vycor (SiO₂). The Gutenberg-Richter law, the modified Omori law, and the law of aftershock productivity hold over a minimum of 5 decades, are independent of the compression rate, and remain stationary for the entire duration of the experiments. The waiting-time distribution fulfills a unified scaling law with a power-law exponent close to 2.45 for long times, which is explained in terms of the temporal variations of the activity rate.
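The Gutenberg-Richter law states that event sizes follow a power law. A minimal numerical sketch, with made-up parameters rather than the Vycor data, draws synthetic power-law energies and recovers the exponent with the standard maximum-likelihood (Hill) estimator:

```python
import random
import math

# Synthetic check of a Gutenberg-Richter-type power law P(E) ~ E^(-eps):
# draw energies by inverse-transform sampling, then estimate eps by
# maximum likelihood. The exponent 1.4 is illustrative only.

random.seed(42)
epsilon_true = 1.4
e_min = 1.0
n = 50_000

# Inverse transform for a Pareto tail: E = e_min * (1 - u)^(-1/(eps - 1))
energies = [e_min * (1.0 - random.random()) ** (-1.0 / (epsilon_true - 1.0))
            for _ in range(n)]

# Hill / maximum-likelihood estimator of the power-law exponent
epsilon_hat = 1.0 + n / sum(math.log(e / e_min) for e in energies)
print(round(epsilon_hat, 2))
```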
Abstract:
The dynamics of homogeneously heated granular gases which fragment due to particle collisions is analyzed. We introduce a kinetic model which accounts for correlations induced at the grain collisions, and analyze both the kinetics and the relevant distribution functions these systems develop. The work combines analytical and numerical studies based on direct simulation Monte Carlo calculations. A broad family of fragmentation probabilities is considered, and its implications for the system kinetics are discussed. We show that generically these driven materials evolve asymptotically into a dynamical scaling regime. If the fragmentation probability tends to a constant, the grain number diverges at a finite time, leading to a shattering singularity. If the fragmentation probability vanishes, then the number of grains grows monotonically as a power law. We consider different homogeneous thermostats and show that the kinetics of these systems depends weakly on both the grain inelasticity and the driving. We observe that fragmentation plays a relevant role in the shape of the velocity distribution of the particles. When the fragmentation is driven by local stochastic events, the high-velocity tail is essentially exponential, independently of the heating frequency and the breaking rule. However, for a Lowe-Andersen thermostat, numerical evidence strongly supports the conjecture that the scaled velocity distribution follows a generalized exponential behavior f(c) ~ exp(−cⁿ), with n ≈ 1.2, regardless of the fragmentation mechanism.
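The generalized exponential f(c) ~ exp(−cⁿ) with n ≈ 1.2 sits between the pure exponential (n = 1) and the Gaussian (n = 2) in how fast its tail decays, as a quick evaluation shows:

```python
import math

# Tail of a generalized exponential f(c) ~ exp(-c**n): for a fixed
# scaled velocity, larger n means a faster-decaying tail.

def tail(c, n):
    return math.exp(-(c ** n))

c = 4.0
for n in (1.0, 1.2, 2.0):
    print(n, tail(c, n))
```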
Abstract:
The analysis of multiantenna capacity in the high-SNR regime has hitherto focused on the high-SNR slope (or maximum multiplexing gain), which quantifies the multiplicative increase as a function of the number of antennas. This traditional characterization is unable to assess the impact of prominent channel features since, for a majority of channels, the slope equals the minimum of the number of transmit and receive antennas. Furthermore, a characterization based solely on the slope captures only the scaling; it has no notion of the power required for a certain capacity. This paper advocates a more refined characterization whereby, as a function of SNR|dB, the high-SNR capacity is expanded as an affine function in which the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term, or power offset. The power offset, for which we find insightful closed-form expressions, is shown to play a chief role at SNR levels of practical interest.
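The affine expansion is easiest to see in the simplest case, the unfaded scalar AWGN channel, where the slope is 1 and the power offset is 0 (3 dB per bit at high SNR); this is a sketch of the shape of the expansion, not of the paper's MIMO expressions:

```python
import math

# High-SNR affine expansion of capacity for the scalar AWGN channel:
# C = log2(1 + SNR) approaches slope * (SNR_dB / (10*log10(2)) - offset)
# with slope 1 and offset 0. MIMO channels change both terms.

def capacity(snr_db):
    return math.log2(1.0 + 10 ** (snr_db / 10.0))

def affine_approx(snr_db, slope=1.0, offset=0.0):
    return slope * (snr_db / (10.0 * math.log10(2.0)) - offset)

for snr_db in (10.0, 20.0, 30.0):
    print(snr_db, round(capacity(snr_db), 3), round(affine_approx(snr_db), 3))
```

The gap between the exact curve and the affine expansion shrinks as SNR grows, which is why the zero-order term carries the channel-dependent information at high SNR.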
Abstract:
Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represents the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with its contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data.
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
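All of these scalings rest on the same singular-value decomposition machinery, which can be sketched in a few lines (the data matrix is random; the exponent a is the generic scaling parameter, not the paper's specific standard-biplot weighting):

```python
import numpy as np

# Biplot skeleton: decompose X = U S V', take row coordinates
# F = U S^a and column coordinates G = V S^(1-a). The scalar products
# F G' reproduce X exactly in full rank and approximate it in a
# low-dimensional map; different biplot scalings are different a's
# (plus pre-weighting of rows and columns).

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 4))
x -= x.mean(axis=0)                     # column-centred data

u, s, vt = np.linalg.svd(x, full_matrices=False)
a = 1.0                                 # "row-principal" scaling
f = u * s**a                            # row points
g = vt.T * s**(1.0 - a)                 # column points

print(np.allclose(f @ g.T, x))          # scalar products recover X
```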
Abstract:
Correspondence analysis, when used to visualize relationships in a table of counts (for example, abundance data in ecology), has been frequently criticized as being too sensitive to objects (for example, species) that occur with very low frequency or in very few samples. In this statistical report we show that this criticism is generally unfounded. We demonstrate this in several data sets by calculating the actual contributions of rare objects to the results of correspondence analysis and canonical correspondence analysis, both to the determination of the principal axes and to the chi-square distance. It is a fact that rare objects are often positioned as outliers in correspondence analysis maps, which gives the impression that they are highly influential, but their low weight offsets their distant positions and reduces their effect on the results. An alternative scaling of the correspondence analysis solution, the contribution biplot, is proposed as a way of mapping the results in order to avoid the problem of outlying and low-contributing rare objects.
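The contribution calculation behind this argument can be sketched directly from the correspondence-analysis decomposition: the contribution of a column to an axis is the squared entry of the corresponding (mass-weighted) singular vector, so a rare column can lie far out on the map yet contribute little. The count table below is invented for illustration:

```python
import numpy as np

# Contributions of columns to CA axes. S is the matrix of standardized
# residuals; its SVD gives the CA solution, and the squared entries of
# each right singular vector are the contributions to that axis
# (they sum to 1 over the columns).

counts = np.array([[20, 30, 1],
                   [25, 15, 0],
                   [30, 40, 2],
                   [10, 35, 0]], dtype=float)

p = counts / counts.sum()               # correspondence matrix
r = p.sum(axis=1)                       # row masses
c = p.sum(axis=0)                       # column masses

s = (p - np.outer(r, c)) / np.sqrt(np.outer(r, c))
_, sv, vt = np.linalg.svd(s, full_matrices=False)

ctr = vt.T ** 2                         # rows: table columns, cols: axes
print("column masses:", np.round(c, 3))
print("contributions to axis 1:", np.round(ctr[:, 0], 3))
```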