64 results for self-similar analysis
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We propose a procedure for analyzing and characterizing complex networks, and apply it to the social network constructed from email communications within a medium-sized university with about 1700 employees. Email networks provide an accurate and nonintrusive description of the flow of information within human organizations. Our results reveal the self-organization of the network into a state where the distribution of community sizes is self-similar. This suggests that a universal mechanism, responsible for the emergence of scaling in other self-organized complex systems such as river networks, could also be the underlying driving force in the formation and evolution of social networks.
Abstract:
Context. The interaction of microquasar jets with their environment can produce non-thermal radiation, as in the case of extragalactic outflows impacting on their surroundings. Significant observational evidence of jet/medium interaction in galactic microquasars has been collected in the past few years, although little theoretical work has been done on the resulting non-thermal emission. Aims. In this work, we investigate the non-thermal emission produced in the interaction between microquasar jets and their environment, and the physical conditions for its production. Methods. We developed an analytical model based on those successfully applied to extragalactic sources. The jet is taken to be a supersonic and mildly relativistic hydrodynamical outflow. We focus on the jet/shocked-medium structure in its adiabatic phase, and assume that it grows in a self-similar way. We calculate the fluxes and spectra of the radiation produced via synchrotron, inverse Compton, and relativistic bremsstrahlung processes by electrons accelerated in strong shocks. A hydrodynamical simulation is also performed to investigate the jet's interaction with the environment further and to check the physical parameters used in the analytical model. Results. For reasonable values of the magnetic field, and typical values of the external matter density, the non-thermal particles could produce significant amounts of radiation at different wavelengths, although they cool primarily by adiabatic losses rather than radiatively. The physical conditions of the analytical jet/medium interaction model are consistent with those found in the hydrodynamical simulation. Conclusions. Microquasar jet termination regions could be detectable at radio wavelengths with current instruments sensitive to ~arcminute scales. At X-ray energies, the expected luminosities are moderate, although the emitter is more compact than the radio one. The source may be detectable by XMM-Newton or Chandra, which have angular resolutions of 1-10 arcsec. The radiation at gamma-ray energies may be within the detection limits of the next generation of satellite and ground-based instruments.
Abstract:
We present new analytical tools able to predict the averaged behavior of fronts spreading through self-similar spatial systems, starting from reaction-diffusion equations. The averaged speed of these fronts is predicted and compared with the predictions from a more general equation (proposed in a previous work of ours) and with simulations. We focus here on two fractals, the Sierpinski gasket (SG) and the Koch curve (KC), for two reasons: i) they are widely known structures, and ii) they are deterministic fractals, so their analytical study is more intuitive. These structures, despite their simplicity, let us observe several characteristics of fractal fronts. Finally, we discuss the usefulness and limitations of our approach.
Abstract:
Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool for studying DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method that clusters input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash-flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology is that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
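The SOM training procedure underlying this kind of analysis can be illustrated with a minimal sketch (a generic self-organizing map, not the authors' implementation; the function names, grid size and synthetic data are our own assumptions): each input vector is matched to its best-matching unit on a small grid, and that unit and its grid neighbours are pulled toward the input with a decaying learning rate and neighbourhood radius.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny rectangular SOM; returns weights of shape (h, w, dim)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates of each node, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Linearly decaying learning rate and neighbourhood radius.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # Gaussian neighbourhood around the BMU on the grid.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            g = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * g * (x - weights)
            step += 1
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its nearest SOM node."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

In an EEM application, each row of `data` would be a flattened (or otherwise vectorized) excitation-emission matrix; the trained grid then places spectrally similar samples on nearby nodes.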
Abstract:
Adipose tissue (AT) is distributed as large differentiated masses, as smaller depots covering vessels and organs, and interspersed within them. The differences in cell types and sizes make AT one of the most disperse and complex organs. Lipid storage is partly shared by other tissues such as muscle and liver. We intended to obtain an approximate estimation of the size of the lipid reserves stored outside the main fat depots. Both male and female rats were made overweight by 4 weeks of feeding a cafeteria diet. Total lipid content was analyzed in brain, liver, gastrocnemius muscle, four white AT sites (subcutaneous, perigonadal, retroperitoneal and mesenteric), two brown AT (BAT) sites (interscapular and perirenal), and in a pool of the remaining organs and tissues (after discarding gut contents). Organ lipid content was estimated and tabulated for each individual rat. Food intake was measured daily. There was a surprisingly high proportion of lipid not accounted for by the main macroscopic AT sites, even when brain, liver and the main BAT sites were discounted. Muscle contained about 8% of body lipids, liver 1-1.4%, the four white AT sites 28-63% of body lipid, and the rest of the body (including muscle) 38-44%. There was a good correlation between AT lipid and body lipid, but lipid in "other organs" was also highly correlated with body lipid; brain lipid was not. Irrespective of dietary intake, accumulation of body fat was uniform both in the main lipid storage and handling organs (large masses of AT, but also liver and muscle) and in the "rest" of the tissues. These storage sites, in specialized (adipose) or non-specialized (liver, muscle) tissues, reacted in parallel to a hyperlipidic diet challenge. We postulate that body lipid stores are handled and regulated coordinately, by a more centralized and global mechanism than usually assumed.
Abstract:
To understand the scope of the transformations that the Internet may bring about, it is necessary to investigate the concrete uses to which it is or has been put. In short, empirical research is needed that provides information on who uses the Internet, under what circumstances and with what aims. In this context, we have begun an in-depth analysis of a specific case of a virtual social-support community created by a person affected by bipolar disorder. The goal of a virtual self-help community is to provide information and emotional support through the Internet. Using a qualitative methodology, we observed the virtual forum hosted on the website "Bipoloarweb.com". Research on Internet self-help groups has often stressed that, unlike face-to-face groups, these can only offer their members emotional and informational support; instrumental support or other kinds of physical assistance, by contrast, are not possible in virtual settings. The first results of our research invalidate this general claim, since several distinct episodes of instrumental help were observed. Regarding information management and knowledge production, some interesting points can already be advanced. First, the quantity and detail of the information about bipolar disorder circulating in the forum. Most of this knowledge, however, arises directly from the daily experience shared among group members. All this suggests the following working hypothesis for future work: beyond emotional and instrumental support, does this group 'empower' its members? If the answer were positive, our case would resemble other phenomena such as patients' associations.
Abstract:
In this paper a contest game with heterogeneous players is analyzed, in which the heterogeneity could be the consequence of past discrimination. Depending on the normative perception of this heterogeneity, there are two policy options to tackle it: either it is ignored and the contestants are treated equally, or affirmative action is implemented, compensating the discriminated players. The consequences of these two policy options are analyzed for a simple two-person contest game, and it is shown that the frequently criticized trade-off between affirmative action and total effort does not exist: instead, affirmative action fosters effort incentives. A generalization to the n-person case and to a case with a partially informed contest designer yields the same result if the participation level is similar under each policy.
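The no-trade-off result can be illustrated in a standard two-player Tullock contest, used here as a stand-in for the paper's model (the functional form, parameter names and numbers are our assumptions): with prize valuations v1 > v2 and a bias b multiplying the weaker player's effort in the contest success function p1 = x1/(x1 + b*x2), the first-order conditions give closed-form equilibrium efforts, and total effort is maximized by the fully compensating bias b = v1/v2 rather than by equal treatment (b = 1).

```python
def total_effort(v1, v2, b):
    """Equilibrium total effort in a two-player Tullock contest with
    success function p1 = x1 / (x1 + b * x2), where b > 1 favours the
    weaker player 2.  From the first-order conditions:
        x1 = b * v2 * v1**2 / (v1 + b * v2)**2
        x2 = b * v1 * v2**2 / (v1 + b * v2)**2
    so x1 + x2 simplifies to the expression below."""
    return b * v1 * v2 * (v1 + v2) / (v1 + b * v2) ** 2

# Equal treatment versus affirmative action for an illustrative case
# where player 1 values the prize twice as much as player 2:
v1, v2 = 2.0, 1.0
equal = total_effort(v1, v2, 1.0)            # contestants treated equally
affirmative = total_effort(v1, v2, v1 / v2)  # fully compensating bias
```

With these numbers, total effort under the compensating bias exceeds total effort under equal treatment, matching the abstract's claim that affirmative action fosters effort incentives.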
Abstract:
One of the most persistent debates in economic research is whether the answers to subjective questions can be used to explain individuals' economic behavior. Using panel data for twelve EU countries, in the present study we analyze the causal relationship between self-reported housing satisfaction and residential mobility. Our results indicate that: i) households unsatisfied with their current housing situation are more likely to move; ii) housing satisfaction rises after a move; and iii) housing satisfaction increases with the transition from being a renter to becoming a homeowner. Some interesting cross-country differences are observed. Our findings provide evidence in favor of the use of subjective indicators of satisfaction with certain life domains in the analysis of individuals' economic conduct.
Abstract:
Initiatives to stimulate the development and propagation of open educational resources (OER) need a sufficiently large community that can be mobilized to participate in this endeavour; failure to achieve this could lead to underuse of OER. In the context of the Wikiwijs initiative, a large-scale survey was undertaken amongst primary and secondary school teachers to explore possible determinants of the educational use of digital learning materials (DLMs). Building on the Integrative Model of Behaviour Prediction, it was conjectured that self-efficacy, attitude and perceived norm would take a central role in explaining the intention to use DLMs. Several other predictors were added to the model whose effects were hypothesized to be mediated by these three central variables. All conjectured relationships were found using path analysis on survey data from 1484 teachers. Intention to use DLMs was most strongly determined by self-efficacy, followed by attitude; ICT proficiency was, in turn, the strongest predictor of self-efficacy. Perceived norm played only a limited role in the intention to use DLMs. In conclusion, it seems paramount for the success of projects such as Wikiwijs to train teachers in the use of digital learning materials and ICT (e.g. the digital blackboard) and to influence their attitudes.
Abstract:
We propose to analyze shapes as "compositions" of distances in Aitchison geometry, as an alternative and complementary tool to classical shape analysis, especially when size is non-informative. Shapes are typically described by the location of user-chosen landmarks. However, the shape, considered as invariant under scaling, translation, mirroring and rotation, does not uniquely define the location of the landmarks. A simple approach is to use distances between landmarks instead of the landmark locations themselves. Distances are positive numbers defined up to joint scaling, a mathematical structure quite similar to compositions. The shape fixes only the ratios of distances. Perturbations correspond to relative changes of the size of subshapes and of aspect ratios. The power transform increases the expression of the shape by increasing distance ratios. In analogy to subcompositional consistency, results should not depend too much on the choice of distances, because different subsets of the pairwise distances of landmarks uniquely define the shape. Various compositional analysis tools can be applied to sets of distances, directly or after minor modifications concerning the singularity of the covariance matrix, and yield results with direct interpretations in terms of shape changes. The remaining problem is that not all sets of distances correspond to a valid shape. Nevertheless, interpolated or predicted shapes can be back-transformed by multidimensional scaling (when all pairwise distances are used) or free geodetic adjustment (when sufficiently many distances are used).
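The core idea, treating inter-landmark distances as a composition defined only up to joint scale, can be sketched in a few lines (the function names are ours; the centred log-ratio (clr) stands in for the Aitchison-geometry coordinates): rescaling all landmarks by a common factor leaves the clr coordinates of the distance vector unchanged, so they describe only the shape.

```python
import numpy as np

def pairwise_distances(landmarks):
    """All pairwise Euclidean distances between an array of landmarks,
    one landmark per row."""
    i, j = np.triu_indices(len(landmarks), k=1)
    return np.linalg.norm(landmarks[i] - landmarks[j], axis=1)

def clr(d):
    """Centred log-ratio: subtracting the mean log removes the joint
    scale, so only ratios of distances (i.e. the shape) remain."""
    logd = np.log(d)
    return logd - logd.mean()
```

For a triangle of landmarks, `clr(pairwise_distances(tri))` and `clr(pairwise_distances(3.0 * tri))` coincide, which is the scale invariance the abstract exploits.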
Abstract:
We compare correspondence analysis to the logratio approach based on compositional data. We also compare correspondence analysis to an alternative approach using the Hellinger distance for representing categorical data in a contingency table. We propose a coefficient which globally measures the similarity between these approaches. This coefficient can be decomposed into several components, one for each principal dimension, indicating the contribution of each dimension to the difference between the two representations. These three methods of representation can produce quite similar results. One illustrative example is given.
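The two geometries being compared can be made concrete for the row profiles of a contingency table (a sketch with our own function names; the paper's global similarity coefficient is not reproduced here): correspondence analysis compares profiles in the chi-square metric, weighting each column by its mass, whereas the Hellinger alternative compares square-rooted profiles and needs no column weighting.

```python
import numpy as np

def row_profiles(table):
    """Rows of a contingency table rescaled to sum to 1."""
    table = np.asarray(table, dtype=float)
    return table / table.sum(axis=1, keepdims=True)

def chi2_distance(p, q, col_masses):
    """Chi-square distance between two row profiles, the metric that
    underlies correspondence analysis."""
    return np.sqrt(np.sum((p - q) ** 2 / col_masses))

def hellinger_distance(p, q):
    """Hellinger distance between two row profiles; no column weights."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```

Here `col_masses` would be `table.sum(axis=0) / table.sum()`; comparing the two distance matrices over all row pairs is one simple way to see how similar the two representations are for a given table.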
Abstract:
Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
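One of the specialist techniques mentioned, zero replacement prior to log-ratio analysis, can be sketched in a few lines (this shows the standard multiplicative replacement in Python for illustration; the library described in the abstract is written in R, and the function names and default delta here are our assumptions): zero parts are set to a small value delta, and the nonzero parts are shrunk multiplicatively so the composition still sums to 1.

```python
import numpy as np

def close(x):
    """Closure: rescale a positive vector so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def multiplicative_replacement(x, delta=1e-3):
    """Replace zero parts of a composition by delta, shrinking the
    nonzero parts multiplicatively so the result still sums to 1."""
    x = close(x)
    zeros = x == 0
    c = int(zeros.sum())  # number of zero parts
    return np.where(zeros, delta, x * (1 - c * delta))
```

After replacement the composition contains no zeros, so log-ratio transforms such as clr or alr can be applied without taking the logarithm of zero.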
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items every respondent is allotted an equal amount, i.e. the total score, which each can distribute differently over the scales. Therefore this type of response format yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline for when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.