986 results for SELF-SIMILARITY


Relevance: 60.00%

Abstract:

The relationships between ecological diversity and ecosystem functions such as stability and productivity have long been debated, and no final conclusion has been reached. The debate has overlooked the requirement that comparisons be based on the same diversity index, which should be theoretically complete, and on the same observation scale. At the scale of ecotope observation, ecosystems should be distinguished according to the intensity of human disturbance. At the scale of species observation, either number diversity or biomass diversity should be specified. This paper takes grassland ecosystems within the Bayin Xile grassland of Xilin Gol League, Inner Mongolia Autonomous Region, as an example to analyze the effects of different diversity indices and spatial scales on conclusions about ecological diversity and its relationships with ecosystem functions. The analyses at both the ecotope and the species observation scales show that different diversity indices may lead to different conclusions, and that spatial resolution has a great effect on those conclusions. (c) 2005 Elsevier B.V. All rights reserved.
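The index-dependence the abstract describes is easy to demonstrate. Below is a minimal, hypothetical sketch (not the authors' data or code) comparing Shannon and Gini-Simpson diversity computed from number abundances versus biomass abundances for the same community:

```python
import math

def shannon(p):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero proportions."""
    return -sum(x * math.log(x) for x in p if x > 0)

def simpson(p):
    """Gini-Simpson diversity: 1 - sum(p_i ** 2)."""
    return 1.0 - sum(x * x for x in p)

def proportions(abundances):
    total = sum(abundances)
    return [a / total for a in abundances]

# Hypothetical grassland plot: the same three species measured two ways.
counts = [100, 10, 10]        # individuals per species (number diversity)
biomass = [5.0, 40.0, 40.0]   # grams per species (biomass diversity)

p_n = proportions(counts)
p_b = proportions(biomass)
print(shannon(p_n), shannon(p_b))
print(simpson(p_n), simpson(p_b))
```

With these made-up figures the biomass proportions are far more even than the count proportions, so the two observation choices disagree about how diverse the plot is, under either index.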

Relevance: 60.00%

Abstract:

Dynamic scaling and fractal behaviour of spinodal phase separation are studied in a binary polymer mixture of poly(methyl methacrylate) (PMMA) and poly(styrene-co-acrylonitrile) (SAN). In the later stages of spinodal phase separation, a simple dynamic scaling law was found for the scattering function S(q,t): S(q,t) ≈ q_m^(-3) S~(q/q_m), where q_m is the wavenumber of the scattering maximum and S~ is a time-independent scaling function. The possibility of using fractal theory to describe the complex morphology of spinodal phase separation is discussed. During phase separation the morphology exhibits strong self-similarity, so the two-dimensional images obtained by optical microscopy can be analysed within the framework of fractal concepts. The results give a fractal dimension of 1.64, which suggests that the fractal structure may be the origin of the dynamic scaling behaviour of the structure function.
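A fractal dimension like the 1.64 quoted here is typically estimated by box counting on the thresholded image. A self-contained sketch of the box-counting estimate (illustrative only; the 1.64 value comes from the authors' micrographs, not from this code):

```python
import math

def box_count(points, eps):
    """Count the boxes of side eps occupied by a set of 2-D points."""
    return len({(int(x // eps), int(y // eps)) for x, y in points})

def box_dimension(points, epsilons):
    """Least-squares slope of log N(eps) versus log(1/eps):
    the box-counting estimate of the fractal dimension."""
    xs = [math.log(1.0 / e) for e in epsilons]
    ys = [math.log(box_count(points, e)) for e in epsilons]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check on a set of known dimension: points densely sampled along a
# straight line give a dimension near 1. A thresholded micrograph, reduced
# to a set of pixel coordinates, would be analysed the same way.
line = [(i * 0.01, i * 0.01) for i in range(10000)]
d = box_dimension(line, [0.1, 0.2, 0.5, 1.0, 2.0])
print(d)  # close to 1
```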

Relevance: 60.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. 
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. 
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
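One of the traffic properties the middleware level proposes to exploit, self-similarity, is commonly quantified by the Hurst parameter H. A minimal sketch of the aggregated-variance estimator, run here on synthetic noise rather than a real trace (illustrative; not part of the project's codebase):

```python
import math, random

def aggregated_variance_hurst(series, block_sizes):
    """Aggregated-variance estimate of the Hurst parameter H: for a
    self-similar series, the variance of the m-aggregated series scales
    as m^(2H - 2), so H is recovered from the log-log slope."""
    xs, ys = [], []
    for m in block_sizes:
        blocks = [sum(series[i:i + m]) / m
                  for i in range(0, len(series) - m + 1, m)]
        mean = sum(blocks) / len(blocks)
        var = sum((b - mean) ** 2 for b in blocks) / len(blocks)
        xs.append(math.log(m))
        ys.append(math.log(var))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return 1.0 + slope / 2.0  # slope = 2H - 2

# Independent noise has no long-range dependence, so H should come out
# near 0.5; self-similar Web traffic traces typically give H well above 0.5.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(20000)]
h = aggregated_variance_hurst(noise, [10, 20, 50, 100, 200])
print(h)
```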

Relevance: 60.00%

Abstract:

As the World Wide Web (Web) is increasingly adopted as the infrastructure for large-scale distributed information systems, issues of performance modeling become ever more critical. In particular, locality of reference is an important property in the performance modeling of distributed information systems. In the case of the Web, understanding the nature of reference locality will help improve the design of middleware, such as caching, prefetching, and document dissemination systems. For example, good measurements of reference locality would allow us to generate synthetic reference streams with accurate performance characteristics, would allow us to compare empirically measured streams to explain differences, and would allow us to predict expected performance for system design and capacity planning. In this paper we propose models for both temporal and spatial locality of reference in streams of requests arriving at Web servers. We show that simple models based only on document popularity (likelihood of reference) are insufficient for capturing either temporal or spatial locality. Instead, we rely on an equivalent, but numerical, representation of a reference stream: a stack distance trace. We show that temporal locality can be characterized by the marginal distribution of the stack distance trace, and we propose models for typical distributions and compare their cache performance to our traces. We also show that spatial locality in a reference stream can be characterized using the notion of self-similarity. Self-similarity describes long-range correlations in the dataset, which is a property that previous researchers have found hard to incorporate into synthetic reference strings. We show that stack distance strings appear to be strongly self-similar, and we provide measurements of the degree of self-similarity in our traces. 
Finally, we discuss methods for generating synthetic Web traces that exhibit the properties of temporal and spatial locality that we measured in our data.
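The stack distance representation used above can be computed with a simple LRU stack. The sketch below (a naive O(N·S) illustration, not the authors' implementation) maps a request stream to its stack distance trace; a cache of size k would satisfy exactly the requests whose distance is at most k:

```python
def stack_distance_trace(requests):
    """Convert a reference stream into its LRU stack distance trace.
    Each request maps to the depth of the document in an LRU stack
    (1 = most recently used); first references map to None (infinite distance)."""
    stack = []  # most recently used document at index 0
    trace = []
    for doc in requests:
        if doc in stack:
            trace.append(stack.index(doc) + 1)
            stack.remove(doc)
        else:
            trace.append(None)  # cold miss: infinite stack distance
        stack.insert(0, doc)
    return trace

# Hypothetical request stream arriving at a Web server.
reqs = ["a", "b", "a", "c", "b", "a"]
print(stack_distance_trace(reqs))  # [None, None, 2, None, 3, 3]
```

The marginal distribution of this trace characterizes temporal locality; the paper's spatial-locality analysis then treats the trace itself as a series and measures its self-similarity.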

Relevance: 60.00%

Abstract:

This paper presents a tool called Gismo (Generator of Internet Streaming Media Objects and workloads). Gismo enables the specification of a number of streaming media access characteristics, including object popularity, temporal correlation of requests, seasonal access patterns, user session durations, user interactivity times, and variable bit-rate (VBR) self-similarity and marginal distributions. The embodiment of these characteristics in Gismo enables the generation of realistic and scalable request streams for use in the benchmarking and comparative evaluation of Internet streaming media delivery techniques. To demonstrate the usefulness of Gismo, we present a case study that shows the importance of various workload characteristics in determining the effectiveness of proxy caching and server patching techniques in reducing bandwidth requirements.
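The object popularity characteristic that workload generators of this kind model is usually a Zipf-like law. A minimal sketch of drawing a synthetic request stream from such a law (function name and parameters are hypothetical, not Gismo's API):

```python
import random
from bisect import bisect_left

def zipf_request_stream(n_objects, n_requests, alpha=1.0, seed=0):
    """Synthetic request stream whose object popularity follows a Zipf-like
    law: P(rank i) proportional to 1 / i**alpha. Returns object ids,
    where id 0 is the most popular object."""
    rng = random.Random(seed)
    weights = [1.0 / (i ** alpha) for i in range(1, n_objects + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    # Inverse-transform sampling via binary search on the popularity CDF.
    return [min(bisect_left(cdf, rng.random()), n_objects - 1)
            for _ in range(n_requests)]

stream = zipf_request_stream(1000, 10000)
# Under alpha = 1 the rank-0 object alone accounts for roughly 13% of requests.
print(stream.count(0) / len(stream))
```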

Relevance: 60.00%

Abstract:

Aims. A magneto-hydrostatic model is constructed with spectropolarimetric properties close to those of solar photospheric magnetic bright points.
Methods. Results of solar radiative magneto-convection simulations are used to produce the spatial structure of the vertical component of the magnetic field. The horizontal component of magnetic field is reconstructed using the self-similarity condition, while the magneto-hydrostatic equilibrium condition is applied to the standard photospheric model with the magnetic field embedded. Partial ionisation processes are found to be necessary for reconstructing the correct temperature structure of the model.
Results. The structures obtained are in good agreement with observational data. By combining the realistic structure of the magnetic field with the temperature structure of the quiet solar photosphere, the continuum formation level above the equipartition layer can be found. Preliminary results of wave propagation through this magnetic structure are presented. The observational consequences of the oscillations are examined in continuum intensity and in the magnetically sensitive Fe I 6302 Å line.

Relevance: 60.00%

Abstract:

Fractals have found widespread application in a range of scientific fields, including ecology. This rapid growth has produced substantial new insights, but has also spawned confusion and a host of methodological problems. In this paper, we review the value of fractal methods, in particular for applications to spatial ecology, and outline potential pitfalls. Methods for measuring fractals in nature and generating fractal patterns for use in modelling are surveyed. We stress the limitations and the strengths of fractal models. Strictly speaking, no ecological pattern can be truly fractal, but fractal methods may nonetheless provide the most efficient tool available for describing and predicting ecological patterns at multiple scales.
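As an example of generating fractal patterns for use in modelling, the sketch below uses one of the simplest methods in the class this review surveys, recursive midpoint displacement, to produce a fractal-like 1-D profile (an illustration, not code from the paper):

```python
import random

def midpoint_displacement(levels, roughness=0.5, seed=0):
    """Generate a fractal-like 1-D profile by recursive midpoint displacement.
    'roughness' (a Hurst-like parameter H) controls how fast the random
    offsets shrink: the displacement scale decays by 2**(-H) per level."""
    rng = random.Random(seed)
    points = [0.0, 0.0]  # heights at the endpoints of the initial segment
    scale = 1.0
    for _ in range(levels):
        scale *= 2 ** (-roughness)
        new_points = []
        for a, b in zip(points, points[1:]):
            new_points.append(a)
            # displace the midpoint of each segment by a shrinking random offset
            new_points.append((a + b) / 2 + rng.gauss(0, scale))
        new_points.append(points[-1])
        points = new_points
    return points

profile = midpoint_displacement(8)
print(len(profile))  # 2**8 + 1 = 257 points
```

Such synthetic profiles (or their 2-D analogues) are what ecological models use as stand-ins for fractal-like habitat structure.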

Relevance: 60.00%

Abstract:

We derive the species-area relationship (SAR) expected from an assemblage of fractally distributed species. If species have truly fractal spatial distributions with different fractal dimensions, we show that the expected SAR is not the classical power-law function, as suggested recently in the literature. This analytically derived SAR has a distinctive shape that is not commonly observed in nature: upward-accelerating richness with increasing area (when plotted on log-log axes). This suggests that, in reality, most species depart from true fractal spatial structure. We demonstrate the fitting of a fractal SAR using two plant assemblages (Alaskan trees and British grasses). We show that in both cases, when modelled as fractal patterns, the modelled SAR departs from the observed SAR in the same way, in accord with the theory developed here. The challenge is to identify how species depart from fractality, either individually or within assemblages, and more importantly to suggest reasons why species distributions are not self-similar and what, if anything, this can tell us about the spatial processes involved in their generation.
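The shape of the derived SAR can be reproduced numerically. Assuming, as in the fractal model, that a species with box-counting dimension D occupies a sample cell of side a with probability proportional to a^(2-D), the expected richness is a sum of power laws, which accelerates upward on log-log axes (the coefficients and dimensions below are made up for illustration):

```python
import math, random

def expected_richness(area, species):
    """Expected species richness in a square sample of the given area.
    Each species is (c, D): it is present in a random cell of side a
    with probability ~ c * a**(2 - D), capped at 1."""
    side = math.sqrt(area)
    # Expectation of a sum of presence indicators is the sum of probabilities.
    return sum(min(1.0, c * side ** (2.0 - D)) for c, D in species)

# Hypothetical assemblage: 50 species with fractal dimensions spread over (1, 2).
random.seed(1)
assemblage = [(0.005, random.uniform(1.0, 2.0)) for _ in range(50)]

areas = [10 ** k for k in range(5)]  # sample areas from 1 to 10^4 units
sar = [expected_richness(A, assemblage) for A in areas]

# On log-log axes this curve bends upward rather than following a single
# straight power law -- the distinctive signature derived in the paper.
print([round(s, 3) for s in sar])
```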

Relevance: 60.00%

Abstract:

Doctoral thesis, Biomedical Sciences (Neurosciences), Universidade de Lisboa, Faculdade de Medicina, 2014

Relevance: 60.00%

Abstract:

This work presents the use of Fermi-hole electron density functions to enhance the role of a specific molecular region, considered responsible for the molecular reactivity, while preserving the size of the original density function. These densities are used to compute quantum molecular self-similarity measures, and are presented as an alternative to the use of isolated molecular fragments in structure-property relationship studies. The work is complemented by a practical example in which the molecular self-similarity computed from the modified densities is correlated with the energy of an isodesmic reaction.

Relevance: 60.00%

Abstract:

This thesis addresses various aspects of the calculation of quantum similarity, as well as its application to the rationalisation and prediction of drug activity. Two important advances stand out in the development of new methodologies that facilitate the calculation of quantum similarity measures. First, describing molecules with approximate PASA (Promolecular Atomic Shell Approximation) density functions made it possible to describe the electron density of the analysed molecular systems with sufficient precision, while substantially reducing the computation time of the similarity measures. Second, the development of molecular superposition techniques specific to quantum similarity measures solved the problem of aligning the compared compounds in space. Refining these new procedures and the mathematical algorithms associated with quantum molecular similarity measures has been essential for progress in several disciplines of computational chemistry, above all those concerned with quantitative analyses relating molecular structures to their biological activities, known as QSAR (Quantitative Structure-Activity Relationships). Precisely in the area of structure-activity relationships, two approaches based on quantum molecular similarity have been presented, arising from two different representations of the molecules. The first description considers the global electron density of the molecules; here the arrangement of the compared objects in space and their three-dimensional conformation are important, among other factors. The result is a similarity matrix containing the similarity measures for all pairs of compounds in the studied set. The second description is based on partitioning the global density of the molecules into fragments. 
Self-similarity measures are used to analyse the basic requirements of a given activity from the standpoint of quantum similarity. The process allows the detection of the molecular regions that are responsible for a high biological response. This yields a pattern of the active regions that is of evident interest for drug design purposes. In short, it has been shown that the computational simulation and manipulation of molecules in three dimensions can provide essential information in the study of the interaction between drugs and their macromolecular receptors.

Relevance: 60.00%

Abstract:

This thesis, although framed within the theory of Molecular Quantum Similarity Measures (MQSM), branches into three clearly defined areas:

- The creation of Molecular IsoDensity COntours (MIDCOs) from fitted electron densities.
- The development of a molecular superposition method, as an alternative to the maximum-similarity rule.
- Quantitative Structure-Activity Relationships (QSAR).

The objective in the MIDCO field is the application of fitted density functions, originally devised to reduce the cost of MQSM calculations, to the generation of MIDCOs. A graphical comparison is made between density functions fitted to different basis sets and densities obtained from ab initio calculations. The visual agreement between the fitted and ab initio functions over the range of density representations obtained, together with the previously computed and fully comparable similarity values, justifies the use of these fitted functions. Beyond this initial purpose, two studies complementary to the simple representation of densities were carried out: curvature analysis and the extension to macromolecules. The first verifies not only the similarity of the MIDCOs but also the coherence of their curvature behaviour, making it possible to observe inflection points in the density representation and to see graphically the regions where the density is concave or convex. This study reveals that the fitted densities behave entirely analogously to those calculated at the ab initio level. In the second part of this work the method was extended to larger molecules, of up to about 2500 atoms. Finally, part of the MEDLA philosophy is applied. 
Since the electron density decays rapidly away from the nuclei, its calculation can be omitted at large distances from them. The space is therefore partitioned, and the fitted functions of each atom are computed only within a small region surrounding that atom. This procedure reduces the computation time, and the process becomes linear in the number of atoms in the molecule treated. The part devoted to molecular superposition concerns the creation of an algorithm, and its implementation as a program, named the Topo-Geometrical Superposition Algorithm (TGSA), intended to provide alignments that agree with chemical intuition. The result is a computer program, coded in Fortran 90, which aligns molecules pairwise considering only atomic numbers and interatomic distances. The complete absence of theoretical parameters yields a general molecular superposition method that provides an intuitive alignment and, just as importantly, does so quickly and with little user intervention. TGSA has mainly been used to compute similarities for later use in QSAR, which in general do not coincide with the values that would be obtained from the maximum-similarity rule, especially when heavy atoms are involved. Finally, in the last part, devoted to quantum similarity in the QSAR framework, three different aspects are addressed:

- Use of similarity matrices. The similarity matrix, computed from the pairwise similarities within a set of molecules, is suitably treated and then used as a source of molecular descriptors for QSAR studies. In this context, several correlation studies of pharmacological and toxicological interest, as well as of various physical properties, have been performed.
- Application of the electron-electron interaction energy, treated as a form of self-similarity. This modest contribution consists of taking the value of this quantity and, by analogy with the notation of quantum molecular self-similarity, regarding it as a particular case of that measure. The interaction energy is easily obtained from quantum-chemical software and is ideal for a first preliminary correlation study in which it serves as the only descriptor.
- Calculation of self-similarities in which the density has been modified to enhance the role of a substituent. Previous work with fragment densities, despite giving very good results, lacks a certain conceptual rigour in isolating a fragment, supposedly responsible for the molecular activity, from the rest of the molecular structure, even though the densities associated with that fragment already differ because they belong to skeletons with different substituents. A procedure that fills the gap left by simply separating the fragment, thus considering the whole molecule (computing its self-similarity) while avoiding unwanted self-similarity values caused by heavy atoms, is the use of Fermi-hole densities defined around the fragment of interest. This modification concentrates the density mostly in the region of interest, yet still yields a density function that behaves mathematically like the regular electron density and can therefore be incorporated into the molecular similarity framework. Self-similarities computed with this methodology have led to good correlations for substituted aromatic acids, providing an explanation of their behaviour.

From another standpoint, conceptual contributions have also been made. A new similarity measure, based on kinetic energy, has been implemented: it takes the recently developed kinetic-energy density function which, behaving mathematically like the regular electron density, can be incorporated into the similarity framework. Satisfactory QSAR models have been obtained with this measure for several molecular sets. Within the treatment of similarity matrices, the so-called stochastic transformation has been implemented as an alternative to the use of the Carbó index. This transformation of the similarity matrix yields a new non-symmetric matrix, which can subsequently be processed to build QSAR models.
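Two of the quantities discussed here, the Carbó similarity index and the stochastic transformation of the similarity matrix, can be sketched on a toy 1-D "promolecular" density built from unit Gaussians (a deliberate simplification; real MQSM calculations use 3-D fitted densities):

```python
import math

def gaussian_density(grid, centers):
    """Crude 1-D stand-in for a promolecular electron density:
    a sum of unit-width Gaussians placed at the atomic positions."""
    return [sum(math.exp(-(x - c) ** 2) for c in centers) for x in grid]

def overlap(rho_a, rho_b, dx):
    """Overlap similarity measure Z_AB = integral of rho_A * rho_B."""
    return sum(a * b for a, b in zip(rho_a, rho_b)) * dx

dx = 0.05
grid = [i * dx - 10.0 for i in range(401)]

# Three toy "molecules" given by hypothetical atomic positions.
mols = [[-1.0, 1.0], [-1.1, 1.1], [0.0, 3.0]]
rhos = [gaussian_density(grid, m) for m in mols]
n = len(mols)

Z = [[overlap(rhos[i], rhos[j], dx) for j in range(n)] for i in range(n)]

# Carbo index: C_AB = Z_AB / sqrt(Z_AA * Z_BB); equals 1 for identical densities.
C = [[Z[i][j] / math.sqrt(Z[i][i] * Z[j][j]) for j in range(n)] for i in range(n)]

# Stochastic transformation: row-normalise Z so each row sums to 1, yielding
# the non-symmetric matrix used as an alternative source of QSAR descriptors.
S = [[z / sum(row) for z in row] for row in Z]

print(round(C[0][1], 3), round(C[0][2], 3))
```

As expected, the near-identical first two "molecules" score close to 1, while the third scores lower.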

Relevance: 60.00%

Abstract:

Long distance dispersal (LDD) plays an important role in many population processes like colonization, range expansion, and epidemics. LDD of small particles like fungal spores is often a result of turbulent wind dispersal and is best described by functions with power-law behavior in the tails ("fat tailed"). The influence of fat-tailed LDD on population genetic structure is reported in this article. In computer simulations, the population structure generated by power-law dispersal with exponents in the range of -2 to -1, in distinct contrast to that generated by exponential dispersal, has a fractal structure. As the power-law exponent becomes smaller, the distribution of individual genotypes becomes more self-similar at different scales. Common statistics like G_ST are not well suited to summarizing differences between the population genetic structures. Instead, fractal and self-similarity statistics demonstrated differences in structure arising from fat-tailed and exponential dispersal. When dispersal is fat tailed, a log-log plot of the Simpson index against distance between subpopulations has an approximately constant gradient over a large range of spatial scales. The fractal dimension D_2 is linearly inversely related to the power-law exponent, with a slope of approximately -2. In a large simulation arena, fat-tailed LDD allows colonization of the entire space by all genotypes, whereas exponentially bounded dispersal eventually confines all descendants of a single clonal lineage to a relatively small area.
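The contrast between fat-tailed and exponentially bounded dispersal can be illustrated by inverse-transform sampling from a power-law kernel; b = 1.5 below corresponds to an exponent of -1.5, inside the -2 to -1 range studied (parameter values are illustrative, not those of the reported simulations):

```python
import random

def powerlaw_dispersal(rng, d_min=1.0, b=1.5):
    """Sample a dispersal distance from the fat-tailed kernel p(d) ~ d**(-b),
    d >= d_min, by inverse transform: d = d_min * (1 - u)**(-1 / (b - 1)).
    Valid for b > 1; for 1 < b < 2 the mean dispersal distance is infinite."""
    u = rng.random()
    return d_min * (1.0 - u) ** (-1.0 / (b - 1.0))

def exponential_dispersal(rng, mean=1.0):
    """Thin-tailed reference kernel: exponentially bounded dispersal."""
    return rng.expovariate(1.0 / mean)

rng = random.Random(42)
fat = [powerlaw_dispersal(rng) for _ in range(10000)]
thin = [exponential_dispersal(rng) for _ in range(10000)]

# The fat-tailed kernel produces rare, enormous jumps (the LDD events that
# let every genotype reach the whole arena), while exponential dispersal
# stays within a few mean dispersal lengths.
print(max(fat), max(thin))
```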

Relevance: 60.00%

Abstract:

This article examines selected methodological insights that complexity theory might provide for planning. In particular, it focuses on the concept of fractals and, through this concept, how ways of organising policy domains across scales might have particular causal impacts. The aim of this article is therefore twofold: (a) to position complexity theory within social science through a ‘generalised discourse’, thereby orienting it to particular ontological and epistemological biases and (b) to reintroduce a comparatively new concept – fractals – from complexity theory in a way that is consistent with the ontological and epistemological biases argued for, and expand on the contribution that this might make to planning. Complexity theory is theoretically positioned as a neo-systems theory with reasons elaborated. Fractal systems from complexity theory are systems that exhibit self-similarity across scales. This concept (as previously introduced by the author in ‘Fractal spaces in planning and governance’) is further developed in this article to (a) illustrate the ontological and epistemological claims for complexity theory, and to (b) draw attention to ways of organising policy systems across scales to emphasise certain characteristics of the systems – certain distinctions. These distinctions when repeated across scales reinforce associated processes/values/end goals resulting in particular policy outcomes. Finally, empirical insights from two case studies in two different policy domains are presented and compared to illustrate the workings of fractals in planning practice.

Relevance: 60.00%

Abstract:

The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in the far-infrared and visible domains. The experiments show that the feature families intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histograms of oriented gradients (HOG), computed from the FIR and visible domains, are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.
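Of the feature families compared, intensity self-similarity is the least standard: the idea is to describe a patch by how similar its sub-regions are to one another, rather than by the raw intensities. A deliberately simplified sketch of that idea (real ISS features use richer block statistics than plain cell means):

```python
def cell_means(patch, cell):
    """Split a 2-D intensity patch into cell x cell blocks and return
    each block's mean intensity, in row-major order."""
    h, w = len(patch), len(patch[0])
    means = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            vals = [patch[r + i][c + j] for i in range(cell) for j in range(cell)]
            means.append(sum(vals) / len(vals))
    return means

def iss_descriptor(patch, cell=4):
    """Toy intensity self-similarity descriptor: the absolute differences
    between all pairs of cell mean intensities. The descriptor depends on
    internal contrast, not absolute brightness, which is what makes the
    idea transferable between visible and FIR imagery."""
    m = cell_means(patch, cell)
    return [abs(m[i] - m[j]) for i in range(len(m)) for j in range(i + 1, len(m))]

# Hypothetical 8x8 patch: left half dark, right half bright.
patch = [[0] * 4 + [255] * 4 for _ in range(8)]
print(iss_descriptor(patch))  # [255.0, 0.0, 255.0, 255.0, 0.0, 255.0]
```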