108 results for Perimetric complexity


Relevance: 20.00%

Publisher:

Abstract:

Acidity peaks in Greenland ice cores have been used as critical reference horizons for synchronizing ice core records, aiding the construction of a single Greenland Ice Core Chronology (GICC05) for the Holocene. Guided by GICC05, we examined sub-sections of three Greenland cores in the search for tephra from specific eruptions that might facilitate the linkage of ice core records, the dating of prehistoric tephras and the understanding of the eruptions. Here we report the identification of 14 horizons with tephra particles, including 11 that have not previously been reported from the North Atlantic region and that have the potential to be valuable isochrons. The positions of tephras whose major element data are consistent with ash from the Katmai AD 1912 and Öraefajökull AD 1362 eruptions confirm the annually resolved ice core chronology for the last 700 years. We provide a more refined date for the so-called “AD860B” tephra, a widespread isochron found across NW Europe, and present new evidence relating to the 17th century BC Thera/Aniakchak debate that shows N. American eruptions likely contributed to the acid signals at this time. Our results emphasize the variable spatial and temporal distributions of volcanic products in Greenland ice that call for a more cautious approach in the attribution of acid signals to specific eruptive events.

Relevance: 20.00%

Publisher:

Abstract:

We investigate the computational complexity of testing dominance and consistency in CP-nets. Previously, the complexity of dominance has been determined for restricted classes in which the dependency graph of the CP-net is acyclic. However, there are preferences of interest that define cyclic dependency graphs; these are modeled with general CP-nets. In our main results, we show that both dominance and consistency for general CP-nets are PSPACE-complete. We then consider the concepts of strong dominance, dominance equivalence and dominance incomparability, and several notions of optimality, and identify the complexity of the corresponding decision problems. The reductions used in the proofs are from STRIPS planning, and thus reinforce the earlier established connections between both areas.
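
To make the dominance notion concrete, the sketch below encodes a two-variable binary CP-net and tests dominance by a breadth-first search over improving flips. This is the textbook formulation of dominance, not the PSPACE-hardness construction of the paper, and the variable names and preference tables are invented for illustration.

```python
from collections import deque

# A tiny binary CP-net: each variable has parents and a conditional preference
# table (CPT) mapping a parent assignment to an ordering of values (best first).
# The variables and tables here are illustrative, not taken from the paper.
CPNET = {
    "a": {"parents": (), "cpt": {(): (1, 0)}},                      # a=1 > a=0 unconditionally
    "b": {"parents": ("a",), "cpt": {(1,): (1, 0), (0,): (0, 1)}},  # b prefers to match a
}

def improving_flips(outcome):
    """Yield every outcome reachable from `outcome` by one improving flip."""
    for var, spec in CPNET.items():
        key = tuple(outcome[p] for p in spec["parents"])
        order = spec["cpt"][key]                   # preference order, best value first
        for better in order[:order.index(outcome[var])]:
            yield dict(outcome, **{var: better})

def dominates(better, worse):
    """Naive dominance test: search for an improving-flip sequence from `worse` to `better`."""
    if better == worse:
        return False                               # dominance is strict
    goal = tuple(sorted(better.items()))
    seen = {tuple(sorted(worse.items()))}
    queue = deque([worse])
    while queue:
        for nxt in improving_flips(queue.popleft()):
            key = tuple(sorted(nxt.items()))
            if key == goal:
                return True
            if key not in seen:
                seen.add(key)
                queue.append(nxt)
    return False

print(dominates({"a": 1, "b": 1}, {"a": 0, "b": 0}))  # True: (0,0) -> (1,0) -> (1,1)
```

The search is only practical for toy instances: the number of outcomes grows exponentially with the number of variables, and improving-flip sequences can themselves become very long, which is in keeping with the hardness results reported above.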

Relevance: 20.00%

Publisher:

Abstract:

Biodiversity may be seen as a scientific measure of the complexity of a biological system, implying an information basis. Complexity cannot be directly valued, so economists have tried to define the services it provides, though often just valuing the services of 'key' species. Here we provide a new definition of biodiversity as a measure of functional information, arguing that complexity embodies meaningful information as Gregory Bateson defined it. We argue that functional information content (FIC) is the potentially valuable component of total (algorithmic) information content (AIC), as it alone determines biological fitness and supports ecosystem services. Inspired by recent extensions to the Noah's Ark problem, we show how FIC/AIC can be calculated to measure the degree of substitutability within an ecological community. Establishing substitutability is an essential foundation for valuation. From it, we derive a way to rank whole communities by Indirect Use Value, through quantifying the relation between system complexity and the production rate of ecosystem services. Understanding biodiversity as information evidently serves as a practical interface between economics and ecological science.

Relevance: 20.00%

Publisher:

Abstract:

Measures of icon designs rely heavily on surveys of the perceptions of population samples. Thus, measuring the extent to which changes in the structure of an icon will alter its perceived complexity can be costly and slow. An automated system capable of producing reliable estimates of perceived complexity could reduce development costs and time. Measures of icon complexity developed by Garcia, Badre, and Stasko (1994) and McDougall, Curry, and de Bruijn (1999) were correlated with six icon properties measured using Matlab (MathWorks, 2001) software, which uses image-processing techniques to measure icon properties. The six icon properties measured were icon foreground, the number of objects in an icon, the number of holes in those objects, two calculations of icon edges, and the homogeneity of the icon structure. The strongest correlates with human judgments of perceived icon complexity (McDougall et al., 1999) were structural variability (r(s) = .65) and edge information (r(s) = .64).
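
The original measurements were made with Matlab image-processing routines that are not reproduced here. Under that caveat, the sketch below shows how roughly comparable properties (foreground proportion, object count, hole count and a crude edge count) could be extracted from a binary icon bitmap with NumPy and SciPy; the threshold and connectivity conventions are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def icon_properties(img, threshold=0.5):
    """Rough analogues of image-based icon measures for a 2-D greyscale array in [0, 1]:
    foreground proportion, object count, hole count and a simple edge count."""
    fg = img > threshold                                  # binary foreground mask

    foreground = fg.mean()                                # proportion of "on" pixels

    _, n_objects = ndimage.label(fg)                      # connected foreground components

    # Holes: background components that do not touch the image border.
    bg_labels, n_bg = ndimage.label(~fg)
    border = np.concatenate([bg_labels[0, :], bg_labels[-1, :],
                             bg_labels[:, 0], bg_labels[:, -1]])
    touching = np.setdiff1d(np.unique(border), [0])
    n_holes = n_bg - len(touching)

    # Edges: count foreground/background transitions along rows and columns.
    mask = fg.astype(np.int8)
    edges = (np.count_nonzero(np.diff(mask, axis=0)) +
             np.count_nonzero(np.diff(mask, axis=1)))

    return {"foreground": foreground, "objects": n_objects,
            "holes": n_holes, "edges": edges}

icon = np.zeros((16, 16))
icon[4:12, 4:12] = 1.0         # an 8x8 square ...
icon[7:9, 7:9] = 0.0           # ... with a 2x2 hole punched in it
print(icon_properties(icon))   # e.g. objects: 1, holes: 1
```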

Relevance: 20.00%

Publisher:

Abstract:

We define a multi-modal version of Computation Tree Logic (CTL) by extending the language with path quantifiers E_d and A_d, where d denotes one of finitely many dimensions, interpreted over Kripke structures with one total relation for each dimension. As expected, the logic is axiomatised by taking a copy of a CTL axiomatisation for each dimension. Completeness is proved by employing the completeness result for CTL to obtain a model along each dimension in turn. We also show that the logic is decidable and that its satisfiability problem is no harder than the corresponding problem for CTL. We then demonstrate how Normative Systems can be conceived as a natural interpretation of such a multi-dimensional CTL logic.
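
As a rough illustration of the semantics only (not of the axiomatisation or the completeness argument), the sketch below evaluates the existential modalities E_d X and E_d F over a toy Kripke structure with one total successor relation per dimension; the dimension names, states and labelling are invented.

```python
# A toy Kripke structure with one total successor relation per dimension.
# States, dimensions and the labelling are invented for illustration.
STATES = {"s0", "s1", "s2"}
REL = {
    "d1": {"s0": {"s1"}, "s1": {"s1"}, "s2": {"s0"}},   # total: every state has a d1-successor
    "d2": {"s0": {"s2"}, "s1": {"s0"}, "s2": {"s2"}},   # total: every state has a d2-successor
}
LABEL = {"s0": set(), "s1": {"p"}, "s2": {"q"}}

def ex(d, phi_states):
    """States satisfying E_d X phi: some d-successor satisfies phi."""
    return {s for s in STATES if REL[d][s] & phi_states}

def ef(d, phi_states):
    """States satisfying E_d F phi: least fixpoint of phi OR E_d X (E_d F phi)."""
    result = set(phi_states)
    while True:
        grown = result | ex(d, result)
        if grown == result:
            return result
        result = grown

p_states = {s for s in STATES if "p" in LABEL[s]}
print(sorted(ex("d1", p_states)))   # ['s0', 's1']: both have a d1-successor satisfying p
print(sorted(ef("d2", p_states)))   # ['s1']: only s1 reaches a p-state along dimension d2
```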

Relevance: 20.00%

Publisher:

Abstract:

The regulation of the small GTPases leading to their membrane localization has long been attributed to processing of their C-terminal CAAX box. As deregulation of many of these GTPases has been implicated in cancer and other disorders, prenylation and methylation of this CAAX box have been studied in depth as a possibility for drug targeting, but unfortunately, to date no drug has proved clinically beneficial. However, these GTPases also undergo other modifications that may be important for their regulation. Ubiquitination has long been demonstrated to regulate the fate of numerous cellular proteins, and recently it has become apparent that many GTPases, along with their GAPs, GEFs and GDIs, undergo ubiquitination leading to a variety of fates such as re-localization or degradation. In this review we focus on the recent literature demonstrating that the regulation of small GTPases by ubiquitination, either directly or indirectly, plays a considerable role in controlling their function and that targeting these modifications could be important for disease treatment.

Relevance: 20.00%

Publisher:

Abstract:

PURPOSE: To evaluate the changes in the Visual Field Index (VFI) in eyes with perimetric glaucomatous progression, and to compare these against stable glaucoma patients.

PATIENTS AND METHODS: Consecutive patients with open angle glaucoma with a minimum of 6 reliable visual fields and 2 years of follow-up were identified. Perimetric progression was assessed by 4 masked glaucoma experts from different units, and classified into 3 categories: "definite progression," "suspected progression," or "no progression." This was compared with the Glaucoma Progression Analysis (GPA) II and VFI linear regression analysis, where progression was defined as a negative slope with significance of <5%.

RESULTS: Three hundred ninety-seven visual fields from 51 eyes of 39 patients were assessed. The mean number of visual fields was 7.8 (SD 1.1) per eye, and the mean follow-up duration was 63.7 (SD 13.4) months. The mean VFI linear regression slope showed an overall statistically significant difference (P<0.001, analysis of variance) for each category of progression. Using expert consensus opinion as the reference standard, both VFI analysis and GPA II had high specificity (0.93 and 0.90, respectively), but relatively low sensitivity (0.45 and 0.41, respectively).

CONCLUSIONS: The mean VFI regression slope in our cohort of eyes without perimetric progression showed a statistically significant difference compared with those with suspected and definite progression. VFI analysis and GPA II both had similarly high specificity but low sensitivity when compared with expert consensus opinion.
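
A minimal sketch of the VFI linear-regression criterion described above (progression flagged when the fitted slope is negative with significance below 5%) is given below. The follow-up times and VFI values are invented, and the statistical model used by the instrument software may differ.

```python
import numpy as np
from scipy import stats

def vfi_progression(years, vfi):
    """Flag progression as a significantly negative VFI trend (slope < 0, p < 0.05),
    mirroring the criterion described in the abstract."""
    slope, intercept, r, p, stderr = stats.linregress(years, vfi)
    return slope, p, (slope < 0 and p < 0.05)

# Illustrative series: VFI (%) over roughly five years of follow-up (invented numbers).
years = np.array([0.0, 0.5, 1.1, 2.0, 2.9, 3.8, 4.6, 5.2])
vfi   = np.array([96, 95, 95, 93, 92, 91, 90, 89])

slope, p, progressing = vfi_progression(years, vfi)
print(f"slope = {slope:.2f} %/year, p = {p:.4f}, progression = {progressing}")
```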

Relevance: 20.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade-off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.

Relevance: 20.00%

Publisher:

Abstract:

High-dimensional gene expression data provide a rich source of information because they capture the expression level of genes in dynamic states that reflect the biological functioning of a cell. For this reason, such data are suitable to reveal systems-related properties inside a cell, e.g., in order to elucidate molecular mechanisms of complex diseases like breast or prostate cancer. However, this depends not only on the sample size and the correlation structure of a data set, but also on the statistical hypotheses tested. Many different approaches have been developed over the years to analyze gene expression data in order to (I) identify changes in single genes, (II) identify changes in gene sets or pathways, and (III) identify changes in the correlation structure in pathways. In this paper, we review statistical methods for all three types of approaches, including subtypes, in the context of cancer data, provide links to software implementations and tools, and also address the general problem of multiple hypothesis testing. Further, we provide recommendations for the selection of such analysis methods.
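
As a concrete example of approach (I) combined with multiple-testing control, the sketch below runs per-gene two-sample t-tests on a simulated expression matrix and applies a Benjamini-Hochberg adjustment. The data, group sizes and FDR level are assumptions; the review itself covers many more methods than this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy expression matrix: 200 genes x (10 tumour + 10 normal) samples, simulated.
n_genes, n_per_group = 200, 10
tumour = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
normal = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
tumour[:20] += 2.0          # the first 20 genes are truly differentially expressed

# (I) per-gene two-sample t-tests
_, pvals = stats.ttest_ind(tumour, normal, axis=1)

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BH FDR control."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

hits = benjamini_hochberg(pvals)
print(f"{hits.sum()} genes called significant at FDR 0.05")
```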

Relevance: 20.00%

Publisher:

Abstract:

The Marine Strategy Framework Directive (MSFD) requires that European Union Member States achieve "Good Environmental Status" (GES) in respect of 11 Descriptors of the marine environment by 2020. Of those, Descriptor 4, which focuses on marine food webs, is perhaps the most challenging to implement since the identification of simple indicators able to assess the health of highly dynamic and complex interactions is difficult. Here, we present the proposed food web criteria/indicators and analyse their theoretical background and applicability in order to highlight both the current knowledge gaps and the difficulties associated with the assessment of GES. We conclude that the existing suite of indicators gives variable focus to the three important food web properties: structure, functioning and dynamics, and more emphasis should be given to the latter two and the general principles that relate these three properties. The development of food web indicators should be directed towards more integrative and process-based indicators with an emphasis on their responsiveness to multiple anthropogenic pressures.

Relevance: 20.00%

Publisher:

Abstract:

Polar codes are one of the most recent advancements in coding theory and they have attracted significant interest. While they are provably capacity-achieving over various channels, they have seen limited practical applications. Unfortunately, the successive nature of successive cancellation-based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem which minimizes the complexity under a suitably defined mutual-information-based performance constraint. Moreover, a low-complexity greedy algorithm is proposed in order to solve the optimization problem efficiently for very large code lengths.
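
The abstract does not spell out the optimization problem, so the sketch below only illustrates a standard building block such constructions rest on: tracking bit-channel Bhattacharyya parameters over a binary erasure channel (where the bit-channel mutual information is simply 1 - Z) and greedily keeping the most reliable positions. The design channel, erasure probability and code parameters are assumptions, not the paper's method.

```python
import numpy as np

def bhattacharyya_bec(n_levels, erasure_prob):
    """Bhattacharyya parameters of the 2**n_levels polarised bit channels over a BEC,
    via the standard recursion Z -> (2Z - Z**2, Z**2)."""
    z = np.array([erasure_prob])
    for _ in range(n_levels):
        z = np.concatenate([2 * z - z**2, z**2])
    return z

def greedy_information_set(n_levels, k, erasure_prob):
    """Greedily keep the k most reliable bit positions (smallest Z, i.e. highest
    bit-channel mutual information 1 - Z on the BEC). A stand-in for the
    optimisation-based construction described in the abstract."""
    z = bhattacharyya_bec(n_levels, erasure_prob)
    return np.sort(np.argsort(z)[:k])

# Example: an (N=256, K=128) code designed for a BEC with erasure probability 0.4.
info_set = greedy_information_set(n_levels=8, k=128, erasure_prob=0.4)
print(len(info_set), info_set[:8])
```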

Relevance: 20.00%

Publisher:

Abstract:

In this paper, a low complexity system for spectral analysis of heart rate variability (HRV) is presented. The main idea of the proposed approach is the implementation of the Fast-Lomb periodogram, a ubiquitous tool in spectral analysis, using a wavelet-based Fast Fourier transform. Interestingly, we show that the proposed approach enables the classification of processed data into more and less significant, based on their contribution to output quality. Based on such a classification, a percentage of the less significant data is pruned, leading to a significant reduction of algorithmic complexity with minimal quality degradation. Indeed, our results indicate that the proposed system can achieve up to a 45% reduction in the number of computations with only 4.9% average error in the output quality compared to a conventional FFT-based HRV system.
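
The wavelet-based low-complexity implementation is specific to the paper and is not reproduced here. As a point of reference, the sketch below computes a conventional Lomb periodogram of a simulated, unevenly sampled RR-interval series with scipy.signal.lombscargle and sums the power over the usual LF and HF bands; the RR series and band limits are illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

# Simulated RR-interval series (seconds): the uneven beat times are what make
# the Lomb periodogram attractive for HRV analysis.
rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)) + 0.02 * rng.standard_normal(300)
beat_times = np.cumsum(rr)                   # unevenly spaced sample times
rr_detrended = rr - rr.mean()                # lombscargle expects (roughly) zero-mean input

# Evaluate the periodogram over the standard HRV bands (LF: 0.04-0.15 Hz, HF: 0.15-0.4 Hz).
freqs_hz = np.linspace(0.01, 0.5, 500)
angular = 2 * np.pi * freqs_hz               # lombscargle takes angular frequencies
power = lombscargle(beat_times, rr_detrended, angular)

lf = power[(freqs_hz >= 0.04) & (freqs_hz < 0.15)].sum()
hf = power[(freqs_hz >= 0.15) & (freqs_hz < 0.40)].sum()
print(f"LF/HF ratio = {lf / hf:.2f}")
```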