Abstract:
We report on a search for the production of the Higgs boson decaying to two bottom quarks accompanied by two additional quarks. The data sample used corresponds to an integrated luminosity of approximately 4 fb⁻¹ of pp̅ collisions at √s = 1.96 TeV recorded by the CDF II experiment. This search includes twice the integrated luminosity of the previously published result, uses analysis techniques to distinguish jets originating from light-flavor quarks from those originating from gluon radiation, and adds sensitivity to a Higgs boson produced by vector boson fusion. We find no evidence of the Higgs boson and place limits on the Higgs boson production cross section for Higgs boson masses between 100 GeV/c² and 150 GeV/c² at the 95% confidence level. For a Higgs boson mass of 120 GeV/c², the observed (expected) limit is 10.5 (20.0) times the predicted standard model cross section.
Abstract:
The Iberian Peninsula is recognized as an important refugial area for species survival and diversification during the climatic cycles of the Quaternary. Recent phylogeographic studies have revealed Iberia as a complex of multiple refugia. However, most of these studies have focused either on species with narrow distributions within the region or species groups that, although widely distributed, generally have a genetic structure that relates to pre-Quaternary cladogenetic events. In this study we undertake a detailed phylogeographic analysis of the lizard species Lacerta lepida, whose distribution encompasses the entire Iberian Peninsula. We attempt to identify refugial areas, recolonization routes, and zones of secondary contact, and to date demographic events within this species. Results support the existence of 6 evolutionary lineages (phylogroups) with a strong association between genetic variation and geography, suggesting a history of allopatric divergence in different refugia. Diversification within phylogroups is concordant with the onset of the Pleistocene climatic oscillations. The southern regions of several phylogroups show a high incidence of ancestral alleles, in contrast with a high incidence of recently derived alleles in northern regions. All phylogroups show signs of recent demographic and spatial expansions. We have further identified several zones of secondary contact, with divergent mitochondrial haplotypes occurring in narrow zones of sympatry. The concordant patterns of spatial and demographic expansions detected within phylogroups, together with the high incidence of ancestral haplotypes in southern regions of several phylogroups, suggest a pattern of contraction of populations into southern refugia during adverse climatic conditions from which subsequent northern expansions occurred. This study supports the emergent pattern of multiple refugia within Iberia but adds to it by identifying a pattern of refugia coincident with the southern distribution limits of individual evolutionary lineages. These areas are important for long-term species persistence and are therefore important areas for conservation.
Abstract:
This work studies decision problems from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-colouring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite—it turns out that any locally checkable proof requires Ω(log n) bits per node. In this work we classify graph problems according to their local proof complexity, i.e., how many bits per node are needed in a locally checkable proof. We establish tight or near-tight results for classical graph properties such as the chromatic number. We show that the proof complexities form a natural hierarchy of complexity classes: for many classical graph problems, the proof complexity is either 0, Θ(1), Θ(log n), or poly(n) bits per node. Among the most difficult graph properties are symmetric graphs, which require Ω(n²) bits per node, and non-3-colourable graphs, which require Ω(n²/log n) bits per node—any pure graph property admits a trivial proof of size O(n²).
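As a rough illustration of the bipartiteness example above (a sketch, not code from the paper; the function name and graph representation are made up for this illustration), a constant-time local verifier only needs each node to compare its own 1-bit colour with those of its neighbours:

```python
# Illustrative sketch: a local verifier for bipartiteness. The certificate
# assigns each node a single bit (a 2-colouring). Every node accepts iff all
# of its neighbours carry the opposite bit, so on a bipartite graph with a
# proper 2-colouring all nodes accept, while on a non-bipartite graph every
# possible certificate leaves some monochromatic edge and some node rejects.

def local_verify_bipartite(adj, colour):
    """adj: dict node -> iterable of neighbours; colour: dict node -> 0 or 1.
    Returns the per-node accept/reject decisions of the local verifier."""
    decisions = {}
    for v, neighbours in adj.items():
        ok = colour[v] in (0, 1) and all(colour[u] != colour[v] for u in neighbours)
        decisions[v] = ok  # each node inspects only its radius-1 neighbourhood
    return decisions

# Example: a 4-cycle (bipartite) with a valid 2-colouring; every node accepts.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
colour = {0: 0, 1: 1, 2: 0, 3: 1}
assert all(local_verify_bipartite(adj, colour).values())
```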
Abstract:
We study the following problem: given a geometric graph G and an integer k, determine whether G has a planar spanning subgraph (with the original embedding and straight-line edges) such that all nodes have degree at least k. If G is a unit disk graph, the problem is trivial to solve for k = 1. We show that even the slightest deviation from the trivial case (e.g., quasi unit disk graphs or k = 2) leads to NP-hard problems.
Abstract:
The distinction between a priori and a posteriori knowledge has been the subject of an enormous amount of discussion, but the literature is biased against recognizing the intimate relationship between these forms of knowledge. For instance, it seems to be almost impossible to find a sample of pure a priori or a posteriori knowledge. In this paper it will be suggested that distinguishing between a priori and a posteriori knowledge is more problematic than is often assumed, and that a priori and a posteriori resources are in fact used in parallel. We will define this relationship between a priori and a posteriori knowledge as the bootstrapping relationship. As we will see, this relationship gives us reason to seek an altogether novel definition of a priori and a posteriori knowledge. Specifically, we will have to analyse the relationship between a priori knowledge and a priori reasoning, and it will be suggested that the latter serves as a more promising starting point for the analysis of aprioricity. We will also analyse a number of examples from the natural sciences and consider the role of a priori reasoning in these examples. The focus of this paper is the analysis of the concepts of a priori and a posteriori knowledge rather than the epistemic domain of a posteriori and a priori justification.
Abstract:
To test the reliability of the radiocarbon method for determining root age, we analyzed fine roots (originating from the years 1985 to 1993) from ingrowth cores with known maximum root age (1 to 6 years old). For this purpose, three Scots pine (Pinus sylvestris L.) stands were selected from boreal forests in Finland. We analyzed root ¹⁴C age by the radiocarbon method and compared it with the above-mentioned known maximum fine root age. In general, ages determined by the two methods (root ¹⁴C age and ingrowth core root maximum age) were in agreement with each other for roots of small diameter (<0.5 mm). By contrast, in most of the samples of fine roots of larger diameter (1.5-2 mm), the ¹⁴C age of root samples from 1987-1989 exceeded the ingrowth core root maximum age by 1-10 years. This shows that these roots had received a large amount of older stored carbon from unknown sources in addition to atmospheric CO₂ fixed directly by photosynthesis. We conclude that the ¹⁴C signature of fine roots, especially those of larger diameter, may not always be indicative of root age, and that further studies are needed concerning the extent of possible root uptake of older carbon and its residence time in roots.
Keywords: fine root age, Pinus sylvestris, radiocarbon, root carbon, ingrowth cores, tree ring
Abstract:
The Grad–Shafranov reconstruction is a method of estimating the orientation (invariant axis) and cross section of magnetic flux ropes using data from a single spacecraft. It can be applied to various magnetic structures such as magnetic clouds (MCs) and flux ropes embedded in the magnetopause and in the solar wind. We develop a number of improvements of this technique and show some examples of the reconstruction procedure for interplanetary coronal mass ejections (ICMEs) observed at 1 AU by the STEREO, Wind, and ACE spacecraft during the solar minimum following Solar Cycle 23. The analysis is conducted not only for ideal, localized ICME events but also for non-trivial cases of magnetic clouds in the fast solar wind. The Grad–Shafranov reconstruction gives reasonable results for the sample events, although it possesses certain limitations, which need to be taken into account when interpreting the model results.
Abstract:
We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve accuracy at least as good as that of the best compared classifiers, while using significantly fewer computational resources.
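The abstract does not spell out the f̂CLL formula, so the sketch below only illustrates what decomposability over the network structure means, using the plain log-likelihood as a stand-in local score; the function names, toy data, and choice of local term are assumptions for illustration, not the paper's criterion:

```python
import math
from collections import Counter

def family_log_likelihood(data, child, parents):
    """Local log-likelihood term for one family (child, parents), computed from
    counts with maximum-likelihood parameters. A stand-in for a decomposable
    local score; the actual f-hat-CLL local term differs (see the paper)."""
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    return sum(n * math.log(n / parent_counts[pa]) for (pa, _), n in joint.items())

def decomposable_score(data, structure):
    """structure: dict child -> list of parents. Decomposability means the total
    score is a sum of per-family terms, so a local change to the structure only
    requires recomputing the affected family's term."""
    return sum(family_log_likelihood(data, x, pa) for x, pa in structure.items())

# Toy usage: rows are dicts of discrete attribute values, 'C' is the class.
data = [{'C': 1, 'A': 0, 'B': 1}, {'C': 0, 'A': 0, 'B': 0}, {'C': 1, 'A': 1, 'B': 1}]
naive_bayes = {'C': [], 'A': ['C'], 'B': ['C']}  # class as sole parent of each attribute
print(decomposable_score(data, naive_bayes))
```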
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e., decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by a superedge, and that the weight of the edge is approximated by the weight of that superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
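Below is a minimal sketch of the decompression rule described above and one plausible way to measure the approximation error; the names, the squared-error measure, and the restriction to the original edge set are illustrative assumptions, not the paper's definitions:

```python
# Sketch of decompressing a compressed weighted graph: two original nodes are
# connected iff their supernodes are connected by a superedge, and the edge
# weight is approximated by the superedge weight. For simplicity, the error is
# measured only over the original edges (missing reconstructed edges count as 0).

def decompressed_weight(u, v, supernode_of, superedge_weight):
    """supernode_of: dict node -> supernode id;
    superedge_weight: dict frozenset({s1, s2}) -> weight. None means 'no edge'."""
    key = frozenset((supernode_of[u], supernode_of[v]))
    return superedge_weight.get(key)

def approximation_error(original_edges, supernode_of, superedge_weight):
    """Sum of squared differences between original and reconstructed weights."""
    err = 0.0
    for (u, v), w in original_edges.items():
        w_hat = decompressed_weight(u, v, supernode_of, superedge_weight) or 0.0
        err += (w - w_hat) ** 2
    return err

# Toy example: nodes a, b merged into supernode S1; c, d into S2.
original_edges = {('a', 'c'): 1.0, ('b', 'c'): 0.8, ('b', 'd'): 1.2}
supernode_of = {'a': 'S1', 'b': 'S1', 'c': 'S2', 'd': 'S2'}
superedge_weight = {frozenset(('S1', 'S2')): 1.0}  # one superedge summarizes all three edges
print(approximation_error(original_edges, supernode_of, superedge_weight))  # ~0.08
```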
Abstract:
This paper describes the cost-benefit analysis of digital long-term preservation (LTP) that was carried out in the context of the Finnish National Digital Library Project (NDL) in 2010. The analysis was based on the assumption that as many as 200 archives, libraries, and museums will share an LTP system. The term ‘system’ shall be understood as encompassing not only information technology, but also human resources, organizational structures, policies and funding mechanisms. The cost analysis shows that an LTP system will incur, over the first 12 years, cumulative costs of €42 million, i.e. an average of €3.5 million per annum. Human resources and investments in information technology are the major cost factors. After the initial stages, the analysis predicts annual costs of circa €4 million. The analysis compared scenarios with and without a shared LTP system. The results indicate that a shared system will have remarkable benefits. At the development and implementation stages, a shared system shows an advantage of €30 million against the alternative scenario consisting of five independent LTP solutions. During the later stages, the advantage is estimated at €10 million per annum. The cumulative cost benefit over the first 12 years would amount to circa €100 million.
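As a rough arithmetic check of the figures quoted above (purely illustrative; the split of the 12-year period into development and operational years is an assumption, not part of the study):

```python
# Consistency check of the quoted cost-benefit figures (illustrative only).
total_cost = 42.0             # EUR million over the first 12 years
print(total_cost / 12)        # 3.5 -> matches the stated average of EUR 3.5 million per annum

development_advantage = 30.0  # EUR million vs. five independent LTP solutions
annual_advantage = 10.0       # EUR million per annum during the later stages
operational_years = 7         # assumed share of the 12-year period (not stated in the abstract)
print(development_advantage + annual_advantage * operational_years)  # 100.0 -> circa EUR 100 million
```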