951 results for Upper bound estimate


Relevance:

80.00%

Abstract:

To calibrate the in situ ¹⁰Be production rate, we collected surface samples from nine large granitic boulders within the deposits of a rock avalanche that occurred in AD 1717 in the upper Ferret Valley, Mont Blanc Massif, Italy. The ¹⁰Be concentrations were extremely low but were successfully measured within 10% analytical uncertainty or less; they vary from 4829 ± 448 to 5917 ± 476 at g⁻¹. Using the historical age as the exposure time, we calculated the local and sea-level/high-latitude (i.e. ≥60°) cosmogenic ¹⁰Be spallogenic production rates. Depending on the scaling scheme, these vary between 4.60 ± 0.38 and 5.26 ± 0.43 at g⁻¹ a⁻¹. Although they correlate well with global values, our production rates are clearly higher than those from more recent calibration sites. We conclude that our ¹⁰Be production rate is a mean and an upper bound for production rates in the Massif region over the past 300 years. This rate is probably influenced by inheritance and will yield inaccurate (i.e. too young) exposure ages when applied to surface-exposure studies in the area. Other independently dated rock-avalanche deposits in the region that are approximately 10³ years old could be considered as possible calibration sites.
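As a rough illustration of the arithmetic behind such a calibration, the sketch below inverts the standard exposure equation N = (P/λ)(1 − e^(−λt)) for the production rate, assuming negligible erosion and inheritance and an exposure time of roughly 295 years (AD 1717 to sampling). Only the half-life and the example concentrations come from the abstract; the result is the local rate before any scaling-scheme correction.

```python
import math

# 10Be decay constant from a half-life of ~1.387 Myr.
LAMBDA_BE10 = math.log(2) / 1.387e6  # a^-1

def local_production_rate(conc, t_exposure):
    """Local spallogenic production rate (at g^-1 a^-1) from a measured
    10Be concentration (at g^-1) and a known exposure time (a), assuming
    no erosion and no inheritance: invert N = (P/lambda)(1 - exp(-lambda*t)).
    For t ~ 3e2 a the decay correction is negligible (P ~ N/t)."""
    return conc * LAMBDA_BE10 / (1.0 - math.exp(-LAMBDA_BE10 * t_exposure))

# Concentration range reported in the abstract, ~295 a of exposure:
for conc in (4829.0, 5917.0):
    p = local_production_rate(conc, 295.0)
    print(f"{conc:.0f} at/g  ->  P_local ~ {p:.1f} at g^-1 a^-1")

# The sea-level/high-latitude rates of 4.60-5.26 at/g/a quoted above follow
# after dividing by an altitude/latitude scaling factor (scheme-dependent,
# not reproduced here).
```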

Relevance:

80.00%

Abstract:

The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proofs of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties: a finite composition of simplifications can also be accomplished by a single simplification, and if one partition is a simplification of another, the simplified partition is defined to be less than the other partition according to the "simp" relation. This relation is shown to be a partial order based on simplification. Collections of partitions can be proven not only to have a partial-order structure, but also to form a lattice that is complete. With regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects whose attribute n-tuple codings are contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies. Though the scope of the work does not focus mainly on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and a least upper bound, which validates exploring queries in this regard.
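One way to make the filtering-then-amalgamation reading of simplification concrete is the sketch below, where partitions are sets of frozensets. The function name and the exact formalization are our own reconstruction from the description above, not the thesis's definitions.

```python
from itertools import chain

def ground(partition):
    """Union of all blocks of a partition (given as a set of frozensets)."""
    return set(chain.from_iterable(partition))

def is_simplification(q, p):
    """One possible reading of the 'simp' relation: q arises from p by
    filtering elements and then amalgamating blocks. Concretely: q's ground
    set is a subset of p's, and each block of q is a union of the (nonempty)
    restrictions of p's blocks to q's ground set."""
    gq, gp = ground(q), ground(p)
    if not gq <= gp:
        return False
    filtered = {frozenset(b & gq) for b in p} - {frozenset()}
    # Every filtered block of p must land inside exactly one block of q.
    return all(any(fb <= qb for qb in q) for fb in filtered)

p = {frozenset({1, 2}), frozenset({3}), frozenset({4, 5})}
q = {frozenset({1, 3}), frozenset({4})}  # filter out 2 and 5, merge {1} and {3}
print(is_simplification(q, p))  # True under this reading
```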

Relevance:

80.00%

Abstract:

Adolescents 15–19 years of age have the highest prevalence of Chlamydia trachomatis of any age group, reaching 28.3% among detained youth [1]. The 2010 Centers for Disease Control and Prevention guidelines recommend one dose of azithromycin for the treatment of uncomplicated chlamydia infections, based on a 97% cure rate with azithromycin. Recent studies found an 8% or higher failure rate of azithromycin treatment in adolescents [2-5]. We conducted a prospective study beginning in May 2012 in the Harris County Juvenile Justice Center (HCJJC) medical department. Study subjects were detainees with positive urine NAAT tests for chlamydia on intake. We provided treatment with azithromycin, completed questionnaires assessing risk factors, and performed a test of cure (TOC) for chlamydia three weeks after treatment. Those with treatment failure (positive TOC) received doxycycline for seven days. The preliminary results summarized herein are based on data collected from May 2012 to January 2013. Of the 97 youth enrolled in the study to date, 4 (4.1%) experienced treatment failure after administration of azithromycin. All four of these patients were male, African-American, and asymptomatic at the time of initial diagnosis and treatment. Of note, 37 (38%) patients in the cohort complained of abdominal pain with administration of azithromycin. Results to date suggest that the efficacy of azithromycin in our study is higher than in the recently reported studies, indicating a possible upper bound on azithromycin efficacy. These results are preliminary, and recruitment will continue until a sample size of 127 youth is reached.
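For context on how an upper bound on the underlying failure rate can be quantified from counts like these, here is a standard one-sided exact (Clopper-Pearson) bound. This is purely illustrative and is not claimed to be the study's statistical plan.

```python
from scipy.stats import beta

def clopper_pearson_upper(failures, n, alpha=0.05):
    """One-sided exact (Clopper-Pearson) upper confidence bound for a
    binomial proportion -- illustrative, not the study's analysis."""
    if failures == n:
        return 1.0
    return beta.ppf(1.0 - alpha, failures + 1, n - failures)

# 4 treatment failures among 97 treated youth (preliminary data above):
print(f"{clopper_pearson_upper(4, 97):.3f}")  # ~0.093 upper bound on failure rate
```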

Relevance:

80.00%

Abstract:

The distribution and composition of minerals in the silt and clay fraction of the fine-grained slope sediments were examined, with special interest focused on diagenesis. The results are as follows. (1) Smectite, andesitic plagioclase, quartz, and low-Mg calcite are the main mineral components of the sediment. Authigenic dolomite was observed in the weathering zones of serpentinites, together with aragonite, as well as in clayey silt. (2) The mineralogy and geochemistry of the sediments are analogous to those of the andesitic rocks of Costa Rica and Guatemala. (3) Unstable components like volcanic glass, amphiboles, and pyroxenes show increasing etching with depth. (4) The diagenetic alteration of opal-A skeletons, from etching pits and replacement by opal-CT to replacement by chalcedony as a final stage, corresponds to the typical opal diagenesis. (5) Clinoptilolite is the stable zeolite mineral according to mineral stability fields; its neoformation is well documented. (6) The early diagenesis of smectites is shown by an increase of crystallinity with depth. Only the smectites in the oldest sediments (Oligocene and early Eocene) contain nonexpanding illite layers.

Relevance:

80.00%

Abstract:

A zonation is presented for the oceanic late Middle Jurassic to Late Jurassic of the Atlantic Ocean. The oldest zone, the Stephanolithion bigotii Zone (subdivided into a Stephanolithion hexum Subzone and a Cyclagelosphaera margerelii Subzone), is middle Callovian to early Oxfordian. The Vagalapilla stradneri Zone is middle Oxfordian to Kimmeridgian. The Conusphaera mexicana Zone, subdivided into a lower Hexapodorhabdus cuvillieri Subzone and an upper Polycostella beckmannii Subzone, is latest Kimmeridgian to Tithonian. Direct correlation of this zonation with the boreal zonation established for Britain and northern France (Barnard and Hay, 1974; Medd, 1982; Hamilton, 1982) is difficult because of the poor preservation, and resulting low diversity, of the cored section at Site 534, and because of the lack of Tithonian marker species in the boreal realm. Correlations with stratotype sections (or regions) based on dinoflagellates and on nannofossils give somewhat different results: dinoflagellates generally give younger ages than nannofossils, especially for the Oxfordian to Kimmeridgian part of the recovered section.

Relevance:

80.00%

Abstract:

The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up, predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement error, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: the Paleocene-Eocene Thermal Maximum, the Eocene-Oligocene Transition, the Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: the Paleocene-Eocene warming trend, the two-step, long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture, besides global ice volume, bottom-water temperatures at low (less than 30°) and high latitudes. This review concludes by identifying future directions for data collection, statistical method development, and climate modeling.
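To make "estimated transition times and amplitudes" concrete, here is a minimal change-point fit on synthetic data using a piecewise-linear ramp, a common model for such transitions. The paper's actual estimation additionally propagates measurement, proxy-noise, and dating uncertainties, which this sketch ignores.

```python
import numpy as np
from scipy.optimize import curve_fit

def ramp(t, t1, t2, x1, x2):
    """Level x1 before t1, linear change to x2 by t2, level x2 after:
    a simple change-point model for a climatic transition."""
    return np.where(t <= t1, x1,
                    np.where(t >= t2, x2,
                             x1 + (x2 - x1) * (t - t1) / (t2 - t1)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 400)
data = ramp(t, 4.0, 6.0, 0.0, 1.2) + 0.15 * rng.standard_normal(t.size)

# Fit transition start/end times and the levels on both sides.
(t1, t2, x1, x2), _ = curve_fit(ramp, t, data, p0=(3.0, 7.0, -0.5, 1.5))
print(f"transition {t1:.2f}-{t2:.2f}, amplitude {x2 - x1:.2f}")
```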

Relevance:

80.00%

Abstract:

Information about the computational cost of programs is potentially useful for a variety of purposes, including selecting among different algorithms, guiding program transformations, granularity control and mapping decisions in parallelizing compilers, and query optimization in deductive databases. Cost analysis of logic programs is complicated by nondeterminism: on the one hand, procedures can return multiple solutions, making it necessary to estimate the number of solutions in order to give nontrivial upper-bound cost estimates; on the other hand, the possibility of failure has to be taken into account when estimating lower bounds. Here we discuss techniques that address these problems to some extent.
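The role that solution counts play in upper-bound cost estimates can be seen in a tiny model of a conjunctive goal under backtracking: each literal may be re-executed once per combination of solutions of the literals to its left, so nontrivial cost bounds require bounds on the numbers of solutions. The helper below is a hypothetical illustration, not the paper's analysis.

```python
def conjunction_cost_upper(literals):
    """Upper-bound cost (in resolution steps, say) of a conjunctive goal
    under backtracking, given per-literal (cost_ub, solutions_ub) pairs:
    each literal is executed at most once per combination of solutions of
    the literals to its left."""
    total, combos = 0, 1
    for cost_ub, sols_ub in literals:
        total += combos * cost_ub  # re-executed on backtracking into it
        combos *= sols_ub          # solution counts multiply left to right
    return total

# Goal a(X), b(X, Y): a costs <= 5 steps with <= 3 solutions; b costs <= 7.
print(conjunction_cost_upper([(5, 3), (7, 2)]))  # 5 + 3*7 = 26
```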

Relevance:

80.00%

Abstract:

In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware, static resource usage verification, in which specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, in this paper we provide a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
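A toy rendering of the interval-producing comparison described above: given inferred lower/upper bound functions and a specified upper bound, each input data size is classified as proved, disproved, or unknown. Names and representation are illustrative, not CiaoPP's actual assertion language.

```python
def check_assertion(inferred_lb, inferred_ub, spec_ub, sizes):
    """Compare inferred resource-usage bounds against a specified upper
    bound over a range of input data sizes."""
    verdicts = {}
    for n in sizes:
        if inferred_ub(n) <= spec_ub(n):
            verdicts[n] = "proved"      # even the worst case is within spec
        elif inferred_lb(n) > spec_ub(n):
            verdicts[n] = "disproved"   # even the best case exceeds spec
        else:
            verdicts[n] = "unknown"
    return verdicts

# Spec: usage <= 50*n. Inferred bounds: lb(n) = n^2, ub(n) = 2*n^2.
v = check_assertion(lambda n: n**2, lambda n: 2 * n**2,
                    lambda n: 50 * n, range(1, 60))
print({k: v[k] for k in (10, 25, 30, 51)})
# proved for n <= 25, unknown for 26..50, disproved from n = 51 on
```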

Relevance:

80.00%

Abstract:

Automatic cost analysis of programs has traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. These may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.
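The flavor of annotation-driven, resource-parametric inference can be sketched as follows: programmer annotations state per-operation consumption, and an upper bound for the program is assembled from block costs times bounds on block execution counts. Everything here (the names and the direct supply of execution-count bounds) is illustrative; the actual analysis derives those bounds from the bytecode.

```python
# Programmer-provided annotations: units of each resource consumed per call.
ANNOT = {
    ("sms_sent", "sendSMS"): 1,
    ("bytes_sent", "writePacket"): 512,
}

def block_cost(ops, resource):
    """Units of `resource` consumed by one pass over a straight-line block."""
    return sum(ANNOT.get((resource, op), 0) for op in ops)

def program_upper_bound(blocks, resource, n):
    """Upper bound on whole-program usage for input size n: each block's
    cost times a bound on its number of executions (given here as a
    function of n)."""
    return sum(block_cost(ops, resource) * runs_ub(n) for ops, runs_ub in blocks)

blocks = [
    (["openConnection"], lambda n: 1),          # setup: runs once
    (["sendSMS", "writePacket"], lambda n: n),  # loop body: at most n times
]
print(program_upper_bound(blocks, "sms_sent", 10))    # -> 10
print(program_upper_bound(blocks, "bytes_sent", 10))  # -> 5120
```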

Relevance:

80.00%

Abstract:

Automatic cost analysis of programs has been traditionally studied in terms of a number of concrete, predefined resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. These may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering an ample set of interesting resources.

Relevance:

80.00%

Abstract:

Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and is therefore not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency-tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
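A minimal language-independent worklist fixpoint over a block-based control-flow graph, in the spirit of the algorithm described above (the memo table and worklist stand in, loosely, for the paper's memoization and dependency tracking):

```python
def fixpoint(cfg, entry, init, transfer, join, bottom):
    """Propagate abstract states over a block-based CFG until they
    stabilize, joining (least upper bound) at block entries. `cfg` maps
    each block to its successors; `transfer` abstracts one block's effect.
    Termination assumes a domain of finite height (or added widening)."""
    state = {b: bottom for b in cfg}   # memo table: state at each block entry
    state[entry] = init
    worklist = [entry]                 # blocks whose entry state changed
    while worklist:
        b = worklist.pop()
        out = transfer(b, state[b])
        for succ in cfg[b]:
            new = join(state[succ], out)
            if new != state[succ]:     # state grew: revisit the successor
                state[succ] = new
                worklist.append(succ)
    return state

# Demo: upper bound on a counter along an acyclic CFG (join = max).
cfg = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"], "exit": []}
incr = {"entry": 0, "then": 5, "else": 2, "exit": 0}
res = fixpoint(cfg, "entry", 0,
               transfer=lambda b, s: s + incr[b],
               join=max, bottom=float("-inf"))
print(res["exit"])  # 5: least upper bound over both branches
```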

Relevance:

80.00%

Abstract:

The study of k-sets is a very relevant topic in computational geometry. In particular, the maximum and minimum numbers of k-sets of planar point sets in general position have been studied at great length in the literature. For the maximum number of k-sets, lower bounds have been provided by Erdős et al., by Edelsbrunner and Welzl, and later by Tóth, while Dey stated an upper bound. The minimum number of k-sets was determined by Erdős et al. (1973) and, independently, by Lovász et al. In this paper the authors give an example of a set of n points in the plane in general position (no three collinear) in which the minimum number of points that can take part in at least one k-set is attained for every k with 1 ≤ k < n/2. The authors also extend the result of Erdős et al. on the minimum number of points in general position that can take part in a k-set to sets of n points not necessarily in general position. In this way the work complements the classic results mentioned above.
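For small instances, k-sets can be enumerated by brute force, which is handy for checking constructions like the one described: a subset of k points is a k-set exactly when it is the set of k smallest points under projection onto some direction, and in general position it suffices to test directions perpendicular to pair differences, nudged to either side. This enumerator is our own illustration.

```python
import math
from itertools import combinations

def k_sets(points, k):
    """Enumerate the k-sets of a planar point set in general position:
    the ordering by projection only changes at directions perpendicular
    to a pair difference, so nudging each such critical direction (and
    its antipode) to both sides visits every ordering."""
    dirs = []
    for (px, py), (qx, qy) in combinations(points, 2):
        crit = math.atan2(qy - py, qx - px) + math.pi / 2.0
        for flip in (0.0, math.pi):
            for eps in (-1e-7, 1e-7):
                dirs.append(crit + flip + eps)
    found = set()
    for theta in dirs:
        c, s = math.cos(theta), math.sin(theta)
        order = sorted(points, key=lambda p: p[0] * c + p[1] * s)
        found.add(frozenset(order[:k]))  # the k points cut off by a halfplane
    return found

pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0), (1.0, 1.6), (3.0, 1.0)]
for k in (1, 2):
    print(k, len(k_sets(pts, k)))  # number of distinct k-sets of this set
```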

Relevance:

80.00%

Abstract:

In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound of a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error, not globally for the field in the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, the field radiated by an antenna, or the radar cross section in given directions can be minimized. The efficiency of the approach is illustrated with several numerical simulations on two-dimensional problem domains. Results include a comparison with the previously developed energy-norm based hp-adaptivity.
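Goal-oriented estimates of this kind typically take a dual-weighted-residual form; as a generic illustration (not necessarily the exact estimator of the paper), the error in a quantity of interest Q is controlled through the adjoint solution z:

```latex
% Generic dual-weighted-residual bound (Becker-Rannacher style) for a
% linear problem a(u,v)=l(v) with goal functional Q; z solves the dual
% problem a(v,z)=Q(v), and Galerkin orthogonality allows subtracting z_h.
\[
  |Q(u) - Q(u_h)| = |r(u_h)(z - z_h)|
  \le \sum_{K \in \mathcal{T}_h} \underbrace{\|r(u_h)\|_K \,\|z - z_h\|_K}_{\eta_K},
  \qquad r(u_h)(v) := l(v) - a(u_h, v),
\]
% where the element indicators eta_K drive the hp-refinement decisions.
```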

Relevance:

80.00%

Abstract:

The concept of an unreliable failure detector was introduced by Chandra and Toueg as a mechanism that provides information about process failures. This mechanism has been used to solve several agreement problems, such as the consensus problem. In this paper, algorithms that implement failure detectors in partially synchronous systems are presented. First, two simple algorithms of the weakest class to solve the consensus problem, namely the Eventually Strong class (⋄S), are presented. While the first algorithm is wait-free, the second algorithm is f-resilient, where f is a known upper bound on the number of faulty processes. Both algorithms guarantee that, eventually, all the correct processes agree permanently on a common correct process, i.e. they also implement a failure detector of the class Omega (Ω). They are also shown to be optimal in terms of the number of communication links used forever. Additionally, a wait-free algorithm that implements a failure detector of the Eventually Perfect class (⋄P) is presented. This algorithm is shown to be optimal in terms of the number of bidirectional links used forever.
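The standard mechanism underneath such partially synchronous implementations is a heartbeat with adaptive timeouts. The sketch below shows that textbook pattern for an eventually perfect (⋄P) detector; it is a generic illustration, not the paper's link-optimal algorithms.

```python
import time

class EventuallyPerfectDetector:
    """Adaptive-heartbeat sketch of a ⋄P failure detector: suspect a
    process whose heartbeat is late; on a false suspicion, unsuspect it
    and enlarge its timeout, so once the system stabilizes no correct
    process is suspected forever."""

    def __init__(self, processes, initial_timeout=1.0):
        self.timeout = {p: initial_timeout for p in processes}
        self.last_heard = {p: time.monotonic() for p in processes}
        self.suspected = set()

    def on_heartbeat(self, p):
        if p in self.suspected:      # we suspected p wrongly:
            self.suspected.discard(p)
            self.timeout[p] *= 2     # adapt to the unknown message delay
        self.last_heard[p] = time.monotonic()

    def tick(self):
        """Call periodically; returns the current set of suspects."""
        now = time.monotonic()
        for p, last in self.last_heard.items():
            if now - last > self.timeout[p]:
                self.suspected.add(p)
        return self.suspected

# An Ω (eventual leader) detector can be layered on top, e.g. by trusting
# the smallest process identifier not currently suspected.
```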

Relevance:

80.00%

Abstract:

This research deals with the sensitivity to blunt notches of highly cold-drawn duplex stainless steel wires. The analyzed notches are reductions of the wire cross-section, machined asymmetrically with respect to the longitudinal wire axis. Once it was experimentally verified that the notched wires fail by plastic collapse, the failure load was computed numerically as a function of notch depth by means of a finite element model. The numerical results show that the higher compliance the notch lends the wire acts in favor of its damage tolerance, allowing the tolerance to reach the upper bound given by the tensile plastic failure load of the notch ligament.
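The upper bound invoked above is the classic net-section plastic collapse load: ultimate tensile strength times the remaining ligament area. A sketch for a straight-fronted notch in a round wire follows; the simplified geometry and the numerical values are assumptions for illustration only.

```python
import math

def ligament_area(r, depth):
    """Cross-sectional area left by a straight-fronted notch of the given
    depth in a wire of radius r (the removed region is a circular segment)."""
    h = r - depth  # distance of the notch front (a chord) from the wire axis
    segment = r**2 * math.acos(h / r) - h * math.sqrt(r**2 - h**2)
    return math.pi * r**2 - segment

def collapse_load_upper_bound(r, depth, sigma_u):
    """Net-section upper bound on the plastic collapse load: ultimate
    tensile strength times remaining ligament area."""
    return sigma_u * ligament_area(r, depth)

# Wire radius 1.5 mm, notch depth 0.5 mm, sigma_u ~ 1800 MPa (a typical
# figure for heavily cold-drawn duplex steel -- assumed, not from the paper).
print(f"{collapse_load_upper_bound(1.5e-3, 0.5e-3, 1800e6) / 1e3:.1f} kN")
```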