952 results for upper bound solution
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the United States (U.S.) transportation sector, including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from lignocellulosic biomass such as wood, forest residues, and agricultural residues has the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential facility locations for biofuel production from forest biomass. Candidate locations were selected using a set of evaluation criteria: county boundaries, the railroad transportation network, the state/federal road transportation network, the dispersion of water bodies (rivers, lakes, etc.), the dispersion of cities and villages, census population, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs for the simulation and optimization models, which were built around key supply activities including biomass harvesting/forwarding, transportation, and storage. On-site storage was built to cover the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators: cost (delivered feedstock cost plus inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms for consistency with cost. Compared with the optimization model, the simulation model provides a more dynamic view of a 20-year operation by considering the impacts of building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked year-round. Through the exchange of information across the harvesting, transportation, and feedstock-processing procedures, a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to locate multiple biofuel facilities simultaneously, with the size of each potential facility bounded between a lower bound of 30 MGY and an upper bound of 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
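A minimal sketch of the kind of facility-location formulation this abstract describes. The 30 MGY lower and 50 MGY upper capacity bounds come from the abstract; the site names, costs, supplies, and demand are illustrative, and the open-source PuLP interface is used here in place of the MPL tool named in the study.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

sites = ["S1", "S2", "S3"]                    # hypothetical candidate sites from GIS screening
regions = ["R1", "R2"]                        # hypothetical biomass supply regions
fixed = {"S1": 40.0, "S2": 35.0, "S3": 45.0}  # illustrative annualized fixed cost, $M/yr
trans = {("R1", "S1"): 0.6, ("R1", "S2"): 0.9, ("R1", "S3"): 1.1,
         ("R2", "S1"): 1.0, ("R2", "S2"): 0.7, ("R2", "S3"): 0.8}  # $M per MGY delivered
supply = {"R1": 45.0, "R2": 55.0}             # producible biofuel per region, MGY
demand = 80.0                                 # annual biofuel demand, MGY

m = LpProblem("biofuel_facility_location", LpMinimize)
build = {s: LpVariable(f"build_{s}", cat=LpBinary) for s in sites}
flow = {(r, s): LpVariable(f"flow_{r}_{s}", lowBound=0)
        for r in regions for s in sites}

# objective: fixed facility cost plus delivered-feedstock transport cost
m += (lpSum(fixed[s] * build[s] for s in sites)
      + lpSum(trans[r, s] * flow[r, s] for r in regions for s in sites))

for r in regions:  # a region cannot supply more biomass than it produces
    m += lpSum(flow[r, s] for s in sites) <= supply[r]
for s in sites:    # facility size between 30 and 50 MGY if built, zero otherwise
    m += lpSum(flow[r, s] for r in regions) <= 50.0 * build[s]
    m += lpSum(flow[r, s] for r in regions) >= 30.0 * build[s]
m += lpSum(flow.values()) >= demand  # meet total annual demand

m.solve()
print({s: int(build[s].value()) for s in sites})
```

Changing `demand` or the `supply` values and re-solving is the kind of sensitivity analysis the abstract refers to: the set of opened sites and their sizes shift with biofuel demand and biomass availability.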
Abstract:
We consider the 2d XY model with topological lattice actions, which are invariant under small deformations of the field configuration. These actions constrain the angle between neighbouring spins by an upper bound, or they explicitly suppress vortices (and anti-vortices). Although topological actions do not have a classical limit, they still lead to the universal behaviour of the Berezinskii-Kosterlitz-Thouless (BKT) phase transition, at least up to moderate vortex suppression. In the massive phase, the analytically known step scaling function (SSF) is reproduced in numerical simulations. However, deviations from the expected universal behaviour of the lattice artifacts are observed. In the massless phase, the BKT value of the critical exponent ηc is confirmed. Hence, even though for some topological actions vortices cost zero energy, they still drive the standard BKT transition. In addition, we identify a vortex-free transition point, which deviates from the BKT behaviour.
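A minimal Monte Carlo sketch of the constraint-type topological action described above: the Boltzmann weight is flat, so an update is accepted if and only if every neighbouring angle difference stays below the bound δ. The lattice size, bound, and proposal width below are illustrative choices, not values from the paper.

```python
import numpy as np

L, delta = 32, 2 * np.pi / 3   # lattice size and angle bound (illustrative values)
rng = np.random.default_rng(0)
theta = np.zeros((L, L))       # start from an ordered (hence allowed) configuration

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def angle_diff(a, b):
    d = abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

for sweep in range(100):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = (theta[i, j] + rng.uniform(-0.5, 0.5)) % (2 * np.pi)
        # flat Boltzmann weight: accept iff every neighbouring angle
        # difference stays below the bound delta (the constraint action)
        if all(angle_diff(new, theta[n]) < delta for n in neighbours(i, j)):
            theta[i, j] = new
```

Since every allowed configuration has the same weight, δ plays the role that temperature plays for the standard action: tightening the bound drives the system into the massless phase.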
Abstract:
Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback–Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation while otherwise depression emerges. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
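The derived rule shares its sign structure with classical pair-based STDP: a presynaptic spike preceding a postsynaptic one potentiates, the reverse order depresses. A minimal sketch of such a pair-based window follows; the amplitudes and time constant are illustrative, and this is generic STDP, not the KL-bound-derived rule of the paper.

```python
import numpy as np

A_plus, A_minus, tau = 0.01, 0.012, 20.0   # illustrative amplitudes and time constant (ms)

def stdp_dw(t_pre, t_post):
    """Pair-based STDP window: potentiation when the presynaptic spike
    precedes the postsynaptic spike, depression otherwise."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_plus * np.exp(-dt / tau)   # pre before post -> potentiate
    return -A_minus * np.exp(dt / tau)      # post before pre -> depress

print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))   # positive, then negative
```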
Abstract:
We introduce a version of operational set theory, OST−, without a choice operation, which has a machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke–Platek set theory KP with infinity are pairwise Π1 equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is given by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make "a detour via intuitionistic theories". The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new construction method for classical theories, and could be called the third kind of model construction ever used that is non-trivial on the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability model.
Abstract:
In this paper we continue Feferman's unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996), which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them that are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words, for all three unfolding systems, are precisely those computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in (Eberhard and Strahm, Bull Symb Log 18(3):474–475, 2012).
Abstract:
To calibrate the in situ 10Be production rate, we collected surface samples from nine large granitic boulders within the deposits of a rock avalanche that occurred in AD 1717 in the upper Ferret Valley, Mont Blanc Massif, Italy. The 10Be concentrations were extremely low but were successfully measured within 10% analytical uncertainty or less; they vary from 4829 ± 448 to 5917 ± 476 at g−1. Using the historically documented exposure time, we calculated the local and the sea level-high latitude (i.e. ≥60°) cosmogenic 10Be spallogenic production rates. Depending on the scaling scheme, these vary between 4.60 ± 0.38 and 5.26 ± 0.43 at g−1 a−1. Although they correlate well with global values, our production rates are clearly higher than those from more recent calibration sites. We conclude that our 10Be production rate is a mean value and an upper bound for production rates in the Massif region over the past 300 years. This rate is probably influenced by inheritance and will yield inaccurate (i.e. too young) exposure ages when applied to surface-exposure studies in the area. Other independently dated rock-avalanche deposits in the region that are approximately 10³ years old could be considered as possible calibration sites.
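As a back-of-the-envelope check on the numbers above (a sketch only: the sampling year and the scaling factor below are assumptions, not values from the study), the local production rate follows from dividing the measured concentration by the known exposure time, since three centuries is negligible against the 10Be half-life of about 1.39 Myr.

```python
C = 5917.0         # measured 10Be concentration, atoms/g (upper value reported above)
t = 2010 - 1717    # exposure time in years; the sampling year 2010 is an assumption
P_local = C / t    # local spallogenic production rate, about 20 atoms/g/yr
# Decay is negligible here: ~300 yr is tiny against the 10Be half-life (~1.39 Myr).
scaling = 4.0      # illustrative altitude/latitude scaling factor (assumed)
print(P_local, P_local / scaling)   # the scaled value lands near the quoted 4.60-5.26 range
```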
Abstract:
The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished by a single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions not only carry this partial-order structure, but also form a lattice and are complete. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects whose attribute n-tuple codings are contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies. Though the scope of the work does not mainly focus on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and a least upper bound, which provides the validity for exploring queries in this regard.
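A minimal executable sketch of one way to formalize the simp relation, assuming partitions are given as collections of disjoint sets; the function name and representation here are illustrative choices, not the paper's.

```python
from itertools import chain

def is_simplification(q, p):
    """True if partition q arises from partition p by 'simplification':
    filter (remove) some elements of p, then amalgamate (unify) some of
    the surviving blocks."""
    kept = set(chain.from_iterable(q))
    restricted = [set(b) & kept for b in p]      # restrict p to survivors
    restricted = [b for b in restricted if b]    # drop emptied blocks
    covered = set(chain.from_iterable(restricted))
    # q must coarsen the restriction: every surviving block of p sits
    # inside a single block of q, and q introduces no new elements
    return covered == kept and all(
        any(b <= set(c) for c in q) for b in restricted)

p = [{1, 2}, {3}, {4, 5}]
q = [{1, 2, 3}]                        # remove 4 and 5, then merge {1,2} with {3}
print(is_simplification(q, p))         # True
print(is_simplification([{1}, {2, 3}], p))  # False: 1 and 2 both survive
                                            # but land in different blocks
```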
Abstract:
Adolescents 15–19 years of age have the highest prevalence of Chlamydia trachomatis of any age group, reaching 28.3% among detained youth [1]. The 2010 Centers for Disease Control and Prevention guidelines recommend one dose of azithromycin for the treatment of uncomplicated chlamydia infections, based on a 97% cure rate with azithromycin. Recent studies found an 8% or higher failure rate of azithromycin treatment in adolescents [2-5]. We conducted a prospective study beginning in May 2012 in the Harris County Juvenile Justice Center (HCJJC) medical department. Study subjects were detainees with positive urine NAAT tests for chlamydia on intake. We provided treatment with azithromycin, completed questionnaires assessing risk factors, and performed a test of cure (TOC) for chlamydia three weeks after treatment. Those with treatment failure (positive TOC) received doxycycline for seven days. The preliminary results summarized herein are based on data collected from May 2012 to January 2013. Of the 97 youth enrolled in the study to date, 4 (4.1%) experienced treatment failure after administration of azithromycin. All four of these patients were male, African-American, and asymptomatic at the time of initial diagnosis and treatment. Of note, 37 (38%) patients in the cohort complained of abdominal pain with administration of azithromycin. Results to date suggest that the efficacy of azithromycin in our study is higher than in the recently reported studies, indicating that those reported failure rates may represent an upper bound. These results are preliminary, and recruitment will continue until a sample size of 127 youth is reached.
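As a quick illustrative calculation, not part of the study itself, a 95% Wilson score interval shows how much statistical uncertainty a preliminary 4-out-of-97 estimate carries.

```python
from math import sqrt

k, n, z = 4, 97, 1.96   # failures, sample size, 95% normal quantile
p = k / n               # observed failure proportion, ~4.1%
centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
print(f"{p:.1%}, 95% Wilson CI [{centre - half:.1%}, {centre + half:.1%}]")
# roughly 4.1% with a 95% CI of about [1.6%, 10.1%]
```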
Abstract:
The distribution and composition of minerals in the silt and clay fraction of the fine-grained slope sediments were examined, with special interest focused on diagenesis. The results are as follows. (1) Smectite, andesitic plagioclase, quartz, and low-Mg calcite are the main mineral components of the sediment. Authigenic dolomite was observed in the weathering zones of serpentinites, together with aragonite, as well as in clayey silt. (2) The mineralogy and geochemistry of the sediments are analogous to those of the andesitic rocks of Costa Rica and Guatemala. (3) Unstable components like volcanic glass, amphiboles, and pyroxenes show increasing etching with depth. (4) The diagenetic alteration of opal-A skeletons, from etching pits and replacement by opal-CT to replacement by chalcedony as a final stage, corresponds to typical opal diagenesis. (5) Clinoptilolite is the stable zeolite mineral according to mineral stability fields; its neoformation is well documented. (6) The early diagenesis of smectites is shown by an increase of crystallinity with depth. Only the smectites in the oldest sediments (Oligocene and early Eocene) contain nonexpanding illite layers.
Abstract:
A zonation is presented for the oceanic late Middle Jurassic to Late Jurassic of the Atlantic Ocean. The oldest zone, the Stephanolithion bigotii Zone (subdivided into a Stephanolithion hexum Subzone and a Cyclagelosphaera margerelii Subzone), is middle Callovian to early Oxfordian. The Vagalapilla stradneri Zone is middle Oxfordian to Kimmeridgian. The Conusphaera mexicana Zone, subdivided into a lower Hexapodorhabdus cuvillieri Subzone and an upper Polycostella beckmannii Subzone, is latest Kimmeridgian to Tithonian. Direct correlation of this zonation with the boreal zonation established for Britain and northern France (Barnard and Hay, 1974; Medd, 1982; Hamilton, 1982) is difficult because of the poor preservation, and resulting low diversity, of the cored section at Site 534 and the lack of Tithonian marker species in the boreal realm. Correlations based on dinoflagellates and on nannofossils with stratotype sections (or regions) give somewhat different results: dinoflagellates give generally younger ages than nannofossils, especially for the Oxfordian to Kimmeridgian part of the recovered section.
Abstract:
The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up, predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement error, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: the Paleocene-Eocene Thermal Maximum, the Eocene-Oligocene Transition, the Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: the Paleocene-Eocene warming trend, the two-step, long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture, besides global ice volume, bottom water temperatures at low (less than 30°) and high latitudes. The review concludes by identifying future directions for data collection, statistical method development, and climate modeling.
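A minimal synthetic sketch in the spirit of such transition-time and amplitude estimates (not the authors' method; the event timing, noise level, and search grid below are illustrative): fit a two-level ramp to a noisy series by brute-force least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
age = np.linspace(34.5, 33.0, 300)     # age axis, Myr ago (around the EOT)

def ramp(a, a1, a2, x1, x2):
    """Level x1 before age a1, level x2 after age a2, linear in between
    (a1 > a2, since age decreases toward the present)."""
    return np.interp(a, [a2, a1], [x2, x1])

# synthetic d18O-like series with an illustrative ~1.1 permil step
x = ramp(age, 34.0, 33.6, 0.8, 1.9) + 0.15 * rng.standard_normal(age.size)

best = (np.inf, None)
for a1 in np.arange(34.4, 33.2, -0.02):           # candidate transition onsets
    for a2 in np.arange(a1 - 0.04, 33.1, -0.02):  # candidate completions
        x1, x2 = x[age >= a1].mean(), x[age <= a2].mean()
        rss = np.sum((x - ramp(age, a1, a2, x1, x2)) ** 2)
        if rss < best[0]:
            best = (rss, (a1, a2, x2 - x1))
print("onset, completion (Ma), amplitude:", best[1])
```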
Abstract:
Information about the computational cost of programs is potentially useful for a variety of purposes, including selecting among different algorithms, guiding program transformations, granularity control and mapping decisions in parallelizing compilers, and query optimization in deductive databases. Cost analysis of logic programs is complicated by nondeterminism: on the one hand, procedures can return multiple solutions, making it necessary to estimate the number of solutions in order to give nontrivial upper bound cost estimates; on the other hand, the possibility of failure has to be taken into account while estimating lower bounds. Here we discuss techniques to address these problems to some extent.
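A toy sketch of why solution counts enter upper-bound cost estimates: in a conjunction of nondeterministic calls, a call may be re-executed on backtracking once per solution of the calls to its left. The combination scheme and numbers are illustrative, not the paper's analysis.

```python
def upper_bound_cost(calls):
    """Upper-bound cost of a conjunction of nondeterministic calls.
    Each call is (cost, max_solutions); a call's cost is multiplied by
    the product of the solution counts of all preceding calls."""
    total, multiplier = 0, 1
    for cost, solutions in calls:
        total += multiplier * cost
        multiplier *= solutions
    return total

# three calls: cost 5 with up to 3 solutions, cost 2 with up to 2
# solutions, then a deterministic cost-4 call
print(upper_bound_cost([(5, 3), (2, 2), (4, 1)]))   # 5 + 3*2 + 6*4 = 35
```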
Abstract:
In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower and upper bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) that approximate the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, this paper provides a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
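For intuition on how interval-valued checking outcomes arise (a sketch with hypothetical bound functions, not the CiaoPP machinery, which works on logic programs): compare an inferred upper-bound function against a specified one and solve for the data-size intervals where the specification holds.

```python
import sympy as sp

n = sp.symbols("n", real=True)
inferred_ub = 2 * n**2 + 3 * n   # hypothetical inferred upper bound on resource usage
spec_ub = 50 * n                 # hypothetical specified upper bound

# data sizes for which the specification is proved: inferred <= specified
proved = sp.solve_univariate_inequality(inferred_ub <= spec_ub, n, relational=False)
print(proved)   # n in [0, 47/2]: proved up to n = 23.5, disproved beyond
```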
Abstract:
Automatic cost analysis of programs has traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.
Abstract:
Automatic cost analysis of programs has been traditionally studied in terms of a number of concrete, predefined resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering an ample set of interesting resources.
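To illustrate the shape of the inputs and outputs involved in this and the preceding abstract (the resource, annotation values, and function names below are hypothetical, and the actual tool analyzes Java bytecode rather than running Python): programmer-provided basic costs induce an inferred upper-bound function of input data size.

```python
# Hypothetical programmer-provided annotations: basic consumption of a
# user-defined resource ("bytes sent") by primitive program elements.
COST = {"open_stream": 48, "send_packet": 8}   # illustrative bytes per operation

def bytes_sent_ub(n):
    """Upper bound on 'bytes sent' for a hypothetical method that opens
    one stream and sends at most one packet per input element; this is
    the kind of closed-form function the analysis derives."""
    return COST["open_stream"] + COST["send_packet"] * n

print(bytes_sent_ub(1000))   # 8048 bytes for input data size 1000
```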