890 results for lower upper bound estimation


Relevance:

100.00%

Publisher:

Abstract:

This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.
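The bounded quantity can be made concrete with a small simulation. Below is a minimal sketch (not the paper's code) of a stationary ACD(1,1) process in the standard Engle-Russell form, with illustrative parameters and unit-mean exponential innovations; the empirical CDF computed at the end is the kind of distribution function the paper brackets between its lower and upper bounds.

```python
# Sketch: simulate a stationary ACD(1,1) process in the standard form
# x_i = psi_i * eps_i,  psi_i = w + a * x_{i-1} + b * psi_{i-1},
# with unit-mean exponential innovations. Parameter values are
# illustrative; stationarity requires a + b < 1.
import random

def simulate_acd11(n, w=0.1, a=0.2, b=0.7, seed=42):
    rng = random.Random(seed)
    psi = w / (1 - a - b)            # start at the unconditional mean
    x, xs = psi, []
    for _ in range(n):
        psi = w + a * x + b * psi    # conditional expected duration
        x = psi * rng.expovariate(1.0)  # unit-mean exponential innovation
        xs.append(x)
    return xs

xs = simulate_acd11(10_000)
# Empirical distribution function evaluated at one point:
ecdf_at_1 = sum(x <= 1.0 for x in xs) / len(xs)
```

The empirical value `ecdf_at_1` is what the paper's lower and upper bounds would bracket analytically for a given parent distribution.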

Relevance:

100.00%

Publisher:

Abstract:

We report on a search for second generation leptoquarks (LQ2) which decay into a muon plus quark in p̄p collisions at a center-of-mass energy of √s = 1.96 TeV in the D0 detector using an integrated luminosity of about 300 pb⁻¹. No evidence for a leptoquark signal is observed, and an upper bound on the product of the cross section for single leptoquark production times the branching fraction into a quark and a muon was determined for second generation scalar leptoquarks as a function of the leptoquark mass. This result has been combined with a previously published D0 search for leptoquark pair production to obtain leptoquark mass limits as a function of the leptoquark-muon-quark coupling, λ. Assuming λ = 1, lower limits on the mass of a second generation scalar leptoquark coupling to a u quark and a muon are m(LQ2) > 274 GeV and m(LQ2) > 226 GeV for β = 1 and β = 1/2, respectively. (c) 2007 Elsevier B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Optical networks based on passive-star couplers and employing wavelength-division multiplexing (WDM) have been proposed for deployment in local and metropolitan areas. These networks suffer from splitting, coupling, and attenuation losses. Since there is an upper bound on transmitter power and a lower bound on receiver sensitivity, optical amplifiers are usually required to compensate for these power losses. Due to the high cost of amplifiers, it is desirable to minimize their total number in the network. However, an optical amplifier has constraints on the maximum gain and the maximum output power it can supply; thus, optical amplifier placement becomes a challenging problem. In fact, the general problem of minimizing the total amplifier count is a mixed-integer nonlinear problem. Previous studies have attacked the amplifier-placement problem by adding the "artificial" constraint that all wavelengths present at a particular point in a fiber be at the same power level. This constraint simplifies the problem into a solvable mixed-integer linear program. Unfortunately, this artificial constraint can exclude feasible solutions that have a lower amplifier count but do not keep all wavelengths at equal power. In this paper, we present a method to solve the minimum amplifier-placement problem while avoiding the equally-powered-wavelengths constraint. We demonstrate that, by allowing signals to operate at different power levels, our method can reduce the number of amplifiers required.
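The power constraints described above amount to simple dB bookkeeping along a signal path. The function and numbers below are hypothetical, not from the paper; they only illustrate the feasibility check that transmitter power, receiver sensitivity, and loss/gain elements impose.

```python
# Sketch: check power feasibility along one signal path with losses
# (negative dB) and amplifier gains (positive dB). All values in dB/dBm
# and purely illustrative.

def propagate(p_in_dbm, elements, p_sens_dbm=-30.0, p_max_dbm=0.0):
    """Apply loss/gain elements in order; verify the signal stays within
    [receiver sensitivity, maximum allowed power] after each element."""
    p = p_in_dbm
    for gain_db in elements:
        p += gain_db
        if p < p_sens_dbm or p > p_max_dbm:
            return p, False      # bound violated: path infeasible
    return p, True

# 0 dBm launch, 10 dB splitter loss, 8 dB amplifier gain, 5 dB attenuation
final_p, ok = propagate(0.0, [-10.0, +8.0, -5.0])
```

Minimizing the amplifier count subject to such bounds at every point, for every wavelength, is what makes the placement problem a mixed-integer nonlinear program.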

Relevance:

100.00%

Publisher:

Abstract:

This work applies genetic algorithms (GA) and particle swarm optimization (PSO) to cash balance management using the Miller-Orr model, a stochastic model that does not define a single ideal cash balance but an oscillation range between a lower bound, an ideal balance, and an upper bound. This paper therefore applies GA and PSO to minimize the total cost of cash maintenance by determining the lower-bound parameter of the Miller-Orr model, using assumptions presented in the literature. Computational experiments were used to develop and validate the models. The results indicated that both GA and PSO are applicable to determining the cash level from the lower limit, with the best results coming from the PSO model, which had not previously been applied to this type of problem.
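For context, the Miller-Orr control limits follow a closed form once the lower bound L is fixed; the sketch below is the standard textbook formulation with my own variable names and illustrative inputs, not the paper's code.

```python
# Sketch of the standard Miller-Orr control limits. F: fixed transfer cost,
# sigma2: daily cash-flow variance, k: daily opportunity interest rate,
# L: lower bound (the parameter the paper's GA/PSO search determines).

def miller_orr(F, sigma2, k, L):
    # Distance from the lower bound to the return (ideal) point:
    z = (3.0 * F * sigma2 / (4.0 * k)) ** (1.0 / 3.0)
    return {"lower": L, "return_point": L + z, "upper": L + 3.0 * z}

limits = miller_orr(F=25.0, sigma2=4000.0, k=0.0003, L=1000.0)
```

The GA/PSO search in the paper effectively tunes L so that the resulting total cost of cash maintenance is minimized.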

Relevance:

100.00%

Publisher:

Abstract:

A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration and large T values. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min. and 1, 3, 9 and 24 hrs. obtained for 700 Austrian raingauges, and the Annual Maximum Series (AMS) of rainfall depths with durations spanning from 5 min. to 24 hrs. collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires the quantification of the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to address this problem.
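As a toy illustration of the envelope idea: at each duration, the empirical envelope is simply the regional maximum over the record depths at all gauges (in practice the DDEC is then fitted regionally, often in log space). The data below are invented.

```python
# Sketch: empirical depth-duration envelope as a per-duration maximum over
# station record depths. Values are illustrative, not from the datasets
# described in the abstract.
records = {          # duration [h] -> record rainfall depths [mm] at gauges
    1: [42.0, 57.5, 61.0],
    3: [70.0, 88.0, 95.5],
    24: [150.0, 190.0, 210.0],
}
envelope = {d: max(depths) for d, depths in records.items()}
```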

Relevance:

100.00%

Publisher:

Abstract:

The main goals of this Ph.D. study are to investigate the regional and global geophysical components related to present polar ice melting and to provide independent cross-validation checks of GIA models, using both geophysical data from satellite missions and geological observations from far-field sites, in order to determine lower and upper bounds on the uncertainty of the GIA effect. The subject of this Thesis is sea level change on scales from decades to millennia. Within the ice2sea collaboration, we developed a Fortran numerical code to analyze the local short-term sea level change and vertical deformation resulting from the loss of ice mass. This method is used to investigate the polar regions of Greenland and Antarctica. We used a mass balance based on ICESat data for the Greenland ice sheet and a plausible mass balance for the Antarctic ice sheet. We determined the regional and global fingerprint of sea level variations, vertical deformations of the solid surface of the Earth, and variations of the shape of the geoid for each ice source mentioned above. The coastal areas are affected by the long-wavelength component of the GIA process; hence, understanding the response of the Earth to loading is crucial in various contexts. Based on the hypothesis that Earth mantle materials obey a linear rheology, and that the physical parameters of this rheology depend only on depth, we investigate the glacial isostatic effect upon far-field sites of the Mediterranean area using an improved SELEN program. We present new and revised observations for archaeological fish tanks located along the Tyrrhenian and Adriatic coasts of Italy and new RSL data for SE Tunisia. Spatial and temporal variations of the Holocene sea levels studied in central Italy and Tunisia provided important constraints on the melting history of the major ice sheets.

Relevance:

100.00%

Publisher:

Abstract:

In 1983, M. van den Berg made his Fundamental Gap Conjecture about the difference between the first two Dirichlet eigenvalues (the fundamental gap) of any convex domain in the Euclidean plane. Recently, progress has been made in the case where the domains are polygons and, in particular, triangles. We examine the conjecture for triangles in hyperbolic geometry, though we seek an upper bound for the fundamental gap rather than a lower bound.
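For reference, the Euclidean conjecture (since proved for convex domains by Andrews and Clutterbuck) bounds the gap from below in terms of the diameter D:

```latex
% Fundamental gap conjecture for a convex domain $\Omega \subset \mathbb{R}^n$
% with diameter $D$ (the hyperbolic triangle case above seeks an upper bound
% instead of this lower bound):
\lambda_2(\Omega) - \lambda_1(\Omega) \;\ge\; \frac{3\pi^2}{D^2}
```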

Relevance:

100.00%

Publisher:

Abstract:

A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types such as wood, forest residues, and agricultural residues has the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential locations for facilities producing biofuel from forest biomass, by employing a series of decision factors. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, a railroad transportation network, a state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, a population census, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs for simulation and optimization modeling. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation, and storage. The onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited.
Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to keep them consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level was tracked year-round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility is constrained by an upper bound of 50 MGY and a lower bound of 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
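The facility-selection decision can be illustrated with a toy sketch. The code below exhaustively searches a few hypothetical candidate sites (capacities within the 30-50 MGY bounds, costs invented) for the cheapest combination meeting demand; the actual model is an MPL-based mixed-integer program, not this brute force.

```python
# Sketch: toy facility-selection step. Sites map name -> (capacity in MGY,
# annualized cost in $M); both numbers are invented for illustration.
from itertools import combinations

sites = {"A": (50, 120.0), "B": (30, 80.0), "C": (40, 95.0)}
demand = 70  # hypothetical annual biofuel demand, MGY

best = None
for r in range(1, len(sites) + 1):
    for combo in combinations(sites, r):
        cap = sum(sites[s][0] for s in combo)
        cost = sum(sites[s][1] for s in combo)
        if cap >= demand and (best is None or cost < best[1]):
            best = (combo, cost)   # cheapest combination meeting demand
```

In this toy instance the cheapest feasible choice is sites B and C; scaling this search to many candidate sites is exactly why an integer-programming formulation is needed.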

Relevance:

100.00%

Publisher:

Abstract:

We introduce a version of operational set theory, OST−, without a choice operation, which has a machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke-Platek set theory KP with infinity are pairwise Π1 equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make "a detour via intuitionistic theories". The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new way of constructing models for classical theories and could be called the third kind of model construction ever used which is non-trivial on the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability model.

Relevance:

100.00%

Publisher:

Abstract:

The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished with a single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can not only be proven to have a partial-order structure, but also have a lattice structure and are complete. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects with their attribute n-tuple codings contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies.
Though the work does not focus mainly on queries related directly to geographic objects, it verifies the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and least upper bound, which validates exploring queries in this regard.
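The amalgamation half of the simplification operator can be sketched as a coarsening test between partitions: every block of the simplified partition must be a union of blocks of the original. The sketch below uses my own names and data and omits the filtering (element-removal) step.

```python
# Sketch: test whether partition Q coarsens (amalgamates) partition P of a
# finite set, i.e. each block of Q is exactly a union of blocks of P.
# Names and example data are illustrative, not the paper's notation.

def is_coarsening(P, Q):
    for q in Q:
        covered = set()
        for p in P:
            if p <= q:            # block of P lies inside this block of Q
                covered |= p
            elif p & q:           # partial overlap: Q does not coarsen P
                return False
        if covered != q:          # block of Q not fully covered by P-blocks
            return False
    return True

P = [{1}, {2}, {3, 4}]
Q = [{1, 2}, {3, 4}]              # amalgamates the blocks {1} and {2}
```

Under this relation the "simpler" (coarser) partition sits below the finer one, which is the partial order the work builds its lattice structure on.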

Relevance:

100.00%

Publisher:

Abstract:

The distribution and composition of minerals in the silt and clay fraction of the fine-grained slope sediments were examined. Special interest was focused on diagenesis. The results are as follows. (1) Smectite, andesitic plagioclase, quartz, and low-Mg calcite are the main mineral components of the sediment. Authigenic dolomite was observed in the weathering zones of serpentinites, together with aragonite, as well as in clayey silt. (2) The mineralogy and geochemistry of the sediments are analogous to those of the andesitic rocks of Costa Rica and Guatemala. (3) Unstable components like volcanic glass, amphiboles, and pyroxenes show increasing etching with depth. (4) The diagenetic alteration of opal-A skeletons, from etching pits and replacement by opal-CT to replacement by chalcedony as a final stage, corresponds to typical opal diagenesis. (5) Clinoptilolite is the stable zeolite mineral according to mineral stability fields; its neoformation is well documented. (6) The early diagenesis of smectites is shown by an increase of crystallinity with depth. Only the smectites in the oldest sediments (Oligocene and early Eocene) contain nonexpanding illite layers.

Relevance:

100.00%

Publisher:

Abstract:

Manganese contents and accumulation rates in reduced sediments were investigated. In sediments of most cores their values are at background levels (0.03-0.07%). Anomalous concentrations (up to 2.5%) and accumulation rates (up to 60 mg/cm**2/ka) occur near the known region of hydrothermal barite mineralization in the Derugin Basin. High accumulation rates of Mn (>10 mg/cm**2/ka) also occur in Holocene sediments southeast of the Derugin Basin. It can be assumed that high Mn contents and accumulation rates occur there due to transport of Mn-rich water from the Derugin Basin in the near-bottom layer below the lower boundary of the Sea of Okhotsk Intermediate Water. Intensive Mn accumulation is also typical of the South Okhotsk Basin near the Bussol Strait. Mn accumulation rates in glacial sediments of the second oxygen isotope stage are less significant, which is presumed to be caused by paleoceanological factors.

Relevance:

100.00%

Publisher:

Abstract:

A zonation is presented for the oceanic late Middle Jurassic to Late Jurassic of the Atlantic Ocean. The oldest zone, the Stephanolithion bigotii Zone (subdivided into a Stephanolithion hexum Subzone and a Cyclagelosphaera margerelii Subzone), is middle Callovian to early Oxfordian. The Vagalapilla stradneri Zone is middle Oxfordian to Kimmeridgian. The Conusphaera mexicana Zone, subdivided into a lower Hexapodorhabdus cuvillieri Subzone and a Polycostella beckmannii Subzone, is latest Kimmeridgian to Tithonian. Direct correlation of this zonation with the boreal zonation established for Britain and northern France (Barnard and Hay, 1974; Medd, 1982; Hamilton, 1982) is difficult because of poor preservation, resulting in low diversity in the cored section at Site 534, and a lack of Tithonian marker species in the boreal realm. Correlations based on dinoflagellates and on nannofossils with stratotype sections (or regions) give somewhat different results. Dinoflagellates give generally younger ages than nannofossils, especially for the Oxfordian to Kimmeridgian part of the recovered section.

Relevance:

100.00%

Publisher:

Abstract:

The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up, predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: the Paleocene-Eocene Thermal Maximum, the Eocene-Oligocene Transition, the Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: the Paleocene-Eocene warming trend, the two-step, long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture, besides global ice volume, bottom water temperatures at low (less than 30°) and high latitudes. This review concludes with an identification of future directions for data collection, statistical method development, and climate modeling.