484 results for Lobatto formulae


Relevance: 10.00%

Abstract:

This thesis presents a new Artificial Neural Network (ANN) able to predict at once the main parameters representative of the wave-structure interaction processes, i.e. the wave overtopping discharge, the wave transmission coefficient and the wave reflection coefficient. The new ANN was developed specifically to provide managers and scientists with a tool that can be used efficiently for design purposes. The development of this ANN started with the preparation of a new, extended and homogeneous database that collects all the available tests reporting at least one of the three parameters, for a total of 16,165 data points. The database covers a wide variety of structure types and wave attack conditions, including smooth, rock and armour-unit slopes, berm breakwaters, vertical walls, low-crested structures and oblique wave attacks. Several existing ANNs were compared and improved, leading to the selection of a final ANN whose architecture was optimized through an in-depth sensitivity analysis of its training parameters. Each of the 15 selected input parameters represents a physical aspect of the wave-structure interaction process, describing the wave attack (wave steepness and obliquity, breaking and shoaling factors), the structure geometry (submergence, straight or non-straight slope, with or without berm or toe, presence or absence of a crown wall), or the structure type (smooth or covered by an armour layer, with permeable or impermeable core). The advanced ANN proposed here provides accurate predictions for all three parameters and overcomes the limits of the traditional formulae and of the approaches adopted so far by the existing ANNs. The possibility of adopting a single model to obtain a handy and accurate evaluation of the overall performance of a coastal or harbor structure is the most important and exportable result of this work.
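A minimal sketch of the kind of network described above, assuming a single hidden layer (the abstract fixes only the 15 inputs and 3 outputs; layer size, weights and activation here are illustrative placeholders):

    import numpy as np

    rng = np.random.default_rng(0)

    # 15 inputs (wave and structure parameters) -> 3 outputs
    # (overtopping discharge, transmission coefficient, reflection coefficient).
    n_in, n_hidden, n_out = 15, 20, 3

    W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
    b2 = np.zeros(n_out)

    def predict(x):
        """One-hidden-layer feed-forward pass with tanh activation."""
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2

    x = rng.random(n_in)   # placeholder dimensionless input vector
    print(predict(x))      # [discharge, transmission, reflection] in normalized units

In practice the weights would be obtained by training on the 16,165-record database rather than drawn at random.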

Relevance: 10.00%

Abstract:

If $f: X \to S$ is a smooth family of Calabi-Yau manifolds of dimension $m$ over a quasi-projective curve, then by a result of Zucker the first $L^2$-cohomology group $H^1_{(2)}(S, R^m f_* \mathbb{C}_X)$ carries a pure Hodge structure of weight $m+1$. In this thesis we compute the Hodge numbers of such Hodge structures for $m = 1, 2, 3$, generalizing formulae from an article by del Angel, Müller-Stach, van Straten and Zuo to the case in which the local monodromy matrices at infinity are not unipotent but genuinely quasi-unipotent. To this end we use the $L^2$-Higgs complex of Jost, Yang and Zuo. For families of curves this leads to a formula already known from Cox and Zucker. Finally, we apply the results in the case $m = 3$ to 14 families of Calabi-Yau manifolds that play a role in mirror symmetry, as well as to a family constructed by Rohde that has no points of maximally unipotent monodromy.
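For orientation (this is just the definition of a pure Hodge structure, not a result of the thesis): a pure Hodge structure of weight $m+1$ on a real vector space $V$ is a decomposition

$$
V \otimes_{\mathbb{R}} \mathbb{C} \;=\; \bigoplus_{p+q=m+1} V^{p,q}, \qquad \overline{V^{p,q}} = V^{q,p},
$$

and the Hodge numbers computed in the thesis are the dimensions $h^{p,q} = \dim_{\mathbb{C}} V^{p,q}$ of the summands in this decomposition for the $L^2$-cohomology group above.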

Relevance: 10.00%

Abstract:

We show that a class of piecewise expanding maps of the unit interval satisfies the hypotheses of a functional-analytic theorem contained in the article "Rare Events, Escape Rates and Quasistationarity: Some Exact Formulae" by G. Keller and C. Liverani. We consider an open dynamical system with a hole of measure ε. If, as ε decreases, the holes form a decreasing family of subintervals of I, and as ε tends to zero they shrink to a hole consisting of a single point, then the above theorem allows us to prove the differentiability of the escape rate of the open system, viewed as a function of the size of the hole. In particular, an explicit formula is derived for the first-order expansion of the escape rate.
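For reference, the escape rate whose differentiability is established is the standard one for open systems: with $T$ the map, $H_\varepsilon$ the hole and $\mu$ a reference measure (typically Lebesgue measure or the absolutely continuous invariant measure of the closed system),

$$
\mathcal{E}(\varepsilon) \;=\; -\lim_{n\to\infty} \frac{1}{n} \log \mu\bigl(\{\, x \in I : T^{k}x \notin H_\varepsilon \ \text{for all } 0 \le k \le n \,\}\bigr),
$$

and the explicit formula mentioned above gives the first-order behaviour of $\mathcal{E}(\varepsilon)$ as the hole shrinks to a point.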

Relevance: 10.00%

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that rapidly becomes intractable as the size of the matrices involved grows. However, if we are not interested in the matrix function itself but only in its product with a vector, the problem becomes simpler, and there is a chance of solving it even when computing the matrix mean itself would be infeasible. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by the characteristics of the input matrices. Our results suggest that a few such characteristics have a significant bearing on performance and that, although there is no single best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
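As a dense, small-scale reference (not the sparse quadrature or Krylov algorithms studied in the thesis), the identity A #_t B = A (A⁻¹B)^t for symmetric positive definite A and B gives a direct way to evaluate (A #_t B) v:

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    def geomean_apply(A, B, v, t=0.5):
        """Dense reference for (A #_t B) v with A, B symmetric positive definite.

        Uses A #_t B = A (A^{-1} B)^t; A^{-1} B has positive eigenvalues because
        it is similar to the SPD matrix A^{-1/2} B A^{-1/2}."""
        C = np.linalg.solve(A, B)          # A^{-1} B
        return A @ (fractional_matrix_power(C, t) @ v)

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
    M = rng.standard_normal((5, 5)); B = M @ M.T + 5 * np.eye(5)
    v = rng.standard_normal(5)
    print(geomean_apply(A, B, v))

For large sparse pencils this dense route is exactly what the thesis avoids, by approximating the action of the fractional power with quadrature formulae or Krylov subspace methods.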

Relevance: 10.00%

Abstract:

A general approach is presented for implementing discrete transforms as a set of first-order or second-order recursive digital filters. Clenshaw's recurrence formulae are used to formulate the second-order filters. The resulting structure is suitable for efficient implementation of discrete transforms in VLSI or FPGA circuits. The general approach is applied to the discrete Legendre transform as an illustration.
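As a small illustration of the Clenshaw recurrence underlying the second-order filter structure (a generic sketch of the recurrence itself, not of the paper's filter implementation), here is Clenshaw evaluation of a Legendre series, checked against NumPy:

    import numpy as np

    def clenshaw_legendre(c, x):
        """Evaluate sum_k c[k] * P_k(x) with Clenshaw's recurrence.

        Legendre three-term recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1},
        i.e. alpha_k(x) = (2k+1) x / (k+1) and beta_k = -k / (k+1)."""
        n = len(c) - 1
        b1 = 0.0   # b_{k+1}
        b2 = 0.0   # b_{k+2}
        for k in range(n, 0, -1):
            alpha_k = (2 * k + 1) * x / (k + 1)
            beta_k1 = -(k + 1) / (k + 2)          # beta_{k+1}
            b1, b2 = c[k] + alpha_k * b1 + beta_k1 * b2, b1
        # S = c[0] P_0 + b_1 P_1 + beta_1 P_0 b_2, with P_0 = 1, P_1 = x, beta_1 = -1/2
        return c[0] + x * b1 - 0.5 * b2

    c = np.array([0.3, -1.2, 0.5, 2.0])
    x = 0.37
    print(clenshaw_legendre(c, x), np.polynomial.legendre.legval(x, c))

Each loop iteration is a second-order recursion in b, which is what maps naturally onto a second-order recursive digital filter.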

Relevance: 10.00%

Abstract:

In Malani and Neilsen (1992) we proposed alternative estimates of the survival function (for time to disease) using a simple marker that describes the time to some intermediate stage in a disease process. In this paper we derive the asymptotic variance of one such proposed estimator using two different methods and compare the terms of order 1/n when there is no censoring. In the absence of censoring, the asymptotic variance obtained using the Greenwood-type approach converges to the exact variance up to terms involving 1/n. However, the asymptotic variance obtained using counting process theory and results from Voelkel and Crowley (1984) on semi-Markov processes has a different term of order 1/n. It is not clear to us at this point why the variance formulae obtained using the latter approach give different results.
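For reference, the classical Greenwood formula for the Kaplan-Meier estimator, quoted here only to indicate what a "Greenwood-type approach" computes (the marker-based estimator of the paper has its own analogue), is

$$
\widehat{\operatorname{Var}}\bigl(\hat S(t)\bigr) \;=\; \hat S(t)^2 \sum_{t_i \le t} \frac{d_i}{n_i\,(n_i - d_i)},
$$

where $d_i$ is the number of events at time $t_i$ and $n_i$ is the number at risk just before $t_i$.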

Relevance: 10.00%

Abstract:

BACKGROUND: Several conversion tables and formulae have been suggested to correct applanation intraocular pressure (IOP) for central corneal thickness (CCT). CCT is also thought to represent an independent glaucoma risk factor. In an attempt to integrate IOP and CCT into a unified risk factor and avoid uncertain correction for tonometric inaccuracy, a new pressure-to-cornea index (PCI) is proposed. METHODS: PCI (IOP/CCT³) was defined as the ratio between untreated IOP and the cube of CCT in mm (ultrasound pachymetry). The PCI distribution in 220 normal controls, 53 patients with normal-tension glaucoma (NTG), 76 with ocular hypertension (OHT), and 89 with primary open-angle glaucoma (POAG) was investigated. PCI's ability to discriminate between glaucoma (NTG+POAG) and non-glaucoma (controls+OHT) was compared with that of three published formulae for correcting IOP for CCT. Receiver operating characteristic (ROC) curves were constructed. RESULTS: Mean PCI values were: controls 92.0 (SD 24.8), NTG 129.1 (SD 25.8), OHT 134.0 (SD 26.5), POAG 173.6 (SD 40.9). To minimise IOP bias, eyes within the same 2 mm Hg range between 16 and 29 mm Hg (16-17, 18-19, etc) were compared separately: control and NTG eyes as well as OHT and POAG eyes differed significantly. PCI demonstrated a larger area under the ROC curve (AUC) and significantly higher sensitivity at fixed 80% and 90% specificities compared with each of the correction formulae; the optimum PCI cut-off value was 133.8. CONCLUSIONS: A PCI range of 120-140 is proposed as the upper limit of "normality", 120 being the cut-off value for eyes with untreated pressures ≤21 mm Hg and 140 for eyes with untreated pressures ≥22 mm Hg. PCI may reflect individual susceptibility to a given IOP level, and thus represent a glaucoma risk factor. Longitudinal studies are needed to prove its prognostic value.
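A minimal calculation of the proposed index, with units as defined above (IOP in mm Hg, CCT in mm); the example values are illustrative, not study data:

    def pci(iop_mmhg, cct_mm):
        """Pressure-to-cornea index: untreated IOP divided by the cube of CCT (in mm)."""
        return iop_mmhg / cct_mm ** 3

    # e.g. an IOP of 16 mm Hg with a 0.550 mm cornea:
    print(round(pci(16, 0.550), 1))   # ~96.2, close to the reported normal mean of 92.0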

Relevance: 10.00%

Abstract:

Several methods based on kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Estimating such a level set itself, and not solely its volume, and quantifying the associated uncertainty are, however, not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of the estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce the level set estimation uncertainty.
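One explicit ingredient of the Gaussian process set-up (standard kriging arithmetic, not the paper's random-set estimator itself) is the pointwise coverage probability of the excursion set above a threshold $T$, given the posterior mean $m_n$ and standard deviation $s_n$ after $n$ evaluations:

$$
p_n(x) \;=\; \mathbb{P}\bigl(\xi(x) \ge T \mid \xi(x_1), \dots, \xi(x_n)\bigr) \;=\; \Phi\!\left(\frac{m_n(x) - T}{s_n(x)}\right),
$$

where $\Phi$ is the standard normal distribution function; level set estimates and their uncertainty quantification are built from quantities of this kind.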

Relevance: 10.00%

Abstract:

A characterization of the von Mises–Fisher random variable is provided in terms of the first exit point of the drifted Wiener process from the unit hypersphere. Laplace transform formulae for the first exit time of the drifted Wiener process from the unit hypersphere are given. Post representations in terms of Bell polynomials are provided for the densities of the first exit times from the circle and from the sphere.
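For reference, the von Mises–Fisher density on the unit sphere $S^{d-1} \subset \mathbb{R}^d$ that is being characterized is the standard one, with mean direction $\mu$ and concentration $\kappa \ge 0$:

$$
f(x;\mu,\kappa) \;=\; \frac{\kappa^{d/2-1}}{(2\pi)^{d/2}\, I_{d/2-1}(\kappa)}\, \exp\!\bigl(\kappa\,\mu^{\mathsf T} x\bigr), \qquad x \in S^{d-1},
$$

where $I_\nu$ denotes the modified Bessel function of the first kind.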

Relevance: 10.00%

Abstract:

OBJECTIVES: The aim of this study was to determine whether Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)-based or Cockcroft-Gault (CG)-based estimated glomerular filtration rates (eGFRs) perform better in the cohort setting for predicting moderate/advanced chronic kidney disease (CKD) or end-stage renal disease (ESRD). METHODS: A total of 9521 persons in the EuroSIDA study contributed 133 873 eGFRs. Poisson regression was used to model the incidence of moderate and advanced CKD (confirmed eGFR <60 and <30 mL/min/1.73 m², respectively) or ESRD (fatal/nonfatal) using CG and CKD-EPI eGFRs. RESULTS: Of the 133 873 eGFR values, the ratio of CG to CKD-EPI was ≥1.1 in 22 092 (16.5%) and the difference between them (CG minus CKD-EPI) was ≥10 mL/min/1.73 m² in 20 867 (15.6%). Differences between CKD-EPI and CG were much greater when CG was not standardized for body surface area (BSA). A total of 403 persons developed moderate CKD using CG [incidence 8.9/1000 person-years of follow-up (PYFU); 95% confidence interval (CI) 8.0-9.8] and 364 using CKD-EPI (incidence 7.3/1000 PYFU; 95% CI 6.5-8.0). CG-derived eGFRs performed as well as CKD-EPI-derived eGFRs in predicting ESRD (n = 36) and death (n = 565), as measured by the Akaike information criterion. CG-based moderate and advanced CKD were associated with ESRD [adjusted incidence rate ratio (aIRR) 7.17; 95% CI 2.65-19.36 and aIRR 23.46; 95% CI 8.54-64.48, respectively], as were CKD-EPI-based moderate and advanced CKD (aIRR 12.41; 95% CI 4.74-32.51 and aIRR 12.44; 95% CI 4.83-32.03, respectively). CONCLUSIONS: Differences between eGFRs using CG adjusted for BSA and CKD-EPI were modest. In the absence of a gold standard, the two formulae predicted clinical outcomes with equal precision and can be used to estimate GFR in HIV-positive persons.
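The CG estimate and its body-surface-area standardization referred to above follow standard published formulae; the sketch below uses the usual Cockcroft-Gault and Du Bois expressions (the study's exact implementation details, such as which weight measure was used, are not stated in the abstract):

    def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
        """Cockcroft-Gault creatinine clearance in mL/min."""
        crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
        return crcl * 0.85 if female else crcl

    def bsa_dubois(height_cm, weight_kg):
        """Du Bois body surface area in m^2."""
        return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

    def cg_standardized(age, weight_kg, scr, female, height_cm):
        """CG clearance standardized to 1.73 m^2 of body surface area."""
        cg = cockcroft_gault(age, weight_kg, scr, female)
        return cg * 1.73 / bsa_dubois(height_cm, weight_kg)

    print(round(cg_standardized(45, 70, 1.0, False, 175), 1))   # mL/min/1.73 m^2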

Relevance: 10.00%

Abstract:

Holtite, approximately (Al,Ta,□)Al₆(BO₃)(Si,Sb³⁺,As³⁺)Σ₃O₁₂(O,OH,□)Σ₃, is a member of the dumortierite group that has been found in pegmatite, or in alluvial deposits derived from pegmatite, at three localities: Greenbushes, Western Australia; Voron'i Tundry, Kola Peninsula, Russia; and Szklary, Lower Silesia, Poland. Holtite can contain >30 wt.% Sb₂O₃, As₂O₃, Ta₂O₅, Nb₂O₅ and TiO₂ (taken together), but none of these constituents is dominant at a crystallographic site, which raises the question of whether this mineral is distinct from dumortierite. The crystal structures of four samples from the three localities have been refined to R₁ = 0.02-0.05. The results show dominantly: Al, Ta and vacancies at the Al(1) position; Al and vacancies at the Al(2), Al(3) and Al(4) sites; Si and vacancies at the Si positions; and Sb, As and vacancies at the Sb sites, for both Sb-poor (holtite I) and Sb-rich (holtite II) specimens. Although charge-balance calculations based on our single-crystal structure refinements suggest that essentially no water is present, Fourier-transform infrared spectra confirm that some OH is present in the three samples that could be measured. By analogy with dumortierite, the largest peak at 3505-3490 cm⁻¹ is identified with OH at the O(2) and O(7) positions. The single-crystal X-ray refinements and FTIR results suggest the following general formula for holtite: Al_{7-(5x+y+z)/3}(Ta,Nb)_x□_{(2x+y+z)/3}BSi_{3-y}(Sb,As)_yO_{18-y-z}(OH)_z, where x is the total number of pentavalent cations, y is the total amount of Sb + As, and z ≤ y is the total amount of OH. Comparison with the electron-microprobe compositions suggests the approximate general formulae Al_{5.83}(Ta,Nb)_{0.50}□_{0.67}BSi_{2.50}(Sb,As)_{0.50}O_{17.00}(OH)_{0.50} and Al_{5.92}(Ta,Nb)_{0.25}□_{0.83}BSi_{2.00}(Sb,As)_{1.00}O_{16.00}(OH)_{1.00} for holtite I and holtite II, respectively. However, the crystal-structure refinements do not indicate a fundamental difference in cation ordering that might serve as a criterion for recognizing the two holtites as distinct species, and the anion compositions are also not sufficiently different. Moreover, the available analyses suggest the possibility of a continuum in the Si/(Sb + As) ratio between holtite I and dumortierite, and at least a partial continuum between holtite I and holtite II. We recommend that use of the terms holtite I and holtite II be discontinued.
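As an arithmetic consistency check on the general formula (using the x, y and z values implied by the holtite I composition quoted above, i.e. x = 0.50, y = 0.50, z = 0.50):

$$
\mathrm{Al}:\; 7 - \tfrac{5(0.50)+0.50+0.50}{3} = 7 - \tfrac{3.50}{3} \approx 5.83, \qquad
\square:\; \tfrac{2(0.50)+0.50+0.50}{3} = \tfrac{2.00}{3} \approx 0.67,
$$

$$
\mathrm{Si}:\; 3 - 0.50 = 2.50, \qquad \mathrm{O}:\; 18 - 0.50 - 0.50 = 17.00,
$$

which reproduces the quoted Al_{5.83}(Ta,Nb)_{0.50}□_{0.67}BSi_{2.50}(Sb,As)_{0.50}O_{17.00}(OH)_{0.50}; the holtite II values (x = 0.25, y = 1.00, z = 1.00) check out in the same way.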

Relevance: 10.00%

Abstract:

The important application of semistatic hedging in financial markets naturally leads to the notion of quasi-self-dual processes. The focus of our study is to give new characterizations of quasi-self-duality. We analyze quasi-self-dual Lévy-driven markets which do not admit arbitrage opportunities and derive a set of equivalent conditions for the stochastic logarithm of quasi-self-dual martingale models. Since, for a nonvanishing order parameter, two martingale properties have to be satisfied simultaneously, there is a nontrivial relation between the order parameter and the shift parameter representing carrying costs in financial applications. This leads to an equation containing an integral term which has to be inverted in applications. We first discuss several important properties of this equation and, for some well-known Lévy-driven models, derive a family of closed-form inversion formulae.
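For orientation, the kind of martingale condition referred to above is standard for exponential Lévy models: if $S_t = S_0 e^{X_t}$ with $X$ a Lévy process with characteristic triplet $(b, \sigma^2, \nu)$ and $\int_{|x|>1} e^{x}\,\nu(\mathrm{d}x) < \infty$, then $S$ is a martingale if and only if

$$
b + \frac{\sigma^2}{2} + \int_{\mathbb{R}} \bigl(e^{x} - 1 - x\,\mathbf{1}_{\{|x|\le 1\}}\bigr)\,\nu(\mathrm{d}x) \;=\; 0.
$$

In the quasi-self-dual setting described above, a second condition of a similar type has to hold at the same time, which is what ties the order parameter to the shift parameter.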

Relevance: 10.00%

Abstract:

We consider one-dimensional Schrödinger-type operators in a bounded interval with non-self-adjoint Robin-type boundary conditions. It is well known that such operators are generically conjugate to normal operators via a similarity transformation. Motivated by recent interest in quasi-Hermitian Hamiltonians in quantum mechanics, we study the properties of the transformations and of the similar operators in detail. In the case of parity and time-reversal (PT) boundary conditions, we establish closed integral-type formulae for the similarity transformations, derive a non-local self-adjoint operator similar to the Schrödinger operator, and also find the associated “charge conjugation” operator, which plays the role of the fundamental symmetry in a Krein-space reformulation of the problem.
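As an illustration of such boundary conditions (a commonly studied model of the PT-symmetric Robin type in the literature; the parametrization used in the paper may differ), consider $-\mathrm{d}^2/\mathrm{d}x^2$ on $(0, d)$ with

$$
\psi'(0) + i\alpha\,\psi(0) = 0, \qquad \psi'(d) + i\alpha\,\psi(d) = 0, \qquad \alpha \in \mathbb{R}.
$$

These conditions are not self-adjoint for $\alpha \neq 0$, but they are invariant under the combined action of parity ($x \mapsto d - x$) and time reversal (complex conjugation), which is what makes the PT machinery above applicable.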

Relevance: 10.00%

Abstract:

XMapTools is a MATLAB®-based graphical user interface program for electron microprobe X-ray image processing, which can be used to estimate the pressure–temperature conditions of crystallization of minerals in metamorphic rocks. The program (available online at http://www.xmaptools.com) provides a method to standardize raw electron microprobe data and includes functions to calculate the oxide weight percent compositions of various minerals. A set of external functions is provided to calculate structural formulae from the standardized analyses, as well as to estimate pressure–temperature conditions of crystallization using empirical and semi-empirical thermobarometers from the literature. Two graphical user interface modules, Chem2D and Triplot3D, are used to plot mineral compositions in binary and ternary diagrams. As an example, the software is used to study a high-pressure Himalayan eclogite sample from the Stak massif in Pakistan. The high-pressure paragenesis, consisting of omphacite and garnet, has been retrogressed to a symplectitic assemblage of amphibole, plagioclase and clinopyroxene. Mineral compositions corresponding to ~165,000 analyses yield estimates for the eclogitic pressure–temperature retrograde path from 25 kbar to 9 kbar. The corresponding pressure–temperature maps were plotted and used to interpret the link between the equilibrium conditions of crystallization and the symplectitic microstructures. This example illustrates the usefulness of XMapTools for studying variations in the chemical composition of minerals and for retrieving information on metamorphic conditions at the microscale, towards the computation of continuous pressure–temperature (and relative time) paths in zoned metamorphic minerals not affected by post-crystallization diffusion.
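The structural-formula step mentioned above reduces to standard normalization arithmetic: convert oxide wt.% to moles, then scale cations to a chosen number of oxygens per formula unit. The sketch below illustrates that arithmetic only (it is not XMapTools code, and the oxide values are placeholders):

    # Cations per formula unit from oxide wt.%, normalized to a fixed number of oxygens.
    OXIDES = {  # oxide: (molar mass in g/mol, cations per oxide, oxygens per oxide)
        "SiO2": (60.08, 1, 2),
        "Al2O3": (101.96, 2, 3),
        "MgO": (40.30, 1, 1),
    }

    def structural_formula(wt_percent, oxygens_pfu):
        moles_o = {ox: wt / OXIDES[ox][0] * OXIDES[ox][2] for ox, wt in wt_percent.items()}
        scale = oxygens_pfu / sum(moles_o.values())
        return {ox: wt / OXIDES[ox][0] * OXIDES[ox][1] * scale
                for ox, wt in wt_percent.items()}

    # Placeholder analysis normalized to 6 oxygens:
    print(structural_formula({"SiO2": 55.0, "Al2O3": 5.0, "MgO": 20.0}, oxygens_pfu=6))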

Relevance: 10.00%

Abstract:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch-sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.
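A minimal numpy sketch of the underlying idea, namely updating an existing conditional sample path with q new noiseless observations via residual (Matheron-style) conditioning; the notation and implementation details are illustrative, not the paper's formulae:

    import numpy as np

    def rbf(X, Y, ell=0.3):
        """Squared-exponential covariance between two sets of 1-d inputs."""
        d = X[:, None] - Y[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)                               # simulation grid
    xn, yn = np.array([0.2, 0.8]), np.array([0.5, -0.3])     # first n = 2 observations

    # Posterior mean/covariance given the first n points, and one conditional sample path.
    Knn = rbf(xn, xn) + 1e-10 * np.eye(len(xn))
    w = np.linalg.solve(Knn, rbf(xn, x)).T                   # kriging weights
    m_n = w @ yn
    C_n = rbf(x, x) - w @ rbf(xn, x)
    L = np.linalg.cholesky(C_n + 1e-8 * np.eye(len(x)))
    path_n = m_n + L @ rng.standard_normal(len(x))

    # Update that path with q = 1 new observation, reusing only n-stage by-products:
    # path_{n+q} = path_n + C_n(., new) C_n(new, new)^{-1} (y_new - path_n(new)).
    inew = np.array([120])                                   # grid index of the new input
    ynew = np.array([0.1])
    gain = C_n[:, inew] @ np.linalg.inv(C_n[np.ix_(inew, inew)] + 1e-10 * np.eye(1))
    path_nq = path_n + gain @ (ynew - path_n[inew])
    print(path_nq[inew], ynew)                               # the updated path honours the new datum

No re-factorization of the full (n + q)-point covariance matrix is needed here, which is in the spirit of the complexity savings discussed above.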