890 results for lower upper bound estimation


Relevance: 100.00%

Abstract:

We use benthic foraminifers to reconstruct the Neogene paleobathymetric history of the Marion Plateau, Queensland Plateau, Townsville Trough, and Queensland Trough on the northeastern Australian margin (Ocean Drilling Program Leg 133). Western Queensland Plateau Site 811/825 (present depth, ~938 m) deepened from the neritic zone (0-200 m) to the upper bathyal zone (200-600 m) during the middle Miocene (~13-14 Ma), with further deepening into the middle bathyal zone (600-1000 m) occurring during the late Miocene (~7 Ma). A depth transect across the southern Queensland Plateau shows that deepening from the outer neritic zone (100-200 m) to the upper bathyal zone began during the latest Miocene (~6 Ma) at the deepest location (Site 813, present depth, 539.1 m), whereas the shallower Sites 812 and 814 (present depths, 461.6 and 520.4 m, respectively) deepened during the late Pliocene (~2.7 and ~2.9 Ma). At Marion Plateau Site 815 (present depth, 465.5 m), water depth increased during the late Miocene (~6.7 Ma) from the outer neritic to the upper bathyal zone. Nearby Site 816 (present water depth, 437.3 m) contains Pliocene upper bathyal assemblages that directly overlie middle Miocene shallow neritic deposits; the timing of the deepening is uncertain because of a late Miocene hiatus. On the northern slope of the Townsville Trough (Site 817, present depth, 1015.8 m), benthic foraminifers and sponge spicules indicate deepening from the lower upper bathyal (400-600 m) to the middle bathyal zone in the late Miocene (by ~6.8 Ma). Benthic foraminiferal faunas at nearby Site 818 (present water depth, 752.1 m) do not show evidence of paleobathymetric change; however, a late Pliocene (~2-3 Ma) increase in downslope transport may have been related to the drowning of the Queensland Plateau. Site 822 (present depth, 955.2 m), at the base of the Great Barrier Reef slope, deepened from the upper bathyal to the middle bathyal zone during the late Pliocene (by ~2.3 Ma). 
Queensland Trough Site 823 (present depth, 1638.4 m) deepened from the middle bathyal to the lower bathyal (1000-2000 m) zone during the late Miocene (~6.5 Ma). Benthic foraminiferal faunal changes at these Leg 133 sites indicate that rapid deepening occurred during the middle Miocene (~13-14 Ma), late Miocene (6-7 Ma), and late Pliocene (2-3 Ma) along the northeastern Australian margin.

Relevance: 100.00%

Abstract:

A scaling law is presented that provides a complete solution to the equations bounding the stability and rupture of thin films. The scaling law uses the fundamental physicochemical properties of the film and interface to calculate bounds for the critical thickness and other key film thicknesses, the relevant waveforms associated with instability and rupture, and film lifetimes. Critical thicknesses calculated from the scaling law are shown to bound the values reported in the literature for numerous emulsion and foam films. The majority of critical thickness values are between 15 and 40% lower than the upper-bound critical thickness provided by the scaling law.

Relevance: 100.00%

Abstract:

This work describes the programme of activities relating to a mechanical study of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling using experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260°C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model presented is based on the upper bound theorem, using a variation of the existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, the constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.

Relevance: 100.00%

Abstract:

The IEEE 802.11 standard has achieved huge success in the past decade and is still under development to provide higher physical data rates and better quality of service (QoS). An important problem for the development and optimization of IEEE 802.11 networks is the modeling of the MAC layer channel access protocol. Although there are already many theoretical analyses of the 802.11 MAC protocol in the literature, most of the models focus on saturated traffic and assume an infinite buffer at the MAC layer. In this paper we develop a unified analytical model for the IEEE 802.11 MAC protocol in ad hoc networks. The impacts of channel access parameters, traffic rate, and buffer size at the MAC layer are modeled with the assistance of a generalized Markov chain and an M/G/1/K queue model. Throughput, packet delivery delay, and dropping probability can then be obtained. Extensive simulations show the analytical model is highly accurate. From the analytical model it is shown that for practical buffer configurations (e.g. buffer size larger than one), we can maximize the total throughput and reduce the packet blocking probability (due to limited buffer size) and the average queuing delay to zero by effectively controlling the offered load. The average MAC layer service delay, as well as its standard deviation, is also much lower than that in saturated conditions and has an upper bound. It is also observed that the optimal load is very close to the maximum achievable throughput regardless of the number of stations or buffer size. Moreover, the model is scalable for performance analysis of 802.11e in unsaturated conditions and of 802.11 ad hoc networks with heterogeneous traffic flows. © 2012 KSI.
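The role the finite MAC-layer buffer plays in the blocking (packet dropping) probability can be illustrated with the simpler M/M/1/K queue. This is a sketch only: the paper uses an M/G/1/K model, and the arrival rate `lam`, service rate `mu`, and buffer size `K` below are arbitrary illustrative parameters, not values from the paper.

```python
def mm1k_blocking(lam, mu, K):
    """Stationary blocking probability of an M/M/1/K queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)  # uniform stationary distribution when rho == 1
    # P(N = K) = (1 - rho) * rho**K / (1 - rho**(K + 1))
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

# A larger buffer drives the blocking probability toward zero, mirroring
# the abstract's point about practical buffer configurations.
print(mm1k_blocking(0.8, 1.0, 1))   # small buffer: frequent drops
print(mm1k_blocking(0.8, 1.0, 50))  # large buffer: drops near zero
```

The same qualitative trend (blocking vanishing as K grows while the offered load stays below capacity) is what allows the offered-load control described in the abstract.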

Relevance: 100.00%

Abstract:

The exponentially increasing demand on operational data rate has been met with technological advances in telecommunication systems such as advanced multilevel and multidimensional modulation formats, fast signal processing, and research into new media for signal transmission. Since current communication channels are essentially nonlinear, estimation of the Shannon capacity for modern nonlinear communication channels is required. This PhD research project has targeted the study of the capacity limits of different nonlinear communication channels, with a view to enabling a significant enhancement in the data rate of currently deployed fiber networks. In the current study, a theoretical framework for calculating the Shannon capacity of nonlinear regenerative channels has been developed and illustrated with the example of the regenerative Fourier transform (RFT) proposed here. Moreover, the maximum gain in Shannon capacity due to regeneration (that is, the Shannon capacity of a system with ideal regenerators, which is the upper bound on capacity for all regenerative schemes) is calculated analytically. Thus, we derived a regenerative limit to which the capacity of any regenerative system can be compared, as an analogue of the seminal linear Shannon limit. A general optimization scheme (regenerative mapping) has been introduced and demonstrated on systems with different regenerative elements: phase-sensitive amplifiers and the multilevel regenerative schemes proposed here, namely the regenerative Fourier transform and the coupled nonlinear loop mirror.
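For context, the linear Shannon limit that the regenerative limit is compared against is, for an AWGN channel, C = log2(1 + SNR). A minimal sketch (the SNR values below are illustrative, not from the thesis):

```python
import math

def awgn_capacity(snr_db):
    """Shannon capacity of an AWGN channel in bits/symbol: C = log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10.0)
    return math.log2(1 + snr_linear)

# At high SNR the capacity grows roughly linearly in dB (~1 bit per 3 dB).
for snr_db in (0, 10, 20):
    print(snr_db, round(awgn_capacity(snr_db), 3))
```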

Relevance: 100.00%

Abstract:

Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper level from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower level from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at confining pressure equal to differential pressure average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
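The porosity-from-resistivity step mentioned in the abstract inverts Archie's Law, F = R0/Rw = a·φ^(−m). A sketch of that inversion; the tortuosity factor `a`, cementation exponent `m`, and resistivity values below are generic illustrative choices, not Hole 504B data:

```python
def archie_porosity(R0, Rw, a=1.0, m=2.0):
    """Invert Archie's Law F = R0/Rw = a * phi**(-m) for porosity phi."""
    F = R0 / Rw  # formation factor
    return (a / F) ** (1.0 / m)

# Higher formation resistivity relative to the pore water implies lower porosity.
print(archie_porosity(R0=100.0, Rw=1.0))  # -> 0.1 (10% porosity)
```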

Relevance: 100.00%

Abstract:

The Laurentide Ice Sheet (LIS) was a large, dynamic ice sheet in the early Holocene. The glacial events through Hudson Strait leading to its eventual demise are recorded in the well-dated Labrador shelf core MD99-2236 from the Cartwright Saddle. We develop a detailed history of the timing of ice-sheet discharge events from the Hudson Strait outlet of the LIS during the Holocene using high-resolution detrital carbonate, ice-rafted detritus (IRD), δ18O, and sediment color data. Eight detrital carbonate peaks (DCPs) associated with IRD peaks and light oxygen isotope events punctuate the MD99-2236 record between 11.5 and 8.0 ka. We use the stratigraphy of the DCPs developed from MD99-2236 to select the appropriate ΔR to calibrate the ages of recorded glacial events in Hudson Bay and Hudson Strait such that they match the DCPs in MD99-2236. We associate the eight DCPs with H0, the Gold Cove advance, the Noble Inlet advance, the initial retreat of the Hudson Strait ice stream (HSIS) from Hudson Strait, the opening of the Tyrrell Sea, and the drainage of glacial lakes Agassiz and Ojibway. The opening of Foxe Channel and the retreat of glacial ice from Foxe Basin are represented by a shoulder in the carbonate data. A ΔR of 350 years applied to the radiocarbon ages constraining glacial events from H0 through the opening of the Tyrrell Sea provided the best match with the MD99-2236 DCPs; ΔR values and ages from the literature are used for the younger events. A very close age match was achieved between the 8.2 ka cold event in the Greenland ice cores, DCP7 (8.15 ka BP), and the drainage of glacial lakes Agassiz and Ojibway.
Our stratigraphic comparison between the DCPs in MD99-2236 and the calibrated ages of Hudson Strait/Bay deglacial events shows that the retreat of the HSIS, the opening of the Tyrrell Sea, and the catastrophic drainage of glacial lakes Agassiz and Ojibway at 8.2 ka are separate events that have been combined in previous estimates of the timing of the 8.2 ka event from marine records. SW Iceland shelf core MD99-2256 documents freshwater entrainment into the subpolar gyre from the Hudson Strait outlet via the Labrador, North Atlantic, and Irminger currents. The timing of freshwater release from the LIS Hudson Strait outlet in MD99-2236 matches evidence for freshwater forcing and LIS icebergs carrying foreign minerals to the SW Iceland shelf between 11.5 and 8.2 ka. The congruency of these records supports the conclusion that freshwater from the retreat of the LIS was entrained through Hudson Strait into the subpolar gyre, and it identifies specific time periods when pulses of LIS freshwater were present to influence climate.

Relevance: 100.00%

Abstract:

International audience

Relevance: 100.00%

Abstract:

In a paper by Biro et al. [7], a novel twist on guarding in art galleries is introduced. A beacon is a fixed point with an attraction pull that can move points within the polygon. Points move greedily to monotonically decrease their Euclidean distance to the beacon, either by moving straight towards the beacon or by sliding on the edges of the polygon. The beacon attracts a point if the point eventually reaches the beacon. Unlike most variations of the art gallery problem, beacon attraction has the intriguing property of being asymmetric, leading to separate definitions of the attraction region and the inverse attraction region. The attraction region of a beacon is the set of points that it attracts. For a given point in the polygon, the inverse attraction region is the set of beacon locations that can attract the point. We first study the characteristics of beacon attraction. We consider the quality of a "successful" beacon attraction and provide an upper bound of $\sqrt{2}$ on the ratio between the length of the beacon trajectory and the geodesic distance in a simple polygon. In addition, we provide an example of a polygon with holes in which this ratio is unbounded. Next we consider the problem of computing the shortest beacon watchtower in a polygonal terrain and present an $O(n \log n)$ time algorithm to solve this problem. In doing this, we introduce $O(n \log n)$ time algorithms to compute the beacon kernel and the inverse beacon kernel in a monotone polygon. We also prove that $\Omega(n \log n)$ time is a lower bound for computing the beacon kernel of a monotone polygon. Finally, we study the inverse attraction region of a point in a simple polygon. We present algorithms to efficiently compute the inverse attraction region of a point for simple, monotone, and terrain polygons with respective time complexities $O(n^2)$, $O(n \log n)$ and $O(n)$.
We show that the inverse attraction region of a point in a simple polygon has linear complexity and the problem of computing the inverse attraction region has a lower bound of $\Omega(n \log n)$ in monotone polygons and consequently in simple polygons.

Relevance: 100.00%

Abstract:

A planar polynomial differential system has a finite number of limit cycles. However, finding an upper bound on the number of limit cycles is an open problem for general nonlinear dynamical systems. In this paper, we investigate a class of Liénard systems of the form x'=y, y'=f(x)+y g(x) with deg f=5 and deg g=4. We prove that the related elliptic integrals of the Liénard systems have at most three zeros, including multiple zeros, which implies that the number of limit cycles bifurcating from the periodic orbits of the unperturbed system is less than or equal to 3.
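The qualitative behaviour behind limit cycles (trajectories of x'=y, y'=f(x)+y g(x) settling onto a closed orbit) can be sketched numerically. The f and g below are the classic van der Pol choice (degrees 1 and 2), used only as an illustrative stand-in for the degree-5/degree-4 polynomials actually studied in the paper:

```python
def rk4_step(F, state, h):
    """One classical Runge-Kutta (RK4) step for state' = F(state)."""
    k1 = F(state)
    k2 = F([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = F([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = F([s + h * k for s, k in zip(state, k3)])
    return [s + h * (a + 2 * b + 2 * c + d) / 6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

MU = 1.0
f = lambda x: -x               # restoring force (illustrative, not the paper's deg-5 f)
g = lambda x: MU * (1 - x**2)  # damping term (illustrative, not the paper's deg-4 g)

def lienard(state):
    x, y = state
    return [y, f(x) + y * g(x)]

state, h = [0.1, 0.0], 0.01
xs = []
for i in range(20000):
    state = rk4_step(lienard, state, h)
    if i >= 15000:             # record only after the transient has decayed
        xs.append(state[0])

# The trajectory settles onto a limit cycle; for van der Pol with MU = 1
# the cycle's amplitude is known to be close to 2.
amplitude = max(abs(v) for v in xs)
print(round(amplitude, 2))
```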

Relevance: 100.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
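The size-based penalty term A³√((log n)/m) can be computed directly. A sketch; the values of A, n, and m below are arbitrary, and the dropped log A and log m factors are ignored as in the abstract:

```python
import math

def weight_size_penalty(A, n, m):
    """The A**3 * sqrt(log(n) / m) term bounding the generalization gap
    (log A and log m factors ignored, as in the abstract)."""
    return A**3 * math.sqrt(math.log(n) / m)

# The penalty depends only logarithmically on the input dimension n,
# but strongly on the weight-magnitude bound A and the sample count m.
print(weight_size_penalty(A=2.0, n=1000, m=10000))
print(weight_size_penalty(A=2.0, n=1000, m=40000))  # 4x the data halves the penalty
```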

Relevance: 100.00%

Abstract:

We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.

Relevance: 100.00%

Abstract:

We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
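The function φ(z) = (e^z − 1)/z extends to matrices through its Taylor series φ(z) = Σ_{k≥0} z^k/(k+1)!. A direct series evaluation of φ(tA)b, workable only for small, well-scaled tA (the Krylov subspace methods of the paper target the large stiff systems where this naive approach is impractical):

```python
import math

def mat_vec(A, v):
    """Dense matrix-vector product for a list-of-lists matrix."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def phi_times_b(A, b, t, terms=30):
    """phi(tA) b with phi(z) = (e**z - 1)/z, via the truncated Taylor series
    phi(z) = sum_{k>=0} z**k / (k+1)!."""
    tA = [[t * a for a in row] for row in A]
    term = list(b)                 # (tA)**0 b
    result = list(term)            # k = 0 coefficient is 1/1! = 1
    for k in range(1, terms):
        term = mat_vec(tA, term)   # (tA)**k b
        result = [r + x / math.factorial(k + 1) for r, x in zip(result, term)]
    return result

# Scalar sanity check: phi(1) = e - 1 ≈ 1.71828 for the 1x1 "matrix" [[1.0]].
print(phi_times_b([[1.0]], [1.0], 1.0)[0])
```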

Relevance: 100.00%

Abstract:

This paper formulates a node-based smoothed conforming point interpolation method (NS-CPIM) for solid mechanics. In the proposed NS-CPIM, higher-order conforming PIM shape functions (CPIM) are constructed to produce a continuous and piecewise-quadratic displacement field over the whole problem domain, whereby the smoothed strain field is obtained through a smoothing operation over each smoothing domain associated with the domain nodes. The smoothed Galerkin weak form is then developed to create the discretized system equations. Numerical studies have demonstrated the following good properties: the NS-CPIM (1) passes both the standard and quadratic patch tests; (2) provides an upper bound on the strain energy; (3) avoids volumetric locking; and (4) provides higher accuracy than the node-based smoothed schemes of the original PIMs.

Relevance: 100.00%

Abstract:

In this paper, we review the sequential slotted amplify-decode-and-forward (SADF) protocol with half-duplex single-antenna terminals and evaluate its performance in terms of pairwise error probability (PEP). We obtain the PEP upper bound of the protocol and find that the achievable diversity order of the protocol is two with an arbitrary number of relay terminals. To achieve the maximum achievable diversity order, we propose a simple precoder that is easy to implement with any number of relay terminals and transmission slots. Simulation results show that the proposed precoder achieves the maximum achievable diversity order and has BER performance similar to that of some existing precoders.