907 results for Asymptotic Variance of Estimate


Relevance:

100.00%

Publisher:

Abstract:

The thermocline is an important physical indicator of the ocean temperature field and has a significant impact on underwater communication, submarine operations, aquaculture, and fishing. Using multi-year historical data (1930-2002, 510,143 stations) from the "China Ocean Science Database" of the Institute of Oceanology, Chinese Academy of Sciences, covering the China seas and the Northwest Pacific (110°E-140°E, 10°N-40°N), and an improved thermocline determination method, this thesis analyzes the spatial and temporal distribution of thermocline characteristics in this region. The Princeton Ocean Model (POM) is then used to simulate the hydrographic structure of the China seas, in particular the southeastern coastal waters, and to study how the marine hydrographic environment affects the inverse thermocline. Finally, based on historical sea temperature observations and EOF decomposition, a statistical prediction method is proposed for the China seas and adjacent waters that rapidly reconstructs the vertical temperature structure from in situ measurements at a limited number of levels, so that the in situ thermocline can be estimated quickly.

Analysis of the historical data shows that, under the influence of solar radiation and wind stress, the thermocline north of 20°N has a clear seasonal cycle: it is shallowest and strongest in summer and the opposite in winter, while thermocline thickness lags the other variables, being thinnest in spring and thickest in autumn. From December to the following March the Bohai Sea, the Yellow Sea, and the western East China Sea show no thermocline structure, and parts of the Northwest Pacific are essentially unstratified from January to March. Over the shallow areas along the western and eastern coasts of the Yellow Sea and near the Taiwan Strait, the probability of thermocline occurrence is low year-round because of wind stirring and tidal mixing. In summer, stratification strengthens over the coastal shelf: the seasonal range of thermocline intensity on the shelf (0.31°C/m) is much larger than in deep water (about 0.05°C/m), whereas the seasonal ranges of thermocline depth and thickness are smaller on the shelf than in deep water. South of 20°N the seasonal variation of the thermocline is weak. Inverse thermoclines occur mainly in winter and spring (October to the following May). Influenced by the Changjiang (Yangtze) diluted water and the Taiwan Warm Current, the inverse thermocline along the southeastern coast persists longest and occurs most frequently, whereas along the northern and eastern coasts of the Shandong Peninsula and the western and northern coasts of the Korean Peninsula its growth and decay appear to be related to the Yellow Sea Warm Current. Multi-thermocline structures occur year-round in the North Equatorial Current and Tsushima Warm Current regions. Where the Kuroshio intrudes into the Yellow, East China, and South China Seas, multiple thermoclines show distinctly different seasonal variations: in the central Yellow Sea they are more likely in spring than in summer and autumn, in the western East China Sea they occur mainly in summer, and in the northern South China Sea they are more frequent in winter and spring than in summer and autumn. These variations are probably controlled mainly by changes in sea surface temperature and in wind-driven surface currents.

Simulations of the inverse thermocline along the southeastern coast of China with the Princeton Ocean Model (POM) reproduce the seasonal variation and summer turning of the Changjiang diluted water reasonably well and broadly reproduce the main circulation, the temperature and salinity fields, and the distribution and seasonal variation of the inverse thermocline in the Bohai, Yellow, and East China Seas. Numerical experiments show that without freshwater input from the Changjiang and Huanghe (Yellow) Rivers essentially no inverse thermocline appears anywhere in the study area, so land-sourced freshwater is probably one of the basic factors for the occurrence of inverse thermoclines near estuaries. Increases in Changjiang discharge and in the transport of the warm currents (the Kuroshio and the Taiwan Warm Current) both raise, to different degrees, the occurrence probability, intensity, depth, and thickness of the inverse thermocline, with the warm currents having the stronger effect. The Changjiang clearly influences the inverse thermocline along the southeastern coast, especially from autumn to early winter, shifting its position off the Changjiang estuary toward the southeast, while the warm currents strongly affect it, especially in early spring, shifting its position off the estuary toward the northeast.

Long-term analysis shows that in the Yellow Sea Cold Water Mass region the summer thermocline intensity has interannual variability of about 3.8 years and decadal variability of about 18.9 years, probably representing a thermal response to East Asian air temperature in the concurrent summer and the preceding winter. In the East China Sea cold eddy region the summer thermocline intensity shows a 3.7-year interannual variation, with positive intensity anomalies in El Niño years, probably driven mainly by variations of the local cyclonic atmospheric circulation. Spectral analysis also indicates a 33.2-year decadal variation of summer thermocline intensity in this region, with a shift from weak to strong in the mid-1970s that may be related to decadal variation of the Kuroshio transport.

For the statistical prediction of the vertical temperature structure, the first four EOF modes explain more than 95% of the total variance of the temperature anomalies at the original grid points, and the reconstruction is most stable when the expansion coefficients are estimated from observations near the sea surface. Validation against measured CTD profiles from three regions (the East China Sea shelf, the deep South China Sea, and the waters around Taiwan) shows that the correlation between reconstructed and observed profiles exceeds the 95% confidence level. The mean errors between reconstructed and observed temperature profiles in the three regions are 0.69°C, 0.52°C, and 1.18°C, respectively, and the mean reconstruction error is smaller than the mean climatological bias, so the statistical model estimates the vertical temperature structure well. Over the East China Sea shelf, comparison of thermocline characteristics derived from the reconstructed profiles with those from CTD observations gives mean absolute errors of 1.51 m, 1.36 m, and 0.17°C/m for the upper boundary depth, lower boundary depth, and intensity, with mean relative errors of 24.7%, 8.9%, and 22.6%; the relative errors of thermocline depth and intensity are fairly large, but the absolute errors are small. In the South China Sea the mean absolute prediction errors of the reconstructed upper boundary, lower boundary, and intensity are 4.1 m, 27.7 m, and 0.007°C/m, with mean relative errors of 16.1%, 16.8%, and 9.5%, so all reconstructed thermocline characteristics have mean relative errors within 20%. Although the absolute error of the lower boundary depth in the South China Sea is large, it amounts to a mean relative error of only 16.8% given the spatial scale of the lower boundary depth (168 m on average). The reconstructed temperature profiles therefore provide good estimates of the thermocline both on the Chinese shelf and in deep water.

Based on the analysis of historical hydrographic temperature profiles and the statistical thermocline prediction model developed here, a visualization system was built that allows quick and simple thermocline estimation and queries in real time on a personal computer; this is one of the most systematic results to date for large-area thermocline statistics and real-time prediction.
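As an illustration of the EOF-based reconstruction described in this abstract, the Python sketch below rebuilds a full temperature profile from a few near-surface values. The synthetic data, the choice of four retained modes, and the least-squares fit of the mode coefficients are assumptions for illustration only, not the thesis's actual implementation.

```python
import numpy as np

# Hypothetical historical profiles: rows = observations, columns = depth levels.
rng = np.random.default_rng(0)
profiles = rng.normal(size=(500, 40))          # placeholder for archived T(z) data
climatology = profiles.mean(axis=0)
anomalies = profiles - climatology

# EOF (principal component) decomposition of the temperature anomalies.
# Rows of `vt` are vertical modes; keep the leading modes (the thesis reports
# that four modes explain >95% of the anomaly variance).
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
n_modes = 4
eofs = vt[:n_modes].T                          # shape (levels, modes)

def reconstruct(observed_temps, observed_levels):
    """Estimate a full profile from temperatures at a few (near-surface) levels."""
    anom_obs = observed_temps - climatology[observed_levels]
    # Least-squares fit of the mode coefficients to the limited-level anomalies.
    coeffs, *_ = np.linalg.lstsq(eofs[observed_levels], anom_obs, rcond=None)
    return climatology + eofs @ coeffs

full_profile = reconstruct(observed_temps=np.array([22.1, 21.8, 21.2]),
                           observed_levels=np.array([0, 1, 2]))
```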

Relevance:

100.00%

Publisher:

Abstract:

Precipitation is considered to be the primary resource limiting terrestrial biological activity in water-limited regions, and its overriding effect on grassland production is complex. In this paper, field data from 48 sites (temperate meadow steppe, temperate steppe, temperate desert steppe and alpine meadow) were gathered from 31 published papers and monographs to analyze the relationship between above-ground net primary productivity (ANPP) and precipitation by regression analysis. The results indicated a great difference between the spatial and temporal patterns by which precipitation influences grassland ANPP. Mean annual precipitation (MAP) was the main factor determining the spatial distribution of grassland ANPP (r² = 0.61, P < 0.01); temporally, however, no significant relationship was found between the variance of ANPP and inter-annual precipitation for the four types of grassland. After dividing annual precipitation into monthly values and taking time-lag effects into account, the study did find significant relationships between ANPP and precipitation. For the temperate meadow steppe, the key variable determining inter-annual change of ANPP was precipitation from the previous August to May (r² = 0.47, P = 0.01); for the temperate steppe, it was July precipitation (r² = 0.36, P = 0.02); for the temperate desert steppe, April-June precipitation (r² = 0.51, P < 0.01); and for the alpine meadow, precipitation from the previous September to May (r² = 0.29, P < 0.05). In comparison with analogous research, the study demonstrated that the key factor determining inter-annual changes of grassland ANPP is the cumulative precipitation in certain periods of the current or previous year.
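A minimal Python sketch of the kind of lagged cumulative-precipitation regression described above; the synthetic data, the window boundaries, and the ordinary least-squares fit are illustrative assumptions, not the study's actual processing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: monthly precipitation (mm) for 20 years and annual ANPP (g m^-2).
years = np.arange(1990, 2010)
monthly_precip = rng.gamma(shape=2.0, scale=15.0, size=(len(years), 12))
anpp = 50 + 0.8 * monthly_precip[:, 3:6].sum(axis=1) + rng.normal(0, 10, len(years))

def cumulative_precip(precip, start_month, end_month):
    """Cumulative precipitation from start_month to end_month (1-based, same year)."""
    return precip[:, start_month - 1:end_month].sum(axis=1)

# April-June precipitation vs ANPP (the temperate desert steppe case in the abstract).
x = cumulative_precip(monthly_precip, 4, 6)
slope, intercept = np.polyfit(x, anpp, 1)
r2 = np.corrcoef(x, anpp)[0, 1] ** 2
print(f"ANPP = {slope:.2f} * P(Apr-Jun) + {intercept:.1f}, r^2 = {r2:.2f}")
```

Windows that reach back into the previous year (e.g. previous August to current May) would simply sum the trailing months of year t-1 together with the leading months of year t before the same fit.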

Relevance:

100.00%

Publisher:

Abstract:

Attaining sufficient accuracy and efficiency of the generalized screen propagator and improving the quality of input gathers are recurring problems in wave-equation prestack depth migration. In this paper, a high-order formula for the generalized screen propagator of the one-way wave equation is derived from the asymptotic expansion of the single-square-root operator. Based on this formula, a new generalized screen propagator is developed, composed of a split-step Fourier propagator plus high-order correction terms; it improves calculation precision without sharply increasing the amount of computation and makes the generalized screen propagator better suited to media with strong lateral velocity variation. Because wave-equation prestack depth migration is sensitive to the quality of the input gathers, which strongly affects the output, and because available seismic data processing systems cannot obtain traveltimes corresponding to multiple arrivals, estimate large residual statics, merge seismic data from different projects, or design inverse Q filters, we establish difference equations embodying Huygens's principle to obtain traveltimes corresponding to multiple arrivals; propose a time-variable matching filter for seismic datum merging using the fast Mallat-tree algorithm for wavelet transforms; put forward a method for estimating residual statics from optimum model parameters obtained by iterative inversion with three coordinated algorithms, i.e., the CMP intertrace cross-correlation algorithm, the Laplacian image edge-extraction algorithm, and the DFP algorithm; and present a phase-shift inverse Q filter based on Futterman's amplitude and phase-velocity dispersion formula and wave-field extrapolation theory. Numerical and real-data results show that the theory and methods are practical and efficient. Key words: prestack depth migration, generalized screen propagator, residual statics, inverse Q filter, traveltime, 3D seismic datum mergence
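The split-step Fourier propagator on which the generalized screen propagator builds can be sketched as follows. This Python sketch omits the high-order correction terms derived in the paper, and the grid, reference-velocity choice, and test model are illustrative assumptions.

```python
import numpy as np

def split_step_extrapolate(wavefield, velocity, dz, dx, freq):
    """Extrapolate a monochromatic wavefield one depth step dz downward
    using the split-step Fourier (phase-screen) approximation."""
    omega = 2.0 * np.pi * freq
    v_ref = velocity.mean()                      # reference (background) velocity
    kx = 2.0 * np.pi * np.fft.fftfreq(wavefield.size, d=dx)

    # Phase shift in the wavenumber domain with the reference velocity;
    # evanescent components (kz^2 < 0) are discarded.
    kz2 = (omega / v_ref) ** 2 - kx ** 2
    prop = np.where(kz2 > 0.0, np.exp(1j * np.sqrt(np.abs(kz2)) * dz), 0.0)
    shifted = np.fft.ifft(np.fft.fft(wavefield) * prop)

    # Split-step (thin-lens) correction in space for lateral velocity variation.
    correction = np.exp(1j * omega * (1.0 / velocity - 1.0 / v_ref) * dz)
    return shifted * correction

# Tiny demonstration: a point "source" over a laterally varying velocity slice.
nx, dx, dz, freq = 256, 10.0, 10.0, 25.0
field = np.zeros(nx, dtype=complex)
field[nx // 2] = 1.0
velocity = np.full(nx, 2000.0)
velocity[nx // 2:] = 2500.0                      # lateral velocity contrast
for _ in range(50):                              # 50 depth steps
    field = split_step_extrapolate(field, velocity, dz, dx, freq)
```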

Relevance:

100.00%

Publisher:

Abstract:

We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
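The permutation-testing procedure described above can be sketched in a few lines of Python. The nearest-centroid classifier, the cross-validation scheme, and the permutation count below are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional, small-sample data (e.g. 40 subjects x 1000 features).
X = rng.normal(size=(40, 1000))
y = np.repeat([0, 1], 20)

def cv_accuracy(X, y, n_folds=5):
    """Cross-validated accuracy of a simple nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    correct = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
        dists = ((X[test, None, :] - centroids[None]) ** 2).sum(axis=2)
        correct += np.sum(dists.argmin(axis=1) == y[test])
    return correct / len(y)

observed = cv_accuracy(X, y)

# Permutation test: re-estimate accuracy under randomly permuted labels and use the
# empirical fraction of permutations reaching the observed accuracy as the p-value.
n_perm = 500
null_acc = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(n_perm)])
p_value = (np.sum(null_acc >= observed) + 1) / (n_perm + 1)
print(f"accuracy = {observed:.2f}, permutation p-value = {p_value:.3f}")
```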

Relevance:

100.00%

Publisher:

Abstract:

This paper analyses the asymptotic properties of nonlinear least squares estimators of the long run parameters in a bivariate unbalanced cointegration framework. Unbalanced cointegration refers to the situation where the integration orders of the observables are different, but their corresponding balanced versions (with equal integration orders after filtering) are cointegrated in the usual sense. Within this setting, the long run linkage between the observables is driven by both the cointegrating parameter and the difference between the integration orders of the observables, which we consider to be unknown. Our results reveal three noticeable features. First, superconsistent (faster than √n-consistent) estimators of the difference between memory parameters are achievable. Next, the joint limiting distribution of the estimators of both parameters is singular, and, finally, a modified version of the "Type II" fractional Brownian motion arises in the limiting theory. A Monte Carlo experiment and the discussion of an economic example are included.
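One way to write a setup consistent with this abstract is sketched below; the notation and the exact form of the regression are assumptions for illustration and need not match the authors' specification.

```latex
% Illustrative unbalanced cointegration setup (notation assumed):
% y_t and x_t have integration orders delta_y > delta_x, and the balanced
% series Delta^{-xi} x_t, with xi = delta_y - delta_x, cointegrates with y_t.
\[
  y_t = \nu\,\Delta^{-\xi} x_t + u_t , \qquad \xi = \delta_y - \delta_x ,
\]
\[
  (\hat{\nu},\hat{\xi}) \;=\; \arg\min_{\nu,\;\xi}\;
  \sum_{t=1}^{n}\bigl(y_t - \nu\,\Delta^{-\xi} x_t\bigr)^{2} ,
\]
% where u_t is the short-memory cointegrating error; nu and xi are the long run
% parameters whose joint NLS asymptotics the paper studies.
```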

Relevance:

100.00%

Publisher:

Abstract:

The current congestion-oriented design of TCP hinders its ability to perform well in hybrid wireless/wired networks. We propose a new improvement on TCP NewReno (NewReno-FF) using a new loss-labeling technique to discriminate wireless from congestion losses. The proposed technique is based on estimating the average and variance of the round-trip time using a filter called the Flip Flop filter, augmented with history information. We show the comparative performance of TCP NewReno, NewReno-FF, and TCP Westwood through extensive simulations. We study the fundamental gains and limits using TCP NewReno with varying loss-labeling accuracy (NewReno-LL) as a benchmark. Lastly, our investigation opens up important research directions. First, there is a need for a finer-grained classification of losses (even within congestion and wireless losses) for TCP in heterogeneous networks. Second, it is essential to develop an appropriate control strategy for recovery after the correct classification of a packet loss.
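A rough Python sketch of the underlying idea of labeling losses from running estimates of the RTT mean and variance. The EWMA gains, the switching rule between an agile and a stable filter, and the decision threshold are assumptions for illustration and are not the NewReno-FF parameters.

```python
class FlipFlopRttEstimator:
    """Track RTT mean/variance with two EWMA gains (agile and stable) and flip
    between them depending on how noisy the most recent sample looks."""

    def __init__(self, agile_gain=0.5, stable_gain=0.05, k=3.0):
        self.agile_gain, self.stable_gain, self.k = agile_gain, stable_gain, k
        self.mean = None
        self.var = 0.0
        self.use_agile = True

    def update(self, rtt_sample):
        if self.mean is None:
            self.mean = rtt_sample
            return
        gain = self.agile_gain if self.use_agile else self.stable_gain
        deviation = rtt_sample - self.mean
        self.mean += gain * deviation
        self.var = (1 - gain) * (self.var + gain * deviation ** 2)
        # Simplified flip-flop rule: stay with the agile filter while samples fall
        # inside k standard deviations of the estimate, else fall back to the stable one.
        self.use_agile = abs(deviation) <= self.k * (self.var ** 0.5 + 1e-9)

    def label_loss(self, rtt_at_loss):
        """Call the loss 'congestion' when the RTT near the loss is inflated well
        beyond the estimated mean; otherwise attribute it to the wireless link."""
        threshold = self.mean + self.k * self.var ** 0.5
        return "congestion" if rtt_at_loss > threshold else "wireless"

est = FlipFlopRttEstimator()
for sample in (100, 102, 98, 105, 180, 175):   # RTT samples in ms (made up)
    est.update(sample)
print(est.label_loss(rtt_at_loss=190))
```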

Relevance:

100.00%

Publisher:

Abstract:

The authors analyzed several cytomorphonuclear parameters related to chromatin distribution and DNA ploidy in typical and atypical carcinoids and in small cell lung cancers. Nuclear measurements and analysis were performed with a SAMBA 200 (TITN, Grenoble, France) cell image processor with software allowing the discrimination of parameters computed on cytospin preparations of Feulgen-stained nuclei extracted from deparaffinized tumor tissues. The authors' results indicate a significant increase in DNA content--assessed by integrated optical density (IOD)--from typical carcinoids to small cell lung carcinomas, with atypical carcinoids showing an intermediate value. Parameters related to hyperchromatism (short and long run length and variance of optical density) also characterize the atypical carcinoids as being intermediate between typical carcinoids and small cell lung cancers. The systematic measurement of these cytomorphonuclear parameters seems to define an objective, reproducible "scale" of differentiation that helps to define the atypical carcinoid and may be of value in establishing cytologic criteria for differential diagnosis.

Relevance:

100.00%

Publisher:

Abstract:

Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.

Relevance:

100.00%

Publisher:

Abstract:

© 2014, Springer-Verlag Berlin Heidelberg. The frequency and severity of extreme events are tightly associated with the variance of precipitation. As the climate warms, the acceleration of the hydrological cycle is likely to enhance the variance of precipitation across the globe. However, owing to the lack of an effective analysis method, the mechanisms responsible for changes in precipitation variance are poorly understood, especially on regional scales. Our study fills this gap by formulating a variance partition algorithm, which explicitly quantifies the contributions of atmospheric thermodynamics (specific humidity) and dynamics (wind) to the changes in regional-scale precipitation variance. Taking Southeastern (SE) United States (US) summer precipitation as an example, the algorithm is applied to simulations of current and future climate by phase 5 of the Coupled Model Intercomparison Project (CMIP5). The analysis suggests that, compared to observations, most CMIP5 models (~60%) tend to underestimate the summer precipitation variance over the SE US during 1950–1999, primarily because of errors in the modeled dynamic processes (i.e. large-scale circulation). Among the 18 CMIP5 models analyzed in this study, six reasonably simulate SE US summer precipitation variance in the twentieth century and the underlying physical processes; these models are thus used for a mechanistic study of future changes in SE US summer precipitation variance. In the future, the six models collectively project an intensification of SE US summer precipitation variance, resulting from the combined effects of atmospheric thermodynamics and dynamics, with the latter playing the more important role. Specifically, thermodynamics results in more frequent and intensified wet summers but does not contribute to the projected increase in the frequency and intensity of dry summers. In contrast, atmospheric dynamics explains the projected enhancement of both wet and dry summers, indicating its importance in understanding future climate change over the SE US. The results suggest that the intensified SE US summer precipitation variance is not a purely thermodynamic response to greenhouse-gas forcing and cannot be explained without the contribution of atmospheric dynamics. Our analysis provides important insights into the mechanisms of SE US summer precipitation variance change. The algorithm formulated in this study can easily be applied to other regions and seasons to systematically explore the mechanisms responsible for changes in precipitation extremes in a warming climate.
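The variance-partition idea can be illustrated schematically: if precipitation is approximated as the product of a humidity (thermodynamic) factor and a circulation (dynamic) factor, the variance of its anomalies splits into a thermodynamic term, a dynamic term, a covariance term, and a nonlinear remainder. The Python sketch below is a generic decomposition under that assumption, not the specific algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical summer-mean series for 50 summers: specific-humidity proxy q
# (thermodynamic factor) and circulation proxy w (dynamic factor).
q = 10.0 + rng.normal(0.0, 0.5, 50)
w = 1.0 + rng.normal(0.0, 0.3, 50)
precip = q * w                                   # schematic precipitation proxy

# Linearize anomalies about the climatological means:
# P' ~ w_bar * q'  (thermodynamic)  +  q_bar * w'  (dynamic)  +  nonlinear terms.
qb, wb = q.mean(), w.mean()
thermo = wb * (q - qb)
dynamic = qb * (w - wb)

total_var = np.var(precip)
cov_td = np.cov(thermo, dynamic, bias=True)[0, 1]
parts = {
    "thermodynamic": np.var(thermo),
    "dynamic": np.var(dynamic),
    "thermo-dynamic covariance": 2.0 * cov_td,
    "nonlinear remainder": total_var - np.var(thermo) - np.var(dynamic) - 2.0 * cov_td,
}
for name, value in parts.items():
    print(f"{name:>28s}: {value / total_var:6.2%} of total variance")
```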

Relevance:

100.00%

Publisher:

Abstract:

Data from three forest sites in Sumatra (Batang Ule, Pasirmayang and Tebopandak) have been analysed and compared for the effects of sample area cut-off, and tree diameter cut-off. An 'extended inverted exponential model' is shown to be well suited to fitting tree-species-area curves. The model yields species carrying capacities of 680 for Batang Ule, 380 species for Pasirmayang, and 35 for Tebopandak (tree diameter >10cm). It would seem that in terms of species carrying capacity, Tebopandak and Pasirmayang are rather similar, and both less diverse than the hilly Batang Ule site. In terms of conservation policy, this would mean that rather more emphasis should be put on conserving hilly sites on a granite substratum. For Pasirmayang with tree diameter >3cm, the asymptotic species number estimate is 567, considerably higher than the estimate of 387 species for trees with diameter >10cm. It is clear that the diameter cut-off has a major impact on the estimate of the species carrying capacity. A conservative estimate of the total number of tree species in the Pasirmayang region is 632 species! In sampling exercises, the diameter cut-off should not be chosen lightly, and it may be worth adopting field sampling procedures which involve some subsampling of the primary sample area, where the diameter cut-off is set much lower than in the primary plots.
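The abstract does not give the functional form of the "extended inverted exponential model"; the Python sketch below uses a guessed inverted-exponential form with an asymptote purely to illustrate how such a species-area curve and its species carrying capacity would be fitted. The data values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverted_exponential(area, s_max, b, c):
    """Saturating species-area model with asymptote s_max; the form is an assumption."""
    return s_max * (1.0 - np.exp(-(area / b) ** c))

# Hypothetical cumulative species counts from nested plots (area in ha).
area = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
species = np.array([95, 140, 190, 245, 300, 340, 360])

popt, pcov = curve_fit(inverted_exponential, area, species,
                       p0=(400.0, 2.0, 0.8), maxfev=10000)
s_max, b, c = popt
s_max_se = np.sqrt(np.diag(pcov))[0]     # asymptotic standard error of the asymptote
print(f"species carrying capacity ~ {s_max:.0f} +/- {s_max_se:.0f}")
```

Refitting the same curve with a lower diameter cut-off (hence more species per plot) is what drives the difference between the 387- and 567-species estimates quoted for Pasirmayang.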

Relevance:

100.00%

Publisher:

Abstract:

The magnetic dipole transitions between fine-structure levels in the ground term of Ti-like ions, (3d⁴) ⁵D₂-⁵D₃, were investigated by observation of visible and near-UV light for several elements with atomic numbers from 51 to 78. The wavelengths are compared with theoretical values we recently calculated. The differences between the present calculations and measurements are less than 0.6%. The anomalous wavelength stability predicted by Feldman, Indelicato and Sugar [J. Opt. Soc. Am. B 8, 3 (1991)] was observed. We attribute this anomalous wavelength stability to the transition from LS to jj coupling and the asymptotic behavior of the transition energies in the intermediate coupling regime.

Relevance:

100.00%

Publisher:

Abstract:

Asymptotic estimates of the norms of orbits of certain operators that commute with the classical Volterra operator V acting on L^p[0,1], 1 ≤ p ≤ ∞, are obtained. The estimates apply not only to the Riemann-Liouville fractional integration operators V^α, α > 0, but also to operators of the form φ(V), where φ is a holomorphic function at zero. The method used to obtain the estimates is based on the fact that the Riemann-Liouville operator, as well as the Volterra operator, can be related to the Levin-Pfluger theory of holomorphic functions of completely regular growth. Different methods, such as the Denjoy-Carleman theorem, are needed to analyze the behavior of the orbits of I - cV, where c > 0. The results are applied to the study of cyclic properties of φ(V), where φ is a holomorphic function at 0.

Relevance:

100.00%

Publisher:

Abstract:

The generation of an entangled coherent state is one of the most important ingredients of quantum information processing using coherent states, and numerous schemes to achieve this task have recently been proposed. For generating travelling-wave entangled coherent states, cross-phase modulation, optimized by enhancement of the optical Kerr effect in a dense medium in an electromagnetically induced transparency (EIT) regime, seems very promising. In this scenario, we propose a fully quantized model of a recently proposed double-EIT scheme [D. Petrosyan and G. Kurizki, Phys. Rev. A 65, 033833 (2002)]: the quantization step is performed using a fully Hamiltonian approach. This allows us to write effective equations of motion for two interacting quantum fields of light that show how the dynamics of one field depends on the photon-number operator of the other. The preparation of a Schrödinger cat state, a superposition of two distinct coherent states, is briefly described. It is based on the nonlinear interaction, via double EIT, of two light fields initially prepared in coherent states, followed by a detection step performed with a 50:50 beam splitter and two photodetectors. To demonstrate the entanglement of an entangled coherent state, we suggest measuring the joint quadrature variance of the field. We show that the entangled coherent states satisfy the sufficient condition for entanglement based on quadrature-variance measurement. We also show how robust the scheme is against low detection efficiency of the homodyne detectors.
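For reference, a sufficient entanglement condition of the kind invoked above can be written in terms of joint quadrature variances. The form below is the standard two-mode (Duan-type) criterion in units where the vacuum quadrature variance is 1/2; the conventions may differ from those used in the paper.

```latex
% Sufficient condition for two-mode entanglement from quadrature variances
% (standard Duan-type criterion; units with [x, p] = i, vacuum variance 1/2).
\[
  \Delta^{2}\!\bigl(\hat{x}_a + \hat{x}_b\bigr)
  + \Delta^{2}\!\bigl(\hat{p}_a - \hat{p}_b\bigr) \;<\; 2
  \quad\Longrightarrow\quad
  \text{the state of modes } a \text{ and } b \text{ is entangled.}
\]
```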

Relevance:

100.00%

Publisher:

Abstract:

We use many-body theory to find the asymptotic behaviour of second-order correlation corrections to the energies and positron annihilation rates in many-electron systems with respect to the angular momentum l of the single-particle orbitals included. The energy corrections decrease as 1/(l+1/2)^4, in agreement with the result of Schwartz, whereas the positron annihilation rate has a slower 1/(l+1/2)^2 convergence rate. We illustrate these results by numerical calculations of the energies of Ne and Kr and by examining results from extensive configuration-interaction calculations of PsH binding and annihilation.
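The practical consequence of these asymptotics is an extrapolation rule for calculations truncated at a maximum orbital angular momentum L. The completion formulas below follow from the quoted power laws (treating each increment as exactly its leading term and approximating the tail sums by integrals); they are standard estimates of this kind, not necessarily the authors' exact prescription.

```latex
% Tail completion when the partial-wave increments follow the quoted power laws:
% energy increments ~ (l+1/2)^{-4}, annihilation-rate increments ~ (l+1/2)^{-2}.
\[
  \delta E_{\ell} \simeq \frac{A}{(\ell+\tfrac12)^{4}}
  \;\Longrightarrow\;
  E_{\infty} \simeq E_{L} + A \sum_{\ell=L+1}^{\infty}\frac{1}{(\ell+\tfrac12)^{4}}
  \simeq E_{L} + \frac{A}{3\,(L+1)^{3}} ,
\]
\[
  \delta \Gamma_{\ell} \simeq \frac{B}{(\ell+\tfrac12)^{2}}
  \;\Longrightarrow\;
  \Gamma_{\infty} \simeq \Gamma_{L} + \frac{B}{L+1} ,
\]
% where E_L and Gamma_L are the energy and annihilation rate obtained with
% orbitals up to angular momentum L, and A, B are fitted from the last increments.
```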

Relevance:

100.00%

Publisher:

Abstract:

Spectral signal intensities, especially in 'real-world' applications with nonstandardized sample presentation due to uncontrolled variables/factors, commonly require additional spectral processing to normalize signal intensity in an effective way. In this study, we have demonstrated the complexity of choosing a normalization routine in the presence of multiple spectrally distinct constituents by probing a dataset of Raman spectra. Variation in absolute signal intensity (90.1% of total variance) of the Raman spectra of these complex biological samples swamps the variation in useful signals (9.4% of total variance), degrading the dataset's diagnostic and evaluative potential.
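An illustrative Python sketch of the kind of intensity normalization and variance check discussed above. The normalization choices shown (vector norm and standard normal variate) are common options, not necessarily the routines compared in the study, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Raman dataset: 100 spectra x 800 channels, where an uncontrolled
# multiplicative factor dominates the raw intensity variation.
base = np.abs(np.sin(np.linspace(0, 12, 800))) + 0.1
spectra = rng.uniform(0.5, 2.0, size=(100, 1)) * base + rng.normal(0, 0.01, (100, 800))

def vector_normalize(s):
    """Scale each spectrum to unit Euclidean norm (removes overall intensity)."""
    return s / np.linalg.norm(s, axis=1, keepdims=True)

def snv(s):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

for name, data in [("raw", spectra),
                   ("vector-normalized", vector_normalize(spectra)),
                   ("SNV", snv(spectra))]:
    # Fraction of total variance carried by the first principal component.
    centred = data - data.mean(axis=0)
    var = np.linalg.svd(centred, compute_uv=False) ** 2
    print(f"{name:>18s}: PC1 carries {var[0] / var.sum():.1%} of total variance")
```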