950 results for Statistical Model


Relevance:

60.00%

Publisher:

Abstract:

National Key Technology R&D Program [2006BAD03A02]

Relevance:

60.00%

Publisher:

Abstract:

The stress transfer from broken to unbroken fibers in fiber-reinforced polymer-matrix composites was studied using a detailed finite element model, with an epoxy-matrix monolayer composite as the thermosetting case and a polypropylene (PP)-matrix monolayer composite as the thermoplastic case. It is found that the stress concentrations near the broken fiber element damage the neighboring epoxy matrix before any other fiber breaks, whereas in the PP-matrix composite the fibers nearest the broken fiber break before the PP matrix is damaged, because the PP matrix around the broken fiber element yields. To simulate composite damage evolution, a Monte Carlo technique based on the finite element method was developed in this paper. The finite element code, coupled with a statistical model of fiber strength written specifically for this problem, was used to determine the stress redistribution. Five hundred numerical simulations were carried out to obtain the statistical deformation and failure process of composites with a fixed fiber volume fraction.
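The Monte Carlo step described above — sampling fiber strengths from a statistical (typically Weibull) model and tracking progressive failure — can be sketched as follows. This is a minimal equal-load-sharing illustration, not the paper's FEM-coupled stress redistribution; the Weibull parameters, fiber count and sample count are illustrative only.

```python
import numpy as np

def weibull_strengths(n_fibers, shape_m, scale_s0, rng):
    """Sample fiber strengths from a two-parameter Weibull model."""
    return scale_s0 * rng.weibull(shape_m, size=n_fibers)

def bundle_failure_stress(strengths):
    """Peak applied stress an equal-load-sharing fiber bundle can sustain
    (a stand-in for the FEM stress redistribution used in the paper)."""
    s = np.sort(strengths)
    n = len(s)
    # When the load per surviving fiber reaches s[k], the (n - k) fibers
    # with strength >= s[k] still carry it.
    return max(s[k] * (n - k) / n for k in range(n))

# 500 Monte Carlo samples, echoing the abstract's sample count
rng = np.random.default_rng(42)
failures = [bundle_failure_stress(weibull_strengths(100, 5.0, 1.0, rng))
            for _ in range(500)]
mean_failure = float(np.mean(failures))
```

Replacing the equal-load-sharing rule with an FEM-derived local stress redistribution recovers the paper's scheme; the Weibull sampling step is unchanged.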

Relevance:

60.00%

Publisher:

Abstract:

Empirical Orthogonal Function (EOF) analysis is used in this study to generate the main eigenvector fields of historical temperature for the China Seas (here referring to Chinese marine territories) and adjacent waters from 1930 to 2002 (510,143 profiles). A good temperature profile is reconstructed from several subsurface in situ temperature observations, and the thermocline is estimated using the model. The results show that: 1) For the study area, the first four principal components explain 95% of the overall variance, and the vertical distribution of temperature is most stable when using the in situ temperature observations near the surface. 2) Model verification against observed CTD data from the East China Sea (ECS), South China Sea (SCS) and the waters around Taiwan Island shows that the reconstructed profiles correlate with the observed ones above the 95% confidence level and, in particular, describe the characteristics of the thermocline well. The average errors between the reconstructed and observed profiles in these three areas are 0.69°C, 0.52°C and 1.18°C, respectively, and the model RMS error is less than or close to the climatological error. The statistical model can therefore estimate the vertical structure of the temperature profile well. 3) Comparing the thermocline characteristics between the reconstructed and observed profiles, the results in the ECS show average absolute errors of 1.5 m, 1.4 m and 0.17°C/m, and average relative errors of 24.7%, 8.9% and 22.6%, for the upper thermocline boundary, lower thermocline boundary and gradient, respectively. Although the relative errors are appreciable, the absolute errors are small. In the SCS, the average absolute errors are 4.1 m, 27.7 m and 0.007°C/m, and the average relative errors are 16.1%, 16.8% and 9.5%, respectively, all below 20%. Although the average absolute error of the lower thermocline boundary is considerable, compared with the spatial scale of its average depth (165 m) the average relative error is small (16.8%). The model can therefore be used to estimate the thermocline well.
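The reconstruction scheme — EOF modes fitted from historical profiles, then mode amplitudes solved from a few near-surface observations — can be sketched as below. This is a minimal sketch on synthetic profiles; the depth grid, mode count and observed levels are illustrative, not the study's configuration.

```python
import numpy as np

def fit_eofs(profiles, n_modes):
    """EOFs (vertical modes) of an (n_profiles, n_depths) history matrix."""
    mean = profiles.mean(axis=0)
    _, _, vt = np.linalg.svd(profiles - mean, full_matrices=False)
    return mean, vt[:n_modes]            # one vertical mode per row

def reconstruct_profile(mean, eofs, obs_idx, obs_values):
    """Least-squares fit of mode amplitudes to a few observed levels,
    then projection back onto the full vertical grid."""
    A = eofs[:, obs_idx].T               # (n_obs, n_modes)
    coeffs, *_ = np.linalg.lstsq(A, obs_values - mean[obs_idx], rcond=None)
    return mean + coeffs @ eofs

# Synthetic check: profiles spanned by two vertical modes, reconstructed
# from the top four levels only (illustrative grid, not the study's).
depth = np.linspace(0.0, 1.0, 12)
modes = np.vstack([depth, np.cos(3.0 * depth)])
rng = np.random.default_rng(1)
profiles = 15.0 + rng.normal(size=(200, 2)) @ modes
mean, eofs = fit_eofs(profiles, n_modes=2)
truth = 15.0 + np.array([0.7, -1.2]) @ modes
recon = reconstruct_profile(mean, eofs, np.arange(4), truth[:4])
```

When the historical variability is truly captured by the leading modes, a handful of near-surface values pins down the whole profile — the property the abstract's verification exploits.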

Relevance:

60.00%

Publisher:

Abstract:

Because of the intrinsic difficulty of determining distributions for wave periods, previous wave period distribution models have not taken nonlinearity into account and have performed poorly in describing and statistically analyzing the probability density distribution of ocean waves. In this study, a statistical model of random waves is developed using the Stokes wave theory of water wave dynamics, and a new nonlinear probability distribution function for the wave period is presented, parameterized by the spectral density width and the nonlinear wave steepness, which is physically more reasonable. The magnitude of the wave steepness determines the intensity of the nonlinear effect, while the spectral width only changes the energy distribution. The wave steepness is thus an important parameter not only dynamically but also statistically: its value reflects the degree to which the wave period distribution skews away from the Cauchy distribution, and it also describes the variation of the distribution function, which resembles those of the wave surface elevation and wave height distributions. The distribution curves skew leftward and upward as the wave steepness increases. Wave period observations from the SZFII-1 buoy, moored off the coast of Weihai (37°27.6′N, 122°15.1′E), China, are used to verify the new distribution. The correlation coefficient between the new distribution and the buoy data at different spectral widths (ν = 0.3-0.5) lies in the range 0.9686 to 0.9917. In addition, the Longuet-Higgins (1975) and Sun (1988) distributions are compared with the new distribution presented in this work. The validations and comparisons indicate that the new nonlinear probability density distribution fits the buoy measurements better than the Longuet-Higgins and Sun distributions do. We believe that adopting the new wave period distribution would improve traditional statistical wave theory.
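The correlation-coefficient verification against buoy data can be illustrated generically: bin the observed periods into an empirical density, evaluate a candidate distribution at the bin centers, and correlate the two. This is a sketch only — the paper's nonlinear period density is not reproduced here, so a Gaussian stands in as the candidate, and the "period" sample is synthetic.

```python
import numpy as np

def pdf_correlation(samples, pdf, bins=30):
    """Pearson correlation between the empirical histogram density of
    `samples` and a candidate `pdf` evaluated at the bin centers --
    the style of goodness-of-fit figure quoted in the abstract."""
    dens, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.corrcoef(dens, pdf(centers))[0, 1])

# Example with a Gaussian stand-in for a fitted period density
rng = np.random.default_rng(0)
periods = rng.normal(8.0, 1.5, size=20000)      # synthetic "periods", s
gauss = lambda t: np.exp(-(t - 8.0) ** 2 / (2 * 1.5 ** 2)) / (1.5 * np.sqrt(2 * np.pi))
r = pdf_correlation(periods, gauss)
```

Substituting the buoy periods and the paper's steepness-dependent density for the synthetic data and Gaussian would reproduce the 0.9686-0.9917 style comparison.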

Relevance:

60.00%

Publisher:

Abstract:

A major problem envisaged in the course of man-made climate change is sea-level rise. The global aspect of the thermal expansion of sea water is likely simulated reasonably well by present-day climate models; the variation of sea level due to variations in the regional atmospheric forcing and in the large-scale oceanic circulation, however, is not adequately simulated by a global climate model because of insufficient spatial resolution. A method to infer the coastal aspects of sea-level change is to use a statistical "downscaling" strategy: a linear statistical model is built upon a multi-year data set of local sea-level data and of large-scale oceanic and/or atmospheric data such as sea-surface temperature or sea-level air pressure. We apply this idea to sea level along the Japanese coast. The sea level is related to regional and North Pacific sea-surface temperature and sea-level air pressure. Two relevant processes are identified: one is the local wind set-up of water due to regional low-frequency wind anomalies; the other is a planetary-scale atmosphere-ocean interaction which takes place in the eastern North Pacific.
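A linear downscaling model of this kind — local sea level regressed on the leading principal components of a large-scale field — can be sketched as below. This is a toy illustration with synthetic fields, not the paper's Japanese-coast setup; field size, PC count and coefficients are invented.

```python
import numpy as np

def train_downscaling(field, local, n_pcs=2):
    """field: (n_months, n_gridpoints) large-scale anomalies (e.g. SST
    or SLP); local: (n_months,) coastal sea level.
    Fit local ~ intercept + leading PCs of the field."""
    mean = field.mean(axis=0)
    _, _, vt = np.linalg.svd(field - mean, full_matrices=False)
    eofs = vt[:n_pcs]
    pcs = (field - mean) @ eofs.T
    X = np.column_stack([np.ones(len(pcs)), pcs])
    beta, *_ = np.linalg.lstsq(X, local, rcond=None)
    return mean, eofs, beta

def predict_local(model, field):
    """Downscale a new large-scale field to the local sea level."""
    mean, eofs, beta = model
    return beta[0] + ((field - mean) @ eofs.T) @ beta[1:]

# Synthetic demo: a rank-2 "field" whose amplitudes drive local sea level
rng = np.random.default_rng(3)
amps = rng.normal(size=(120, 2))             # monthly pattern amplitudes
field = amps @ rng.normal(size=(2, 50))      # (months, gridpoints)
local = 3.0 + 2.0 * amps[:, 0] - amps[:, 1]  # local sea level, cm
model = train_downscaling(field, local, n_pcs=2)
```

In practice the model would be trained on the historical overlap period and applied to climate-model output of the same large-scale fields.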

Relevance:

60.00%

Publisher:

Abstract:

The thermocline is an important physical characteristic of the ocean temperature field and has a major influence on underwater communication, submarine operations, aquaculture and fishing. Using multi-year historical data (1930-2002, 510,143 stations) for the Chinese coastal seas and the Northwest Pacific (110°E-140°E, 10°N-40°N) from the "China Ocean Science Database" of the Institute of Oceanology, Chinese Academy of Sciences, together with an improved thermocline determination method, this thesis analyses the spatial and temporal distribution of thermocline characteristics in this region. The Princeton Ocean Model (POM) is used to simulate the hydrographic structure of the Chinese coastal seas, particularly off the southeast coast, and to study the influence of the hydrographic environment on the inverse thermocline. Finally, based on historical temperature observations and the EOF decomposition technique, a statistical prediction method is proposed for rapidly reconstructing the vertical temperature structure of the Chinese coastal seas and adjacent waters from a limited number of in situ temperature measurements, so that the in situ thermocline can be estimated quickly.

Analysis of the historical data shows that, under the influence of solar radiation and wind stress, the thermocline north of 20°N has a pronounced seasonal cycle: it is shallowest and strongest in summer and the opposite in winter, while the phase of thermocline thickness clearly lags the other variables, being thinnest in spring and thickest in autumn. From December to March there is no thermocline in the Bohai Sea, the Yellow Sea and along the western East China Sea, and parts of the Northwest Pacific are also essentially without thermocline from January to March. Over the shoals off the western and eastern Yellow Sea coasts and near the Taiwan Strait, wind stirring and tidal mixing keep the probability of thermocline occurrence low all year round. In summer, stratification strengthens over the coastal shelf; the seasonal range of thermocline intensity over the shelf (0.31°C/m) is clearly larger than in the deep-water region (about 0.05°C/m), while the seasonal ranges of thermocline depth and thickness are smaller over the shelf than in deep water. South of 20°N the seasonal variation of the thermocline is weak. Inverse thermoclines occur mainly in winter and spring (October to May). Influenced by the Changjiang diluted water and the Taiwan Warm Current, the inverse thermocline off the southeast coast persists longest and occurs most frequently, while along the northern and eastern coasts of the Shandong Peninsula and the western and northern coasts of the Korean Peninsula its growth and decay appear to be related to the Yellow Sea Warm Current. Multiple-thermocline structures occur all year round in the North Equatorial Current and Tsushima Warm Current regions. Where the Kuroshio intrudes into the Yellow, East China and South China Seas, the multiple thermocline shows clearly different seasonal cycles: in the central Yellow Sea it occurs more often in spring than in summer and autumn; in the western East China Sea it appears mainly in summer; and in the northern South China Sea it occurs more often in winter and spring than in summer and autumn. These variations are probably controlled mainly by changes in sea surface temperature and by wind-driven surface currents.

The inverse thermocline structure off the southeast coast of China was simulated with the Princeton Ocean Model (POM). The simulated seasonal variation of the Changjiang diluted water, including its summer turning, agrees well with observations, and the model essentially reproduces the main circulation, the temperature and salinity fields, and the distribution and seasonal variation of the inverse thermocline in the Bohai, Yellow and East China Seas. Numerical experiments show that without freshwater input from the Changjiang and Huanghe Rivers virtually no inverse thermocline appears anywhere in the study area, so land-derived freshwater is probably one of the basic factors behind inverse thermocline occurrence near river mouths. Increasing the discharge of the Changjiang or of the warm currents (the Kuroshio and the Taiwan Warm Current) raises, to different degrees, the occurrence probability, intensity, depth and thickness of the inverse thermocline, with the warm currents having the stronger effect. The Changjiang markedly affects the inverse thermocline off the southeast coast, especially from autumn to early winter, shifting its position off the Changjiang estuary toward the southeast; the warm currents have a larger influence in early spring, shifting the inverse thermocline off the Changjiang estuary toward the northeast.

Analysis of the long-term variability of the thermocline shows that, in the Yellow Sea Cold Water Mass region, summer thermocline intensity has an interannual variation of about 3.8 years and a decadal variation of about 18.9 years, which may mainly represent a thermal response to the atmospheric temperature over East Asia in the same summer and the preceding winter. In the East China Sea cold eddy region, summer thermocline intensity has a 3.7-year interannual variation, with positive intensity anomalies in El Niño years, probably influenced mainly by anomalies of the local cyclonic atmospheric circulation. Spectral analysis also reveals a 33.2-year decadal variation there: thermocline intensity shifted from weak to strong in the mid-1970s, a change possibly related to the decadal variation of the Kuroshio transport.

The statistical prediction of the vertical temperature structure shows that the first four principal components of the EOF decomposition explain more than 95% of the total variance of the temperature anomalies at the original spatial points, and that the vertical structure inferred from coefficients solved with observations near the sea surface is the most stable. Verification of the reconstruction model against measured CTD profiles from three different regions (the East China Sea shelf, the deep South China Sea, and the waters around Taiwan Island) shows that the correlation between reconstructed and observed profiles exceeds the 95% confidence level. The mean errors between reconstructed and observed temperature profiles in the three regions are 0.69°C, 0.52°C and 1.18°C respectively; the mean profile reconstruction error is smaller than the mean climatological deviation, so the statistical model estimates the vertical temperature structure well. Comparison of the thermoclines derived from reconstructed profiles and from CTD profiles on the East China Sea shelf gives mean absolute errors of 1.51 m, 1.36 m and 0.17°C/m for the upper boundary depth, lower boundary depth and intensity, with mean relative errors of 24.7%, 8.9% and 22.6% respectively; although the mean relative errors of thermocline depth and intensity are large, the absolute errors are small. In the South China Sea, the mean absolute prediction errors of the reconstructed upper boundary, lower boundary and intensity are 4.1 m, 27.7 m and 0.007°C/m, with mean relative errors of 16.1%, 16.8% and 9.5%; the mean relative errors of all thermocline characteristics are within 20%. Although the mean absolute prediction error of the lower boundary depth is large in the South China Sea, relative to the spatial scale of the lower boundary depth (mean 168 m) the mean relative error is only 16.8%. The reconstructed temperature profiles can therefore provide good thermocline estimates for both the Chinese shelf seas and the deep-water regions. Finally, based on the analysis of the historical temperature profiles and the thermocline statistical prediction model developed here, a visualization system was built that performs quick thermocline estimation and queries on an ordinary personal computer in real time; this is the most systematic result to date for large-area thermocline statistics and real-time prediction.

Relevance:

60.00%

Publisher:

Abstract:

Multiway Principal Component Analysis (MPCA) is a multivariate statistical method that extracts, from large and noisy data sets, a few principal components that accurately characterize the data, reducing its dimensionality by projection; it is mainly applied to batch production processes. In real batch processes, the batches run asynchronously for various reasons, so their durations differ and an effective statistical model cannot be built directly. Orthogonal Function Approximation (OFA) is a projection technique based on orthogonal bases: after OFA processing, the original data can be described by its projection coefficients, which both synchronizes the trajectories and compresses the data volume. This paper partially improves the OFA method and, combined with MPCA, applies it in a simulation study of a typical batch process, penicillin fermentation. The results show that the improved OFA is computed far faster, and that the improved OFA-MPCA method synchronizes and models the batches well and yields accurate monitoring results.
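The OFA-then-MPCA pipeline can be sketched as follows: each batch trajectory, whatever its length, is projected onto an orthogonal basis (Legendre polynomials here) over normalized time, and PCA is then run on the resulting fixed-length coefficient vectors. A minimal sketch; the basis order, mode count and trajectories are illustrative, not the paper's improved OFA.

```python
import numpy as np

def ofa_coefficients(trajectory, order=5):
    """Orthogonal Function Approximation: fit a batch trajectory of any
    length over normalized time with Legendre polynomials.  The fixed-
    length coefficient vector synchronizes batches of unequal duration."""
    t = np.linspace(-1.0, 1.0, len(trajectory))
    return np.polynomial.legendre.legfit(t, trajectory, deg=order)

def mpca_scores(coeff_matrix, n_pc=2):
    """MPCA step: PCA on the (n_batches, n_coefficients) matrix."""
    anom = coeff_matrix - coeff_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(anom, full_matrices=False)
    return anom @ vt[:n_pc].T

# Two batches of different lengths sampled from the same underlying
# profile give the same coefficient vector -- trajectories synchronized.
profile = lambda t: 1.0 + 2.0 * t - 0.5 * t ** 2
c50 = ofa_coefficients(profile(np.linspace(-1, 1, 50)))
c80 = ofa_coefficients(profile(np.linspace(-1, 1, 80)))
```

Monitoring would then track each new batch's scores (and residuals) against control limits built from the historical batches.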

Relevance:

60.00%

Publisher:

Abstract:

The problem of using image contours to infer the shapes and orientations of surfaces is treated as a problem of statistical estimation. The basis for solving this problem lies in an understanding of the geometry of contour formation, coupled with simple statistical models of the contour generating process. This approach is first applied to the special case of surfaces known to be planar. The distortion of contour shape imposed by projection is treated as a signal to be estimated, and variations of non-projective origin are treated as noise. The resulting method is then extended to the estimation of curved surfaces, and applied successfully to natural images. Next, the geometric treatment is further extended by relating contour curvature to surface curvature, using cast shadows as a model for contour generation. This geometric relation, combined with a statistical model, provides a measure of goodness-of-fit between a surface and an image contour. The goodness-of-fit measure is applied to the problem of establishing registration between an image and a surface model. Finally, the statistical estimation strategy is experimentally compared to human perception of orientation: human observers' judgements of tilt correspond closely to the estimates produced by the planar strategy.

Relevance:

60.00%

Publisher:

Abstract:

Bewsher, D., Innes, D.E., Parnell, C.E. and Brown, D.S., 2005, Comparison of blinkers and explosive events, Astronomy and Astrophysics, 432, 307. Sponsorship: PPARC

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: Genetic association studies are conducted to discover genetic loci that contribute to an inherited trait, identify the variants behind these associations and ascertain their functional role in determining the phenotype. To date, functional annotations of the genetic variants have rarely played more than an indirect role in assessing evidence for association. Here, we demonstrate how these data can be systematically integrated into an association study's analysis plan. RESULTS: We developed a Bayesian statistical model for the prior probability of phenotype-genotype association that incorporates data from past association studies and publicly available functional annotation data regarding the susceptibility variants under study. The model takes the form of a binary regression of association status on a set of annotation variables whose coefficients were estimated through an analysis of associated SNPs in the GWAS Catalog (GC). The functional predictors examined included measures that have been demonstrated to correlate with the association status of SNPs in the GC and some whose utility in this regard is speculative: summaries of the UCSC Human Genome Browser ENCODE super-track data, dbSNP function class, sequence conservation summaries, proximity to genomic variants in the Database of Genomic Variants and known regulatory elements in the Open Regulatory Annotation database, PolyPhen-2 probabilities and RegulomeDB categories. Because we expected that only a fraction of the annotations would contribute to predicting association, we employed a penalized likelihood method to reduce the impact of non-informative predictors and evaluated the model's ability to predict GC SNPs not used to construct the model. We show that the functional data alone are predictive of a SNP's presence in the GC. 
Further, using data from a genome-wide study of ovarian cancer, we demonstrate that their use as prior data when testing for association is practical at the genome-wide scale and improves power to detect associations. CONCLUSIONS: We show how diverse functional annotations can be efficiently combined to create 'functional signatures' that predict the a priori odds of a variant's association to a trait and how these signatures can be integrated into a standard genome-wide-scale association analysis, resulting in improved power to detect truly associated variants.
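The penalized regression of association status on annotation variables can be sketched with a ridge-penalized logistic model fitted by gradient descent. This is a stand-in: the study's actual penalty, annotation set and GWAS Catalog data differ, and the features below are synthetic.

```python
import numpy as np

def fit_penalized_logistic(X, y, lam=0.05, lr=0.5, n_iter=3000):
    """Ridge-penalized logistic regression by gradient descent -- a
    stand-in for the penalized-likelihood fit in the abstract.  The
    penalty shrinks coefficients of non-informative annotations."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(n_iter):
        pr = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (pr - y) / n + lam * w)
        b -= lr * np.mean(pr - y)
    return w, b

def prior_odds(w, b, annotations):
    """Prior odds of association implied by a variant's annotations --
    the 'functional signature' idea of the abstract."""
    return np.exp(annotations @ w + b)

# Synthetic annotations: only the first two of three features matter
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))).astype(float)
w, b = fit_penalized_logistic(X, y)
```

The fitted odds would then feed into the association test as prior weights, rather than being used as a classifier on their own.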

Relevance:

60.00%

Publisher:

Abstract:

The neutron multidetector DéMoN has been used to investigate the symmetric splitting dynamics in the reactions 58,64Ni + 208Pb at excitation energies ranging from 65 to 186 MeV for the composite system. An analysis based on the new backtracing technique has been applied to the neutron data to determine the two-dimensional correlations between the initial thermal energy of the parent composite system (E_th^CN) and the total neutron multiplicity (ν_tot), and between the pre- and post-scission neutron multiplicities (ν_pre and ν_post, respectively). The shape of the ν_pre distribution indicates the possible coexistence of fast fission and fusion-fission for the system 58Ni + 208Pb (E_beam = 8.86 A MeV). The analysis of the neutron multiplicities in the framework of the combined dynamical statistical model (CDSM) gives a reduced friction coefficient β = 23 (+25/−12) × 10^21 s^−1, above the one-body dissipation limit. The corresponding fission time is τ_f = 40 (+46/−20) × 10^−21 s. © 1999 Elsevier Science B.V. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

Antarctic krill is a cold water species, an increasingly important fishery resource and a major prey item for many fish, birds and mammals in the Southern Ocean. The fishery and the summer foraging sites of many of these predators are concentrated between 0° and 90°W. Parts of this quadrant have experienced recent localised sea surface warming of up to 0.2°C per decade, and projections suggest that further widespread warming of 0.27°C to 1.08°C will occur by the late 21st century. We assessed the potential influence of this projected warming on Antarctic krill habitat with a statistical model that links growth to temperature and chlorophyll concentration. The results divide the quadrant into two zones: a band around the Antarctic Circumpolar Current in which habitat quality is particularly vulnerable to warming, and a southern area which is relatively insensitive. Our analysis suggests that the direct effects of warming could reduce the area of growth habitat by up to 20%. The reduction in growth habitat within the range of predators, such as Antarctic fur seals, that forage from breeding sites on South Georgia could be up to 55%, and the habitat's ability to support Antarctic krill biomass production within this range could be reduced by up to 68%. Sensitivity analysis suggests that the effects of a 50% change in summer chlorophyll concentration could be more significant than the direct effects of warming. A reduction in primary production could lead to further habitat degradation but, even if chlorophyll increased by 50%, projected warming would still cause some degradation of the habitat accessible to predators. While there is considerable uncertainty in these projections, they suggest that future climate change could have a significant negative effect on Antarctic krill growth habitat and, consequently, on Southern Ocean biodiversity and ecosystem services.
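The habitat assessment can be illustrated with a toy growth index linking temperature and chlorophyll, evaluated over a grid before and after a uniform warming. The functional form, optimum, threshold and fields below are invented for illustration; they are not the study's fitted growth model.

```python
import numpy as np

def growth_index(temp, chl, t_opt=1.0, t_width=3.0):
    """Toy krill growth index: peaks near an optimal temperature and
    saturates with chlorophyll (illustrative, NOT the paper's model)."""
    return np.exp(-((temp - t_opt) / t_width) ** 2) * chl / (0.5 + chl)

def habitat_fraction(temp, chl, threshold=0.3):
    """Fraction of grid cells whose growth index exceeds a threshold."""
    return float(np.mean(growth_index(temp, chl) > threshold))

temp = np.linspace(0.0, 4.0, 200)       # schematic SST transect, deg C
chl = np.full_like(temp, 1.0)           # schematic chlorophyll, mg m^-3
before = habitat_fraction(temp, chl)
after = habitat_fraction(temp + 1.0, chl)    # ~1 deg C projected warming
```

The same machinery exposes the chlorophyll sensitivity the abstract highlights: halving the chlorophyll field degrades the habitat further than warming alone.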

Relevance:

60.00%

Publisher:

Abstract:

Objectives: To identify demographic and socioeconomic determinants of need for acute hospital treatment at small area level. To establish whether there is a relation between poverty and use of inpatient services. To devise a risk adjustment formula for distributing public funds for hospital services using, as far as possible, variables that can be updated between censuses. Design: Cross sectional analysis. Spatial interactive modelling was used to quantify the proximity of the population to health service facilities. Two stage weighted least squares regression was used to model use against supply of hospital and community services and a wide range of potential needs drivers including health, socioeconomic census variables, uptake of income support and family credit, and religious denomination. Setting: Northern Ireland. Main outcome measure: Intensity of use of inpatient services. Results: After endogeneity of supply and use was taken into account, a statistical model was produced that predicted use based on five variables: income support, family credit, elderly people living alone, all ages standardised mortality ratio, and low birth weight. The main effect of the formula produced is to move resources from urban to rural areas. Conclusions: This work has produced a population risk adjustment formula for acute hospital treatment in which four of the five variables can be updated annually rather than relying on census derived data. Inclusion of the social security data makes a substantial difference to the model and to the results produced by the formula.
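The two-stage least squares step — instrumenting the endogenous supply variable, then regressing use on the fitted values — can be sketched as below. This is an unweighted sketch on synthetic data; the study's actual needs drivers, supply variables and weighting are not reproduced.

```python
import numpy as np

def two_stage_ls(y, x_endog, x_exog, z):
    """Two-stage least squares: stage 1 predicts the endogenous
    regressor (e.g. supply) from exogenous variables and instruments;
    stage 2 regresses y (use) on the exogenous variables and the
    stage-1 fitted values."""
    n = len(y)
    W1 = np.column_stack([np.ones(n), x_exog, z])
    x_hat = W1 @ np.linalg.lstsq(W1, x_endog, rcond=None)[0]   # stage 1
    W2 = np.column_stack([np.ones(n), x_exog, x_hat])
    beta, *_ = np.linalg.lstsq(W2, y, rcond=None)              # stage 2
    return beta                                  # [const, exog, endog]

# Synthetic check: x_endog shares an unobserved confounder u with y,
# z is a valid instrument; plain OLS is biased, 2SLS is not.
rng = np.random.default_rng(11)
n = 20000
z, u, x_exog = rng.normal(size=(3, n))
x_endog = z + 0.8 * u + 0.5 * rng.normal(size=n)
y = 1.0 + x_exog + 2.0 * x_endog + u
beta = two_stage_ls(y, x_endog, x_exog, z)
```

Adding observation weights to both stages turns this into the weighted variant used in the paper.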

Relevance:

60.00%

Publisher:

Abstract:

The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 μm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 μm in diameter. Spectra collected with a microscope from eight points on a 200 μm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 μm spot diameter), combined eight-grid-point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 μm spot was too large to be due entirely to the increased spot diameter, but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
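The sampling-error behaviour described above can be mimicked with a small Monte Carlo: each spot samples a finite number of microcrystals, and averaging more grid points (or sampling a larger volume per spot) shrinks the spread of the measured composition. The binomial particle-counting model and all numbers here are illustrative, not the paper's fitted model.

```python
import numpy as np

def fraction_sd(n_points, particles_per_spot, n_tablets=4000, frac=0.5, seed=0):
    """Std. dev. of the measured drug fraction when each tablet is probed
    at `n_points` grid points and each spot samples `particles_per_spot`
    microcrystals (binomial mixing model, illustrative only)."""
    rng = np.random.default_rng(seed)
    counts = rng.binomial(particles_per_spot, frac, size=(n_tablets, n_points))
    measured = counts.mean(axis=1) / particles_per_spot
    return float(measured.std())

sd_8pt  = fraction_sd(8, 20)     # small spot, 8-point grid
sd_64pt = fraction_sd(64, 20)    # small spot, 64-point grid
sd_big  = fraction_sd(8, 200)    # larger sampled volume per spot
```

The spread falls roughly as the inverse square root of the total number of crystals sampled, which is why both the 64-point grid and the larger macro-spot volume reduce σr in the paper.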

Relevance:

60.00%

Publisher:

Abstract:

In this paper the parameters of cement grout affecting rheological behaviour and compressive strength are investigated. A factorial experimental design was adopted to assess the combined effects of the following factors on fluidity, rheological properties, induced bleeding and compressive strength: water/binder ratio (W/B), dosage of superplasticiser (SP), dosage of viscosity agent (VA), and proportion of limestone powder as replacement of cement (LSP). The mini-slump test, Marsh cone, Lombardi plate cohesion meter, induced bleeding test and coaxial rotating cylinder viscometer were used to evaluate the rheology of the cement grout, and the compressive strengths at 7 and 28 days were measured. A two-level fractional factorial statistical model was used to model the influence of the key parameters on the fluidity, the rheology and the compressive strength. The models are valid for mixes with 0.35-0.42 W/B, 0.3-1.2% SP, 0.02-0.7% VA (percentages of binder) and 12-45% LSP as replacement of cement. The influences of W/B, SP, VA and LSP were characterised and analysed using polynomial regression, which can identify the primary factors and their interactions on the measured properties. Mathematical polynomials were developed for mini-slump, plate cohesion meter, induced bleeding, yield value, plastic viscosity and compressive strength as functions of W/B, SP, VA and the proportion of LSP. The statistical approach used highlighted the effects of the limestone powder and of the dosages of SP and VA on the various rheological characteristics of the cement grout.
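The two-level factorial fit — coded ±1 factors with main effects and two-factor interactions estimated by least squares — can be sketched as below. The four factors echo W/B, SP, VA and LSP, but the design is a full (not fractional) factorial and the response values are synthetic.

```python
import numpy as np
from itertools import product

def two_level_design(k):
    """Coded (-1/+1) full factorial design matrix for k factors."""
    return np.array(list(product([-1.0, 1.0], repeat=k)))

def fit_effects(X, y):
    """Least-squares fit of intercept, main effects and two-factor
    interactions -- the form of the polynomial models in the paper."""
    k = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(k)]
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] * X[:, j] for i, j in pairs]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta, pairs

# Four coded factors standing in for W/B, SP, VA, LSP; synthetic
# 28-day strength response with one active interaction (x0*x1)
X = two_level_design(4)
y = 40.0 - 6.0 * X[:, 0] + 3.0 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1]
beta, pairs = fit_effects(X, y)
```

Because the ±1 design columns are orthogonal, each coefficient is estimated independently, which is what lets the paper separate primary factors from their interactions.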