949 results for Geo-statistical model


Relevance:

100.00%

Publisher:

Abstract:

The aim of this letter is to propose an analytical model for studying the performance of Software-Defined Network (SDN) switches. Here, SDN switch performance is defined as the time an SDN switch needs to process a packet without interaction with the controller. We exploit a queueing-theoretic M/Geo/1 model to analyze the key factors: flow-table size, packet arrival rate, number of rules, and rule position. The analytical model is validated using extensive simulations. Our study reveals that these factors have a significant influence on the performance of an SDN switch.
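
As a rough illustration of this kind of queueing analysis (not the authors' code), the Python sketch below simulates an M/Geo/1 queue: Poisson packet arrivals served with geometrically distributed lookup times, where the per-slot match probability is a stand-in for how deep the matching rule sits in the flow table. All parameter values are hypothetical.

```python
import numpy as np

def simulate_m_geo_1(arrival_rate, match_prob, n_packets=100_000, seed=1):
    """Estimate the mean packet sojourn time in an M/Geo/1 queue.

    Arrivals are Poisson; each service takes a geometric number of lookup
    slots (one flow-table rule checked per slot, each matching with
    probability `match_prob`), so deeper rule positions mean slower service.
    """
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0 / arrival_rate, n_packets))
    services = rng.geometric(match_prob, n_packets).astype(float)
    departure, sojourn = 0.0, np.empty(n_packets)
    for i in range(n_packets):
        start = max(arrivals[i], departure)   # wait if the switch is busy
        departure = start + services[i]
        sojourn[i] = departure - arrivals[i]
    return sojourn.mean()

# Stable example: utilization = arrival_rate * (1 / match_prob) = 0.625 < 1.
print(simulate_m_geo_1(arrival_rate=0.5, match_prob=0.8))
```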

Relevance:

100.00%

Publisher:

Abstract:

This study examines the characteristics of the forest vegetation resources and environment of the Fukang region, Xinjiang, and their changes over the past 30 years. Using the spatial analysis capabilities of Arcinfo, it gives a fairly comprehensive analysis of the resources, a DEM, landscape indices, environmental values, and the geostatistical behaviour of precipitation in Xinjiang. The thesis has five parts:

1. Construction of a spatial database of forest resources and environment for the Fukang region, Xinjiang. Building this spatial database is the foundation for all subsequent spatial analysis. Using multi-temporal remote sensing imagery and topographic maps of the region, an integrated GIS spatial database was built combining forest classification maps with attribute tables (including the forest and environmental explanatory-variable sets). To improve the classification accuracy of the TM imagery, ERDAS image processing software was used for enhancement (principal components, noise reduction, destriping, and natural-colour transformation), and classification combined supervised classification with manual interpretation. The integration of R2V, ERDAS, Arcview, and Arcinfo enabled seamless joining of the sub-compartment polygon layer with certain line layers, yielding a workable technical route for building forest resource and environment spatial databases suited to GIS work in western China. In addition, a preliminary analysis of forest resource dynamics in the northern Fukang region was carried out.

2. A digital elevation model (DEM) of the Fukang region and gross-error detection. To improve the accuracy of ecological modelling, simulating and extracting the terrain features of the region is essential. Supported by the spatial database above, a 1:50,000 DEM of the Fukang region was built with Arcinfo and ERDAS. Elevation, slope, and aspect factors were extracted to analyse the vertical distribution of forest vegetation, and gross-error detection was applied to assess the DEM's accuracy.

3. Analysis of landscape pattern change in the Fukang region. Supported by the 1977, 1987, and 1999 spatial databases, vegetation landscape type maps of the region were compiled for the three periods using landscape analysis software, and landscape dynamics and pattern change over the past 30 years were analysed. The results show that: (1) the number of patches in the study area decreased, mean patch area increased, and landscape area became more unevenly distributed across landscape element types, concentrating in a few types, indicating a trend towards homogenisation of landscape types over this period; (2) cropland became more fragmented, with smaller mean patch area and greater dispersion between patches, reflecting intensified human economic activity in the region; and (3) natural forest area decreased considerably, the area of water bodies increased, and the area of glaciers and permanent snow declined.

4. A preliminary analysis of the forest ecological benefits of the Fukang region. Starting from a broad definition of forest ecological benefits, and noting that the 12 ecological-benefit response variables are not fully independent and do not share identical explanatory-variable sets, a seemingly unrelated generalized linear model (many-to-many in character and mutually compatible overall) was introduced. By constructing an "effective area coefficient" and a "market approximation coefficient" for each of the 12 benefits, and supported by the spatial database, the forest ecological benefits of the region were quantified for two periods. The monetary value was 906.738 million yuan in 1987 and 841.344 million yuan in 1999, an overall downward trend.

5. The spatial distribution of annual precipitation from Xinjiang weather-station records. Using the ArcGIS geostatistics module, supported by the 2000 Xinjiang climate spatial database and the Xinjiang DEM, a map of the spatial distribution of annual precipitation over Xinjiang was produced. A trend-surface model was fitted to the climate records to simulate the trend component of the precipitation field. Three algorithms (inverse distance weighting, ordinary Kriging, and co-Kriging) were used to compute and compare the spatio-temporal variation of long-term mean precipitation over the study area. The most accurate of the simulated raster precipitation databases was used to build a long-term mean precipitation information system, which can rapidly compute the total precipitation and its spatial variation for any geographic unit in the study area and generate high-accuracy maps of the spatial distribution of climate elements.
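
As a minimal sketch of the simplest of the three interpolators compared in part 5, inverse distance weighting, here is a self-contained Python version; the station coordinates and precipitation values are invented placeholders, not data from the study.

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of station values onto targets."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)               # nearby stations dominate
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical annual precipitation (mm) at four stations (lon, lat):
stations = np.array([[87.6, 44.0], [88.1, 44.3], [87.9, 43.8], [88.4, 44.1]])
precip = np.array([210.0, 260.0, 180.0, 240.0])
targets = np.array([[88.0, 44.0], [87.7, 44.2]])
print(idw(stations, precip, targets))
```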

Relevance:

100.00%

Publisher:

Abstract:

A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density $\rho(m) \sim c\,m^{-3} e^{\beta_0 m}$ at high masses.

In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.

A statistical model for hadron reactions is then put forward, analogous to the Bohr compound nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of N̄N annihilation at rest with overall success: it explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless, it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of $\beta_0$ required in all these applications is consistently around $[120\ \mathrm{MeV}]^{-1}$, corresponding to a "resonance volume" whose radius is very close to ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.
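
As a small numerical aside (mine, using standard particle masses), the exponential suppression of heavy-particle production implied by $\beta_0 \approx [120\ \mathrm{MeV}]^{-1}$ can be tabulated directly:

```python
import math

T0 = 120.0                                                 # MeV, i.e. 1 / beta_0
masses = {"pion": 139.6, "kaon": 493.7, "proton": 938.3}   # MeV
# Production frequency falls roughly as exp(-m / T0) with particle mass m.
for name, m in masses.items():
    print(f"{name}: exp(-m/T0) = {math.exp(-m / T0):.2e}")
```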

Relevance:

100.00%

Publisher:

Abstract:

A statistical model of linear-confined quarks is applied to obtain the flavor asymmetry of the nucleon sea. The model parametrization is fixed by the available experimental data, and a temperature parameter is used to fit the Gottfried sum rule violation. Results are presented for the ratios of light quark and antiquark distributions, $d/u$ and $\bar{d}/\bar{u}$.
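
For context (a textbook relation, not a result specific to this paper), the Gottfried sum can be written as

```latex
S_G = \int_0^1 \frac{F_2^p(x) - F_2^n(x)}{x}\,dx
    = \frac{1}{3} + \frac{2}{3}\int_0^1 \left[\bar{u}(x) - \bar{d}(x)\right] dx ,
```

so a flavor-symmetric sea ($\bar{u} = \bar{d}$) gives $S_G = 1/3$, and the measured deficit (about 0.235 from NMC) signals $\bar{d} > \bar{u}$.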

Relevance:

100.00%

Publisher:

Abstract:

The strangeness content of the nucleon is determined from a statistical model using confined quark levels, and is shown to be in good agreement with the corresponding values extracted from experimental data. The quark levels are generated by a Dirac equation with a linear confining potential (scalar plus vector). Requiring that the Gottfried sum rule violation reported by the New Muon Collaboration (NMC) be well reproduced, we also obtain the difference between the structure functions of the proton and neutron, and the corresponding sea quark contributions.
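
For orientation, a single-particle Dirac equation with a linearly confining scalar-plus-vector potential typically takes the form below; the equal scalar/vector mixing shown is an illustrative assumption, not necessarily the parametrization of this paper.

```latex
\left[\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta\,\big(m + S(r)\big) + V(r)\,\right]\psi = E\,\psi,
\qquad S(r) = V(r) = \tfrac{1}{2}\,\lambda r .
```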

Relevance:

100.00%

Publisher:

Abstract:

We present a framework for statistical finite element analysis that combines shape and material properties, and that allows statistical statements about biomechanical performance to be made across a given population. In this paper, we focus on the design of orthopaedic implants that fit a maximum percentage of the target population, both in terms of geometry and biomechanical stability. CT scans of the bone under consideration are registered non-rigidly to obtain correspondences in position and intensity between them. A statistical model of shape and intensity (bone density) is computed by means of principal component analysis. Afterwards, finite element analysis (FEA) is performed to analyse the biomechanical performance of the bones: realistic forces are applied, and the resulting displacements and bone stress distribution are calculated. The mechanical behaviour of different PCA bone instances is compared.
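
A minimal sketch of the shape-and-intensity PCA step (with synthetic stand-ins for the registered bone data; not the authors' pipeline):

```python
import numpy as np

# Hypothetical training matrix: each row stacks one bone's mesh-node
# coordinates and sampled CT intensities after non-rigid registration.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3000))              # 40 subjects, 3000 values each

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
modes = Vt                                   # principal modes of shape+intensity
stdev = s / np.sqrt(len(X) - 1)              # per-mode standard deviations

# Synthesise a new statistically plausible instance from the first k modes;
# each instance could then be meshed and passed to the FEA stage.
k = 5
b = rng.normal(size=k) * stdev[:k]
instance = mean + b @ modes[:k]
```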

Relevance:

100.00%

Publisher:

Abstract:

Structural genomics aims to solve a large number of protein structures that represent the protein space. Currently an exhaustive solution for all structures seems prohibitively expensive, so the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns to each protein a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationship among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among those folds that are already known and those that have yet to be discovered. The distributions of distances for solved and unsolved proteins are significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any protein has a yet undetermined fold. Proteins that score the highest probability of representing a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set.
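
A toy version of the Bayes'-rule step (the distance histograms and prior below are invented for illustration, not the paper's fitted distributions):

```python
# Class-conditional probabilities of the graph distance d from a cluster to
# its nearest representative fold, for solved vs. unsolved folds.
p_d_solved   = {1: 0.50, 2: 0.30, 3: 0.15, 4: 0.05}
p_d_unsolved = {1: 0.10, 2: 0.20, 3: 0.30, 4: 0.40}
p_unsolved = 0.6          # prior fraction of clusters with no solved structure

def prob_new_fold(d):
    """Posterior probability that a cluster at distance d has a new fold."""
    num = p_d_unsolved[d] * p_unsolved
    return num / (num + p_d_solved[d] * (1 - p_unsolved))

print(prob_new_fold(4))   # far from all known folds -> likely a new fold
```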

Relevance:

90.00%

Publisher:

Abstract:

Multicarrier code division multiple access (MC-CDMA) is a very promising candidate for the multiple access scheme in fourth-generation wireless communication systems. During asynchronous transmission, multiple access interference (MAI) is a major challenge for MC-CDMA systems and significantly affects their performance. The main objectives of this thesis are to analyze the MAI in asynchronous MC-CDMA and to develop robust techniques to reduce the MAI effect.

Focus is first on the statistical analysis of MAI in asynchronous MC-CDMA. A new statistical model of MAI is developed in which the derivation of MAI can be applied to different distributions of timing offset, and the MAI power is modelled as a Gamma-distributed random variable. By applying the new statistical model of MAI, a new computer simulation model is proposed. This model treats a multiuser system as a single-user system followed by an additive noise component representing the MAI, which enables the new simulation model to significantly reduce the computational load of computer simulations.

MAI reduction using the slow frequency hopping (SFH) technique is the topic of the second part of the thesis. Two subsystems are considered. The first subsystem involves subcarrier frequency hopping as a group, which is referred to as GSFH/MC-CDMA. In the second subsystem, the condition of group hopping is dropped, resulting in a more general system, namely individual subcarrier frequency hopping MC-CDMA (ISFH/MC-CDMA). This research found that, with the introduction of SFH, both GSFH/MC-CDMA and ISFH/MC-CDMA systems generate less MAI power than the basic MC-CDMA system during asynchronous transmission. Because of this, both SFH systems are shown to outperform MC-CDMA in terms of BER. This improvement, however, is at the expense of spectral widening.

In the third part of this thesis, base station polarization diversity, as another MAI reduction technique, is introduced to asynchronous MC-CDMA. The combined system is referred to as Pol/MC-CDMA. In this part a new optimum combining technique, namely maximal signal-to-MAI ratio combining (MSMAIRC), is proposed to combine the signals from two base station antennas. With the application of MSMAIRC, and in the absence of additive white Gaussian noise (AWGN), the resulting signal-to-MAI ratio (SMAIR) is not only maximized but also independent of cross polarization discrimination (XPD) and antenna angle. When AWGN is present, the performance of MSMAIRC is still affected by the XPD and antenna angle, but to a much lesser degree than traditional maximal ratio combining (MRC). Furthermore, this research found that the BER performance of Pol/MC-CDMA can be further improved by changing the angle between the two receiving antennas. Hence the optimum antenna angles for both MSMAIRC and MRC are derived and their effects on BER performance are compared. With the derived optimum antenna angle, the Pol/MC-CDMA system is able to achieve the lowest BER for a given XPD.
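
A minimal sketch of the proposed simulation idea, a single-user link plus an additive noise component whose power is Gamma distributed; the BPSK signalling and the Gamma shape/scale values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def ber_with_gamma_mai(snr_db, mai_shape, mai_scale, n_bits=200_000):
    """BER of a single-user BPSK link with Gamma-distributed MAI power."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0
    awgn_power = 10 ** (-snr_db / 10)
    mai_power = rng.gamma(mai_shape, mai_scale, n_bits)   # per-bit MAI power
    noise = rng.normal(0.0, np.sqrt(awgn_power + mai_power))
    received = symbols + noise
    return np.mean((received > 0) != (bits == 1))

print(ber_with_gamma_mai(snr_db=8.0, mai_shape=2.0, mai_scale=0.05))
```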

Relevance:

90.00%

Publisher:

Abstract:

The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called "omics" disciplines of the biological sciences. Such variability is uncovered by multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method used to generate the data. Many classification methods are limited by an n ≪ p constraint and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach, where not only is the importance of each individual protein explicit, but the proteins are combined into a readily interpretable classification rule without relying on a black-box approach. Here we apply the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
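
A compact sketch of this kind of comparison in scikit-learn, on synthetic n ≪ p data; pairing PCA with linear discriminant analysis is my stand-in for the statistical classifier, not necessarily the paper's choice:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# n << p, as is typical of proteomic spectra: 60 samples, 2000 features.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)

models = {
    "PCA + LDA": make_pipeline(PCA(n_components=10),
                               LinearDiscriminantAnalysis()),
    "linear SVM": SVC(kernel="linear"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```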

Relevance:

90.00%

Publisher:

Abstract:

Chatrooms, for example Internet Relay Chat, are generally multi-user, multi-channel and multi-server chat systems which run over the Internet and provide a protocol for real-time text-based conferencing between users all over the world. While a well-trained human observer is able to understand who is chatting with whom, there are no efficient and accurate automated tools to determine the groups of users conversing with each other. A precursor to analysing evolving cyber-social phenomena is to first determine what the conversations are and which groups of chatters are involved in each. We consider this problem in this paper. We propose an algorithm to discover all groups of users that are engaged in conversation. Our algorithms are based on a statistical model of a chatroom that is founded on our experience with real chatrooms. Our approach does not require any semantic analysis of the conversations; rather, it is based purely on the statistical information contained in the sequence of posts. We improve the accuracy by applying some graph algorithms to clean the statistical information. We present experimental results which indicate that one can automatically determine the conversing groups in a chatroom purely on the basis of statistical analysis.
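
The following is an illustrative sketch, not the paper's algorithm: it links users who repeatedly post within a short window of each other, cleans the resulting graph by thresholding edge weights, and reads conversations off as connected components. The chat log, window, and threshold are hypothetical.

```python
import networkx as nx

posts = [  # (timestamp in seconds, user) -- a made-up chat log
    (0, "ann"), (4, "bob"), (9, "ann"), (13, "bob"),
    (40, "cat"), (44, "dan"), (48, "cat"), (52, "dan"),
]
WINDOW, MIN_WEIGHT = 15, 2

G = nx.Graph()
posts.sort()
for i, (t_i, u_i) in enumerate(posts):
    for t_j, u_j in posts[i + 1:]:
        if t_j - t_i > WINDOW:          # posts too far apart in time
            break
        if u_i != u_j:                  # count near-in-time co-posting
            w = G.get_edge_data(u_i, u_j, {"weight": 0})["weight"]
            G.add_edge(u_i, u_j, weight=w + 1)

# "Clean" the statistical information: drop weakly supported links.
G.remove_edges_from([(u, v) for u, v, w in G.edges(data="weight")
                     if w < MIN_WEIGHT])
print(list(nx.connected_components(G)))  # -> [{'ann', 'bob'}, {'cat', 'dan'}]
```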

Relevance:

90.00%

Publisher:

Abstract:

The Bus Rapid Transit (BRT) station is the interface between passengers and services. The station is crucial to line operation, as it is typically the only location where buses can pass each other. Congestion may occur here when buses maneuvering into and out of the platform lane interfere with bus flow, or when a queue of buses forms upstream of the platform lane, blocking the passing lane. Further, some systems include operation in which express buses do not stop at the station, resulting in a proportion of non-stopping buses. It is important to understand the operation of the station under this type of operation and its effect on BRT line capacity. This study uses microscopic traffic simulation modeling to treat BRT station operation and to analyze the relationship between station bus capacity and BRT line bus capacity. First, the simulation model is developed for the limit-state scenario; then a statistical model is defined and calibrated for a specified range of controlled scenarios of dwell time characteristics. A field survey was conducted to verify parameters such as dwell time, clearance time and the coefficient of variation of dwell time, in order to obtain relevant station bus capacity. The proposed model of BRT bus capacity provides a better understanding of BRT line capacity and is useful to transit authorities in BRT planning, design and operation.
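
For orientation, a widely used deterministic reference for the bus capacity of a single loading area is the formula from the Transit Capacity and Quality of Service Manual, in which dwell-time variability enters through its coefficient of variation; this is a reference point rather than this study's calibrated simulation model, and the parameter values below are illustrative.

```python
def loading_area_capacity(t_d, t_c, c_v, z):
    """Buses/hour for one loading area: 3600 / (t_c + t_d + z * c_v * t_d).

    t_d: mean dwell time (s); t_c: clearance time (s);
    c_v: coefficient of variation of dwell time;
    z: standard normal variate for the tolerated failure (queueing) rate.
    """
    return 3600.0 / (t_c + t_d + z * c_v * t_d)

# 25 s mean dwell, 10 s clearance, c_v = 0.6, z = 1.28 (~10% failure rate):
print(loading_area_capacity(t_d=25, t_c=10, c_v=0.6, z=1.28))  # ~66 buses/h
```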

Relevance:

90.00%

Publisher:

Abstract:

Electricity network investment and asset management require accurate estimation of future demand for energy consumption within specified service areas. For this purpose, simple models are typically developed to predict future trends in electricity consumption using various methods and assumptions. This paper presents a statistical model to predict electricity consumption in the residential sector at the Census Collection District (CCD) level for the state of New South Wales, Australia, based on spatial building and household characteristics. Residential household demographic and building data from the Australian Bureau of Statistics (ABS) and actual electricity consumption data from electricity companies are merged for 74% of the 12,000 CCDs in the state. Eighty percent of the merged dataset is randomly set aside to establish the model using regression analysis, and the remaining 20% is used to independently test the accuracy of model predictions against actual consumption. In 90% of cases, the predicted consumption is shown to be within 5 kWh per dwelling per day of the actual values, with an overall state-wide error of -1.15%. Given a future scenario with a shift in climate zone and a growth in population, the model is used to identify the geographical or service areas that are most likely to see increased electricity consumption. Such geographical representation can be of great benefit when assessing alternatives to the centralised generation of energy; having such a model gives a quantifiable method for selecting the 'most' appropriate system when a review or upgrade of the network infrastructure is required.
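
A minimal sketch of the modelling-and-validation step described above, with synthetic stand-ins for the ABS census features and the metered consumption data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_ccd = 8880                          # ~74% of the state's 12,000 CCDs
X = rng.normal(size=(n_ccd, 6))       # dwelling size, occupancy, income, ...
coef = np.array([3.0, 1.5, 0.8, -0.5, 0.3, 2.1])
y = 18 + X @ coef + rng.normal(0, 2, n_ccd)   # kWh per dwelling per day

# 80/20 split: fit on one part, test prediction accuracy on the other.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
print(f"within 5 kWh/dwelling/day: {np.mean(np.abs(pred - y_te) <= 5):.1%}")
```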

Relevance:

90.00%

Publisher:

Abstract:

This thesis proposes three novel models which extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and hitherto untested statistic for statistical model choice is found to be a practical alternative for larger datasets. Existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model-choice case study is the first of its kind and contains interesting results using so-called unit information prior distributions.

Relevance:

90.00%

Publisher:

Abstract:

In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, that relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes.
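
A minimal sketch of fitting such a log-quadratic model as a statistical linear model; the survival data below are invented, and the "DT-like" time to the first decade of reduction follows from the fitted quadratic:

```python
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)     # heating time (min)
log_n = np.array([7.0, 5.6, 4.5, 3.7, 3.1, 2.7, 2.4])  # log10 survivors ("tailing")

# Ordinary least squares on log10 N(t) = b0 + b1*t + b2*t^2:
A = np.column_stack([np.ones_like(t), t, t**2])
b0, b1, b2 = np.linalg.lstsq(A, log_n, rcond=None)[0]

# DT-like value: first time at which the fit drops 1 log10 below b0,
# i.e. the smaller root of b2*t^2 + b1*t + 1 = 0.
dt_like = (-b1 - np.sqrt(b1**2 - 4.0 * b2)) / (2.0 * b2)
print(f"fit: {b0:.2f} {b1:+.3f} t {b2:+.4f} t^2; first decade in {dt_like:.2f} min")
```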

Relevance:

90.00%

Publisher:

Abstract:

In images with low contrast-to-noise ratio (CNR), the information gain from the observed pixel values can be insufficient to distinguish foreground objects. A Bayesian approach to this problem is to incorporate prior information about the objects into a statistical model. A method for representing spatial prior information as an external field in a hidden Potts model is introduced. This prior distribution over the latent pixel labels is a mixture of Gaussian fields, centred on the positions of the objects at a previous point in time. It is particularly applicable in longitudinal imaging studies, where the manual segmentation of one image can be used as a prior for automatic segmentation of subsequent images. The method is demonstrated by application to cone-beam computed tomography (CT), an imaging modality that exhibits distortions in pixel values due to X-ray scatter. The external field prior results in a substantial improvement in segmentation accuracy, reducing the mean pixel misclassification rate for an electron density phantom from 87% to 6%. The method is also applied to radiotherapy patient data, demonstrating how to derive the external field prior in a clinical context.
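
An illustrative sketch (mine, not the paper's implementation) of segmentation with a hidden Potts model plus an external-field prior, optimised by iterated conditional modes; the Gaussian likelihood, field shape, and all parameter values are assumptions:

```python
import numpy as np

def icm_potts(image, mu, sigma, external, beta=1.5, n_iter=10):
    """Label pixels by minimising data term + Potts smoothing - external field.

    external[k] is a per-pixel log-prior map for label k, e.g. a Gaussian
    field centred on the object's position in the previous image.
    """
    labels = np.argmax(external, axis=0)       # initialise from the prior
    K, (H, W) = len(mu), image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(K):
                    e = (image[i, j] - mu[k]) ** 2 / (2 * sigma**2)
                    e -= external[k, i, j]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta          # penalise disagreeing neighbours
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels

# Demo: noisy bright square (low CNR); the prior is centred where the object
# sat in the "previous" image of the longitudinal series.
rng = np.random.default_rng(0)
truth = np.zeros((20, 20), int); truth[5:15, 5:15] = 1
img = truth + rng.normal(0, 0.8, truth.shape)
yy, xx = np.mgrid[0:20, 0:20]
g = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / (2 * 4.0**2))
external = np.stack([np.log(1 - 0.9 * g + 1e-9), np.log(0.9 * g + 1e-9)])
seg = icm_potts(img, mu=[0.0, 1.0], sigma=0.8, external=external)
print(f"pixel accuracy: {(seg == truth).mean():.1%}")
```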