859 results for Analytical Model


Relevance:

30.00%

Publisher:

Abstract:

In this letter, we propose an analytical approach to modeling uplink inter-cell interference (ICI) in hexagonal-grid-based orthogonal frequency division multiple access (OFDMA) cellular networks. The key idea is that the uplink ICI from each individual cell is approximated by a lognormal distribution whose statistical parameters are determined analytically. The aggregate uplink ICI is then approximated by another lognormal distribution whose parameters are obtained from those of the individual cells using the Fenton-Wilkinson method. Analytic expressions for uplink ICI are derived for two traditional frequency reuse schemes, namely integer frequency reuse with factor 1 (IFR-1) and factor 3 (IFR-3). Uplink fractional power control and lognormal shadowing are modeled. System performance in terms of signal-to-interference-plus-noise ratio (SINR) and spectral efficiency is also derived. The proposed model has been validated by simulations. © 2013 IEEE.
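The Fenton-Wilkinson step can be illustrated with a short sketch: approximate the sum of independent lognormal variables by a single lognormal whose first two moments match. This is a generic illustration of the method, not the letter's derivation (which additionally folds in path loss, shadowing, and power control).

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent lognormals X_i = exp(N(mu_i, sigma_i^2))
    by a single lognormal exp(N(mu_z, sigma_z^2)) via moment matching
    (Fenton-Wilkinson): match the mean and variance of the sum."""
    mus = np.asarray(mus, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    mean_i = np.exp(mus + sigmas**2 / 2)                 # E[X_i]
    var_i = np.exp(2 * mus + 2 * sigmas**2) - mean_i**2  # Var[X_i]
    m1 = mean_i.sum()             # E[sum X_i]
    m2 = var_i.sum() + m1**2      # E[(sum X_i)^2], terms independent
    sigma_z2 = np.log(m2 / m1**2)
    mu_z = np.log(m1) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)
```

By construction the approximating lognormal preserves the exact mean and variance of the aggregate interference.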

Relevance:

30.00%

Publisher:

Abstract:

Starting from a continuum description, we study the nonequilibrium roughening of a thermal re-emission model for etching in one and two spatial dimensions. Using standard analytical techniques, we map our problem to a generalized version of an earlier nonlocal KPZ (Kardar-Parisi-Zhang) model. In 2 + 1 dimensions, the values of the roughness and the dynamic exponents calculated from our theory go like α ≈ z ≈ 1 and in 1 + 1 dimensions, the exponents resemble the KPZ values for low vapor pressure, supporting experimental results. Interestingly, Galilean invariance is maintained throughout.

Relevance:

30.00%

Publisher:

Abstract:

Batch-mode reverse osmosis (batch-RO) operation is considered a promising desalination method due to its low energy requirement compared to other RO system arrangements. To improve and predict batch-RO performance, studies on concentration polarization (CP) were carried out. The Kimura-Sourirajan mass-transfer model is applied and validated by experiments with two different spiral-wound RO elements. Explicit analytical Sherwood correlations are derived from the experimental results. For batch-RO operation, a new genetic-algorithm method is developed to estimate the Sherwood correlation parameters, taking into account the effects of variation in operating parameters. Analytical procedures are presented, and mass-transfer coefficient models are then developed for the different operation modes, i.e., batch RO and continuous RO. The CP-related energy loss in batch-RO operation is quantified from the resulting relationship between feed flow rates and mass-transfer coefficients. It is found that CP increases energy consumption in batch-RO operation by about 25% compared to the ideal case in which CP is absent. For the continuous RO process, the derived Sherwood correlation predicted CP accurately. In addition, we determined the optimum feed flow rate of our batch-RO system.
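The correlation-fitting idea can be sketched as follows. The study estimates the parameters with a genetic algorithm; for noise-free synthetic data, an ordinary log-linear least-squares fit recovers the same power-law parameters, so it serves here as a minimal stand-in. The exponent values and the Reynolds/Schmidt number ranges below are illustrative, not taken from the paper.

```python
import numpy as np

def fit_sherwood(Re, Sc, Sh):
    """Fit a power-law Sherwood correlation Sh = a * Re**b * Sc**c by
    linear least squares in log space: ln Sh = ln a + b ln Re + c ln Sc."""
    A = np.column_stack([np.ones(len(Re)), np.log(Re), np.log(Sc)])
    coef, *_ = np.linalg.lstsq(A, np.log(Sh), rcond=None)
    ln_a, b, c = coef
    return np.exp(ln_a), b, c

# Synthetic demonstration: recover hypothetical parameters a=0.065, b=0.875, c=0.25
rng = np.random.default_rng(0)
Re = rng.uniform(100, 1000, 50)   # illustrative Reynolds number range
Sc = rng.uniform(400, 700, 50)    # illustrative Schmidt number range
Sh = 0.065 * Re**0.875 * Sc**0.25
a, b, c = fit_sherwood(Re, Sc, Sh)
```

With noisy experimental data or operating-parameter-dependent coefficients, a global optimizer such as the paper's genetic algorithm becomes the more robust choice.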

Relevance:

30.00%

Publisher:

Abstract:

Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model, while information from outliers is down-weighted or removed. Copper concentration analysis experiments were carried out on 16 certified standard brass samples. The average relative standard deviation obtained with the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieves better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM), and WLS-SVM. It was also demonstrated that the improved weighting function offers better overall model robustness and convergence speed than the four known weighting functions.
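The segmented-weighting idea can be sketched with the classic three-segment function used in weighted LS-SVM (Suykens et al.): full weight for near-normal residuals, a linearly decaying weight in a transition band, and a small floor weight for outliers. The paper proposes an improved variant of this scheme, which is not reproduced here; the breakpoints c1, c2 and the floor weight below are illustrative defaults.

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0, eps=1e-4):
    """Three-segment weighting on standardized residuals (weighted LS-SVM
    style): weight 1 for |r| <= c1, a linear taper for c1 < |r| <= c2,
    and a small floor weight eps beyond c2."""
    # Robust scale estimate via the median absolute deviation (MAD)
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))
    r = np.abs(residuals) / s
    w = np.ones_like(r)
    band = (r > c1) & (r <= c2)
    w[band] = (c2 - r[band]) / (c2 - c1)   # linear taper in the transition band
    w[r > c2] = eps                        # outliers nearly removed
    return w
```

In a full WLS-SVM fit these weights would rescale each sample's error term before the model is re-solved, so outlying shots barely influence the calibration curve.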

Relevance:

30.00%

Publisher:

Abstract:

A new mesoscale simulation model for solids dissolution, based on a computationally efficient and versatile digital modelling approach (DigiDiss), is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, its use is explored for single crystals (sugars) and clusters. Single crystals and a cluster were first scanned using X-ray microtomography to obtain digital versions of their structures, which served as structural input to the digital simulation. The same particles were then dissolved in water, and the dissolution process was recorded by a video camera and analysed to yield overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce experimental behaviour based on the known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements of the modelling approach will need to include other important effects, such as complex disintegration (particle ejection) and uncertainties in chemical properties. The nature of the digital modelling approach is well suited to future implementation on high-speed hybrid systems combining conventional (CPU) and graphics processing (GPU) units.
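For orientation, a minimal dissolution sketch is given below, assuming a Noyes-Whitney-type rate law with surface area scaling as m^(2/3) and a well-mixed bulk. The actual DigiDiss model resolves full 3D digitised particle structures; this toy ODE does not attempt that, and all parameter values are hypothetical.

```python
def dissolve(m0, k, c_s, volume, dt=0.01, steps=5000):
    """Explicit-Euler integration of a single-particle dissolution model:
    dm/dt = -k * A * (c_s - c), with surface area A ~ m**(2/3) and bulk
    concentration c = (m0 - m) / volume (mass balance in a closed vessel)."""
    m = m0
    history = [m]
    for _ in range(steps):
        c = (m0 - m) / volume                        # dissolved mass raises bulk conc.
        rate = k * m ** (2.0 / 3.0) * max(c_s - c, 0.0)
        m = max(m - rate * dt, 0.0)                  # clamp at full dissolution
        history.append(m)
    return history
```

Validating such a lumped model against the digitised simulation for simple geometries mirrors the validation strategy the paper describes.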

Relevance:

30.00%

Publisher:

Abstract:

Previous research (e.g., Jüttner et al., 2013, Developmental Psychology, 49, 161-176) has shown that object recognition may develop well into late childhood and adolescence. The present study extends that research and reveals novel differences in holistic and analytic recognition performance in 7-11 year olds compared to adults. We interpret our data within Hummel's hybrid model of object recognition (Hummel, 2001, Visual Cognition, 8, 489-517), which proposes two parallel routes for recognition (analytic vs. holistic) modulated by attention. Using a repetition-priming paradigm, we found in Experiment 1 that children showed only analytic priming, not holistic priming. Given that holistic priming might be thought to be more 'primitive', we confirmed in Experiment 2 that this surprising finding was not because children's analytic recognition was merely a result of name repetition. Our results suggest a developmental primacy of analytic object recognition. By contrast, holistic object recognition skills appear to emerge along a much more protracted trajectory extending into late adolescence.

Relevance:

30.00%

Publisher:

Abstract:

Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g. carbon emissions) increase as well, making the performance evaluation questionable. In addition, traditional environmental efficiency has typically been measured with crisp inputs and outputs (desirable and undesirable). However, the input and output data in real-world evaluation problems, such as CO2 emissions, are often imprecise or ambiguous. This paper proposes a DEA-based framework in which the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.
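The crisp CCR building block underlying such DEA frameworks can be sketched as a linear program in multiplier form; the paper's fuzzy extension replaces the crisp data with alpha-cut intervals, a layer omitted in this sketch. The three-DMU data in the test are purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (multiplier form):
      maximize u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0,
    where X is (n_dmus, n_inputs) and Y is (n_dmus, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables ordered as [u (s outputs), v (m inputs)];
    # linprog minimizes, so negate the objective u.y_o.
    c = np.concatenate([-Y[o], np.zeros(m)])
    A_ub = np.hstack([Y, -X])                 # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v.x_o = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun
```

An efficiency of 1 marks a DMU on the frontier; values below 1 quantify how far its input use exceeds best practice.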

Relevance:

30.00%

Publisher:

Abstract:

This work introduces a model in which agents of a network act upon one another according to three different kinds of moral decisions. These decisions are based on an increasing level of sophistication in the empathy capacity of the agent, a hierarchy we name Piaget's ladder. The decision strategies of the agents are non-rational, in the sense that they are arbitrarily fixed, and the model presents quenched disorder given by the distribution of its defining parameters. An analytical solution for this model is obtained in the large-system limit, as well as a leading-order correction for finite-size systems, which shows that typical realisations of the model develop a phase structure with both continuous and discontinuous non-thermal transitions.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to develop a model to predict the transport and fate of gasoline components of environmental concern in the Miami River by mathematically simulating the movement of dissolved benzene, toluene, and xylene (BTX) and methyl tertiary-butyl ether (MTBE) originating from minor gasoline spills in the inter-tidal zone of the river. The computer codes were based on mathematical algorithms that acknowledge the role of advective and dispersive physical phenomena along the river and the prevailing phase transformations of BTX and MTBE. Phase transformations included volatilization and settling. The model used a finite-difference scheme under steady-state conditions, with a set of numerical equations solved by two numerical methods: Gauss-Seidel and Jacobi iteration. A numerical validation was conducted by comparing the results from both methods with analytical and numerical reference solutions. Since similar trends were obtained, it was concluded that the computer codes were algorithmically correct. The Gauss-Seidel iteration yielded a faster convergence rate than the Jacobi iteration and was therefore selected for further development of the computer program and software. The model was then analyzed for its sensitivity; it was found to be very sensitive to wind speed but not to sediment settling velocity.
Computer software was developed with the model code embedded. The software provides two user-friendly visual forms, one to interface with the database files and the other to execute the model and present the graphical and tabulated results. For all predicted concentrations of BTX and MTBE, the maximum concentrations were more than an order of magnitude lower than current drinking water standards. It should be pointed out, however, that concentrations below these standards, although not harmful to humans, may be very harmful to organisms at various trophic levels of the Miami River ecosystem and associated waters. This computer model can be used for rapid assessment and management of the effects of minor gasoline spills on inter-tidal riverine water quality.
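The two iterative solvers compared in the study can be sketched for a generic diagonally dominant linear system. Gauss-Seidel reuses values already updated within the current sweep, which is why it typically converges in fewer iterations than Jacobi, as the study also observed; the 3x3 system in the test is illustrative only.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: every component is updated from the previous sweep."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel iteration: each component update uses the values
    already computed in the current sweep."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1
    return x, max_iter
```

For diagonally dominant systems such as those produced by finite-difference discretizations, both methods converge to the same solution, with Gauss-Seidel needing fewer sweeps.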

Relevance:

30.00%

Publisher:

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated on parallel lines. An example of this manufacturing configuration is observed at a facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial lines and different configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general manufacturing systems with multiple product classes, job circulation due to failures, and fork-join structures to model parallel processing were studied. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. On average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case and from 26.39% to 7.23% in the general case.
All the equations in the analytical formulations were implemented as a set of Matlab scripts. Using these scripts, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate various system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
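A much-simplified version of the flow-time logic can be sketched for a serial line of M/M/1 stations with geometric rework: a finished job fails with some probability and recirculates through the whole line, so the effective arrival rate and the expected number of passes both grow by 1/(1 - p). The study's fork-join approximations with regression corrections are far more elaborate; station rates and the failure probability below are hypothetical.

```python
def line_flow_time(lam, mus, p_fail):
    """Mean flow time of a serial line of M/M/1 stations where a completed
    job fails with probability p_fail and recirculates through the line.
    Effective arrival rate at each station: lam / (1 - p_fail);
    expected number of passes: 1 / (1 - p_fail)."""
    lam_eff = lam / (1.0 - p_fail)
    per_pass = 0.0
    for mu in mus:
        if lam_eff >= mu:
            raise ValueError("station unstable: effective utilisation >= 1")
        per_pass += 1.0 / (mu - lam_eff)   # M/M/1 mean sojourn time at this station
    return per_pass / (1.0 - p_fail)       # expected passes x time per pass
```

Even this textbook-style version shows the key effect the abstract describes: rework inflates traffic intensity, so flow time grows quickly with failure probability.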

Relevance:

30.00%

Publisher:

Abstract:

Stable isotope analysis has emerged as one of the primary means for examining the structure and dynamics of food webs, and numerous analytical approaches are now commonly used in the field. Techniques range from simple, qualitative inferences based on the isotopic niche, to Bayesian mixing models that can be used to characterize food-web structure at multiple hierarchical levels. We provide a comprehensive review of these techniques, and thus a single reference source to help identify the most useful approaches to apply to a given data set. We structure the review around four general questions: (1) what is the trophic position of an organism in a food web?; (2) which resource pools support consumers?; (3) what additional information does relative position of consumers in isotopic space reveal about food-web structure?; and (4) what is the degree of trophic variability at the intrapopulation level? For each general question, we detail different approaches that have been applied, discussing the strengths and weaknesses of each. We conclude with a set of suggestions that transcend individual analytical approaches, and provide guidance for future applications in the field.
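Question (1), the trophic position of an organism, reduces in the single-baseline case to simple arithmetic: Post's formula with a conventional enrichment of about 3.4 per mil in delta-15N per trophic level. The Bayesian mixing models the review covers refine this, but the arithmetic below is the standard starting point (baseline lambda = 2, i.e. a primary-consumer baseline, is an illustrative default).

```python
def trophic_position(d15n_consumer, d15n_base, lambda_base=2.0, delta_n=3.4):
    """Single-baseline trophic position estimate (Post 2002):
    TP = lambda + (d15N_consumer - d15N_base) / Delta15N,
    where lambda is the trophic level of the baseline organism and
    Delta15N is the per-level nitrogen isotope enrichment (~3.4 permil)."""
    return lambda_base + (d15n_consumer - d15n_base) / delta_n
```

For example, a consumer enriched by 6.8 per mil over a primary-consumer baseline sits two trophic levels above it.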

Relevance:

30.00%

Publisher:

Abstract:

Community ecology seeks to understand and predict the characteristics of communities that can develop under different environmental conditions, but most theory has been built on analytical models that are limited in the diversity of species traits that can be considered simultaneously. We address that limitation with an individual-based model to simulate assembly of fish communities characterized by life history and trophic interactions with multiple physiological tradeoffs as constraints on species performance. Simulation experiments were carried out to evaluate the distribution of 6 life history and 4 feeding traits along gradients of resource productivity and prey accessibility. These experiments revealed that traits differ greatly in importance for species sorting along the gradients. Body growth rate emerged as a key factor distinguishing community types and defining patterns of community stability and coexistence, followed by egg size and maximum body size. Dominance by fast-growing, relatively large, and fecund species occurred more frequently in cases where functional responses were saturated (i.e. high productivity and/or prey accessibility). Such dominance was associated with large biomass fluctuations and priority effects, which prevented richness from increasing with productivity and may have limited selection on secondary traits, such as spawning strategies and relative size at maturation. Our results illustrate that the distribution of species traits and the consequences for community dynamics are intimately linked and strictly dependent on how the benefits and costs of these traits are balanced across different conditions.
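The "saturated functional response" regime the experiments refer to can be illustrated with a Holling type II response, whose per-capita consumption plateaus at 1/h as prey density grows; in that plateau, extra prey accessibility no longer raises intake, which is the regime where the simulations favoured fast-growing, fecund species. The parameter values below are illustrative, not from the model.

```python
def holling_type2(prey_density, attack_rate, handling_time):
    """Holling type II functional response: per-capita consumption
    f(N) = a*N / (1 + a*h*N), rising roughly linearly at low prey
    density and saturating at 1/h at high density."""
    return attack_rate * prey_density / (
        1.0 + attack_rate * handling_time * prey_density)
```

With handling time h = 0.5, consumption saturates near 2 prey per unit time however abundant the prey become.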

Relevance:

30.00%

Publisher:

Abstract:

The performance of building envelopes and roofing systems depends significantly on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with partial turbulence simulation. In this dissertation, the test procedure and scaling requirements for tests with partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory the turbulence spectrum is divided into two distinct statistical processes: one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predictions derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings in the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings like roof pavers are not well covered by many existing building codes and standards.
Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of existing information in codes and standards such as ASCE 7-10 on pressure coefficients for components and cladding.

Relevance:

30.00%

Publisher:

Abstract:

The multiple linear regression model plays a key role in statistical inference and has extensive applications in business, environmental, physical, and social sciences. Multicollinearity has been a considerable problem in multiple regression analysis: when the regressor variables are multicollinear, it becomes difficult to make precise statistical inferences about the regression coefficients. The statistical methods discussed in this thesis are the ridge regression, Liu, two-parameter biased, and LASSO estimators. First, an analytical comparison on the basis of risk was made among the ridge, Liu, and LASSO estimators under the orthonormal regression model. I found that LASSO dominates the least squares, ridge, and Liu estimators over a significant portion of the parameter space in large dimensions. Second, a simulation study was conducted to compare the performance of the ridge, Liu, and two-parameter biased estimators by the mean squared error criterion. I found that the two-parameter biased estimator performs better than the corresponding ridge regression estimator. Overall, the Liu estimator performs better than both the ridge and the two-parameter biased estimator.
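Under the orthonormal design analysed in the thesis (X'X = I), the estimators have simple closed forms, which makes the contrast easy to sketch: ridge shrinks every OLS coefficient by the same factor, while LASSO soft-thresholds and can set coefficients exactly to zero. (The Liu and two-parameter biased estimators have analogous closed forms, omitted here.)

```python
import numpy as np

def ridge_orthonormal(beta_ols, k):
    """Ridge estimator under an orthonormal design (X'X = I):
    uniform shrinkage of each OLS coefficient by 1/(1 + k)."""
    return beta_ols / (1.0 + k)

def lasso_orthonormal(beta_ols, lam):
    """LASSO estimator under an orthonormal design: soft thresholding,
    which zeroes out coefficients with |beta| <= lam (ridge never does)."""
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)
```

This sparsity-inducing behaviour of LASSO is exactly what drives the risk comparisons described above: in large dimensions with many small true coefficients, thresholding them to zero beats uniform shrinkage.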
