13 results for Application to model approach
in Aston University Research Archive
Abstract:
The article deals with the CFD modelling of fast pyrolysis of biomass in an Entrained Flow Reactor (EFR). The Lagrangian approach is adopted for the particle tracking, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model includes the thermal degradation of biomass to char with simultaneous evolution of gases and tars from a discrete biomass particle. The chemical reactions are represented using a two-stage, semi-global model. The radial distribution of the pyrolysis products is predicted as well as their effect on the particle properties. The convective heat transfer to the surface of the particle is computed using the Ranz-Marshall correlation.
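For reference, the Ranz-Marshall correlation mentioned above gives the particle Nusselt number in the standard form below; the symbol definitions are the usual ones and are not quoted from the article.

```latex
% Ranz-Marshall correlation for convective heat transfer to a spherical particle:
% Nu = h d_p / k_g (particle Nusselt number), Re_p = particle Reynolds number,
% Pr = gas Prandtl number.
\[
\mathrm{Nu} = 2 + 0.6\,\mathrm{Re}_p^{1/2}\,\mathrm{Pr}^{1/3}
\]
```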
Abstract:
When applying multivariate analysis techniques in information systems and social science disciplines, such as management information systems (MIS) and marketing, the assumption that the empirical data originate from a single homogeneous population is often unrealistic. When applying a causal modeling approach, such as partial least squares (PLS) path modeling, segmentation is a key issue in coping with the problem of heterogeneity in estimated cause-and-effect relationships. This chapter presents a new PLS path modeling approach which classifies units on the basis of the heterogeneity of the estimates in the inner model. If unobserved heterogeneity significantly affects the estimated path model relationships on the aggregate data level, the methodology allows homogeneous groups of observations to be created that exhibit distinctive path model estimates. The approach thus provides differentiated analytical outcomes that permit more precise interpretations of each segment formed. An application to a large data set from the American Customer Satisfaction Index (ACSI) example substantiates the methodology's effectiveness in evaluating PLS path modeling results.
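As a rough, hedged sketch of the setting (the notation below is generic structural-equation notation assumed for illustration, not taken from the chapter), the segmentation targets heterogeneity in the inner model by allowing path coefficients to differ across latent segments:

```latex
% Inner (structural) model for observation i assigned to latent segment k:
% endogenous latent variables \eta are regressed on exogenous latent
% variables \xi and on each other, with segment-specific coefficients.
\[
\eta_i = B^{(k)}\,\eta_i + \Gamma^{(k)}\,\xi_i + \zeta_i , \qquad k = 1,\dots,K
\]
% Segments are formed so that B^{(k)} and \Gamma^{(k)} differ across groups
% while the path estimates are homogeneous within each group.
```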
Abstract:
In this paper we develop an index and an indicator of productivity change that can be used with negative data. For that purpose the range directional model (RDM), a particular case of the directional distance function, is used for computing efficiency in the presence of negative data. We use RDM efficiency measures to arrive at a Malmquist-type index, which can reflect productivity change, and we use RDM inefficiency measures to arrive at a Luenberger productivity indicator, and relate the two. The productivity index and indicator are developed relative to a fixed meta-technology and so they are referred to as a meta-Malmquist index and meta-Luenberger indicator. We also address the fact that VRS technologies are used for computing the productivity index and indicator (a requirement under negative data), which raises issues relating to the interpretability of the index. We illustrate how the meta-Malmquist index can be used, not only for comparing the performance of a unit in two time periods, but also for comparing the performance of two different units at the same or different time periods. The proposed approach is then applied to a sample of bank branches where negative data were involved. The paper shows how the approach yields information from a variety of perspectives on performance which management can use.
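Schematically, and with symbols assumed here rather than quoted from the paper, if θ^M and β^M denote RDM efficiency and inefficiency measured against the fixed meta-technology, the two measures compare a unit's positions in periods t and t+1 roughly as follows:

```latex
% Meta-Malmquist index as a ratio of RDM efficiency scores measured against
% the fixed meta-technology M, and meta-Luenberger indicator as a difference
% of RDM inefficiency scores:
\[
MI = \frac{\theta^{M}\!\left(x^{t+1}, y^{t+1}\right)}{\theta^{M}\!\left(x^{t}, y^{t}\right)},
\qquad
LI = \beta^{M}\!\left(x^{t}, y^{t}\right) - \beta^{M}\!\left(x^{t+1}, y^{t+1}\right)
\]
% MI > 1 (or LI > 0) signals productivity improvement between periods t and t+1.
```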
Abstract:
We propose the use of a stochastic frontier approach to modelling financial constraints of firms. The main advantage of the stochastic frontier approach over the stylised approaches that use pooled OLS or fixed effects panel regression models is that we can not only decide whether or not the average firm is financially constrained, but also estimate a measure of the degree of the constraint for each firm in each time period, as well as the marginal impact of firm characteristics on this measure. We then apply the stochastic frontier approach to a panel of Indian manufacturing firms for the 1997–2006 period. In our application, we highlight and discuss the aforementioned advantages, while also demonstrating that the stochastic frontier approach generates regression estimates that are consistent with the stylised intuition found in the literature on financial constraints and the wider literature on the Indian credit/capital market.
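In generic stochastic frontier notation (assumed here for illustration rather than quoted from the paper), the panel specification separates symmetric noise from a one-sided term that is read as the degree of financial constraint:

```latex
% Stochastic frontier panel specification for firm i in period t:
% y_{it} is the outcome, x_{it} the firm characteristics, v_{it} symmetric noise,
% and u_{it} >= 0 a one-sided term read as the degree of financial constraint.
\[
y_{it} = x_{it}'\beta + v_{it} - u_{it}, \qquad
v_{it} \sim N(0,\sigma_v^2), \quad u_{it} \ge 0
\]
% E[u_{it} | v_{it} - u_{it}] gives a firm- and period-specific constraint measure,
% and u_{it} can itself be modelled as a function of firm characteristics to
% obtain their marginal impact on the constraint.
```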
Abstract:
The Multiple Pheromone Ant Clustering Algorithm (MPACA) models the collective behaviour of ants to find clusters in data and to assign objects to the most appropriate class. It is an ant colony optimisation approach that uses pheromones to mark paths linking objects that are similar and potentially members of the same cluster or class. Its novelty is in the way it uses separate pheromones for each descriptive attribute of the object rather than a single pheromone representing the whole object. Ants that encounter other ants frequently enough can combine the attribute values they are detecting, which enables the MPACA to learn influential variable interactions. This paper applies the model to real-world data from two domains. One is logistics, focusing on resource allocation rather than the more traditional vehicle-routing problem. The other is mental-health risk assessment. The task for the MPACA in each domain was to predict class membership where the classes for the logistics domain were the levels of demand on haulage company resources and the mental-health classes were levels of suicide risk. Results on these noisy real-world data were promising, demonstrating the ability of the MPACA to find patterns in the data with accuracy comparable to more traditional linear regression models. © 2013 Polish Information Processing Society.
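The following is a deliberately loose, hypothetical Python sketch of the per-attribute pheromone idea only; the function name, update rule and thresholding are invented for illustration and omit the actual ant movement and encounter mechanics of the MPACA:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def per_attribute_pheromone_clusters(X, n_iter=50, evaporation=0.1,
                                     deposit_scale=1.0, link_threshold=None):
    """Toy sketch: keep one pheromone value per object pair and per attribute,
    reinforce links between objects whose values on that attribute are similar,
    let pheromone evaporate each iteration, then cluster objects joined by
    strong combined links (hypothetical simplification, not the MPACA itself)."""
    n, d = X.shape
    attr_range = (X.max(axis=0) - X.min(axis=0)) + 1e-12
    # per-attribute similarity in [0, 1]; 1 means identical attribute values
    sim = 1.0 - np.abs(X[:, None, :] - X[None, :, :]) / attr_range
    pheromone = np.zeros((n, n, d))
    for _ in range(n_iter):
        pheromone *= (1.0 - evaporation)       # evaporation
        pheromone += deposit_scale * sim       # deposit where attribute values agree
    strength = pheromone.sum(axis=2)           # combine the separate attribute trails
    np.fill_diagonal(strength, 0.0)
    if link_threshold is None:
        link_threshold = np.percentile(strength, 90)
    graph = csr_matrix(strength > link_threshold)
    _, labels = connected_components(graph, directed=False)
    return labels

# usage on random placeholder data: 30 objects, 4 descriptive attributes
labels = per_attribute_pheromone_clusters(np.random.rand(30, 4))
print(labels)
```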
Abstract:
Aggregation and caking of particles are common and severe problems in many operations involving the handling and processing of granular materials, of which granulated sugar is an important example. Preventing the aggregation and caking of granular materials requires a good understanding of moisture migration and caking mechanisms. In this paper, the modeling of solid bridge formation between particles is introduced, based on the migration of atmospheric moisture into containers packed with granular materials through vapor evaporation and condensation. A model for the caking process is then developed, based on the growth of liquid bridges (during condensation) and their hardening and subsequent conversion into solid bridges (during evaporation). The predicted caking strengths agree well with available experimental data on granulated sugar under storage conditions.
Abstract:
Batch-mode reverse osmosis (batch-RO) operation is considered a promising desalination method due to its low energy requirement compared to other RO system arrangements. To improve and predict batch-RO performance, studies on concentration polarization (CP) are carried out. The Kimura-Sourirajan mass-transfer model is applied and validated by experiments with two different spiral-wound RO elements. Explicit analytical Sherwood correlations are derived based on the experimental results. For batch-RO operation, a new genetic algorithm method is developed to estimate the Sherwood correlation parameters, taking into account the effects of variation in the operating parameters. Analytical procedures are presented, and mass transfer coefficient models are then developed for the different operating modes, i.e., batch-RO and continuous RO. The CP-related energy loss in batch-RO operation is quantified based on the resulting relationship between feed flow rates and mass transfer coefficients. It is found that CP increases energy consumption in batch-RO by about 25% compared to the ideal case in which CP is absent. For the continuous RO process, the derived Sherwood correlation predicts CP accurately. In addition, we determine the optimum feed flow rate of our batch-RO system.
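For orientation, and using generic film-theory notation that is assumed here rather than taken from the paper, the Sherwood correlation whose parameters are fitted and the concentration polarization modulus it determines take the familiar forms:

```latex
% Generic Sherwood correlation with parameters (a, b, c) to be estimated,
% and the film-theory concentration polarization modulus it feeds into:
\[
\mathrm{Sh} = a\,\mathrm{Re}^{\,b}\,\mathrm{Sc}^{\,c}, \qquad
k = \frac{\mathrm{Sh}\,D}{d_h}, \qquad
\frac{C_w - C_p}{C_b - C_p} = \exp\!\left(\frac{J_w}{k}\right)
\]
% k: mass transfer coefficient, D: solute diffusivity, d_h: hydraulic diameter,
% J_w: permeate flux; C_w, C_b, C_p: wall, bulk and permeate concentrations.
```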
Abstract:
This paper addresses the problem of novelty detection in the case where the observed data are a mixture of a known 'background' process contaminated with an unknown other process, which generates the outliers, or novel observations. The framework we describe here is quite general, employing univariate classification with incomplete information, based on knowledge of the distribution (the 'probability density function', 'pdf') of the data generated by the 'background' process. The relative proportion of this 'background' component (the 'prior' 'background' probability), the 'pdf' and the 'prior' probabilities of all other components are all assumed unknown. The main contribution is a new classification scheme that identifies the maximum proportion of observed data following the known 'background' distribution. The method exploits the Kolmogorov-Smirnov test to estimate the proportions, after which the data are Bayes-optimally separated. Results, demonstrated with synthetic data, show that this approach can produce more reliable results than a standard novelty detection scheme. The classification algorithm is then applied to the problem of identifying outliers in the SIC2004 data set, in order to detect the radioactive release simulated in the 'joker' data set. We propose this method as a reliable means of novelty detection in emergency situations which can also be used to identify outliers prior to the application of a more general automatic mapping algorithm. © Springer-Verlag 2007.
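A minimal Python sketch of the proportion-estimation idea, assuming a known Gaussian 'background' distribution purely for illustration; the bound below is a crude DKW/KS-style plug-in device, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import norm

def max_background_proportion(data, background_cdf, alpha=0.05):
    """Crude upper bound on the mixture weight p of a known 'background'
    component.  If F = p*F0 + (1-p)*F1 with F1 unknown, then for every x
    p <= F(x)/F0(x) and p <= (1-F(x))/(1-F0(x)).  The true F is replaced by
    the empirical CDF widened by a DKW/KS-style band of half-width eps."""
    x = np.sort(np.asarray(data))
    n = len(x)
    F_emp = (np.arange(1, n + 1) - 0.5) / n          # empirical CDF at the data points
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW band half-width
    F0 = background_cdf(x)
    with np.errstate(divide="ignore"):
        upper = np.minimum((F_emp + eps) / F0,
                           (1.0 - F_emp + eps) / (1.0 - F0))
    return float(np.clip(np.min(upper), 0.0, 1.0))

# synthetic example: 80% N(0,1) 'background' contaminated by 20% N(4,1) outliers
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(4.0, 1.0, 200)])
p_hat = max_background_proportion(data, norm(0.0, 1.0).cdf)
print(p_hat)  # an upper bound close to the true mixture weight of 0.8
```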
Abstract:
Traditional approaches to calculate total factor productivity (TFP) change through Malmquist indexes rely on distance functions. In this paper we show that the use of distance functions as a means to calculate TFP change may introduce some bias in the analysis, and therefore we propose a procedure that calculates TFP change through observed values only. Our total TFP change is then decomposed into efficiency change, technological change, and a residual effect. This decomposition makes use of a non-oriented measure in order to avoid problems associated with the traditional use of radial oriented measures, especially when variable returns to scale technologies are to be compared. The proposed approach is applied in this paper to a sample of Portuguese bank branches.
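For reference, the conventional distance-function-based Malmquist index that the paper moves away from is usually written (standard notation assumed here) as:

```latex
% Conventional (Färe et al.) Malmquist index between periods t and t+1,
% built from distance functions D^s(x, y) relative to the period-s technology:
\[
M\!\left(x^{t}, y^{t}, x^{t+1}, y^{t+1}\right) =
\left[
  \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
  \cdot
  \frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
\right]^{1/2}
\]
```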
Abstract:
The present investigation is based on a linguistic analysis of the 'Housing Act 1980' and attempts to examine the role of qualifications in the structuring of the legislative statement. The introductory chapter isolates legislative writing as a "sub-variety" of legal language and provides an overview of the controversies surrounding the way it is written and the problems it poses to its readers. Chapter two emphasizes the limitations of the available work on the description of language-varieties for the analysis of legislative writing and outlines the approach adopted for the present analysis. This chapter also gives some idea of the information-structuring of legislative provisions and establishes qualification as a key element in their textualisation. The next three chapters offer a detailed account of the ten major qualification-types identified in the corpus, concentrating on the surface form they take, the features of legislative statements they textualize and the syntactic positions to which they are generally assigned in the statement of legislative provisions. The emerging hypotheses in these chapters have often been verified through a specialist reaction from a Parliamentary Counsel, largely responsible for the writing of the 'Housing Act 1980'. The findings suggest useful correlations between a number of qualificational initiators and the various aspects of the legislative statement. They also reveal that many of these qualifications typically occur in those clause-medial syntactic positions which are sparingly used in other specialist discourse, thus creating syntactic discontinuity in the legislative sentence. Such syntactic discontinuities, on the evidence from psycholinguistic experiments reported in chapter six, create special problems in the processing and comprehension of legislative statements. The final chapter converts the main linguistic findings into a series of pedagogical generalizations, offers indications of how this may be applied in EALP situations and concludes with other considerations of possible applications.
Abstract:
This paper explains some drawbacks of previous approaches for detecting influential observations in deterministic nonparametric data envelopment analysis models, as developed by Yang et al. (Annals of Operations Research 173:89-103, 2010). For example, the efficiency scores and relative entropies obtained in that model are uninformative for outlier detection, and the empirical distribution of all estimated relative entropies is not a Monte Carlo approximation. In this paper we develop a new method to detect whether a specific DMU is truly influential, and a statistical test is applied to determine the significance level. An application to measuring the efficiency of hospitals is used to show the superiority of this method, which leads to significant advancements in outlier detection. © 2014 Springer Science+Business Media New York.
Abstract:
Health care organizations must continuously improve their productivity to sustain long-term growth and profitability. Sustainable productivity performance is mostly assumed to be a natural outcome of successful health care management. Data envelopment analysis (DEA) is a popular mathematical programming method for comparing the inputs and outputs of a set of homogenous decision making units (DMUs) by evaluating their relative efficiency. The Malmquist productivity index (MPI) is widely used for productivity analysis by relying on constructing a best practice frontier and calculating the relative performance of a DMU for different time periods. The conventional DEA requires accurate and crisp data to calculate the MPI. However, the real-world data are often imprecise and vague. In this study, the authors propose a novel productivity measurement approach in fuzzy environments with MPI. An application of the proposed approach in health care is presented to demonstrate the simplicity and efficacy of the procedures and algorithms in a hospital efficiency study conducted for a State Office of Inspector General in the United States. © 2012, IGI Global.
Abstract:
This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely recurrent neural networks, evolution strategies and kernel methods, in our forecasting experiment. In the experiment, these three methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine evolutionary and kernel methods to obtain the benefits of both.
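A minimal, illustrative Python sketch of the kernel-based non-linear autoregressive idea, using scikit-learn's kernel ridge regression and synthetic placeholder data rather than the paper's monetary series or model selection:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def make_lagged(series, exog, n_lags=4):
    """Stack lags of inflation and of a monetary aggregate's growth rate
    into a design matrix for one-step-ahead forecasting."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(np.concatenate([series[t - n_lags:t], exog[t - n_lags:t]]))
        y.append(series[t])
    return np.array(X), np.array(y)

# synthetic placeholder data standing in for inflation and money growth
rng = np.random.default_rng(1)
inflation = np.cumsum(rng.normal(0, 0.1, 300)) * 0.05 + rng.normal(0, 0.2, 300)
money_growth = rng.normal(0, 0.5, 300)

X, y = make_lagged(inflation, money_growth)
X_train, X_test, y_train, y_test = X[:-50], X[-50:], y[:-50], y[-50:]

# non-linear autoregressive model via an RBF kernel
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_train, y_train)
kernel_err = np.mean((model.predict(X_test) - y_test) ** 2)

# naive random-walk benchmark: forecast equals the last observed inflation value
rw_err = np.mean((X_test[:, 3] - y_test) ** 2)   # lag-1 inflation is column 3

print(f"kernel ridge MSE: {kernel_err:.4f}, random walk MSE: {rw_err:.4f}")
```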