19 results for nonparametric demand model
in Aston University Research Archive
Abstract:
In order to generate sales promotion response predictions, marketing analysts estimate demand models using either disaggregated (consumer-level) or aggregated (store-level) scanner data. Comparison of predictions from these demand models is complicated by the fact that models may accommodate different forms of consumer heterogeneity depending on the level of data aggregation. This study shows via simulation that demand models with various heterogeneity specifications do not produce more accurate sales response predictions than a homogeneous demand model applied to store-level data, with one major exception: a random coefficients model designed to capture within-store heterogeneity using store-level data produced significantly more accurate sales response predictions (as well as better fit) compared to other model specifications. An empirical application to the paper towel product category adds additional insights. This article has supplementary material online.
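As a rough illustration of the kind of specification being compared, the sketch below fits a generic random-coefficients (mixed-effects) demand regression to simulated store-level scanner data using statsmodels. The variable names, the simulated data and the between-store random price slope are illustrative assumptions only, not the article's specific within-store heterogeneity model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
stores, weeks = 50, 52
store_id = np.repeat(np.arange(stores), weeks)
price = rng.uniform(0.8, 1.2, stores * weeks)
promo = rng.integers(0, 2, stores * weeks)
# store-specific (random) price sensitivities stand in for consumer heterogeneity
beta_store = rng.normal(-2.0, 0.5, stores)[store_id]
log_sales = 3.0 + beta_store * price + 0.4 * promo + rng.normal(0, 0.2, stores * weeks)
df = pd.DataFrame(dict(store=store_id, price=price, promo=promo, log_sales=log_sales))

# random-coefficients demand model: the price slope is allowed to vary by store
m = smf.mixedlm("log_sales ~ price + promo", df, groups=df["store"],
                re_formula="~price").fit()
print(m.summary())
```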
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so that computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective as well as in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model, whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
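To make the flavour of the algorithm concrete, here is a deliberately simplified, spherical-Gaussian sketch of MAP-DP-style hard assignment: each point is moved to whichever existing cluster (penalised by -log N_k, the "rich get richer" term) or a new cluster (penalised by -log alpha) minimises an approximate negative log posterior. The fixed variances, flat-prior mean updates and stopping rule are simplifying assumptions of this sketch; the published MAP-DP uses conjugate posterior-predictive likelihoods (see the authors' code at http://www.maxlittle.net/).

```python
import numpy as np

def map_dp_spherical(X, alpha=1.0, sigma2=1.0, prior_mean=None, prior_var=10.0,
                     max_iter=100):
    """Simplified MAP-DP-style clustering with fixed spherical covariance."""
    n, d = X.shape
    if prior_mean is None:
        prior_mean = X.mean(axis=0)
    z = np.zeros(n, dtype=int)            # all points start in one cluster
    means = [X.mean(axis=0)]
    counts = [n]
    for _ in range(max_iter):
        changed = 0
        for i in range(n):
            k_old = z[i]
            counts[k_old] -= 1             # remove point i from its cluster
            costs = []
            for mu, nk in zip(means, counts):
                if nk == 0:
                    costs.append(np.inf)   # emptied cluster: never re-used here
                    continue
                data_term = 0.5 * np.sum((X[i] - mu) ** 2) / sigma2
                costs.append(data_term - np.log(nk))
            # option of opening a new cluster, centred at the prior mean
            new_term = 0.5 * np.sum((X[i] - prior_mean) ** 2) / (sigma2 + prior_var)
            costs.append(new_term - np.log(alpha))
            k_new = int(np.argmin(costs))
            if k_new == len(means):        # open a new cluster
                means.append(X[i].copy())
                counts.append(0)
            counts[k_new] += 1
            if k_new != k_old:
                changed += 1
            z[i] = k_new
        for k in range(len(means)):        # MAP-style mean update (flat prior)
            if counts[k] > 0:
                means[k] = X[z == k].mean(axis=0)
        if changed == 0:
            break
    return z, np.array(means)
```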
Abstract:
We propose that problem-solving demand (PSD) is an important job attribute for employees' creative performance. Applying job design theory, we examined the relationship between PSD and employee creativity. The theorised model was tested with data obtained from a sample of 270 employees and their supervisors from three Chinese organisations. Regression results revealed that PSD was positively related to creativity, and this relationship was mediated by creative self-efficacy. Additionally, intrinsic motivation moderated the relationship between PSD and creative self-efficacy such that the relationship was stronger for individuals with high rather than low intrinsic motivation. We discuss our findings, implications for practice, and future research.
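The analysis strategy described here (a moderated first stage feeding a mediation stage) can be approximated with two ordinary regressions. The sketch below uses simulated stand-in data, since the survey items themselves are not reproduced in the abstract; the variable names and effect sizes are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 270
# simulated stand-ins for the survey measures
psd = rng.normal(size=n)                        # problem-solving demand
intrinsic = rng.normal(size=n)                  # intrinsic motivation
cse = 0.4 * psd + 0.2 * psd * intrinsic + rng.normal(scale=0.8, size=n)
creativity = 0.3 * psd + 0.5 * cse + rng.normal(scale=0.8, size=n)
df = pd.DataFrame(dict(psd=psd, intrinsic=intrinsic, cse=cse, creativity=creativity))

# Stage 1 (moderation): does intrinsic motivation strengthen the PSD -> CSE link?
m1 = smf.ols("cse ~ psd * intrinsic", data=df).fit()
# Stage 2 (mediation): does creative self-efficacy carry part of the PSD effect?
m2 = smf.ols("creativity ~ psd + cse", data=df).fit()
print(m1.params, m2.params, sep="\n")
```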
Abstract:
This thesis is a study of three techniques to improve the performance of some standard forecasting models, with application to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We have empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model was proposed for the first time in the thesis. Third, with regard to noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. This thesis proposed a novel algorithm to infer parameters of Student-t noise models. The method is an extension of earlier work for models that are linear in parameters to the non-linear multilayer perceptron. Therefore, the proposed method broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve their performance. We combined these techniques with some standard forecasting models: multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided good improvement to prediction performance.
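A minimal sketch of the multicomponent idea, assuming the PyWavelets and statsmodels libraries: the series is split into additive wavelet components (each band reconstructed with the other bands zeroed), each component is forecast with its own simple AR model, and the component forecasts are summed. The direct-forecast alternative would instead fit one model to the raw series. The wavelet choice, decomposition level and AR order below are arbitrary illustrative settings, not those used in the thesis.

```python
import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

def multicomponent_forecast(series, wavelet="db4", level=3, lags=7):
    """One-step-ahead forecast as the sum of forecasts of wavelet components."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    forecast = 0.0
    for j in range(len(coeffs)):
        # reconstruct the j-th additive component (all other bands zeroed)
        masked = [c if i == j else np.zeros_like(c) for i, c in enumerate(coeffs)]
        component = pywt.waverec(masked, wavelet)[: len(series)]
        # forecast this component one step ahead with a simple AR model
        res = AutoReg(component, lags=lags).fit()
        forecast += res.predict(start=len(component), end=len(component))[0]
    return forecast

# toy daily demand series with a weekly cycle
rng = np.random.default_rng(1)
t = np.arange(512)
demand = 100 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(scale=2, size=t.size)
print(multicomponent_forecast(demand))
```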
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total Quality Managed, ISO 9000+ conformant, Just-in-Time manufacturing systems.
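For readers unfamiliar with the Denavit-Hartenberg (DH) parameterisation referred to above, the following sketch builds the standard per-link homogeneous transform and composes links into a forward-kinematics pose. The link parameters shown are placeholders, not the K-1207 or Hollerbach arm values used in the thesis.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, joint_angles):
    """Compose the link transforms to obtain the end-effector pose."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_params, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# example: a planar two-link arm, (d, a, alpha) per link, both joints at 30 degrees
params = [(0.0, 0.5, 0.0), (0.0, 0.4, 0.0)]
print(forward_kinematics(params, [np.pi / 6, np.pi / 6]))
```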
Abstract:
This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
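For the linear-regression case, the "adaptive" idea amounts to letting the coefficients drift as a random walk and updating them with a Kalman filter after each test-set observation. The sketch below shows only that simplest case, with assumed noise variances; the paper's extended Kalman and particle filters for the nonlinear MLP and GARCH models are more involved.

```python
import numpy as np

class KalmanAdaptiveLinear:
    """Linear model y_t = x_t' beta_t + e_t whose coefficients follow a
    random walk, updated online with a standard Kalman filter."""

    def __init__(self, dim, obs_var=1.0, state_var=1e-3):
        self.beta = np.zeros(dim)
        self.P = np.eye(dim)                  # coefficient covariance
        self.r = obs_var                      # observation noise variance
        self.Q = state_var * np.eye(dim)      # random-walk (drift) covariance

    def predict(self, x):
        return float(x @ self.beta)

    def update(self, x, y):
        self.P = self.P + self.Q              # time update: uncertainty grows
        s = float(x @ self.P @ x) + self.r    # innovation variance
        k = self.P @ x / s                    # Kalman gain
        self.beta = self.beta + k * (y - x @ self.beta)
        self.P = self.P - np.outer(k, x @ self.P)

# walk through a toy test set with a slowly drifting slope: predict, then update
rng = np.random.default_rng(0)
model = KalmanAdaptiveLinear(dim=2)
for t in range(200):
    x = np.array([1.0, rng.normal()])                         # intercept + regressor
    y = 2.0 + (1.0 + 0.005 * t) * x[1] + rng.normal(scale=0.1)
    y_hat = model.predict(x)
    model.update(x, y)
```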
Abstract:
The topic of my research is consumer brand equity (CBE). My thesis is that the success or otherwise of a brand is better viewed from the consumers’ perspective. I specifically focus on consumers as a unique group of stakeholders whose involvement with brands is crucial to the overall success of branding strategy. To this end, this research examines the constellation of ideas on brand equity that have hitherto been offered by various scholars. Through a systematic integration of the concepts and practices identified by these scholars (concepts and practices such as: competitiveness, consumer searching, consumer behaviour, brand image, brand relevance, consumer perceived value, etc.), this research identifies CBE as a construct that is shaped, directed and made valuable by the beliefs, attitudes and the subjective preferences of consumers. This is done by examining the criteria on the basis of which consumers evaluate brands and make brand purchase decisions. Understanding the criteria by which consumers evaluate brands is crucial for several reasons. First, as the basis upon which consumers select brands changes with consumption norms and technology, understanding the consumer choice process will help in formulating branding strategy. Secondly, an understanding of these criteria will help in formulating a creative and innovative agenda for ‘new brand’ propositions. Thirdly, it will also influence firms’ ability to simulate and mould the plasticity of demand for existing brands. In examining these three issues, this thesis presents a comprehensive account of CBE. The first issue deals with the content of CBE. The second issue addresses the problem of how to develop a reliable and valid measuring instrument for CBE. The third issue examines the structural and statistical relationships between the factors of CBE and the consequences of CBE on consumer perceived value (CPV). Using LISREL-SIMPLIS 8.30, the study finds direct and significant influential links between consumer brand equity and consumer value perception.
Abstract:
Prior research suggests management can employ cognitively demanding job attributes to promote employee creativity. However, it is not clear what specific type of cognitive demand is particularly important for creativity, what processes underpin the relationship between demanding job conditions and creativity, and what factors lead to employee perceptions of demanding job attributes. This research sets out to address the aforementioned issues by examining: (i) problem-solving demand (PSD), a specific type of cognitive demand, and the processes that link PSD to creativity, and (ii) antecedents to PSD. Based on social cognitive theory, PSD was hypothesized to be positively related to creativity through the motivational mechanism of creative self-efficacy. However, the relationship between PSD and creative self-efficacy was hypothesized to be contingent on levels of intrinsic motivation. The social information processing perspective and the job crafting model were used to identify antecedents of PSD. Consequently, two social-contextual factors (supervisor developmental feedback and job autonomy) and one individual factor (proactive personality) were hypothesized to be precursors to PSD perceptions. The theorized model was tested with data obtained from a sample of 270 employees and their supervisors from three organisations in the People’s Republic of China. Regression results revealed that PSD was positively related to creativity, but this relationship was partially mediated by creative self-efficacy. Additionally, intrinsic motivation moderated the relationship between PSD and creative self-efficacy such that the relationship was stronger for individuals high rather than low in intrinsic motivation. The findings represent a productive first step in identifying a specific cognitive demand that is conducive to employee creativity. In addition, the findings contribute to the literature by identifying a psychological mechanism that may link cognitively demanding job attributes and creativity.
Abstract:
The recent explosive growth in advanced manufacturing technology (AMT) and continued development of sophisticated information technologies (IT) is expected to have a profound effect on the way we design and operate manufacturing businesses. Furthermore, the escalating capital requirements associated with these developments have significantly increased the level of risk associated with initial design, ongoing development and operation. This dissertation has examined the integration of two key sub-elements of the Computer Integrated Manufacturing (CIM) system, namely the manufacturing facility and the production control system. This research has concentrated on the interactions between production control (MRP) and an AMT-based production facility. The disappointing performance of such systems has been discussed in the context of a number of potential technological and performance incompatibilities between these two elements. It was argued that the design and selection of operating policies for both is the key to successful integration. Furthermore, policy decisions are shown to play an important role in matching the performance of the total system to the demands of the marketplace. It is demonstrated that a holistic approach to policy design must be adopted if successful integration is to be achieved. It is shown that the complexity of the issues resulting from such an approach required the formulation of a structured design methodology. Such a methodology was subsequently developed and discussed. This combined a first-principles approach to the behaviour of system elements with the specification of a detailed holistic model for use in the policy design environment. The methodology aimed to make full use of the 'low inertia' characteristics of AMT, whilst adopting a JIT configuration of MRP and re-coupling the total system to the market demands. This dissertation discussed the application of the methodology to an industrial case study and the subsequent design of operational policies. Consequently, a novel approach to production control resulted, a central feature of which was a move toward reduced manual intervention in the MRP processing and scheduling logic, with increased human involvement and motivation in the management of work-flow on the shopfloor. Experimental results indicated that significant performance advantages would result from the adoption of the recommended policy set.
Abstract:
Shropshire Energy Team initiated this study to examine consumption and associated emissions in the predominantly rural county of Shropshire. Current use of energy is not sustainable in the long term and there are various approaches to dealing with the environmental problems it creates. Energy planning by a local authority for a sustainable future requires detailed energy consumption and environmental information. This information would enable target setting and the implementation of policies designed to encourage energy efficiency improvements and exploitation of renewable energy resources. This could aid regeneration strategies by providing new employment opportunities. Associated reductions in carbon dioxide and other emissions would help to meet national and international environmental targets. In the absence of this detailed information, the objective was to develop a methodology to assess energy consumption and emissions on a regional basis from 1990 onwards for all local planning authorities. This would enable a more accurate assessment of the relevant issues, such that plans are more appropriate and longer lasting. A first comprehensive set of data has been gathered from a wide range of sources and a strong correlation was found between population and energy consumption for a variety of regions across the UK. In this case the methodology was applied to the county of Shropshire to give, for the first time, estimates of primary fuel consumption, electricity consumption and associated emissions in Shropshire for 1990 to 2025. The estimates provide a suitable baseline for assessing the potential contribution renewable energy could play in meeting electricity demand in the county and in reducing emissions. The assessment indicated that in 1990 total primary fuel consumption was 63,518,018 GJ/y, increasing to 119,956,465 GJ/y by 2025. This is associated with emissions of 1,129,626 t/y of carbon in 1990, rising to 1,303,282 t/y by 2025. In 1990, 22,565,713 GJ/y of the primary fuel consumption was used for generating electricity, rising to 23,478,050 GJ/y in 2025. If targets to reduce primary fuel consumption are reached, then emissions of carbon would fall to 1,042,626 t/y by 2025; if renewable energy targets were also reached, then emissions of carbon would fall to 988,638 t/y by 2025.
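As a quick consistency check on the figures quoted above, the implied average annual growth in total primary fuel consumption between 1990 and 2025, if the increase were compounded, is roughly 1.8% per year:

```python
consumption_1990 = 63_518_018    # GJ/y, from the assessment above
consumption_2025 = 119_956_465   # GJ/y
years = 2025 - 1990
cagr = (consumption_2025 / consumption_1990) ** (1 / years) - 1
print(f"implied compound growth: {cagr:.2%} per year")   # roughly 1.8%
```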
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems reviews are made only at intervals of T. With (nQ,R,T) an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, which is an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T) each order increases the order cover to M. Finally, in (M,R,T), when on review order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence. However, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or an exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is allowed to be distributed. All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
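As a point of reference for the simplest of the four systems, the sketch below evaluates the textbook approximate expected annual cost of a continuous-review (Q, R) policy with normally distributed lead-time demand, a linear backorder cost and full backordering. The thesis's exact models (truncated-normal demand, quadratic or exponential backorder costs, gamma lead times, distributed supply quantity) go well beyond this approximation.

```python
from scipy.stats import norm

def qr_policy_cost(Q, R, demand_rate, mu_lead, sigma_lead,
                   order_cost, holding_cost, backorder_cost):
    """Approximate expected annual cost of a continuous-review (Q, R) policy
    with normal lead-time demand and backordering."""
    z = (R - mu_lead) / sigma_lead
    # expected units short per replenishment cycle (standard normal loss function)
    expected_short = sigma_lead * (norm.pdf(z) - z * (1.0 - norm.cdf(z)))
    ordering = order_cost * demand_rate / Q
    holding = holding_cost * (Q / 2.0 + R - mu_lead)
    shortage = backorder_cost * (demand_rate / Q) * expected_short
    return ordering + holding + shortage

# example: demand of 1,000 units/year, lead-time demand N(100, 20^2)
print(qr_policy_cost(Q=200, R=130, demand_rate=1000, mu_lead=100, sigma_lead=20,
                     order_cost=50, holding_cost=2, backorder_cost=25))
```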
Abstract:
In examining bank cost efficiency, the inclusion of banks' risk-taking is very important. In this paper we depart from the standard modelling approach and view risk as intimately related to the technology. Thus, instead of controlling for risk by treating it as a covariate in the standard cost function, we argue that the technology differs with risk, meaning that the parameters of the parametric cost function change with risk in a fully flexible manner. This is accomplished by viewing the parameters of the cost function as nonparametric functions of risk. We also control for country-specific effects in a fully flexible manner by using them as arguments of the nonparametric functions along with the risk variable. The resulting cost function then becomes semiparametric, and the standard parametric model becomes a special case of our semiparametric model. We use this modelling approach for banks in the EU countries. European financial integration is seen as a stepping stone for the development of a competitive single EU market that promotes efficiency and increases consumer welfare, changing the risk profile of European banks. In particular, financial integration allows more risk diversification and permits banks to use more advanced risk management instruments and systems; however, it has at the same time increased the probability of systemic risk. Financial integration has increased the risk of contagion and changed its nature and scope. Consequently, bank risk seems to be an important issue to investigate.
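One simple way to realise "parameters that change with risk in a fully flexible manner" is a varying-coefficient (kernel-weighted least squares) estimator, sketched below: at each risk level of interest the cost-function coefficients are re-estimated, with observations of similar risk receiving more weight. The Gaussian kernel, bandwidth and plain least-squares criterion are illustrative choices and not necessarily the estimator used in the paper, which also lets the coefficients depend on country effects.

```python
import numpy as np

def varying_coefficient_fit(X, y, risk, risk_grid, bandwidth):
    """Kernel-weighted least squares: the coefficients beta(r0) are re-estimated
    at each evaluation point r0, weighting observations by how close their
    risk level is to r0."""
    betas = []
    for r0 in risk_grid:
        w = np.exp(-0.5 * ((risk - r0) / bandwidth) ** 2)   # Gaussian kernel weights
        XtW = X.T * w                                        # apply weights to X'
        beta = np.linalg.solve(XtW @ X, XtW @ y)             # weighted least squares
        betas.append(beta)
    return np.array(betas)

# toy example: cost depends on output, with an output elasticity that varies with risk
rng = np.random.default_rng(0)
n = 500
risk = rng.uniform(0, 1, n)
output = rng.normal(size=n)
cost = 1.0 + (0.5 + 0.8 * risk) * output + rng.normal(scale=0.2, size=n)
X = np.column_stack([np.ones(n), output])
print(varying_coefficient_fit(X, cost, risk, risk_grid=[0.2, 0.5, 0.8], bandwidth=0.1))
```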
Abstract:
Recent research has highlighted several job characteristics salient to employee well-being and behavior for which there are no adequate generally applicable measures. These include timing and method control, monitoring and problem-solving demand, and production responsibility. In this article, an attempt to develop measures of these constructs provided encouraging results. Confirmatory factor analyses applied to data from 2 samples of shop-floor employees showed a consistent fit to a common 5-factor measurement model. Scales corresponding to each of the dimensions showed satisfactory internal and test–retest reliabilities. As expected, the scales also discriminated between employees in different jobs and employees working with contrasting technologies.
The micro-politics of operational adjustment: veto players and the consolidation of demand in the NHS
Abstract:
Recent reports about procurement within the NHS have been highly critical. One problem identified in the reports is the fragmentation of NHS demand across an unnecessarily large number of suppliers. This fragmentation is said to increase transaction costs, reduce opportunities for scale economies and reduce NHS leverage over suppliers. It has been suggested, therefore, that an important way of improving procurement in the NHS is the better consolidation of demand with a lower number of preferred suppliers. However, such a policy, because it will create ‘winners’ and ‘losers’ within NHS organisations, has political as well as technical and practical ramifications. In this article, the authors present a model, the Veto Players Model, in order to assist managers to address these political ramifications. In the article, the authors not only demonstrate the utility of this model with regard to demand consolidation policies, but also argue that the model provides useful lessons for change management initiatives more generally.
Abstract:
A simulation model has been constructed of a valve manufacturing plant with the aim of assessing capacity requirements in response to a forecast increase in demand. The plant provides a weekly cycle of valves of varying types, based on a yearly production plan. Production control is provided by a just-in-time type system to minimise inventory. The simulation model investigates the effect on production lead time of a range of valve sequences into the plant. The study required the collection of information from a variety of sources, and a model that reflected the true capabilities of the production system. The simulation results convinced management that substantial changes were needed in order to meet demand. The case highlights the use of simulation in enabling a manager to quantify operational scenarios and thus provide a rational basis on which to take decisions on meeting performance criteria.
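A toy discrete-event sketch of the kind of experiment described, assuming the SimPy library: valves of two hypothetical types arrive in a given sequence, queue for a single shared machine, and their lead times (arrival to completion) are recorded so that different input sequences can be compared. The processing times, arrival rate and single-resource layout are invented placeholders, not the plant's data.

```python
import random
import simpy

PROCESS_TIME = {"A": 4.0, "B": 6.0}   # hypothetical minutes per valve type

def valve(env, vtype, machine, lead_times):
    arrive = env.now
    with machine.request() as req:     # queue for the machine
        yield req
        yield env.timeout(PROCESS_TIME[vtype])
    lead_times.append(env.now - arrive)

def arrivals(env, sequence, machine, lead_times, mean_interarrival=5.0):
    for vtype in sequence:
        env.process(valve(env, vtype, machine, lead_times))
        yield env.timeout(random.expovariate(1.0 / mean_interarrival))

def simulate(sequence, seed=0):
    random.seed(seed)
    env = simpy.Environment()
    machine = simpy.Resource(env, capacity=1)
    lead_times = []
    env.process(arrivals(env, sequence, machine, lead_times))
    env.run()                          # run until every valve is finished
    return sum(lead_times) / len(lead_times)

# compare the mean lead time for two input sequences of valve types
print(simulate(["A"] * 20 + ["B"] * 20))   # batched sequence
print(simulate(["A", "B"] * 20))           # mixed sequence
```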