44 results for Multiperiod mixed-integer convex model


Relevance: 30.00%

Abstract:

This paper reports the findings from a discrete-choice experiment designed to estimate the economic benefits associated with rural landscape improvements in Ireland. Using a mixed logit model, the panel nature of the dataset is exploited to retrieve willingness-to-pay values for every individual in the sample. This departs from customary approaches in which the willingness-to-pay estimates are normally expressed as measures of central tendency of an a priori distribution. Random-effects models for panel data are subsequently used to identify the determinants of the individual-specific willingness-to-pay estimates. In comparison with the standard methods used to incorporate individual-specific variables into the analysis of discrete-choice experiments, the analytical approach outlined in this paper is shown to add considerable explanatory power to the welfare estimates.
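As an illustrative sketch of the two-stage idea (simulated data and made-up coefficients, not the paper's Irish survey), individual willingness-to-pay can be formed as the ratio of an individual-specific attribute coefficient to a cost coefficient, and the resulting WTP values regressed on respondent characteristics; plain OLS stands in here for the paper's random-effects panel regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical respondents

# Simulated individual-specific coefficients, standing in for mixed logit output:
# an attribute coefficient that varies across people plus a fixed cost coefficient.
age = rng.uniform(20, 70, n)
beta_attr = 1.0 + 0.02 * (age - 45) + rng.normal(0, 0.3, n)  # preference heterogeneity
beta_cost = -0.5                                             # marginal utility of money

# Individual willingness-to-pay is the (negative) ratio of the coefficients.
wtp = -beta_attr / beta_cost

# Second stage: regress the individual WTP estimates on respondent characteristics.
X = np.column_stack([np.ones(n), age])
coef, *_ = np.linalg.lstsq(X, wtp, rcond=None)
print(coef)  # intercept and marginal effect of age on WTP (slope ~0.04 by construction)
```

The second stage is what adds explanatory power: instead of a single central-tendency WTP, the heterogeneity itself becomes the dependent variable.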

Relevance: 30.00%

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used owing to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression, are outlined. Input selection algorithms and on-line system identification algorithms are also covered. Finally, some industrial applications of non-linear models are discussed.
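A minimal sketch of the model-selection problem for linear-in-the-parameter models, using greedy forward selection of monomial basis terms against a validation set (a crude stand-in for the cross-validation and experimental-design criteria surveyed in the article; the data and basis are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-linear system, observed data only: y = sin(x) + noise.
x = rng.uniform(-3, 3, 120)
y = np.sin(x) + rng.normal(0, 0.1, 120)
x_tr, y_tr, x_va, y_va = x[:80], y[:80], x[80:], y[80:]

def design(xs, degrees):
    # Linear-in-the-parameters model: a weighted sum of fixed basis functions.
    return np.column_stack([xs ** k for k in degrees])

# Forward selection: repeatedly add the candidate term that most reduces
# validation-set MSE, and stop when no remaining term improves it.
chosen, best_mse = [], np.inf
while True:
    pick = None
    for k in range(8):
        if k in chosen:
            continue
        w, *_ = np.linalg.lstsq(design(x_tr, chosen + [k]), y_tr, rcond=None)
        mse = np.mean((design(x_va, chosen + [k]) @ w - y_va) ** 2)
        if mse < best_mse:
            best_mse, pick = mse, k
    if pick is None:
        break
    chosen.append(pick)

print(sorted(chosen), round(float(best_mse), 4))
```

Because sin is odd, the procedure tends to pick odd powers first, illustrating how a sparse model with good generalisation emerges from data alone.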

Relevance: 30.00%

Abstract:

Sugars and amino acids were removed from potato slices by soaking in water and ethanol. The slices were then infused with various combinations of sugars (glucose and/or fructose) and amino acids (asparagine, glutamine, leucine, isoleucine, phenylalanine, and/or methionine) and fried. Volatile compounds were trapped onto Tenax prior to gas chromatography-mass spectrometry. Relative amounts of compounds (relative to the internal standard) and relative yields (per mole of amino acid infused into the slices) were determined. Amounts of 10 pyrazines, 4 Strecker aldehydes, and 4 other compounds were monitored. Relative amounts and relative yields of compounds varied according to the composition of the system. For the single amino acid-glucose systems, leucine gave the highest relative amount and relative yield of its Strecker aldehyde. Asparagine and phenylalanine gave the highest total relative amount and total relative yield, respectively, of pyrazines. In the system containing all of the amino acids and glucose, the relative amount of 3-methylbutanal was higher, whereas the amounts of the other monitored Strecker aldehydes were lower. Most of the relative amounts of individual pyrazines were lower compared to the glucose-asparagine system, whereas the total relative yield of pyrazines was lower compared to all of the single amino acid-glucose mixtures. Addition of fructose to the mixed amino acid-glucose model system generated Strecker aldehydes and pyrazines in ratios that were more similar to those of untreated potato chips than to those from the same system without fructose. Both the sugars and the amino acids present in potato are crucial to the development of flavor compounds in fried potato slices.

Relevance: 30.00%

Abstract:

The production of hydrogen by steam reforming of bio-oils obtained from the fast pyrolysis of biomass requires the development of efficient catalysts able to cope with the complex chemical nature of the reactant. The present work focuses on the use of noble-metal-based catalysts for the steam reforming of a few model compounds and of an actual bio-oil. The steam reforming of the model compounds (acetic acid, phenol, acetone and ethanol) was investigated in the temperature range 650-950 °C over Pt, Pd and Rh supported on alumina and on a ceria-zirconia sample. The nature of the support played a significant role in catalyst activity: the use of ceria-zirconia, a redox mixed oxide, led to higher H2 yields than the alumina-supported catalysts. The supported Rh and Pt catalysts were the most active for the steam reforming of these compounds, while the Pd-based catalysts performed poorly. The activity of the promising Pt and Rh catalysts was also investigated for the steam reforming of a bio-oil obtained from beech wood fast pyrolysis. Temperatures close to, or higher than, 800 °C were required to achieve significant conversions to COx and H2 (e.g., H2 yields around 70%). The ceria-zirconia materials again showed higher activity than the corresponding alumina samples. A Pt/ceria-zirconia sample used for over 9 h showed essentially constant activity, although extensive carbonaceous deposits were observed on the quartz reactor walls from early times on stream. In the present case, no benefit was observed from adding a small amount of O2 to the steam/bio-oil feed (autothermal reforming, ATR), probably in part because of the already high oxygen content of the bio-oil. (c) 2005 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

In many finite element analysis models it would be desirable to combine reduced- or lower-dimensional element types with higher-dimensional element types in a single model. In order to achieve compatibility of displacements and stress equilibrium at the junction or interface between the differing element types, it is important in such cases to integrate into the analysis some scheme for coupling the element types. A novel and effective scheme for establishing compatibility and equilibrium at the dimensional interface is described and its merits and capabilities are demonstrated. Copyright (C) 2000 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

The motivation for this paper is to present procedures for automatically creating idealised finite element models from the 3D CAD solid geometry of a component. The procedures produce an accurate and efficient analysis model with little effort on the part of the user. The technique is applicable to thin walled components with local complex features and automatically creates analysis models where 3D elements representing the complex regions in the component are embedded in an efficient shell mesh representing the mid-faces of the thin sheet regions. As the resulting models contain elements of more than one dimension, they are referred to as mixed dimensional models. Although these models are computationally more expensive than some of the idealisation techniques currently employed in industry, they do allow the structural behaviour of the model to be analysed more accurately, which is essential if appropriate design decisions are to be made. Also, using these procedures, analysis models can be created automatically whereas the current idealisation techniques are mostly manual, have long preparation times, and are based on engineering judgement. In the paper the idealisation approach is first applied to 2D models that are used to approximate axisymmetric components for analysis. For these models 2D elements representing the complex regions are embedded in a 1D mesh representing the midline of the cross section of the thin sheet regions. Also discussed is the coupling, which is necessary to link the elements of different dimensionality together. Analysis results from a 3D mixed dimensional model created using the techniques in this paper are compared to those from a stiffened shell model and a 3D solid model to demonstrate the improved accuracy of the new approach. 
At the end of the paper a quantitative analysis of the reduction in computational cost due to shell meshing thin sheet regions demonstrates that the reduction in degrees of freedom is proportional to the square of the aspect ratio of the region, and for long slender solids, the reduction can be proportional to the aspect ratio of the region if appropriate meshing algorithms are used.
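The quoted scaling can be illustrated with assumed numbers (not the paper's test cases): for a well-shaped solid mesh of a thin sheet, the element size is tied to the thickness t, so the in-plane element count grows as (L/t)^2, whereas a shell mesh of the mid-face can use an element size set by the physics, say L/10:

```python
# Illustrative element counts for a square thin sheet of side L, thickness t.
L, t = 100.0, 1.0          # aspect ratio L/t = 100
aspect = L / t

# Solid mesh: element size ~t keeps element aspect ratios acceptable,
# so the in-plane count scales as (L/t)^2 (one element through the thickness).
solid_elems = (L / t) ** 2

# Shell mesh of the mid-face: element size h chosen independently of t.
h = L / 10
shell_elems = (L / h) ** 2

reduction = solid_elems / shell_elems
print(reduction)  # 100.0 -- the reduction grows with the square of L/t
```

With h fixed as a fraction of L, the reduction in element (and hence degree-of-freedom) count is proportional to (L/t)^2, matching the paper's quantitative conclusion for thin sheet regions.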

Relevance: 30.00%

Abstract:

In this article, we extend the earlier work of Freeland and McCabe [Journal of Time Series Analysis (2004) Vol. 25, pp. 701–722] and develop a general framework for maximum likelihood (ML) analysis of higher-order integer-valued autoregressive processes. Our exposition includes the case where the innovation sequence has a Poisson distribution and the thinning is binomial. A recursive representation of the transition probability of the model is proposed. Based on this transition probability, we derive expressions for the score function and the Fisher information matrix, which form the basis for ML estimation and inference. Similar to the results in Freeland and McCabe (2004), we show that the score function and the Fisher information matrix can be neatly represented as conditional expectations. Using the INAR(2) specification with binomial thinning and Poisson innovations, we examine both the asymptotic efficiency and finite-sample properties of the ML estimator in relation to the widely used conditional least squares (CLS) and Yule–Walker (YW) estimators. We conclude that, if the Poisson assumption can be justified, there are substantial gains to be had from using ML, especially when the thinning parameters are large.

Relevance: 30.00%

Abstract:

This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. As far as we know, there is no study of the statistical properties of the test when the wrong model is used. We also consider the case of the simultaneous presence of the two types of model in a panel. We employ two asymptotics: joint asymptotics, with T, N -> infinity simultaneously, and T fixed with N allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for the sample sizes usually used in practice. The results indicate that the assumption that T is fixed, rather than asymptotic, leads to tests with smaller size distortions than the tests derived under the joint asymptotics, particularly for relatively small T with large N panels (micro-panels). We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test, but choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, it is suggested to use the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both cases of T asymptotic and T fixed; the statistic for T asymptotic is slightly undersized when T is very small.
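The level-versus-trend misspecification the paper studies can be sketched with a simplified KPSS-type statistic (partial sums of regression residuals; the long-run variance is replaced by the plain sample variance here for brevity, without the lag truncation a real implementation needs, and all series are simulated):

```python
import numpy as np

def kpss_stat(y, trend=False):
    """Simplified KPSS-type statistic: level model (intercept only) or
    deterministic-trend model (intercept + linear trend)."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t] if trend else [np.ones_like(t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    S = np.cumsum(e)          # partial sums of residuals
    T = len(y)
    return (S @ S) / (T**2 * np.mean(e**2))

rng = np.random.default_rng(2)
level_series = 5 + rng.normal(0, 1, 400)                       # level-stationary
trended = 5 + 0.05 * np.arange(400) + rng.normal(0, 1, 400)    # trend-stationary

# Misspecification in the dangerous direction: fitting only a level to
# trend-stationary data leaves the trend in the residuals and inflates
# the statistic, producing the over-rejections the paper reports.
print(kpss_stat(trended, trend=False) > kpss_stat(trended, trend=True))  # True
```

The converse direction (fitting a trend to level-stationary data) merely wastes one regressor, which is why the paper recommends the trend model when in doubt.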

Relevance: 30.00%

Abstract:

This paper is based on research into the transition of young people leaving public care in Romania. Using this specific country example, the paper aims to contribute to current understandings of the psycho-social transition of young people from care to independent living by using Bridges (2002) to build on existing theories and literature. The research discussed involved a mixed-methods design and was implemented in three phases: semi-structured interviews with 34 care leavers, focus groups with 32 professionals, and a professional-service user working group. The overall findings confirmed that young people experience two different, but interconnected, transitions - social and psychological - which take place at different paces. A number of theoretical perspectives are explored to make sense of this transition, including attachment theory, focal theory and identity. In addition, a new model for understanding the complex process of transition was adapted from Bridges (2002) to capture the complexity of the psycho-social transition that the findings demonstrated. The paper concludes with messages for leaving- and after-care services, with an emphasis on managing the psycho-social transition from care to independent living.

Relevance: 30.00%

Abstract:

Modifying the surfaces of metal nanoparticles with self-assembled monolayers of functionalized thiols provides a simple and direct method to alter their surface properties. Mixed self-assembled monolayers can extend this approach since, in principle, the surfaces can be tuned by altering the proportion of each modifier that is adsorbed. However, this works best if the composition and microstructure of the monolayers can be controlled. Here, we have modified preprepared silver colloids with binary mixtures of thiols at varying concentrations and modifier ratios. Surface-enhanced Raman spectroscopy was then used to determine the effect of altering these parameters on the composition of the resulting mixed monolayers. The data could be explained using a new model based on a modified competitive Langmuir approach. It was found that the composition of the mixed monolayer only reflected the ratio of modifiers in the feedstock when the total amount of modifier was sufficient for approximately one monolayer coverage. At higher modifier concentrations the thermodynamically favored modifier dominated, but working at near monolayer concentrations allowed the surface composition to be controlled by changing the ratios of modifiers. Finally, a positively charged porphyrin probe molecule was used to investigate the microstructure of the mixed monolayers, i.e., homogeneous versus domains. In this case the modifier domains were found to be <2 nm.
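The basic competitive Langmuir picture underlying the paper's modified model can be sketched as follows (the equilibrium constants and concentrations are invented for illustration; the paper's model adds further modifications to this form):

```python
# Competitive Langmuir surface fractions for two co-adsorbing thiols A and B.
def surface_fractions(cA, cB, KA, KB):
    """Fraction of surface sites occupied by A and by B at equilibrium."""
    denom = 1.0 + KA * cA + KB * cB
    return KA * cA / denom, KB * cB / denom

# Equal feed concentrations, but B binds five times more strongly: B dominates
# the monolayer, mirroring the finding that the thermodynamically favoured
# modifier wins when modifier is supplied in excess of a monolayer.
fA, fB = surface_fractions(cA=1.0, cB=1.0, KA=1.0, KB=5.0)
print(round(fA, 3), round(fB, 3))  # 0.143 0.714
```

Only near one-monolayer total coverage, where adsorption is closer to kinetically limited, does the feed ratio itself control the surface composition, which is the regime the paper exploits for tuning.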

Relevance: 30.00%

Abstract:

1. A more general contingency model of optimal diet choice is developed, allowing for simultaneous searching and handling, which extends the theory to include grazing and browsing by large herbivores.

2. Foraging resolves into three modes: purely encounter-limited, purely handling-limited and mixed-process, in which either a handling-limited prey type is added to an encounter-limited diet, or the diet becomes handling-limited as it expands.

3. The purely encounter-limited diet is, in general, broader than that predicted by the conventional contingency model.

4. As the degree of simultaneity of searching and handling increases, the optimal diet expands to the point where it is handling-limited, at which point all inferior prey types are rejected.

5. Inclusion of a less profitable prey species is not necessarily independent of its encounter rate, and the zero-one rule does not necessarily hold: some of the less profitable prey may be included in the optimal diet. This gives an optimal foraging explanation for herbivores' mixed diets.

6. Rules are given for calculating the boundary between encounter-limited and handling-limited diets and for predicting the proportion of inferior prey to be included in a two-species diet.

7. The digestive rate model is modified to include simultaneous searching and handling, showing that the more they overlap, the more the predicted diet breadth is likely to be reduced.
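For reference, the conventional contingency (prey-choice) model that points 3 and 5 generalise can be sketched in a few lines (toy encounter rates, energies and handling times; under sequential search and handling, the zero-one rule holds):

```python
# Conventional contingency model: prey types as (encounter rate, energy, handling time).
prey = [(0.5, 10.0, 2.0), (0.8, 4.0, 2.0), (0.2, 3.0, 3.0)]  # toy values

# Rank by profitability e/h; add a type only if its profitability exceeds the
# current long-term intake rate E/(1 + sum(lambda*h)) of the diet so far.
prey.sort(key=lambda p: p[1] / p[2], reverse=True)
diet, num, den = [], 0.0, 1.0
for lam, e, h in prey:
    if e / h > num / den:
        diet.append((lam, e, h))
        num += lam * e
        den += lam * h
print(len(diet), round(num / den, 3))  # 1 2.5 -- only the top type is taken
```

Under this conventional model inclusion is all-or-nothing and independent of the inferior type's own encounter rate; the paper's simultaneous-search-and-handling extension is precisely what breaks both properties.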

Relevance: 30.00%

Abstract:

A model based on post-receptor channels followed by a Minkowski norm (the Minkowski model) is widely used to fit experimental data on colour discrimination. This model predicts that contours of equal discrimination in colour space are convex and balanced (symmetrical). We have tested these predictions in an experiment. Two new statistical tests have been developed to analyse the convexity and balancedness of experimental curves. Using these tests, we have found that while our experimental contours are in line with the convexity prediction, they strongly testify against balancedness. It follows that the Minkowski model is, in general, inappropriate for modelling colour-discrimination data. © 2002 Elsevier Science (USA).
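The balancedness prediction follows directly from the model's form, as a small sketch shows (illustrative exponent and channel weights, not fitted values): a Minkowski norm of channel differences is unchanged when every difference reverses sign, so equal-discrimination contours must be symmetric about the reference colour.

```python
import numpy as np

# Minkowski-norm discrimination distance over post-receptor channel differences.
def minkowski_distance(delta, p=2.0, weights=(1.0, 1.0, 1.0)):
    d = np.abs(np.asarray(delta)) * np.asarray(weights)
    return float((d ** p).sum() ** (1.0 / p))

d1 = minkowski_distance([0.1, -0.2, 0.05])
d2 = minkowski_distance([-0.1, 0.2, -0.05])  # sign-reversed differences
print(d1 == d2)  # True -- the symmetry (balancedness) the experiment rejects
```

Since the measured contours are unbalanced, no choice of p or channel weights in this form can fit them, which is the paper's central conclusion.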

Relevance: 30.00%

Abstract:

2002cx-like supernovae are a sub-class of sub-luminous Type Ia supernovae (SNe). Their light curves and spectra are characterized by distinct features that indicate strong mixing of the explosion ejecta. Pure turbulent deflagrations have been shown to produce such mixed ejecta. Here, we present hydrodynamics, nucleosynthesis and radiative-transfer calculations for a 3D full-star deflagration of a Chandrasekhar-mass white dwarf. Our model is able to reproduce the characteristic observational features of SN 2005hk (a prototypical 2002cx-like supernova), not only in the optical, but also in the near-infrared. For that purpose we present, for the first time, five near-infrared spectra of SN 2005hk from -0.2 to 26.6 d with respect to B-band maximum. Since our model burns only small parts of the initial white dwarf, it fails to completely unbind the white dwarf and leaves behind a bound remnant of ~1.03 M⊙, consisting mainly of unburned carbon and oxygen, but also enriched by some amount of intermediate-mass and iron-group elements from the explosion products that fall back on the remnant. We discuss possibilities for detecting this bound remnant and how it might influence the late-time observables of 2002cx-like SNe. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Relevance: 30.00%

Abstract:

Emotion research has long been dominated by the "standard method" of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it is unable to investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, a consensus has not been reached on the statistical techniques that permit inferences to be made with such measures. We propose Generalized Additive Models (GAMs) and Generalized Additive Mixed Models (GAMMs) as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The mixed-model GAMM approach is preferred, as it can account for autocorrelation in time-series data and allows emotion-decoding participants to be modelled as random effects. To increase confidence in linear differences, we assess methods that address interactions between categorical variables and dynamic changes over time. In addition, we comment on the use of Generalized Additive Models to assess the effect size of shared perceived emotion and discuss sample sizes. Finally, we address additional uses: the inference of feature detection, continuous-variable interactions, and the measurement of ambiguity.
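The core decomposition — a smooth trend shared across raters plus rater-specific random effects — can be illustrated on simulated rating traces (all data invented; a crude moment-based estimate stands in here for a fitted GAMM, which in practice would come from a package such as mgcv):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated continuous emotion-rating traces: R raters, T time points.
T, R = 200, 12
time = np.linspace(0, 1, T)
truth = np.sin(2 * np.pi * time)            # shared perceived-emotion trend
offsets = rng.normal(0, 0.5, R)             # rater-specific random intercepts
traces = truth + offsets[:, None] + rng.normal(0, 0.2, (R, T))

# Remove each rater's own mean (the random-intercept component), then average
# across raters to recover the shared smooth component.
centred = traces - traces.mean(axis=1, keepdims=True)
smooth = centred.mean(axis=0)

print(round(float(np.corrcoef(smooth, truth)[0, 1]), 3))
```

A GAMM additionally penalises the smooth's wiggliness and models residual autocorrelation, but the separation of shared trend from rater effects shown here is the inference the paper builds on.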

Relevance: 30.00%

Abstract:

The development of accurate structural/thermal numerical models of complex systems, such as aircraft fuselage barrels, is often limited and determined by the smallest scales that need to be modelled. Developing reduced-order models of the smallest scales and integrating them with higher-level models can minimise this bottleneck while still yielding efficient, robust and accurate numerical models. In this paper a methodology for developing compact thermal fluid models (CTFMs) for compartments with mixed convection regimes is demonstrated. Detailed numerical simulations (CFD) were developed for an aircraft crown compartment and validated against experimental data obtained from a 1:1-scale compartment rig. The crown compartment is the confined area between the upper fuselage and the passenger cabin in a single-aisle commercial aircraft. The CFD results were used to extract average quantities (temperatures and heat fluxes) and characteristic parameters (heat transfer coefficients) to generate the CTFMs. The CTFMs were then compared with the results obtained from the detailed models, showing average errors for temperature predictions lower than 5%. This error can be deemed acceptable when compared to the nominal experimental error associated with the thermocouple measurements.
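The extraction step can be sketched with assumed numbers (not values from the crown-compartment rig): an area-averaged wall heat flux and wall/air temperature difference from the detailed CFD yield a characteristic heat-transfer coefficient, which then parameterises a lumped compact-model node:

```python
# Assumed CFD averages for one compartment surface (illustrative only).
q_avg = 85.0      # W/m^2, area-averaged wall heat flux
T_wall = 318.0    # K, area-averaged wall temperature
T_air = 300.0     # K, volume-averaged compartment air temperature

# Characteristic heat-transfer coefficient for the compact node.
h = q_avg / (T_wall - T_air)        # W/(m^2 K)
A = 2.5                             # m^2, wall area represented by the node

# The compact node then predicts heat flow for a new temperature difference
# within the range of boundary conditions the CFD covered.
Q = h * A * (320.0 - 300.0)
print(round(h, 3), round(Q, 1))  # 4.722 236.1
```

Because h is fitted to specific CFD boundary conditions, the compact node is only trustworthy within that range, which is exactly the limitation noted below.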

The CTFM methodology makes it possible to generate accurate reduced-order models, with the caveat that accuracy is restricted to the range of boundary conditions applied. This limitation arises from the sensitivity of the internal flow structures to the applied boundary-condition set. CTFMs thus generated can then be integrated into complex numerical models of whole fuselage sections.

A further step in the development of an exhaustive methodology would be the implementation of a rule-based approach to extract the number and positions of the CTFM nodes directly from the CFD simulations.