41 results for Maximum Ratio Combining


Abstract:

In this study, alkaline lignins glyoxalated with glyoxal, a non-volatile and non-toxic aldehyde that can be obtained from several natural resources, were prepared and characterized for their use in wood adhesives. The preparation method consisted of the reaction of lignin with glyoxal in an alkaline medium. The influence of reaction conditions, such as the sodium hydroxide-to-lignin molar ratio and the reaction time, on the properties of the prepared adducts was studied. The analytical techniques used were FTIR and 1H-NMR spectroscopies, gel permeation chromatography (GPC), differential scanning calorimetry (DSC), and thermogravimetric analysis (TGA). Results from both FTIR and 1H-NMR spectroscopies showed that the amount of aliphatic hydroxyl groups introduced onto the lignin molecule increased with increasing reaction time, reached a maximum at 10 h, and then began to decrease. The molecular weights remained unchanged up to 10 h of reaction time and then started to increase, possibly due to repolymerization reactions. DSC analysis showed that the glass transition temperature (Tg) decreased with the introduction of glyoxal onto the lignin molecule, due to the increase in free volume of the lignin molecules. TGA showed that the thermal stability of the glyoxalated lignin was not affected and remained suitable for wood adhesives. Compared to the original lignin, the modified lignin is reactive and a suitable raw material for adhesive formulation.

Abstract:

An overview is given of a study which showed that the generalized maximum hardness principle (GMHP) and the generalized minimum polarizability principle (GMPP) may not be obeyed, not only in chemical reactions but also in the favorable case of nontotally symmetric vibrations, where the chemical and external potentials remain approximately constant. A method that allows an accurate determination of the nontotally symmetric molecular distortions with the most marked GMPP or anti-GMPP character, through diagonalization of the polarizability Hessian matrix, is introduced.
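The diagonalization step lends itself to a short numerical sketch. Below is a minimal finite-difference version; the callable `alpha(q)`, returning the mean polarizability at displaced normal coordinates `q`, is a hypothetical interface to any electronic-structure code, and the toy surface at the end is illustrative only — the abstract does not specify an implementation.

```python
import numpy as np

def polarizability_hessian(alpha, q0, h=1e-3):
    """Central-difference Hessian of the mean polarizability alpha(q)
    with respect to nontotally symmetric normal coordinates q."""
    n = len(q0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d = np.zeros(n); d[i] = h
            e = np.zeros(n); e[j] = h
            H[i, j] = H[j, i] = (
                alpha(q0 + d + e) - alpha(q0 + d - e)
                - alpha(q0 - d + e) + alpha(q0 - d - e)
            ) / (4 * h * h)
    return H

# Toy quadratic polarizability surface (illustrative only).
alpha = lambda q: 10.0 - 0.5 * q[0] ** 2 + 0.3 * q[1] ** 2
eigvals, eigvecs = np.linalg.eigh(polarizability_hessian(alpha, np.zeros(2)))
# Eigenvectors with negative eigenvalues are distortions along which the
# polarizability decreases fastest (marked GMPP character); positive
# eigenvalues flag anti-GMPP directions.
```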

Abstract:

Background: A number of studies have used protein interaction data alone for protein function prediction. Here, we introduce a computational approach for the annotation of enzymes, based on the observation that similar protein sequences are more likely to perform the same function if they share similar interacting partners. Results: The method was tested against the PSI-BLAST program using a set of 3,890 protein sequences for which interaction data were available. For protein sequences that align with at least 40% sequence identity to a known enzyme, the specificity of our method in predicting the first three EC digits increased from 80% to 90% at 80% coverage when compared to PSI-BLAST. Conclusion: Our method can also be applied to proteins for which homologous sequences with known interacting partners can be detected. Thus, our method could increase by 10% the specificity of genome-wide enzyme predictions based on sequence matching by PSI-BLAST alone.
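As a rough illustration of the idea — not the authors' actual scoring scheme — the sketch below keeps a sequence hit only if the query and the hit also share interacting partners. The 40% identity threshold comes from the abstract; the Jaccard overlap measure and its cutoff are assumptions.

```python
def jaccard(a, b):
    """Overlap between two sets of interacting partners."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def predict_ec(query_partners, hits, partners,
               min_identity=0.40, min_overlap=0.2):
    """hits: iterable of (protein_id, ec_number, seq_identity) from a
    sequence search such as PSI-BLAST; partners: dict protein_id -> set
    of interactors. Keep hits that are both sequence-similar and share
    interacting partners, then return the best-supported EC number."""
    scored = [
        (ec, ident, jaccard(query_partners, partners.get(hit, set())))
        for hit, ec, ident in hits
        if ident >= min_identity
    ]
    scored = [s for s in scored if s[2] >= min_overlap]
    return max(scored, key=lambda s: (s[2], s[1]))[0] if scored else None
```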

Abstract:

This paper presents an Optimised Search Heuristic that combines a tabu search method with the verification of violated valid inequalities. The solution delivered by the tabu search is partially destroyed by a randomised greedy procedure, and then the valid inequalities are used to guide the reconstruction of a complete solution. An application of the new method to the Job-Shop Scheduling problem is presented.
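Read as an algorithm, the loop can be sketched as below. All problem-specific pieces (the tabu search, the randomised destruction, the cut-guided repair, the cost function) are injected as callables, since the abstract does not define them; the skeleton only fixes the order of the phases.

```python
from typing import Any, Callable

def osh(initial: Any,
        tabu_search: Callable[[Any], Any],
        destroy: Callable[[Any], Any],
        repair: Callable[[Any], Any],
        cost: Callable[[Any], float],
        is_complete: Callable[[Any], bool],
        iters: int = 100) -> Any:
    """Skeleton of the Optimised Search Heuristic: improve with tabu
    search, partially destroy the incumbent with a randomised greedy
    step, then rebuild it, letting the violated valid inequalities
    guide each repair move."""
    best = incumbent = tabu_search(initial)
    for _ in range(iters):
        partial = destroy(incumbent)      # randomised greedy destruction
        while not is_complete(partial):
            partial = repair(partial)     # completion move that least
                                          # violates the known cuts
        incumbent = tabu_search(partial)
        if cost(incumbent) < cost(best):
            best = incumbent
    return best
```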

Abstract:

We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
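In the notation usual for this family of methods (an assumption — the paper's own symbols may differ), with correspondence matrix P = N/n, row masses r, column masses c, and D_r, D_c the diagonal matrices of those masses, the weighted logratio biplot is the weighted SVD of the double-centred log matrix:

```latex
z_{ij} = \ell_{ij} - \sum_k r_k \ell_{kj} - \sum_l c_l \ell_{il}
         + \sum_{k,l} r_k c_l \ell_{kl},
\qquad \ell_{ij} = \log p_{ij},
\qquad
S = D_r^{1/2}\, Z\, D_c^{1/2} = U \Sigma V^{\top}.
```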

Abstract:

This paper combines multivariate density forecasts of output growth, inflation and interest rates from a suite of models. An out-of-sample weighting scheme based on the predictive likelihood, as proposed by Eklund and Karlsson (2005) and Andersson and Karlsson (2007), is used to combine the models. Three classes of models are considered: a Bayesian vector autoregression (BVAR), a factor-augmented vector autoregression (FAVAR) and a medium-scale dynamic stochastic general equilibrium (DSGE) model. Using Australian data, we find that, at short forecast horizons, the Bayesian VAR model is assigned the most weight, while at intermediate and longer horizons the factor model is preferred. The DSGE model is assigned little weight at all horizons, a result that can be attributed to the DSGE model producing density forecasts that are very wide when compared with the actual distribution of observations. While a density forecast evaluation exercise reveals little formal evidence that the optimally combined densities are superior to those from the best-performing individual model, or to a simple equal-weighting scheme, this may be a result of the short sample available.
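A stylized version of such a weighting scheme — a sketch, not the Eklund–Karlsson implementation — scores each model by its out-of-sample log predictive density over a hold-out window and forms mixture weights from the exponentiated scores:

```python
import numpy as np

def predictive_likelihood_weights(log_scores):
    """log_scores: array (n_models, n_holdout) of out-of-sample log
    predictive densities. Weights are proportional to the exponentiated
    sum of log scores (the predictive likelihood) of each model."""
    total = log_scores.sum(axis=1)
    w = np.exp(total - total.max())   # subtract max for numerical stability
    return w / w.sum()

def combined_density(weights, model_pdfs, y):
    """Linear opinion pool: weighted mixture of model densities at y."""
    return sum(w * pdf(y) for w, pdf in zip(weights, model_pdfs))
```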

Abstract:

We show that unconditionally efficient returns neither achieve the maximum unconditional Sharpe ratio nor display zero unconditional Jensen's alphas when returns are predictable. Next, we define a new type of efficient returns that is characterized by those unconditional properties. We also study a different type of efficient returns that is rationalized by standard mean-variance preferences and motivates new Sharpe ratios and Jensen's alphas. We revisit the testable implications of asset pricing models from the perspective of the three sets of efficient returns. We also revisit the empirical evidence on the conditional variants of the CAPM and the Fama-French model from a portfolio perspective.
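For reference, the unconditional quantities the abstract refers to are the standard textbook ones (not the paper's new variants):

```latex
\mathrm{SR}(R) = \frac{\mathbb{E}[R - R_f]}{\sigma(R - R_f)},
\qquad
\alpha = \mathbb{E}[R - R_f] - \beta\,\mathbb{E}[R_m - R_f],
\qquad
\beta = \frac{\operatorname{Cov}(R, R_m)}{\operatorname{Var}(R_m)},
```

where R_m is the benchmark return and R_f the risk-free rate; the paper's point is that return predictability drives a wedge between the conditional and unconditional versions of these objects.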

Abstract:

We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to "spectral mapping", a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
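A compact NumPy implementation of the weighted log-ratio analysis described here (assuming a strictly positive table; the function and variable names are mine, not the paper's) puts the double-centring formula from the earlier abstract into code:

```python
import numpy as np

def weighted_logratio(N):
    """Weighted log-ratio analysis (spectral mapping) of a strictly
    positive two-way table N: log-transform, double-centre with margin
    weights, then take a weighted SVD."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
    L = np.log(P)
    # Double-centring with weighted row and column means.
    Z = L - (r @ L)[None, :] - (L @ c)[:, None] + r @ L @ c
    S = np.sqrt(r)[:, None] * Z * np.sqrt(c)[None, :]
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * sv / np.sqrt(r)[:, None]   # principal row coordinates
    cols = Vt.T / np.sqrt(c)[:, None]     # standard column coordinates
    return rows, cols, sv
```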

Abstract:

The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The new Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX-MIN Ant System and a tabu search procedure is presented to solve the model. This is the first time that the MAX-MIN Ant System has been adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.
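The stochastic threshold can be sketched as a chance constraint. Assuming, purely for illustration, independent normally distributed demands D_i with means μ_i and variances σ_i² at the nodes I_j captured by an open facility j, a confidence level α gives the deterministic equivalent

```latex
\Pr\Big(\sum_{i \in I_j} D_i \ge d_{\min}\Big) \ge \alpha
\quad\Longleftrightarrow\quad
\sum_{i \in I_j} \mu_i \;\ge\; d_{\min}
  + z_{\alpha}\sqrt{\sum_{i \in I_j} \sigma_i^{2}},
\qquad z_{\alpha} = \Phi^{-1}(\alpha),
```

though the paper's exact formulation and linearization may differ.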

Abstract:

A national survey designed to estimate a specific population quantity is sometimes also used to estimate this quantity for a small area, such as a province. Budget constraints do not allow a greater sample size for the small area, so other means of improving the estimation have to be devised. We investigate such methods and assess them by a Monte Carlo study. We explore how a complementary survey can be exploited in small area estimation, using the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.
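A common way to exploit an auxiliary source in small area estimation — given here only as a generic illustration, since the abstract does not state which estimators are compared — is a composite estimator for area a:

```latex
\hat{\theta}_a = \phi_a\,\hat{\theta}_a^{\text{direct}}
  + (1 - \phi_a)\,\hat{\theta}_a^{\text{aux}},
\qquad 0 \le \phi_a \le 1,
```

where the direct estimate comes from the main survey's sample in the area, the second component borrows strength from the complementary survey, and φ_a trades off the variance of the former against the bias of the latter.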

Abstract:

Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
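The classical problem the abstract invokes has a clean statement: with k observed purchases out of an unknown population of N arrivals, each purchasing with probability p, the likelihood is

```latex
L(N, p \mid k) = \binom{N}{k}\, p^{k}\, (1-p)^{\,N-k},
```

which is nearly flat along ridges where Np is constant, so N and p are hard to separate without extra structure; the functional form of the purchase probabilities and the variety of offer sets are what supply that structure here.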

Abstract:

Precise estimation of propagation parameters in precipitation media is of interest to improve the performance of communications systems and in remote sensing applications. In this paper, we present maximum-likelihood estimators of specific attenuation and specific differential phase in rain. The model used for obtaining the cited estimators assumes coherent propagation, reflection symmetry of the medium, and Gaussian statistics of the scattering matrix measurements. No assumptions about the microphysical properties of the medium are needed. The performance of the estimators is evaluated through simulated data. Results show negligible estimator bias and variances close to the Cramér–Rao bounds.
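The benchmark invoked at the end is the classical Cramér–Rao bound: for any unbiased estimator of a scalar parameter θ,

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \mathcal{I}(\theta)^{-1},
\qquad
\mathcal{I}(\theta) = -\,\mathbb{E}\!\left[
  \frac{\partial^{2} \log L(\theta)}{\partial \theta^{2}}\right],
```

so "variances close to Cramér–Rao bounds" means the maximum-likelihood estimators are nearly efficient under the stated model.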

Abstract:

The long-term mean properties of the global climate system and those of turbulent fluid systems are reviewed from a thermodynamic viewpoint. Two general expressions are derived for the rate of entropy production due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is shown with these expressions that the maximum entropy production in the Earth's climate system suggested by Paltridge, as well as the maximum transport properties of heat or momentum in a turbulent system suggested by Malkus and Busse, correspond to a state in which the rate of entropy production due to turbulent dissipation is at a maximum. Entropy production due to the absorption of solar radiation in the climate system is found to be irrelevant to the maximized properties associated with turbulence. The hypothesis of maximum entropy production also seems to be applicable to the planetary atmospheres of Mars and Titan and perhaps to mantle convection. Lorenz's conjecture on the maximum generation of available potential energy is shown to be akin to this hypothesis with a few minor approximations. A possible mechanism by which turbulent fluid systems adjust themselves to states of maximum entropy production is presented as a self-feedback mechanism for the generation of available potential energy. These results tend to support the hypothesis of maximum entropy production that underlies a wide variety of nonlinear fluid systems, including our planet as well as other planets and stars.
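The standard local form of such an entropy budget — given here as an illustration consistent with, but not copied from, the paper's derivation — splits the production rate into a thermal and a viscous part:

```latex
\dot{S}_{\text{prod}} = \int_V \left(
  \frac{\kappa\,\lvert\nabla T\rvert^{2}}{T^{2}}
  + \frac{\Phi}{T} \right) dV \;\ge\; 0,
```

with κ the thermal conductivity and Φ the positive-definite viscous dissipation function; the maximum entropy production hypothesis selects, among the steady states compatible with the boundary conditions, the one that maximizes this integral.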

Abstract:

We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production by using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative–convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the surface air temperature of ≈10 K. In comparison, conventional RCMs obtain a discontinuity of only ≈2 K. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).
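In a layered 1-D model of this kind, the maximized quantity can be written — as an assumed, standard discretization, not necessarily the paper's exact functional — as the entropy produced by the convective fluxes F_i carrying heat between adjacent layers:

```latex
\dot{S}_{\text{turb}} = \sum_i F_i
  \left( \frac{1}{T_{i+1}} - \frac{1}{T_i} \right),
```

maximized over the fluxes subject to the steady-state energy balance of each layer; with temperature decreasing upward, heat flows from warm to cold and every term is non-negative.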