986 results for "generalized assignment problem"
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well-known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed-loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequality (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and allocational parts of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australia Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
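The multi-start idea described above can be sketched in a few lines. This is a minimal illustration only, not the paper's tabu search: it fixes the number of hubs `p` (whereas the USAHLP also decides that number via fixed costs), uses nearest-hub single allocation, a standard discounted hub-to-hub cost with a hypothetical discount factor `alpha`, and omits the tabu improvement phase. All function names are ours.

```python
import random

def allocation_cost(dist, hubs, alpha=0.75):
    """Assign each node to its nearest hub (single allocation) and sum a
    simple flow cost per origin-destination pair: collection leg, discounted
    inter-hub leg, and distribution leg."""
    n = len(dist)
    assign = {i: min(hubs, key=lambda h: dist[i][h]) for i in range(n)}
    cost = 0.0
    for i in range(n):
        for j in range(n):
            hi, hj = assign[i], assign[j]
            cost += dist[i][hi] + alpha * dist[hi][hj] + dist[hj][j]
    return cost

def multi_start(dist, p, starts=20, seed=0):
    """Multi-start skeleton: draw random hub sets and keep the cheapest
    (the tabu-search improvement step of the paper is omitted here)."""
    rng = random.Random(seed)
    n = len(dist)
    best_hubs, best_cost = None, float("inf")
    for _ in range(starts):
        hubs = rng.sample(range(n), p)
        c = allocation_cost(dist, hubs)
        if c < best_cost:
            best_hubs, best_cost = hubs, c
    return best_hubs, best_cost
```

In the full heuristic, each random start would be improved by tabu moves (swapping hubs, reallocating spokes) before comparison.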
Abstract:
The inverse Weibull distribution has the ability to model failure rates that are quite common in reliability and biological studies. A three-parameter generalized inverse Weibull distribution with decreasing and unimodal failure rate is introduced and studied. We provide a comprehensive treatment of the mathematical properties of the new distribution, including expressions for the moment generating function and the rth generalized moment. The mixture model of two generalized inverse Weibull distributions is investigated. The identifiability property of the mixture model is demonstrated. For the first time, we propose a location-scale regression model based on the log-generalized inverse Weibull distribution for modeling lifetime data. In addition, we develop some diagnostic tools for sensitivity analysis. Two applications to real data are given to illustrate the potentiality of the proposed regression model.
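A common three-parameter form of the generalized inverse Weibull distribution takes F(x) = exp(-gamma * (alpha/x)^beta), recovering the standard inverse Weibull when gamma = 1. The sketch below, under that assumed parameterization (the paper's exact form may differ), gives the CDF and the density obtained by differentiation:

```python
import math

def giw_cdf(x, alpha, beta, gamma):
    # Assumed CDF: F(x) = exp(-gamma * (alpha/x)**beta), x > 0
    return math.exp(-gamma * (alpha / x) ** beta)

def giw_pdf(x, alpha, beta, gamma):
    # Density f(x) = gamma*beta*alpha**beta * x**-(beta+1) * F(x),
    # obtained by differentiating the CDF above
    return (gamma * beta * alpha**beta * x ** (-(beta + 1))
            * math.exp(-gamma * (alpha / x) ** beta))
```

Setting gamma = 1 reduces `giw_cdf` to the ordinary inverse Weibull CDF exp(-(alpha/x)^beta).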
Abstract:
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified for the possibility that long-term survivors may be present in the data. The model attempts to separately estimate the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model.
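The mixture cure structure described above writes the population survival as S_pop(t) = pi + (1 - pi) S(t), with the surviving (cured) fraction pi modeled by a logistic function of covariates. A minimal log-likelihood sketch, substituting a Weibull latency distribution for the paper's generalized log-gamma purely for brevity (all names and the single-covariate setup are ours):

```python
import math

def cure_loglik(times, events, x, beta0, beta1, shape, scale):
    """Log-likelihood of a mixture cure model:
    S_pop(t) = pi + (1 - pi) * S(t), logistic model for the cured fraction pi,
    Weibull latency (a stand-in for the generalized log-gamma)."""
    ll = 0.0
    for t, d, xi in zip(times, events, x):
        pi = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * xi)))   # surviving fraction
        s = math.exp(-((t / scale) ** shape))                 # Weibull survival
        f = (shape / scale) * (t / scale) ** (shape - 1) * s  # Weibull density
        if d:  # observed event: only susceptible individuals can fail
            ll += math.log((1.0 - pi) * f)
        else:  # censored: either cured, or susceptible but still surviving
            ll += math.log(pi + (1.0 - pi) * s)
    return ll
```

Maximizing this over (beta0, beta1, shape, scale) gives the maximum likelihood inference the abstract refers to.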
Abstract:
Estimation of Taylor`s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass data for benthic macrofauna from two Danish estuaries. (C) 2011 Elsevier B.V. All rights reserved.
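Taylor's power law states that variance = a * mean^b across sites or species, so the naive approach fits log(variance) = log(a) + b * log(mean) by ordinary least squares. The sketch below implements exactly that naive regression, the starting point the abstract identifies as biased for sparse data; the bias-corrected Pearson estimating function and Newton scoring of the paper are not reproduced here:

```python
import math

def taylor_ols(means, variances):
    """Naive OLS fit of log(variance) on log(mean); returns (a, b) for
    variance = a * mean**b. This is the simple estimator, not the
    bias-corrected one proposed in the paper."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
    return math.exp(my - b * mx), b
```

On exact power-law data the fit recovers (a, b); on sparse count data the log of small empirical variances drags the slope, which motivates the bias correction.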
Abstract:
A four-parameter generalization of the Weibull distribution capable of modeling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lies in its ability to model monotone as well as non-monotone failure rates, which are quite common in lifetime problems and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the Weibull, extreme value, exponentiated Weibull, generalized Rayleigh and modified Weibull distributions, among others. We derive two infinite sum representations for its moments. The density of the order statistics is obtained. The method of maximum likelihood is used for estimating the model parameters. Also, the observed information matrix is obtained. Two applications are presented to illustrate the proposed distribution. (C) 2008 Elsevier B.V. All rights reserved.
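One common four-parameter extension with those sub-models takes F(t) = (1 - exp(-a * t^g * exp(lam*t)))^b; we assume that form here (the paper's exact parameterization may differ). Setting b = 1 gives a modified Weibull, lam = 0 an exponentiated Weibull, and b = 1, lam = 0 the ordinary Weibull. The hazard h(t) = f(t) / (1 - F(t)) can then be evaluated directly:

```python
import math

def gmw_cdf(t, a, g, lam, b):
    # Assumed CDF: F(t) = (1 - exp(-a * t**g * exp(lam*t)))**b, t > 0
    return (1.0 - math.exp(-a * t**g * math.exp(lam * t))) ** b

def gmw_hazard(t, a, g, lam, b):
    # h(t) = f(t) / (1 - F(t)), with f from differentiating F:
    # u(t) = a*t**g*exp(lam*t),  f = b*(1-e**-u)**(b-1) * e**-u * u'(t)
    u = a * t**g * math.exp(lam * t)
    du = a * math.exp(lam * t) * (g * t ** (g - 1) + lam * t**g)
    f = b * (1.0 - math.exp(-u)) ** (b - 1) * math.exp(-u) * du
    return f / (1.0 - gmw_cdf(t, a, g, lam, b))
```

Scanning `gmw_hazard` over t for suitable parameter values exhibits the decreasing-then-increasing (bathtub) shape the abstract refers to.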
Abstract:
A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lies in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
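As a minimal illustration of MCMC for overdispersed proportions, the sketch below fits a beta-binomial model (a common stand-in for extra-binomial variation; the paper's DGLM with random effects is richer) by random-walk Metropolis on (log a, log b), with flat priors on the log scale. All names and settings are ours:

```python
import math, random

def betabinom_logpmf(y, n, a, b):
    # log P(y | n, a, b) = log[ C(n,y) * B(y+a, n-y+b) / B(a,b) ]
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + math.lgamma(y + a) + math.lgamma(n - y + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def metropolis(ys, ns, iters=2000, step=0.2, seed=1):
    """Random-walk Metropolis on (log a, log b); returns the chain of (a, b)."""
    rng = random.Random(seed)
    la, lb = 0.0, 0.0

    def loglik(la_, lb_):
        a, b = math.exp(la_), math.exp(lb_)
        return sum(betabinom_logpmf(y, n, a, b) for y, n in zip(ys, ns))

    cur, chain = loglik(la, lb), []
    for _ in range(iters):
        pa, pb = la + rng.gauss(0, step), lb + rng.gauss(0, step)
        prop = loglik(pa, pb)
        if math.log(rng.random()) < prop - cur:  # accept/reject step
            la, lb, cur = pa, pb, prop
        chain.append((math.exp(la), math.exp(lb)))
    return chain
```

Posterior summaries (e.g. the mean proportion a / (a + b)) are then computed from the chain after discarding a burn-in.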
Abstract:
A complete analysis of the H-1 and C-13 NMR spectra of the trypanocidal sesquiterpene lactone eremantholide C and two of its analogues is described. These structurally similar sesquiterpene lactones were submitted to H-1 NMR, C-13{H-1} NMR, gCOSY, gHSQC, gHMBC, J-resolved and DPFGSE-NOE NMR techniques. The detailed analysis of those results, correlated with some computational calculations (molecular mechanics), led to the total and unequivocal assignment of all H-1 and C-13 NMR data. The determination of all H-1/H-1 coupling constants and all signal multiplicities, together with the elimination of previous ambiguities, was also achieved. Copyright (C) 2008 John Wiley & Sons, Ltd.
Abstract:
Thioridazine (THD) is a commonly prescribed phenothiazine neuroleptic drug, which is extensively biotransformed in the organism, producing as main metabolites sulfoxides and a sulfone by sulfur oxidation. Significant differences have been observed in the activity of the THD enantiomers as well as of its main metabolites, and enantioselectivity phenomena have been proved in the metabolic pathway. Here the assignment of the absolute configuration at the sulfur atom of enantiomeric THD-2-sulfoxide (THD-2-SO) has been carried out by circular dichroism (CD) spectroscopy. The stereoisomers were separated by HPLC on a Chiralpak AS column, recording the CD spectra for the two collected enantiomeric fractions. The theoretical electronic CD spectrum has been obtained by TDDFT/B3LYP/6-31G*, as a Boltzmann averaging of the contributions calculated for the most stable conformations of the drug. The comparison of the simulated and experimental spectra allowed the absolute configuration at the sulfur atom of the four THD-2-SO stereoisomers to be assigned. The developed method should be useful for a reliable correlation between stereochemistry and activity and/or toxicity.
Abstract:
This paper is concerned with the problem of argument-function mismatch observed in the apparent subject-object inversion in Chinese consumption verbs, e.g., chi 'eat' and he 'drink', and accommodation verbs, e.g., zhu 'live' and shui 'sleep'. These verbs seem to allow the linking of [agent-SUBJ theme-OBJ] as well as [agent-OBJ theme-SUBJ], but only when the agent is also the semantic role denoting the measure or extent of the action. The account offered is formulated within LFG's lexical mapping theory. Under the simplest and also the strictest interpretation of the one-to-one argument-function mapping principle (or the theta-criterion), a composite role such as ag-ext receives syntactic assignment via one composing role only. One-to-one linking thus entails the suppression of the other composing role. Apparent subject-object inversion occurs when the more prominent agent role is suppressed and thus allows the less prominent extent role to dictate the linking of the entire ag-ext composite role. This LMT account also potentially facilitates a natural explanation of markedness among the competing syntactic structures.
Abstract:
We investigate the effect of the coefficient of the critical nonlinearity for the Neumann problem on the existence of least energy solutions. As a by-product we establish a Sobolev inequality with interior norm.
Abstract:
The received view of an ad hoc hypothesis is that it accounts for only the observation(s) it was designed to account for, and so non-adhocness is generally held to be necessary or important for an introduced hypothesis or modification to a theory. Attempts by Popper and several others to convincingly explicate this view, however, prove to be unsuccessful or of doubtful value, and familiar and firmer criteria for evaluating the hypotheses or modified theories so classified are characteristically available. These points are obscured largely because the received view fails to adequately separate psychology from methodology or to recognise ambiguities in the use of 'ad hoc'.
Abstract:
Watkins proposes a neo-Popperian solution to the pragmatic problem of induction. He asserts that evidence can be used non-inductively to prefer the principle that corroboration is more successful over all human history to the principle that, say, counter-corroboration is more successful either over this same period or in the future. Watkins's argument for rejecting the first counter-corroborationist alternative is beside the point, however, since whatever is the best strategy over all human history is irrelevant to the pragmatic problem of induction (we are not required to act in the past); and his argument for rejecting the second presupposes induction.