986 results for Multiplicative linear secret sharing schemes
Abstract:
Operational research has proven to be a valuable management tool in today's increasingly competitive market. Through Linear Programming, a problem of maximizing results or minimizing production costs can be represented mathematically in order to assist managers in decision making. Linear Programming is a mathematical method in which the objective function and the constraints are linear, with several applications in management control, usually involving problems of allocating the available resources subject to limitations imposed by the production process or by the market. The overall objective of this work is to propose a Linear Programming model for production scheduling and the allocation of the necessary resources: optimizing a physical quantity, called the objective function, subject to a set of constraints endogenous to the activities being managed. The central aim is to provide a model to support management, thereby contributing to the efficient allocation of the scarce resources available to the economic unit. The work carried out demonstrates the importance of the quantitative approach as an essential resource for supporting the decision process.
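As a concrete illustration of the kind of linear program described above, here is a minimal product-mix sketch using SciPy's linprog: maximize profit from two products subject to machine and labour capacity. The products, profit coefficients, and capacity limits are hypothetical and are not taken from the study.

```python
# Minimal product-mix LP sketch (hypothetical data, for illustration only).
from scipy.optimize import linprog

# Profit per unit of two hypothetical products P1 and P2.
profit = [40.0, 30.0]

# Resource usage per unit (rows: machine hours, labour hours)
# and the available capacity of each resource.
usage = [[2.0, 1.0],   # machine hours used by one unit of P1, P2
         [1.0, 3.0]]   # labour hours used by one unit of P1, P2
capacity = [100.0, 90.0]

# linprog minimizes, so negate the profits to maximize total profit.
result = linprog(c=[-p for p in profit],
                 A_ub=usage, b_ub=capacity,
                 bounds=[(0, None), (0, None)],
                 method="highs")

print("optimal production plan:", result.x)
print("maximum profit:", -result.fun)
```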
Abstract:
The huge conservation interest that mammals attract and the large datasets that have been collected on them have propelled a diversity of global mammal prioritization schemes, but no comprehensive global mammal conservation strategy. We highlight some of the potential discrepancies between the schemes presented in this theme issue, including: conservation of species or areas, reactive and proactive conservation approaches, conservation knowledge and action, levels of aggregation of indicators of trend, and scale issues. We propose that recently collected global mammal data and many of the mammal prioritization schemes now available could be incorporated into a comprehensive global strategy for the conservation of mammals. The task of developing such a strategy should be coordinated by a super-partes, authoritative institution (e.g. the International Union for Conservation of Nature, IUCN). The strategy would enable funding agencies, conservation organizations and national institutions to rapidly identify a number of short-term and long-term global conservation priorities, and to act complementarily to achieve them.
Monseigneur, par P. Saunière, et le Secret d'or. Le Secret d'or. Cent ans après, ou le Legs du pendu
Abstract:
The mathematical representation of Brunswik's lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly five decades. Specifically, we analyze statistics of the lens model equation (Tucker, 1964) associated with 259 different task environments obtained from 78 papers. In short, we find on average fairly high levels of judgmental achievement and note that people can achieve similar levels of cognitive performance in both noisy and predictable environments. Although overall performance varies little between laboratory and field studies, both differ in terms of components of performance and types of environments (numbers of cues and redundancy). An analysis of learning studies reveals that the most effective form of feedback is information about the task. We also analyze empirically when bootstrapping is more likely to occur. We conclude by indicating shortcomings of the kinds of studies conducted to date, limitations in the lens model methodology, and possibilities for future research.
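For reference, the lens model equation cited above (Tucker, 1964) decomposes judgmental achievement as follows; the symbols shown are the conventional lens-model notation, not necessarily the exact notation used in the paper:

$$ r_a = G\,R_e\,R_s + C\,\sqrt{1 - R_e^{2}}\,\sqrt{1 - R_s^{2}} $$

where $r_a$ is achievement (the correlation between judgments and the criterion), $R_e$ is environmental predictability, $R_s$ is the consistency of the judge (cognitive control), $G$ is the correlation between the predictions of the two linear models (knowledge), and $C$ is the correlation between their residuals.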
Abstract:
Manipulation of government finances for the benefit of narrowly defined groups is usually thought to be limited to the part of the budget over which politicians exercise discretion in the short run, such as earmarks. Analyzing a revenue-sharing program between the central and local governments in Brazil that uses an allocation formula based on local population estimates, I document two main results: first, that the population estimates entering the formula were manipulated and, second, that this manipulation was political in nature. Consistent with swing-voter targeting by the right-wing central government, I find that municipalities with roughly equal right-wing and non-right-wing vote shares benefited relative to opposition or conservative core support municipalities. These findings suggest that the exclusive focus on discretionary transfers in the extant empirical literature on special-interest politics may understate the true scope of tactical redistribution that is going on under programmatic disguise.
Abstract:
We study the effects of globalization on risk sharing and welfare. Like previous literature, we assume that countries cannot commit to repay their debts. Unlike previous literature, we assume that countries cannot discriminate between domestic and foreign creditors when repaying their debts. This creates novel interactions between domestic and international trade in assets. (i) Increases in domestic trade raise the benefits of enforcement and facilitate international trade. In fact, in our setup countries can obtain international risk sharing even in the absence of default penalties. (ii) Increases in foreign trade, i.e. globalization, raise the costs of enforcement and hamper domestic trade. As a result, globalization may worsen domestic risk sharing and lower welfare. We show how these effects depend on various characteristics of tradable goods and explore the roles of borrowing limits, debt renegotiations, and trade policy.
Abstract:
We consider the application of normal-theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equations models, such as LISREL, EQS, LISCOMP, among others. An illustration with Monte Carlo data is presented.
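The estimator described above would be fit with SEM software such as LISREL or EQS, as the abstract notes. The short simulation below is not that estimator; it only illustrates, on Monte Carlo data, the errors-in-variables problem being addressed: naive regression on an error-contaminated regressor is attenuated, while a simple moment correction (assuming the measurement-error variance is known) recovers the true slope. All parameter values are hypothetical.

```python
# Minimal Monte Carlo illustration of errors-in-variables (NOT the paper's
# normal-theory SEM estimator): naive regression on a noisily measured
# regressor is attenuated; a moment correction with a known error variance
# recovers the true slope. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, beta, err_var = 50_000, 2.0, 0.5

xi = rng.normal(0.0, 1.0, n)                    # latent (true) regressor
x = xi + rng.normal(0.0, np.sqrt(err_var), n)   # observed regressor with error
y = beta * xi + rng.normal(0.0, 1.0, n)         # outcome depends on the latent value

C = np.cov(x, y)                                # sample covariance matrix of (x, y)
naive = C[0, 1] / C[0, 0]                       # attenuated OLS slope (about 1.33)
corrected = C[0, 1] / (C[0, 0] - err_var)       # moment-corrected slope (about 2.0)

print(f"naive slope:     {naive:.3f}")
print(f"corrected slope: {corrected:.3f}")
```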
Abstract:
Contingent sovereign debt can create important welfare gains. Nonetheless, there is almost no issuance today. Using hand-collected archival data, we examine the first known case of large-scale use of state-contingent sovereign debt in history. Philip II of Spain entered into hundreds of contracts whose value and due date depended on verifiable, exogenous events such as the arrival of silver fleets. We show that this allowed for effective risk-sharing between the king and his bankers. The data also strongly suggest that the defaults that occurred were excusable: they were simply contingencies over which Crown and bankers had not contracted previously.
Abstract:
The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the products sold use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and for their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
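For concreteness, in generic network RM notation (remaining capacity vector $x$, resource-consumption vector $a_j$ and fare $f_j$ for product $j$, arrival probability $\lambda_{jt}$ in period $t$), the dynamic program and the two value-function approximations contrasted above can be written roughly as follows; this is the standard textbook form, not necessarily the paper's exact notation:

$$ V_t(x) = V_{t+1}(x) + \sum_{j:\, a_j \le x} \lambda_{jt}\,\max\bigl\{\, f_j - \bigl(V_{t+1}(x) - V_{t+1}(x - a_j)\bigr),\; 0 \,\bigr\}, $$

with the affine approximation $V_t(x) \approx \theta_t + \sum_i v_{t,i}\, x_i$ and the separable piecewise-linear approximation $V_t(x) \approx \sum_i v_{t,i}(x_i)$, where each $v_{t,i}(\cdot)$ is a piecewise-linear function of the remaining capacity of resource $i$.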
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally we perform extensive numerical comparisons on the various bounds to evaluate their performance.
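To make the choice-model ingredient concrete, the sketch below computes the MNL purchase probabilities that such a model assigns to an offer set for a single customer segment with a given consideration set; the product names and preference weights are hypothetical, not taken from the paper.

```python
# Minimal sketch of multinomial-logit (MNL) purchase probabilities for one
# customer segment. Products outside the segment's consideration set (i.e.
# without a preference weight) are never purchased. Hypothetical data only.
def mnl_purchase_probabilities(offer_set, weights, no_purchase_weight=1.0):
    """Probability that the segment buys each offered product under MNL.

    offer_set: iterable of product ids currently offered.
    weights:   dict mapping product id -> preference weight v_j > 0
               (the segment's consideration set).
    The no-purchase alternative has weight no_purchase_weight.
    """
    offered = [j for j in offer_set if j in weights]
    denom = no_purchase_weight + sum(weights[j] for j in offered)
    return {j: weights[j] / denom for j in offered}


# Example: a segment that considers fares Y and M but not K.
weights = {"Y": 1.2, "M": 0.8}
probs = mnl_purchase_probabilities(["Y", "M", "K"], weights)
print(probs)                                     # K gets no purchase probability
print("no-purchase prob:", 1 - sum(probs.values()))
```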
Abstract:
We consider an entrepreneur who is the sole producer of a cost-reducing skill, but who, when hiring a team to use the skill, cannot prevent collusive trade of the innovation-related knowledge between employees and competitors. We show that there are two types of diffusion-avoiding strategies for the entrepreneur to preempt collusive communication: i) setting up a large productive capacity (the traditional firm) and ii) keeping a small team (the lean firm). The traditional firm is characterized by its many "marginal" employees, who work short days, receive flat wages, and are incompletely informed about the innovation. The lean firm is small in number of employees and engages in complete information sharing among members, who are paid with stock option schemes. We find that the lean firm is superior to the traditional firm when technological entry costs are low and when the sector is immature.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.