976 results for location-allocation problem
Abstract:
In the Hamiltonian formulation of predictive relativistic systems, the canonical coordinates cannot be the physical positions. The relation between them is given by the individuality differential equations. However, due to the arbitrariness in the choice of Cauchy data, there is a wide family of solutions for these equations. In general, those solutions do not satisfy the condition of constancy of velocities moduli, and therefore we have to reparametrize the world lines into the proper time. We derive here a condition on the Cauchy data for the individuality equations which ensures the constancy of the velocities moduli and makes the reparametrization unnecessary.
Abstract:
This article investigates the allocation of demand risk within an incomplete contract framework. We consider an incomplete contractual relationship between a public authority and a private provider (i.e. a public-private partnership), in which the latter invests in non-verifiable cost-reducing efforts and the former invests in non-verifiable adaptation efforts to respond to changing consumer demand over time. We show that the party that bears the demand risk has fewer hold-up opportunities and that this leads the other contracting party to make more effort. Thus, in our model, bearing less risk can lead to more effort, which we describe as a new example of 'counter-incentives'. We further show that when the benefits of adaptation are important, it is socially preferable to design a contract in which the demand risk remains with the private provider, whereas when the benefits of cost-reducing efforts are important, it is socially preferable to place the demand risk on the public authority. We then apply these results to explain two well-known case studies.
Abstract:
During the nineteenth century, the Spanish economy passed through the early stages of industrialization. This process occurred in parallel with the integration of the domestic market for goods and factors, at a time when liberal reforms and the construction of the railway network, among other developments, brought about a substantial fall in transport costs. As the Spanish domestic market progressively integrated, significant changes took place in the pattern of industrial location. On the one hand, the spatial concentration of industry increased considerably from the mid-nineteenth century until the Civil War; on the other, regional specialization intensified. But what forces generated these changes? From a theoretical standpoint, the Heckscher-Ohlin model suggests that the spatial distribution of economic activity is determined by the comparative advantage of territories according to their relative factor endowments. In turn, New Economic Geography (NEG) models show the existence of a bell-shaped relationship between the process of economic integration and the degree of geographic concentration of industrial activity. This article empirically examines the determinants of industrial location in Spain between 1856 and 1929 by estimating a model that combines Heckscher-Ohlin elements with the factors highlighted by the NEG, with the aim of testing the relative strength of the arguments linked to these two interpretations in shaping the location of industry in Spain. The analysis of the results shows that both factor endowments and NEG-type mechanisms were determining elements in the geographic distribution of industry from the nineteenth century onward, although their relative strength varied over time.
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier is capable of splitting each subgroup of classes from each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to manage some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields a better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training size.
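A minimal sketch of the ECOC decoding step described above (the code matrix and classifier outputs are invented; the paper's contribution, the problem-dependent subclass design, is not reproduced here):

```python
# Hypothetical code words: rows = classes, columns = binary problems.
CODE = {
    "class_A": (1, 1, 1, 1, 1),
    "class_B": (0, 0, 0, 1, 1),
    "class_C": (1, 1, 0, 0, 0),
}

def hamming(a, b):
    """Number of positions in which two code words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(outputs):
    """Assign the label of the class whose code word is closest."""
    return min(CODE, key=lambda c: hamming(CODE[c], outputs))

# The five base classifiers vote (0, 0, 0, 1, 0); class_B's code word
# is the closest (one bit away), so that label is returned.
print(decode((0, 0, 0, 1, 0)))
```

Because the code words are several bits apart, a single erroneous binary classifier can be corrected at decoding time, which is the error-correcting property the technique's name refers to.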
Abstract:
The pion spectrum for charged and neutral pions is investigated in pure neutron matter, by letting the pions interact with a neutron Fermi sea in a self-consistent scheme that simultaneously renormalizes the mesons, considered as the source of the interaction, and the nucleons. The possibility of obtaining different kinds of pion condensates is investigated, with the result that they cannot be reached even for values of the spin-spin correlation parameter, g', far below the commonly accepted range.
Abstract:
A subclass of games with population monotonic allocation schemes is studied, namely games with regular population monotonic allocation schemes (rpmas). We focus on the properties of these games and we prove the coincidence between the core and both the Davis-Maschler bargaining set and the Mas-Colell bargaining set.
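For readers unfamiliar with the core, a minimal sketch of checking core membership in a toy 3-player cooperative game (the characteristic function and payoffs are invented; the rpmas structure studied in the paper is not modeled here):

```python
from itertools import combinations

# Hypothetical 3-player game: v maps coalitions (frozensets) to worth.
v = {
    frozenset(): 0,
    frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
    frozenset({1, 2, 3}): 9,
}

def in_core(x):
    """x: dict player -> payoff. In the core iff it is efficient and no
    coalition S can block, i.e. sum_{i in S} x[i] >= v(S) for all S."""
    players = frozenset(x)
    if abs(sum(x.values()) - v[players]) > 1e-9:  # efficiency
        return False
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if sum(x[i] for i in S) < v[frozenset(S)] - 1e-9:
                return False
    return True

print(in_core({1: 3, 2: 3, 3: 3}))  # equal split: no coalition blocks
print(in_core({1: 7, 2: 1, 3: 1}))  # coalition {2, 3} blocks: 1 + 1 < 4
```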
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
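As a toy illustration of the profile idea (a position-dependent log-odds score estimated from selected sites): real generalized profiles also model gaps and are trained with HMM algorithms as described above, and the sequences, background model, and pseudocount below are invented.

```python
import math

# Invented in vitro-selected sites, all of equal length (no gaps).
sites = ["TTGGC", "TTGGA", "TTGCC", "ATGGC", "TTGGC"]
alphabet = "ACGT"
background = 0.25   # assumed uniform background base frequency
pseudocount = 1.0   # smoothing for unobserved bases

length = len(sites[0])
pwm = []  # per-position dict of log2-odds scores
for j in range(length):
    column = [s[j] for s in sites]
    scores = {}
    for b in alphabet:
        freq = (column.count(b) + pseudocount) / (len(sites) + 4 * pseudocount)
        scores[b] = math.log2(freq / background)
    pwm.append(scores)

def score(seq):
    """Sum of per-position log-odds; higher = stronger predicted binding."""
    return sum(pwm[j][b] for j, b in enumerate(seq))

# The consensus scores far above an unrelated sequence.
print(score("TTGGC"), score("GGGGG"))
```

The abstract's point about needing thousands of low- to medium-affinity sequences corresponds to estimating these per-position frequencies (and, in the full model, inter-position dependencies) with useful accuracy.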
Abstract:
We show that the dispersal routes reconstruction problem can be stated as an instance of a graph theoretical problem known as the minimum cost arborescence problem, for which there exist efficient algorithms. Furthermore, we derive some theoretical results, in a simplified setting, on the possible optimal values that can be obtained for this problem. With this, we place the dispersal routes reconstruction problem on solid theoretical grounds, establishing it as a tractable problem that also lends itself to formal mathematical and computational analysis. Finally, we present an insightful example of how this framework can be applied to real data. We propose that our computational method can be used to define the most parsimonious dispersal (or invasion) scenarios, which can then be tested using complementary methods such as genetic analysis.
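A brute-force sketch of the minimum cost arborescence formulation on a toy digraph (the graph and costs are invented). Efficient algorithms such as Chu-Liu/Edmonds solve this at scale, as the abstract notes; exhaustive search over parent assignments is used here only because it makes the problem statement explicit:

```python
from itertools import product

edges = {  # (parent, child) -> dispersal cost; "r" is the assumed origin
    ("r", "a"): 2, ("r", "b"): 6,
    ("a", "b"): 1, ("a", "c"): 5,
    ("b", "c"): 2, ("c", "a"): 3,
}
root = "r"
nodes = sorted({u for e in edges for u in e} - {root})
incoming = {n: [u for (u, v) in edges if v == n] for n in nodes}

def is_arborescence(parent):
    """True iff following chosen parents from every node reaches the root."""
    for n in nodes:
        seen, cur = set(), n
        while cur != root:
            if cur in seen:  # a cycle that never reaches the root
                return False
            seen.add(cur)
            cur = parent[cur]
    return True

best_cost, best_parent = None, None
for combo in product(*(incoming[n] for n in nodes)):  # one parent per node
    parent = dict(zip(nodes, combo))
    if is_arborescence(parent):
        cost = sum(edges[(parent[n], n)] for n in nodes)
        if best_cost is None or cost < best_cost:
            best_cost, best_parent = cost, parent

print(best_cost, best_parent)  # cheapest set of dispersal routes from "r"
```

The optimal parent map is the most parsimonious dispersal scenario for this toy instance: every site is colonized from exactly one source, and total cost is minimized.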
Abstract:
This article describes the new organ allocation system for liver transplantation introduced in Switzerland on July 1, 2007. In its newly adopted transplantation law, Switzerland chose the MELD score (Model for End-stage Liver Disease), based on three laboratory values: total bilirubin, serum creatinine and INR. Advantages and limitations of the MELD score are discussed. Finally, to clarify the pathway of patients for whom a specialized pre-transplant liver evaluation is indicated, the West Switzerland joint liver transplantation program is briefly introduced.
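For orientation, the MELD score mentioned above is computed from the three laboratory values; the sketch below uses the widely published coefficients of the original UNOS formula (laboratory values below 1.0 are floored at 1.0, and the result is rounded). The example inputs are invented, and allocation policies apply further caps and adjustments not shown here.

```python
import math

def meld(bilirubin_mg_dl, creatinine_mg_dl, inr):
    """Original MELD score from total bilirubin, serum creatinine and INR."""
    b = max(bilirubin_mg_dl, 1.0)
    c = max(creatinine_mg_dl, 1.0)
    i = max(inr, 1.0)
    score = 3.78 * math.log(b) + 9.57 * math.log(c) + 11.2 * math.log(i) + 6.43
    return round(score)

print(meld(2.5, 1.8, 1.6))  # hypothetical patient -> 21
```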
Abstract:
This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) this latter individual is found as a result of a database search and (ii) remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper here intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic that has repeatedly demonstrated that the latter two requirements (i) and (ii) should not be a cause of concern.
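A back-of-envelope numerical illustration of that conclusion, under textbook simplifying assumptions (uniform prior over a population of N possible sources, random match probability p, error-free exclusions of the other n - 1 database members; all numbers invented): the posterior probability that the matching suspect is the source works out to 1 / (1 + (N - n) p), slightly higher than the no-search, probable-cause value 1 / (1 + (N - 1) p), so dividing the likelihood ratio by the database size is not warranted.

```python
def posterior_after_search(N, n, p):
    """P(suspect is source | suspect matches, other n - 1 members excluded)."""
    return 1.0 / (1.0 + (N - n) * p)

def posterior_no_search(N, p):
    """P(suspect is source | suspect matches), no database search."""
    return 1.0 / (1.0 + (N - 1) * p)

N, n, p = 1_000_000, 10_000, 1e-6
# Excluding database members removes alternative sources, so the search
# strengthens (never weakens) the case against the matching suspect.
print(posterior_after_search(N, n, p) > posterior_no_search(N, p))  # True
```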
Abstract:
Our mental representation of the world is far from objective. For example, western Canadians estimate the locations of North American cities to be too far to the west. This bias could be due to a reference point effect, in which people estimate more space between places close to them than far from them, or to representational pseudoneglect, in which neurologically intact individuals favor the left side of space when asked to image a scene. We tested whether either or both of these biases influence the geographic world representations of neurologically intact young adults from Edmonton and Ottawa, which are in western and eastern Canada, respectively. Individuals were asked to locate North American cities on a two-dimensional grid. Both groups revealed effects of representational pseudoneglect in this novel paradigm, but they also each exhibited reference point effects. These results inform theories in both cognitive psychology and neuroscience.