884 results for Optimal control problem
Abstract:
Executive control refers to a set of abilities enabling us to plan, control and implement our behavior to adapt rapidly and flexibly to environmental requirements. These adaptations notably involve the suppression of intended or ongoing cognitive or motor processes, a skill referred to as "inhibitory control". To implement efficient executive control of behavior, one must monitor one's performance following errors and adjust behavior accordingly. Deficits in inhibitory control have been associated with the emergence of a wide range of psychiatric disorders, from drug addiction to attention-deficit/hyperactivity disorder. Inhibitory control deficits could, however, be remediated: the brain has the remarkable capacity to reorganize following training, allowing for behavioral improvements. This mechanism is referred to as neural and behavioral plasticity. Here, our aim is to investigate training-induced plasticity in inhibitory control and to propose a model of inhibitory control explaining the spatio-temporal brain mechanisms supporting inhibitory control processes and their plasticity. In the two studies entitled "Brain dynamics underlying training-induced improvement in suppressing inappropriate action" (Manuel et al., 2010) and "Training-induced neuroplastic reinforcement of top-down inhibitory control" (Manuel et al., 2012c), we investigated the neurophysiological and behavioral changes induced by inhibitory control training with two different tasks and two populations of healthy participants. We report that different inhibitory control training regimens either developed automatic/bottom-up inhibition in parietal areas or reinforced controlled/top-down inhibitory control in frontal brain regions. We discuss the results of both studies in the light of a model of fronto-basal inhibition processes. In "Spatio-temporal brain dynamics mediating post-error behavioral adjustments" (Manuel et al., 2012a), we investigated how error detection modulates the processing of subsequent stimuli and in turn impacts behavior. We showed that during early stimulus integration, the activity of prefrontal and parietal areas is modulated according to previous performance and impacts post-error behavioral adjustments. We discuss these results in terms of a shift from an automatic to a controlled form of inhibition induced by the detection of errors, which in turn influenced response speed. In "Inter- and intra-hemispheric dissociations in ideomotor apraxia: a large-scale lesion-symptom mapping study in subacute brain-damaged patients" (Manuel et al., 2012b), we investigated ideomotor apraxia, a deficit in performing pantomime gestures of object use, and identified the anatomical correlates of distinct ideomotor apraxia error types in 150 subacute brain-damaged patients. Our results reveal a left intra-hemispheric dissociation between different pantomime error types, but with an unspecific role for inferior frontal areas. Executive functions refer to a set of processes that allow us to plan and control our behavior in order to adapt rapidly and flexibly to the environment. One way of adapting consists in stopping an ongoing cognitive or motor process: inhibitory control. For executive control to be optimal, it is necessary to adjust our behavior after making errors.
Deficits in inhibitory control underlie various psychiatric disorders such as drug addiction or attention-deficit/hyperactivity disorder. Such deficits could be rehabilitated: indeed, the brain has the remarkable capacity to reorganize itself after training and thereby produce behavioral improvements. This mechanism is called neural and behavioral plasticity. Here, our goal is to study the plasticity of inhibitory control after brief training and to propose a model of inhibitory control that explains the spatio-temporal brain mechanisms underlying the improvement of inhibitory control and its plasticity. In the two studies entitled "Brain dynamics underlying training-induced improvement in suppressing inappropriate action" (Manuel et al., 2010) and "Training-induced neuroplastic reinforcement of top-down inhibitory control" (Manuel et al., 2012c), we examined the neurophysiological and behavioral changes induced by inhibitory control training. To do so, we studied inhibition with two different tasks and two populations of healthy participants. We showed that different training regimens could either develop automatic/bottom-up inhibition in parietal areas or reinforce controlled/top-down inhibition in frontal areas. We discuss these results in the context of the fronto-basal model of inhibitory control. In "Spatio-temporal brain dynamics mediating post-error behavioral adjustments" (Manuel et al., 2012a), we investigated how error detection influences the processing of the next stimulus and how it acts on post-error behavior. We showed that during early stimulus integration, the activity of prefrontal and parietal areas was modulated as a function of previous performance and had an impact on post-error adjustments. We propose that error detection induced a shift from an automatic to a controlled mode of inhibition, which in turn influenced response times. In "Inter- and intra-hemispheric dissociations in ideomotor apraxia: a large-scale lesion-symptom mapping study in subacute brain-damaged patients" (Manuel et al., 2012b), we examined ideomotor apraxia, an inability to perform object-use pantomime gestures, in 150 brain-damaged patients. We found a left intra-hemispheric dissociation between different error types, with a non-specific role for inferior frontal areas.
Abstract:
Removal of introns during pre-mRNA splicing is a critical process in gene expression, and understanding its control at both single-gene and genomic levels is one of the great challenges in biology. Splicing takes place in a dynamic, large ribonucleoprotein complex known as the spliceosome. By combining genetics and biochemistry, work in Saccharomyces cerevisiae has provided insights into splicing mechanisms, including regulation by RNA-protein interactions. Recent genome-wide analyses indicate that regulated splicing is widespread and biologically relevant even in organisms with a relatively simple intronic structure, such as yeast. Furthermore, the possibility of coordination in splicing regulation at the genomic level is becoming clear in this model organism. This should provide a valuable system for approaching the complex problem of the role of regulated splicing in genomic expression.
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation is to be computed for a particular generated model. SVM is particularly designed to tackle classification problems in high-dimensional space in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification and therefore provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response depending on the combination of parameter values.
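As a rough illustration of this workflow, consider the sketch below (Python; entirely our own hypothetical construction, with `evaluate_cost` as a cheap synthetic stand-in for the expensive forward simulation, and the RBF kernel, cost threshold and sampling budgets as illustrative assumptions). It labels already-evaluated parameter sets as good or bad fits via a cost threshold, fits an SVM, and then spends the simulation budget on candidates nearest the decision boundary, where the classification is most uncertain:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def evaluate_cost(theta):
    # Placeholder for an expensive forward model run returning a misfit;
    # here a cheap synthetic surrogate so the example runs on its own.
    return np.sum((theta - 0.5) ** 2)

# Initial exploration: evaluate the forward model on random parameter sets.
dim, n_init, threshold = 4, 200, 0.3
X = rng.uniform(0.0, 1.0, size=(n_init, dim))
y = np.array([evaluate_cost(x) <= threshold for x in X])  # True = "good fit"

for it in range(5):
    # Non-linear, non-parametric decision boundary in parameter space.
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

    # Generate many candidates, but run the forward model only on those
    # closest to the decision boundary (small |decision function|), i.e.
    # where the good/bad classification is most uncertain.
    candidates = rng.uniform(0.0, 1.0, size=(2000, dim))
    margin = np.abs(clf.decision_function(candidates))
    picked = candidates[np.argsort(margin)[:50]]

    y_new = np.array([evaluate_cost(x) <= threshold for x in picked])
    X, y = np.vstack([X, picked]), np.concatenate([y, y_new])

print(f"{y.sum()} good-fitting models out of {len(y)} evaluated")
```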
Abstract:
It has long been standard in agency theory to search for incentive-compatible mechanisms on the assumption that people care only about their own material wealth. However, this assumption is clearly refuted by numerous experiments, and we feel that it may be useful to consider nonpecuniary utility in mechanism design and contract theory. Accordingly, we devise an experiment to explore optimal contracts in an adverse-selection context. A principal proposes one of three contract menus, each of which offers a choice of two incentive-compatible contracts, to two agents whose types are unknown to the principal. The agents know the set of possible menus, and choose to either accept one of the two contracts offered in the proposed menu or to reject the menu altogether; a rejection by either agent leads to lower (and equal) reservation payoffs for all parties. While all three possible menus favor the principal, they do so to varying degrees. We observe numerous rejections of the more lopsided menus, and approach an equilibrium where one of the more equitable contract menus (which one depends on the reservation payoffs) is proposed and agents accept a contract, selecting actions according to their types. Behavior is largely consistent with all recent models of social preferences, strongly suggesting there is value in considering nonpecuniary utility in agency theory.
Abstract:
The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced at low-grade metamorphism. Fold nappes and thrust sheets are also some of the most common features in orogens. Fold nappes are kilometer-scale recumbent folds which feature a weakly deformed normal limb and an intensely deformed overturned limb. Thrust sheets, on the other hand, are characterized by the absence of an overturned limb and can be defined as almost rigid blocks of crust that are displaced sub-horizontally over up to several tens of kilometers. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the so-called infra-Helvetic complex in Western and Central Switzerland, respectively. This complex is overridden by thrust sheets such as the Diablerets and Wildhorn nappes in Western Switzerland. One of the most famous examples of thrust sheets worldwide is the Glarus thrust sheet in Central Switzerland, which features over 35 kilometers of thrusting accommodated by a ~1 m thick shear zone. Since the works of the early Alpine geologists such as Heim and Lugeon, the knowledge of these nappes has been steadily refined, and today the geometry and kinematics of the Helvetic nappe system are generally agreed upon. However, despite the extensive knowledge we have today of the kinematics of fold nappes and thrust sheets, the mechanical process leading to the emplacement of these nappes is still poorly understood. For a long time geologists were facing the so-called 'mechanical paradox', which arises from the fact that a block of rock several kilometers high and tens of kilometers long (i.e. a nappe) would break internally rather than start moving on a low-angle plane. Several solutions were proposed to solve this apparent paradox. Certainly the most successful is the theory of critical wedges (e.g. Chapple, 1978; Dahlen, 1984). In this theory the orogen is considered as a whole, and this change of scale allows thrust-sheet-like structures to form while being consistent with mechanics. However, this theory is intricately linked to brittle rheology, and fold nappes, which are inherently ductile structures, cannot be created in these models. When considering the problem of nappe emplacement from the perspective of ductile rheology, the problem of strain localization arises. The aim of this thesis was to develop and apply models based on continuum mechanics and integrating heat transfer to understand the emplacement of nappes. Models were solved either analytically or numerically. In the first two papers of this thesis we derived a simple model which describes channel flow in a homogeneous material with temperature-dependent viscosity. We applied this model to the Morcles fold nappe and to several kilometer-scale shear zones worldwide. In the last paper we zoomed out and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic and temperature-dependent wedges. In this last paper we focused on the relationship between basement and cover deformation. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop and that these apparently different structures constitute two end-members of a single structure (i.e. nappe). The transition from fold nappe to thrust sheet is to first order controlled by the deformation of the basement. -- The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphism.
Fold nappes and thrust sheets are among the most common geological features in orogens. Fold nappes are kilometer-scale recumbent folds characterized by a weakly deformed normal limb, in contrast to their intensely deformed overturned limb. Thrust sheets, conversely, are characterized by the absence of a well-defined overturned limb. They can be defined as blocks of crust that are displaced almost rigidly, sub-horizontally, over up to several tens of kilometers. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the infra-Helvetic complex in Western and Central Switzerland, respectively. This complex lies beneath thrust sheets such as the Diablerets and Wildhorn nappes in Western Switzerland. The Glarus nappe in Central Switzerland is distinguished by a displacement of over 35 kilometers accommodated by a basal shear zone only 1 meter thick. Today the geometry and kinematics of the Alpine nappes are a matter of general consensus. Despite this, the mechanical processes by which these nappes were emplaced remain poorly understood. The sediments that form the Alpine nappes were deposited during the Mesozoic and Cenozoic on the basement of the European margin, which was stretched during the opening of the Tethys ocean. During the closure of the Tethys, which gave rise to the Alps, the basement and sediments of the European margin were deformed to form the Alpine nappes. Throughout the first half of the twentieth century, geologists were confronted with the 'mechanical paradox', which arises from the fact that a block of rock several kilometers high and several tens of kilometers long (i.e. a nappe) will fracture internally rather than slide on a frictional surface. Several solutions were proposed to circumvent this apparent paradox. The most popular is the theory of critical wedges (e.g. Chapple, 1978; Dahlen, 1984). In this theory the orogen is considered as a whole, and this simple change of scale resolves the mechanical paradox (the internal fracturing of the orogen corresponds to the nappes). This theory is, however, closely tied to brittle rheology, and consequently fold nappes cannot form within a critical wedge. The aim of this thesis was to develop and apply models based on continuum mechanics and heat transfer to understand nappe emplacement. These models were solved analytically or numerically. In the first two papers presented in this thesis, we derived a channel-flow model for a homogeneous material whose viscosity depends on temperature. We applied this model to the Morcles nappe and to several kilometer-scale shear zones from different orogens around the world. In the last paper we considered the problem at the scale of the orogen and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic wedges, taking heat transfer into account. We demonstrated that during the compression of a ductile passive margin, both fold nappes and thrust sheets can develop. We also showed that fold nappes and thrust sheets are two end-members of the same structure (i.e. nappe). Whether a fold nappe or a thrust sheet develops is controlled, to first order, by the deformation of the basement.
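For orientation, the kind of channel-flow model the thesis describes can be sketched in a few lines (our notation, with an exponential viscosity law as an assumed simplification; the formulation in the papers themselves may differ). For slow flow in a channel with no along-channel pressure gradient, momentum balance makes the shear stress uniform across the channel, so a temperature-dependent viscosity concentrates strain where the material is hottest:

```latex
% Minimal channel-flow sketch (our notation): constant shear stress tau,
% viscosity decreasing exponentially with temperature.
\frac{\partial \tau}{\partial y} = 0
\;\;\Rightarrow\;\; \tau = \mathrm{const},
\qquad
\eta(T) = \eta_0 \, e^{-(T - T_0)/\Delta T},
\qquad
\dot{\gamma}(y) = \frac{\tau}{\eta\big(T(y)\big)},
\qquad
v(y) = \int_0^{y} \frac{\tau}{\eta\big(T(y')\big)}\, dy' .
```

Prescribing a temperature profile T(y) and integrating the strain rate then yields the velocity profile of the shear zone.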
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit, and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid-price control policy. Computational experiments indicate that our approach is quite fast, scales to industrial problems, and can provide significant improvements over standard benchmarks.
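To make the bid-price idea concrete, here is a deliberately small sketch (Python with scipy; our own toy instance, with no-shows and overbooking omitted for brevity, so it illustrates only the randomized-LP mechanics: sample demand, solve an allocation LP per sample, and average the leg duals into bid prices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

fares = np.array([400.0, 150.0, 500.0])     # revenue per product sold
A = np.array([[1, 1, 0],                    # rows = legs, cols = products:
              [0, 1, 1]])                   # product 1 uses both legs
capacity = np.array([100.0, 120.0])
mean_demand = np.array([60.0, 90.0, 50.0])

def leg_duals(demand):
    # max fares@x  s.t.  A x <= capacity, 0 <= x <= demand  (solved as a min)
    res = linprog(-fares, A_ub=A, b_ub=capacity,
                  bounds=[(0.0, float(d)) for d in demand], method="highs")
    return -res.ineqlin.marginals           # duals of leg capacities, >= 0

# Randomized LP: resample demand and average the leg duals over the samples.
samples = rng.poisson(mean_demand, size=(100, len(fares)))
bid_prices = np.mean([leg_duals(d) for d in samples], axis=0)

# Bid-price control: accept a request for product j iff its fare covers the
# summed bid prices of the legs it consumes.
for j, fare in enumerate(fares):
    opportunity_cost = A[:, j] @ bid_prices
    print(f"product {j}: fare {fare:.0f}, bid-price cost "
          f"{opportunity_cost:.1f}, accept={fare >= opportunity_cost}")
```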
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999), "Restless bandits, partial conservation laws, and indexability", forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
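The classical baseline that these extended-class indices generalize is easy to state in code. A minimal sketch (Python, our own toy data): with linear holding-cost rates $c_i$ and processing times $p_i$, Smith's rule sequences jobs by decreasing $c_i/p_i$:

```python
# Minimal sketch (toy data) of the linear-cost baseline: sequencing by
# decreasing c/p (Smith's rule) minimizes the total weighted holding cost
# sum_i c_i * C_i, where C_i is job i's completion time.
jobs = [("A", 3.0, 2.0),   # (name, holding-cost rate c, processing time p)
        ("B", 1.0, 1.0),
        ("C", 4.0, 4.0)]

schedule = sorted(jobs, key=lambda job: job[1] / job[2], reverse=True)

t = total_cost = 0.0
for name, c, p in schedule:
    t += p                          # completion time of this job
    total_cost += c * t             # job pays its rate until completion
    print(f"{name}: c/p = {c / p:.2f}, completes at t = {t:.1f}")
print(f"total weighted completion cost = {total_cost:.1f}")
```

The paper's dynamic indices play the same prioritizing role, but for convex holding costs they depend on the current number of jobs in each class rather than being a single constant per class.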
Abstract:
The optimal location of services is one of the most important factors that affects service quality in terms of consumer access. On the other hand, services in general need to have a minimum catchment area so as to be efficient. In this paper a model is presented that locates the maximum number of services that can coexist in a given region without having losses, taking into account that they need a minimum catchment area to exist. The objective is to minimize average distance to the population. The formulation presented belongs to the class of discrete P-median-like models. A tabu heuristic method is presented to solve the problem. Finally, the model is applied to the location of pharmacies in a rural region of Spain.
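A minimal sketch of a tabu-search neighborhood for a P-median-style model (Python; our own toy instance, ignoring the paper's minimum-catchment constraint, which would enter as a feasibility check on each candidate swap):

```python
import numpy as np

rng = np.random.default_rng(2)
towns = rng.uniform(0, 100, size=(40, 2))       # town coordinates
pop = rng.integers(100, 2000, size=40)          # town populations
p = 5                                           # number of services to open

dist = np.linalg.norm(towns[:, None, :] - towns[None, :, :], axis=2)

def cost(sites):
    # Population-weighted distance, each town served by its nearest site.
    return float(pop @ dist[:, sorted(sites)].min(axis=1))

current = set(rng.choice(len(towns), size=p, replace=False).tolist())
best, best_cost, tabu = set(current), cost(current), []

for _ in range(200):
    # Best non-tabu swap: close site i, open site j.
    moves = [(i, j) for i in current for j in range(len(towns))
             if j not in current and (i, j) not in tabu]
    i, j = min(moves, key=lambda m: cost(current - {m[0]} | {m[1]}))
    current = current - {i} | {j}
    tabu.append((j, i))                         # forbid the reverse swap...
    tabu = tabu[-10:]                           # ...for a short tenure
    if cost(current) < best_cost:
        best, best_cost = set(current), cost(current)

print(f"best sites: {sorted(best)}, weighted distance: {best_cost:.0f}")
```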
Abstract:
Therapeutic goal of vitamin D: optimal serum level and dose requirements. Results of randomized controlled trials and meta-analyses investigating the effect of vitamin D supplementation on falls and fractures are inconsistent. The optimal serum level of 25(OH) vitamin D for musculoskeletal and global health is ≥ 30 ng/ml (75 nmol/l) for some experts and ≥ 20 ng/ml (50 nmol/l) for others. A daily dose of vitamin D is better than high intermittent doses for reaching this goal. High-dose once-yearly vitamin D therapy may increase the incidence of fractures and falls. A high serum level of vitamin D is probably harmful for the musculoskeletal system and for health at large. The optimal benefits for musculoskeletal health are obtained with an 800 IU daily dose and a serum level near 30 ng/ml (75 nmol/l).
Abstract:
The Driver Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers (for example bus, train, plane or boat drivers or pilots) for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Developing an adequate model for this problem, one that represents the real problem as closely as possible, is an important research area. The main objective of this research work is to present new mathematical models for the DSP that capture all the complexity of the driver scheduling problem, and also to demonstrate that the solutions of these models can be easily implemented in real situations. This issue has been recognized by several authors as an important problem in public transportation. The most well-known and general formulation for the DSP is a set partitioning/set covering model (SPP/SCP), stated below for reference. However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use them as automatic planning systems, because the schedules obtained must be modified manually before they can be implemented in real situations. Based on extensive passenger transportation experience with bus companies in Portugal, we propose new alternative models to formulate the DSP. These models are also based on set partitioning/covering models; however, they take into account the bus operators' issues and the perspective and environment of the user. We follow the steps of the Operations Research methodology, which consist of: identify the problem; understand the system; formulate a mathematical model; verify the model; select the best alternative; present the results of the analysis; and implement and evaluate. All the processes are done with the close participation and involvement of the final users from different transportation companies. The planners' opinions and main criticisms are used to improve the proposed model in a continuous enrichment process. The final objective is to have a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criterion for evaluating the models is their capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We have considered the following as measures of the quality of a model: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposition of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
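For reference, the SPP/SCP core mentioned above, in standard textbook notation (not necessarily the authors' exact formulation): with a binary variable $x_j$ for each feasible duty $j$ of cost $c_j$, and $a_{ij} = 1$ if duty $j$ covers piece of work $i$,

```latex
\min_{x} \; \sum_{j} c_j x_j
\quad \text{s.t.} \quad
\sum_{j} a_{ij} x_j = 1 \;\; \forall i
\;\;(\text{set covering relaxes this to } \geq 1),
\qquad x_j \in \{0, 1\}.
```

The alternative models discussed in the abstract enrich this core with the operator-specific rules that the plain SPP/SCP formulation abstracts away.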
Abstract:
Most cases of cost overruns in public procurement are related to important changes in the initial project design. This paper deals with the problem of design specification in public procurement and provides a rationale for design misspecification. We propose a model in which the sponsor decides how much to invest in design specification and competitively awards the project to a contractor. After the project has been awarded, the sponsor engages in bilateral renegotiation with the contractor in order to accommodate changes in the initial project's design that new information makes desirable. When procurement takes place in the presence of horizontally differentiated contractors, the design's specification level affects the resulting degree of competition. The paper highlights this interaction between market competition and design specification and shows that the sponsor's optimal strategy, when facing an imperfectly competitive market supply, is to underinvest in design specification, making significant cost overruns likely. Since no such misspecification occurs in a perfectly competitive market, cost overruns are seen to arise as a consequence of a lack of competition in the procurement market.
Abstract:
To recover a version of Barro's (1979) 'random walk' tax-smoothing outcome, we modify Lucas and Stokey's (1983) economy to permit only risk-free debt. This imparts near-unit-root behavior to government debt, independently of the government expenditure process, a realistic outcome in the spirit of Barro's. We show how the risk-free-debt-only economy confronts the Ramsey planner with additional constraints on equilibrium allocations that take the form of a sequence of measurability conditions. We solve the Ramsey problem by formulating it in terms of a Lagrangian, and applying a Parameterized Expectations Algorithm to the associated first-order conditions. The first-order conditions and numerical impulse response functions partially affirm Barro's random walk outcome. Though the behaviors of tax rates, government surpluses, and government debts differ, allocations are very close for computed Ramsey policies across incomplete and complete markets economies.
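A Parameterized Expectations Algorithm is easiest to see on a small example. The sketch below (Python; our own toy stochastic growth model with log utility and full depreciation, not the paper's risk-free-debt Ramsey economy) parameterizes the conditional expectation in the Euler equation as a log-linear function of the state, simulates under that belief, and re-fits until a fixed point:

```python
import numpy as np

# Toy PEA in the spirit of den Haan & Marcet: Euler equation
# 1/c_t = beta * E_t[(1/c_{t+1}) * alpha * theta_{t+1} * k_{t+1}^(alpha-1)],
# with E_t[...] parameterized as exp(b0 + b1*log k_t + b2*log theta_t).
rng = np.random.default_rng(3)
alpha, beta, rho, sigma, T = 0.36, 0.95, 0.9, 0.02, 5000

log_theta = np.zeros(T)
for t in range(1, T):                 # AR(1) productivity in logs
    log_theta[t] = rho * log_theta[t - 1] + sigma * rng.normal()
theta = np.exp(log_theta)

b = np.array([0.0, -alpha, -1.0])     # initial guess for the belief
for it in range(100):
    k = np.empty(T + 1)
    k[0] = 0.1
    c = np.empty(T)
    for t in range(T):                # simulate under current beliefs
        e_t = np.exp(b @ [1.0, np.log(k[t]), log_theta[t]])
        c[t] = 1.0 / (beta * e_t)     # consumption implied by the Euler eq.
        c[t] = min(c[t], 0.99 * theta[t] * k[t] ** alpha)  # feasibility cap
        k[t + 1] = theta[t] * k[t] ** alpha - c[t]
    # Realized value inside the conditional expectation, dated t+1.
    m = (1.0 / c[1:]) * alpha * theta[1:] * k[1:T] ** (alpha - 1.0)
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), log_theta[:T - 1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(m), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = 0.5 * b + 0.5 * b_new         # damped update of the belief

print("fitted expectation coefficients:", np.round(b, 3))
# Check: this toy model has the closed form c = (1 - alpha*beta)*theta*k^alpha,
# which implies b1 -> -alpha and b2 -> -1 at the fixed point.
```

The paper applies the same simulate-and-refit logic to the first-order conditions of the Ramsey Lagrangian, where the measurability constraints make the expectation terms the hard part.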
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously, so the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or 'centralized'), whereas it is largely homogeneous (or 'decentralized') for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
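A toy simulation can illustrate the congestion trade-off described here (Python; the network sizes, arrival process and one-forward-per-period service rule are all our own illustrative assumptions, not the paper's model): a star routes everything through its center, which is efficient at low arrival rates but collapses first as rates grow, while a ring spreads the load:

```python
import random
from collections import deque

random.seed(4)

def star(n):                      # node 0 is the center ("centralized")
    return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

def ring(n):                      # homogeneous network ("decentralized")
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def next_hop(nbrs, src, dst):
    # One step along a shortest path from src toward dst (BFS parents).
    parent, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        for v in nbrs[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    step = dst
    while parent[step] != src:
        step = parent[step]
    return step

def avg_backlog(nbrs, rate, periods=2000):
    n = len(nbrs)
    queues = {i: deque() for i in nbrs}
    for _ in range(periods):
        for i in nbrs:                        # Bernoulli problem arrivals
            if random.random() < rate:
                queues[i].append(random.randrange(n))  # random solver node
        moves = []
        for i in nbrs:                        # each node forwards one problem
            if queues[i]:
                dst = queues[i].popleft()
                if dst != i:                  # solved on reaching its solver
                    moves.append((next_hop(nbrs, i, dst), dst))
        for node, dst in moves:
            queues[node].append(dst)
    return sum(len(q) for q in queues.values()) / n

for rate in (0.05, 0.2, 0.5):
    print(f"arrival rate {rate}: star backlog {avg_backlog(star(20), rate):6.1f}, "
          f"ring backlog {avg_backlog(ring(20), rate):6.1f}")
```

At low rates the star resolves problems in few hops; at high rates its center becomes the bottleneck and backlogs diverge, while the homogeneous ring degrades more gracefully, mirroring the centralized-to-decentralized transition the abstract describes.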