92 results for theory of constraints
Abstract:
This paper discusses predictive motion control of a MiRoSoT robot. The dynamic model of the robot is derived by taking into account the whole process: the robot, vision, control, and transmission systems. Based on the obtained dynamic model, an integrated predictive control algorithm is proposed for precise positioning with avoidance of both stationary and moving obstacles. This objective is achieved automatically by introducing distance constraints into the open-loop optimization of the control inputs. Simulation results demonstrate the feasibility of such a control strategy for the derived dynamic model.
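As a rough illustration of the kind of open-loop optimization with distance constraints the abstract describes, the sketch below optimizes a short control horizon for a simple unicycle-style kinematic model while keeping a minimum clearance from an obstacle. The model, horizon, goal and obstacle data are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of predictive positioning with a distance constraint;
# the unicycle model, horizon and obstacle data are illustrative
# assumptions, not the paper's model.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10                             # control period and horizon (assumed)
GOAL = np.array([1.0, 0.5])                 # target position (hypothetical)
OBST, R_SAFE = np.array([0.5, 0.3]), 0.15   # obstacle centre and safety radius

def rollout(u):
    """Integrate unicycle kinematics for H steps; u = [v_1, w_1, ..., v_H, w_H]."""
    xs, x = [], np.zeros(3)                 # state: (x, y, heading)
    for v, w in u.reshape(H, 2):
        x = x + DT * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        xs.append(x.copy())
    return np.array(xs)

def cost(u):
    return np.sum((rollout(u)[:, :2] - GOAL) ** 2)   # track the goal position

def clearance(u):
    """Distance to the obstacle minus the safety radius; must stay >= 0."""
    return np.linalg.norm(rollout(u)[:, :2] - OBST, axis=1) - R_SAFE

res = minimize(cost, x0=np.full(2 * H, 0.1), method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
print("first control input:", res.x[:2])    # apply, then re-optimize (receding horizon)
```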
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one and, under some constraints, has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown.
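For intuition, here is a toy shooting-style random-walk estimator for a radiosity system on a hypothetical three-patch scene. The scene data and the simple weight-cutoff termination are illustrative assumptions; the paper's unbiased estimators (e.g. with proper Russian roulette) differ in detail.

```python
# Toy shooting-style random walk for the radiosity system
# B = E + diag(rho) F^T B on a hypothetical 3-patch scene with unit areas;
# the scene and the weight cutoff are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[0.0, 0.6, 0.4],      # F[i, j]: fraction of power leaving i that hits j
              [0.3, 0.0, 0.7],
              [0.2, 0.8, 0.0]])
rho = np.array([0.5, 0.3, 0.7])     # reflectances
E = np.array([1.0, 0.0, 0.0])       # emitted radiosity; patch 0 is the source

def shoot(n_walks=50_000):
    """Each walk carries a share of the emitted power and deposits it as it bounces."""
    B = E.copy()
    src_p = E / E.sum()                      # sample sources by emitted power
    w0 = E.sum() / n_walks                   # power carried per walk
    for _ in range(n_walks):
        i, w = rng.choice(len(E), p=src_p), 1.0
        while w > 1e-3:                      # cheap cutoff instead of Russian roulette
            i = rng.choice(len(E), p=F[i])   # move according to the form factors
            w *= rho[i]                      # patch i reflects a fraction rho[i]
            B[i] += w0 * w                   # deposit the reflected power
    return B

print(shoot())  # exact: np.linalg.solve(np.eye(3) - rho[:, None] * F.T, E)
```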
Abstract:
This Technical Report presents a tentative protocol for assessing the viability of power-supply systems. The viability of power-supply systems can be assessed by looking at the production factors (e.g. paid labor, power capacity, fossil fuels) needed for the system to operate and maintain itself, in relation to the internal constraints set by the energetic metabolism of societies. Using this protocol, it becomes possible to link assessments of technical coefficients performed at the level of the power-supply systems with assessments of benchmark values performed at the societal level across the relevant sectors. In particular, the example provided here for France for the year 2009 shows that nuclear energy is not viable in terms of labor requirements (both direct and indirect inputs) or in terms of requirements of power capacity, especially when reprocessing operations are included.
Abstract:
Many factors influence the day-ahead market bidding strategies of a generation company (GenCo) in the current energy market framework. Environmental policy issues have become more and more important for fossil-fuelled power plants, and they have to be considered in plant management, giving rise to emission limitations. This work investigates the influence of both the allowances and emission reduction plans and the incorporation of medium-term derivatives commitments on the optimal generation bidding strategy for the day-ahead electricity market. Two different technologies have been considered: coal thermal units, a high-emission technology, and combined cycle gas turbine units, a low-emission technology. The Iberian Electricity Market and the Spanish National Emissions and Allocation Plans are the framework used to deal with the environmental issues in the day-ahead market bidding strategies. To address emission limitations, some of the standard risk management methodologies developed for financial markets, such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), have been extended. This study offers electricity generation utilities a mathematical model to determine the optimal generation bid to the wholesale electricity market, for each of their generation units, that maximizes the long-run profits of the utility while abiding by the Iberian Electricity Market rules, the environmental restrictions set by the EU Emission Trading Scheme, and the restrictions set by the Spanish National Emissions Reduction Plan. The economic implications for a GenCo of including the environmental restrictions of these National Plans are analyzed and the most remarkable results are presented.
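As a minimal illustration of the risk functionals mentioned in the abstract, the sketch below computes VaR and CVaR from simulated profit scenarios; the Gaussian scenario generator is a hypothetical stand-in for the market and emissions model.

```python
# Minimal sketch of VaR and CVaR on a GenCo's simulated profit scenarios;
# the scenario generator is a hypothetical stand-in for the market model.
import numpy as np

rng = np.random.default_rng(1)
profits = rng.normal(loc=100.0, scale=30.0, size=10_000)  # EUR per scenario

def var_cvar(profits, beta=0.95):
    """VaR/CVaR of losses (negative profits) at confidence level beta."""
    losses = -np.asarray(profits)
    var = np.quantile(losses, beta)          # loss exceeded with probability 1 - beta
    cvar = losses[losses >= var].mean()      # expected loss beyond the VaR
    return var, cvar

var, cvar = var_cvar(profits)
print(f"VaR(95%) = {var:.1f}, CVaR(95%) = {cvar:.1f}")
```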
Abstract:
In 2000 the European Statistical Office published the guidelines for developing the Harmonized European Time Use Surveys system. Under this unified framework, the first Time Use Survey of national scope was conducted in Spain during 2002–03. The aim of these surveys is to understand human behavior and people's lifestyles. Time allocation data are compositional in origin, that is, they are subject to non-negativity and constant-sum constraints. Thus, standard multivariate techniques cannot be directly applied to analyze them. The goal of this work is to identify homogeneous Spanish Autonomous Communities with regard to the typical activity pattern of their respective populations. To this end, a fuzzy clustering approach is followed. Rather than the hard partitioning of classical clustering, where objects are allocated to only a single group, fuzzy methods identify overlapping groups of objects by allowing them to belong to more than one group. Concretely, the probabilistic fuzzy c-means algorithm is conveniently adapted to deal with the Spanish Time Use Survey microdata. As a result, a map distinguishing Autonomous Communities with similar activity patterns is drawn.
Key words: Time use data; fuzzy clustering; FCM; simplex space; Aitchison distance
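One common way to reconcile fuzzy c-means with the simplex constraints is to work in Aitchison geometry via a centered log-ratio (clr) transform, where Euclidean distance equals the Aitchison distance. The sketch below follows that route on synthetic compositions; the data, the number of clusters and the fuzzifier are illustrative assumptions, and the paper's adaptation may differ.

```python
# Sketch of probabilistic fuzzy c-means on compositional data: the clr
# transform maps compositions to a space where Euclidean distance equals
# the Aitchison distance. Data, c and m are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
X = rng.dirichlet(alpha=[4, 3, 2, 1], size=60)   # 60 regions x 4 activity shares

def clr(x):
    """Centered log-ratio transform of strictly positive compositions."""
    g = np.exp(np.mean(np.log(x), axis=1, keepdims=True))  # geometric mean per row
    return np.log(x / g)

def fcm(Z, c=3, m=2.0, iters=100):
    U = rng.dirichlet(np.ones(c), size=len(Z))   # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        V = (W.T @ Z) / W.sum(axis=0)[:, None]   # weighted cluster centres
        d2 = ((Z[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1))               # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

U, V = fcm(clr(X))
print("hard labels of first 10 regions:", U.argmax(axis=1)[:10])
```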
Abstract:
The evaluation of large projects raises well-known difficulties because, by definition, such projects modify the current price system; their public evaluation presents additional difficulties because they also modify the shadow prices that would exist without the project. This paper first analyzes the basic methodologies applied until the late 1980s, based on integrating projects into optimization models or, alternatively, on iterative procedures with information exchange between two organizational levels. The newer methodologies applied since then are based on variational inequalities, bilevel programming, and linear or nonlinear complementarity. Their foundations and different applications related to project evaluation are explored. As a matter of fact, these new tools are closely related to one another and can treat more complex cases involving, for example, the reaction of agents to policies or the existence of multiple agents in an environment characterized by common functions representing demands or constraints on polluting emissions.
Abstract:
Coffee and cocoa represent the main sources of income for small farmers in the Northern Amazon Region of Ecuador. The provinces of Orellana and Sucumbios, as border areas, have benefited from investments made by many public and private institutions. Many of the projects carried out in the area have been aimed at energising the production of coffee and cocoa, strengthening the producers' associations and providing commercialisation infrastructure. Improving the quality of life of this population, threatened by poverty and high migration flows mainly from Colombia, is a significant challenge. This paper presents research highlighting the importance of associative commercialisation in raising income from coffee and cocoa. The research draws on primary information obtained during fieldwork, and on official information from the Ministry of Agriculture. The study presents an overview of current organisational structures, initiatives of associative commercialisation, stockpiling infrastructure and ownership regimes, as well as estimates of production and income for both 'robusta' coffee and national cocoa. The analysis of the main constraints suggests different alternatives for the implementation of public land policies. These policies are aimed at mitigating the problems associated with the organisational structure of the producers, with the processes of commercialisation and with environmental aspects, among others.
Abstract:
Background: Asparagine N-Glycosylation is one of the most important forms of protein post-translational modification in eukaryotes. This metabolic pathway can be subdivided into two parts: an upstream sub-pathway required for achieving proper folding for most of the proteins synthesized in the secretory pathway, and a downstream sub-pathway required to give variability to trans-membrane proteins, and involved in adaptation to the environment and innate immunity. Here we analyze the nucleotide variability of the genes of this pathway in human populations, identifying which genes show greater population differentiation and which genes show signatures of recent positive selection. We also compare how these signals are distributed between the upstream and the downstream parts of the pathway, with the aim of exploring how forces of population differentiation and positive selection vary among genes involved in the same metabolic pathway but subject to different functional constraints.
Results: Our results show that genes in the downstream part of the pathway are more likely to show a signature of population differentiation, while events of positive selection are equally distributed among the two parts of the pathway. Moreover, events of positive selection are frequent on genes that are known to be at bifurcation points and that are identified as being in key positions by a network-level analysis, such as MGAT3 and GCS1.
Conclusions: These findings indicate that the upstream part of the Asparagine N-Glycosylation pathway has lower diversity among populations, while the downstream part is freer to tolerate diversity among populations. Moreover, the distribution of signatures of population differentiation and positive selection can change between parts of a pathway, especially between parts that are exposed to different functional constraints. Our results support the hypothesis that genes involved in constitutive processes can be expected to show lower population differentiation, while genes involved in traits related to the environment should show higher variability. Taken together, this work broadens our knowledge of how events of population differentiation and of positive selection are distributed among different parts of a metabolic pathway.
Abstract:
Distance and blended collaborative learning settings are usually characterized by different social structures defined in terms of the number, size, and composition of groups; these structures are variable and can change within the same activity. This variability poses additional complexity for instructional designers when they try to develop successful experiences from existing designs. This complexity is largely associated with the fact that learning designs do not make explicit how social structures influenced the decisions of the original designer, and thus whether the social structures of the new setting could preclude the effectiveness of the reused design. This article proposes the use of new representations (social structure representations, SSRs) able to support unskilled designers in reusing existing learning designs, through the explicit characterization of the social structures and constraints embedded either by the original designers or by the reusing teachers, according to well-known principles of good collaborative learning practice. The article also describes an evaluation process that involved university professors, as well as the main findings derived from it. This process supported the initial assumptions about the effectiveness of SSRs, with significant evidence from both qualitative and quantitative data.
Abstract:
In the last few years, there has been a growing focus on faster computational methods to support clinicians in planning stenting procedures. This study investigates the possibility of introducing computational approximations in modelling stent deployment in aneurysmatic cerebral vessels, to achieve simulations compatible with the constraints of real clinical workflows. The release of a self-expandable stent in a simplified aneurysmatic vessel was modelled in four different initial positions. Six progressively simplified modelling approaches (based on the Finite Element method and Fast Virtual Stenting, FVS) were used. Comparing the accuracy of the results, the final configuration of the stent is more affected by neglecting the mechanical properties of the materials (FVS) than by adopting 1D instead of 3D stent models. Nevertheless, the differences shown are acceptable compared to those arising from different initial stent positions. Regarding computational costs, simulations involving 1D stent models are the only ones feasible in a clinical context.
Abstract:
We develop a model of an industry with many heterogeneous firms that face both financing constraints and irreversibility constraints. The financing constraint implies that firms cannot borrow unless the debt is secured by collateral; the irreversibility constraint implies that they can only sell their fixed capital by selling their business. We use this model to examine the cyclical behavior of aggregate fixed investment, variable capital investment, and output in the presence of persistent idiosyncratic and aggregate shocks. Our model yields three main results. First, the effect of the irreversibility constraint on fixed capital investment is reinforced by the financing constraint. Second, the effect of the financing constraint on variable capital investment is reinforced by the irreversibility constraint. Finally, the interaction between the two constraints is key to explaining why the input inventories and material deliveries of US manufacturing firms are so volatile and procyclical, and also why they are highly asymmetrical over the business cycle.
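In symbols, the two constraints can be written as follows for a continuing firm with fixed capital k_t and one-period debt b_t (an illustrative formalization, not the paper's exact notation):

```latex
% Illustrative formalization (not the paper's notation): collateralized
% borrowing and investment irreversibility for a continuing firm;
% theta is the pledgeable share of capital, delta the depreciation rate.
\begin{aligned}
  b_{t+1} &\le \theta\, k_{t+1}   && \text{(debt must be secured by collateral)} \\
  k_{t+1} &\ge (1-\delta)\, k_t   && \text{(fixed capital cannot be sold while the firm operates)}
\end{aligned}
```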
Abstract:
This paper studies the macroeconomic implications of firms' precautionary investment behavior in response to the anticipation of future financing constraints. Firms increase their demand for liquid and safe investments in order to alleviate future borrowing constraints and decrease the probability of having to forgo future profitable investment opportunities. This results in an increase in the share of short-term projects that produces a temporary increase in output, at the expense of lower long-run investment and future output. I show in a calibrated model that this behavior is the source of a novel and powerful channel of transmission for productivity shocks, one that produces short-run dampening and long-run propagation. Furthermore, it can account for the observed business cycle patterns of the aggregate and firm-level composition of investment.
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine, which can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time and, after its processing finishes, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
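The building block the abstract relies on, the preemptive version of the Longest Tail Heuristic, can be sketched as follows: at every release or completion instant, run the available job with the largest delivery time (tail), preempting the current job if a newly released one has a longer tail. The job data in the example are illustrative.

```python
# Sketch of the preemptive Longest Tail Heuristic for 1|r_j, q_j, pmtn|Lmax:
# always run the available job with the largest tail q, preempting at
# release dates when necessary. Job data are illustrative.
import heapq

def preemptive_longest_tail(jobs):
    """jobs: list of (release r, processing p, tail q).
    Returns max_j (completion_j + q_j), the delivery-completion objective."""
    jobs = sorted(jobs)                       # by release date
    heap, t, i, obj = [], 0, 0, 0             # heap entries: (-q, remaining p, q)
    while i < len(jobs) or heap:
        if not heap:
            t = max(t, jobs[i][0])            # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            r, p, q = jobs[i]
            heapq.heappush(heap, (-q, p, q))  # make newly released jobs available
            i += 1
        negq, p, q = heapq.heappop(heap)      # job with the longest tail
        run = p if i == len(jobs) else min(p, jobs[i][0] - t)
        t += run
        if run < p:
            heapq.heappush(heap, (negq, p - run, q))  # preempted at a release date
        else:
            obj = max(obj, t + q)             # finished; delivered q time units later
    return obj

print(preemptive_longest_tail([(0, 4, 5), (1, 2, 9), (3, 3, 1)]))  # -> 12
```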
Abstract:
We extend Aumann's theorem [Aumann 1987], which derives correlated equilibria from common priors and common knowledge of rationality, by explicitly allowing for non-rational behavior. We replace the assumption of common knowledge of rationality with a substantially weaker one, joint p-belief of rationality, under which agents believe the other agents are rational with probability p or more. We show that behavior in this case constitutes a kind of correlated equilibrium satisfying certain p-belief constraints, that it varies continuously in the parameter p, and that, for p sufficiently close to one, it is with high probability supported on strategies that survive the iterated elimination of strictly dominated strategies. Finally, we extend the analysis to characterizing the rational expectations of interim types, to games of incomplete information, and to the case of non-common priors.
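A rough rendering of the joint p-belief condition, in notation of our own choosing rather than the paper's:

```latex
% Illustrative rendering (not the paper's formal statement): each agent i,
% at each of her types t_i, assigns probability at least p to the event
% R_{-i} that all other agents are rational.
\mu_i\big(R_{-i} \,\big|\, t_i\big) \;\ge\; p
\quad \text{for all } i,\ t_i,
\qquad \text{where } R_{-i} = \bigcap_{j \neq i} R_j .
```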
Abstract:
The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the products sold use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and for their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program so as to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
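For reference, the dynamic program that both approximations target is the standard network RM recursion; the notation below is the conventional one, not necessarily the paper's:

```latex
% Conventional network RM dynamic program (notation ours): x is the vector
% of remaining capacities, A_j the resource-usage vector of product j,
% lambda_j its arrival probability per period, f_j its fare; V_T(x) = 0.
V_t(x) \;=\; V_{t+1}(x) \;+\; \sum_{j} \lambda_j
  \max_{\substack{u_j \in \{0,1\} \\ A_j u_j \le x}}
  u_j \Big( f_j - \big[ V_{t+1}(x) - V_{t+1}(x - A_j) \big] \Big).
```

Both families of heuristics replace V_{t+1} here with a tractable surrogate (piecewise-linear basis functions in one case, resource-level Lagrangian decompositions in the other); the paper's result is that the two surrogates coincide.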