884 results for Graph-Based Linear Programming Modelling
Abstract:
It is important to help researchers find valuable papers in a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from ranking bias, which hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less biased ranking than previous methods. MutualRank provides a unified model that involves both intra- and inter-network information for ranking papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from the computational linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms state-of-the-art competitors, including PageRank, HITS, CoRank, FutureRank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. The rankings of researchers and venues produced by MutualRank are also quite reasonable.
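As a toy illustration of mutual reinforcement between coupled networks (papers citing papers, authors writing papers), a fixed-point iteration in the spirit of, but not identical to, MutualRank can be sketched as follows; the graphs, the weight `alpha`, and the update rules are illustrative assumptions, not the paper's actual formulation:

```python
# Toy mutual-reinforcement ranking over two coupled networks:
# papers endorse papers via citations (intra-network) and authors
# pass credit to their papers and back (inter-network).

cites = {"B": ["A"], "C": ["A", "B"], "A": []}   # paper -> papers it cites
wrote = {"X": ["A", "B"], "Y": ["C"]}            # author -> papers written

papers = sorted(cites)
authors = sorted(wrote)
alpha = 0.7   # illustrative weight on intra-network evidence

p_score = {p: 1.0 / len(papers) for p in papers}
a_score = {a: 1.0 / len(authors) for a in authors}

def norm(d):
    """Normalize a score vector so it sums to 1."""
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

for _ in range(100):
    # intra-network: citation endorsement, split over each paper's references
    intra = {p: 0.0 for p in papers}
    for q, cited in cites.items():
        for p in cited:
            intra[p] += p_score[q] / len(cited)
    # inter-network: author credit flows to the papers they wrote
    inter = {p: 0.0 for p in papers}
    for a, ps in wrote.items():
        for p in ps:
            inter[p] += a_score[a] / len(ps)
    p_score = norm({p: alpha * intra[p] + (1 - alpha) * inter[p]
                    for p in papers})
    # authors are scored by the papers they wrote
    a_score = norm({a: sum(p_score[p] for p in wrote[a]) for a in authors})

ranking = sorted(papers, key=p_score.get, reverse=True)
print(ranking[0])   # the most-cited paper "A" ends up ranked first
```

The point of the sketch is the coupling: paper scores feed author scores and vice versa, so the fixed point reflects both networks at once.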
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested both on a local computer cluster and on virtual machines in the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run cross-evaluations of generic and graph-based analytic algorithms over graph-processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability as computing resources increase. While GraphLab largely outperforms Spark for graph algorithms, its running time is close to Spark's for non-graph algorithms. Additionally, the running time of Spark for graph algorithms on cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
Abstract:
Healthy brain functioning depends on efficient communication of information between brain regions, forming complex networks. By quantifying synchronisation between brain regions, a functionally connected brain network can be articulated. In neurodevelopmental disorders, where diagnosis is based on measures of behaviour and tasks, a measure of the underlying biological mechanisms holds promise as a potential clinical tool. Graph theory provides a tool for investigating the neural correlates of neuropsychiatric disorders, where there is disruption of efficient communication within and between brain networks. This research aimed to use recent conceptualisation of graph theory, along with measures of behaviour and cognitive functioning, to increase understanding of the neurobiological risk factors of atypical development. Using magnetoencephalography to investigate frequency-specific temporal dynamics at rest, the research aimed to identify potential biological markers derived from sensor-level whole-brain functional connectivity. Whilst graph theory has proved valuable for insight into network efficiency, its application is hampered by two limitations. First, its measures have hardly been validated in MEG studies, and second, graph measures have been shown to depend on methodological assumptions that restrict direct network comparisons. The first experimental study (Chapter 3) addressed the first limitation by examining the reproducibility of graph-based functional connectivity and network parameters in healthy adult volunteers. Subsequent chapters addressed the second limitation through adapted minimum spanning tree (a network analysis approach that allows for unbiased group comparisons) along with graph network tools that had been shown in Chapter 3 to be highly reproducible. Network topologies were modelled in healthy development (Chapter 4), and atypical neurodevelopment (Chapters 5 and 6). 
The results provided support to the proposition that measures of network organisation, derived from sensor-space MEG data, offer insights helping to unravel the biological basis of typical brain maturation and neurodevelopmental conditions, with the possibility of future clinical utility.
Abstract:
Energy crop production is considered environmentally benign and socially acceptable, offering ecological benefits over fossil fuels through its contribution to the reduction of greenhouse gases and acidifying emissions. Energy crops enjoy persistent policy support from the EU, despite their limited or even marginally negative impact on the greenhouse effect. The present study endeavors to optimize the agricultural income generated by energy crops in a remote and disadvantaged region with the assistance of linear programming. The optimization concerns the income created from soybean, sunflower (a proxy for an energy crop), and corn. Different policy scenarios imposed restrictions on the value of the subsidies (a proxy for EU policy tools), the value of inputs (costs of capital and labor), and different irrigation conditions. The results indicate that the area and the inputs per energy crop remain unchanged regardless of the policy scenario enacted. Furthermore, corn cultivation contributes the most to income maximization, whereas the implemented CAP policy plays an incremental role in the uptake of an energy crop. A key implication is that alternative forms of motivation, beyond financial ones, should be provided to farmers in order for the extensive use of energy crops to be achieved.
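A two-crop toy version of such an income-maximization LP can be sketched as follows; the margins and resource limits are illustrative assumptions, and the optimum is found by enumerating the vertices of the feasible polygon (where an LP optimum must lie):

```python
# Minimal two-crop linear program in the spirit of the study's
# income-maximization model, solved by vertex enumeration.
# All coefficients (gross margins, land, labor) are illustrative.

from itertools import combinations

# constraints of the form a1*corn + a2*sunflower <= b
cons = [
    (1.0, 1.0, 100.0),   # land: at most 100 ha
    (3.0, 2.0, 240.0),   # labor: at most 240 worker-days
    (-1.0, 0.0, 0.0),    # corn >= 0
    (0.0, -1.0, 0.0),    # sunflower >= 0
]
margin = (500.0, 300.0)  # gross margin per ha (corn, sunflower)

def intersect(c1, c2):
    """Intersection point of two constraint boundary lines, or None."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

best = None
for c1, c2 in combinations(cons, 2):
    pt = intersect(c1, c2)
    # keep only feasible vertices, then evaluate the income objective
    if pt and all(a * pt[0] + b * pt[1] <= rhs + 1e-9 for a, b, rhs in cons):
        income = margin[0] * pt[0] + margin[1] * pt[1]
        if best is None or income > best[0]:
            best = (income, pt)

print(best)   # with these margins, a corn-only plan maximizes income
```

With these numbers the labor constraint binds and the plan allocates all feasible area to corn, mirroring the study's finding that corn contributes the most to income maximization.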
Abstract:
Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50. Generalizing from his experience in solving practical problems, Koopmans set about devising a linear activity-analysis model. Surprisingly, he found that the economics of his day possessed no uniform, sufficiently exact theory of production or system of concepts for it.
He set out in a pioneering study to provide a theoretical framework for the linear activity-analysis model by first laying down the axiomatic foundations of production theory, which rest on the concept of technological sets. He is credited with the exact definition of the concepts of production efficiency and efficiency prices, and with proving their mutually dependent relation within the linear activity-analysis model. Koopmans treated the present-day, purely technical definition of efficiency only as a special case; his aim was to introduce and analyse the concept of economic efficiency. The study uses the duality theorems of linear programming to reconstruct his results on the latter. It is shown, first, that his proofs are equivalent to proving the duality theorems of linear programming and, second, that economic efficiency prices are really shadow prices in today's sense. Furthermore, his model for interpreting economic efficiency can be seen as a direct predecessor of the Arrow–Debreu–McKenzie models of general equilibrium theory, as it contained almost every essential element and concept of them: equilibrium prices are nothing other than Koopmans' efficiency prices. Finally, Koopmans' model is reinterpreted as a possible tool for the microeconomic description of enterprise technology.
Abstract:
In this paper we consider a primal-dual infinite linear programming problem-pair, i.e. LPs on infinite dimensional spaces with infinitely many constraints. We present two duality theorems for the problem-pair: a weak and a strong duality theorem. We do not assume any topology on the vector spaces, therefore our results are algebraic duality theorems. As an application, we consider transferable utility cooperative games with arbitrarily many players.
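For orientation, the finite-dimensional specialization of weak duality, which the paper generalizes algebraically to infinite dimensions, reads:

```latex
% Weak duality: any primal-feasible x and dual-feasible y satisfy
c^{\top}x \;\le\; y^{\top}Ax \;\le\; b^{\top}y,
\quad\text{hence}\quad
\max\{c^{\top}x : Ax \le b,\ x \ge 0\}
\;\le\;
\min\{b^{\top}y : A^{\top}y \ge c,\ y \ge 0\}.
```

Strong duality asserts equality of the two optimal values; the algebraic setting of the paper means this is established without topological assumptions on the underlying vector spaces.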
Abstract:
It has been widely agreed that the distorted price system is one of the causes of inefficient economic decisions in centrally planned economies. The paper investigates the possible effect of a price reform on the allocation of resources in a situation where micro-efficiency remains unchanged. Foreign trade and endogenously induced terms-of-trade changes are focal points in the multisectoral applied general equilibrium analysis. Special attention is paid to some methodological problems connected with the representation of foreign trade in such models. The adoption of Armington's assumption leads to an export demand function, and this in turn raises the question of an optimal export structure different from the equilibrium one, an aspect so far neglected in the related literature. The results show that the applied model allows for more flexible handling of the overspecialization problem than linear programming models. It also becomes evident that the use of export demand functions brings unwanted terms-of-trade changes into the model, to be avoided by a suitable reformulation. The analysis also suggests that a price reform alone does not significantly increase global economic efficiency; the effect of an economic reform on micro-efficiency thus appears to be a more crucial factor. The author concludes by raising some rather general questions related to the foreign trade practice of small open economies.
Abstract:
Personalized recommender systems aim to assist users in retrieving and accessing interesting items by automatically acquiring user preferences from the historical data and matching items with the preferences. In the last decade, recommendation services have gained great attention due to the problem of information overload. However, despite recent advances of personalization techniques, several critical issues in modern recommender systems have not been well studied. These issues include: (1) understanding the accessing patterns of users (i.e., how to effectively model users' accessing behaviors); (2) understanding the relations between users and other objects (i.e., how to comprehensively assess the complex correlations between users and entities in recommender systems); and (3) understanding the interest change of users (i.e., how to adaptively capture users' preference drift over time). To meet the needs of users in modern recommender systems, it is imperative to provide solutions to address the aforementioned issues and apply the solutions to real-world applications.

The major goal of this dissertation is to provide integrated recommendation approaches to tackle the challenges of the current generation of recommender systems. In particular, three user-oriented aspects of recommendation techniques were studied, including understanding accessing patterns, understanding complex relations and understanding temporal dynamics. To this end, we made three research contributions. First, we presented various personalized user profiling algorithms to capture click behaviors of users from both coarse- and fine-grained granularities; second, we proposed graph-based recommendation models to describe the complex correlations in a recommender system; third, we studied temporal recommendation approaches in order to capture the preference changes of users, by considering both long-term and short-term user profiles.
In addition, a versatile recommendation framework was proposed, in which the proposed recommendation techniques were seamlessly integrated. Different evaluation criteria were implemented in this framework for evaluating recommendation techniques in real-world recommendation applications.

In summary, the frequent changes of user interests and item repository lead to a series of user-centric challenges that are not well addressed in the current generation of recommender systems. My work proposed reasonable solutions to these challenges and provided insights on how to address these challenges using a simple yet effective recommendation framework.
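One common graph-based recommendation idea of the kind described above is a random walk with restart (personalized PageRank) on the user-item bipartite graph; the sketch below uses illustrative data and parameters, not the dissertation's actual models:

```python
# Random walk with restart on a user-item bipartite graph: items that
# are close to a user through shared neighbors accumulate more score.

interactions = {          # user -> items the user accessed (illustrative)
    "u1": ["a", "b"],
    "u2": ["b", "c"],
    "u3": ["c"],
}
items = {it for seq in interactions.values() for it in seq}

# build an undirected bipartite adjacency list
adj = {}
for u, seq in interactions.items():
    for it in seq:
        adj.setdefault(u, []).append(it)
        adj.setdefault(it, []).append(u)

def recommend(user, restart=0.15, iters=100):
    """Rank unseen items by random-walk-with-restart proximity to `user`."""
    score = {n: (1.0 if n == user else 0.0) for n in adj}
    for _ in range(iters):
        nxt = {n: 0.0 for n in adj}
        for n, s in score.items():
            for m in adj[n]:                 # spread (1 - restart) of the mass
                nxt[m] += (1 - restart) * s / len(adj[n])
        nxt[user] += restart                 # restart at the target user
        score = nxt
    seen = set(interactions[user])
    return sorted((it for it in items if it not in seen),
                  key=score.get, reverse=True)

print(recommend("u1"))   # u1's only unseen item is "c"
```

The restart probability plays the role of a personalization knob: higher values keep the walk close to the user's own history, lower values let more of the graph's global structure influence the ranking.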
Abstract:
This work presents a new model for the Heterogeneous p-Median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting-task procedure, a popular approach for understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-Free Heterogeneous p-Median Problem (PFHPM), a single-objective version of the original HPM. It eliminates the HPM's main parameter, the penalty factor, which weights the terms of the objective function; tuning this parameter controls how the model recovers the hidden category structures in the data and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programs, from which lower bounds for the PFHPM were obtained. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions, solving artificially generated instances from a Monte Carlo simulation as well as real-data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm can recover the original category structures related to heterogeneous individuals' perceptions more accurately than the original HPM. Finally, an illustrative application of the PFHPM is presented, along with some insights into new possibilities, such as extending the model to fuzzy environments.
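To make the p-median objective concrete, a minimal brute-force solver over a toy distance matrix is sketched below (illustrative data; real instances require MILP formulations or metaheuristics such as the proposed VNS):

```python
# p-median: choose p medians so that the total distance from every
# object to its nearest chosen median is minimized.

from itertools import combinations

dist = [  # symmetric distance matrix between 5 objects (illustrative)
    [0, 2, 9, 8, 7],
    [2, 0, 8, 7, 9],
    [9, 8, 0, 2, 3],
    [8, 7, 2, 0, 4],
    [7, 9, 3, 4, 0],
]
p = 2

def cost(medians):
    """Total distance with each object assigned to its nearest median."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

# exhaustive search over all p-subsets is fine at this tiny scale
best = min(combinations(range(len(dist)), p), key=cost)
print(best, cost(best))   # the two natural clusters {0,1} and {2,3,4} emerge
```

Exhaustive search is exponential in p, which is exactly why the work resorts to MILP lower bounds plus a VNS heuristic for realistic instance sizes.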
Abstract:
Thiosalt species are unstable, partially oxidized sulfur oxyanions formed in sulfur-rich environments but also during the flotation and milling of sulfidic minerals especially those containing pyrite (FeS₂) and pyrrhotite (Fe₍₁₋ₓ₎S, x = 0 to 0.2). Detecting and quantifying the major thiosalt species such as sulfate (SO₄²⁻), thiosulfate (S₂O₃²⁻), trithionate (S₃O₆²⁻), tetrathionate (S₄O₆²⁻) and higher polythionates (SₓO₆²⁻, where 3 ≤ x ≤ 10) in the milling process and in the treated tailings is important to understand how thiosalts are generated and provides insight into potential treatment. As these species are unstable, a fast and reliable analytical technique is required for their analysis. Three capillary zone electrophoresis (CZE) methods using indirect UV-vis detection were developed for the simultaneous separation and determination of five thiosalt anions: SO₄²⁻, S₂O₃²⁻, S₃O₆²⁻, S₄O₆²⁻ and S₅O₆²⁻. Both univariate and multivariate experimental design approaches were used to optimize the most critical factors (background electrolyte (BGE) and instrumental conditions) to achieve fast separation and quantitative analysis of the thiosalt species. The mathematically predicted responses for the multivariate experiments were in good agreement with the experimental results. Limits of detection (LODs) (S/N = 3) for the methods were between 0.09 and 0.34 μg/mL without a sample stacking technique and nearly four-fold increase in LODs with the application of field-amplified sample stacking. 
As direct analysis of thiosalts by mass spectrometry (MS) is limited by their low m/z values and by detection in negative-mode electrospray ionization (ESI), which is typically less sensitive than positive ESI, imidazolium-based (IP-L-Imid and IP-T-Imid) and phosphonium-based (IP-T-Phos) tricationic ion-pairing reagents were used to form stable, high-mass, non-covalent +1 ion pairs with these species for ESI-MS analysis, and the association constants (Kassoc) were determined for these ion pairs. Kassoc values were between 6.85 × 10² M⁻¹ and 3.56 × 10⁵ M⁻¹ with the linear IP-L-Imid; 1.89 × 10³ M⁻¹ and 1.05 × 10⁵ M⁻¹ with the trigonal IP-T-Imid; and 7.51 × 10² M⁻¹ and 4.91 × 10⁴ M⁻¹ with the trigonal IP-T-Phos. The highest formation constant was obtained for the ion pair of S₃O₆²⁻ with the linear imidazolium-based reagent (IP-L-Imid), whereas the lowest was for the IP-L-Imid:SO₄²⁻ ion pair.
Abstract:
This paper is concerned with strategic optimization of a typical industrial chemical supply chain, which involves a material purchase and transportation network, several manufacturing plants with on-site material and product inventories, a product transportation network, and several regional markets. In order to address large uncertainties in customer demands at the different regional markets, a novel robust scenario formulation, recently developed by the authors, is tailored and applied for the strategic optimization. Case study results show that the robust scenario formulation works well for this real industrial supply chain system, and it outperforms the deterministic formulation and the classical scenario-based stochastic programming formulation by generating better expected economic performance and solutions that are guaranteed to be feasible for all uncertainty realizations. The robust scenario problem exhibits a decomposable structure that Benders decomposition can exploit for efficient solution, so the application of Benders decomposition to the strategic optimization is also discussed. The case study results show that Benders decomposition can reduce the solution time by almost an order of magnitude when the number of scenarios in the problem is large.
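Generically, the decomposable structure that Benders decomposition exploits in such scenario problems can be sketched in a standard two-stage form (illustrative notation, not the authors' exact model):

```latex
% First-stage design x, per-scenario recourse y_s with probability p_s:
\min_{x \ge 0}\; c^{\top}x + \sum_{s} p_s\, Q_s(x)
\quad \text{s.t.} \quad Ax \ge b,
\qquad
Q_s(x) \;=\; \min_{y_s \ge 0}\,\{\, d^{\top}y_s \;:\; W y_s \ge h_s - T_s x \,\}.
% Benders decomposition keeps a master problem in x and adds cuts
% \theta_s \ge (h_s - T_s x)^{\top}\pi_s built from each subproblem's
% dual solution \pi_s, so the scenario subproblems separate and can be
% solved independently, one per scenario.
```

Because only the right-hand sides $h_s - T_s x$ couple the stages, the subproblems are independent once $x$ is fixed, which is why solution time grows gently as scenarios are added.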
Abstract:
There are two types of work typically performed in services, which differ in the degree of control management has over when the work must be done. Serving customers, an activity that can occur only when customers are in the system, is by its nature uncontrollable work. In contrast, the execution of controllable work does not require the presence of customers, and is work over which management has some degree of temporal control. This paper presents two integer programming models for optimally scheduling controllable work simultaneously with shifts. One model explicitly defines variables for the times at which controllable work may be started, while the other uses implicit modeling to reduce the number of variables. In an initial experiment of 864 test problems, the latter model yielded optimal solutions in approximately 81 percent of the time required by the former model. To evaluate the impact on customer service of having front-line employees perform controllable work, a second experiment was conducted simulating 5,832 service delivery systems. The results show that controllable work offers a useful means of improving labor utilization. Perhaps more important, it was found that having front-line employees perform controllable work did not degrade the desired level of customer service.
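For orientation, the shift-only core that both models extend is the classical set-covering staffing formulation (the paper's contribution is adding variables for when controllable work is started; the notation below is generic, not the authors'):

```latex
% x_j = number of employees assigned to shift j; a_{tj} = 1 if shift j
% covers period t; r_t = staffing required to serve customers in period t
\min \sum_{j} c_j x_j
\quad \text{s.t.} \quad
\sum_{j} a_{tj}\, x_j \;\ge\; r_t \quad \forall t,
\qquad x_j \in \mathbb{Z}_{+}.
```

The explicit model adds a start-time variable for each block of controllable work, while the implicit model aggregates those choices, which is why it has fewer variables and solved faster in the reported experiment.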
Abstract:
Maritime transport is one of the most widely used modes, especially for moving large volumes of products between continents, as it is low-cost, safe, and less polluting than other means of transport. Today it accounts for about 80% of global trade (by volume of cargo carried). The maritime transport sector has a long tradition of manual planning carried out by expert planners. The main objective of this work was to implement a mixed-integer linear programming (MILP) model for the optimization of maritime routes in the fruit-and-vegetable market of the Mediterranean basin (a ship-scheduling problem). The model provided in this work is a useful decision-support tool for a shipper planning the maritime routes of its fleet. It determines the set of optimal routes to be performed by a set of vessels in order to maximize the shipper's overall profit over the time horizon considered. It also yields, for each ship considered, the optimal allocation of the goods (optimal load).
Abstract:
The aim of this paper is to provide an efficient control design technique for discrete-time positive periodic systems. In particular, stability, positivity, and periodic invariance of such systems are studied. Moreover, the concept of periodic invariance with respect to a collection of boxes is introduced and investigated in connection with stability. It is shown how this concept can be used to derive a stabilizing state-feedback control that maintains the positivity of the closed-loop system and respects constraints on states and control signals. In addition, all the proposed results can be solved efficiently in terms of linear programming.
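One standard fact that illustrates why such analysis reduces to linear programming (stated for orientation; it is not necessarily the paper's exact condition): a discrete-time positive system $x_{k+1} = A x_k$ with $A$ elementwise nonnegative is asymptotically stable if and only if the following LP feasibility problem has a solution.

```latex
% Linear copositive Lyapunov function V(x) = v^{\top}x:
\exists\, v \in \mathbb{R}^{n}:\quad v \succ 0, \qquad A^{\top}v \prec v,
% so V(Ax) = (A^{\top}v)^{\top}x < v^{\top}x for every x \ge 0,\ x \ne 0.
% Both conditions are linear in v, hence checkable with any LP solver.
```

For positive systems a linear Lyapunov function suffices where general systems need a quadratic one, which is what turns the synthesis conditions into linear programs rather than semidefinite programs.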
Abstract:
This thesis is a contribution to the modelling, planning, and optimization of transport for supplying primary-processing industries with forest wood. In this field, climatic hazards (windthrow from storms), sanitary hazards (bacterial and fungal attacks on wood), and commercial pressures (variability and growing demands of markets) push the sector's various actors (forestry contractors and operators, hauliers) to rethink the organization of the supply logistics chain, in order to improve service quality (matching supply to demand) and reduce costs. The main objective of this thesis was to propose a management model that improves the performance of forest transport while respecting the sector's constraints and practices. The results establish a hierarchical planning approach for transport activities with two decision levels, tactical and operational. At the tactical level, a multi-period optimization fills orders while minimizing overall transport activity, subject to the aggregate capacity of the available transport resources. This level makes it possible to implement load-smoothing policies and to organize subcontracting or partnerships between transport actors. At the operational level, the tactical plans allocated to each carrier are disaggregated to allow optimization of fleet routing, subject to the physical capacities of those fleets. The optimization models at each level are formalized as mixed-integer linear programs with binary variables. The applicability of the models was tested on an industrial data set from the Aquitaine region and showed significant improvements in the use of transport capacity compared with current practice.

The decision models were designed to adapt to any organizational context, with or without partnerships: the production of the tactical plan is generic and makes no assumptions about the organization, which is taken into account later, at the level of the operational optimization of each actor's transport plan.
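A minimal sketch of a tactical-level model of this kind (an illustrative multi-period transport flow LP, not the thesis's exact formulation), with flows $f_{ijt}$ from forest sites $i$ to mills $j$ in period $t$:

```latex
% c_{ij} = transport cost, d_{jt} = mill demand, s_{it} = site supply,
% K_t = aggregate transport capacity available in period t
\min \sum_{t}\sum_{i,j} c_{ij}\, f_{ijt}
\quad \text{s.t.} \quad
\sum_{i} f_{ijt} \ge d_{jt} \;\; \forall j,t, \qquad
\sum_{j} f_{ijt} \le s_{it} \;\; \forall i,t, \qquad
\sum_{i,j} f_{ijt} \le K_t \;\; \forall t, \qquad f_{ijt} \ge 0.
```

The operational level then disaggregates each carrier's share of these flows into vehicle routes, where binary variables and physical fleet capacities enter.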