940 results for Multi-Choice mixed integer goal programming


Relevance: 30.00%

Abstract:

Low temperature is one of the main environmental constraints on rice (Oryza sativa L.) grain yield. Multi-environment studies play a critical role in the sustainability of rice production across diverse environments, yet few such studies have examined rice in temperate climates. The aim was to study the performance of rice plants in cold environments. Four experimental lines and six cultivars were evaluated at three locations over three seasons. Grain yield data were analyzed with ANOVA, mixed models based on best linear unbiased predictors (BLUPs), and the genotype plus genotype × environment interaction (GGE) biplot. A high genotype contribution (> 25%) to grain yield was observed, and the interaction between genotype and location was of minor importance. Results also showed that ‘Quila 241319’ was the best experimental line, with the highest grain yield (11.3 t ha⁻¹) and stable grain yield across environments; the commercial cultivars were classified as medium-yield genotypes.
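As a rough illustration of the BLUP step described above, the sketch below fits a mixed model with random genotype intercepts and extracts their predicted deviations; it assumes the statsmodels library and a hypothetical DataFrame `trials` with columns `yield_t_ha`, `genotype`, `location`, and `season`, and is not the authors' exact model specification.

```python
# Minimal BLUP sketch, assuming hypothetical column names; illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

def genotype_blups(trials: pd.DataFrame) -> pd.Series:
    # Fixed effects for environments (location, season), random intercepts
    # for genotype; the predicted genotype intercepts are the BLUPs.
    model = smf.mixedlm(
        "yield_t_ha ~ C(location) + C(season)",
        data=trials,
        groups=trials["genotype"],
    )
    fit = model.fit()
    # One random intercept per genotype: its BLUP deviation from the mean.
    return pd.Series({g: re["Group"] for g, re in fit.random_effects.items()})
```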

Relevance: 30.00%

Abstract:

Forested areas within cities host a large number of species responsible for many ecosystem services in urban areas. Biodiversity in these areas is influenced by human disturbances such as atmospheric pollution and the urban heat island effect. To ameliorate the effects of these factors, an increase in urban green areas is often considered sufficient. However, this approach assumes that all types of green cover are equally important for species. Our aim was to show that not all forested green areas are of equal importance for species, and that a multi-taxa, functional-diversity approach makes it possible to value green infrastructure in urban environments. After evaluating the diversity of lichens, butterflies and other arthropods, birds, and mammals in 31 Mediterranean urban forests in south-west Europe (Almada, Portugal), we identified bird and lichen functional groups responsive to urbanization. A community shift (tolerant species replacing sensitive ones) along the urbanization gradient was found, which must be considered when using these groups as indicators of the effect of urbanization. Bird and lichen functional groups were then analyzed together with the characteristics of the forests and their surroundings. Our results showed that, contrary to previous assumptions, vegetation density and, more importantly, the amount of urban area around the forest (the matrix) matter more for biodiversity than forest quantity alone. This indicates that not all types of forested green areas are equally important for biodiversity. An index of forest functional diversity was then calculated for all sampled forests in the area. This could help decision-makers improve the management of urban green infrastructure with the goal of increasing functionality and, ultimately, ecosystem services in urban areas.

Relevance: 30.00%

Abstract:

Many problems in transportation, telecommunications, and logistics can be modelled as network design problems. The classical problem consists of routing a flow (data, people, goods, etc.) over a network, subject to a number of constraints, so as to satisfy demand while minimizing costs. In this thesis, we study the single-commodity fixed-charge capacitated network design problem, which we transform into an equivalent multicommodity problem in order to improve the lower bound obtained from the continuous relaxation of the model. The method we present to solve this problem is an exact branch-and-price-and-cut method with a stopping condition, in which we exploit column generation, cut generation, and the branch-and-bound algorithm, which are among the most widely used techniques in integer linear programming. We test our method on two groups of instances of different sizes (large and very large) and compare it with the results given by CPLEX, one of the best solvers for mathematical optimization problems, as well as with a branch-and-cut method. Our method proved promising and can give good results, in particular for very large instances.
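To give a feel for why the multicommodity reformulation improves the lower bound from the continuous relaxation, the sketch below compares the LP bounds of the aggregated (weak) and disaggregated (strong) linking constraints on an invented single-arc instance; it assumes the PuLP library and is in no way the thesis' branch-and-price-and-cut code.

```python
# Toy bound comparison, assuming PuLP; instance data are invented.
import pulp

CAP, FIXED, VAR = 10.0, 10.0, 1.0   # arc capacity, fixed cost, unit flow cost
DEMANDS = {"k1": 2.0, "k2": 2.0}    # two s->t demands sharing the arc

def lp_bound(disaggregate: bool) -> float:
    prob = pulp.LpProblem("fixed_charge_arc", pulp.LpMinimize)
    y = pulp.LpVariable("y", 0, 1)                      # relaxed design variable
    x = {k: pulp.LpVariable(f"x_{k}", 0) for k in DEMANDS}
    prob += VAR * pulp.lpSum(x.values()) + FIXED * y    # objective
    for k, d in DEMANDS.items():
        prob += x[k] == d                               # satisfy each demand
        if disaggregate:
            prob += x[k] <= d * y                       # strong per-demand link
    prob += pulp.lpSum(x.values()) <= CAP * y           # weak aggregated link
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(prob.objective)

print("weak (single-commodity) LP bound :", lp_bound(False))  # 8.0
print("strong (multicommodity) LP bound :", lp_bound(True))   # 14.0
```

With the aggregated constraint, the LP can open the arc fractionally (y = 0.4); disaggregating forces y = 1, lifting the bound to the integer optimum on this toy instance.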

Relevance: 30.00%

Abstract:

Background: The evidence base on end-of-life care in acute stroke is limited, particularly with regard to recognising dying and related decision-making. There is also limited evidence to support the use of end-of-life care pathways (standardised care plans) for patients who are dying after stroke. Aim: This study aimed to explore the clinical decision-making involved in placing patients on an end-of-life care pathway, evaluate predictors of care pathway use, and investigate the role of families in decision-making. The study also aimed to examine experiences of end-of-life care pathway use for stroke patients, their relatives and the multi-disciplinary health care team. Methods: A mixed methods design was adopted. Data were collected in four Scottish acute stroke units. Case-notes were identified prospectively from 100 consecutive stroke deaths and reviewed. Multivariate analysis was performed on case-note data. Semi-structured interviews were conducted with 17 relatives of stroke decedents and 23 healthcare professionals, using a modified grounded theory approach to collect and analyse data. The VOICES survey tool was also administered to the bereaved relatives and data were analysed using descriptive statistics and thematic analysis of free-text responses. Results: Relatives often played an important role in influencing aspects of end-of-life care, including decisions to use an end-of-life care pathway. Some relatives experienced enduring distress with their perceived responsibility for care decisions. Relatives felt unprepared for and were distressed by prolonged dying processes, which were often associated with severe dysphagia. Pro-active information-giving by staff was reported as supportive by relatives. Healthcare professionals generally avoided discussing place of care with families. Decisions to use an end-of-life care pathway were not predicted by patients’ demographic characteristics; decisions were generally made in consultation with families and the extended health care team, and were made within regular working hours. Conclusion: Distressing stroke-related issues were more prominent in participants’ accounts than concerns with the end-of-life care pathway used. Relatives sometimes perceived themselves as responsible for important clinical decisions. Witnessing prolonged dying processes was difficult for healthcare professionals and families, particularly in relation to the management of persistent major swallowing difficulties.

Relevance: 30.00%

Abstract:

The SimProgramming teaching approach aims to help students overcome their learning difficulties in the transition from entry-level to advanced computer programming, and to prepare them for real-world work environments through the adoption of learning strategies. It immerses learners in a business-like learning environment, where students carry out a problem-based learning activity with a specific set of tasks, one of which is filling in weekly individual forms. We conducted a thematic analysis of 401 weekly forms to identify the students’ strategies for self-regulation of learning during the assignment. Students adopt different strategies in each phase of the approach: the early phases are devoted to organization and planning, while later phases focus on applying theoretical knowledge and hands-on programming. Based on the results, we recommend developing educational practices that help students reflect on their performance during tasks.

Relevance: 30.00%

Abstract:

Macroeconomic policy makers are typically concerned with several indicators of economic performance. We therefore propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model, chosen because such models have the dual advantage of being consistent with standard economic theory while allowing one to measure the effects of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that 1995 Spanish policy displayed some degree of inefficiency with respect to these two objectives. We then offer two sets of policy recommendations that could, ostensibly, have helped Spain at the time. The first deals with efficiency independent of the importance policy makers attach to growth and inflation (we label this set general policy recommendations). The second depends on which objective policy makers consider more important, increasing growth or controlling inflation (we label this set objective-specific recommendations).
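As a stylized sketch of how an efficiency frontier between two objectives can be traced with multiobjective programming, the snippet below applies weighted-sum scalarization to an invented one-instrument "economy"; the paper itself works with a full CGE model of Spain, not this toy.

```python
# Weighted-sum scalarization over a toy policy space; all functions invented.
import numpy as np

tax = np.linspace(0.0, 1.0, 201)        # hypothetical policy instrument
growth = 4.0 * tax * (1.0 - tax)        # toy growth response (maximize)
inflation = 1.0 + 3.0 * tax ** 2        # toy inflation response (minimize)

frontier = []
for w in np.linspace(0.0, 1.0, 21):     # weight on growth vs. inflation
    score = w * growth - (1.0 - w) * inflation
    i = int(np.argmax(score))           # efficient policy for this weight
    frontier.append((tax[i], growth[i], inflation[i]))

for t, g, p in sorted(set(frontier)):
    print(f"tax={t:.2f}  growth={g:.2f}  inflation={p:.2f}")
```

Each weight vector picks out one efficient policy; sweeping the weights traces the growth-inflation frontier, and a historical policy lying inside that frontier is, in this sense, inefficient.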

Relevance: 30.00%

Abstract:

This research aims to construct and validate a Multifactorial Work Motivation Scale for the Portuguese population. The absence of instruments for measuring several dimensions of motivation led to the development of this motivation scale. The scale comprises 28 items derived from a theoretical review covering several theories of motivation. The validation studies involved 444 employees of new-technology companies, of both sexes, aged between 19 and 34 years. The scale shows good internal consistency (values between 0.72 and 0.84), and a factor analysis revealed a four-factor structure explaining 49% of the variance: motivation related to work organization, motivation related to achievement and power, performance motivation, and motivation associated with involvement. It is hoped that further studies will be developed based on this scale.
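The internal-consistency values reported above (0.72 to 0.84) are of the kind computed with Cronbach's alpha; a minimal sketch follows, assuming `items` is a respondents-by-items array of Likert responses for one subscale (the demo data are random, not the study's data).

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/var(sum score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(444, 7)).astype(float)  # fake 7-item block
print(round(cronbach_alpha(demo), 2))   # near 0 for random answers
```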

Relevance: 30.00%

Abstract:

In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted significant interest in the computer science literature: given a social network, find a target set of customers to seed with a product; these initial adopters then trigger a cascade in which other people adopt the product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study the Weighted Target Set Selection (WTSS) problem, in which the diffusion can take place over as many time periods as needed and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period yields the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs, trees and cycles; based on the insights obtained there, we develop efficient methods for general graphs. On trees, we first propose a polynomial-time algorithm. More importantly, we present a tight and compact extended formulation, and we project it onto the space of the natural variables, which gives the polytope on trees. Next, building on the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs: for the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
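A minimal sketch of the threshold diffusion underlying WTSS/LCIP-style problems follows (not the dissertation's formulations): a seeded node is active, and an inactive node activates once its count of active neighbours reaches its threshold, with diffusion running until no further change. Graph, thresholds, and seeds below are invented.

```python
# Threshold cascade on an undirected graph; BFS-style propagation.
from collections import deque

def cascade(neighbors: dict, threshold: dict, seeds: set) -> set:
    active = set(seeds)
    hits = {v: 0 for v in neighbors}        # active-neighbour counts
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v in active:
                continue
            hits[v] += 1
            if hits[v] >= threshold[v]:     # influence threshold met
                active.add(v)
                queue.append(v)
    return active

# Tiny example: a path a-b-c where every node needs one active neighbour.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(cascade(g, {"a": 1, "b": 1, "c": 1}, {"a"}))  # {'a', 'b', 'c'}
```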

Relevance: 30.00%

Abstract:

Electoral researchers are so accustomed to analyzing the choice of the single most preferred party as the left-hand-side variable of their models of electoral behavior that they often ignore revealed preference data. Drawing on random utility theory, their models predict electoral behavior at the extensive margin of choice. Since the seminal work of Luce and others on individual choice behavior, however, many social science disciplines (consumer research, labor market research, travel demand, etc.) have extended their inventory of observed preference data with, for instance, multiple paired comparisons, complete or incomplete rankings, and multiple ratings. Eliciting (voter) preferences using these procedures and applying appropriate choice models is known to considerably increase the efficiency of estimates of causal factors in models of (electoral) behavior. In this paper, we demonstrate the efficiency gain from adding preference information beyond first preferences, up to full ranking data. We do so for multi-party systems of different sizes, using simulation studies as well as empirical data from the 1972 German election study. A comparison of the practical considerations for using ranking versus single-preference data leads to suggestions for the choice of measurement instruments in different multi-candidate and multi-party settings.
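One standard way such ranking data enter a Luce-style model is the "exploded" (rank-ordered) logit: a full ranking is treated as a sequence of first choices from shrinking choice sets. A minimal sketch with invented party utilities follows; it is an illustration of that likelihood, not the paper's estimator.

```python
# Plackett-Luce / exploded-logit log-likelihood of one observed ranking.
import numpy as np

def ranking_loglik(utilities: np.ndarray, ranking: list) -> float:
    """Log-likelihood of `ranking` (best first) given fixed utilities."""
    remaining = list(range(len(utilities)))
    loglik = 0.0
    for chosen in ranking:
        u = utilities[remaining]                      # current choice set
        loglik += utilities[chosen] - np.log(np.exp(u).sum())
        remaining.remove(chosen)                      # shrink the set
    return loglik

v = np.array([1.0, 0.4, -0.2])        # invented utilities for parties 0, 1, 2
print(ranking_loglik(v, [0, 1, 2]))   # full ranking
print(ranking_loglik(v, [0]))         # first preference only
```

The full ranking contributes more likelihood terms than the first preference alone, which is where the efficiency gain discussed above comes from.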

Relevance: 30.00%

Abstract:

Doctorate (PhD) in Economics

Relevance: 30.00%

Abstract:

Free-riding behaviors exist in tourism, and they should be analyzed from a comprehensive perspective: while the literature has mainly focused on free riders operating within a destination, destinations themselves may also free ride when they sit under the umbrella of a collective brand. The objective of this article is to detect potential free-riding destinations by estimating the contribution of the different individual destinations to their collective brands, from the point of view of consumer perception. We argue that these individual contributions can be better understood by reflecting the stages tourists follow to reach their final decision. A hierarchical choice process is proposed in which the following choices are nested (not independent): “whether to buy,” “which collective brand to buy,” and “which individual brand to buy.” A Mixed Logit model confirms this sequence, which permits estimation of individual contributions and detection of free riders.
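A stylized sketch of the hierarchical choice probability this nesting implies is shown below, using a textbook two-level nested logit, P(individual brand) = P(buy) × P(collective brand | buy) × P(individual brand | collective brand). The brand names, utilities, and nesting parameter are all invented, and the article's actual estimator is a Mixed Logit, not this closed form.

```python
# Two-level nested logit with the no-buy utility normalized to 0.
import math

def nested_logit_prob(nests: dict, mu: float, nest_id: str, alt: str) -> float:
    """`nests` maps collective brand -> {individual brand: utility};
    `mu` in (0, 1] is the common within-nest scale parameter."""
    incl = {n: math.log(sum(math.exp(u / mu) for u in alts.values()))
            for n, alts in nests.items()}                # inclusive values
    denom = 1.0 + sum(math.exp(mu * iv) for iv in incl.values())  # 1 = no-buy
    p_nest = math.exp(mu * incl[nest_id]) / denom        # P(collective | buy)*P(buy)
    within = math.exp(nests[nest_id][alt] / mu) / math.exp(incl[nest_id])
    return p_nest * within

brands = {"collective_1": {"dest_A": 0.8, "dest_B": 0.3},
          "collective_2": {"dest_C": 0.6}}
print(nested_logit_prob(brands, 0.7, "collective_1", "dest_A"))
```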

Relevance: 30.00%

Abstract:

When it comes to real-life data sets, pieces of the whole set are often unavailable. This problem can have various origins, each producing a different pattern; in the literature it is known as Missing Data. The issue can be handled in several ways, from discarding incomplete observations, to estimating what the missing values originally were, to simply ignoring the fact that some values are missing. The methods used to estimate missing data are called imputation methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between missing data, imputation methods, and supervised classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where discrete means that observations are assumed to be unrelated to one another. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing-data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset, the multivariate time series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were likewise subjected to processes involving missing data and imputation, in order to propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water-quality prediction problem. The databases that characterize this problem have their own original latent values, which provides a real-world benchmark for testing the algorithms developed in this thesis.
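A small sketch of the contrast at the heart of the two goals follows: an order-agnostic imputation (column mean, suitable for the discrete setting) versus a time-series-aware one (interpolation over the time index). It assumes pandas/NumPy and invented data; it is not one of the thesis' new techniques.

```python
# Mean imputation vs. time-aware interpolation on an invented daily series.
import numpy as np
import pandas as pd

ts = pd.Series([4.0, np.nan, np.nan, 7.0, 8.0, np.nan, 6.0],
               index=pd.date_range("2024-01-01", periods=7, freq="D"))

mean_imputed = ts.fillna(ts.mean())             # ignores temporal order
interp_imputed = ts.interpolate(method="time")  # exploits temporal structure

print(pd.DataFrame({"raw": ts, "mean": mean_imputed, "interp": interp_imputed}))
```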

Relevance: 30.00%

Abstract:

The goal of this project is to learn the steps necessary to create a finite element model that can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC’s component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation, and an experimental method for optimal acquisition of modal test data was arrived at. After the model correlation and calibration was performed, a validation experiment was used to confirm the FE model’s predictive capabilities.
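A standard metric for the analytical/experimental mode-shape comparison in this kind of model correlation is the Modal Assurance Criterion (MAC); a minimal sketch follows, with invented mode shapes, and no claim that this is the project's exact procedure.

```python
# MAC between two real mode shapes: 1.0 = fully correlated, 0.0 = orthogonal.
import numpy as np

def mac(phi_a: np.ndarray, phi_e: np.ndarray) -> float:
    num = np.abs(phi_a @ phi_e) ** 2
    return float(num / ((phi_a @ phi_a) * (phi_e @ phi_e)))

fe_mode = np.array([0.0, 0.5, 1.0, 0.5, 0.0])         # FE-predicted shape
test_mode = np.array([0.02, 0.48, 0.97, 0.55, 0.01])  # measured shape
print(f"MAC = {mac(fe_mode, test_mode):.3f}")         # close to 1 here
```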

Relevance: 30.00%

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks so that it reproduces the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an “optimal” or “best” product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The “best” batch schedule produced by the simulation is the one that minimizes the operational costs of the system; the costs involved are inventory costs, interface costs, pumping costs, and penalty costs assigned to unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model an event-based discrete time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals/departures of tanker ships, the openings and closures of loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system’s operational performance under different Head Terminal storage capacity configurations. For these alternative configurations we evaluated the effect of tanker-ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
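A bare-bones sketch of the event-based time representation described above follows: events sit in a priority queue keyed by time, and the simulation clock jumps from one event to the next instead of advancing in fixed steps. Event names and times are invented; this is not the study's simulation tool.

```python
# Event-driven clock using a priority queue of (time, event) pairs.
import heapq

events = []                                    # (time_h, description)
heapq.heappush(events, (0.0, "tanker ship arrives at Head Terminal"))
heapq.heappush(events, (6.5, "tank loading valve opens"))
heapq.heappush(events, (30.0, "train departs Delivery Terminal"))
heapq.heappush(events, (12.2, "tank loading valve closes"))

clock = 0.0
while events:
    clock, what = heapq.heappop(events)        # jump to the next event
    print(f"t = {clock:6.1f} h : {what}")
    # A full model would update system state here and possibly schedule
    # new future events, e.g. a pipeline stoppage or a penalty-cost check.
```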

Relevance: 30.00%

Abstract:

Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods, by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble of the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as a bush or natural forest fire, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. The second step is feature induction, cross-validating attributes against single- or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features, and a sensitive variance measure such as the F-test is performed at each node split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. The ensemble framework for data cleaning, and the enhancements to quantify quality of fit (30% spatial, 10% temporal, and 90% mobility reduction) of the sensors, led to the formation of streams for sensor-enabled applications. This further motivates the novelty of stream quality labeling and its importance in handling the vast number of real-time mobile streams generated today.
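A small sketch of the F-measure used above to rate fire-event alarms follows; it combines precision and recall, and the counts in the example are invented.

```python
# F-measure: F_beta = (1 + beta^2) * P * R / (beta^2 * P + R).
def f_measure(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. 40 correctly flagged fire events, 10 false alarms, 5 missed fires
print(round(f_measure(tp=40, fp=10, fn=5), 3))   # 0.842
```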