963 results for integer disaggregation


Relevance: 10.00%

Publisher:

Abstract:

The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profit of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity to them at the sales stage. To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated that can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes the computational time required to determine an optimal solution is prohibitive. The formulation has a block-diagonal structure and can be decomposed into a master problem and one or more sub-problems (one sub-problem per customer order) by applying Dantzig-Wolfe decomposition principles. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was developed, and various approximation algorithms were developed to further improve the runtime. Experiments show the efficiency of these algorithms compared with a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, maximizing profit with a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job-shop environment with multiple resources; both regular and overtime capacity are considered. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated into a decision support system that can be used on a daily basis to help make intelligent decisions in an MTO operation.
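
As a rough illustration of the kind of joint order-acceptance and capacity-allocation MILP described above, the following PuLP sketch selects orders and schedules their hours against regular and overtime capacity. The order data, the single aggregated resource, and the time-bucketed deadline constraint are illustrative assumptions; the dissertation's detailed job-shop formulation is not reproduced here.

```python
# Hypothetical, simplified sketch of an order acceptance + capacity allocation MILP.
# Order data and the single-resource, time-bucketed capacity model are illustrative
# assumptions; the dissertation's formulation is a detailed job-shop scheduling MILP.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

# order: (profit, processing_hours, deadline_period)
orders = {"A": (120, 5, 3), "B": (80, 3, 2), "C": (150, 7, 4)}
periods = range(1, 5)              # planning horizon (time buckets)
regular_cap, overtime_cap = 8, 4   # hours per period
overtime_cost = 10                 # cost per overtime hour

prob = LpProblem("order_acceptance", LpMaximize)
accept = {o: LpVariable(f"accept_{o}", cat=LpBinary) for o in orders}
# hours of order o processed in period t
work = {(o, t): LpVariable(f"work_{o}_{t}", lowBound=0) for o in orders for t in periods}
ot = {t: LpVariable(f"ot_{t}", lowBound=0, upBound=overtime_cap) for t in periods}

# objective: profit of accepted orders minus the cost of overtime hours used
prob += lpSum(orders[o][0] * accept[o] for o in orders) - overtime_cost * lpSum(ot.values())

for o, (profit, hours, deadline) in orders.items():
    # an accepted order must be fully processed no later than its deadline
    prob += lpSum(work[o, t] for t in periods if t <= deadline) == hours * accept[o]
    for t in periods:
        if t > deadline:
            prob += work[o, t] == 0

for t in periods:
    # capacity in each period: regular hours plus purchased overtime
    prob += lpSum(work[o, t] for o in orders) <= regular_cap + ot[t]

prob.solve()
print({o: int(accept[o].value()) for o in orders})
```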

Relevance: 10.00%

Publisher:

Abstract:

This thesis studies the adsorption of molecules with different binding strengths onto copper nanowires with pre-stabilized conductance values, fabricated by an electrochemical method. Because the diameters of these wires are comparable to the wavelength of the conduction electrons, the conductance of the nanowires is quantized, and the adsorption of even a few molecules onto atomically thin wires shifts the conductance from integer to fractional values. These changes are proportional to the binding strength of the adsorbed molecules. The decrease in conductance is hypothesized to be caused by scattering of the conduction electrons by the adsorbed molecules. This sensitivity of the conductance to molecular adsorption can be exploited to develop a chemical sensor. The stabilized copper nanowires obtained in this thesis may also be used for other purposes, such as interconnects between nanodevices and digital switches in functional nanoelectronic circuitry.
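
For context, the integer conductance values referred to above are the standard conductance quantization of ballistic wires; a minimal statement of that textbook relation (not a result specific to this thesis) is:

```latex
% Conductance quantization for a ballistic wire with n open conduction channels
% (standard Landauer result; the values measured in the thesis are not reproduced here)
G = n\,G_0, \qquad G_0 = \frac{2e^2}{h} \approx 77.5\ \mu\mathrm{S}, \qquad n = 1, 2, 3, \dots
```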

Relevance: 10.00%

Publisher:

Abstract:

Integer programming, simulation, and rules of thumb were integrated to develop a simulation-based heuristic for the short-term assignment of fleet in the car rental industry. The heuristic generates a plan of car movements and a set of booking limits intended to produce high revenue over a given planning horizon. Three different scenarios were used to validate the heuristic; in all three, the heuristic's mean revenue was significantly higher than the historical revenue. Although the procedure is not fully automated, the time to run the heuristic for each experiment stayed within the three-hour limit set for the decision-making process. These findings demonstrate that the heuristic provides better plans (plans that yield higher profit) for the dynamic allocation of fleet than the historical decision processes. Another contribution of this effort is the integration of integer programming and rules of thumb to search for better performance under stochastic conditions.
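
A highly simplified sketch of how a simulation-based heuristic of this kind might be organized: candidate booking limits, capped by a rule of thumb, are scored by Monte Carlo simulation of stochastic demand. The locations, fleets, prices, demand model, and candidate limits are hypothetical placeholders, and the integer-programming step for car movements used in the thesis is omitted entirely.

```python
# Hypothetical sketch: scoring candidate booking limits by Monte Carlo simulation.
# Locations, fleets, prices, and the crude demand model are illustrative assumptions;
# the thesis also solves an integer program for car movements, which is omitted here.
import random

LOCATIONS = ["airport", "downtown"]
FLEET = {"airport": 30, "downtown": 20}        # cars available per location
PRICE = {"airport": 55.0, "downtown": 40.0}    # revenue per accepted booking
MEAN_DEMAND = {"airport": 35, "downtown": 25}  # average daily booking requests

def simulate_revenue(limits, n_days=200, seed=0):
    """Mean daily revenue when accepted bookings are capped at the given limits."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_days):
        for loc in LOCATIONS:
            # crude stochastic demand: binomial approximation around the mean
            demand = sum(rng.random() < MEAN_DEMAND[loc] / 50 for _ in range(50))
            accepted = min(demand, limits[loc], FLEET[loc])
            total += accepted * PRICE[loc]
    return total / n_days

candidates = []
for a_lim in (25, 30, 36):
    for d_lim in (18, 20, 24):
        limits = {"airport": a_lim, "downtown": d_lim}
        # rule of thumb: never set a booking limit above 120% of the local fleet
        limits = {loc: min(v, int(1.2 * FLEET[loc])) for loc, v in limits.items()}
        candidates.append(limits)

best = max(candidates, key=simulate_revenue)
print("best booking limits:", best, "mean revenue:", round(simulate_revenue(best), 1))
```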

Relevance: 10.00%

Publisher:

Abstract:

This work presents a new model for the Heterogeneous p-Median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting-task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-Free Heterogeneous p-Median Problem (PFHPM), a single-objective version of the original HPM. It eliminates the main parameter of the HPM, the penalty factor, which weights the terms of the objective function; adjusting this parameter controls how the model recovers the hidden category structures in the data and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programs, from which lower bounds were obtained for the PFHPM. These bounds were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions for the PFHPM, solving artificially generated instances from a Monte Carlo simulation as well as real-data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm (the PFHPM) recover the original category structures underlying heterogeneous individuals' perceptions more accurately than the original model and algorithm (the HPM). Finally, an illustrative application of the PFHPM is presented, along with some possible extensions, including versions of the model for fuzzy environments.
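
To make the VNS template concrete, here is a minimal Variable Neighborhood Search for the classical (homogeneous) p-median problem: shake by swapping k medians at random, apply a swap-based local search, and restart the neighborhood index on improvement. The distance matrix is a toy example, and the PFHPM's own objective and neighborhoods are not reproduced here.

```python
# Minimal VNS sketch for the *classical* p-median problem, shown only to illustrate
# the VNS template; the PFHPM objective and neighborhoods used in the work differ.
import random

def pmedian_cost(medians, dist):
    """Total assignment cost: each point is served by its nearest selected median."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def vns_pmedian(dist, p, k_max=3, iters=50, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    best = set(rng.sample(range(n), p))
    best_cost = pmedian_cost(best, dist)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            # shaking: randomly swap k medians out of the incumbent solution
            cand = set(best)
            for _ in range(min(k, p)):
                cand.remove(rng.choice(sorted(cand)))
                cand.add(rng.choice([j for j in range(n) if j not in cand]))
            # local search: first-improving single swap, repeated until no swap improves
            improved = True
            while improved:
                improved = False
                for out in sorted(cand):
                    if improved:
                        break
                    for inn in range(n):
                        if inn in cand:
                            continue
                        trial = (cand - {out}) | {inn}
                        if pmedian_cost(trial, dist) < pmedian_cost(cand, dist):
                            cand = trial
                            improved = True
                            break
            cand_cost = pmedian_cost(cand, dist)
            if cand_cost < best_cost:
                best, best_cost, k = cand, cand_cost, 1   # move and restart neighborhoods
            else:
                k += 1                                     # try a larger neighborhood
    return sorted(best), best_cost

# tiny symmetric distance matrix, purely illustrative
D = [[0, 2, 7, 9, 4],
     [2, 0, 5, 8, 3],
     [7, 5, 0, 3, 6],
     [9, 8, 3, 0, 5],
     [4, 3, 6, 5, 0]]
print(vns_pmedian(D, p=2))
```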

Relevance: 10.00%

Publisher:

Abstract:

The matrix of this dissertation research runs through the design of a complex environmental education. Its purpose is the realistic utopia of sustainability, focused on the elucidation of a cultural model, a way of life that can guarantee the preservation of living and non-living beings and compose a world in which all coexist in harmony. Thinking through that model requires the understanding that we are nature. This understanding can be found both in Karl Marx, who in the Economic and Philosophic Manuscripts writes that "nature is the essential body of the human being," and in Joël de Rosnay, who in The Symbiotic Man explains that we humans "are the neurons of the Earth." In the same perspective, Elisabet Sahtouris, in her book The Dance of the Earth, points out how inaccurate it is to say that "there is life on Earth": since our planet is a living organism, there is, rather, the "life of the Earth." We are part of this life; we are a part of the whole. Drawing on long experience in the environmental field, I seek the collective understanding that environmental education is a process belonging to all areas of knowledge, leaving no room for its subdivision; it follows that the librarian I am, as a mediator of educational and cultural processes and a disseminator of information, is also an environmental educator. The research conducted led me to propose a revision of values and attitudes based on a reorganization of thought, an ecology of ideas and action, informed by readings in education and complexity, especially the writings of Edgar Morin. The thesis uses metaphor as a cognitive operator, emphasizing the solar cycle and linking it to the development stages of the work. In summary, the goal is to emphasize the importance of the human condition, respect for nature, and the principle of natural, cultural and social interdependence, so that we may have a society that values relationships of solidarity, respect and gratitude toward living beings and Mother Earth. This leads me to converge on a reframing of the environment, understanding it as the Integer Environment.

Relevance: 10.00%

Publisher:

Abstract:

In recent decades the study of integer-valued time series has gained prominence due to its broad applicability (modeling the number of car accidents on a given highway, or the number of people infected by a virus, are two examples). One of the main interests in this area is forecasting, and because of the discrete nature of the data it is important to propose forecasting methods that produce nonnegative integer values. In this work, we focus on the study and proposal of one-, two- and h-step-ahead forecasts for integer-valued second-order autoregressive conditional heteroskedasticity [INARCH(2)] processes, and on deriving some theoretical properties of this model, such as the ordinary moments of its marginal distribution and the asymptotic distribution of its conditional least squares estimators. In addition, we study, via Monte Carlo simulation, the behavior of the parameter estimators for INARCH(2) processes obtained by three different methods (Yule-Walker, conditional least squares, and conditional maximum likelihood), in terms of mean squared error, mean absolute error, and bias. We present some forecast proposals for INARCH(2) processes, which are also compared via Monte Carlo simulation. As an application of the proposed theory, we model a dataset on the number of live male births to mothers living in the city of Riachuelo, in the state of Rio Grande do Norte, Brazil.
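
Assuming the usual Poisson INARCH(2) specification, where X_t given the past is Poisson with conditional mean λ_t = α0 + α1·X_{t-1} + α2·X_{t-2}, the following sketch simulates such a process and computes the one-step-ahead conditional-mean forecast; the parameter values are illustrative and the work's own coherent forecast proposals are not reproduced.

```python
# Sketch of the usual Poisson INARCH(2) specification and its one-step-ahead
# conditional-mean forecast; parameter values are illustrative, and the work's
# specific coherent (integer-valued) forecast proposals are not reproduced.
import numpy as np

rng = np.random.default_rng(42)
a0, a1, a2 = 2.0, 0.3, 0.2      # must satisfy a1 + a2 < 1 for stationarity

def simulate_inarch2(n, a0, a1, a2, rng):
    x = np.zeros(n, dtype=int)
    for t in range(2, n):
        lam = a0 + a1 * x[t - 1] + a2 * x[t - 2]   # conditional mean (and variance)
        x[t] = rng.poisson(lam)
    return x

x = simulate_inarch2(500, a0, a1, a2, rng)

# one-step-ahead conditional mean; rounding gives a simple integer-valued forecast
lam_next = a0 + a1 * x[-1] + a2 * x[-2]
print("E[X_{t+1} | past] =", lam_next, "-> rounded forecast:", round(lam_next))
```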

Relevance: 10.00%

Publisher:

Abstract:

Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve network performance, focusing on physical-layer issues. During the past few years, researchers have started to take cooperative transmission into account in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; however, minimizing packet collisions through cooperative routing has not yet been addressed. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed-Integer Non-Linear Programming problem. We also propose a solution based on the branch-and-bound algorithm augmented with search-space reduction. The proposed strategy builds the optimal routes from each source to the sink node by determining the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we solve the problem by decoupling the optimal power allocation scheme from the optimal route selection; the problem is then formulated as an Integer Non-Linear Program, which is solved using the branch-and-bound method with search-space reduction. In the second, the cooperative routing problem is solved by decoupling the transmission power and the relay-node selection from the route selection; after solving the routing problem, the power allocation is applied to the selected route. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
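
As a generic illustration of the branch-and-bound pattern with pruning (the "search-space reduction" idea above), the sketch below solves a tiny 0-1 knapsack instance with an LP-relaxation bound; this is a stand-in example, not the dissertation's mixed-integer non-linear routing formulation.

```python
# Generic branch-and-bound skeleton illustrating how bounding prunes the search space.
# The 0-1 knapsack instance and its fractional-relaxation bound are illustrative
# stand-ins; the dissertation's problem is a large-scale MINLP routing model.
values  = [60, 100, 120, 40]   # items are listed in decreasing value/weight ratio,
weights = [10, 20, 30, 15]     # so the greedy fractional bound below is valid
capacity = 50

def upper_bound(i, value, room):
    """LP-relaxation bound: fill remaining room greedily, allowing one fractional item."""
    for j in range(i, len(values)):
        if weights[j] <= room:
            room -= weights[j]
            value += values[j]
        else:
            return value + values[j] * room / weights[j]
    return value

best = 0
def branch(i=0, value=0, room=capacity):
    global best
    if value > best:
        best = value                  # record the incumbent (items chosen so far)
    if i == len(values) or upper_bound(i, value, room) <= best:
        return                        # prune: this subtree cannot beat the incumbent
    if weights[i] <= room:
        branch(i + 1, value + values[i], room - weights[i])   # take item i
    branch(i + 1, value, room)                                # skip item i

branch()
print("best value:", best)
```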

Relevance: 10.00%

Publisher:

Abstract:

In this thesis we determine necessary and sufficient conditions for the existence of an equitably ℓ-colourable balanced incomplete block design (BIBD) for any integer ℓ > 2. In particular, we present a method for constructing non-trivial equitably ℓ-colourable BIBDs and prove that these designs are the only non-trivial equitably ℓ-colourable BIBDs that exist. We also observe that every equitable ℓ-colouring of a BIBD yields both an equalised ℓ-colouring and a proper 2-colouring of the same BIBD. We also discuss generalisations of these concepts, including open questions for further research. The main results presented in this thesis also appear in [7].
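
As a small illustrative check, and assuming the usual notion that a point colouring of a design is equitable when, within every block, the colour class sizes differ by at most one, the code below tests a 2-colouring of the Fano plane (the (7,3,1)-BIBD). It prints False: the Fano plane admits no proper, and hence no equitable, 2-colouring, since every 2-colouring of its points leaves some block monochromatic.

```python
# Check whether a point colouring of a BIBD is equitable: within every block,
# the colour class sizes must differ by at most one. The Fano plane and the
# colouring below are illustrative; the thesis's constructions are not reproduced.
from collections import Counter

# blocks of the (7,3,1)-BIBD (Fano plane), points 1..7
fano_blocks = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

def is_equitable(blocks, colouring, ell):
    for block in blocks:
        counts = Counter(colouring[p] for p in block)
        sizes = [counts.get(c, 0) for c in range(ell)]
        if max(sizes) - min(sizes) > 1:
            return False
    return True

# an arbitrary 2-colouring of the points; no 2-colouring of the Fano plane is equitable
colouring = {1: 0, 2: 1, 3: 1, 4: 1, 5: 0, 6: 0, 7: 0}
print(is_equitable(fano_blocks, colouring, ell=2))   # False (block {1, 6, 7} is monochromatic)
```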

Relevance: 10.00%

Publisher:

Abstract:

Present theories of deep-sea community organization recognize the importance of small-scale biological disturbances, originating partly from the activities of epibenthic megafaunal organisms, in maintaining high benthic biodiversity in the deep sea. However, due to technical difficulties, in situ experimental studies to test hypotheses in the deep sea are lacking. The objective of the present study was to evaluate the potential of cages as tools for studying the importance of epibenthic megafauna for deep-sea benthic communities. Using the deep-diving Remotely Operated Vehicle (ROV) "VICTOR 6000", six experimental cages were deployed on the sea floor at 2500 m water depth and sampled after 2 years (2y) and 4 years (4y) for a variety of sediment parameters in order to test for caging artefacts. Photo and video footage from both experiments showed that the cages were efficient at excluding the targeted fauna. The cages also proved appropriate for deep-sea studies, given that there was no fouling on them and no evidence of any organism establishing residence on or adjacent to them. Environmental changes inside the cages depended on the experimental period analysed: in the 4y experiment, chlorophyll a concentrations were higher in the uppermost centimetre of sediment inside the cages, whereas in the 2y experiment they did not differ between inside and outside. Although the cages caused some changes to the sedimentary regime, these were relatively minor compared with similar studies in shallow water. The only parameter that was significantly higher under the cages in both experiments was the concentration of phaeopigments. Since the epibenthic megafauna at our study site can potentially affect phytodetritus distribution and availability at the seafloor (e.g. via consumption, disaggregation and burial), we suggest that their exclusion was, at least in part, responsible for the increases in pigment concentrations. Cages may therefore be suitable tools for studying the long-term effects of disturbances caused by megafaunal organisms on the diversity and community structure of smaller-sized organisms in the deep sea, although further work employing partial cage controls, greater replication, and evaluation of the faunal components will be essential to unequivocally establish their utility.

Relevance: 10.00%

Publisher:

Abstract:

Funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS) Division; the Food Standards Agency, UK; and the Biscuit, Cake, Chocolate and Confectionery Association, London, UK.

Relevance: 10.00%

Publisher:

Abstract:

Funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS) Division; the Food Standards Agency, UK; and the Biscuit, Cake, Chocolate and Confectionery Association, London, UK.

Relevance: 10.00%

Publisher:

Abstract:

Acknowledgements: The first author would like to acknowledge the University of Aberdeen and the Henderson Economics Research Fund for funding his PhD studies in the period 2011-2014, which formed the basis for the research presented in this paper. The first author would also like to acknowledge the Macaulay Development Trust, which funds his postdoctoral fellowship with The James Hutton Institute, Aberdeen, Scotland. The authors thank two anonymous referees for valuable comments and suggestions on earlier versions of this paper. All usual caveats apply.

Relevance: 10.00%

Publisher:

Abstract:

I explore and analyze the problem of finding socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, represented by a network, and fire-sale externalities, which reflect the negative price impact of massive asset liquidations. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults among the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.

In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show that the optimal capital requirements can be found by solving a stochastic mixed-integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be executed efficiently. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach, and I show that it converges to the optimal solution.

Finally, I incorporate fire-sale externalities into the model. In particular, I extend the analysis of systemic risk and optimal capital requirements from a single illiquid asset to a model with multiple illiquid assets, which incorporates the liquidation rules used by the banks. I provide an optimization formulation whose solution gives the equilibrium payments for a given liquidation rule.
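
For context on the "equilibrium payments" mentioned above, here is a minimal sketch of computing a clearing payment vector in a network of interbank liabilities, in the spirit of the Eisenberg-Noe model; the liability matrix and outside assets are illustrative, and the thesis's own formulation with default costs, fire sales, and liquidation rules is not reproduced.

```python
# Sketch of a clearing-payment computation for a network of interbank liabilities,
# in the spirit of Eisenberg-Noe; the liability matrix and outside assets are
# illustrative, and default costs / fire sales from the thesis are omitted.
import numpy as np

L = np.array([[0.0, 6.0, 2.0],      # L[i, j]: amount bank i owes bank j
              [3.0, 0.0, 4.0],
              [1.0, 1.0, 0.0]])
outside_assets = np.array([2.0, 1.0, 5.0])

p_bar = L.sum(axis=1)                       # total obligations of each bank
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L), where=p_bar[:, None] > 0)

p = p_bar.copy()
for _ in range(100):                         # fixed-point iteration on payments
    incoming = Pi.T @ p                      # payments received from other banks
    p_new = np.minimum(p_bar, outside_assets + incoming)
    if np.allclose(p_new, p):
        break
    p = p_new

print("clearing payments:", np.round(p, 3))
print("defaulting banks:", np.where(p < p_bar - 1e-9)[0])
```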

I further show that the socially optimal capital problem under the "socially optimal liquidation" rule and under a prioritized liquidation rule can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.

Relevance: 10.00%

Publisher:

Abstract:

Continuous variables are one of the major data types collected by survey organizations. They can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values within cells defined by different combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing the disclosure risks of releasing such synthetic magnitude microdata. An illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
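
One simple way to guarantee that synthetic nonnegative integer records respect a fixed marginal total is to exploit the fact that independent Poisson counts conditioned on their sum follow a multinomial distribution; the sketch below uses that device with an illustrative rate vector and is not the thesis's mixture-of-Poissons synthesizer.

```python
# Synthesizing nonnegative integer values that are guaranteed to sum to a fixed total:
# independent Poisson(lambda_i) counts conditioned on their sum N are Multinomial(N, lambda / sum(lambda)).
# The rates below are illustrative; the thesis fits a *mixture* of Poisson distributions
# to the confidential data, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(7)

fixed_total = 1_000                      # published marginal total that must be preserved
rates = np.array([5.0, 2.0, 1.0, 0.5])   # illustrative Poisson rates for four establishments

probs = rates / rates.sum()
synthetic = rng.multinomial(fixed_total, probs)

print("synthetic record:", synthetic, "sum:", synthetic.sum())  # sum equals fixed_total exactly
```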

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. The disclosure risk of this approach tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals; its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., with many missing values) variables. The sub-model structure for the focused variables is more complex than that for the non-focused ones, their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on the non-focused values. The model's properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.