897 results for Multicommodity network design problem
Abstract:
Layered steam injection, widely used at present in the Liaohe Oilfield, is an effective recovery technique for heavy oil reserves: it makes the steam front advance uniformly, allows the amount of injected steam to be allocated rationally among layers, and ensures that the expected injection effect is obtained. To maintain a fixed ratio of layered steam injection and to overcome the problem that the hole diameter of existing injectors cannot adjust to changes in layer pressure, this paper proposes a new method for designing layered steam injectors based on dynamic balance theory. According to gas-liquid two-phase flow theory and heat transfer theory, the energy equation and the heat conduction equation in the borehole are developed. By analyzing the energy balance of the water-steam mixture passing through the injector hole, we obtain an expression relating the cross-sectional area of the injector hole to the layer pressure. With this expression, we provide a new set of calculation methods and write the corresponding computer program to design and calculate the main parameters of a steam injector. Actual measurement data show that the theoretically calculated results are accurate and the software runs reliably, providing a theoretical foundation for the design of self-adjustable layered steam injectors.
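For intuition about the kind of relation between hole area and layer pressure described above, a simplified single-phase orifice balance can be written down; the paper's actual expression comes from gas-liquid two-phase flow and heat-transfer theory, so the discharge coefficient C_d, mixture density rho, and target mass rate m-dot below are illustrative assumptions only:

    \dot{m} = C_d\, A \sqrt{2\rho\,(p_{\mathrm{wellbore}} - p_{\mathrm{layer}})}
    \qquad\Longrightarrow\qquad
    A = \frac{\dot{m}}{C_d \sqrt{2\rho\,(p_{\mathrm{wellbore}} - p_{\mathrm{layer}})}}

In this simplified picture, an injector whose effective hole area widens as the layer pressure rises (and the available pressure drop shrinks) can keep the assigned steam rate for that layer approximately constant, which is the self-adjusting behavior the designed injectors aim for.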
Abstract:
This technical memorandum documents the design, implementation, data preparation, and descriptive results for the 2006 Annual Economic Survey of Federal Gulf Shrimp Permit Holders. The data collection was designed by the NOAA Fisheries Southeast Fisheries Science Center Social Science Research Group to track the financial and economic status and performance of vessels holding a federal moratorium permit for harvesting shrimp in the Gulf of Mexico. A two-page, self-administered mail survey collected total annual costs broken out into seven categories and auxiliary economic data. In May 2007, 580 vessels were randomly selected, stratified by state, from a preliminary population of 1,709 vessels with federal permits to shrimp in offshore waters of the Gulf of Mexico. The survey was implemented during the rest of 2007. After many reminder and verification phone calls, 509 surveys were deemed complete, for an ineligibility-adjusted response rate of 90.7%. The linking of each individual vessel's cost data to its revenue data from a different data collection was imperfect, and hence the final number of observations used in the analyses is 484. Based on various measures and tests of validity throughout the technical memorandum, the quality of the data is high. The results are presented in a standardized table format, linking vessel characteristics and operations to simple balance sheet, cash flow, and income statements. In the text, results are discussed for the total fleet, the Gulf shrimp fleet, the active Gulf shrimp fleet, and the inactive Gulf shrimp fleet. Additional results for shrimp vessels grouped by state, by vessel characteristics, by landings volume, and by ownership structure are available in the appendices. The general conclusion of this report is that the financial and economic situation is bleak for the average vessel in most of the categories that were evaluated. With few exceptions, cash flow for the average vessel is positive while the net revenue from operations and the "profit" are negative. With negative net revenue from operations, the economic return for average shrimp vessels is less than zero. Only with the help of government payments does the average owner just about break even. In the short term, this will discourage any new investments in the industry. The financial situation in 2006, especially if it endures over multiple years, is also economically unsustainable for the average established business. Vessels in the active and inactive Gulf shrimp fleet are, on average, 69 feet long, weigh 105 gross tons, are powered by 505 hp motor(s), and are 23 years old. Three-quarters of the vessels have steel hulls and 59% use a freezer for refrigeration. The average market value of these vessels was $175,149 in 2006, about a hundred thousand dollars less than the average original purchase price. The outstanding loans averaged $91,955, leading to an average owner equity of $83,194. Based on the sample, 85% of the federally permitted Gulf shrimp fleet was actively shrimping in 2006. Of these 386 active Gulf shrimp vessels, just under half (46%) were owner-operated. On average, these vessels burned 52,931 gallons of fuel, landed 101,268 pounds of shrimp, and received $2.47 per pound of shrimp. Non-shrimp landings added less than 1% to cash flow, indicating that the federal Gulf shrimp fishery is very specialized. The average total cash outflow was $243,415, of which $108,775 was due to fuel expenses alone.
The expenses for hired crew and captains were on average $54,866, which indicates the importance of the industry as a source of wage income. The resulting average net cash flow is $16,225 but has a large standard deviation. For the population of active Gulf shrimp vessels, we can state with 95% certainty that the average net cash flow was between $9,500 and $23,000 in 2006. The median net cash flow was $11,843. Based on the income statement for active Gulf shrimp vessels, the average fixed costs accounted for just under a quarter of operating expenses (23.1%), labor costs for just over a quarter (25.3%), and the non-labor variable costs for just over half (51.6%). The fuel costs alone accounted for 42.9% of total operating expenses in 2006. It should be noted that the labor cost category in the income statement includes both the actual cash payments to hired labor and an estimate of the opportunity cost of owner-operators' time spent as captain. The average labor contribution (as captain) of an owner-operator is estimated at about $19,800. The average net revenue from operations is negative $7,429, and is statistically significantly less than zero despite a large standard deviation. The economic return to Gulf shrimping is negative 4%. Including non-operating activities, foremost an average government payment of $13,662, leads to an average loss before taxes of $907 for the vessel owners. The confidence interval of this value straddles zero, so we cannot reject, with 95% certainty, that the population average is zero. The average inactive Gulf shrimp vessel is generally of a smaller scale than the average active vessel. Inactive vessels are physically smaller, are valued much lower, and are less dependent on loans. Fixed costs account for nearly three quarters of the total operating expenses of $11,926, and only 6% of these vessels have hull insurance. With an average net cash flow of negative $7,537, the inactive Gulf shrimp fleet has a major liquidity problem. On average, net revenue from operations is negative $11,396, which amounts to a negative 15% economic return, and owners lose $9,381 on their vessels before taxes. To sustain such losses and especially to survive the negative cash flow, many of the owners must be subsidizing their shrimp vessels with the help of other income or wealth sources or are drawing down their equity. Active Gulf shrimp vessels in all states but Texas exhibited negative returns. The Alabama and Mississippi fleets have the highest assets (vessel values), on average, yet they generate zero cash flow and negative $32,224 net revenue from operations. Due to their high (loan) leverage ratio, the negative 11% economic return is amplified into a negative 21% return on equity. In contrast, for Texas vessels, which actually have the highest leverage ratio among the states, a 1% economic return is amplified into a 13% return on equity. From a financial perspective, the average Florida and Louisiana vessels conform roughly to the overall average of the active Gulf shrimp fleet. It should be noted that these results are averages and hence hide the variation that clearly exists within all fleets and all categories. Although the financial situation for the average vessel is bleak, some vessels are profitable. (PDF contains 101 pages)
Abstract:
One of the main problems that public institutions face in the management of protected areas, such as the European Natura 2000 network, is determining how to design and implement sustainable management plans that account for the wide range of marketed and non-marketed benefits they provide to society. This paper presents an application of a stated preference valuation approach aimed at evaluating the social preferences of the population of the Basque Country, Spain, for the key attributes of a regional Natura 2000 network site. According to our results, individuals' willingness-to-pay (WTP) is higher for attributes associated with non-use values (native tree species and biodiversity conservation) than for attributes associated with use values (agricultural development and commercial forestry). The paper concludes that management policies related to Natura 2000 network sites should account both for the importance of non-use values and for the heterogeneity of the population's preferences in order to minimize potential land use conflicts.
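As background on how such estimates are usually obtained in stated-preference choice experiments (the abstract does not give the paper's exact econometric specification, so the model below is the generic one), marginal willingness-to-pay for an attribute is the ratio of its utility coefficient to the cost coefficient in a random-utility model:

    U_{ij} = \beta_k\, x_{ijk} + \beta_c\, c_{ij} + \varepsilon_{ij},
    \qquad
    \mathrm{WTP}_k = -\frac{\beta_k}{\beta_c}

The finding that non-use attributes command higher WTP therefore means their estimated coefficients are large relative to the cost coefficient compared with those of the use-related attributes.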
Abstract:
This dissertation contains three essays on mechanism design. The common goal of these essays is to assist in the solution of different resource allocation problems where asymmetric information creates obstacles to the efficient allocation of resources. In each essay, we present a mechanism that satisfactorily solves the resource allocation problem and study some of its properties. In our first essay, "Combinatorial Assignment under Dichotomous Preferences", we present a class of problems akin to time scheduling without a pre-existing time grid, and propose a mechanism that is efficient, strategy-proof and envy-free. Our second essay, "Monitoring Costs and the Management of Common-Pool Resources", studies what can happen to an existing mechanism — the individual tradable quotas (ITQ) mechanism, also known as the cap-and-trade mechanism — when quota enforcement is imperfect and costly. Our third essay, "Vessel Buyback", coauthored with John O. Ledyard, presents an auction design that can be used to buy back excess capital in overcapitalized industries.
Abstract:
Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.
This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
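To give a concrete flavor of such LTL requirements (the property and signal names below are hypothetical, not the thesis's actual specifications), typical electric power system properties state that two AC sources are never connected to the same bus and that a generator failure is always eventually followed by the essential bus being repowered:

    \square\, \neg (c_1 \wedge c_2)
    \qquad\qquad
    \square\, (\mathit{gen\_fail} \rightarrow \lozenge\, \mathit{bus\_powered})

Given such formulas together with assumptions on the environment (for example, how generators may fail), a synthesis tool produces a controller that is correct by construction with respect to them.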
The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.
Abstract:
With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a major concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental questions in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and gain a more fundamental understanding of general online decision problems.
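A minimal sketch of the smoothed online convex optimization model mentioned above, in notation assumed here rather than taken from the thesis: at each time t a convex cost f_t is revealed, the algorithm picks an action x_t (for example, the number of active servers), and it pays both the hitting cost and a penalty for changing its decision:

    \min_{x_1,\dots,x_T}\; \sum_{t=1}^{T} f_t(x_t) \;+\; \beta \sum_{t=1}^{T} \lVert x_t - x_{t-1} \rVert

The switching term is what makes "smooth" solutions preferable: toggling servers on and off is costly, so an online algorithm must trade off chasing the current minimizer of f_t against staying close to its previous decision.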
Abstract:
Designing for all requires the adaptation and modification of current design best practices to encompass a broader range of user capabilities. This is particularly the case in the design of the human-product interface. Product interfaces exist everywhere, and when designing them there is a very strong temptation to jump to prescribing a solution with only a cursory attempt to understand the nature of the problem. This is particularly the case when attempting to adapt existing designs, optimised for able-bodied users, for use by disabled users. However, such approaches have led to numerous products that are neither usable nor commercially successful. In order to develop a successful design approach, it is necessary to consider the fundamental structure of the design process being applied. A three-stage design process development strategy, which includes problem definition, solution development, and solution evaluation, should be adopted. This paper describes the development of a new design approach based on the application of usability heuristics to the design of interfaces. This is illustrated by reference to a particular case study of the re-design of a computer interface for controlling an assistive device.
Abstract:
Cells exhibit a diverse repertoire of dynamic behaviors. These dynamic functions are implemented by circuits of interacting biomolecules. Although these regulatory networks function deterministically by executing specific programs in response to extracellular signals, molecular interactions are inherently governed by stochastic fluctuations. This molecular noise can manifest as cell-to-cell phenotypic heterogeneity in a well-mixed environment. Single-cell variability may seem like a design flaw, but the coexistence of diverse phenotypes in an isogenic population of cells can also serve a biological function by increasing the probability of survival of individual cells upon an abrupt change in environmental conditions. Decades of extensive molecular and biochemical characterization have revealed the connectivity and mechanisms that constitute regulatory networks. We are now confronted with the challenge of integrating this information to link the structure of these circuits to systems-level properties such as cellular decision making. To investigate cellular decision making, we used the well-studied galactose gene-regulatory network in Saccharomyces cerevisiae. We analyzed the mechanism and dynamics of the coexistence of two stable states, on and off, for pathway activity. We demonstrate that this bimodality in pathway activity originates from two positive feedback loops that trigger bistability in the network. By measuring the dynamics of single cells in a mixed-sugar environment, we observe that the bimodality in gene expression is a transient phenomenon. Our experiments indicate that early pathway activation in a cohort of cells prior to galactose metabolism can accelerate galactose consumption and provide a transient increase in growth rate. Together, these results provide important insights into strategies implemented by cells that may have been evolutionarily advantageous in competitive environments.
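As a generic cartoon of how positive feedback yields the bistability described above (this is not the authors' model of the galactose network; x, alpha, K, n, and gamma are illustrative), a single autocatalytic loop with cooperative activation already admits two stable states:

    \frac{dx}{dt} \;=\; \alpha\, \frac{x^{n}}{K^{n} + x^{n}} \;-\; \gamma x, \qquad n > 1

For suitable alpha, K, and gamma the system has a stable off state at x = 0 and a stable on state at high x, separated by an unstable threshold; stochastic fluctuations can push individual cells across that threshold, which is one way an isogenic population can split into the coexisting on and off subpopulations observed experimentally.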
Abstract:
A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
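A minimal sketch in Python of how soft preference functions and a combination rule of this general kind can score a candidate design; the linear preference shape, the weights, and the weighted geometric mean used to combine criteria are assumptions for illustration, not the thesis's actual formulation:

    import math

    def preference(value, best, worst):
        # Map a performance parameter to a degree of satisfaction in [0, 1]:
        # fully satisfied at `best`, unacceptable at `worst`, linear in between.
        t = (value - worst) / (best - worst)
        return min(1.0, max(0.0, t))

    def overall_measure(prefs, weights):
        # Weighted geometric mean as the preference combination rule, so any
        # criterion scored 0 drives the overall evaluation measure to 0.
        total_w = sum(weights)
        return math.prod(p ** (w / total_w) for p, w in zip(prefs, weights))

    # Hypothetical design scored on three criteria (all numbers are made up).
    prefs = [
        preference(value=1.2e5, best=8.0e4, worst=2.0e5),  # cost ($)
        preference(value=12.0, best=5.0, worst=25.0),      # peak deflection (mm)
        preference(value=30.0, best=20.0, worst=60.0),     # construction time (days)
    ]
    weights = [0.5, 0.3, 0.2]
    print(overall_measure(prefs, weights))  # overall evaluation measure in [0, 1]

The optimal design process then searches the design parameter space for the candidate with the highest overall measure, which is where the genetic algorithms described next come in.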
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
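A compact, generic real-coded genetic algorithm in Python, sketching the kind of stochastic search described above; it is not a reimplementation of hGA or vGA, and the population size, tournament selection, blend crossover, and Gaussian mutation are all assumed choices:

    import random

    def genetic_search(fitness, bounds, pop_size=60, generations=200,
                       crossover_rate=0.9, mutation_rate=0.1):
        # Maximize `fitness` over a box-bounded continuous design space.
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            children = []
            while len(children) < pop_size:
                # Tournament selection of two parents.
                p1 = max(random.sample(pop, 3), key=fitness)
                p2 = max(random.sample(pop, 3), key=fitness)
                child = list(p1)
                if random.random() < crossover_rate:
                    # Blend (arithmetic) crossover.
                    a = random.random()
                    child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
                for i, (lo, hi) in enumerate(bounds):
                    if random.random() < mutation_rate:
                        # Gaussian mutation, clipped to the variable's bounds.
                        step = random.gauss(0, 0.1 * (hi - lo))
                        child[i] = min(hi, max(lo, child[i] + step))
                children.append(child)
            pop = children
            best = max(pop + [best], key=fitness)  # simple elitism
        return best

    # Toy check: maximize -(x - 1)^2 - (y + 2)^2 over [-5, 5]^2; optimum near (1, -2).
    print(genetic_search(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2,
                         bounds=[(-5.0, 5.0), (-5.0, 5.0)]))

In a structural design setting, the fitness would be the overall evaluation measure built from preference functions of the kind sketched earlier, evaluated via structural analysis of the candidate truss or frame.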
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
Abstract:
The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). GBM's comparably large error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet. On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.