4 results for "Cost Overrun"
in DRUM (Digital Repository at the University of Maryland)
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. These initial adopters then trigger a cascade in which other people adopt the product due to the influence they receive from earlier adopters. The goal is to find the minimum-cost target set that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) problem. In the WTSS problem, the diffusion can take place over as many time periods as needed, and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period yields the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights obtained from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial-time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
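To make the flavor of these formulations concrete, the following is a minimal sketch of a generic time-indexed integer program for the WTSS problem on a tiny made-up graph, written in Python with the open-source PuLP modeler. It is not the dissertation's tight extended formulation or its branch-and-cut implementation, and the graph, influence thresholds, and seeding costs are purely illustrative.

```python
# Generic time-indexed IP for Weighted Target Set Selection (illustrative only;
# NOT the dissertation's extended formulation). Requires: pip install pulp
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

# Made-up undirected graph, influence thresholds g, and seeding costs b.
neighbors = {"a": ["b", "c"], "b": ["a", "c", "d"],
             "c": ["a", "b", "d"], "d": ["b", "c"]}
g = {"a": 1, "b": 2, "c": 2, "d": 1}
b = {"a": 3, "b": 1, "c": 2, "d": 3}

nodes = list(neighbors)
T = len(nodes) - 1                      # diffusion settles within |V|-1 rounds

prob = LpProblem("wtss", LpMinimize)
x = LpVariable.dicts("seed", nodes, cat=LpBinary)                   # target set
y = LpVariable.dicts("active", (nodes, range(T + 1)), cat=LpBinary)

prob += lpSum(b[i] * x[i] for i in nodes)                           # seeding cost

for i in nodes:
    prob += y[i][0] == x[i]             # round 0: only the seeded nodes are active
    prob += y[i][T] == 1                # every node must be active at the end
    for t in range(1, T + 1):
        # Node i may be active at round t only if it was seeded or at least
        # g[i] of its neighbors were active in round t-1.
        prob += g[i] * y[i][t] <= g[i] * x[i] + lpSum(y[j][t - 1] for j in neighbors[i])

prob.solve()
print("optimal target set:", [i for i in nodes if x[i].value() > 0.5])
```

On this toy instance, seeding node b alone (cost 1) activates the remaining nodes within two rounds, so the solver returns it as the optimal target set.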
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! provides early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
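As a small illustration of the neighborhood-centric programming style the abstract describes, the sketch below runs on a single machine with plain networkx; it is not the NSCALE API (which the abstract does not spell out), and the example graph and function names are made up.

```python
# Neighborhood-centric style: the user-level function receives a whole ego
# network (subgraph) rather than the state of a single vertex. Illustrative
# only; NSCALE would extract such subgraphs and execute the computation in a
# distributed fashion. Requires: pip install networkx
import networkx as nx

def ego_network_stats(ego_net: nx.Graph, center) -> dict:
    """Per-neighborhood computation: neighborhood size and local clustering."""
    nbrs = list(ego_net.neighbors(center))
    # Edges among the center's neighbors = triangles through the center.
    tri = sum(1 for i, u in enumerate(nbrs)
              for v in nbrs[i + 1:] if ego_net.has_edge(u, v))
    pairs = len(nbrs) * (len(nbrs) - 1) // 2
    return {"node": center,
            "neighborhood_size": ego_net.number_of_nodes(),
            "local_clustering": tri / pairs if pairs else 0.0}

G = nx.karate_club_graph()              # stand-in for a large graph in the Cloud
# The "declaratively specified subgraphs of interest" here are simply all
# 1-hop ego networks.
results = [ego_network_stats(nx.ego_graph(G, n, radius=1), n) for n in G.nodes]
print(results[:3])
```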
Abstract:
In this work, the existing understanding of flame spread dynamics is enhanced through an extensive study of the heat transfer from flames spreading vertically upwards across 5 cm wide, 20 cm tall samples of extruded poly(methyl methacrylate) (PMMA). These experiments have provided highly spatially resolved measurements of flame-to-surface heat flux and material burning rate at the critical length scale of interest, with a level of accuracy and detail unmatched by previous empirical or computational studies. Using these measurements, a wall flame model was developed that describes a flame's heat feedback profile (both in the continuous flame region and in the thermal plume above) solely as a function of material burning rate. Additional experiments were conducted to measure flame heat flux and sample mass loss rate as flames spread vertically upwards over the surface of seven other commonly used polymers, two of which are glass-reinforced composite materials. Using these measurements, our wall flame model has been generalized so that it can predict heat feedback from flames supported by a wide range of materials. For the seven materials tested here, which present a varied range of burning behaviors including dripping, polymer melt flow, sample burnout, and heavy soot formation, model-predicted flame heat flux has been shown to match experimental measurements (taken across the full length of the flame) with an average accuracy of 3.9 kW m⁻² (approximately 10-15% of peak measured flame heat flux). This flame model has since been coupled with a powerful solid-phase pyrolysis solver, ThermaKin2D, which computes the transient rate of gaseous fuel production of the constituents of a pyrolyzing solid in response to an external heat flux, based on fundamental physical and chemical properties. Together, this unified model captures the two fundamental controlling mechanisms of upward flame spread: gas-phase flame heat transfer and solid-phase material degradation. This has enabled simulations of flame spread dynamics with reasonable computational cost and accuracy beyond that of current models. This unified model of material degradation provides the framework to quantitatively study material burning behavior in response to a wide range of common fire scenarios.
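The abstract does not reproduce the model's equations. Purely as an illustration of the general shape such a burning-rate-driven heat feedback model can take (roughly constant flux in the continuous flame region, decaying flux in the thermal plume above, with a flame length that grows with the heat release rate per unit width), one generic wall-flame form is

\[
  \dot{q}''(y) \;\approx\;
  \begin{cases}
    \dot{q}''_{\mathrm{fl}}, & y \le y_f,\\
    \dot{q}''_{\mathrm{fl}} \left( y / y_f \right)^{-n}, & y > y_f,
  \end{cases}
  \qquad
  y_f \;\approx\; C \left( \dot{Q}' \right)^{m},
  \qquad
  \dot{Q}' = \dot{m}'' \, \Delta H_c \, L_p ,
\]

where \(\dot{m}''\) is the measured burning rate, \(\Delta H_c\) the heat of combustion, \(L_p\) the pyrolyzing length, and \(\dot{q}''_{\mathrm{fl}}\), \(C\), \(n\), \(m\) are placeholder constants; the dissertation's fitted functional forms and coefficients are not shown here. For scale, the reported 3.9 kW m⁻² average error being 10-15% of the peak measured flux implies peak fluxes of roughly 26-39 kW m⁻².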
Abstract:
This dissertation studies technological change in the context of energy and environmental economics. Technology plays a key role in reducing greenhouse gas emissions from the transportation sector. Chapter 1 estimates a structural model of the car industry that allows for endogenous product characteristics in order to investigate how gasoline taxes, R&D subsidies and competition affect fuel efficiency and vehicle prices in the medium run, both through car-makers' decisions to adopt technologies and through their investments in knowledge capital. I use technology adoption and automotive patent data for 1986-2006 to estimate this model. I show that 92% of fuel efficiency improvements between 1986 and 2006 were driven by technology adoption, while the role of knowledge capital is largely to reduce the marginal production costs of fuel-efficient cars. A counterfactual predicts that an additional $1/gallon gasoline tax in 2006 would have increased the technology adoption rate and raised average fuel efficiency by 0.47 miles/gallon, twice the annual fuel efficiency improvement over 2003-2006. An R&D subsidy that reduced the marginal cost of knowledge capital by 25% in 2006 would have raised investment in knowledge capital; this subsidy would have raised fuel efficiency by only 0.06 miles/gallon in 2006, but would have increased variable profits by $2.3 billion across all firms that year. Passenger vehicle fuel economy standards in the United States will require substantial improvements in new vehicle fuel economy over the next decade. Economic theory suggests that vehicle manufacturers adopt more fuel-saving technologies for vehicles with larger market size. Chapter 2 documents a strong connection between market size, measured by sales, and technology adoption. Using variation in consumer demographics and purchasing patterns to account for the endogeneity of market size, we find that a 10 percent increase in market size raises vehicle fuel efficiency by 0.3 percent, compared to a mean improvement of 1.4 percent per year over 1997-2013. Historically, fuel price and demographic-driven changes in market size have had large effects on technology adoption. Furthermore, fuel taxes would induce firms to adopt fuel-saving technologies on their most efficient cars, thereby polarizing the fuel efficiency distribution of the new vehicle fleet.
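As a back-of-the-envelope reading of the Chapter 2 estimate (an illustration, not a result reported in the abstract), the 10 percent and 0.3 percent figures correspond to an elasticity of fuel efficiency with respect to market size of roughly

\[
  \varepsilon \;=\; \frac{\partial \ln(\text{fuel efficiency})}{\partial \ln(\text{sales})}
  \;\approx\; \frac{0.3\%}{10\%} \;=\; 0.03 ,
\]

so, treating the elasticity as constant, a doubling of a vehicle's market size would be associated with about a \(2^{0.03} - 1 \approx 2.1\%\) fuel-efficiency gain, on the order of one to two years of the 1.4 percent average annual improvement reported for 1997-2013.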