12 results for LEVERAGE

in DRUM (Digital Repository at the University of Maryland)


Relevance: 20.00%

Abstract:

This dissertation explores why some states consistently secure food imports at prices higher than the world market price, thereby exacerbating food insecurity domestically. I challenge the idea that free market economics alone can explain these trade behaviors, and instead argue that states take political considerations into account when engaging in food trade, which results in inefficient trade. In particular, states that are dependent on imports of staple food products, like cereals, are wary of the potential strategic value of these goods to exporters. I argue that this consideration, combined with the importing state’s ability to mitigate that risk through its own forms of political or economic leverage, will shape the behavior of the importing state and contribute to its potential for food security. In addition to cross-national analyses, I use case studies of the Gulf Cooperation Council states and Jordan to demonstrate how the political tools available to these importers affect their food security. The results of my analyses suggest that when import-dependent states have access to forms of political leverage, they are more likely to trade efficiently, thereby increasing their potential for food security.

Relevance: 10.00%

Abstract:

Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation to the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app’s Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of a synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions. Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate a large-scale model, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that the model is sufficient to allow SymDroid to execute a range of apps.
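
For intuition, here is a minimal Python sketch of the adaptive concretization idea described above: fix ("concretize") a few influential unknowns to random values, hand the smaller residual problems to a symbolic solver in parallel, and adapt the degree of concretization as sub-problems fail. The functions `influence` and `symbolic_solve` are hypothetical stand-ins, not Pasket's actual API, and the adaptation rule is deliberately simplified.

```python
# Hypothetical sketch (not Pasket's API) of adaptive concretization: fix the
# `degree` most influential unknowns to random concrete values, solve the
# residual symbolic sub-problems in parallel, and back off the amount of
# concretization if the sub-problems keep failing. A real implementation
# adapts `degree` using solving-time statistics rather than this simple rule.

import random
from concurrent.futures import ProcessPoolExecutor

def influence(unknown):
    """Stand-in ranking of how strongly an unknown constrains the search."""
    return unknown.get("fanout", 1)

def symbolic_solve(problem, fixed):
    """Stand-in for a SAT/SMT-backed synthesis call on the residual problem."""
    ...

def adaptive_concretize(problem, degree=4, workers=8, budget=64):
    unknowns = sorted(problem["unknowns"], key=influence, reverse=True)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while budget > 0:
            futures = []
            for _ in range(min(workers, budget)):
                # Partially concretize the most influential unknowns.
                fixed = {u["name"]: random.choice(u["domain"])
                         for u in unknowns[:degree]}
                futures.append(pool.submit(symbolic_solve, problem, fixed))
                budget -= 1
            solutions = [f.result() for f in futures if f.result() is not None]
            if solutions:
                return solutions[0]
            degree = max(1, degree - 1)  # concretize less aggressively next round
    return None
```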

Relevance: 10.00%

Abstract:

Terrestrial planets produce crusts as they differentiate. The Earth’s bi-modal crust, with a high-standing granitic continental crust and a low-standing basaltic oceanic crust, is unique in our solar system and links the evolution of the interior and exterior of this planet. Here I present geochemical observations to constrain processes accompanying crustal formation and evolution. My approach includes geochemical analyses, quantitative modeling, and experimental studies. The Archean crustal evolution project represents my perspective on when Earth’s continental crust began forming. In this project, I utilized critical element ratios in sedimentary records to track the evolution of the MgO content in the upper continental crust as a function of time. The early Archean subaerial crust had >11 wt. % MgO, whereas by the end of the Archean its composition had evolved to about 4 wt. % MgO, suggesting a transition of the upper crust from a basalt-like to a more granite-like bulk composition. Driving this fundamental change in upper crustal composition is the widespread operation of subduction processes, suggesting the onset of global plate tectonics at ~3 Ga. Three of the chapters in this dissertation leverage the use of Eu anomalies to track the recycling of crustal materials back into the mantle, where the Eu anomaly is a sensitive measure of the element’s behavior relative to the neighboring lanthanoids (Sm and Gd) during crustal differentiation. My compilation of Sm-Eu-Gd data for the continental crust shows that the average crust has a net negative Eu anomaly. This result requires recycling of Eu-enriched lower continental crust to the mantle. Mass balance calculations require that about three times the mass of the modern continental crust was returned into the mantle over Earth history, possibly via density-driven recycling. High-precision measurements of Eu/Eu* in selected primitive glasses of mid-ocean ridge basalt (MORB) from global MORs, combined with numerical modeling, suggest that the recycled lower crustal materials are not found within the MORB source and may have at least partially sunk into the lower mantle, where they can be sampled by hot spot volcanoes. The Lesser Antilles Li isotope project provides insights into the Li systematics of this young island arc, a representative section of proto-continental crust. Martinique Island lavas, to my knowledge, represent the only clear case in which crustal Li is recycled back into their mantle source, as documented by the isotopically light Li in Lesser Antilles sediments that feed into the fore arc subduction trench. By corollary, the mantle-like Li signal in global arc lavas is likely the result of broadly similar Li isotopic compositions between the upper mantle and bulk subducting sediments in most arcs. My PhD project on the Li diffusion mechanism in zircon is being carried out in extensive collaboration with multiple institutes and employs analytical, experimental and modeling studies. This ongoing project finds that REE and Y play an important role in controlling Li diffusion in natural zircons, with Li partially coupling to REE and Y to maintain charge balance. Access to state-of-the-art instrumentation presented critical opportunities to identify the mechanisms that cause elemental fractionation during laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analysis. My work here elucidates the elemental fractionation associated with plasma plume condensation during laser ablation and particle-ion conversion in the ICP.
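
For reference, the Eu anomaly mentioned above is conventionally quantified against the neighboring lanthanoids Sm and Gd; a common formulation (the dissertation may use a slightly different normalization) is:

```latex
\mathrm{Eu/Eu^{*}} \;=\; \frac{\mathrm{Eu_{N}}}{\sqrt{\mathrm{Sm_{N}} \times \mathrm{Gd_{N}}}}
```

where the subscript N denotes chondrite-normalized concentrations; values below 1 indicate a negative anomaly (Eu depletion) and values above 1 a positive one.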

Relevance: 10.00%

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time in identifying the scenarios leading to those faults and root-causing the problems. While software testing and verification have become increasingly automated, software debugging remains largely manual. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging, which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared among a number of test cases that fail for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach; it is integrated with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
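
As a minimal illustration of the core idea (not the dissertation's optimized algorithm), the sketch below intersects the execution traces of failing tests by repeatedly taking a longest common subsequence, so that what remains is a candidate faulty path shared by all failures; the trace entries are hypothetical statement identifiers.

```python
# Fold a longest common subsequence (LCS) over the traces of failing tests to
# narrow the search space for the faulty execution path.

from functools import reduce

def lcs(a, b):
    """Classic dynamic-programming longest common subsequence."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def candidate_faulty_path(traces):
    """Fold LCS over all failing-test traces; shorter output = narrower search."""
    return reduce(lcs, traces)

failing_traces = [  # hypothetical statement identifiers recorded during replay
    ["parse:1", "parse:7", "eval:3", "eval:9", "emit:2"],
    ["parse:1", "eval:3", "eval:9", "emit:2", "emit:5"],
    ["parse:1", "parse:4", "eval:3", "eval:9", "emit:2"],
]
print(candidate_faulty_path(failing_traces))
# ['parse:1', 'eval:3', 'eval:9', 'emit:2']
```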

Relevance: 10.00%

Abstract:

Peer-to-peer information sharing has fundamentally changed the customer decision-making process. Recent developments in information technologies have enabled digital sharing platforms to influence various granular aspects of the information sharing process. Despite the growing importance of digital information sharing, little research has examined the optimal design choices for a platform seeking to maximize returns from information sharing. My dissertation seeks to fill this gap. Specifically, I study novel interventions that can be implemented by the platform at different stages of the information sharing process. In collaboration with a leading for-profit platform and a non-profit platform, I conduct three large-scale field experiments to causally identify the impact of these interventions on customers’ sharing behaviors as well as the sharing outcomes. The first essay examines whether and how a firm can enhance social contagion by simply varying the message shared by customers with their friends. Using a large randomized field experiment, I find that i) adding only information about the sender’s purchase status increases the likelihood of recipients’ purchase; ii) adding only information about the referral reward increases recipients’ follow-up referrals; and iii) adding information about both the sender’s purchase as well as the referral rewards increases neither the likelihood of purchase nor follow-up referrals. I then discuss the underlying mechanisms. The second essay studies whether and how a firm can design unconditional incentives to engage customers who have already revealed a willingness to share. I conduct a field experiment to examine the impact of incentive design on the sender’s purchase as well as further referral behavior. I find evidence that incentive structure has a significant, but interestingly opposing, impact on both outcomes. The results also provide insights about senders’ motives in sharing. The third essay examines whether and how a non-profit platform can use mobile messaging to leverage recipients’ social ties to encourage blood donation. I design a large field experiment to causally identify the impact of different types of information and incentives on donors’ self-donation and group donation behavior. My results show that non-profits can stimulate a group effect and increase blood donation, but only with a group reward. Such group rewards work by motivating a different donor population. In summary, the findings from the three studies offer valuable insights for platforms and social enterprises on how to engineer digital platforms to create social contagion. The rich data from randomized experiments and complementary sources (archive and survey) also allow me to test the underlying mechanism at work. In this way, my dissertation provides both managerial implications and theoretical contributions to the understanding of peer-to-peer information sharing.

Relevance: 10.00%

Abstract:

Organized interests do not have direct control over the fate of their policy agendas in Congress. They cannot introduce bills, vote on legislation, or serve on House committees. If organized interests want to achieve virtually any of their legislative goals they must rely on and work through members of Congress. As an interest group seeks to move its policy agenda forward in Congress, then, one of the most important challenges it faces is the recruitment of effective legislative allies. Legislative allies are members of Congress who “share the same policy objective as the group” and who use their limited time and resources to advocate for the group’s policy needs (Hall and Deardorff 2006, 76). For all the financial resources that a group can bring to bear as it competes with other interests to win policy outcomes, it will be ineffective without the help of members of Congress who are willing to expend their time and effort to advocate for its policy positions (Bauer, Pool, and Dexter 1965; Baumgartner and Leech 1998b; Hall and Wayman 1990; Hall and Deardorff 2006; Hojnacki and Kimball 1998, 1999). Given the importance of legislative allies to interest group success, are some organized interests better able to recruit legislative allies than others? This question has received little attention in the literature. This dissertation offers an original theoretical framework describing both when we should expect some types of interests to generate more legislative allies than others and how interests vary in their effectiveness at mobilizing these allies toward effective legislative advocacy. It then tests these theoretical expectations on variation in group representation during the stage in the legislative process that many scholars have argued is crucial to policy influence: interest representation on legislative committees. The dissertation uncovers pervasive evidence that interests with a presence across more congressional districts stand a better chance of having legislative allies on their key committees. It also reveals that interests with greater amounts of leverage over jobs and economic investment will be better positioned to win more allies on key committees. In addition, interests with a policy agenda that closely overlaps with the jurisdiction of just one committee in Congress are more likely to have legislative allies on their key committees than are interests that have a policy agenda divided across many committee jurisdictions. In short, how groups are distributed across districts, the leverage that interests have over local jobs and economic investment, and how committee jurisdictions align with their policy goals affect their influence in Congress.

Relevance: 10.00%

Abstract:

Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes about the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question in mind and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and on the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts. This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
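
The following Python sketch illustrates the flavor of high-volume hypothesis testing described above, under simplifying assumptions: each per-record metric is compared across two cohorts with a permutation test, and a Benjamini-Hochberg correction controls the false discovery rate across the many metrics tested. The metric names and data are hypothetical, and the dissertation's framework covers far more metric types than this.

```python
# Illustrative sketch of running many cohort-comparison metrics at once and
# keeping only those that survive a false-discovery-rate correction.

import random
from statistics import mean

random.seed(0)

def permutation_p(a, b, n_perm=2000):
    """Two-sided permutation test on the difference of means."""
    observed = abs(mean(a) - mean(b))
    pooled, n_a, hits = a + b, len(a), 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def benjamini_hochberg(pvals, alpha=0.05):
    """Return the metrics whose p-values pass the BH step-up criterion."""
    ranked = sorted(pvals.items(), key=lambda kv: kv[1])
    m, passing = len(ranked), set()
    for i, (name, p) in enumerate(ranked, start=1):
        if p <= alpha * i / m:
            passing = {n for n, _ in ranked[:i]}  # reject all up to rank i
    return passing

# Hypothetical per-record metrics for two cohorts (e.g., durations, counts).
cohort_a = {"duration_days": [3, 5, 4, 6, 5], "num_events": [7, 9, 8, 10, 9]}
cohort_b = {"duration_days": [8, 9, 7, 10, 9], "num_events": [8, 9, 7, 9, 10]}

pvals = {metric: permutation_p(cohort_a[metric], cohort_b[metric])
         for metric in cohort_a}
print(pvals)                      # p-value per metric
print(benjamini_hochberg(pvals))  # metrics with significant cohort differences
```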

Relevance: 10.00%

Abstract:

Dinoflagellates possess large genomes in which most genes are present in many copies. This has made studies of their genomic organization and phylogenetics challenging. Recent advances in sequencing technology have made deep sequencing of dinoflagellate transcriptomes feasible. This dissertation investigates the genomic organization of dinoflagellates to better understand the challenges of assembling dinoflagellate transcriptomic and genomic data from short read sequencing methods, and develops new techniques that utilize deep sequencing data to identify orthologous genes across a diverse set of taxa. To better understand the genomic organization of dinoflagellates, a genomic cosmid clone of the tandemly repeated gene Alcohol Dehydrogenase (AHD) was sequenced and analyzed. The organization of this clone was found to be counter to prevailing hypotheses of genomic organization in dinoflagellates. Further, a new non-canonical splicing motif was described that could greatly improve the automated modeling and annotation of genomic data. A custom phylogenetic marker discovery pipeline, incorporating methods that leverage the statistical power of large data sets, was written. A case study on Stramenopiles was undertaken to test its utility in resolving relationships between known groups as well as the phylogenetic affinity of seven unknown taxa. The pipeline generated a set of 373 genes useful as phylogenetic markers that successfully resolved relationships among the major groups of Stramenopiles, and placed all unknown taxa on the tree with strong bootstrap support. This pipeline was then used to discover 668 genes useful as phylogenetic markers in dinoflagellates. Phylogenetic analysis of 58 dinoflagellates, using this set of markers, produced a phylogeny with good support of all branches. The Suessiales were found to be sister to the Peridinales. The Prorocentrales formed a monophyletic group with the Dinophysiales that was sister to the Gonyaulacales. The Gymnodinales were found to be paraphyletic, forming three monophyletic groups. While this pipeline was used to find phylogenetic markers, it will likely also be useful for finding orthologs of interest for other purposes, for the discovery of horizontally transferred genes, and for the separation of sequences in metagenomic data sets.
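
As a hedged sketch of one common step in ortholog-discovery pipelines of this general kind (not necessarily the method used in this dissertation), the code below pairs genes across taxa when each is the other's best-scoring match ("reciprocal best hits"); the `scores` table is a hypothetical stand-in for the output of an all-vs-all sequence search.

```python
# Toy reciprocal-best-hit ortholog candidates from a (query, target, score) table.

from collections import defaultdict

def best_hits(scores):
    """For each gene, keep its highest-scoring match in every other taxon."""
    best = defaultdict(dict)  # (gene, taxon) -> {other_taxon: (match, score)}
    for query, target, score in scores:
        if query[1] == target[1]:
            continue  # skip within-taxon hits
        current = best[query].get(target[1])
        if current is None or score > current[1]:
            best[query][target[1]] = (target, score)
    return best

def reciprocal_best_hits(scores):
    """Pairs where each gene is the other's best hit in the partner taxon."""
    best = best_hits(scores)
    pairs = set()
    for gene, hits in best.items():
        for match, _ in hits.values():
            back = best.get(match, {}).get(gene[1])
            if back is not None and back[0] == gene:
                pairs.add(frozenset((gene, match)))
    return pairs

# Hypothetical similarity table: ((gene, taxon), (gene, taxon), score).
scores = [
    (("g1", "A"), ("h1", "B"), 95.0),
    (("h1", "B"), ("g1", "A"), 93.0),
    (("g2", "A"), ("h1", "B"), 60.0),
]
print(reciprocal_best_hits(scores))  # one pair: g1 (taxon A) <-> h1 (taxon B)
```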

Relevance: 10.00%

Abstract:

The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g., photos, identities, genomes) at risk. To protect users' data privacy, there is a growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g., secure multiparty computation) and trusted hardware (e.g., secure processors) to instantiate a “secure” abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g., memory accesses, timing, and termination) in such a “secure” abstract machine may potentially leak highly sensitive information, including cryptographic keys that form the root of trust for the secure systems. This thesis broadly expands the investigation of a research direction called trace oblivious computation, where programming language techniques are employed to prevent side channel information leakage. We demonstrate the feasibility of trace oblivious computation by formalizing and building several systems, including GhostRider, a hardware-software co-design that provides a hardware-based trace oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that helps programmers develop applications. All of these systems enjoy formal security guarantees while performing better than prior systems by one to several orders of magnitude.
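
As a toy illustration of the access-pattern side channel that trace oblivious computation targets (this is not code from GhostRider, SCVM, or ObliVM): the naive lookup below touches only the secret-dependent entry, so an adversary observing memory addresses learns the secret, whereas the oblivious version touches every slot in a fixed order and selects the value arithmetically, yielding an identical trace for all inputs. Practical systems typically rely on oblivious RAM rather than linear scans to keep overheads manageable.

```python
def leaky_lookup(table, secret_index):
    # The address stream reveals secret_index to anyone watching memory accesses.
    return table[secret_index]

def oblivious_lookup(table, secret_index):
    # Every index is touched on every call, in the same order, so the observable
    # trace is independent of the secret; the selection is done arithmetically
    # instead of with a secret-dependent branch.
    result = 0
    for i, value in enumerate(table):
        match = int(i == secret_index)          # 0/1 selector
        result = match * value + (1 - match) * result
    return result

table = [10, 20, 30, 40]
assert leaky_lookup(table, 2) == oblivious_lookup(table, 2) == 30
```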

Relevance: 10.00%

Abstract:

This dissertation explores the effect of innovative knowledge transfer across supply chain partners. My research seeks to understand the manner by which a firm is able to benefit from the innovative capabilities of its supply chain partners and utilize the external knowledge they hold to increase its own levels of innovation. Specifically, I make use of patent data as a proxy for firm-level innovation and develop both independent and dependent variables from the data contained within the patent filings. I further examine the means by which key dyadic and portfolio supply chain relationship characteristics moderate the relationship between supplier innovation and buyer innovation. I investigate factors such as the degree of transactional reciprocity between the buyer and supplier, the similarity of the firms’ knowledge bases, and specific chain characteristics (e.g., geographic propinquity) to provide greater understanding of the means by which the transfer of innovative knowledge across firms in a supply chain can be enhanced or inhibited. This dissertation spans three essays to provide insights into the role that supply chain relationships play in affecting a focal firm’s level of innovation. While innovation has been at the core of a wide body of research, very little empirical work exists that considers the role of vertical buyer-supplier relationships on a firm’s ability to develop new and novel innovations. I begin by considering the fundamental unit of analysis within a supply chain, the buyer-supplier dyad. After developing initial insights based on the interactions between singular buyers and suppliers, essay two extends the analysis to consider the full spectrum of a buyer’s supply base by aggregating the individual buyer-supplier dyad level data into firm-supply network level data. Through this broader level of analysis, I am able to examine how the relational characteristics between a buyer firm and its supply base affect its ability to leverage the full portfolio of its suppliers’ innovative knowledge. Finally, in essay three I further extend the analysis to explore the means by which a buyer firm can use its suppliers to enhance its ability to access distant knowledge held by other organizations that the buyer is only connected to indirectly through its suppliers.

Relevance: 10.00%

Abstract:

While fault-tolerant quantum computation might still be years away, analog quantum simulators offer a way to leverage current quantum technologies to study classically intractable quantum systems. Cutting-edge quantum simulators such as those utilizing ultracold atoms are beginning to study physics that surpasses what is classically tractable. As the system sizes of these quantum simulators increase, there are also concurrent gains in the complexity and types of Hamiltonians which can be simulated. In this work, I describe advances toward the realization of an adaptable, tunable quantum simulator capable of surpassing classical computation. We simulate long-range Ising and XY spin models, which can have arbitrary global transverse and longitudinal fields in addition to individual transverse fields, using a linear chain of up to 24 171Yb+ ions confined in a linear rf Paul trap. Each qubit is encoded in the ground-state hyperfine levels of an ion. Spin-spin interactions are engineered by the application of spin-dependent forces from laser fields, coupling spin to motion. Each spin can be read independently using state-dependent fluorescence. The results here add yet more tools to an ever-growing quantum simulation toolbox. One of many challenges has been the coherent manipulation of individual qubits. By using a surprisingly large fourth-order Stark shift in a clock-state qubit, we demonstrate an ability to individually manipulate spins and apply independent Hamiltonian terms, greatly increasing the range of quantum simulations which can be implemented. As quantum systems grow beyond the capability of classical numerics, a constant question is how to verify a quantum simulation. Here, I present measurements which may provide useful metrics for large system sizes and demonstrate them in a system of up to 24 ions during a classically intractable simulation. The observed values are consistent with extremely large entangled states, with as much as ~95% of the system entangled. Finally, we use many of these techniques in order to generate a spin Hamiltonian which fails to thermalize during experimental time scales due to a metastable state which is often called prethermal. The observed prethermal state is a new form of prethermalization which arises due to long-range interactions and open boundary conditions, even in the thermodynamic limit. This prethermalization is observed in a system of up to 22 spins. We expect that system sizes can be extended up to 30 spins with only minor upgrades to the current apparatus. These results emphasize that as the technology improves, the techniques and tools developed here can potentially be used to perform simulations which will surpass the capability of even the most sophisticated classical techniques, enabling the study of a whole new regime of quantum many-body physics.
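
For reference, a common form of the laser-engineered long-range Ising Hamiltonian in such trapped-ion simulators (the dissertation's exact conventions and sign choices may differ) is:

```latex
H \;=\; \sum_{i<j} J_{ij}\,\sigma^{x}_{i}\sigma^{x}_{j}
      \;+\; B_{z}\sum_{i}\sigma^{z}_{i}
      \;+\; B_{x}\sum_{i}\sigma^{x}_{i}
      \;+\; \sum_{i} b_{i}\,\sigma^{z}_{i},
\qquad
J_{ij} \approx \frac{J_{0}}{|i-j|^{\alpha}}
```

Here the sigma terms are Pauli operators on spin i, B_z and B_x are the global transverse and longitudinal fields, the b_i are the individually addressed transverse terms, and the approximately power-law coupling J_ij (with the range exponent alpha typically tunable via the laser detuning from the motional modes) gives the long-range character described above.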

Relevance: 10.00%

Abstract:

This dissertation examines black officeholding in Wilmington, North Carolina, from emancipation in 1865 through 1876, when Democrats gained control of the state government and brought Reconstruction to an end. It considers the struggle for black officeholding in the city, the black men who held office, the dynamic political culture of which they were a part, and their significance in the day-to-day lives of their constituents. Once they were enfranchised, black Wilmingtonians, who constituted a majority of the city’s population, used their voting leverage to negotiate the election of black men to public office. They did so by using Republican factionalism or what the dissertation argues was an alternative partisanship. Ultimately, it was not through factional divisions but through voter suppression, gerrymandering, and constitutional revisions that made local government appointive rather than elective that Democrats at the state level chipped away at the political gains black Wilmingtonians had made.