5 results for one-time passwords
in DRUM (Digital Repository at the University of Maryland)
Abstract:
In the Twentieth Century, the proliferation of cellists and the exceptional development of cello technique, combined with composers' acceptance of the challenges posed by these developments, led many British composers to enrich the cello concert repertoire. A great number of compositions written for the cello in Twentieth-Century England have long been neglected. In comparison with their other works in the genres of concerto, symphony, and opera, the works for cello by the prominent Twentieth-Century English composers Elgar, Walton, and Britten are relatively unknown, except for Elgar's cello concerto. There are also many lesser-known composers, such as Delius, Bax, Bridge, and Clarke, who flourished in the first half of the century but eventually became disregarded. Some reasons for this neglect may be as follows: the reluctance of the English musical establishment toward new trends around the turn of the century; a lack of readily available editions of these composers' works; an over-abundance of fine composers at one time; and lastly, an overly individualistic approach to the music that restricted public appreciation and recognition. Encountering a recording of the Walton cello concerto prompted me to study the neglected Twentieth-Century English cello repertoire further. Many works by the above-mentioned composers still have not been fully valued in the cello repertoire. For this reason, the purpose of this project was to inspire cellists to learn, broaden their repertoire, and appreciate the beauty of Twentieth-Century English cello literature. As part of the doctoral performance project, three recitals featuring works by six English composers were performed. My collaborator in all three recitals was pianist Eunae Ko. The first recital included the Sonata for cello and piano by Frank Bridge and the Concerto by William Walton.
The second recital comprised relatively unknown cello works: the Sonatina in D major by Arnold Bax, the Romance by Frederick Delius, and the Sonata, Op. 40, by Rebecca Clarke. The third recital consisted of Folk-Tale by Arnold Bax and the Symphony for Cello and Orchestra, Op. 68, by Benjamin Britten.
Abstract:
During the summer of 1994, Archaeology in Annapolis conducted archaeological investigations of the city block bounded by Franklin, South and Cathedral Streets in the city of Annapolis. This Phase III excavation was conducted to identify subsurface cultural resources in the impact area associated with the proposed construction of the Anne Arundel County Courthouse addition. This impact area included both the upper and lower parking lots used by Courthouse employees. Investigations took the form of mechanical trenching and hand-excavated units. Excavations in the upper lot yielded significant information concerning the interior area of the block. Known as Bellis Court, this series of rowhouses was constructed in the late nineteenth century and used as rental properties by African-Americans. The dwellings remained until the middle of the twentieth century, when they were demolished in preparation for the construction of a Courthouse addition. Portions of the foundation of a house owned by William H. Bellis in the 1870s were also exposed in this area. Construction of this house was begun by William Nicholson around 1730 and completed by Daniel Dulany in 1732/33. It was demolished in 1896 by James Munroe, a Trustee for Bellis. Excavations in the upper lot also revealed the remains of a late seventeenth-/early eighteenth-century wood-lined cellar, believed to be part of the earliest known structure on Lot 58. After an initially rapid deposition of fill around 1828, this cellar was gradually covered with soil throughout the remainder of the nineteenth century. The fill deposit in the cellar feature yielded a mixed assemblage of artifacts that included sherds of early materials such as North Devon gravel-tempered earthenware, North Devon sgraffito and Northern Italian slipware, along with creamware, pearlware and whiteware.
In the lower parking lot, numerous artifacts were recovered from yard scatter associated with the houses that at one time fronted along Cathedral Street and were occupied by African-Americans. An assemblage of late seventeenth-/early eighteenth-century materials and several slag deposits from an early forge were recovered from this second area of study. The materials associated with the forge, including portions of a crucible, provided evidence of some of the earliest industry in Annapolis. Investigations in both the upper and lower parking lots added to the knowledge of the changing landscape within the project area, including a prevalence of open space in early periods, a surprising survival of impermanent structures, and a gradual regrading and filling of the block with houses and interior courts. Excavations at the Anne Arundel County Courthouse proved this to be a multi-component site, rich in cultural resources from Annapolis' Early Settlement Period through its Modern Period (as specified by Maryland's Comprehensive Historic Preservation Plan (Weissman 1986)). This report provides detailed interpretations of the archaeological findings of these Phase III investigations.
Abstract:
The last two decades have seen many exciting examples of tiny robots, from a few cm³ to less than one cm³. Although individually limited, a large group of these robots has the potential to work cooperatively and accomplish complex tasks. Two examples from nature that exhibit this type of cooperation are ant and bee colonies. Tiny robots have the potential to assist in applications like search and rescue, military scouting, infrastructure and equipment monitoring, nano-manufacture, and possibly medicine. Most of these applications require the high level of autonomy that has been demonstrated by large robotic platforms, such as the iRobot and Honda ASIMO. However, as robot size shrinks, current approaches to achieving the necessary functions are no longer valid. This work focused on challenges associated with electronics and fabrication. We addressed three major technical hurdles inherent to current approaches: 1) the difficulty of compact integration; 2) the need for real-time, power-efficient computation; 3) the unavailability of commercial tiny actuators and motion mechanisms. The aim of this work was to provide enabling hardware technologies to achieve autonomy in tiny robots. We proposed a decentralized application-specific integrated circuit (ASIC) in which each component is responsible for its own operation and autonomy to the greatest extent possible. The ASIC consists of electronics modules for the fundamental functions required to fulfill the desired autonomy: actuation, control, power supply, and sensing. The actuators and mechanisms could potentially be post-fabricated on the ASIC directly. This design makes for a modular architecture.
The following components were shown to work in physical implementations or simulations: 1) a tunable motion controller for ultralow-frequency actuation; 2) a nonvolatile memory and programming circuit to achieve automatic, one-time programming; 3) a high-voltage circuit with the highest reported breakdown voltage in standard 0.5 μm CMOS; 4) thermal actuators fabricated using a CMOS-compatible process; 5) a low-power mixed-signal computational architecture for a robotic dynamics simulator; 6) a frequency-boost technique to achieve low jitter in ring oscillators. These contributions will be generally enabling for other systems with strict size and power constraints, such as wireless sensor nodes.
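The decentralized, modular design described above can be illustrated with a small software analogue. The sketch below is not the dissertation's ASIC design; all class and parameter names (`PowerSupply`, `ThermalActuator`, the microwatt budget) are hypothetical, chosen only to show how each module can own its state and halt itself when its local constraints (here, a power budget) are exhausted:

```python
# Toy analogue of a decentralized modular architecture: each module is
# responsible for its own operation; nothing coordinates them centrally.
# All names and numbers are illustrative, not from the dissertation.

class PowerSupply:
    """Power module: grants draw requests only within a fixed budget."""
    def __init__(self, budget_uw):
        self.budget_uw = budget_uw   # total microwatt budget
        self.used_uw = 0.0

    def draw(self, uw):
        if self.used_uw + uw > self.budget_uw:
            return False             # deny requests beyond the budget
        self.used_uw += uw
        return True

class ThermalActuator:
    """Actuation module: stand-in for a thermal actuator stepped slowly."""
    def __init__(self, power, step_cost_uw):
        self.power = power           # depends on the power module's API only
        self.step_cost_uw = step_cost_uw
        self.position = 0

    def step(self):
        if self.power.draw(self.step_cost_uw):
            self.position += 1       # one motion increment
            return True
        return False                 # budget exhausted: module halts itself

supply = PowerSupply(budget_uw=20.0)
leg = ThermalActuator(supply, step_cost_uw=6.0)
steps = sum(leg.step() for _ in range(5))   # only 3 of 5 steps fit the budget
```

The point of the modularity is that each module enforces its own constraint locally, so modules can be composed (or post-fabricated) without a central controller mediating every interaction.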
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., by plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use come from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs.
We formulate a decision-making process that selects pieces of the input sequentially, with the selection adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
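The cost/accuracy trade-off in this incremental setting can be sketched as a simple episode loop: after each piece of input, a policy chooses between a STOP action (commit to a prediction now) and reading more. The sketch below is a minimal illustration under assumed names (`run_episode`, `read_cost`, the confidence threshold) and a toy predictor; it is not the dissertation's learned MDP policy:

```python
# Hedged sketch of sequential prediction: input arrives one piece at a
# time, and a stopping rule trades cost (input consumed) for accuracy.
# All names are illustrative; the dissertation learns this policy via
# imitation/reinforcement learning rather than a fixed threshold.

def run_episode(tokens, predictor, threshold, read_cost=1.0):
    """Consume tokens sequentially; stop once the predictor is confident."""
    seen, cost = [], 0.0
    label = None
    for tok in tokens:
        seen.append(tok)
        cost += read_cost            # pay for each piece of input read
        label, confidence = predictor(seen)
        if confidence >= threshold:  # STOP action: predict early
            return label, cost
    return label, cost               # forced stop at end of input

def majority_sign(seen):
    """Toy batch model: majority sign of numbers seen so far, with a
    confidence that grows in the margin."""
    s = sum(seen)
    label = 1 if s >= 0 else -1
    confidence = min(1.0, abs(s) / 3.0)
    return label, confidence

# Stops after 3 of 5 tokens: same label as reading everything, lower cost.
label, cost = run_episode([1, 1, 1, -1, 1], majority_sign, threshold=0.9)
```

Sweeping the threshold traces out exactly the kind of cost-accuracy Pareto curve the abstract describes as the evaluation criterion.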
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. A cascade is then set off by these initial adopters, and other people start to adopt the product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) Problem. In the WTSS problem, the diffusion can take place over as many time periods as needed, and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial-time algorithm.
More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
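The cascade process underlying all four problems can be made concrete with a small threshold-diffusion simulation. The sketch below is illustrative only (function and variable names are hypothetical, and it evaluates a given seed set rather than solving the WTSS optimization): each person adopts once the number of adopting neighbors meets their threshold, and rounds repeat until no new adoptions occur:

```python
# Illustrative threshold cascade: seeds adopt first, then influence
# spreads until a fixed point. This simulates diffusion for a given
# seed set; the dissertation's contribution is finding optimal seeds
# (and partial incentives) exactly via integer programming.

def cascade(neighbors, thresholds, seeds):
    """Run diffusion to a fixed point; return the final adopter set."""
    adopters = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in adopters:
                influence = sum(1 for u in neighbors[v] if u in adopters)
                if influence >= thresholds[v]:
                    adopters.add(v)      # threshold met: v adopts
                    changed = True
    return adopters

# Path graph 0-1-2-3 with unit thresholds: seeding node 0 alone
# eventually converts the entire network.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
t = {0: 1, 1: 1, 2: 1, 3: 1}
full = cascade(g, t, {0})
```

Restricting the `while` loop to a single round would model the one-time-period variants (PIDS and 1TPLCIP) described above, where only direct neighbors of the seeds can adopt.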