8 results for complexity metrics
in DRUM (Digital Repository at the University of Maryland)
Abstract:
This was presented during the 2nd annual Library Research and Innovation Practices at the University of Maryland Libraries, McKeldin Library, on June 8, 2016.
Abstract:
This presentation was one of four given during a Mid-Atlantic Regional Archives Conference session on April 15, 2016. Digitization of collections can help to improve internal workflows, make materials more accessible, and create new and engaging relationships with users. Laurie Gemmill Arp will discuss the LYRASIS Digitization Collaborative, created to assist institutions with their digitization needs, and how it has worked to help institutions increase connections with users. Robin Pike from the University of Maryland will discuss how they factor requests for access into selection for digitization and how they track the use of digitized materials. Laura Drake Davis of James Madison University will discuss the establishment of a formal digitization program, its impact on users, and the resulting increased use of their collections. Linda Tompkins-Baldwin will discuss Digital Maryland's partnership with the Digital Public Library of America to provide access to archives held by institutions without a digitization program.
Abstract:
In this work, we consider several instances of the following problem: "how complicated can the isomorphism relation for countable models be?" Using the Borel reducibility framework, we investigate this question with regard to the space of countable models of particular complete first-order theories. We also investigate to what extent this complexity is mirrored in the number of back-and-forth inequivalent models of the theory. We consider this question for two large and related classes of theories. First, we consider o-minimal theories, showing that if T is o-minimal, then the isomorphism relation is either Borel complete or Borel. Further, if it is Borel, we characterize exactly which values can occur, and when they occur. In all cases, Borel completeness implies lambda-Borel completeness for all lambda. Second, we consider colored linear orders, which are (complete theories of) a linear order expanded by countably many unary predicates. We discover the same characterization as with o-minimal theories, taking the same values, with the exception that all finite values are possible except two. We characterize exactly when each possibility occurs, which is similar to the o-minimal case. Additionally, we extend Schirrman's theorem, showing that if the language is finite, then T is countably categorical or Borel complete. As before, in all cases Borel completeness implies lambda-Borel completeness for all lambda.
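For orientation only (background material, not a result stated in the abstract), the Borel reducibility framework compares equivalence relations E and F on Polish spaces X and Y via the following standard definition, written here in LaTeX notation:

    % Standard definition of Borel reducibility (background, not from the thesis):
    % E and F are equivalence relations on Polish spaces X and Y.
    E \le_B F \iff \exists\, f\colon X \to Y \text{ Borel such that }
        x_1 \mathrel{E} x_2 \iff f(x_1) \mathrel{F} f(x_2)

Under this ordering, "Borel complete" means the isomorphism relation is maximal: every isomorphism relation on countable structures in a countable language Borel-reduces to it.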
Abstract:
As usage metrics continue to attain an increasingly central role in library system assessment and analysis, librarians tasked with system selection, implementation, and support are driven to identify metric approaches that demand less technical complexity while offering greater data granularity. Such approaches allow systems librarians to present evidence-based claims of platform usage behaviors while reducing the resources necessary to collect such information, offering both a novel approach to real-time user analysis and a dual benefit in active and preventative cost reduction. As part of the DSpace implementation for the MD SOAR initiative, the Consortial Library Application Support (CLAS) division has begun test implementation of the Google Tag Manager analytic system in an attempt to collect custom analytical dimensions to track author- and university-specific download behaviors. Building on the work of Conrad, CLAS seeks to demonstrate that the GTM approach to custom analytics provides granular, metadata-based usage statistics in an approach that will prove extensible for additional statistical gathering in the future. This poster will discuss the methodology used to develop these custom tag approaches, the benefits of using the GTM model, and the risks and benefits associated with further implementation.
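To make the general approach concrete, below is a minimal sketch of the kind of dataLayer push a GTM custom tag can consume to record item-level metadata as custom dimensions. The event and field names (item_download, itemAuthor, itemInstitution, itemHandle) are illustrative assumptions, not the actual MD SOAR or DSpace configuration.

    // Hedged sketch (TypeScript): pushing item-level metadata into the GTM
    // dataLayer so a custom tag can map the fields to analytics custom dimensions.
    // All names here are illustrative assumptions, not the MD SOAR configuration.
    declare global {
      interface Window { dataLayer: Record<string, unknown>[]; }
    }

    export function trackDownload(author: string, institution: string, handle: string): void {
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({
        event: "item_download",       // trigger name assumed to be configured in GTM
        itemAuthor: author,           // mapped to an author-level custom dimension
        itemInstitution: institution, // mapped to a university-level custom dimension
        itemHandle: handle,           // repository handle of the downloaded item
      });
    }

A tag configured in GTM would then read these dataLayer variables and forward them with each download event, which is what enables the author- and university-specific reporting described above.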
Abstract:
Geographically isolated wetlands, those entirely surrounded by uplands, provide numerous ecological functions, some of which are dependent on the degree to which they are hydrologically connected to nearby waters. There is a growing need for field-validated, landscape-scale approaches for classifying wetlands based on their expected degree of connectivity with stream networks. During the 2015 water year, flow duration was recorded in non-perennial streams (n = 23) connecting forested wetlands and nearby perennial streams on the Delmarva Peninsula (Maryland, USA). Field and GIS-derived landscape metrics (indicators of catchment, wetland, non-perennial stream, and soil characteristics) were assessed as predictors of wetland-stream connectivity (duration, seasonal onset and offset dates). Connection duration was most strongly correlated with non-perennial stream geomorphology and wetland characteristics. A final GIS-based stepwise regression model (adj-R2 = 0.74, p < 0.0001) described wetland-stream connection duration as a function of catchment area, wetland area and number, and soil available water storage.
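As a reading aid, the final stepwise regression model described above has the generic multiple-regression form sketched below in LaTeX notation; the beta coefficients are placeholders, not values estimated in the study.

    % Hedged sketch of the generic form of the reported stepwise regression model;
    % the beta coefficients are placeholders, not estimates from the study.
    \mathrm{ConnectionDuration} = \beta_0 + \beta_1\,\mathrm{CatchmentArea}
        + \beta_2\,\mathrm{WetlandArea} + \beta_3\,\mathrm{WetlandCount}
        + \beta_4\,\mathrm{SoilAWS} + \varepsilon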
Abstract:
Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbors in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part, we address the practical aspects of achieving faster retrieval with binary codes as input representations in the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all images of the same class to, ideally, a unique binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and the unique code for all same-class images as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where the Hamming distance is computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we address supervised retrieval by taking into account the relationships between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e., we want to retrieve all same-class images first, followed by related-class images, before different-class images. We learn such relationship-aware binary codes by minimizing the difference between the inner product of the binary codes and the similarity between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes, as it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take into account related-class retrieval results and show significant gains over the state of the art. High-dimensional descriptors such as Fisher Vectors or Vectors of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes, to reduce storage complexity. In this approach, we deviate from adopting traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors.
A practical hierarchical model that compresses such high-dimensional vectors using divide-and-conquer techniques, via the Random Select and Adjust (RSA) procedure, is presented. We show that our proposed high-dimensional binary codes outperform the binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting where no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and each video in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost performance using concept features from other modalities.
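To illustrate the class-based Hamming metric described in the first part, the sketch below ranks classes by the Hamming distance between a query code and one code per class, rather than comparing against every database item. This is a simplified illustration under an assumed 32-bit code width, not the thesis implementation.

    // Hedged sketch (TypeScript): retrieval with 'class binary codes'.
    // Distances are computed only to one code per class, not to every item.
    function hammingDistance(a: number, b: number): number {
      let x = (a ^ b) >>> 0;
      let count = 0;
      while (x !== 0) {
        x &= x - 1;  // clear the lowest set bit
        count++;
      }
      return count;
    }

    // Return class labels ordered by Hamming distance of their class code to the query.
    function rankClasses(queryCode: number, classCodes: Map<string, number>): string[] {
      return [...classCodes.entries()]
        .sort((a, b) => hammingDistance(queryCode, a[1]) - hammingDistance(queryCode, b[1]))
        .map(([label]) => label);
    }

    // Usage with made-up 4-bit codes: the nearest class code comes first.
    const classes = new Map<string, number>([["cat", 0b1010], ["dog", 0b1110], ["car", 0b0001]]);
    console.log(rankClasses(0b1011, classes)); // "cat" ranks first (distance 1)

Because the number of classes is typically far smaller than the number of database items, this is what makes the class-based metric fast for large databases.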
Abstract:
The performance, energy efficiency, and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed, and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration presents potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously-integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and the reliability of through-silicon vias (TSVs). Transistor stacking increases power density, current density, and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical TSVs that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal, and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters, and the multitude of metrics of interest to the designer (i.e., power, performance, temperature, and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
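To illustrate why a multi-domain view changes the outcome of design space exploration, the generic sketch below keeps only configurations that are Pareto-optimal across power, latency, and temperature at once, rather than optimizing any single metric in isolation. The configuration shape and metric names are assumptions for illustration, not the dissertation's framework.

    // Hedged sketch (TypeScript): multi-objective filtering of candidate 3D configurations.
    // Metric names and the Config shape are illustrative assumptions.
    interface Config {
      name: string;
      powerW: number;     // lower is better
      latencyNs: number;  // lower is better
      peakTempC: number;  // lower is better
    }

    // c1 dominates c2 if it is no worse in every metric and strictly better in at least one.
    function dominates(c1: Config, c2: Config): boolean {
      const noWorse =
        c1.powerW <= c2.powerW && c1.latencyNs <= c2.latencyNs && c1.peakTempC <= c2.peakTempC;
      const better =
        c1.powerW < c2.powerW || c1.latencyNs < c2.latencyNs || c1.peakTempC < c2.peakTempC;
      return noWorse && better;
    }

    // Keep only configurations that no other configuration dominates across all domains.
    function paretoFront(configs: Config[]): Config[] {
      return configs.filter(c => !configs.some(other => dominates(other, c)));
    }

A single-domain optimizer would discard points on this front that happen to be weak in its one metric, which is the kind of cross-domain blind spot the co-design paradigm is meant to remove.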
Abstract:
While fault-tolerant quantum computation might still be years away, analog quantum simulators offer a way to leverage current quantum technologies to study classically intractable quantum systems. Cutting-edge quantum simulators, such as those utilizing ultracold atoms, are beginning to study physics that surpasses what is classically tractable. As the system sizes of these quantum simulators increase, there are also concurrent gains in the complexity and types of Hamiltonians which can be simulated. In this work, I describe advances toward the realization of an adaptable, tunable quantum simulator capable of surpassing classical computation. We simulate long-ranged Ising and XY spin models, which can have arbitrary global transverse and longitudinal fields in addition to individual transverse fields, using a linear chain of up to 24 171Yb+ ions confined in a linear rf Paul trap. Each qubit is encoded in the ground-state hyperfine levels of an ion. Spin-spin interactions are engineered by the application of spin-dependent forces from laser fields, coupling spin to motion. Each spin can be read out independently using state-dependent fluorescence. The results here add yet more tools to an ever-growing quantum simulation toolbox. One of many challenges has been the coherent manipulation of individual qubits. By using a surprisingly large fourth-order Stark shift in a clock-state qubit, we demonstrate an ability to individually manipulate spins and apply independent Hamiltonian terms, greatly increasing the range of quantum simulations which can be implemented. As quantum systems grow beyond the capability of classical numerics, a constant question is how to verify a quantum simulation. Here, I present measurements which may provide useful metrics for large system sizes and demonstrate them in a system of up to 24 ions during a classically intractable simulation. The observed values are consistent with extremely large entangled states, with as much as ~95% of the system entangled. Finally, we use many of these techniques to generate a spin Hamiltonian which fails to thermalize during experimental time scales due to a metastable state which is often called prethermal. The observed prethermal state is a new form of prethermalization which arises due to long-range interactions and open boundary conditions, even in the thermodynamic limit. This prethermalization is observed in a system of up to 22 spins. We expect that system sizes can be extended up to 30 spins with only minor upgrades to the current apparatus. These results emphasize that as the technology improves, the techniques and tools developed here can potentially be used to perform simulations which will surpass the capability of even the most sophisticated classical techniques, enabling the study of a whole new regime of quantum many-body physics.
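For context, the long-range Ising model with global and individual fields realized in such trapped-ion simulators is conventionally written in the form sketched below (a standard textbook form in LaTeX notation; the coupling strength J_0, the exponent alpha, and the field amplitudes are experiment-dependent and not taken from the abstract):

    % Standard form of the long-range Ising Hamiltonian with global transverse (B_z),
    % global longitudinal (B_x), and individual transverse (b_i) fields; sigma are
    % Pauli operators on spin i. Parameters are experiment-dependent placeholders.
    H = \sum_{i<j} J_{ij}\,\sigma^x_i \sigma^x_j
        + B_z \sum_i \sigma^z_i
        + B_x \sum_i \sigma^x_i
        + \sum_i b_i\,\sigma^z_i,
    \qquad J_{ij} \approx \frac{J_0}{|i-j|^{\alpha}}

The site-resolved terms b_i are the individually controlled fields that the fourth-order Stark shift technique described above makes accessible.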