982 results for Digital storage


Relevance: 30.00%

Abstract:

Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of the Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single-chip cameras and the JVC:KYF58 (767 × 569) three-chip camera. The images were stored in TIFF format and further copies created with reduced resolution or compression. The images were then ranked for clarity on a 15-inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. A theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: Theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25 × magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p<0.005) or the image was compressed as a BMP or <50% quality JPEG (p<0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
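As an illustration of the kind of theoretical calculation described above, the sketch below applies a Nyquist criterion of at least two pixels across the smallest feature of clinical interest; the field width and feature size used are assumed values for illustration, not figures from the paper.

```python
# Minimal sketch of a minimum-resolution calculation, assuming a Nyquist
# criterion of two pixel samples across the smallest resolvable feature.
# The field width and feature size below are illustrative assumptions.

def min_horizontal_pixels(field_width_mm: float, smallest_feature_mm: float) -> int:
    pixels_per_feature = 2  # Nyquist: >= 2 samples across the feature
    return round(field_width_mm / smallest_feature_mm * pixels_per_feature)

# Hypothetical values: a 12 mm wide field and 40 um features give a
# figure of the same order as the paper's >= 579 horizontal pixels.
print(min_horizontal_pixels(field_width_mm=12.0, smallest_feature_mm=0.04))  # 600
```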

Relevance: 30.00%

Abstract:

In this paper, we propose a new method for fingerprint recognition using long digital straight segments (LDSSs), based on the discovery that LDSSs can accurately characterize the global structure of a fingerprint. While the slope of a straight segment provides the orientation estimate itself, the length of an LDSS provides a measure of the stability of that estimate. In addition, each digital straight segment can be represented by four parameters: x-coordinate, y-coordinate, slope, and length. As a result, only about 600 bytes are needed to store all the LDSS parameters of a fingerprint, far less than the storage an orientation field requires. Finally, LDSSs capture the structural information of local regions well, making them more suitable than orientation fields for the matching process. Experiments conducted on the fingerprint databases FVC2002 DB3a and DB4a show that our method is effective.
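A minimal sketch of the storage argument, assuming each of the four parameters is stored as a 32-bit float (the encoding is an assumption; only the roughly 600-byte total comes from the abstract):

```python
import struct
from dataclasses import dataclass

@dataclass
class DigitalStraightSegment:
    x: float       # x-coordinate
    y: float       # y-coordinate
    slope: float   # orientation estimate
    length: float  # stability measure for the estimated orientation

def pack_segments(segments: list[DigitalStraightSegment]) -> bytes:
    # Four 32-bit floats per segment = 16 bytes; ~37 segments then fit
    # in roughly the 600 bytes reported for a whole fingerprint.
    return b"".join(struct.pack("<4f", s.x, s.y, s.slope, s.length)
                    for s in segments)

demo = [DigitalStraightSegment(10.0, 22.0, 0.35, 18.0)] * 37
print(len(pack_segments(demo)))  # 592 bytes
```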

Relevance: 30.00%

Abstract:

This paper investigates power management issues in a mobile solar energy storage system. A multi-converter energy storage system is proposed, in which solar power is the primary source and the grid or a diesel generator is selected as the secondary source. The secondary source facilitates detection of the battery's state of charge by providing a constant battery charging current. Converter modeling, multi-converter control system design, digital implementation, and experimental verification are introduced and discussed in detail. The prototype experiment indicates that the converter system can provide a constant charging current during maximum power tracking operation of the solar converter, even under large variations in solar power output, which demonstrates the feasibility of the proposed design. © 2014 IEEE.
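A minimal sketch of the control idea, assuming a simple PI loop on the secondary-source converter regulates the battery charging current to a constant setpoint; the gains, time step, and limits are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: PI regulation of battery charging current by the
# secondary-source converter. All constants are illustrative.

def pi_step(i_ref_a, i_meas_a, integral, kp=0.05, ki=50.0, dt=1e-4):
    """One control step; returns (duty_cycle, updated_integral)."""
    error = i_ref_a - i_meas_a
    integral += error * dt
    duty = kp * error + ki * integral
    duty = min(max(duty, 0.0), 1.0)  # clamp to the converter's valid range
    return duty, integral

# While the solar converter keeps tracking its maximum power point, this
# loop holds the measured charging current at the 10 A setpoint.
duty, integ = 0.0, 0.0
for i_meas in (9.2, 9.6, 9.9, 10.0):  # amps, converging toward the setpoint
    duty, integ = pi_step(i_ref_a=10.0, i_meas_a=i_meas, integral=integ)
```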

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): J.2.

Relevance: 30.00%

Abstract:

This report presents the project outcomes for the digital presentation of historical artefacts from the region of Plovdiv related to the Balkan War (1912-1913). The selected collections include digitized periodicals, postcards, photographs, museum objects, and paintings by Bulgarian artists. Problems related to the digitization, creation, storage, and visualization of digital objects from the holdings of these cultural institutions are also discussed. The content of this digital library is expected to be extended with other collections from cultural institutions in Plovdiv, and a next step is to integrate the project with other digital libraries. The project website, "Digital library of collections from cultural institutions in Plovdiv", is also presented: http://plovdivartefacts.com/

Relevance: 30.00%

Abstract:

Natural, unenriched Everglades wetlands are known to be limited by phosphorus (P) and responsive to P enrichment. However, whole-ecosystem evaluations of experimental P additions are rare in Everglades or other wetlands. We tested the response of the Everglades wetland ecosystem to continuous, low-level additions of P (0, 5, 15, and 30 μg L−1 above ambient) in replicate, 100 m flow-through flumes located in unenriched Everglades National Park. After the first six months of dosing, the concentration and standing stock of phosphorus increased in the surface water, periphyton, and flocculent detrital layer, but not in the soil or macrophytes. Of the ecosystem components measured, total P concentration increased the most in the floating periphyton mat (30 μg L−1: mean = 1916 μg P g−1, control: mean = 149 μg P g−1), while the flocculent detrital layer stored most of the accumulated P (30 μg L−1: mean = 1.732 g P m−2, control: mean = 0.769 g P m−2). Significant short-term responses of P concentration and standing stock were observed primarily in the high dose (30 μg L−1 above ambient) treatment. In addition, the biomass and estimated P standing stock of aquatic consumers increased in the 30 and 5 μg L−1 treatments. Alterations in P concentration and standing stock occurred only at the upstream ends of the flumes nearest to the point source of added nutrient. The total amount of P stored by the ecosystem within the flume increased with P dosing, although the ecosystem in the flumes retained only a small proportion of the P added over the first six months. These results indicate that oligotrophic Everglades wetlands respond rapidly to short-term, low-level P enrichment, and the initial response is most noticeable in the periphyton and flocculent detrital layer.

Relevance: 30.00%

Abstract:

Currently the data storage industry faces huge challenges with the conventional method of recording data, longitudinal magnetic recording, which is fast approaching a fundamental physical limit known as the superparamagnetic limit. One unique way of deferring the superparamagnetic limit is the patterning of magnetic media, which exploits lithography tools to predetermine the areal density. The nanofabrication schemes employed to pattern the magnetic material include Focused Ion Beam (FIB), E-beam Lithography (EBL), UV-Optical Lithography (UVL), Self-assembled Media Synthesis, and Nanoimprint Lithography (NIL). Although there are many challenges in manufacturing patterned media, the large potential gains in areal density make it one of the most promising new technologies on the horizon for future hard disk drives. This dissertation therefore contributes to the development of future alternative data storage devices, deferring the superparamagnetic limit by designing and characterizing patterned magnetic media using a novel nanoimprint replication process called Step and Flash Imprint Lithography (SFIL). As opposed to hot embossing and other high-temperature processes, SFIL can be performed at low pressure and room temperature. Initial experiments consisted of process flow design for the patterned structures on sputtered Ni-Fe thin films, the main one being a defectivity analysis for the SFIL process, conducted by fabricating and testing devices of varying feature sizes (50 nm to 1 μm), inspecting them optically, and testing them electrically. Once the SFIL process was optimized, a number of Ni-Fe coated wafers were imprinted with a template carrying the patterned topography. A minimum feature size of 40 nm was obtained at varying pitch (1:1, 1:1.5, 1:2, and 1:3). Characterization involved extensive SEM study at each processing step as well as Atomic Force Microscopy (AFM) and Magnetic Force Microscopy (MFM) analysis.
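For context, here is a back-of-the-envelope areal-density calculation for the 40 nm features and pitches reported above; the one-bit-per-island mapping and the square bit cell are illustrative assumptions, not claims from the dissertation.

```python
# Rough areal-density arithmetic for patterned media, assuming one bit
# per island and a square bit cell. Feature size and pitch ratios match
# the abstract; the mapping to bits is an assumption.

def areal_density_gbit_per_in2(feature_nm: float, pitch_ratio: float) -> float:
    period_nm = feature_nm * (1 + pitch_ratio)  # island width + spacing
    bits_per_nm2 = 1.0 / period_nm ** 2
    nm2_per_in2 = (25.4e6) ** 2                 # 1 inch = 25.4e6 nm
    return bits_per_nm2 * nm2_per_in2 / 1e9

for ratio in (1.0, 1.5, 2.0, 3.0):              # the 1:1 to 1:3 pitches
    print(f"1:{ratio:g} pitch -> {areal_density_gbit_per_in2(40, ratio):.0f} Gbit/in^2")
```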

Relevance: 30.00%

Abstract:

Disk drives are the bottleneck in the processing of the large amounts of data used in almost all common applications. File systems attempt to reduce this bottleneck by storing data sequentially on disk, thereby reducing access latencies. Although this strategy is useful when data is retrieved sequentially, the access patterns in real-world workloads are not necessarily sequential, and this mismatch degrades storage I/O performance. This thesis demonstrates that one way to improve storage performance is to reorganize data on disk drives in the same way in which it is mostly accessed. We identify two classes of accesses: static, where access patterns do not change over the lifetime of the data, and dynamic, where access patterns change frequently over short durations of time, and we propose, implement, and evaluate layout strategies for each. Our strategies are implemented such that they can be seamlessly integrated into or removed from the system as desired. We evaluate our layout strategies for static policies using tree-structured XML data, where accesses to the storage device are mostly of two kinds: parent-to-child or child-to-sibling. Our results show that for a specific class of deep-focused queries, the existing file system layout policy performs better by 5–54X. For non-deep-focused queries, our native layout mechanism shows an improvement of 3–127X. To improve the performance of dynamic access patterns, we implement a self-optimizing storage system that rearranges popular blocks on a dedicated partition based on observed workload characteristics, as sketched below. Our evaluation shows an improvement of over 80% in disk busy times over a range of workloads. These results show that applying knowledge of data access patterns to allocation decisions can substantially improve I/O performance.
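A minimal sketch of the dynamic-layout idea: count block accesses over a window and remap the hottest blocks to contiguous slots on a dedicated partition. Class and parameter names are illustrative; the thesis implements this inside the storage stack, not in Python.

```python
from collections import Counter

class PopularBlockRemapper:
    def __init__(self, partition_size: int):
        self.access_counts = Counter()
        self.partition_size = partition_size
        self.remap_table = {}  # original block -> slot on the fast partition

    def record_access(self, block: int) -> None:
        self.access_counts[block] += 1

    def rebuild_layout(self) -> None:
        # Place the hottest blocks sequentially so that popular accesses
        # become sequential reads on the dedicated partition.
        hottest = [b for b, _ in self.access_counts.most_common(self.partition_size)]
        self.remap_table = {block: slot for slot, block in enumerate(hottest)}

    def resolve(self, block: int) -> tuple[str, int]:
        # Redirect remapped blocks; everything else stays in place.
        if block in self.remap_table:
            return ("fast_partition", self.remap_table[block])
        return ("original", block)
```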

Relevance: 30.00%

Abstract:

The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases, or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree-structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML into actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML.

Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge to leverage semistructured data is to perform effective information discovery on such data. Previous works have addressed this problem in a generic (i.e. domain independent) way, but this process can be improved if knowledge about the specific domain is taken into consideration.

This dissertation had two general goals: The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: We proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives. We developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model’s parsing mechanism.

The second goal was to construct a user-friendly and efficient engine for performing Information Discovery over domain-specific semistructured documents. This goal also had two aims: We presented a framework that exploits the domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies. We also proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
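To illustrate the lazy-parsing concept referred to above (a generic sketch, not the dissertation's actual Double-Lazy Parser), the snippet below does a cheap pre-parsing pass that records only subtree byte extents and materializes a DOM fragment only on first access:

```python
import re
import xml.etree.ElementTree as ET

class LazyDocument:
    """Illustrative lazy parser: pre-parse extents, parse subtrees on demand.

    Assumes the target elements are not nested within one another; a real
    pre-parser would track element depth instead of using a regex.
    """

    def __init__(self, xml_text: str, tag: str):
        self.xml_text = xml_text
        # Pre-parsing pass: locate subtree extents without building trees.
        pattern = re.compile(rf"<{tag}\b.*?</{tag}>", re.DOTALL)
        self.extents = [m.span() for m in pattern.finditer(xml_text)]
        self._parsed: dict[int, ET.Element] = {}

    def subtree(self, i: int) -> ET.Element:
        # Progressive parsing: build the DOM fragment only when accessed.
        if i not in self._parsed:
            start, end = self.extents[i]
            self._parsed[i] = ET.fromstring(self.xml_text[start:end])
        return self._parsed[i]

doc = LazyDocument("<lib><book><title>XML</title></book><book/></lib>", "book")
print(len(doc.extents), doc.subtree(0).find("title").text)  # 2 XML
```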

Relevance: 30.00%

Abstract:

Today, most conventional surveillance networks are based on analog systems, which have many constraints such as manpower and high-bandwidth requirements; these have become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed architecture comprises three major layers: the software layer, the hardware layer, and the network layer. The contributions to the proposed architecture are as follows. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core: a background elimination module and a Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) subsystem on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. Combining the software and hardware platforms yields an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
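A conceptual sketch of a background-elimination step of the kind named in contribution (2): pixels close to a running background estimate are masked out so later coding stages can skip or coarsely encode them. The threshold and learning rate are illustrative assumptions; the actual module is implemented in hardware inside the H.264 core.

```python
import numpy as np

class BackgroundEliminator:
    def __init__(self, shape, threshold=15.0, alpha=0.02):
        self.background = np.zeros(shape, dtype=np.float32)
        self.threshold = threshold  # illustrative intensity threshold
        self.alpha = alpha          # illustrative learning rate

    def foreground_mask(self, frame: np.ndarray) -> np.ndarray:
        diff = np.abs(frame.astype(np.float32) - self.background)
        mask = diff > self.threshold  # True where motion/foreground
        # Update the background estimate only where the scene is static.
        self.background = np.where(
            mask,
            self.background,
            (1 - self.alpha) * self.background + self.alpha * frame,
        )
        return mask
```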

Relevance: 30.00%

Abstract:

Seagrass meadows in Florida Bay and Shark Bay contain substantial stores of both organic carbon and nutrients. Soils from both systems are predominantly calcium carbonate, with an average of 82.1% CaCO3 in Florida Bay compared to 71.3% in Shark Bay. Soils from Shark Bay had, on average, 21% higher organic carbon content and 35% higher phosphorus content than Florida Bay. Further, soils from Shark Bay had a lower mean dry bulk density (0.78 ± 0.01 g mL-1) than those from Florida Bay (0.84 ± 0.02 g mL-1). The most hypersaline regions of both bays had higher organic carbon content in surficial soils. Profiles of organic carbon and phosphorus from Florida Bay indicate that this system has experienced an increase in P delivery and primary productivity over the last century; in contrast, decreasing organic carbon and phosphorus with depth in the soil profiles of Shark Bay point to a decrease in phosphorus delivery and primary productivity over the last 1000 y. The total ecosystem stock of stored organic C in Florida Bay averages 163.5 Mg Corg ha-1, lower than the average of 243.0 Mg Corg ha-1 for Shark Bay; these values place Shark and Florida Bays among the global hotspots for organic C storage in coastal ecosystems.

Relevance: 30.00%

Abstract:

Storage is a central part of computing. Driven by an exponentially increasing content generation rate and a widening performance gap between memory and secondary storage, researchers are on a perennial quest for further innovation. This has resulted in novel ways to "squeeze" more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices have opened the door to a whole new class of optimizations that save cost, improve performance, and reduce energy consumption. In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower-power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data in the most appropriate device to improve performance and save power. In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goals without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal. We address these challenges with a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development. Second, we develop a formal theory to help administrators deploy storage extensions while guaranteeing that the given high-level goals are satisfied. There are, however, some cases for which our theory is inconclusive; for such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
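A simplified sketch of the second extension's idea, assuming an LRU flash cache in front of the disk and an idle timeout before spin-down; the policy, the timeout, and all names are illustrative, not the dissertation's implementation.

```python
import time
from collections import OrderedDict

class FlashCachedDisk:
    def __init__(self, disk, flash_capacity=1024, idle_timeout_s=30.0):
        self.disk = disk                   # object exposing .read(block)
        self.cache = OrderedDict()         # block -> data, in LRU order
        self.flash_capacity = flash_capacity
        self.idle_timeout_s = idle_timeout_s
        self.last_disk_access = time.monotonic()
        self.spun_down = False

    def read(self, block):
        if block in self.cache:            # flash hit: the disk stays idle
            self.cache.move_to_end(block)
            return self.cache[block]
        self.spun_down = False             # a miss pays the spin-up cost
        data = self.disk.read(block)
        self.last_disk_access = time.monotonic()
        self.cache[block] = data
        if len(self.cache) > self.flash_capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

    def maybe_spin_down(self):
        # Spin the disk down once it has been idle long enough.
        if time.monotonic() - self.last_disk_access > self.idle_timeout_s:
            self.spun_down = True
```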

Relevance: 30.00%

Abstract:

Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade, and energy production is currently also one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers become necessary, resulting in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices allows them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
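A sketch of one way the fetch-before-update problem can be mitigated, assuming partial page writes are buffered in memory so that a fully overwritten page never forces a read of its old contents from a possibly spun-down disk; this illustrates the problem, not the thesis's specific solution.

```python
PAGE_SIZE = 4096  # illustrative page size

class WriteBuffer:
    def __init__(self):
        self.partial = {}  # page -> {offset: bytes} of pending writes

    def write(self, page: int, offset: int, data: bytes) -> None:
        # Buffer the partial update instead of fetching the page now.
        self.partial.setdefault(page, {})[offset] = data

    def fully_overwritten(self, page: int) -> bool:
        # If buffered extents cover the whole page, flushing it needs no
        # fetch of the old contents, so an idle disk can stay spun down.
        extents = sorted((o, o + len(d))
                         for o, d in self.partial.get(page, {}).items())
        end = 0
        for start, stop in extents:
            if start > end:
                return False  # a gap: the old bytes are still needed
            end = max(end, stop)
        return end >= PAGE_SIZE
```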

Relevance: 30.00%

Abstract:

Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to XML storage and retrieval have the limitations of either being not mature enough (e.g. native approaches) or causing inflexibility, substantial fragmentation, and excessive join operations (e.g. non-native approaches such as the relational database approach).

In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB), to leverage the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data-model mismatch issue that is inherent in non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, thus enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings, in which both schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL, thus avoiding the excessive-join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema.

It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, among others, are indeed beneficial in coping with the XML storage and retrieval issue using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
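To give a flavor of the kind of mapping involved, the sketch below decomposes an XML tree into binary relations between surrogate object IDs, in the spirit of a semantic binary data model; this is a generic illustration, not Sem-ODB's actual meta-schema or Sem-SQL.

```python
import xml.etree.ElementTree as ET
from itertools import count

def xml_to_binary_relations(xml_text: str):
    """Map an XML tree to (subject_id, relation, value_or_id) facts."""
    ids = count(1)                 # surrogate object IDs for elements
    relations = []

    def visit(elem, elem_id):
        relations.append((elem_id, "tag", elem.tag))
        if elem.text and elem.text.strip():
            relations.append((elem_id, "text", elem.text.strip()))
        for child in elem:
            child_id = next(ids)
            relations.append((elem_id, "child", child_id))
            visit(child, child_id)

    root = ET.fromstring(xml_text)
    visit(root, next(ids))
    return relations

print(xml_to_binary_relations("<book><title>XML</title></book>"))
# [(1, 'tag', 'book'), (1, 'child', 2), (2, 'tag', 'title'), (2, 'text', 'XML')]
```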