905 results for Data storage
Abstract:
Accurate knowledge of the spent nuclear fuel content is essential for its safe and optimized transportation, storage and management. Consequently, the reactivity of the spent fuel and its isotopic content must be accurately determined.
Abstract:
The accurate prediction of the spent nuclear fuel content is essential for its safe and optimized transportation, storage and management. The isotopic evolution of the fuel can be predicted with powerful codes and methodologies throughout both the irradiation and cooling periods. However, to reach a realistic confidence level in the predicted spent fuel isotopic content, it is necessary to quantify how input uncertainties propagate into the isotopic prediction calculations.
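A common way to quantify such propagation is brute-force Monte Carlo sampling: draw the uncertain inputs from their distributions, rerun the prediction, and read the spread of the outputs. The sketch below is illustrative only; `depletion_model` and the input distributions are hypothetical stand-ins for a real depletion code and nuclear data library, not the methodology of the abstract above.

```python
import numpy as np

def depletion_model(cross_section, flux, time):
    """Hypothetical stand-in for a depletion code: returns an isotope
    concentration after irradiation (toy exponential burnup model)."""
    return np.exp(-cross_section * flux * time)

rng = np.random.default_rng(seed=0)
n_samples = 10_000

# Assumed 1-sigma uncertainties on the inputs (illustrative values only).
sigma = rng.normal(loc=1.0e-24, scale=0.05e-24, size=n_samples)  # cm^2
phi = rng.normal(loc=1.0e14, scale=0.02e14, size=n_samples)      # n/cm^2/s
t = 3.0e7                                                        # s (~1 year)

concentrations = depletion_model(sigma, phi, t)
print(f"mean = {concentrations.mean():.4f}, "
      f"rel. std = {concentrations.std() / concentrations.mean():.2%}")
```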
Abstract:
The manipulation and handling of an ever-increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple pre-existing, autonomous, distributed or heterogeneous resources.
Abstract:
A storage architecture for JPEG2000 images based on fragmenting the file so that the data can be stored on different disks, optimizing storage according to the contribution of each data fragment to image quality and enabling higher transfer rates.
Abstract:
Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by their high complexity and specific features that usually require specialized administrator skills. Autonomic computing can help manage this complexity. The present paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently reducing the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system is not usually immediately required. Therefore, performance improvements are related not only to current but also to any future I/O access, as the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Thus, our approach proposes a long-term prediction designed to forecast the future workload of grid components. This enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
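As an illustration of the long-term prediction idea, the sketch below forecasts each storage node's future load with simple exponential smoothing and places new data on the node with the lowest forecast. The names and the smoothing model are assumptions for illustration, not the subsystem's actual predictor.

```python
from dataclasses import dataclass, field

@dataclass
class StorageNode:
    name: str
    history: list = field(default_factory=list)  # observed I/O loads

    def forecast(self, alpha: float = 0.3) -> float:
        """Exponential smoothing over past loads: a crude stand-in
        for a long-term workload predictor."""
        level = 0.0
        for load in self.history:
            level = alpha * load + (1 - alpha) * level
        return level

def place_data(nodes):
    """Pick the node expected to be least loaded in the future,
    so both current and future I/O operations benefit."""
    return min(nodes, key=lambda n: n.forecast())

nodes = [StorageNode("gridfs-a", [0.9, 0.8, 0.85]),
         StorageNode("gridfs-b", [0.2, 0.3, 0.25])]
print(place_data(nodes).name)  # -> gridfs-b
```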
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
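A minimal sketch of the underlying trade-off: if each replica independently lags behind the latest write with probability p, then reading r replicas and keeping the freshest value yields a stale read with probability p^r, so the smallest r meeting the application's tolerance can be picked at run-time. The estimator and names below are illustrative assumptions, not Harmony's actual model.

```python
def stale_read_probability(p_replica_stale: float, read_replicas: int) -> float:
    """A read is stale only if every contacted replica misses the
    latest write (simplified independence assumption)."""
    return p_replica_stale ** read_replicas

def choose_read_replicas(write_rate_hz: float, replication_delay_s: float,
                         tolerance: float, n_replicas: int) -> int:
    # Fraction of time a single replica lags behind the latest write
    # (toy estimator, not Harmony's actual model).
    p = min(1.0, write_rate_hz * replication_delay_s)
    for r in range(1, n_replicas + 1):
        if stale_read_probability(p, r) <= tolerance:
            return r
    return n_replicas  # fall back to reading all replicas (strong consistency)

# Example: 50 writes/s, 10 ms replication delay, tolerate 5% stale reads.
print(choose_read_replicas(50.0, 0.010, 0.05, n_replicas=3))
```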
Abstract:
In professional video production, users must access huge multimedia files simultaneously in an error-free environment, a restriction that forces the use of expensive disk architectures for video servers. Previous research proposed a different RAID system for each specific task (ingest, editing, file, play-out, etc.), so video production companies have to acquire different servers with different RAID systems to support each task in the production workflow. This solution has multiple disadvantages: material duplicated across several RAIDs, material duplicated for different qualities, transfer and transcoding processes, etc. In this work, an architecture for video servers based on spreading JPEG2000 data across different RAIDs is presented: each individual part of the data structure goes to a specific RAID type depending on the effect that part has on overall image quality, so the method provides redundancy correlated with the rank of the data. The resulting global storage can be used in all the different tasks of the production workflow, saving disk space, redundant files and transfer procedures.
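A minimal sketch of the placement idea: JPEG2000 quality layers are mapped to RAID tiers so that redundancy correlates with each layer's impact on the reconstructed image. The tier assignments below are assumptions for illustration, not the paper's exact scheme.

```python
# Map JPEG2000 quality layers to RAID tiers so that the layers that
# matter most for reconstructed image quality get the most redundancy.
RAID_TIERS = {
    0: "RAID 1",   # base layer: losing it destroys the image -> mirror it
    1: "RAID 6",   # first refinement layer: double parity
    2: "RAID 5",   # further refinements: single parity
}

def tier_for_layer(layer_index: int) -> str:
    """Higher layers only refine quality, so they tolerate weaker redundancy."""
    return RAID_TIERS.get(layer_index, "RAID 0")  # top layers: striping only

for layer in range(5):
    print(f"quality layer {layer} -> {tier_for_layer(layer)}")
```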
Abstract:
The application of the response of fruits to low-energy mechanical impacts is described for the evaluation of post-harvest ripening of avocados of the "Hass" variety. A 50 g impactor fitted with an accelerometer and free-falling from a height of 4 cm is used; it is interfaced to a computer running dedicated software for retrieving and analyzing the deceleration data. Impact response parameters of individual fruits were compared to pulp firmness, measured by the widely used double-plate puncture method, as well as to other physical and physiological parameters: color, skin puncture, ethylene production rate and others. Two groups of fruits were carefully selected, stored at 6 °C (60 days) and ripened at 20 °C (11 days), and tested during the storage period. It is shown that, as in other types of fruit, impact response can be a good predictor of firmness in avocados, achieving the same accuracy as destructive firmness measurements. Mathematical and multiple regression models are calculated and compared to the measured data, from which a prediction of storage period can be made for these fruits.
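The multiple-regression step can be illustrated with ordinary least squares: regress pulp firmness on impact-response parameters such as peak deceleration and impact duration. The data and variable choices below are synthetic assumptions, not the paper's measurements.

```python
import numpy as np

# Synthetic example: predict pulp firmness from impact-response parameters
# (peak deceleration, impact duration). Values are made up for illustration.
peak_g = np.array([180.0, 150.0, 120.0, 90.0, 60.0])   # peak deceleration (g)
duration_ms = np.array([2.1, 2.5, 3.0, 3.8, 4.9])      # impact duration (ms)
firmness_N = np.array([95.0, 78.0, 60.0, 41.0, 22.0])  # double-plate firmness (N)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones_like(peak_g), peak_g, duration_ms])
coef, *_ = np.linalg.lstsq(X, firmness_N, rcond=None)

predicted = X @ coef
ss_res = np.sum((firmness_N - predicted) ** 2)
ss_tot = np.sum((firmness_N - firmness_N.mean()) ** 2)
print(f"coefficients = {coef}, R^2 = {1 - ss_res / ss_tot:.3f}")
```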
Abstract:
Many efforts have been made to match the production of a solar thermal collector field to the domestic hot water consumption of the inhabitants of a building, and much has been achieved across research agencies, government policies and manufacturers. However, most design rules for solar plants are based on steady-state models, whereas solar irradiance, consumption and thermal accumulation are inherently transient processes. As a result of this lack of physical accuracy, thermal storage tanks are sometimes sized at the designer's discretion without any precise recommendation. This can be a problem if solar thermal systems are to be implemented in today's buildings, where space is scarce. In addition, an excessive storage volume may not be more efficient in many residential applications, while being costly, space-consuming and in some cases too heavy. A proprietary transient simulation program has been developed and validated with a detailed measurement campaign in an experimental facility. In situ environmental data have been obtained over a whole year of operation, gathered at 10-min intervals for a 50 m2 solar plant with a 3 m3 storage tank, including the equipment for domestic hot water production of a typical apartment building. This program has been used to obtain design and dimensioning criteria for DHW solar plants under daily transient conditions throughout a year and, more specifically, the size of the storage tank for a multi-storey apartment building. Comparison of the simulation results with the current applicable Spanish regulation, "Código Técnico de la Edificación" (CTE 2006), offers fruitful details and establishes dimensioning criteria for solar facilities.
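At its simplest, a transient model of this kind reduces to a single-node tank energy balance stepped at the 10-min measurement interval. The sketch below assumes synthetic collector-gain and draw-off profiles; only the 3 m3 tank volume comes from the text.

```python
import math

# Single-node tank energy balance stepped at 10-minute intervals:
# dT/dt = (Q_solar - Q_draw - UA*(T - T_amb)) / (m * cp).
RHO, CP = 1000.0, 4186.0   # water density (kg/m^3), heat capacity (J/kg K)
VOLUME, UA = 3.0, 15.0     # tank volume (m^3), heat-loss coefficient (W/K)
mass = RHO * VOLUME
dt = 600.0                 # 10-min time step (s)
T, T_amb = 45.0, 20.0      # initial tank and ambient temperature (C)

for step in range(144):    # one day of 10-min steps
    hour = step * dt / 3600.0
    # Synthetic daytime collector gain and an evening draw-off peak.
    q_solar = max(0.0, 25_000.0 * math.sin(math.pi * (hour - 7) / 12))  # W
    q_draw = 8_000.0 if 19 <= hour <= 21 else 0.0                       # W
    T += dt * (q_solar - q_draw - UA * (T - T_amb)) / (mass * CP)

print(f"tank temperature after 24 h: {T:.1f} C")
```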
Abstract:
In the framework of a global investigation of the Spanish natural analogues of CO2 storage and leakage, four selected sites from the Mazarrón-Gañuelas Tertiary Basin (Murcia, Spain) were studied to compute the diffuse soil CO2 flux using the accumulation chamber method. The Basin is characterized by the presence of a deep, saline, thermal (~47 °C) CO2-rich aquifer intersected by two deep geothermal exploration wells named "El Saladillo" (535 m) and "El Reventón" (710 m). The CO2 flux data were processed by means of a graphical-statistical method, kriging estimation and sequential Gaussian simulation algorithms. The results lead to the conclusion that the Tertiary marly cap-rock of this CO2-rich aquifer acts as a very effective seal, preventing any CO2 leakage from this natural CO2 storage site, which is therefore an excellent scenario to guarantee, by analogy, the safety of a CO2 storage site.
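For reference, the accumulation chamber method derives the soil CO2 flux from the initial rate of concentration rise inside the chamber, F = (V/A) * dC/dt. The sketch below fits that slope by least squares; the chamber geometry and readings are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Accumulation chamber method: flux from the initial concentration rise.
V = 0.003   # chamber volume (m^3), illustrative
A = 0.03    # chamber footprint (m^2), illustrative
t = np.arange(0, 60, 10)                      # time since deployment (s)
c = np.array([400, 412, 425, 436, 449, 461])  # CO2 concentration (ppm)

# Least-squares slope of concentration vs. time (ppm/s).
slope = np.polyfit(t, c, 1)[0]

# Convert ppm/s to mol m^-3 s^-1 via the molar volume (~0.0244 m^3/mol at 25 C).
dc_dt = slope * 1e-6 / 0.0244
flux = (V / A) * dc_dt   # mol m^-2 s^-1
print(f"CO2 flux = {flux * 86400 * 44:.1f} g m^-2 day^-1")
```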
Abstract:
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate terabytes of image data, whose handling and analysis become a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples of applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
Abstract:
High-Performance Computing, Cloud Computing and next-generation applications such as e-Health or Smart Cities have dramatically increased the computational demand of Data Centers. The huge energy consumption, increasing levels of CO2 and the economic costs of these facilities represent a challenge for industry and researchers alike. Recent research trends propose the use of holistic optimization techniques to jointly minimize Data Center computational and cooling costs from a multilevel perspective. This paper presents an analysis of the parameters needed to integrate the Data Center in a holistic optimization framework and leverages Cyber-Physical systems to gather workload, server and environmental data via software techniques and by deploying a non-intrusive Wireless Sensor Network (WSN). This solution tackles data sampling, retrieval and storage from a reconfigurable perspective, reducing the amount of data generated for optimization by 68% without information loss, doubling the lifetime of the WSN nodes and enabling runtime energy minimization techniques in a real scenario.
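One plausible way to cut reported samples while preserving the information of interest, in the spirit of the reconfigurable sampling described, is dead-band (send-on-delta) filtering: a node reports a reading only when it deviates from the last reported value beyond a threshold. The policy and data below are assumptions for illustration, not the paper's exact mechanism.

```python
def send_on_delta(samples, threshold):
    """Report a sample only when it differs from the last reported value
    by more than `threshold` (dead-band filtering). A plausible
    reconfigurable policy for cutting WSN traffic, not the paper's scheme."""
    reported = [samples[0]]  # always report the first reading
    for value in samples[1:]:
        if abs(value - reported[-1]) > threshold:
            reported.append(value)
    return reported

temps = [22.0, 22.1, 22.1, 22.4, 23.0, 23.1, 23.1, 22.9, 25.0]
kept = send_on_delta(temps, threshold=0.5)
print(f"kept {len(kept)}/{len(temps)} samples: {kept}")
```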
Abstract:
There is an increasing tendency to turn the current power grid, essentially unaware of variations in electricity demand and of scattered energy sources, into something capable of a degree of intelligence by using tools strongly related to information and communication technologies, thus becoming the so-called Smart Grid. In fact, the Smart Grid can be considered an extensive smart system that spreads throughout any area where power is required, providing significant optimization in energy generation, storage and consumption. However, the information that must be processed to accomplish these tasks is challenging both in terms of complexity (semantic features, distributed systems, suitable hardware) and quantity (consumption data, generation data, forecasting functionalities, service reporting), since the different energy stakeholders are prone to be heterogeneous, as is the nature of their activities. This paper presents a proposal on how to deal with these issues by using a semantic middleware architecture that integrates different components focused on specific tasks, and shows how it is used to handle information at every level and satisfy end-user requests.
Abstract:
Short-term variability in the power generated by large grid-connected photovoltaic (PV) plants can negatively affect power quality and network reliability. New grid codes therefore require combining the PV generator with some form of energy storage technology in order to reduce short-term PV power fluctuations. This paper proposes an effective method to calculate, for any PV plant size and maximum allowable ramp rate, both the maximum power and the minimum energy storage requirements. The general validity of this method is corroborated with extensive simulation exercises performed with one year of real 5-s data from 500 kW inverters at the 38.5 MW Amaraleja (Portugal) PV plant and two other PV plants located in Navarra (Spain), more than 660 km from Amaraleja.
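The essential computation can be sketched as follows: pass the PV power series through a ramp-rate limiter and let the storage cover the difference; the largest instantaneous difference gives the power requirement, and the widest swing of its running integral gives the energy requirement. The series and limits below are toy assumptions, not the paper's data or exact formulas.

```python
# Size storage for ramp-rate control: the grid injection follows the PV
# power but may change by at most `ramp_kw_per_step` each step; the
# battery absorbs or supplies the difference.
def storage_requirements(pv_kw, ramp_kw_per_step, dt_h):
    out = pv_kw[0]
    battery_kw, energy_kwh = [], [0.0]
    for p in pv_kw[1:]:
        out += max(-ramp_kw_per_step, min(ramp_kw_per_step, p - out))
        battery_kw.append(out - p)  # >0: battery discharging into the grid
        energy_kwh.append(energy_kwh[-1] + battery_kw[-1] * dt_h)
    # Power rating: worst instantaneous mismatch; energy rating: widest
    # excursion of the battery's state of charge.
    return max(abs(b) for b in battery_kw), max(energy_kwh) - min(energy_kwh)

# A cloud passes: production drops from 500 kW to 100 kW in one 5-s step.
series = [500.0] * 10 + [100.0] * 50
p_max, e_max = storage_requirements(series, ramp_kw_per_step=10.0, dt_h=5 / 3600)
print(f"storage power >= {p_max:.0f} kW, energy >= {e_max:.2f} kWh")
```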
Abstract:
In this paper, the authors introduce a novel mechanism for data management in a middleware for smart home control, where a relational database and a semantic ontology store are used at the same time in a Data Warehouse. An annotation system has been designed to specify the storage format and location, register new ontology concepts and, most importantly, guarantee data consistency between the two storage methods. To ease the data persistence process, the Data Access Object (DAO) pattern is applied and optimized to strengthen the data consistency assurance. This mechanism also provides an easy path for the development of applications and their integration with BATMP. Finally, an application named "Parameter Monitoring Service" is given as an example to assess the feasibility of the system.
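A minimal sketch of the dual-store DAO idea: a single save() writes the relational row and the ontology individual together, and rolls back the relational write if the ontology write fails. Class and method names (including `add_individual`) are hypothetical, not BATMP's actual API.

```python
class ParameterDAO:
    """DAO facade over two stores: a relational database and an ontology
    store. save() keeps them consistent by rolling back the relational
    write when the ontology write fails. Hypothetical sketch."""

    def __init__(self, sql_conn, ontology_store):
        self.sql = sql_conn      # e.g. a sqlite3 connection
        self.onto = ontology_store

    def save(self, parameter_id: str, value: float) -> None:
        cursor = self.sql.cursor()
        try:
            cursor.execute(
                "INSERT INTO parameters (id, value) VALUES (?, ?)",
                (parameter_id, value),
            )
            # Mirror the record as an ontology individual; any failure
            # here must undo the relational write as well.
            self.onto.add_individual("Parameter", parameter_id,
                                     {"hasValue": value})
            self.sql.commit()
        except Exception:
            self.sql.rollback()
            raise
```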