925 results for Secure Data Storage
Abstract:
The subpolar North Atlantic (SPNA) is important in the global carbon cycle because of the deep water ventilation processes that lead to both high uptake of atmospheric CO2 and large inventories of anthropogenic CO2 (C-ant). Thus, it is crucial to understand its response to increasing anthropogenic pressures. In this work, the budgets of dissolved inorganic carbon (DIC), C-ant and natural DIC (DICnat) in the eastern SPNA in the 2000s are jointly analyzed using in situ data. The DICnat budget is found to be in steady state, confirming for the first time from in situ data a long-standing hypothesis. Biological activity drives the uptake of natural CO2 from the atmosphere. The increase of C-ant in the ocean is solely responsible for the DIC storage rate, which is explained by advection of C-ant from the subtropics (65%) and by the C-ant air-sea flux (35%). These results demonstrate that C-ant is accumulating in the SPNA without affecting the natural carbon cycle.
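As a compact sketch of the budget decomposition described above (the notation is assumed here, not taken from the paper):

\frac{d\,\mathrm{DIC}}{dt}
  = \underbrace{\frac{d\,\mathrm{DIC_{nat}}}{dt}}_{\approx\,0\ \text{(steady state)}}
  + \frac{d\,\mathrm{C_{ant}}}{dt}
  \;\approx\; \frac{d\,\mathrm{C_{ant}}}{dt}
  = T_{\mathrm{adv}} + F_{\text{air-sea}}
% with the advective supply from the subtropics T_adv accounting for about 65% and
% the anthropogenic air-sea flux F_air-sea for about 35% of the storage rate.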
Abstract:
In database applications, access control security layers are mostly developed with tools provided by vendors of database management systems and deployed on the same servers that contain the data to be protected. This solution entails several drawbacks. Among them we emphasize: 1) if policies are complex, their enforcement can lead to performance decay of database servers; 2) when modifications in the established policies imply modifications in the business logic (usually deployed at the client-side), there is no option other than modifying the business logic in advance; and, finally, 3) malicious users can issue CRUD expressions systematically against the DBMS hoping to identify a security gap. In order to overcome these drawbacks, in this paper we propose an access control stack characterized as follows: most of the mechanisms are deployed at the client-side; whenever security policies evolve, the security mechanisms are automatically updated at runtime; and, finally, client-side applications do not handle CRUD expressions directly. We also present an implementation of the proposed stack to prove its feasibility. This paper presents a new approach to enforcing access control in database applications and is thereby expected to contribute positively to the state of the art in the field.
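A minimal sketch of the client-side idea described above, assuming a hypothetical policy format and class names (none of these come from the paper): the business logic calls policy-driven data-access objects and never builds CRUD expressions itself.

# Sketch only: a client-side access-control layer that keeps CRUD expressions out of
# the business logic. The policy format and class names below are hypothetical, not
# the components proposed in the paper.
import sqlite3

# A security policy as it might be pushed to the client at runtime (assumed format).
POLICY = {"orders": {"readable_columns": ["id", "status"]}}

class OrderReader:
    """Data-access object built from the policy: callers get typed methods, never raw SQL."""
    def __init__(self, conn, policy):
        self._conn = conn
        self._cols = policy["orders"]["readable_columns"]

    def list_orders(self, customer_id):
        # Projection is driven by the policy, so forbidden columns never reach the caller.
        sql = f"SELECT {', '.join(self._cols)} FROM orders WHERE customer_id = ?"
        return self._conn.execute(sql, (customer_id,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, status TEXT, card_no TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 42, 'shipped', '1234-5678')")

reader = OrderReader(conn, POLICY)
print(reader.list_orders(42))  # card_no never reaches the business logic

# When the policy evolves, a new OrderReader is rebuilt from the updated policy at
# runtime; the business logic calling list_orders() does not change.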
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they only see the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) It enables programmers to formally verify the correctness and security properties of their programs. As far as we know, Wys* is the first language to provide verification capabilities for MPC programs. (b) It provides a partially verified toolchain to run MPC programs, and finally (c) It enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
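To illustrate what "mixed-mode" means, the following toy sketch shows a sealed-bid auction round with local and joint phases. This is plain Python, not Wysteria or Wys* syntax, and the "secure" step is simulated in the clear; in a real MPC framework it would be evaluated under a cryptographic protocol.

# Illustrative sketch of a mixed-mode computation (NOT Wysteria/Wys* code).
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    budget: int  # private, local data

    def local_bid(self):
        # "normal" (per-party local) mode: each party decides its bid privately.
        return min(self.budget, self.budget * 3 // 4)

def secure_max_bidder(bids):
    # "secure" (joint) mode stand-in: only the winner's identity is revealed;
    # inside a real protocol the individual bids would stay hidden.
    return max(bids, key=bids.get)

parties = [Party("alice", 100), Party("bob", 80)]
bids = {p.name: p.local_bid() for p in parties}  # local mode
print(secure_max_bidder(bids))                   # joint mode output: "alice"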
Abstract:
Background: Improper handling has been identified as one of the major reasons for the decline in vaccine potency at the time of administration. Loss of potency becomes evident when immunised individuals contract the diseases the vaccines were meant to prevent. Objective: To assess the factors associated with vaccine handling and storage practices. Methods: This was a cross-sectional study. Three-stage sampling was used to recruit 380 vaccine handlers from 273 health facilities in 11 Local Government areas in Ibadan. Data were analysed using SPSS version 16. Results: Seventy-three percent were aware of vaccine handling and storage guidelines, with 68.4% having ever read such guidelines. Only 15.3% had read a guideline less than 1 month prior to the study. About 65.0% had received training on vaccine management. Incorrect handling practices reported included storing injections with vaccines (13.7%) and maintaining vaccine temperature using ice blocks (7.6%). About 43.0% had good knowledge of vaccine management, while 66.1% had good vaccine management practices. Respondents who had good knowledge of vaccine handling and storage [OR=10.0, 95%CI (5.28 – 18.94), p < 0.001] and had received formal training on vaccine management [OR=5.3, 95%CI (2.50 – 11.14), p < 0.001] were more likely to have good vaccine handling and storage practices. Conclusion: Regular training is recommended to enhance vaccine handling and storage practices.
Abstract:
A significant amount of Expendable Bathythermograph (XBT) data has been collected in the Mediterranean Sea since 1999 in the framework of operational oceanography activities. The management and storage of such a volume of data poses significant challenges and opportunities. The SeaDataNet project, a pan-European infrastructure for marine data diffusion, provides a convenient way to avoid dispersion of these temperature vertical profiles and to facilitate access for a wider public. The XBT data flow, along with the recent improvements in the quality-check procedures and the consistency of the available historical data set, is described. The main features of the SeaDataNet services and the advantages of using this system for long-term data archiving are presented. Finally, a focus on the Ligurian Sea is included in order to provide an example of the kind of information and final products, devoted to different users, that can easily be derived from the SeaDataNet web portal.
Abstract:
The CATARINA Leg1 cruise was carried out from June 22 to July 24 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included the repeat of the OVIDE hydrological section, performed previously in June 2002, 2004, 2006, 2008 and 2010 as part of the CLIVAR program (named A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge 300 miles north of the Charlie-Gibbs Fracture Zone, and ends at Cape Hoppe (southeast tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather conditions prevented completion of the section south of 53°40’N. The quality of the CTD data is essential to reach the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on the changes in the anthropogenic carbon ocean uptake and storage capacity. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The hydrological OVIDE section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen and currents, spaced by 2 to 25 Nm depending on the steepness of the topography. The position of the stations closely follows that of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea. From the 24 bottles closed at various depths at each station, samples of sea water were used for salinity and oxygen calibration, and for measurements of biogeochemical components that are not reported here. The data were acquired with a Seabird CTD (SBE911+) and an SBE43 sensor for dissolved oxygen, belonging to the Spanish UTM group. The SBE Data Processing software was used after decoding and cleaning the raw data. Then, the LPO Matlab toolbox was used to calibrate and bin the data, as was done for the previous OVIDE cruises, using on the one hand the pre- and post-cruise calibration results for the pressure and temperature sensors (done at Ifremer) and on the other hand the water samples from the 24 bottles of the rosette at each station for the salinity and dissolved oxygen data. A final accuracy of 0.002°C, 0.002 psu and 0.04 ml/l (2.3 umol/kg) was obtained on the final profiles of temperature, salinity and dissolved oxygen, compatible with the international requirements issued from the WOCE program.
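A minimal sketch of the kind of bottle-based calibration step described above, as a simple least-squares correction; this is not the LPO Matlab toolbox procedure, and all array values below are placeholders, not cruise data.

# Sketch of a bottle-sample salinity calibration step (simple linear correction).
import numpy as np

ctd_sal    = np.array([34.902, 35.101, 35.350, 34.880])  # CTD salinity at bottle stops (placeholder)
bottle_sal = np.array([34.905, 35.103, 35.354, 34.882])  # salinometer values from the rosette bottles (placeholder)

# Fit a linear correction ctd -> bottle and apply it to the full profile.
slope, offset = np.polyfit(ctd_sal, bottle_sal, 1)
profile = np.array([34.850, 34.902, 35.101, 35.350])      # uncalibrated profile (placeholder)
calibrated = slope * profile + offset

residuals = bottle_sal - (slope * ctd_sal + offset)
print(f"residual std: {residuals.std():.4f} psu")          # to be compared against the ~0.002 psu target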
Abstract:
The majority of organizations store their historical business information in data warehouses, which are queried to make strategic decisions by using online analytical processing (OLAP) tools. This information has to be properly protected against unauthorized access; nevertheless, there is a great number of legacy OLAP applications that have been developed without considering security aspects, or in which security has been incorporated only once the system was implemented. This work defines a reverse engineering process that allows us to obtain the conceptual model corresponding to a legacy OLAP application, and also analyses and represents the security aspects that could have been established. This process has been aligned with a model-driven architecture for developing secure OLAP applications by defining the transformations needed to apply it automatically. Once the conceptual model has been extracted, it can be easily modified and improved with security, and automatically transformed to generate the new implementation.
Abstract:
A comprehensive environmental monitoring program was conducted in the Ojo Guareña cave system (Spain), one of the longest cave systems in Europe, to assess the magnitude of the spatiotemporal changes in carbon dioxide gas (CO2) in the cave–soil–atmosphere profile. The key climate-driven processes involved in gas exchange, primarily gas diffusion and cave ventilation due to advective forces, were characterized. The spatial distributions of both processes were described through measurements of CO2 and its carbon isotopic signal (δ13C[CO2]) from exterior, soil and cave air samples analyzed by cavity ring-down spectroscopy (CRDS). The trigger mechanisms of air advection (temperature or air density differences or barometric imbalances) were controlled by continuous logging systems. Radon monitoring was also used to characterize the changing airflow that results in a predictable seasonal or daily pattern of CO2 concentrations and its carbon isotopic signal. Large daily oscillations of CO2 levels, ranging from 680 to 1900 ppm day−1 on average, were registered during the daily oscillations of the exterior air temperature around the cave air temperature. These daily variations in CO2 concentration were unobservable once the outside air temperature was continuously below the cave temperature and a prevailing advective-renewal of cave air was established, such that the daily-averaged concentrations of CO2 reached minimum values close to atmospheric background. The daily pulses of CO2 and other tracer gases such as radon (222Rn) were smoothed in the inner cave locations, where fluctuation of both gases was primarily correlated with medium-term changes in air pressure. A pooled analysis of these data provided evidence that atmospheric air that is inhaled into dynamically ventilated caves can then return to the lower troposphere as CO2-rich cave air.
Abstract:
This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to predict future data access patterns - i.e. the so-called dataset “popularity” of the CMS datasets on the Grid - with a focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and different time periods, allows one to gain precious insight on storage occupancy over time and on different access patterns, and ultimately to extract suggested actions based on this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive potential for future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different depth levels. Chapter 4 offers a brief introduction to basic machine learning concepts, gives an introduction to its application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
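A toy sketch of the supervised-classification setup described above; the feature names, labels and choice of model are illustrative only, not the pipeline actually used in the thesis, and the numbers are placeholders.

# Toy dataset-"popularity" classifier sketch (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per (dataset, week): [past accesses, distinct users, dataset size in TB, weeks since creation]
X = np.array([[120, 15, 2.0, 4],
              [  3,  1, 0.5, 40],
              [ 60,  9, 1.2, 8],
              [  0,  0, 3.1, 52]])
y = np.array([1, 0, 1, 0])  # 1 = "popular" next week, 0 = candidate for disk clean-up

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[80, 10, 1.0, 6]]))  # predicted popularity for a new dataset/week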
Abstract:
This is a redacted version of the final thesis. Copyright material has been removed to comply with UK Copyright Law.
Experimental and modeling studies of forced convection storage and drying systems for sweet potatoes
Abstract:
Sweet potato is an important strategic agricultural crop grown in many countries around the world. The roots and aerial vine components of the crop are used both for human consumption and, to some extent, as a cheap source of animal feed. In spite of its economic value and growing contribution to health and nutrition, harvested sweet potato roots and aerial vine components have a limited shelf-life and are easily susceptible to post-harvest losses. Although post-harvest losses of both sweet potato roots and aerial vine components are significant, there is no information available that will support the design and development of appropriate storage and preservation systems. In this context, the present study was initiated to improve scientific knowledge about sweet potato post-harvest handling. Additionally, the study also seeks to develop a PV-ventilated mud storehouse for storage of sweet potato roots under tropical conditions. In study one, the airflow resistance of sweet potato aerial vine components was investigated. The influence of different operating parameters, such as airflow rate, moisture content and bulk depth at different levels, on airflow resistance was analyzed. All the operating parameters were observed to have a significant (P < 0.01) effect on airflow resistance. Prediction models were developed and were found to adequately describe the experimental pressure drop data. In study two, the resistance to airflow through unwashed and clean sweet potato roots was investigated. The effect of sweet potato root shape factor, surface roughness, orientation to airflow, and presence of soil fraction on airflow resistance was also assessed. The pressure drop through unwashed and clean sweet potato roots was observed to increase with higher airflow, bed depth, root grade composition, and presence of soil fraction. The physical properties of the roots were incorporated into a modified Ergun model and compared with a modified Shedd's model. The modified Ergun model provided the best fit to the experimental data when compared with the modified Shedd's model. In study three, the effect of sweet potato root size (medium and large), different air velocities and temperatures on the cooling and heating rate and time of individual sweet potato roots was investigated. Also, a simulation model based on the fundamental solution of the transient equations was proposed for estimating the cooling and heating time at the centre of sweet potato roots. The results showed that increasing air velocity during cooling and heating significantly (P < 0.05) affects the cooling and heating times. Furthermore, the cooling and heating times were significantly different (P < 0.05) between medium and large size sweet potato roots. Comparison of the simulation results with experimental data confirmed that the transient simulation model can be used to accurately estimate the cooling and heating times of whole sweet potato roots under forced convection conditions. In study four, the performance of charcoal evaporative cooling pad configurations for integration into sweet potato root storage systems was investigated. The experiments were carried out at different levels of air velocity and water flow rate, and with three pad configurations: single layer pad (SLP), double layer pad (DLP) and triple layer pad (TLP), made out of small and large size charcoal particles. The results showed that higher air velocity has a pronounced effect on pressure drop.
Increasing the water flow rate above the range tested had no practical benefits in terms of cooling. It was observed that the DLP and TLP configurations with larger wet surface area, for both types of pads, provided high cooling efficiencies. In study five, a CFD technique in the ANSYS Fluent software was used to simulate airflow distribution in a low-cost mud storehouse. By theoretically investigating different geometries of the air inlet, plenum chamber, and outlet, as well as their placement, using ANSYS Fluent software, an acceptable geometry with uniform air distribution was selected and constructed. Experimental measurements validated the selected design. In study six, the performance of the developed PV-ventilated system was investigated. Field measurements showed satisfactory results for the directly coupled PV-ventilated system. Furthermore, the option of integrating a low-cost evaporative cooling system into the mud storage structure was also investigated. The results showed a reduction of the ambient temperature inside the mud storehouse, while relative humidity was enhanced. The ability of the developed storage system to provide and maintain the airflow, temperature and relative humidity that are the key parameters for shelf-life extension of sweet potato roots highlights its potential to reduce post-harvest losses at the farmer level, particularly under tropical climate conditions.
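For reference, the unmodified textbook relations behind the "modified Ergun model" and the transient cooling-time estimate mentioned in this abstract are sketched below in LaTeX; the thesis's modifications (e.g. root shape factor and surface roughness terms) are not reproduced here.

\frac{\Delta P}{L} = 150\,\frac{\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3} d_{p}^{2}}\, v
  + 1.75\,\frac{\rho\,(1-\varepsilon)}{\varepsilon^{3} d_{p}}\, v^{2}
% Ergun equation: pressure drop per unit bed depth for superficial air velocity v,
% bed porosity \varepsilon, equivalent particle (root) diameter d_p, air viscosity \mu and density \rho.

\frac{T_{c}-T_{\infty}}{T_{i}-T_{\infty}} \approx C_{1}\,\exp\!\left(-\lambda_{1}^{2}\,\mathrm{Fo}\right),
\qquad \mathrm{Fo} = \frac{\alpha t}{r_{0}^{2}}
% One-term solution of transient conduction in a sphere of radius r_0 (valid for Fo > 0.2),
% with C_1 and \lambda_1 functions of the Biot number; solving for t gives the centre cooling/heating time.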
Abstract:
Hydrogen is considered an appealing alternative to fossil fuels in the pursuit of sustainable, secure and prosperous growth in the UK and abroad. However, there exists a persistent bottleneck in the effective storage of hydrogen for mobile applications, which must be addressed in order to facilitate a wide implementation of hydrogen fuel cells in the fossil-fuel-dependent transportation industry. To address this issue, new means of solid-state chemical hydrogen storage are proposed in this thesis. This involves the coupling of LiH with three different organic amines: melamine, urea and dicyandiamide. In principle, thermodynamically favourable hydrogen release from these systems proceeds via the deprotonation of the protic N-H moieties by the hydridic metal hydride. Simultaneously, hydrogen kinetics are expected to be enhanced over heavier hydrides by incorporating lithium ions in the proposed binary hydrogen storage systems. Whilst the concept has been successfully demonstrated by the results obtained in this work, it was observed that optimising the ball-milling conditions is central to promoting hydrogen desorption in the proposed systems. The theoretical amount of 6.97 wt% of hydrogen by dry mass was released when heating a ball-milled mixture of LiH and melamine (6:1 stoichiometry) to 320 °C. It was observed that ball milling introduces a disruption in the intermolecular hydrogen-bonding network that exists in pristine melamine. This effect extends to a molecular-level electron redistribution observed as a function of shifting IR bands. It was postulated that stable phases containing the triazine skeleton form during the first stages of dehydrogenation. Dehydrogenation of this system yields a solid product, Li2NCN, which has been rehydrogenated back to melamine via hydrolysis under weakly acidic conditions. On the other hand, the LiH and urea system (4:1 stoichiometry) desorbed approximately 5.8 wt% of hydrogen, out of the theoretical capacity of 8.78 wt% (dry mass), by 270 °C, accompanied by undesirable ammonia and a trace amount of water release. The thermal dehydrogenation proceeds via the formation of Li(HN(CO)NH2) at 104.5 °C, which then decomposes to LiOCN and unidentified phases containing C-N moieties by 230 °C. The final products are Li2NCN and Li2O (270 °C), with LiCN and Li2CO3 also detected under certain conditions. It was observed that ball milling can effectively suppress ammonia formation. Furthermore, results obtained from energetic ball-milling experiments have indicated that the barrier to full dehydrogenation between LiH and urea is principally kinetic. Finally, the dehydrogenation reaction between the LiH and dicyandiamide system (4:1 stoichiometry) occurs through two distinct pathways depending on the ball-milling conditions. When ball milled at 450 RPM for 1 h, dehydrogenation proceeds alongside dicyandiamide condensation by 400 °C, whilst at a slower milling speed of 400 RPM for 6 h, decomposition occurs via a rapid gas desorption (H2 and NH3) at 85 °C accompanied by sample foaming. The reactant dicyandiamide can be regenerated by hydrolysis of the product Li2NCN.
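The overall dehydrogenation reactions implied by the stoichiometries, hydrogen capacities and solid products reported above can be written as follows (a reconstruction consistent with the stated 6.97 wt% and 8.78 wt% figures; intermediates such as Li(HN(CO)NH2) and LiOCN are omitted):

\mathrm{C_3H_6N_6\ (melamine) + 6\,LiH \;\longrightarrow\; 3\,Li_2NCN + 6\,H_2}
% 6 mol H2 (12.1 g) per 173.8 g of mixture: about 6.97 wt% H2

\mathrm{CO(NH_2)_2\ (urea) + 4\,LiH \;\longrightarrow\; Li_2NCN + Li_2O + 4\,H_2}
% 4 mol H2 (8.1 g) per 91.9 g of mixture: about 8.78 wt% H2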
Abstract:
Reinforcement learning is a particular paradigm of machine learning that has recently proved, time and time again, to be a very effective and powerful approach. On the other hand, cryptography usually takes the opposite direction. While machine learning aims at analyzing data, cryptography aims at maintaining its privacy by hiding such data. However, the two techniques can be jointly used to create privacy-preserving models, able to make inferences on the data without leaking sensitive information. Despite the numerous studies performed on machine learning and cryptography, reinforcement learning in particular has never been applied to such cases before. Being able to successfully make use of reinforcement learning in an encrypted scenario would allow us to create an agent that efficiently controls a system without providing it with full knowledge of the environment it is operating in, leading the way to many possible use cases. Therefore, we have decided to apply the reinforcement learning paradigm to encrypted data. In this project we have applied one of the most well-known reinforcement learning algorithms, called Deep Q-Learning, to simple simulated environments and studied how the encryption affects the training performance of the agent, in order to see whether it is still able to learn how to behave even when the input data is no longer readable by humans. The results of this work highlight that the agent is still able to learn with no issues whatsoever in small state spaces with non-secure encryption, like AES in ECB mode. For fixed environments, it is also able to reach a suboptimal solution even in the presence of secure modes, like AES in CBC mode, showing a significant improvement with respect to a random agent; however, its ability to generalize in stochastic environments or large state spaces suffers greatly.
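The gap between ECB and CBC reported above largely comes down to ECB being deterministic per block, so identical states keep a stable encrypted representation. The sketch below makes this visible using pycryptodome for illustration; the library choice and the 16-byte state encoding are assumptions, not necessarily the thesis's actual setup.

# Why an agent can still learn on AES-ECB states but struggles with CBC:
# equal plaintext blocks map to equal ciphertext blocks under ECB, but not under CBC
# with a fresh IV. (Illustrative only; not the thesis's encryption pipeline.)
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
state = b"position=3,vel=1"  # a 16-byte "environment state" (placeholder encoding)

ecb = AES.new(key, AES.MODE_ECB)
print(ecb.encrypt(state) == ecb.encrypt(state))    # True: same state -> same ciphertext

cbc1 = AES.new(key, AES.MODE_CBC)                  # fresh random IV for each cipher object
cbc2 = AES.new(key, AES.MODE_CBC)
print(cbc1.encrypt(state) == cbc2.encrypt(state))  # False: same state -> different ciphertext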
Abstract:
Power-to-Gas storage systems have the potential to address grid-stability issues that arise when an increasing share of power is generated from sources with a highly variable output. Although proof-of-concept demonstrations have been promising, the behaviour of these processes in off-design conditions is not easily predictable. The primary aim of this PhD project was to evaluate the performance of an original Power-to-Gas system made up of innovative components. To achieve this, a numerical model has been developed to simulate the characteristics and behaviour of the several components when the whole system is coupled with a renewable source. The developed model has been applied to a large variety of scenarios, evaluating the performance of the considered process while exploiting a limited amount of experimental data. The model has then been used to compare different Power-to-Gas concepts in a realistic operating scenario. Several goals have been achieved. In the concept phase, the possibility to thermally integrate the high-temperature components has been demonstrated. Then, the parameters that affect the energy performance of a Power-to-Gas system coupled with a renewable source have been identified, providing general recommendations on the design of hybrid systems; these parameters are: 1) the ratio between the storage system size and the renewable generator size; 2) the type of coupled renewable source; 3) the related production profile. Finally, the results of the comparative analysis highlight that configurations with a renewable source that is highly oversized with respect to the storage system show the maximum achievable profit.