925 results for Secure Data Storage
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
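To make the two levels of description above concrete, the following Python sketch pairs a ‘Cartesian’ compartment record (diameter, coordinates, parent connectivity) with a toy stochastic generator that grows a virtual dendrite by sampling a few parameter distributions. It is only an illustration of the idea: the distributions and growth rules below are invented placeholders, not the measured parameters or actual algorithms of L-NEURON or ARBORVITAE.

# Illustrative sketch (not the actual L-NEURON or ARBORVITAE algorithms) of the two
# descriptions discussed above: a 'Cartesian' compartment record, and a stochastic
# generator that grows a virtual dendrite by sampling a handful of parameter
# distributions. All distribution parameters below are made up.
import random
from dataclasses import dataclass

@dataclass
class Compartment:
    """One cylinder of the Cartesian description."""
    ident: int        # compartment id (equals its index in the tree list)
    parent: int       # id of the parent cylinder (-1 for the root)
    x: float
    y: float
    z: float
    diameter: float

def grow_dendrite(tree, parent, depth, max_depth=6):
    """Recursively extend a virtual dendrite.

    Branch length, taper and bifurcation probability are drawn from simple
    hypothetical distributions standing in for the measured 'fundamental'
    parameters of a morphological class.
    """
    if depth >= max_depth:
        return
    p = tree[parent]
    length = random.gauss(20.0, 5.0)                              # branch length (um)
    diameter = max(0.2, p.diameter * random.uniform(0.7, 0.95))   # taper
    child = Compartment(len(tree), p.ident,
                        p.x + length, p.y + random.gauss(0, 5), p.z,
                        diameter)
    tree.append(child)
    # Bifurcate with some probability, otherwise continue as a single branch.
    n_children = 2 if random.random() < 0.4 else 1
    for _ in range(n_children):
        grow_dendrite(tree, child.ident, depth + 1, max_depth)

soma = Compartment(0, -1, 0.0, 0.0, 0.0, 10.0)
virtual_neuron = [soma]
grow_dendrite(virtual_neuron, 0, 0)
print(f"generated {len(virtual_neuron)} compartments")

In the real programs, the sampled quantities (branch lengths, taper rates, bifurcation probabilities, and so on) are drawn from distributions measured on experimental tracing files for a given morphological class, which is what makes the generated anatomy statistically comparable to the real one.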
Abstract:
This paper reviews the literature on the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to preserve the transactional contexts of the data captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems encode information into data that are then stored in databases without the business rules that were used to produce them. This necessitates a practice whereby separate sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support complex reporting and analytics. These sets of business rules are usually not the same as the rules used to capture the data in the originating OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk semantic gaps between the information captured by OLTP systems and the information recalled through OLAP systems. Literature on modelling business transaction information as facts with context, as part of information systems modelling, was reviewed to identify design trends contributing to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP system design depends critically on: the capture of facts with associated context; the encoding of facts with context into data governed by business rules; the storage and sourcing of data together with those business rules; the decoding of data, using the business rules, back into facts with context; and the recall of facts with their associated contexts. The paper proposes UBIRQ, a design model to aid the co-design of data and business-rule storage for OLTP and OLAP purposes. The proposed model opens the way for multi-purpose databases and business-rule stores shared by OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data with the business rules used to capture them, and thereby ensuring that information recalled via OLAP systems preserves the transactional contexts of the data captured by the respective OLTP system.
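The core design argument, that data should be stored together with the business rules used to capture them so that OLTP capture and OLAP recall share the same semantics, can be illustrated with a small sketch. The schema below is hypothetical (the abstract does not specify UBIRQ's structures); it simply shows a fact persisted with references to the rules in force when it was encoded, and an analytical query that decodes the fact with those stored rules rather than with a separate ETL rule set.

# Hedged illustration: persist each fact with a reference to the business rules
# under which it was captured, so an OLAP query can decode the data with the same
# rules the OLTP system used. Schema and rule format are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    version: int
    description: str                     # human-readable statement of the rule

@dataclass
class FactWithContext:
    fact: dict                           # the transactional data as captured
    captured_at: datetime
    rules: tuple                         # business rules in force when the fact was encoded

# OLTP side: capture a transaction and record which rules interpreted it.
VAT_RULE = BusinessRule("vat-standard", 3, "Prices are stored net of 20% VAT")
order = FactWithContext(
    fact={"order_id": "A-1001", "net_amount": 50.00},
    captured_at=datetime.now(timezone.utc),
    rules=(VAT_RULE,),
)

# OLAP side: recall the fact by decoding it with the *stored* rules rather than
# with a separate, possibly divergent, set of ETL rules.
def gross_amount(record: FactWithContext) -> float:
    vat = 0.20 if any(r.rule_id == "vat-standard" for r in record.rules) else 0.0
    return round(record.fact["net_amount"] * (1 + vat), 2)

print(gross_amount(order))               # 60.0

The point of the sketch is the coupling: the rule reference travels with the fact, so any downstream store, OLTP or OLAP, can reproduce the original transactional context.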
Abstract:
Changes in oceanic heat storage (HS) can reveal important evidence of climate variability related to ocean heat fluxes. Specifically, long-term variations in HS are a powerful indicator of climate change, as HS represents the balance between the net surface energy flux and the poleward heat transported by the ocean currents. HS is estimated from the sea surface height anomaly measured by the TOPEX/Poseidon and Jason-1 altimeters from 1993 to 2006. To characterize and validate the altimeter-based HS in the Atlantic, we used data from the Pilot Research Moored Array in the Tropical Atlantic (PIRATA). Correlations and rms differences are used as statistical figures of merit to compare the HS estimates. The correlations range from 0.50 to 0.87 at the buoys located along the equator and in the southern part of the array. In that region the rms differences range between 0.40 and 0.51 × 10^9 J m^-2. These results are encouraging and indicate that the altimeter has the precision necessary to capture the interannual trends in HS in the Atlantic. Albeit relatively small, salinity changes can also affect the sea surface height anomaly. To account for this effect, NCEP/GODAS reanalysis data are used to estimate the haline contraction. To understand which dynamical processes are involved in the HS variability, the total signal is decomposed into nonpropagating basin-scale and seasonal (HS_l) components, planetary waves, mesoscale eddies, and a small-scale residual. In general, HS_l is the dominant signal in the tropical region. Results show a warming trend of HS_l over the past 13 years almost everywhere in the Atlantic basin, with the most prominent slopes found at high latitudes. Positive interannual trends are found in the halosteric component at high latitudes of the South Atlantic and near the Labrador Sea. This could be an indication that the salinity anomaly increased in the upper layers during this period. The dynamics of the South Atlantic subtropical gyre could also be subject to low-frequency changes caused by a trend in the halosteric component on each side of the South Atlantic Current.
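As a minimal illustration of the figures of merit mentioned above, the sketch below computes the correlation and rms difference between an altimeter-derived HS series and a buoy-derived HS series at a single location. The time series are synthetic placeholders, not data from the study.

# Minimal sketch of the statistical figures of merit: correlation and rms
# difference between an altimeter-based heat-storage series and a PIRATA
# buoy-based series at one mooring. The arrays below are synthetic.
import numpy as np

def figures_of_merit(hs_altimeter: np.ndarray, hs_buoy: np.ndarray):
    """Return (correlation, rms difference) for two co-located HS time series."""
    r = np.corrcoef(hs_altimeter, hs_buoy)[0, 1]
    rms = np.sqrt(np.mean((hs_altimeter - hs_buoy) ** 2))
    return r, rms

# Synthetic example: a seasonal cycle plus a weak trend and noise, in 1e9 J m^-2.
t = np.arange(0, 14 * 12) / 12.0                       # monthly samples, 14 years
hs_buoy = 1.5 * np.sin(2 * np.pi * t) + 0.1 * t
hs_alt = hs_buoy + np.random.default_rng(0).normal(0, 0.4, t.size)

r, rms = figures_of_merit(hs_alt, hs_buoy)
print(f"correlation = {r:.2f}, rms difference = {rms:.2f} x 1e9 J m^-2")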
Abstract:
The Amazon basin is a region of constant scientific interest due to its environmental importance and its biodiversity and climate on a global scale. The seasonal variation in water volume is one example of the topics currently studied. In general, variations in river levels depend primarily on the climatic and physical characteristics of the corresponding basins. The main factor influencing the water level in the Amazon basin is the intense rainfall over this region, a consequence of the humid tropical climate. Unfortunately, the Amazon basin is an area lacking water level information because of the difficulty of access for local operations. The purpose of this study is to compare and evaluate the Equivalent Water Height (Ewh) from the GRACE (Gravity Recovery And Climate Experiment) mission in order to study the connection between water loading and the vertical variations of the crust it causes. To achieve this goal, the Ewh is compared with in-situ information from limnimeters. For the analysis, the correlation coefficients, phase and amplitude of the GRACE Ewh solutions and the in-situ data were computed, as well as the timing of drought periods in different parts of the basin. The results indicate that vertical variations of the lithosphere due to water mass loading can reach 5 to 7 cm per year in the sedimentary and flooded areas of the region, where water level variations can reach 8 to 10 m.
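The comparison described above (correlation, plus annual amplitude and phase of the GRACE Ewh and in-situ series) can be sketched as follows. The least-squares harmonic fit and the synthetic series are illustrative assumptions, not the processing chain or data used in the study.

# Hedged sketch: fit the annual amplitude and phase of a monthly GRACE Ewh series
# by least squares and correlate it with a co-located limnimeter (water level)
# series. Both series here are synthetic placeholders.
import numpy as np

def annual_harmonic(t_years: np.ndarray, series: np.ndarray):
    """Least-squares fit of series ~ mean + A*cos(2*pi*t) + B*sin(2*pi*t).

    Returns (amplitude, phase in radians) of the annual cycle.
    """
    design = np.column_stack([np.ones_like(t_years),
                              np.cos(2 * np.pi * t_years),
                              np.sin(2 * np.pi * t_years)])
    _, a, b = np.linalg.lstsq(design, series, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

t = np.arange(0, 8 * 12) / 12.0                          # 8 years of monthly samples
ewh_grace = 25.0 * np.cos(2 * np.pi * (t - 0.25))        # cm of equivalent water height
level_in_situ = 8.0 * np.cos(2 * np.pi * (t - 0.30))     # m of river level

amp, phase = annual_harmonic(t, ewh_grace)
corr = np.corrcoef(ewh_grace, level_in_situ)[0, 1]
print(f"annual amplitude = {amp:.1f} cm, phase = {phase:.2f} rad, correlation = {corr:.2f}")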
Abstract:
Cloud Storage is a model for storing data on networked computers, in which the data are kept on multiple servers, real and/or virtual, generally hosted at third-party facilities or on dedicated servers. Through this model it is possible to access personal or corporate information, whether video, photographs, music, databases or files, in a "dematerialized" way, without knowing the physical location of the data, from anywhere in the world and with any suitable device. The advantages of this approach are numerous: virtually unlimited storage capacity, payment only for the storage actually used, files accessible from anywhere in the world, greatly reduced maintenance, and greater security, since the files are protected from the theft, fire or damage that could affect local computers. Google Cloud Storage falls into this category: it is a service for developers, provided by Google, that allows data to be saved and manipulated directly on Google's infrastructure. In more detail, Google Cloud Storage offers a programming interface that uses simple HTTP requests to perform operations on its infrastructure. Examples of supported operations are: uploading a file, downloading a file, deleting a file, obtaining the list of files, or obtaining the size of a given file. Each of these HTTP requests encapsulates information about the method used (the type of request, such as GET, PUT, ...) and "scope" information (the resource on which to perform the request). It follows that it becomes possible to build an application that, using these HTTP requests, provides a Cloud Storage service (in which applications save data remotely, generally through third-party servers). In this thesis, after analysing all the details of the Google Cloud Storage service, an application called iHD was implemented that uses this service to save, manipulate and share data remotely (in the "cloud"). Common operations of this application allow folders to be shared among several users registered with the service, files to be uploaded and downloaded, folders or files to be deleted and, finally, folders to be created. The need for an application of this kind arose from the strong growth, in the mobile phone market, of devices whose technologies and functions are increasingly tied to the Internet and the connectivity it offers. The thesis also describes the design and implementation phases of the iHD application. In the design phase, all the functional and non-functional requirements of the application were analysed, together with the modules of which it is composed. Finally, for the implementation phase, the thesis presents all the classes and the corresponding methods of each module and, in some cases, how these classes were actually implemented in the programming language used.
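As an illustration of the HTTP interface described above, the sketch below wraps the basic object operations (list, upload, download, delete) with the Python requests library. It assumes the current Google Cloud Storage JSON API endpoints and an OAuth 2.0 bearer token; the bucket and token values are placeholders, and the thesis itself may have targeted the older XML API, where the same GET/PUT/DELETE pattern applies to different URLs.

# Minimal sketch of the HTTP-based object operations described above, using the
# Google Cloud Storage JSON API (an assumption; endpoints and auth are the only
# GCS-specific parts). BUCKET and ACCESS_TOKEN are placeholders.
import requests

API = "https://storage.googleapis.com/storage/v1"
UPLOAD_API = "https://storage.googleapis.com/upload/storage/v1"
ACCESS_TOKEN = "ya29.example-oauth2-token"   # hypothetical OAuth 2.0 bearer token
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
BUCKET = "example-bucket"                    # hypothetical bucket name

def list_objects(bucket: str) -> list:
    """GET request whose 'scope' is the bucket's object collection."""
    r = requests.get(f"{API}/b/{bucket}/o", headers=HEADERS)
    r.raise_for_status()
    return r.json().get("items", [])

def upload_object(bucket: str, name: str, data: bytes) -> dict:
    """Simple media upload: the request body carries the file contents."""
    r = requests.post(
        f"{UPLOAD_API}/b/{bucket}/o",
        params={"uploadType": "media", "name": name},
        headers={**HEADERS, "Content-Type": "application/octet-stream"},
        data=data,
    )
    r.raise_for_status()
    return r.json()

def download_object(bucket: str, name: str) -> bytes:
    """GET with alt=media returns the object's raw contents."""
    r = requests.get(f"{API}/b/{bucket}/o/{name}", params={"alt": "media"}, headers=HEADERS)
    r.raise_for_status()
    return r.content

def delete_object(bucket: str, name: str) -> None:
    """DELETE removes the object from the bucket."""
    requests.delete(f"{API}/b/{bucket}/o/{name}", headers=HEADERS).raise_for_status()

Each call pairs an HTTP method with a resource, which is exactly the method-plus-scope structure of the requests described in the abstract.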
Abstract:
The oceans play a critical role in the Earth's climate, but unfortunately, the extent of this role is only partially understood. One major obstacle is the difficulty associated with making high-quality, globally distributed observations, a feat that is nearly impossible using only ships and other ocean-based platforms. The data collected by satellite-borne ocean color instruments, however, provide environmental scientists with a synoptic look at the productivity and variability of the Earth's oceans and atmosphere, respectively, on high-resolution temporal and spatial scales. Three such instruments, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) onboard ORBIMAGE's OrbView-2 satellite, and two Moderate Resolution Imaging Spectroradiometers (MODIS) onboard the National Aeronautics and Space Administration's (NASA) Terra and Aqua satellites, have been in continuous operation since September 1997, February 2000, and June 2002, respectively. To facilitate the assembly of a suitably accurate data set for climate research, members of the NASA Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project and SeaWiFS Project Offices devote significant attention to the calibration and validation of these and other ocean color instruments. This article briefly presents results from the SIMBIOS and SeaWiFS Project Office's (SSPO) satellite ocean color validation activities and describes the SeaWiFS Bio-optical Archive and Storage System (SeaBASS), a state-of-the-art system for archiving, cataloging, and distributing the in situ data used in these activities.
Abstract:
A comprehensive hydroclimatic dataset is presented for the 2011 water year to improve understanding of hydrologic processes in the rain-snow transition zone. This type of dataset is extremely rare in the scientific literature because of the quality and quantity of its soil depth, soil texture, soil moisture, and soil temperature data. Standard meteorological and snow cover data are included for the entire 2011 water year, which featured several rain-on-snow events. Surface soil textures and soil depths from 57 points are presented, as well as soil texture profiles from 14 points. Meteorological data include continuous hourly shielded, unshielded, and wind-corrected precipitation, wind speed, air temperature, relative humidity, dew point temperature, and incoming solar and thermal radiation. Sub-surface data include hourly soil moisture at multiple depths from 7 soil profiles within the catchment and soil temperature at multiple depths from 2 soil profiles. Hydrologic response data include hourly stream discharge from the catchment outlet weir, continuous snow depths from one location, intermittent snow depths from 5 locations, and snow depth and density data from ten weekly snow surveys. Though it represents only a single water year, the presentation of both above- and below-ground hydrologic conditions makes it one of the most detailed and complete hydroclimatic datasets from the climatically sensitive rain-snow transition zone, suitable for a wide range of modeling and descriptive studies.
Abstract:
We dedicate this paper to the memory of Prof. Andrés Pérez-Estaún, who was a great and committed scientist, a wonderful colleague and an even better friend. The datasets in this work have been funded by Fundación Ciudad de la Energía (Spanish Government, www.ciuden.es) and by the European Union through the “European Energy Programme for Recovery” and the Compostilla OXYCFB300 project. Dr. Juan Alcalde is currently funded by NERC grant NE/M007251/1. Simon Campbell and Samuel Cheyney are acknowledged for thoughtful comments on gravity inversion.
Abstract:
Acknowledgements: The authors would like to thank Jonathan Dick, Josie Geris, Jason Lessels, and Claire Tunaley for data collection and Audrey Innes for lab sample preparation. We also thank Christian Birkel for discussions about the model structure and comments on an earlier draft of the paper. Climatic data were provided by Iain Malcolm and Marine Scotland Fisheries at the Freshwater Lab, Pitlochry. Additional precipitation data were provided by the UK Meteorological Office and the British Atmospheric Data Centre (BADC). We thank the European Research Council ERC (project GA 335910 VEWA) for funding the VeWa project.