905 results for DATA STORAGE
Abstract:
Riboflavin is employed as the photosensitizer of a novel photopolymer material for holographic recording. This material has a broad absorption spectrum range (more than 200 nm) due to the addition of this dye. The experimental results show that our material has high diffraction efficiency and large refractive index modulation. The maximum diffraction efficiency of the photopolymer is about 56%. Digital data pages are stored in this medium, and the reconstructed data page has good fidelity, with a bit-error ratio of about 1.8 × 10⁻⁴. It is found that the photopolymer material is suitable for high-density volume holographic digital storage.
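Below is a minimal sketch (not from the paper) of how a bit-error ratio such as the 1.8 × 10⁻⁴ quoted above is computed by comparing the original binary data page with the reconstructed one; the page size and the number of flipped bits are made up for illustration.

```python
import numpy as np

def bit_error_ratio(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Fraction of bits that differ between the stored and reconstructed data pages."""
    assert original.shape == reconstructed.shape
    errors = np.count_nonzero(original != reconstructed)
    return errors / original.size

# Hypothetical 1024 x 1024 binary data page with a few flipped bits on readout.
rng = np.random.default_rng(0)
page = rng.integers(0, 2, size=(1024, 1024), dtype=np.uint8)
readout = page.copy()
flip = rng.choice(page.size, size=190, replace=False)  # ~1.8e-4 of the bits
readout.flat[flip] ^= 1

print(f"BER = {bit_error_ratio(page, readout):.2e}")    # ~1.8e-04
```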
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): This is a preliminary presentation of what has been observed at points spread across Mexico. The amount of existing data is large enough that an atlas was published in 1977. This atlas contains information that goes back to the beginning of the country. The original data sets from which this atlas was issued exist in a variety of storage forms, ranging from simple paper blocks up to books and magnetic tapes.
Abstract:
Cultured Macrobrachium rosenbergii (scampi, about 30 g each) in headless shell-on form were individually quick-frozen in a spiral freezer. The frozen samples were glazed and packed in polythene bags, which were further packed in master cartons and stored at -18°C. Samples were drawn at regular intervals and subjected to biochemical, bacteriological and organoleptic analysis to study their storage characteristics. The data on the above parameters showed that the samples were in prime acceptable condition when stored for up to 23 weeks. No appreciable change in colour or odour was noticed in the raw muscle. Afterwards, organoleptic evaluation of the cooked muscle revealed a slight change in flavour. The texture also appeared a little tougher. These changes in organoleptic characteristics were well supported by the biochemical and bacteriological changes in the muscle.
Abstract:
In order to improve algal biofuel production on a commercial scale, an understanding of algal growth and fuel-molecule accumulation is essential. A mathematical model is presented that describes biomass growth and storage molecule (TAG lipid and starch) accumulation in the freshwater microalga Chlorella vulgaris under mixotrophic and autotrophic conditions. Biomass growth was formulated based on the Droop model, while storage molecule production was calculated from the carbon balance within the algal cells, incorporating carbon fixation via photosynthesis, organic carbon uptake and functional biomass growth. The model was validated with experimental growth data for C. vulgaris and was found to fit the data well. Sensitivity analysis showed that model performance was highly sensitive to variations in parameters associated with nutrient factors, photosynthesis and light intensity. The maximum productivity and biomass concentration were achieved under mixotrophic nitrogen-sufficient conditions, while the maximum storage content was obtained under mixotrophic nitrogen-deficient conditions.
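As a point of reference for the growth formulation mentioned above, here is a minimal sketch of the classical Droop (cell-quota) model integrated with SciPy; the parameter values are illustrative assumptions, and the paper's storage-molecule carbon balance is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not fitted values from the paper).
MU_MAX, Q_MIN = 1.5, 0.02   # max growth rate (1/day), minimum internal N quota (gN/g)
RHO_MAX, K_S = 0.1, 0.05    # max uptake rate (gN/g/day), half-saturation constant (gN/L)

def droop(t, y):
    X, Q, S = y                              # biomass, internal N quota, external N
    rho = RHO_MAX * S / (K_S + S)            # Michaelis-Menten nutrient uptake
    mu = MU_MAX * max(0.0, 1.0 - Q_MIN / Q)  # Droop growth rate depends on the quota, not on S
    return [mu * X, rho - mu * Q, -rho * X]

sol = solve_ivp(droop, (0.0, 10.0), [0.05, 0.04, 0.5], dense_output=True)
print(f"final biomass ~ {sol.y[0, -1]:.2f} g/L")
```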
Abstract:
Personal communication devices are increasingly being equipped with sensors that can passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors for answers. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement.
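The query-handling logic described above (search the local cache first, then forward to direct one-hop neighbors, and diffuse samples through informed shuffling on encounters) can be sketched as follows. This is a simplified illustration, not the authors' simulator; the class, method, and region names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Simplified mobile node for the amorphous-placement scheme sketched in the abstract."""
    cache: dict = field(default_factory=dict)      # region id -> cached sensory sample
    neighbors: list = field(default_factory=list)  # current one-hop neighbors

    def query(self, region: str):
        # 1. Search the local cache first for a potential answer.
        if region in self.cache:
            return self.cache[region]
        # 2. If unsuccessful, forward the query to direct (one-hop) neighbors only,
        #    avoiding the multi-hop overhead of directed placement.
        for nb in self.neighbors:
            if region in nb.cache:
                return nb.cache[region]
        return None  # query fails for now; may succeed later as caches diffuse

    def shuffle(self, other: "Node"):
        # Informed exchange on an encounter: each node adopts samples it lacks.
        for region, sample in list(other.cache.items()):
            self.cache.setdefault(region, sample)
        for region, sample in list(self.cache.items()):
            other.cache.setdefault(region, sample)

# Usage: two nodes meet, shuffle caches, then one answers a query for a remote region.
a, b = Node(cache={"R1": 21.5}), Node(cache={"R7": 18.2})
a.neighbors = [b]
a.shuffle(b)
print(a.query("R7"))  # 18.2
```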
Abstract:
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short-term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of the primacy, recency, or bowed temporal order gradients that will be stored.
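The invariance property described above can be illustrated with a toy sketch (not the authors' STORE equations): when every previously stored activation is rescaled by a common factor as each new item arrives, the ratios between stored activations, and hence the temporal-order code, remain constant; repeated events occupy separate slots, loosely mirroring the preprocessor's role.

```python
def present_sequence(items, w=0.8):
    """Toy working-memory trace: each new item enters with unit activation while all
    previously stored activations are multiplied by a common factor w. Because the
    rescaling is common to every stored item, the ratios of their activations (the
    temporal-order code) are unchanged by later inputs."""
    memory = []  # list of (event, activation); repeated events get separate entries
    for item in items:
        memory = [(ev, a * w) for ev, a in memory]  # uniform rescaling of prior items
        memory.append((item, 1.0))                  # newest item
    return memory

trace = present_sequence(["A", "B", "A", "C"])
print(trace)  # recency gradient for w < 1; relative order of stored items is preserved
```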
Abstract:
As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, we introduce an approach that optimizes a decision-fusion technique for combining heterogeneous information, such as information from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC = 0.85 ± 0.01. DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC = 0.38 ± 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC = 0.94 ± 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 ± 0.07 to 0.67 ± 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
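The two performance metrics used above can be computed for any classifier's output scores with scikit-learn, as in the sketch below. The labels and scores are synthetic, the decision-fusion classifiers themselves are not reproduced, and scikit-learn's max_fpr option yields a standardized partial AUC that may differ from the paper's exact pAUC normalization.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                   # hypothetical lesion labels
y_score = y_true * 0.6 + rng.normal(0, 0.4, size=500)   # hypothetical classifier output

auc = roc_auc_score(y_true, y_score)
# Standardized partial AUC restricted to the low-false-positive (high-specificity) region.
pauc_high_spec = roc_auc_score(y_true, y_score, max_fpr=0.1)

print(f"AUC = {auc:.3f}")
print(f"pAUC (FPR <= 0.1, standardized) = {pauc_high_spec:.3f}")
```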
Abstract:
BACKGROUND: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological and clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. METHODOLOGY: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and of funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. PRINCIPAL FINDINGS: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. CONCLUSIONS: Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
Abstract:
This article provides a broad overview of project HEED (High-rise Evacuation Evaluation Database) and the methodologies employed in the collection and storage of first-hand accounts of evacuation experiences derived from face-to-face interviews with evacuees from the World Trade Center (WTC) Twin Towers complex on September 11, 2001. In particular, the article describes the development of the HEED database. This is a flexible research tool which contains qualitative data in the form of coded evacuee experiences along with the full interview transcripts. The data and information captured and stored in the HEED database are not only unique but also provide a means to address current and emerging issues relating to human factors associated with the evacuation of high-rise buildings.
Abstract:
The QICS controlled release experiment demonstrates that leaks of carbon dioxide (CO2) gas can be detected by monitoring acoustic, geochemical and biological parameters within a given marine system. However, the natural complexity and variability of marine system responses to (artificial) leakage strongly suggest that there are no absolute indicators of leakage or impact that can be used unequivocally and universally for all potential future storage sites. We suggest a multivariate, hierarchical approach to monitoring, escalating from anomaly detection to attribution, quantification and then impact assessment, as required. Given the spatial heterogeneity of many marine ecosystems, it is essential that environmental monitoring programmes are supported by a temporally (tidal, seasonal and annual) and spatially resolved baseline of data from which changes can be accurately identified. In this paper we outline and discuss the options for monitoring methodologies and identify the components of an appropriate baseline survey.
Abstract:
Available methods for measuring the impact of ocean acidification (OA) and of leakage from carbon capture and storage (CCS) on marine sedimentary pH profiles are unsuitable for replicated experimental setups. To overcome this issue, a novel optical sensor application is presented, using off-the-shelf optode technology (MOPP). The application is validated using microprofiling during a CCS leakage experiment, in which the impact of, and recovery from, a high-CO2 plume was investigated in two types of natural marine sediment. MOPP offered user-friendliness, speed of data acquisition, robustness to sediment type, and a large sediment depth range. This ensemble of characteristics overcomes many of the challenges found with other pH measuring methods in OA and CCS research. The impact varied greatly between sediment types, depending on baseline pH variability and sediment permeability. Sedimentary pH profiles recovered quickly, being close to control conditions 24 h after the cessation of the leak. However, variability of pH within the finer sediment was still apparent 4 days into the recovery phase. Habitat characteristics therefore need to be considered to truly disentangle the impacts of high-CO2 perturbations on benthic systems. Impacts on natural communities depend not only on the pH gradient caused by the perturbation, but also on other processes that outlive the perturbation, adding complexity to recovery.
Abstract:
The paper presents a new method for extracting the chemical transformation rate from reaction–diffusion data with no assumption about the kinetic model (a "kinetic-model-free procedure"). It is a new non-steady-state kinetic characterization procedure for heterogeneous catalysts. The mathematical foundation of the Y-procedure is a Laplace-domain analysis of the two inert zones in a thin-zone TAP reactor (TZTR), followed by transposition to the Fourier domain. When combined with time discretization and filtering, the Y-procedure leads to an efficient practical method for reconstructing the concentration and reaction rate in the active zone. Using the Y-procedure, the concentration and reaction rate of a non-steady-state catalytic process can be determined without any pre-assumption regarding the type of kinetic dependence. The Y-procedure is the basis for advanced software for non-steady-state kinetic data interpretation, and it can be used to relate changes in the catalytic reaction rate and kinetic parameters to changes in the surface composition (storage) of a catalyst.
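The abstract does not give the Y-procedure's transfer functions, but the generic Fourier-domain workflow it describes (discretize the measured signal, transform, filter to keep noise amplification under control, and transform back) can be sketched as follows; the filter choice and the synthetic signal are illustrative assumptions, not the actual Y-procedure operators.

```python
import numpy as np

def fourier_filter(signal: np.ndarray, dt: float, f_cut: float) -> np.ndarray:
    """Transform a time-discretized signal to the Fourier domain, suppress the
    high-frequency components that would otherwise amplify measurement noise,
    and transform back."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    spectrum[freqs > f_cut] = 0.0          # hard low-pass cut-off (illustrative choice)
    return np.fft.irfft(spectrum, n=signal.size)

# Hypothetical noisy exit-flux trace from an inert zone, sampled every 1 ms.
t = np.arange(0.0, 1.0, 1e-3)
clean = np.exp(-5 * t) * np.sin(20 * t)
noisy = clean + np.random.default_rng(1).normal(0, 0.02, t.size)
smoothed = fourier_filter(noisy, dt=1e-3, f_cut=50.0)
```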