916 results for data centric storage
Abstract:
Riboflavin is employed as the photosensitizer of a novel photopolymer material for holographic recording. Due to the addition of this dye, the material has a broad absorption spectrum (more than 200 nm wide). The experimental results show that the material has high diffraction efficiency and large refractive index modulation; the maximum diffraction efficiency of the photopolymer is about 56%. Digital data pages were stored in this medium, and the reconstructed data pages show good fidelity, with a bit-error ratio of about 1.8 × 10⁻⁴. It is found that the photopolymer material is suitable for high-density volume holographic digital storage.
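For context, peak diffraction efficiency and refractive index modulation in thick holographic gratings are conventionally related through Kogelnik's coupled-wave theory; the abstract gives no grating geometry, so the thickness, wavelength and Bragg angle in this illustrative calculation are assumptions, not the paper's values:

% Kogelnik diffraction efficiency of a thick unslanted transmission grating
\eta = \sin^{2}\!\left(\frac{\pi \, \Delta n \, d}{\lambda \cos\theta}\right)
% With the reported \eta \approx 0.56 and assumed d = 100~\mu\mathrm{m},
% \lambda = 532~\mathrm{nm}, \theta = 15^{\circ}, solving for the index
% modulation gives \Delta n \approx 1.4 \times 10^{-3}.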
Abstract:
The common -652 6N del variant in the CASP8 promoter (rs3834129) has been described as a putative low-penetrance risk factor for different cancer types. In particular, some studies suggested that the deleted allele (del) was inversely associated with CRC risk, while other analyses failed to confirm this. Hence, to better understand the role of this variant in the risk of developing CRC, we performed a multi-centric case-control study in which the -652 6N del variant was genotyped in a total of 6,733 CRC cases and 7,576 controls recruited by six different centers located in Spain, Italy, the USA, England, the Czech Republic and the Netherlands, all collaborating in the international consortium COGENT (COlorectal cancer GENeTics). Our analysis indicated that rs3834129 was not associated with CRC risk in the full data set. However, the del allele was under-represented in the subset of cases with a family history of CRC (per-allele model OR = 0.79, 95% CI = 0.69-0.90), suggesting that this allele may be protective against familial CRC. Since this multi-centric case-control study was performed on a very large sample, it provides robust clarification of the effect of rs3834129 on the risk of developing CRC in Caucasians.
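For readers unfamiliar with the reported statistic, a per-allele odds ratio and its Wald 95% confidence interval can be computed from a 2×2 allele-count table as in the minimal sketch below; the counts shown are hypothetical, not the study's genotype data.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Per-allele odds ratio with a Wald 95% CI.
    a/b = del/ins allele counts in cases, c/d = del/ins in controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical allele counts (NOT the study's data):
print(odds_ratio_ci(4000, 9466, 4600, 10552))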
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): This is a preliminary presentation of what has been observed at points spread across Mexico. The amount of existing data is large enough that an atlas was published in 1977. This atlas contains information going back to the country's beginnings. The original data sets from which the atlas was compiled exist in a variety of storage forms, ranging from simple paper records to books and magnetic tapes.
Abstract:
Cultured Macrobrachium rosenbergii (scampi, about 30 g each) in headless shell-on form was individually quick-frozen in a spiral freezer. The frozen samples were glazed and packed in polythene bags, which were further packed in master cartons and stored at -18°C. Samples were drawn at regular intervals and subjected to biochemical, bacteriological and organoleptic analysis to study their storage characteristics. The data on the above parameters showed that the samples remained in prime acceptable condition when stored for up to 23 weeks, with no appreciable change in the colour or odour of the raw muscle. Beyond this point, organoleptic evaluation of the cooked muscle revealed a slight change in flavour, and the texture also appeared a little tougher. These changes in organoleptic characters were well supported by the biochemical and bacteriological changes in the muscle.
Abstract:
In order to improve algal biofuel production on a commercial scale, an understanding of algal growth and fuel molecule accumulation is essential. A mathematical model is presented that describes biomass growth and storage molecule (TAG lipid and starch) accumulation in the freshwater microalga Chlorella vulgaris under mixotrophic and autotrophic conditions. Biomass growth was formulated based on the Droop model, while storage molecule production was calculated from the carbon balance within the algal cells, incorporating carbon fixation via photosynthesis, organic carbon uptake and functional biomass growth. The model was validated with experimental growth data of C. vulgaris and was found to fit the data well. Sensitivity analysis showed that model performance was highly sensitive to variations in parameters associated with nutrient factors, photosynthesis and light intensity. The maximum productivity and biomass concentration were achieved under mixotrophic, nitrogen-sufficient conditions, while the maximum storage content was obtained under mixotrophic, nitrogen-deficient conditions.
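The Droop formulation mentioned above ties growth to an internal nutrient quota rather than to the external concentration. A minimal sketch follows, assuming illustrative parameter values rather than the paper's fitted ones:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not the paper's fitted values)
MU_MAX, Q_MIN = 1.5, 0.02    # max growth rate (1/day), minimum quota (gN/g)
V_MAX, K_S = 0.1, 0.05       # uptake kinetics: gN/g/day, half-saturation gN/L

def droop(t, y):
    """Droop model: growth depends on the internal quota q, not directly
    on the external nutrient concentration s."""
    x, q, s = y                          # biomass, cell quota, nutrient
    mu = MU_MAX * (1 - Q_MIN / q)        # Droop growth law
    rho = V_MAX * s / (K_S + s)          # Monod-type nutrient uptake
    return [mu * x,                      # dX/dt: biomass growth
            rho - mu * q,                # dQ/dt: uptake minus growth dilution
            -rho * x]                    # dS/dt: nutrient drawdown

sol = solve_ivp(droop, (0, 14), [0.05, 0.04, 0.3],
                t_eval=np.linspace(0, 14, 50))
print(sol.y[0][-1])   # final biomass concentration

Under nitrogen depletion the quota decays toward Q_MIN and growth stalls, which is the regime in which storage molecules accumulate in the paper's model.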
Abstract:
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short-term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of the primacy, recency, or bowed temporal order gradients that will be stored.
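As a toy illustration of the invariance property described above (not the published STORE differential equations), consider a working memory in which each new arrival rescales all existing activations by a common factor, so the ratios among earlier items, and hence their stored order, never change:

def store_sequence(items, w=0.8):
    """Toy invariant-order working memory: each new item rescales all
    existing activations by the common factor w, so the RATIOS among
    previously stored items stay constant. Repeated items would need
    separate slots, as the paper's preprocessor provides."""
    memory = {}
    for item in items:
        for k in memory:
            memory[k] *= w         # common rescaling: relative order preserved
        memory[item] = 1.0         # new item enters at unit activation
    return memory

print(store_sequence("ABCD"))
# w = 0.8  -> recency gradient: A < B < C < D
# w = 1.25 -> primacy gradient: A > B > C > D

The bowed gradients of the full model require richer dynamics than this toy provides; here the factor w only selects between monotone primacy and recency gradients.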
Abstract:
As more diagnostic testing options become available to physicians, it becomes more difficult to combine the various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach that optimizes a decision-fusion technique for combining heterogeneous information, such as information from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC = 0.85 ± 0.01, while DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC = 0.38 ± 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC = 0.94 ± 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 ± 0.07 to 0.67 ± 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
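A hedged sketch of what an AUC-optimized fusion like DF-A might look like, assuming a simple convex combination of two classifiers' scores with the weight chosen by grid search on validation data (the paper's actual optimizer is not described in the abstract):

import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_auc_optimal(y_val, s1, s2, grid=np.linspace(0, 1, 101)):
    """Pick the convex-combination weight maximizing validation AUC.
    s1, s2: per-case scores from two heterogeneous classifiers."""
    return max(grid, key=lambda w: roc_auc_score(y_val, w * s1 + (1 - w) * s2))

# Synthetic example (not the breast cancer data sets):
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
s1 = y + rng.normal(0, 1.0, 500)     # weaker classifier's scores
s2 = y + rng.normal(0, 0.7, 500)     # stronger classifier's scores
w = fuse_auc_optimal(y, s1, s2)
print(w, roc_auc_score(y, w * s1 + (1 - w) * s2))

For a DF-P-like variant, scikit-learn's roc_auc_score accepts a max_fpr argument that yields a standardized partial AUC over the high-specificity region.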
Abstract:
BACKGROUND: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. METHODOLOGY: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow information about epidemiological and clinical study data sets to be shared in a collaborative environment, while ensuring that researchers can modify the information. Model-based predictions of the number of publications and the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. PRINCIPAL FINDINGS: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. CONCLUSIONS: Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
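A minimal stock-and-flow sketch of the kind of System Dynamics comparison described, with invented coefficients and a single reinforcing loop (the paper's actual model structure is not given in the abstract; the boost factors are chosen only to mirror the qualitative ordering of the reported findings):

def publications_after(years, metadata_sharing, data_sharing, dt=0.1):
    """Euler-integrated two-stock model: shared data sets feed publications,
    and publications attract further data-set contributions (reinforcing
    loop). All coefficients are invented for illustration."""
    pubs, datasets = 0.0, 130.0          # initial stocks (130 data sets)
    sharing = 0.6 * metadata_sharing + 0.4 * data_sharing
    for _ in range(int(years / dt)):
        new_datasets = 5.0 * (1 + 0.01 * pubs) * sharing
        datasets += new_datasets * dt
        pubs += 0.05 * datasets * dt     # publications per data set per year
    return pubs

for policy in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(policy, round(publications_after(5, *policy), 1))

With these assumed coefficients, any sharing policy beats the baseline, metadata sharing beats data sharing, and the combined policy does best, matching the qualitative pattern reported above.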
Abstract:
This article provides a broad overview of project HEED (High-rise Evacuation Evaluation Database) and the methodologies employed in the collection and storage of first-hand accounts of evacuation experiences, derived from face-to-face interviews with evacuees from the World Trade Center (WTC) Twin Towers complex on September 11, 2001. In particular, the article describes the development of the HEED database. This is a flexible research tool which contains qualitative data in the form of coded evacuee experiences along with the full interview transcripts. The data and information captured and stored in the HEED database are not only unique but also provide a means to address current and emerging issues relating to the human factors associated with the evacuation of high-rise buildings.
Abstract:
This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and develops the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel, and instead to reuse existing approaches to distributed heterogeneous data architectures, together with the lessons learned from that work, and apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture, based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse; web services for analysis agents, ontology agents and a verification engine; and a centrally verified outcome database maintained by a certifying body for qualification/professional status.
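As a hedged sketch of how the verification and sign-off components described above might be expressed as interfaces, the class and method names below are assumptions for illustration only; the paper does not specify an API:

from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class Recording:
    """One captured item of domain expertise (fields are assumptions)."""
    author: str
    payload: dict
    verified: bool = False
    signed_off_by: Optional[str] = None

class AnalysisAgent(Protocol):
    """Interface assumed for the web-service analysis agents."""
    def analyse(self, recording: Recording) -> dict: ...

class VerificationEngine:
    """Semi-autonomous verification followed by a human sign-off step."""
    def verify(self, recording: Recording) -> Recording:
        recording.verified = bool(recording.payload)   # placeholder rule
        return recording

    def sign_off(self, recording: Recording, certifier: str) -> Recording:
        if not recording.verified:
            raise ValueError("cannot sign off an unverified recording")
        recording.signed_off_by = certifier
        return recording

class OutcomeDatabase:
    """Centrally verified outcome store maintained by the certifying body."""
    def __init__(self):
        self._rows = []
    def commit(self, recording: Recording) -> None:
        if recording.signed_off_by is None:
            raise ValueError("only signed-off recordings may be committed")
        self._rows.append(recording)

The design point the sketch illustrates is that the central outcome database accepts only records that have passed both automated verification and an explicit sign-off, matching the paper's emphasis on certification.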
Abstract:
The QICS controlled release experiment demonstrates that leaks of carbon dioxide (CO2) gas can be detected by monitoring acoustic, geochemical and biological parameters within a given marine system. However, the natural complexity and variability of marine system responses to (artificial) leakage strongly suggest that there are no absolute indicators of leakage or impact that can be used unequivocally and universally at all potential future storage sites. We suggest a multivariate, hierarchical approach to monitoring, escalating from anomaly detection to attribution, quantification and then impact assessment, as required. Given the spatial heterogeneity of many marine ecosystems, it is essential that environmental monitoring programmes be supported by a temporally (tidal, seasonal and annual) and spatially resolved baseline of data from which changes can be accurately identified. In this paper we outline and discuss the options for monitoring methodologies and identify the components of an appropriate baseline survey.
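As a hedged sketch of the first tier of such a hierarchy, anomaly detection against a temporally resolved baseline, one simple approach (an assumption for illustration, not the QICS protocol) is a seasonally matched z-score test:

import numpy as np

def detect_anomalies(obs, baseline_by_season, z_thresh=3.0):
    """Flag observations deviating more than z_thresh standard deviations
    from the seasonally matched baseline. baseline_by_season maps a season
    key to an array of historical measurements (e.g. pH or pCO2)."""
    flags = []
    for season, value in obs:
        hist = np.asarray(baseline_by_season[season])
        z = (value - hist.mean()) / hist.std(ddof=1)
        flags.append((season, value, z, abs(z) > z_thresh))
    return flags

# Hypothetical baseline pH values and observations:
baseline = {"summer": [8.10, 8.08, 8.12, 8.09],
            "winter": [8.02, 8.04, 8.01, 8.03]}
print(detect_anomalies([("summer", 8.11), ("summer", 7.80)], baseline))

Only anomalies flagged at this stage would be escalated to the attribution, quantification and impact-assessment tiers described above.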
Abstract:
Available methods for measuring the impact of ocean acidification (OA) and leakage from carbon capture and storage (CCS) on marine sedimentary pH profiles are unsuitable for replicated experimental setups. To overcome this issue, a novel optical sensor application is presented, using off-the-shelf optode technology (MOPP). The application is validated by microprofiling during a CCS leakage experiment in which the impact of, and recovery from, a high-CO2 plume was investigated in two types of natural marine sediment. MOPP offered user-friendliness, speed of data acquisition, robustness to sediment type, and a large sediment depth range. This ensemble of characteristics overcomes many of the challenges encountered with other pH measurement methods in OA and CCS research. The impact varied greatly between sediment types, depending on baseline pH variability and sediment permeability. Sedimentary pH profiles recovered quickly, approaching control conditions 24 h after the cessation of the leak; however, variability of pH within the finer sediment was still apparent 4 days into the recovery phase. Habitat characteristics therefore need to be considered in order to truly disentangle the impacts of high-CO2 perturbation on benthic systems. Impacts on natural communities depend not only on the pH gradient caused by the perturbation, but also on other processes that outlive the perturbation, adding complexity to recovery.