873 results for Data storage equipment
Abstract:
The effect of adding internal fins to the injection tube of a storage cell target fed by a polarized atomic beam source has been studied. The tube conductance and the atomic beam intensity at the exit of the injection tube have been measured, revealing an unexpectedly large beam loss. Simulations of the atomic beam reproduce the observed attenuation only when the non-zero azimuthal component of the atoms' velocity is taken into account.
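For illustration, the role of the azimuthal velocity component can be demonstrated with a toy Monte Carlo in which radial fins split the bore into equal angular sectors; the geometry, temperature and velocity distributions below are illustrative assumptions, not the simulation used in the study.

```python
import numpy as np

# Toy Monte Carlo of a finned injection tube: fins split the bore into
# equal angular sectors, and an atom is lost if its azimuthal sweep
# during transit exceeds its angular clearance to the nearest fin.
# Geometry, temperature and distributions are illustrative assumptions.
rng = np.random.default_rng(0)
L, R, n_fins = 0.1, 0.005, 8          # tube length (m), radius (m), fin count
sigma = np.sqrt(1.38e-23 * 100.0 / 1.67e-27)  # thermal speed scale, H at 100 K

N = 100_000
v_z = np.abs(rng.normal(0.0, sigma, N))        # axial speed toward the exit
v_phi = rng.normal(0.0, sigma, N)              # azimuthal velocity component
r = R * np.sqrt(rng.uniform(0.0, 1.0, N))      # uniform over the cross-section
transit = L / np.maximum(v_z, 1e-9)
phi_sweep = np.abs(v_phi) / np.maximum(r, 1e-9) * transit

clearance = rng.uniform(0.0, 2 * np.pi / n_fins, N)  # gap to the next fin
print("transmission, v_phi ignored:", 1.0)            # no fin ever struck
print("transmission, v_phi included:", (phi_sweep < clearance).mean())
```

With the azimuthal component set to zero no atom ever crosses a fin plane, so any attenuation only appears once that component is sampled.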
Abstract:
Personal communication devices are increasingly being equipped with sensors that are able to passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio with the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement.
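The cache-first query handling and encounter-time shuffling described above can be sketched schematically; the class, method names, and the truncation-based selection below are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Mobile node holding a small local cache of (region, sample) pairs."""
    cache: dict = field(default_factory=dict)
    neighbors: list = field(default_factory=list)

    def query(self, region, ttl=1):
        # Search the local cache first; only on a miss, forward the
        # query to direct neighbors (bounded by a small ttl).
        if region in self.cache:
            return self.cache[region]
        if ttl > 0:
            for nb in self.neighbors:
                hit = nb.query(region, ttl - 1)
                if hit is not None:
                    return hit
        return None

    def shuffle(self, other, capacity=8):
        """Exchange on encounter: merge both caches and keep up to
        `capacity` entries each. (Truncation stands in for the paper's
        informed, coverage-driven selection policy.)"""
        merged = {**self.cache, **other.cache}
        kept = list(merged.items())[:capacity]
        self.cache, other.cache = dict(kept), dict(kept)

a, b = Node(), Node()
a.cache["region-7"] = "sample"
b.neighbors.append(a)
print(b.query("region-7"))  # answered by a direct neighbor, one hop
```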
Abstract:
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short-term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of the primacy, recency, or bowed temporal order gradients that will be stored.
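The invariance property is easy to demonstrate with a toy recurrence (a simplified stand-in, not the published STORE equations): each new item receives a fixed activation while all stored items are rescaled by a common factor, so ratios among stored items never change, and the factor selects a primacy or recency gradient.

```python
def store_toy(sequence, a=1.0, b=0.8):
    """Toy working-memory gradient: on each new item, multiply every
    stored activation by b and append the new item at activation a.
    Ratios among previously stored items are preserved exactly, so the
    temporal-order code is invariant; b < 1 yields a recency gradient,
    b > 1 a primacy gradient. (Illustrative only, not STORE 1 or 2.)"""
    activations = []
    for item in sequence:
        activations = [x * b for x in activations] + [a]
    return activations

print(store_toy("ABCD", b=0.8))  # recency: later items stronger
print(store_toy("ABCD", b=1.2))  # primacy: earlier items stronger
```

Reproducing the bowed gradients mentioned in the abstract requires the interaction of (at least) two such levels, which this single-level toy deliberately omits.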
Abstract:
The thesis initially gives an overview of the wave industry and the current state of some of the leading technologies, as well as the energy storage systems that are inherently part of the power take-off mechanism. The benefits of electrical energy storage systems for wave energy converters are then outlined, as well as the key parameters required of them. The options for storage systems are investigated, and the reasons for examining supercapacitors and lithium-ion batteries in more detail are shown. The thesis then focuses its analysis on a particular type of offshore wave energy converter, the backward bent duct buoy employing a Wells turbine. Variable speed strategies from the research literature which make use of the energy stored in the turbine inertia are examined for this system, and based on this analysis an appropriate scheme is selected. A supercapacitor power smoothing approach is presented in conjunction with the variable speed strategy. As long component lifetime is a requirement for offshore wave energy converters, a computer-controlled test rig has been built to validate supercapacitor lifetimes against the manufacturer's specifications. The test rig is also utilised to determine the effect of temperature on supercapacitors and to determine application lifetime. Cycle testing is carried out on individual supercapacitors at room temperature, and also at rated temperature utilising a thermal chamber and equipment programmed through the general purpose interface bus by Matlab. Application testing is carried out using time-compressed scaled-power profiles from the model to allow a comparison of lifetime degradation. Further applications of supercapacitors in offshore wave energy converters are then explored. These include start-up of the non-self-starting Wells turbine, and low-voltage ride-through examined to the limits specified in the Irish grid code for wind turbines. These applications are investigated with a more complete model of the system that includes a detailed back-to-back converter coupling a permanent magnet synchronous generator to the grid. Supercapacitors have been utilised in combination with battery systems in many applications to aid with peak power requirements and have been shown to improve the performance of these energy storage systems. The design, implementation, and construction involved in coupling a 5 kWh lithium-ion battery to a microgrid are described. The high-voltage battery had a continuous power rating of 10 kW and was designed for the future EV market, with a controller area network interface. This build gives a general insight into some of the engineering, planning, safety, and cost requirements of implementing a high-power energy storage system near or on an offshore device for interface to a microgrid or grid.
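The GPIB-driven cycle testing described above follows a common pattern; the thesis drove the instruments from Matlab, and the sketch below shows the same loop in Python using pyvisa. The GPIB address and the SCPI commands are hypothetical placeholders, with the real command set coming from the supply and load manuals.

```python
import time
import pyvisa

rm = pyvisa.ResourceManager()
supply = rm.open_resource("GPIB0::5::INSTR")   # hypothetical address

def cycle_supercap(n_cycles, v_max=2.7, v_min=1.35, poll_s=0.5):
    """Cycle a supercapacitor between rated and half-rated voltage,
    logging terminal voltage so capacitance fade and ESR growth can be
    tracked against the manufacturer's end-of-life criteria."""
    log = []
    for cycle in range(n_cycles):
        for target, charging in ((v_max, True), (v_min, False)):
            supply.write(f"SOUR:VOLT {target}")        # hypothetical SCPI
            while True:
                v = float(supply.query("MEAS:VOLT?"))  # hypothetical SCPI
                log.append((time.time(), cycle, charging, v))
                done = v >= target - 0.01 if charging else v <= target + 0.01
                if done:
                    break
                time.sleep(poll_s)
    return log
```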
Abstract:
As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique for combining heterogeneous information, such as that from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC = 0.85 ± 0.01. DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC = 0.38 ± 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC = 0.94 ± 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC = 0.57 ± 0.07 to 0.67 ± 0.05, p > 0.10), DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
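The core of AUC-optimized fusion can be sketched in a few lines: compute AUC from ranks and grid-search a convex combination weight over two classifier outputs. This is an illustrative stand-in, not the authors' DF-A algorithm, and the data are synthetic.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney rank statistic (ties ignored for brevity)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fuse_auc(s1, s2, labels, n_grid=101):
    """Grid-search the convex weight w maximizing AUC of w*s1 + (1-w)*s2."""
    weights = np.linspace(0.0, 1.0, n_grid)
    aucs = [auc(w * s1 + (1 - w) * s2, labels) for w in weights]
    best = int(np.argmax(aucs))
    return weights[best], aucs[best]

# Two noisy classifier outputs for the same cases (synthetic data):
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
s1, s2 = y + rng.normal(0, 1.0, 200), y + rng.normal(0, 1.5, 200)
print(fuse_auc(s1, s2, y))   # fused weight and its AUC
```

A pAUC-optimized variant would simply replace the objective with the normalized area over the high-sensitivity portion of the ROC curve.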
Abstract:
BACKGROUND: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological and clinical research, in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context. METHODOLOGY: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. PRINCIPAL FINDINGS: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. CONCLUSIONS: Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
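A System Dynamics policy comparison of this kind reduces to integrating a few coupled stocks and flows; the toy model below illustrates the pattern with made-up rates and feedbacks, not the paper's calibrated model.

```python
def simulate_policy(share_rate, reuse_gain, years=10, dt=0.1):
    """Euler-integrated stocks: shared (meta)data records drive extra
    publications, which attract funding that produces more data sets.
    All parameters are illustrative, not calibrated to the paper."""
    datasets, shared, pubs = 100.0, 0.0, 0.0
    for _ in range(int(years / dt)):
        d_shared = share_rate * (datasets - shared)    # sharing uptake
        d_pubs = 0.5 * datasets + reuse_gain * shared  # reuse boosts output
        d_datasets = 0.02 * pubs                       # funding feedback
        shared += dt * d_shared
        pubs += dt * d_pubs
        datasets += dt * d_datasets
    return pubs

# Status quo vs. a metadata-sharing policy (illustrative numbers):
print(simulate_policy(share_rate=0.00, reuse_gain=0.3))
print(simulate_policy(share_rate=0.20, reuse_gain=0.3))
```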
Abstract:
The electronics industry is developing rapidly, together with the increasingly complex problem of microelectronic equipment cooling. It has now become necessary for thermal design engineers to consider the problem of equipment cooling at some level. The use of Computational Fluid Dynamics (CFD) for such investigations is fast becoming a powerful and almost essential tool for the design, development and optimisation of engineering applications. However, turbulence models remain a key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt fluctuations experienced by the turbulent energy and other parameters in near-wall regions and shear layers, a particularly fine computational mesh is necessary, which inevitably increases the computer storage and run-time requirements. This paper discusses results from an investigation into the accuracy of currently used turbulence models. A newly formulated transitional hybrid turbulence model is also introduced, with comparisons against experimental data.
Abstract:
This article provides a broad overview of project HEED (High-rise Evacuation Evaluation Database) and the methodologies employed in the collection and storage of first-hand accounts of evacuation experiences derived from face-to-face interviews of evacuees from the World Trade Center (WTC) Twin Towers complex on September 11, 2001. In particular, the article describes the development of the HEED database, a flexible research tool which contains qualitative data in the form of coded evacuee experiences along with the full interview transcripts. The data and information captured and stored in the HEED database are not only unique but also provide a means to address current and emerging issues relating to human factors associated with the evacuation of high-rise buildings.
Abstract:
The US National Oceanic and Atmospheric Administration (NOAA) Fisheries Continuous Plankton Recorder (CPR) Survey has sampled four routes: Boston–Nova Scotia (1961–present), New York toward Bermuda (1976–present), Narragansett Bay–Mount Hope Bay–Rhode Island Sound (1998–present) and eastward of Chesapeake Bay (1974–1980). NOAA involvement began in 1974, when it assumed responsibility for the existing Boston–Nova Scotia route from what is now the UK's Sir Alister Hardy Foundation for Ocean Science (SAHFOS). Training, equipment and computer software were provided by SAHFOS to ensure continuity for this route and standard protocols for any new routes. Data for the first 14 years of this route were provided to NOAA by SAHFOS. Comparison of collection methods; sample processing; and sample identification, staging and counting techniques revealed near-consistency between NOAA and SAHFOS. One departure involved phytoplankton counting standards; this has since been addressed and the data corrected. Within- and between-survey taxonomic and life-stage names and their consistency through time were, and continue to be, an issue. For this, a cross-reference table has been generated that contains the SAHFOS taxonomic code, NOAA taxonomic code, NOAA life-stage code, National Oceanographic Data Center (NODC) taxonomic code, Integrated Taxonomic Information System (ITIS) serial number and authority, and consistent use/route. This table is available for review and use by other CPR surveys. Details of the NOAA–SAHFOS comparison and analytical techniques unique to NOAA are presented.
Abstract:
The QICS controlled release experiment demonstrates that leaks of carbon dioxide (CO2) gas can be detected by monitoring acoustic, geochemical and biological parameters within a given marine system. However, the natural complexity and variability of marine system responses to (artificial) leakage strongly suggest that there are no absolute indicators of leakage or impact that can unequivocally and universally be used for all potential future storage sites. We suggest a multivariate, hierarchical approach to monitoring, escalating from anomaly detection to attribution, quantification and then impact assessment, as required. Given the spatial heterogeneity of many marine ecosystems, it is essential that environmental monitoring programmes are supported by a temporally (tidal, seasonal and annual) and spatially resolved baseline of data from which changes can be accurately identified. In this paper we outline and discuss the options for monitoring methodologies and identify the components of an appropriate baseline survey.
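The first tier of the proposed hierarchy, anomaly detection against a resolved baseline, might look like the following in outline; the threshold and the baseline values are illustrative assumptions, not the paper's survey design.

```python
import numpy as np

def detect_anomaly(observed, baseline_mean, baseline_std, k=3.0):
    """Flag samples deviating more than k baseline standard deviations
    from the temporally resolved baseline; flagged points escalate to
    attribution and quantification rather than being declared leaks."""
    z = (observed - baseline_mean) / baseline_std
    return np.abs(z) > k

# Baseline statistics would come from the pre-injection tidal/seasonal
# survey; the values here are placeholders (e.g. seawater pH):
baseline_mean, baseline_std = 8.1, 0.05
pH_series = np.array([8.12, 8.09, 7.80, 8.11])
print(detect_anomaly(pH_series, baseline_mean, baseline_std))
```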
Abstract:
Available methods for measuring the impact of ocean acidification (OA) and leakage from carbon capture and storage (CCS) on marine sedimentary pH profiles are unsuitable for replicated experimental setups. To overcome this issue, a novel optical sensor application is presented, using off-the-shelf optode technology (MOPP). The application is validated using microprofiling during a CCS leakage experiment, in which the impact of and recovery from a high-CO2 plume were investigated in two types of natural marine sediment. MOPP offered user-friendliness, speed of data acquisition, robustness to sediment type, and a large sediment depth range. This ensemble of characteristics overcomes many of the challenges found with other pH measuring methods in OA and CCS research. The impact varied greatly between sediment types, depending on baseline pH variability and sediment permeability. Sedimentary pH profile recovery was quick, with profiles close to control conditions 24 h after the cessation of the leak. However, variability of pH within the finer sediment was still apparent 4 days into the recovery phase. Habitat characteristics therefore need to be considered to truly disentangle high-CO2 perturbation impacts on benthic systems. Impacts on natural communities depend not only on the pH gradient caused by the perturbation, but also on other processes that outlive the perturbation, adding complexity to recovery.
Abstract:
The paper presents a new method, the Y-procedure, for extracting the chemical transformation rate from reaction–diffusion data with no assumption on the kinetic model (a “kinetic model-free procedure”). It is a new non-steady-state kinetic characterization procedure for heterogeneous catalysts. The mathematical foundation of the Y-procedure is a Laplace-domain analysis of the two inert zones in a thin-zone TAP reactor (TZTR), followed by transposition to the Fourier domain. When combined with time discretization and filtering, the Y-procedure leads to an efficient practical method for reconstructing the concentration and reaction rate in the active zone. Using the Y-procedure, the concentration and reaction rate of a non-steady-state catalytic process can be determined without any pre-assumption regarding the type of kinetic dependence. The Y-procedure is the basis for advanced software for non-steady-state kinetic data interpretation, and can be used to relate changes in the catalytic reaction rate and kinetic parameters to changes in the surface composition (storage) of a catalyst.
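In outline, the Fourier-domain step amounts to applying a transfer operator to the measured exit signal and low-pass filtering before inverting. The sketch below shows only this generic deconvolution-plus-filter pattern; the actual inert-zone operators of the Y-procedure are given in the paper and are not reproduced here.

```python
import numpy as np

def reconstruct_rate(exit_flux, transfer, dt, f_cut):
    """Generic Fourier-domain reconstruction: apply a frequency-domain
    transfer operator to the measured exit flux, suppress amplified
    high-frequency noise with a hard low-pass filter, and invert.
    `transfer` stands in for the Y-procedure's inert-zone operators."""
    n = len(exit_flux)
    freqs = np.fft.rfftfreq(n, d=dt)
    spectrum = np.fft.rfft(exit_flux) * transfer(freqs)
    spectrum[freqs > f_cut] = 0.0            # the filtering step
    return np.fft.irfft(spectrum, n)

# Example with a placeholder first-order transfer operator (assumption):
rate = reconstruct_rate(exit_flux=np.exp(-np.linspace(0, 5, 256)),
                        transfer=lambda f: 1 + 2j * np.pi * f * 0.1,
                        dt=0.01, f_cut=10.0)
```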
Abstract:
A Design of Experiments (DoE) analysis was undertaken to generate a list of configurations for CFD numerical simulation of an aircraft crown compartment. Fitted regression models were built to predict the convective heat transfer coefficients of thermally sensitive dissipating elements located inside this compartment, namely the SEPDC and the Route G. These are currently positioned close to the fuselage, and it is of interest to optimise their heat transfer for reliability and performance purposes. Their locations and the external fuselage surface temperature were selected as input variables for the DoE. The models fit the CFD data well, with goodness-of-fit values ranging from 0.878 to 0.978, and predict that the optimum locations in terms of heat transfer are when the elements are positioned as close to the crown floor as possible, where they come into direct contact with the air flow from the cabin ventilation system, and when they are positioned close to the centreline. The methodology employed allows aircraft thermal designers to optimise equipment placement in confined areas of an aircraft during the design phase. The determined models should be incorporated into global aircraft numerical models to improve accuracy and reduce model size and computational time.
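A compact analogue of the fit-then-optimize step is a quadratic response surface over a factorial design; the variable names, levels, and responses below are placeholders, not the paper's CFD results.

```python
import numpy as np
from itertools import product

# Placeholder DoE: element height above the crown floor (x1), lateral
# offset from the centreline (x2), fuselage temperature (x3), each at
# three coded levels, with synthetic heat-transfer-coefficient responses.
levels = [-1.0, 0.0, 1.0]
X = np.array(list(product(levels, repeat=3)))
rng = np.random.default_rng(1)
h = 20 - 3*X[:, 0] - 2*np.abs(X[:, 1]) + 0.5*X[:, 2] \
    + rng.normal(0, 0.2, len(X))

def design(X):
    """Full quadratic response-surface basis (intercept, linear,
    two-factor interactions, squares)."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design(X), h, rcond=None)
pred = design(X) @ beta
r2 = 1 - np.sum((h - pred)**2) / np.sum((h - h.mean())**2)
print(f"goodness of fit = {r2:.3f}; best configuration:", X[np.argmax(pred)])
```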