9 results for Data processing and analysis
at Universidad Politécnica de Madrid
Abstract:
A series of experiments has been conducted to research and document relevant sloshing-type phenomena. The aim of this paper is to describe the setup and data processing of these experiments. A sloshing tank is subjected to angular motion; as a result, pressure records are obtained at several locations, together with the motion data, the torque, and a collection of image and video information. The experimental rig and the data acquisition systems are described. Useful information for experimental sloshing research practitioners is provided, covering the liquids used in the experiments, dyeing techniques, tank building processes, synchronization of the acquisition systems, etc. A new procedure for reconstructing experimental data, which takes experimental uncertainties into account, is presented. This procedure is based on a least-squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
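As an illustration of the least-squares spline reconstruction the abstract refers to, the following Python sketch fits a weighted spline to noisy pressure samples; the signal shape, noise level, and knot placement are illustrative assumptions, not the authors' actual data or settings.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Synthetic stand-in for a measured pressure record (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 400)                  # time [s]
sigma = 0.05                                    # assumed measurement uncertainty
p_true = np.exp(-3.0 * t) * np.sin(12.0 * t)    # hypothetical pressure signal
p_meas = p_true + rng.normal(0.0, sigma, t.size)

# Least-squares cubic spline with interior knots; weights 1/sigma let the
# fit account for the experimental uncertainty of each sample.
knots = np.linspace(0.1, 1.9, 15)
spline = LSQUnivariateSpline(t, p_meas, knots, w=np.full(t.size, 1.0 / sigma), k=3)

p_rec = spline(t)            # reconstructed signal
residual = p_meas - p_rec    # residuals for further uncertainty analysis
print(f"RMS residual: {np.sqrt(np.mean(residual**2)):.4f}")
```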
Abstract:
Due to the advancement of information technology in general, and of databases in particular, data storage devices are becoming cheaper and data processing speed is increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the Analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics of multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides all the possible ways to automatically cross multidimensional model data. Using the above, we propose use case diagrams and descriptions, which can be considered as patterns representing the DSS functionality with regard to the processing of the DW data on which the DSS are based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide in the development of DSS.
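To make the idea of enumerating all the ways of crossing multidimensional data concrete, here is a minimal Python sketch that generates every combination of dimensions over which a fact table could be aggregated; the star schema (sales by product, store, and date) is a hypothetical example, not the data model from the paper.

```python
from itertools import chain, combinations

# Hypothetical dimensions of a small sales data warehouse.
dimensions = ["product", "store", "date"]

def all_crossings(dims):
    """Enumerate every non-empty subset of dimensions, i.e. every way
    the fact table could be grouped and aggregated (the cube lattice)."""
    return chain.from_iterable(combinations(dims, r) for r in range(1, len(dims) + 1))

# Each crossing corresponds to one candidate use case of DW data processing.
for crossing in all_crossings(dimensions):
    print("aggregate sales by", ", ".join(crossing))
```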
Abstract:
PART I: Cross-section uncertainties under different neutron spectra. PART II: Processing uncertainty libraries.
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components of an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
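As a hedged illustration of the kind of bound such an analysis produces, the sketch below shows an orchestration that invokes a partner service once per item of an incoming message, so its number of invocations is bounded by the message length. The names and the orchestration itself are hypothetical, and the real analysis derives such bounds statically rather than by counting at run time.

```python
# Hypothetical orchestration: one partner invocation per item of the
# initiating message. A static resource analysis would deduce the
# symbolic bound invocations(n) = n, where n = len(items), without
# ever executing the orchestration.

def invoke_partner(item):
    pass                         # stand-in for a network call to a partner

def orchestrate(items):
    invocations = 0
    for item in items:
        invoke_partner(item)     # one partner call per message item
        invocations += 1
    return invocations

# The runtime count matches the statically derived upper and lower bound n.
n = 7
assert orchestrate(list(range(n))) == n
```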
Abstract:
Following the processing and validation of JEFF-3.1 performed in 2006 and presented at ND2007, and as a consequence of the latest update of this library (JEFF-3.1.2) in February 2012, a new processing and validation of the JEFF-3.1.2 cross-section library is presented in this paper. The processed library, in ACE format at ten different temperatures, was generated with the NJOY-99.364 nuclear data processing system. In addition, NJOY-99 inputs are provided to generate the PENDF, GENDF, MATXSR, and BOXER formats. The library has undergone strict QA procedures, being compared with other available libraries (e.g., ENDF/B-VII.1) and with processing codes such as the PREPRO-2000 codes. A set of 119 criticality benchmark experiments taken from ICSBEP-2010 has been used for validation purposes.
Abstract:
In this paper we describe the specification of a model for the semantically interoperable representation of language resources for sentiment analysis. The model integrates "lemon", an RDF-based model for the specification of ontology-lexica (Buitelaar et al., 2009), which is used increasingly for the representation of language resources as Linked Data, with Marl, an RDF-based model for the representation of sentiment annotations (Westerski et al., 2011; Sánchez-Rada et al., 2013).
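A minimal sketch of what such an integrated representation could look like, using Python's rdflib to attach a Marl sentiment annotation to a lemon lexical entry. The namespace URIs and property names are assumptions based on published versions of the two models, not verbatim from this paper.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Assumed namespaces for the lemon and Marl models (check the current specs).
LEMON = Namespace("http://lemon-model.net/lemon#")
MARL = Namespace("http://www.gsi.dit.upm.es/ontologies/marl/ns#")

g = Graph()
g.bind("lemon", LEMON)
g.bind("marl", MARL)

# A lexical entry for the word "excellent" carrying a positive polarity.
entry = URIRef("http://example.org/lexicon/excellent")
g.add((entry, RDF.type, LEMON.LexicalEntry))
g.add((entry, MARL.hasPolarity, MARL.Positive))
g.add((entry, MARL.polarityValue, Literal(0.9)))

print(g.serialize(format="turtle"))
```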
Abstract:
The PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated embedded system, based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing needed to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, in addition to being able to perform tests and transmit the collected data to the controller, PAMELA devices are capable of local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download his or her own algorithm code and add the new data processing algorithm to the device. The development of the SMA is done in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed to carry out the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
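Since the abstract mentions a damage index estimated with a delay-and-sum algorithm, here is a minimal, self-contained sketch of that idea for a guided-wave sensor array; the geometry, wave speed, and signals are all invented for illustration and do not reflect the actual PAMELA implementation.

```python
import numpy as np

# Illustrative delay-and-sum damage imaging for a guided-wave array.
# All parameters below are assumptions, not PAMELA's actual configuration.
fs = 1.0e6            # sampling rate [Hz]
c = 3000.0            # assumed guided-wave group velocity [m/s]
sensors = np.array([[0.0, 0.0], [0.3, 0.0], [0.3, 0.3], [0.0, 0.3]])  # [m]

def damage_index(signals, point):
    """Sum the residual signals (test minus baseline) at the delays implied
    by 'point'; a large coherent sum suggests a scatterer (damage) there."""
    total = 0.0
    for sig, pos in zip(signals, sensors):
        delay = np.linalg.norm(point - pos) / c          # travel time [s]
        idx = int(round(delay * fs))
        if idx < sig.size:
            total += sig[idx]                            # sample at that delay
    return abs(total)

# Fake residual signals: an echo arriving at each sensor from (0.15, 0.2).
target = np.array([0.15, 0.2])
signals = []
for pos in sensors:
    sig = np.zeros(2048)
    sig[int(round(np.linalg.norm(target - pos) / c * fs))] = 1.0
    signals.append(sig)

print(damage_index(signals, target))                   # high: sums coherently
print(damage_index(signals, np.array([0.05, 0.05])))   # low elsewhere
```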
Abstract:
Nowadays, devices that monitor the health of structures consume a lot of power and need a lot of time to acquire, process, and send information about the structure to the main processing unit. To decrease this time, fast electronic devices are starting to be used to accelerate the processing. In this paper, some hardware algorithms implemented in a programmable logic device are described. The goal of this implementation is to accelerate the processing and to reduce the amount of information that has to be sent. By reaching this goal, the time the processor needs to treat all the information is reduced, and so is the power consumption.
Abstract:
An important aspect of Process Simulators for photovoltaics is prediction of defect evolution during device fabrication. Over the last twenty years, these tools have accelerated process optimization, and several Process Simulators for iron, a ubiquitous and deleterious impurity in silicon, have been developed. The diversity of these tools can make it difficult to build intuition about the physics governing iron behavior during processing. Thus, in one unified software environment and using self-consistent terminology, we combine and describe three of these Simulators. We vary structural defect distribution and iron precipitation equations to create eight distinct Models, which we then use to simulate different stages of processing. We find that the structural defect distribution influences the final interstitial iron concentration ([Fe_i]) more strongly than the iron precipitation equations. We identify two regimes of iron behavior: (1) diffusivity-limited, in which iron evolution is kinetically limited and bulk [Fe_i] predictions can vary by an order of magnitude or more, and (2) solubility-limited, in which iron evolution is near thermodynamic equilibrium and the Models yield similar results. This rigorous analysis provides new intuition that can inform Process Simulation, material, and process development, and it enables scientists and engineers to choose an appropriate level of Model complexity based on wafer type and quality, processing conditions, and available computation time.
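To illustrate the kind of kinetics such Simulators integrate, the sketch below advances interstitial iron toward its temperature-dependent solubility via a Ham's-law-style precipitation rate; the Arrhenius constants, precipitate parameters, and anneal conditions are placeholder assumptions rather than any of the eight Models from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder Arrhenius parameters for iron in silicon (illustrative only).
kB = 8.617e-5                                        # Boltzmann constant [eV/K]
D = lambda T: 1.0e-3 * np.exp(-0.67 / (kB * T))      # diffusivity [cm^2/s]
Feq = lambda T: 1.0e22 * np.exp(-2.0 / (kB * T))     # solubility [cm^-3]

r_p = 1.0e-7      # assumed precipitate radius [cm]
N_p = 1.0e7       # assumed precipitate (structural defect) density [cm^-3]

def dFei_dt(t, Fei, T):
    """Ham's-law-style rate: Fe_i relaxes toward solubility at a rate
    4*pi*D*r_p*N_p; far from equilibrium the evolution is diffusivity-limited,
    while near Feq(T) it becomes solubility-limited."""
    return -4.0 * np.pi * D(T) * r_p * N_p * (Fei - Feq(T))

# Isothermal anneal at 1173 K, starting from 1e13 cm^-3 interstitial iron.
T = 1173.0
sol = solve_ivp(dFei_dt, (0.0, 3600.0), [1.0e13], args=(T,), method="LSODA")
print(f"[Fe_i] after 1 h at {T:.0f} K: {sol.y[0, -1]:.3e} cm^-3")
```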