905 results for STEP-NC
Abstract:
The scalability of security event correlation has become a major concern for security analysts and IT administrators when dealing with complex IT infrastructures that must handle huge volumes of events or wide correlation windows. The current correlation capabilities of Security Information and Event Management (SIEM) systems, based on a single node in a centralized server, have proved insufficient to process large event streams. This paper introduces a step forward in the current state of the art to address these problems. The proposed model takes into account the two main aspects of this field: distributed correlation and query parallelization. We present a case study of a multiple-step attack on the Olympic Games IT infrastructure to illustrate the applicability of our approach.
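As a purely illustrative sketch of the two aspects named above, the snippet below partitions an event stream by a correlation key and runs a simple windowed correlation rule over each partition in parallel; the rule, field names and window length are invented for illustration and are not taken from the paper:

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

WINDOW = 60.0  # correlation window in seconds (illustrative)

def correlate_partition(events):
    """Flag sources producing >= 3 failed logins inside one window (toy rule)."""
    alerts, window = [], []
    for ts, src, kind in sorted(events):
        window = [e for e in window if ts - e[0] <= WINDOW] + [(ts, src, kind)]
        if sum(1 for e in window if e[2] == "login_failed") >= 3:
            alerts.append((src, ts))
            window.clear()  # reset after raising an alert
    return alerts

def distributed_correlate(events, workers=4):
    # Query parallelization: shard events by source address so each worker
    # correlates an independent partition of the stream.
    shards = defaultdict(list)
    for ev in events:
        shards[ev[1]].append(ev)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [a for part in pool.map(correlate_partition, shards.values()) for a in part]

if __name__ == "__main__":
    stream = [(t, "10.0.0.7", "login_failed") for t in (1.0, 12.0, 25.0)] + [(30.0, "10.0.0.9", "login_ok")]
    print(distributed_correlate(stream))
```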
Abstract:
1. Introduction: setting and problem definition
2. The Adaptation Pathway
 – 2.1 Stage 1: appraising risks and opportunities
   • Step 1: Impact analysis
   • Step 2: Policy analysis
   • Step 3: Socio-institutional analysis
 – 2.2 Stage 2: appraising and choosing adaptation options
   • Step 4: Identifying and prioritizing adaptation options
3. Conclusions
Abstract:
This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps, which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step-characteristic Method of Characteristics (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows very close agreement. The 3D lattice solver in COBAYA3 uses a transport-corrected multi-group diffusion approximation with interface discontinuity factors of GET or Black Box Homogenization type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors.
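For reference, the interface discontinuity factors of Generalized Equivalence Theory (GET) mentioned above are commonly defined as the ratio of the heterogeneous (transport) surface-averaged flux to the homogeneous (diffusion) surface-averaged flux on each side of an interface, with continuity then imposed on the discontinuity-factor-weighted homogeneous fluxes; this is the standard textbook form, not a formula quoted from the benchmark paper:

\[
f_s \;=\; \frac{\overline{\phi}^{\,\mathrm{het}}_{s}}{\overline{\phi}^{\,\mathrm{hom}}_{s}},
\qquad
f_s^{+}\,\overline{\phi}^{\,\mathrm{hom},+}_{s} \;=\; f_s^{-}\,\overline{\phi}^{\,\mathrm{hom},-}_{s},
\]

where the superscripts + and − denote the two sides of interface s.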
Abstract:
This work aims at a deeper understanding of the energy loss phenomenon in polysilicon production reactors by the so-called Siemens process. Contributions to the energy consumption of the polysilicon deposition step are studied in this paper, focusing on the radiation heat loss phenomenon. A theoretical model for radiation heat loss calculations is experimentally validated with the help of a laboratory CVD prototype. Following the results of the model, relevant parameters that directly affect the amount of radiation heat losses are put forward. Numerical results of the model applied to a state-of-the-art industrial reactor show the influence of these parameters on energy consumption due to radiation per kilogram of silicon produced; the radiation heat loss can be reduced by 3.8% when the reactor inner wall radius is reduced from 0.78 to 0.70 m, by 25% when the wall emissivity is reduced from 0.5 to 0.3, and by 12% when the final rod diameter is increased from 12 to 15 cm.
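As a rough illustration of how the parameters highlighted above (wall emissivity, inner wall radius and rod diameter) enter the radiative loss, the gray two-surface enclosure expression from standard heat-transfer texts can be written down; this is only a simplified sketch, not the validated model of the paper:

\[
Q_{\mathrm{rad}} \;=\; \frac{\sigma\left(T_{\mathrm{rod}}^{4}-T_{\mathrm{wall}}^{4}\right)}
{\dfrac{1-\varepsilon_{\mathrm{rod}}}{\varepsilon_{\mathrm{rod}}\,A_{\mathrm{rod}}}
 + \dfrac{1}{A_{\mathrm{rod}}\,F_{\mathrm{rod}\to\mathrm{wall}}}
 + \dfrac{1-\varepsilon_{\mathrm{wall}}}{\varepsilon_{\mathrm{wall}}\,A_{\mathrm{wall}}}},
\]

with \(A_{\mathrm{rod}}\) set by the rod diameter, \(A_{\mathrm{wall}}\) by the inner wall radius and \(\varepsilon_{\mathrm{wall}}\) the wall emissivity, so that lowering the wall emissivity or shrinking the wall area reduces the exchanged radiative power.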
Abstract:
A phosphorus diffusion gettering model is used to examine the efficacy of a standard gettering process on interstitial and precipitated iron in multicrystalline silicon. The model predicts a large concentration of precipitated iron remaining after standard gettering for most as-grown iron distributions. Although changes in the precipitated iron distribution are predicted to be small, the simulated post-processing interstitial iron concentration depends strongly on the as-grown distribution of precipitates, indicating that precipitates must be considered as internal sources of contamination during processing. To inform and validate the model, the iron distributions before and after a standard phosphorus diffusion step are studied in samples from the bottom, middle, and top of an intentionally Fe-contaminated laboratory ingot. A census of iron-silicide precipitates taken by synchrotron-based X-ray fluorescence microscopy confirms the presence of a high density of iron-silicide precipitates both before and after phosphorus diffusion. A comparable precipitated iron distribution was measured in a sister wafer after hydrogenation during a firing step. The similar distributions of precipitated iron seen after each step in the solar cell process confirm that the effect of standard gettering on precipitated iron is strongly limited, as predicted by simulation. Good agreement between the experimental and simulated data supports the hypothesis that gettering kinetics are governed not only by the total iron concentration but also by the distribution of precipitated iron. Finally, future directions based on the modeling are suggested for the improvement of effective minority carrier lifetime in multicrystalline silicon solar cells.
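The central point above — that iron-silicide precipitates act as internal sources that buffer the interstitial iron level during gettering — can be illustrated with a deliberately simplified mass-balance toy model; all rate constants and the solubility value below are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

# Toy mass balance: interstitial Fe is removed by gettering while
# precipitates dissolve whenever the interstitial level drops below
# the solid solubility at the process temperature.
c_i = 1e12           # interstitial Fe, cm^-3 (illustrative)
c_p = 1e13           # Fe bound in precipitates, cm^-3 (illustrative)
c_solubility = 5e11  # solid solubility at the diffusion temperature (illustrative)
k_getter = 5e-3      # gettering rate constant, 1/s (illustrative)
k_dissolve = 1e-3    # precipitate dissolution rate constant, 1/s (illustrative)

dt, t_end = 1.0, 3600.0
for _ in range(int(t_end / dt)):
    getter = k_getter * c_i
    # precipitates re-supply interstitials while undersaturated and not exhausted
    dissolve = k_dissolve * max(c_solubility - c_i, 0.0) if c_p > 0 else 0.0
    dissolve = min(dissolve, c_p / dt)
    c_i += (dissolve - getter) * dt
    c_p -= dissolve * dt

print(f"final interstitial Fe: {c_i:.2e} cm^-3, remaining precipitated Fe: {c_p:.2e} cm^-3")
```

With a large precipitated reservoir the interstitial level settles at a re-supply balance instead of decaying towards zero, while the precipitated fraction barely changes — the qualitative behaviour the simulations in the paper attribute to precipitates.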
Abstract:
Background: Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise. New method: The effect of different preprocessing techniques on the signal-to-noise ratio is evaluated. The signal-to-noise ratio (SNR) was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials. Results: Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered as first-step noise reduction: Signal Space Separation (SSS) and temporal Signal Space Separation (tSSS), which decompose the signal into components originating inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts and sensor jumps, provided an SNR improvement of 5–10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI. Comparison with existing methods: No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed. Conclusions: The application of either SSS or tSSS is mandatory in Elekta systems; no significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior, and their use therefore seems advisable.
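A minimal sketch of the SNR definition used above (mean evoked amplitude divided by the standard error of the mean over trials), assuming an epochs array of shape (n_trials, n_channels, n_times); the variable names and synthetic data are illustrative:

```python
import numpy as np

def evoked_snr(epochs: np.ndarray) -> np.ndarray:
    """SNR per channel and time point: |mean over trials| / SEM over trials.

    epochs: array of shape (n_trials, n_channels, n_times).
    """
    n_trials = epochs.shape[0]
    evoked = epochs.mean(axis=0)                           # evoked field (signal)
    sem = epochs.std(axis=0, ddof=1) / np.sqrt(n_trials)   # standard error of the mean
    return np.abs(evoked) / sem

# Example with synthetic data: 26 trials, 10 channels, 300 samples
rng = np.random.default_rng(0)
epochs = rng.normal(size=(26, 10, 300)) + 0.5  # constant "evoked" component plus noise
print(evoked_snr(epochs).mean())
```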
Abstract:
With the consolidation of the new solid-state lighting LED devices, testing the compliance of lamps based on this technology for Solar Home Systems (SHS) has been analyzed. The definition of the laboratory procedures to be used with final products is a necessary step in order to be able to assure the quality of the lamps prior to installation [1]. As with CFL technology, particular attention has been given to simplicity and technical affordability in order to facilitate the implementation of the tests with basic and simple laboratory tools, even at the SHS electrification program locations themselves. The block of test procedures has been applied to a set of 14 low-cost lamps. They cover lamp resistance, reliability and performance under normal, extreme and abnormal operating conditions, as a simple but complete quality-metering tool for any LED bulb.
Abstract:
This paper presents the SAILSE Project (Sistema Avanzado de Información en Lengua de Signos Española – Spanish Sign Language Advanced Information System). This project aims to develop an interactive system to facilitate communication between a hearing person and a deaf person. The first step has been the linguistic study, including a sentence collection, its translation into LSE (Lengua de Signos Española – Spanish Sign Language), and sign generation. After this analysis, the paper describes the interactive system, which integrates an avatar to represent the signs, a text-to-speech converter and several translation technologies. Finally, this paper presents the evaluation carried out with deaf people and the main conclusions extracted from it.
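The system description above (translation into LSE, avatar sign rendering, text-to-speech) suggests a simple two-direction pipeline; the sketch below is purely illustrative and its function names (translate_to_lse, render_signs, synthesize_speech) are hypothetical placeholders, not the project's actual modules:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: str   # "hearing" or "deaf"
    text: str      # Spanish text (typed or recognized)

def translate_to_lse(text: str) -> List[str]:
    """Hypothetical placeholder for the Spanish -> LSE gloss translation step."""
    return text.upper().split()          # toy gloss sequence

def render_signs(glosses: List[str]) -> None:
    """Hypothetical placeholder for the signing avatar."""
    print("avatar signs:", " ".join(glosses))

def synthesize_speech(text: str) -> None:
    """Hypothetical placeholder for the text-to-speech converter."""
    print("speech out:", text)

def handle(turn: Turn) -> None:
    # Hearing -> deaf: translate the text to LSE and sign it with the avatar.
    # Deaf -> hearing: voice the deaf user's text with the TTS converter.
    if turn.speaker == "hearing":
        render_signs(translate_to_lse(turn.text))
    else:
        synthesize_speech(turn.text)

handle(Turn("hearing", "buenos días"))
handle(Turn("deaf", "gracias"))
```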
Abstract:
This work compared the quantification of soluble fibre in feeds using different chemical and in vitro approaches, and studied the potential interference between soluble fibre and mucin determinations. Six ingredients (sugar beet pulp (SBP), SBP pectins, insoluble SBP, wheat straw, sunflower hulls and lignocellulose) and seven rabbit diets, differing in soluble fibre content, were evaluated. In experiment 1, ingredients and diets were analyzed for total dietary fibre (TDF), insoluble dietary fibre (IDF), soluble dietary fibre (SDF), aNDFom (corrected for protein, aNDFom-cp) and 2-step pepsin/pancreatin in vitro DM indigestibility (corrected for ash and protein, ivDMi2). Soluble fibre was estimated by difference using three procedures: TDF − IDF (SDFIDF), TDF − ivDMi2 (SDFivDMi2), and TDF − aNDFom-cp (SDFaNDFom-cp). Soluble fibre determined directly (SDF) or by difference as SDFivDMi2 were not different (109 g/kg DM, on average). However, when it was calculated as SDFaNDFom-cp the value was 40% higher (153 g/kg DM, P < 0.05), whereas SDFIDF (124 g/kg DM) did not differ from any of the other methods. The correlation between the four methods was high (r ≥ 0.96; P ≤ 0.001; n = 13), but it decreased or even disappeared when SBP pectins and SBP were excluded and a lower and narrower range of variation of soluble fibre was used. In experiment 2, the ivDMi2 obtained using crucibles (reference method) was compared to that obtained using individual or collective Ankom bags in order to simplify the determination of SDFivDMi2. The ivDMi2 was not different when using crucibles or individual or collective Ankom bags. In experiment 3, the potential interference between soluble fibre and intestinal mucin determinations was studied using rabbit intestinal raw mucus, digesta and SBP pectins, lignocellulose and a rabbit diet. An interference was observed between the determinations of soluble fibre and crude mucin, as the contents of TDF and apparent crude mucin were high in SBP pectins (994 and 709 g/kg DM) and rabbit intestinal raw mucus (571 and 739 g/kg DM). After a pectinase treatment, the coefficient of apparent mucin recovery of SBP pectins was close to zero, whereas that of rabbit mucus was not modified. An estimation of the crude mucin carbohydrates retained in digesta TDF is proposed to correct TDF and soluble fibre digestibility. In conclusion, the values of soluble fibre depend on the methodology used. The contamination of crude mucin with soluble fibre is avoided by using pectinase.
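A small sketch of the three by-difference estimates compared in experiment 1; the figures passed in at the bottom are illustrative values in g/kg DM, not measurements from the paper:

```python
def soluble_fibre_by_difference(tdf: float, idf: float, ivdmi2: float, andfom_cp: float) -> dict:
    """Three estimates of soluble fibre obtained by difference from total dietary fibre."""
    return {
        "SDF_IDF": tdf - idf,             # TDF minus insoluble dietary fibre
        "SDF_ivDMi2": tdf - ivdmi2,       # TDF minus 2-step in vitro indigestible DM
        "SDF_aNDFom-cp": tdf - andfom_cp, # TDF minus aNDFom corrected for protein
    }

# Illustrative figures only (not measurements from the paper)
print(soluble_fibre_by_difference(tdf=400.0, idf=280.0, ivdmi2=290.0, andfom_cp=250.0))
```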
Abstract:
In the SESAR Step 2 concept of operations a Reference Business Trajectory (RBT) is available and visible to all actors, making it possible to conceive an operating method different from the current ATM system based on Collaborative Decision Making processes. Currently there is a need to describe in more detail the mechanisms by which actors (ATC, Network Management, Flight Crew, airports and the Airline Operation Centre) will negotiate revisions to the RBT. This paper introduces a negotiation model which uses constraint-based programming applied to a mediator to facilitate the negotiation process in a SWIM-enabled environment. Three processes for modelling the negotiation are explained, and a preliminary reasoning-agent algorithm modelled as a constraint satisfaction problem is presented. The computational capability of the model is evaluated in the conclusion.
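To illustrate what a reasoning agent modelled as a constraint satisfaction problem can look like in this setting, here is a minimal search sketch in which a mediator looks for an RBT revision (departure delay and flight level) satisfying constraints contributed by several actors; the actors, variables and constraints are invented for illustration and do not come from the paper:

```python
from itertools import product

# Candidate revisions to the RBT: (departure delay in minutes, flight level)
delays = [0, 5, 10, 15, 20]
flight_levels = [320, 340, 360, 380]

# Constraints contributed by different actors (all illustrative)
constraints = [
    lambda d, fl: d <= 15,                      # Airline Operation Centre: limit the delay
    lambda d, fl: fl >= 340,                    # Network Management: keep congested FL320 free
    lambda d, fl: not (d < 10 and fl == 380),   # ATC: early departures cannot take FL380
]

def mediate():
    """Return the first (delay, flight level) pair acceptable to every actor."""
    for d, fl in product(delays, flight_levels):
        if all(c(d, fl) for c in constraints):
            return d, fl
    return None  # negotiation fails; a further revision round would be needed

print(mediate())
```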
Abstract:
We demonstrate generating complete and playable card games using evolutionary algorithms. Card games are represented in a previously devised card game description language, a context-free grammar. The syntax of this language allows us to use grammar-guided genetic programming. Candidate card games are evaluated through a cascading evaluation function, a multi-step process in which games with undesired properties are progressively weeded out. Three representative examples of generated games are analysed. We observed that while these games are reasonably balanced and have skill elements, they are not yet entertaining for human players. The particular shortcomings of the examples are discussed in regard to the generative process, with a view to generating quality games.
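A minimal sketch of a cascading (multi-step) evaluation function of the kind described above: candidates failing a cheap check are discarded before more expensive checks run; the individual checks shown here are invented placeholders, not the criteria used in the paper:

```python
from typing import Callable, List, Optional

# Each stage returns a score in [0, 1] or None to reject the candidate outright.
Stage = Callable[[dict], Optional[float]]

def parses_and_terminates(game: dict) -> Optional[float]:
    return 1.0 if game.get("playable", False) else None        # cheap sanity check

def is_balanced(game: dict) -> Optional[float]:
    win_bias = abs(game.get("p1_win_rate", 0.5) - 0.5)
    return None if win_bias > 0.2 else 1.0 - win_bias          # weed out one-sided games

def has_skill_element(game: dict) -> Optional[float]:
    return game.get("smart_vs_random_margin", 0.0) or None     # smart player must beat random

def cascade(game: dict, stages: List[Stage]) -> float:
    score = 0.0
    for weight, stage in enumerate(stages, start=1):
        s = stage(game)
        if s is None:
            return score        # rejected: keep only the score earned so far
        score += weight * s     # later (more expensive) stages weigh more
    return score

stages = [parses_and_terminates, is_balanced, has_skill_element]
candidate = {"playable": True, "p1_win_rate": 0.55, "smart_vs_random_margin": 0.3}
print(cascade(candidate, stages))
```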
Abstract:
Vicinal Ge(100) is the common substrate for state-of-the-art multi-junction solar cells grown by metal-organic vapor phase epitaxy (MOVPE). While triple-junction solar cells based on Ge(100) present efficiencies greater than 40%, little is known about the microscopic III-V/Ge(100) nucleation and its interface formation. A suitable Ge(100) surface preparation prior to heteroepitaxy is crucial to achieve low defect densities in the III-V epilayers. Formation of single-domain surfaces with double-layer steps is required to avoid anti-phase domains in the III-V films. The step formation processes in the MOVPE environment strongly depend on the major process parameters such as substrate temperature, H2 partial pressure, group V precursors [1], and reactor conditions. Detailed investigation of these processes on the Ge(100) surface by ultrahigh vacuum (UHV) based standard surface science tools is complicated by the presence of H2 process gas. However, in situ surface characterization by reflection anisotropy spectroscopy (RAS) allowed us to study the MOVPE preparation of Ge(100) surfaces directly as a function of the relevant process parameters [2, 3, 4]. A contamination-free MOVPE-to-UHV transfer system [5] enabled correlation of the RA spectra with results from UHV-based surface science tools. In this paper, we establish the characteristic RA spectra of vicinal Ge(100) surfaces terminated with monohydrides, arsenic and phosphorus. RAS enabled in situ control of oxide removal, H2 interaction and domain formation during MOVPE preparation.
Abstract:
The aim of this Thesis is to go deeper into the use of models (conceptual and numerical) as prediction and analysis tools for hydrogeological studies, mainly from the point of view of mine drainage. First, the basic concepts and the usual ranges of parametric variation in the modelling of groundwater flow and particle transport are presented, together with the recommended modelling process, analysing each of its steps one by one; this material is based on the author's experience and contrasted with the available bibliography. Next, MODFLOW is described as a modelling tool, assessing the advantages offered by its most common pre/post-processing software (Processing MODFLOW, Mod CAD and Visual MODFLOW). Third, the criteria and parameters required to develop a conceptual model are introduced, together with the numerical discretization, the definition of boundary and initial conditions, and all the factors (anthropic or natural) that affect the system, covering the model creation process, data input, model execution, convergence criteria, calibration and the extraction of results, as implemented in Visual MODFLOW. Five practical cases in which the author has applied MODFLOW and the different pre/post-processing packages (Processing MODFLOW, Mod CAD and Visual MODFLOW) are then analysed, describing for each one the objectives, the conceptual model defined, the discretization, the parametric definition, the sensitivity analysis, the results obtained and the prediction of future states. Fifth, a program developed by the author is presented which improves on the facilities offered by Mod CAD and Visual MODFLOW, extending the modelling possibilities and the connection to other computers. A series of solutions to the most typical problems that may arise during modelling with MODFLOW is then proposed. Finally, the conclusions and recommendations reached are presented, with the purpose of assisting in the development of hydrogeological models, both conceptual and numerical.
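Although the thesis works with Processing MODFLOW, Mod CAD and Visual MODFLOW, the workflow it describes (conceptual model, discretization, boundary and initial conditions, execution) can be sketched with the open-source flopy Python package as a scriptable stand-in; the grid, properties and stresses below are purely illustrative:

```python
import numpy as np
import flopy

# Minimal steady-state groundwater flow model following the workflow described
# above: discretization -> boundary and initial conditions -> properties -> run.
m = flopy.modflow.Modflow(modelname="demo", exe_name="mf2005")

# Spatial discretization: one layer, 20 x 20 cells of 100 m
flopy.modflow.ModflowDis(m, nlay=1, nrow=20, ncol=20,
                         delr=100.0, delc=100.0, top=50.0, botm=0.0)

# Boundary and initial conditions: constant heads along the left/right edges
ibound = np.ones((1, 20, 20), dtype=int)
ibound[:, :, 0] = ibound[:, :, -1] = -1
strt = np.full((1, 20, 20), 45.0)
strt[:, :, -1] = 40.0
flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)

# Hydraulic properties and a pumping well standing in for mine drainage
flopy.modflow.ModflowLpf(m, hk=10.0)
flopy.modflow.ModflowWel(m, stress_period_data={0: [[0, 10, 10, -500.0]]})

flopy.modflow.ModflowPcg(m)   # solver
flopy.modflow.ModflowOc(m)    # output control
m.write_input()               # running the model additionally needs the mf2005 executable
```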
Abstract:
We apply diffusion strategies to propose a cooperative reinforcement learning algorithm in which agents in a network communicate with their neighbors to improve predictions about their environment. The algorithm is suitable for off-policy learning even in large state spaces. We provide a mean-square-error performance analysis under constant step-sizes. The gain from cooperation, in the form of more stability and less bias and variance in the prediction error, is illustrated in the context of a classical model. We show that the improvement in performance is especially significant when the behavior policy of the agents is different from the target policy under evaluation.
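A minimal sketch of the adapt-then-combine pattern behind diffusion strategies, applied here to plain linear TD(0) with a constant step-size rather than to the off-policy algorithm of the paper: each agent makes a local temporal-difference update and then combines its weight vector with those of its neighbours. The toy environment, features and combination matrix are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_features, gamma, mu = 4, 5, 0.9, 0.05   # mu: constant step-size

# Doubly-stochastic combination matrix for a ring of 4 agents (illustrative)
A = np.array([[.5, .25, 0, .25],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [.25, 0, .25, .5]])

w = np.zeros((n_agents, n_features))                 # per-agent linear value weights

def feature(s):
    phi = np.zeros(n_features); phi[s] = 1.0; return phi

for _ in range(2000):
    psi = np.empty_like(w)
    for k in range(n_agents):                        # adaptation step (local TD(0))
        s = rng.integers(n_features)                 # state sampled by agent k
        s_next = (s + 1) % n_features                # toy deterministic transition
        r = 1.0 if s_next == 0 else 0.0
        delta = r + gamma * w[k] @ feature(s_next) - w[k] @ feature(s)
        psi[k] = w[k] + mu * delta * feature(s)
    w = A @ psi                                      # combination step (diffusion)

print(w.round(2))
```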
Abstract:
The present thesis constitutes a step forward in advancing the frontiers of knowledge of fluid flow instability from a physical point of view, as a consequence of having developed groundbreaking methodologies for the efficient and accurate computation of the leading part of the spectrum of the multi-dimensional eigenvalue problems (EVP) governing the instability of flows with two or three inhomogeneous spatial directions, known as global linear stability problems. In the context of the numerical work presented in this thesis, the discretization of the spatial operator resulting from linearization of the Navier-Stokes equations around flows with two or three inhomogeneous spatial directions by variable-high-order stable finite-difference methods has permitted a speedup of four orders of magnitude in the solution of the corresponding two- and three-dimensional EVPs. This improvement in numerical performance has been achieved thanks to the high sparsity level offered by the high-order finite-difference schemes employed for the discretization of the operators. This permitted the use of efficient sparse linear algebra techniques without sacrificing accuracy and, consequently, solutions to be obtained on typical workstations, as opposed to the previously employed supercomputers. Besides the solution of the two- and three-dimensional EVPs of global linear instability, this development paved the way for the extension of the (linear and nonlinear) Parabolized Stability Equations (PSE) to analyze the instability of flows which depend in a strongly coupled, inhomogeneous manner on two spatial directions and weakly on the third, via the three-dimensional Parabolized Stability Equations (PSE-3D). Precisely the extensibility of the novel PSE-3D algorithm, developed entirely in the framework of the present thesis, to the study of nonlinear interactions of stability modes permits transition prediction in complex flows of industrial interest, thus extending the classic PSE concept, which has been successfully employed in the same context for two-dimensional boundary-layer flows over the last three decades. Typical examples of incompressible flows, the instability of which was analyzed in the present thesis without the need to resort to the restrictive assumptions used in the past, range from isolated vortices and systems of vortices modelling wing wakes, in which axial homogeneity is not imposed so that viscous diffusion of the flow can be taken into account, to turbulent swirling jets, the instability of which is exploited in order to improve the flame-holding properties of combustors. Compressible flows are also addressed: the instability of leading-edge flows at different flight speeds has been studied, and the wake of an isolated roughness element in supersonic and hypersonic boundary layers has been analyzed, showing excellent agreement with results obtained by direct numerical simulation. Finally, new instabilities have been identified near the minor-axis centerline of the Mach 7 hypersonic flow around an elliptic cone modeling the HIFiRE-5 flight test vehicle, with results comparing favorably with flight-test predictions, which further underlines the potential of the stability analysis methodologies developed in this thesis.
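The computational enabler described above (sparse high-order finite-difference discretization plus sparse eigensolvers for the leading part of the spectrum) can be illustrated with a toy one-dimensional example using SciPy's shift-invert Arnoldi interface; the operator below is a simple diffusion operator, not one of the thesis' stability operators:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigs

# Sparse second-order finite-difference approximation of d^2/dx^2 on (0, 1)
# with homogeneous Dirichlet boundary conditions: a toy stand-in for the
# large sparse matrices produced by high-order discretizations of the
# linearized Navier-Stokes operator.
n = 2000
h = 1.0 / (n + 1)
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
L = sparse.diags([off, main, off], [-1, 0, 1], format="csc") / h**2

# Leading (least-damped) part of the spectrum via shift-invert Arnoldi:
# eigenvalues closest to the shift sigma are found without forming dense matrices.
vals, vecs = eigs(L, k=5, sigma=0.0, which="LM")
print(np.sort(vals.real))   # should approximate -(k*pi)^2 for k = 1..5
```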