975 results for Continuous evaluation


Relevance: 30.00%

Abstract:

A bench-scale treatability study was conducted on a high-strength wastewater from a chemical plant to develop an alternative to the existing waste stabilization pond treatment system. The objectives were to determine whether the wastewater was treatable by the activated sludge process, to identify appropriate operating conditions if so, and to evaluate the degradability of bis(2-chloroethyl)ether (Chlorex) and benzene in the activated sludge system. Four 4-L Plexiglas, complete-mix, continuous-flow activated sludge reactors were operated in parallel under different operating conditions over a 6-month period. The operating conditions examined were hydraulic retention time (HRT), sludge retention time (SRT), nutrient supplementation, and Chlorex/benzene spikes. In general, the activated sludge system treating the high-strength wastewater was stable under large variations of organic loading and operating conditions. At an HRT of 2 days, more than 90% removal efficiency with good sludge settleability was achieved when the organic loading was less than 0.4 g BOD5/g MLVSS/d or 0.8 g COD/g MLVSS/d. An SRT of at least 20 days was required to maintain steady operation. Phosphorus addition enhanced the performance of the system, especially during stressed operation. On average, removals of benzene and Chlorex were 73-86% and 37-65%, respectively. The low-strength wastewater was also treatable by the activated sludge process, showing more than 90% BOD removal at an HRT of 0.5 days, although the sludge had poor settling characteristics. The aerated lagoon process treating the high-strength wastewater also provided significant organic reduction, but did not produce an acceptable effluent concentration.
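The loading and removal figures above follow from simple definitions; a minimal sketch (with illustrative values, not data from the study) shows how the organic loading and removal efficiency quoted above are computed:

```python
def organic_loading(influent_bod_g_L, hrt_d, mlvss_g_L):
    """Organic (food-to-microorganism) loading in g BOD5 / g MLVSS / d:
    mass of BOD5 applied per day per unit of biomass in the reactor."""
    return influent_bod_g_L / (hrt_d * mlvss_g_L)

def removal_efficiency(influent_mg_L, effluent_mg_L):
    """Fraction of the pollutant removed across the reactor."""
    return (influent_mg_L - effluent_mg_L) / influent_mg_L

# Illustrative values: 2-day HRT, 2.0 g/L BOD5 feed, 2.5 g/L MLVSS
# gives exactly the 0.4 g BOD5/g MLVSS/d limit reported above.
loading = organic_loading(2.0, 2.0, 2.5)
```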

Abstract:

The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors that varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit.

The study found that the Ĉg statistic was adequate in tests of significance for most situations. However, when testing data which deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into between 8 and 30 quantiles was studied, the deciles-of-risk approach was generally sufficient; subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons which suggest otherwise. Because the statistic does not follow a χ² distribution in that case, it is not recommended for models containing only categorical variables with a limited number of covariate patterns. The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed, and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates. Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit; neither method split observations with identical probabilities into different quantiles. Approaches which create equal-size groups by separating ties should be avoided.
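As a sketch of the statistic under discussion, the grouping computation can be written as follows (a simplified version; note that this naive equal-size split can separate tied probabilities across groups, which the study above advises against):

```python
def hosmer_lemeshow_C(y, p, g=10):
    """Hosmer-Lemeshow grouping statistic (deciles of risk when g=10).

    y: 0/1 outcomes, p: fitted probabilities. The returned statistic is
    compared against a chi-square distribution with g - 2 degrees of
    freedom. Caveat: this simple equal-size split can place tied
    probabilities in different groups."""
    pairs = sorted(zip(p, y))                  # order subjects by fitted risk
    n = len(pairs)
    bounds = [round(i * n / g) for i in range(g + 1)]
    c_hat = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        group = pairs[lo:hi]
        m = len(group)
        exp = sum(pi for pi, _ in group)       # expected events in the group
        obs = sum(yi for _, yi in group)       # observed events in the group
        pbar = exp / m
        c_hat += (obs - exp) ** 2 / (m * pbar * (1.0 - pbar))
    return c_hat
```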

Abstract:

We have analyzed the performance of a PET demonstrator formed by two sectors of four monolithic detector blocks placed face to face. Both front-end and read-out electronics have been evaluated by means of coincidence measurements, using a rotating 22Na source placed at the center of the sectors in order to emulate the behavior of a full ring. A continuous training method based on neural network (NN) algorithms has been used to determine the entrance points over the surface of the detectors. Reconstructed images of a 1 MBq 22Na point source and of a 22Na Derenzo phantom have been obtained using both analytic filtered back projection (FBP) methods and the 3D OSEM iterative algorithm available in the STIR software package [1]. Preliminary image reconstructions of a 22Na point source with Ø = 0.25 mm show spatial resolutions of 1.7 to 2.1 mm FWHM in the transverse plane. The results confirm the viability of this design for the development of a full-ring brain PET scanner compatible with magnetic resonance imaging for human studies.

Abstract:

Land cover is subject to continuous change on a wide variety of temporal and spatial scales, and those changes have significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery and aerial photographs, have proven to be suitable and reliable means from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different change/no-change thresholding algorithms are applied to these indices in order to better estimate the statistical parameters of the two categories; finally, the indices are integrated into a multisource fusion process, which generates a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties, and the results are evaluated by means of a quality control analysis as well as complementary graphical representations. The suggested methodology has also proved effective at identifying the change index with the highest contribution.
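A toy sketch of the three stages (index computation, change/no-change thresholding, multisource fusion) might look as follows; the indices and thresholding algorithms evaluated in the paper are more elaborate, and the mean-plus-k-sigma rule here is only one simple choice:

```python
import numpy as np

def difference_index(t1, t2):
    """Simple change index: absolute per-pixel band difference."""
    return np.abs(t2.astype(float) - t1.astype(float))

def threshold_change(index, k=1.0):
    """Binary change/no-change map: flag pixels more than k standard
    deviations above the mean index value (one simple thresholding
    rule among the several kinds the paper evaluates)."""
    return index > index.mean() + k * index.std()

def fuse(maps):
    """Multisource fusion by strict majority vote across binary maps."""
    stack = np.stack(maps).astype(int)
    return stack.sum(axis=0) * 2 > len(maps)
```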

Abstract:

The efficiency of the Iberian Energy Derivatives Market in its first five and a half years is assessed in terms of volume, open interest and price. The continuous market shows steady liquidity growth. Its volume is strongly correlated with that of the Over-The-Counter (OTC) market, the number of market makers, the enrolment of financial agents and of generation companies belonging to the integrated groups of last-resort suppliers, and the OTC volume cleared in its clearing house. Hedging efficiency, measured through the ratio between the final open interest and the cleared volume, shows the lowest values for the Spanish base-load futures, as they are the most liquid contracts. The ex-post forward risk premium has diminished due to the learning curve and to the effect of the fixed price remunerating indigenous coal-fired generation. This market is considerably less developed than the European leaders headquartered in Norway and Germany, so the enrolment of more traders, mainly international energy companies, financial agents, energy-intensive industries and renewable generation companies, is desirable. Market monitoring reports by the market operator providing post-trade transparency, OTC data access for the energy regulator, and assessment of regulatory risk could all contribute to efficiency gains.
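The hedging-efficiency indicator mentioned above is a simple ratio; a one-line sketch with made-up figures:

```python
def hedging_ratio(final_open_interest_GWh, cleared_volume_GWh):
    """Final open interest over cleared volume: lower values indicate
    that contracts change hands many times before delivery, i.e. a more
    liquid contract (as reported above for Spanish base-load futures)."""
    return final_open_interest_GWh / cleared_volume_GWh

# Hypothetical figures: 50 GWh still open out of 200 GWh cleared
ratio = hedging_ratio(50.0, 200.0)
```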

Abstract:

In order to show the choice of transparency as the guiding principle of the accreditation process, the article evaluates its influence on the fundamental subprocess of self-evaluation, thereby confirming that transparency is an essential tool for continuous improvement of academic processes and those of educational quality management. It fosters educational innovation and permits the sustainability of the continuous accreditation process over time, resulting in greater probabilities of university self-regulation through systemization of the process, with the objective of continuous improvement of university degree programs. The article analyzes the influence of transparency on each activity of the self-evaluation process according to the Peruvian accreditation model prepared under the total quality approach, as a reference for other accreditation models, proposing concrete transparency actions and evaluating its influence on the stakeholder groups in the self-evaluation process, as well as on the efficiency and effectiveness of the process. It is concluded that transparency has a positive influence on the training of human capital and the formation of the university's organizational culture, facilitating dissemination, understanding and involvement of the stakeholder groups in the continuous improvement of accreditation activities and increasing their acceptance of change and commitment to the process. It is confirmed that transparency contributes toward increasing the efficiency index of the self-evaluation process by reducing operating costs through adequate, accessible, timely contribution of information by the stakeholders and through the optimization of the time spent gathering relevant information. 
In addition, it is concluded that transparency contributes toward increasing the effectiveness index of self-evaluation by facilitating the achievement of its objectives through synthetic, useful, reliable interpretation of the education situation and the formulation of feasible improvement plans based on the adequacy, relevance, visibility, pertinence and truthfulness of the information analyzed.

Abstract:

The management of the long-lived radioactive waste produced by nuclear reactors constitutes one of the main challenges of nuclear technology today. One possible management option is the transmutation of long-lived nuclides into shorter-lived ones. Accelerator-driven subcritical systems (ADS) are one of the technologies under development to achieve this goal. An ADS consists of a subcritical nuclear reactor maintained in a steady state by an external neutron source driven by a particle accelerator. The interest of these systems lies in their capacity to be loaded with fuels containing larger fractions of minor actinides than conventional critical reactors, thereby increasing the transmutation rates of these elements, which are primarily responsible for the long-term radiotoxicity of nuclear waste. One of the key points identified for the operation of an industrial-scale ADS is the need to continuously monitor the reactivity of the subcritical system during operation. For this reason, since the 1990s a number of experiments have been conducted in zero-power subcritical assemblies (MUSE, RACE, KUCA, Yalina, GUINEVERE/FREYA) in order to validate these monitoring techniques experimentally. In this context, the present thesis addresses the validation of reactivity monitoring techniques at the Yalina-Booster subcritical assembly, which belongs to the Joint Institute for Power and Nuclear Research (JIPNR-Sosny) of the National Academy of Sciences of Belarus. A series of reactivity monitoring experiments was performed at this facility in 2008 under the direction of CIEMAT, within the EUROTRANS project of the 6th EU Framework Programme. Two types of experiments were carried out: experiments with a pulsed neutron source (PNS) and experiments with a continuous source subject to short interruptions (beam trips). 
For the PNS experiments, two fundamental techniques exist to measure the reactivity: the prompt-to-delayed neutron area-ratio technique (or Sjöstrand technique) and the prompt neutron decay constant technique. However, previous experiments have shown that correction techniques must be applied to account for the spatial and energy effects present in a real system and thus obtain accurate reactivity values. In this thesis, these corrections have been investigated through simulations of the system with the Monte Carlo code MCNPX. This research has also served to propose a generalized version of these techniques, in which relationships between the reactivity of the system and the measured quantities are obtained through Monte Carlo simulations. The second type of experiment, with a continuous source and beam trips, is the one more likely to be employed in an industrial ADS. The generalized version of the techniques developed for the PNS experiments has also been applied to the results of these experiments. Furthermore, the work presented in this thesis is, to my knowledge, the first time that the reactivity of a subcritical system has been monitored during operation with three simultaneous techniques: the current-to-flux, the source-jerk and the prompt neutron decay techniques. The cases analyzed include fast variations of the system reactivity (insertion and extraction of control rods) and fast variations of the neutron source (a long beam interruption and subsequent recovery).

Abstract:

This article presents a new automatic evaluation method for on-line graphics, its application, and the advantages achieved by applying the correction method developed. The software application, developed by the Innovation in Education Group "E4" of the Technical University of Madrid, is aimed at the online self-assessment of the graphic drawings that students carry out as continuous training. The adaptation to the European Higher Education Area is an important opportunity to research the possibilities of on-line assessment in education. To this end, a new software tool has been developed for continuous self-testing by undergraduates, which makes it possible to evaluate the students' graphical answers. The drawings made on-line by students are automatically corrected according to their geometry (straight lines, sloping lines or second-order curves) and their sizes (the specific values which define the graphics).
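As an illustration of correcting a drawn answer by its geometry, here is a toy check (with a hypothetical tolerance and criterion, not the E4 tool's actual algorithm) that decides whether a stroke is a straight line via a total-least-squares fit:

```python
import math

def is_straight_line(points, tol=0.5):
    """Crude geometric check of a drawn stroke: fit the principal axis
    of the point cloud and accept the stroke as a straight line when the
    worst perpendicular residual is within tol (hypothetical tolerance)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # angle of the principal axis of the 2x2 covariance matrix
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    nx, ny = -math.sin(theta), math.cos(theta)   # unit normal to the axis
    worst = max(abs((x - mx) * nx + (y - my) * ny) for x, y in points)
    return worst <= tol
```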

Abstract:

This paper describes the GTH-UPM system for the Albayzin 2014 Search on Speech Evaluation. The evaluation task consists of searching for a list of terms/queries in audio files. The GTH-UPM system we present is based on an LVCSR (Large Vocabulary Continuous Speech Recognition) system. We have used the MAVIR corpus and the Spanish partition of the EPPS (European Parliament Plenary Sessions) database for training both the acoustic and the language models. The main effort has been focused on lexicon preparation and text selection for language model construction. The system makes use of different lexicons and language models depending on the task being performed. For the best configuration of the system on the development set, we obtained a FOM of 75.27 for the keyword spotting task.

Abstract:

A procedure is proposed for measuring the overheating temperature (ΔT) of the p-n junction area in the structure of photovoltaic (PV) cells converting laser or solar radiation, relative to the ambient temperature, under the conditions of connection to an electric load. The basis of the procedure is the measurement of the open-circuit voltage (Voc) during the initial time period after fast disconnection of the external resistive load. Simultaneous temperature control on an external heated part of a PV module provides the means for determining the value of Voc at ambient temperature; comparing it with the value measured after switching off the load makes the calculation of ΔT possible. Calibration data on the Voc = f(T) dependences for single-junction AlGaAs/GaAs and triple-junction InGaP/GaAs/Ge PV cells are presented. The temperature dynamics of the PV cells has been determined under flash illumination and during fast commutation of the load. Temperature measurements were taken in two cases: single-junction cells converting continuous laser power, and triple-junction cells converting solar power while operating in concentrator modules.
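The ΔT calculation described above reduces to dividing the Voc drop by the calibrated temperature slope; a minimal sketch (the -2 mV/K slope is an illustrative ballpark for a single-junction cell, not a value from the paper):

```python
def junction_overheating(voc_at_ambient_V, voc_after_switch_off_V, dvoc_dT_V_per_K):
    """Overheating dT of the p-n junction relative to ambient, from the
    open-circuit voltage measured right after disconnecting the load and
    the calibrated Voc = f(T) slope (negative for typical cells)."""
    return (voc_after_switch_off_V - voc_at_ambient_V) / dvoc_dT_V_per_K

# Illustrative: Voc drops from 1.020 V to 0.980 V, slope -2 mV/K -> 20 K
dT = junction_overheating(1.020, 0.980, -2.0e-3)
```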

Abstract:

Efficient and safe heparin anticoagulation has remained a problem for continuous renal replacement therapies and intermittent hemodialysis for patients with acute renal failure. To make heparin therapy safer for the patient with acute renal failure at high risk of bleeding, we have proposed regional heparinization of the circuit via an immobilized heparinase I filter. This study tested a device based on Taylor-Couette flow and simultaneous separation/reaction for efficacy and safety of heparin removal in a sheep model. Heparinase I was immobilized onto agarose beads via cyanogen bromide activation. The device, referred to as a vortex flow plasmapheretic reactor, consisted of two concentric cylinders, a priming volume of 45 ml, a microporous membrane for plasma separation, and an outer compartment where the immobilized heparinase I was fluidized separately from the blood cells. Manual white cell and platelet counts, hematocrit, total protein, and fibrinogen assays were performed. Heparin levels were indirectly measured via whole-blood recalcification times (WBRTs). The vortex flow plasmapheretic reactor maintained significantly higher heparin levels in the extracorporeal circuit than in the sheep (device inlet WBRTs were 1.5 times the device outlet WBRTs) with no hemolysis. The reactor treatment did not effect any physiologically significant changes in complete blood cell counts, platelets, and protein levels for up to 2 hr of operation. Furthermore, gross necropsy and histopathology did not show any significant abnormalities in the kidney, liver, heart, brain, and spleen.

Abstract:

ALICE is one of the four major experiments at the LHC particle accelerator at CERN, the European laboratory for particle physics. The management committee of the LHC has recently approved an upgrade program for this experiment. Among the upgrades planned for the coming years are improving the resolution and tracking efficiency while maintaining the excellent particle identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, it is necessary to upgrade the Time Projection Chamber (TPC) and Muon tracking (MCH) detectors by replacing their read-out electronics, which are not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC supports both positive and negative polarities, with 32 channels per chip and continuous data read-out, and has lower power consumption than previous versions. This work covers the design, fabrication and experimental testing of a read-out front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity, so that the new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. In order to integrate 32 channels per chip, the front-end design requires small area and low power consumption, but at the same time low noise. To this end, a new noise and PSRR (Power Supply Rejection Ratio) improvement technique for the CSA design, with no power or area penalty, is proposed in this work. The analysis and equations of the proposed circuit are presented, and were verified by electrical simulations and by experimental tests of a produced chip containing 5 channels of the designed front-end. The measured equivalent noise charge was < 550 e- at a sensitivity of 30 mV/fC and an input capacitance of 18.5 pF. The total core area of the front-end was 2300 µm × 150 µm, and the measured total power consumption was 9.1 mW per channel.
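For intuition about the noise figure quoted above, an ENC in electrons maps to an RMS output voltage through the charge sensitivity; a small sketch of that conversion:

```python
E_FC = 1.602e-4   # elementary charge in femtocoulombs (1.602e-19 C)

def enc_to_rms_mV(enc_electrons, sensitivity_mV_per_fC):
    """RMS noise at the shaper output implied by an equivalent noise
    charge (ENC); e.g. 550 e- at 30 mV/fC is roughly 2.6 mV RMS."""
    return enc_electrons * E_FC * sensitivity_mV_per_fC
```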

Abstract:

Aerobic Gymnastics is the ability to perform complex movement patterns derived from traditional aerobic exercise, in a continuous manner, with high intensity, perfectly integrated with soundtracks. This sport is performed under aerobic/anaerobic lactacid conditions and requires the execution of complex movements derived from traditional aerobic exercise, integrated with difficulty elements performed at a high technical level. The name "aerobic" is itself somewhat inaccurate, because Aerobic Gymnastics does not rely only on aerobic work during competition: the exercises last between 1'30" and 1'45" at a high rhythm. Competitive Aerobics exploits the basic movements and coordination schemes of amateur Aerobics, even though it is so much more intense that it requires a completely different mix of energy systems. Owing to the complexity and speed with which the technical elements of Aerobic Gymnastics are performed, the introduction of video analysis is essential for a qualitative and quantitative evaluation of athletes' performance during training. Performance analysis allows an accurate description and explanation of the evolution and dynamics of sports movements. Notational analysis is used by technicians to obtain an objective analysis of performance: tactics, technique and individual movements can be analyzed to help coaches and athletes re-evaluate their performance and gain an advantage in competition. The following experimental work is intended as a starting point for analyzing the performance of athletes in an objective way, not only during competitions but especially during training. It is therefore advisable to introduce video analysis and notational analysis for a more quantitative and qualitative examination of technical movements, with the goal of improving both the athlete's technique and the coach's teaching.

Abstract:

It is well known that the sound absorption and sound transmission properties of open porous materials are highly dependent on their airflow resistance. Low values of airflow resistance indicate little resistance to air streaming through the porous material, while high values are a sign that most of the pores inside the material are closed. Laboratory procedures for measuring airflow resistance have been standardized by several organizations, including ISO and ASTM, for both alternating flow and continuous flow. However, practical implementation of these standardized methods can be both complex and expensive. In this work, two indirect alternative measurement procedures were compared against the alternating-flow standardized technique. The techniques were tested on three families of eco-friendly sound-absorbing materials: recycled polyurethane foams, natural coconut fibres, and recycled polyester fibres. It is found that the airflow resistance values measured using the two alternative methods are very similar, and there is also a good correlation between the values obtained through the alternative and standardized methods.
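The continuous-flow measurement principle reduces to a simple calculation; a sketch with made-up sample values, following the usual definitions (specific airflow resistance = pressure drop / face velocity, in Pa·s/m; resistivity = specific resistance / sample thickness):

```python
def airflow_resistance(delta_p_Pa, flow_m3_s, area_m2, thickness_m):
    """Specific airflow resistance (Pa*s/m) and airflow resistivity
    (Pa*s/m^2) of a porous sample from a steady (continuous-flow)
    measurement: face velocity u = qv/A, resistance = delta_p / u."""
    u = flow_m3_s / area_m2
    specific_resistance = delta_p_Pa / u
    resistivity = specific_resistance / thickness_m
    return specific_resistance, resistivity

# Made-up sample: 10 Pa drop, 1 L/s through 0.01 m^2, 50 mm thick
R, r = airflow_resistance(10.0, 0.001, 0.01, 0.05)
```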