987 results for multi-turn injection
Abstract:
This paper presents a multi-cell single-phase high-power-factor boost rectifier in interleaved connection, operating in critical conduction mode, employing a soft-switching technique, and controlled by a Field Programmable Gate Array (FPGA). The soft-switching technique is based on zero-current-switching (ZCS) cells, providing ZC (zero-current) turn-on and ZCZV (zero-current-zero-voltage) turn-off for the active switches, and ZV (zero-voltage) turn-on and ZC (zero-current) turn-off for the boost diodes. The disadvantages related to the reverse-recovery effects of boost diodes operated in continuous conduction mode (additional losses and electromagnetic interference (EMI) problems) are minimized by the operation in critical conduction mode. In addition, due to the interleaving technique, the rectifier's features include reduced input current ripple, reduced output voltage ripple, the use of low-stress devices, a low-volume EMI input filter, a high input power factor (PF), and low total harmonic distortion (THD) in the input current, in compliance with the IEC 61000-3-2 standard. The digital controller was developed in a hardware description language (VHDL) and implemented on a Xilinx Spartan-IIE (XC2S200E) FPGA device, enforcing true critical-conduction operation for all interleaved cells and closing the output-voltage regulation loop, so that the converter acts as a pre-regulator rectifier. Experimental results are presented for a prototype implemented with two and with four interleaved cells, 400 V nominal output voltage, and 220 Vrms nominal input voltage, in order to verify the feasibility and performance of the proposed FPGA-based digital control.
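As a rough illustration of the ripple-cancellation argument above (a minimal sketch assuming ideal triangular critical-conduction-mode inductor currents with a fixed on-time fraction, not the authors' VHDL controller), the following compares the normalized input-current ripple for one, two, and four interleaved cells:

```python
# Illustrative sketch (not the paper's FPGA/VHDL controller): input-current
# ripple cancellation obtained by interleaving N boost cells in critical
# conduction mode. Each cell is modelled as a triangular inductor current
# with on-time fraction d, and the cells are phase-shifted by T/N.
import numpy as np

def cell_current(t, period, d):
    """Normalized triangular CrM inductor current with on-time fraction d."""
    x = (t % period) / period
    return np.where(x < d, x / d, (1.0 - x) / (1.0 - d))

def input_ripple(n_cells, d=0.65, period=1.0, points=20000):
    t = np.linspace(0.0, period, points, endpoint=False)
    total = sum(cell_current(t - k * period / n_cells, period, d)
                for k in range(n_cells)) / n_cells   # same average current in every case
    return total.max() - total.min()                  # peak-to-peak ripple, normalized

for n in (1, 2, 4):
    print(f"{n} interleaved cell(s): normalized input ripple = {input_ripple(n):.3f}")
```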
Abstract:
A method was developed using the multi-element graphite furnace atomic absorption spectrometry technique for the direct and simultaneous determination of As, Cu, and Pb in Brazilian sugar cane spirit (cachaça) samples. An end-capped transversely heated graphite atomizer (THGA) was employed, with platforms pre-treated with W as a permanent modifier and co-injection of Pd/Mg(NO3)2. Pyrolysis and atomization temperature curves were established in a cachaça medium (1+1; v/v) containing 0.2% (v/v) HNO3 and spiked with 20 μg L-1 As and Pb and 200 μg L-1 Cu. The effect of the concentration of major elements usually present in cachaça matrices (Ca, Mg, Na, and K) and of ethanol on the absorbance of As, Cu, and Pb was investigated. Analytical working solutions of As, Cu, and Pb were prepared in 10% (v/v) ethanol plus 5.0 mg L-1 Ca, Mg, Na, and K. Acidified to 0.2% (v/v) HNO3, these solutions were suitable for building calibration curves by matrix matching. The proposed method was applied to the simultaneous determination of As, Cu, and Pb in commercial sugar cane spirits. The characteristic mass for the simultaneous determination was 16 pg As, 119 pg Cu, and 28 pg Pb. The pretreated tube lifetime was about 450 firings. The limit of detection (LOD) was 0.6 μg L-1 As, 9.2 μg L-1 Cu, and 0.3 μg L-1 Pb. The concentrations found ranged from 0.81 to 4.28 μg L-1 As, 0.28 to 3.82 mg L-1 Cu, and 0.82 to 518 μg L-1 Pb. The recoveries of the spiked samples ranged from 94 to 112% (As), 97 to 111% (Cu), and 95 to 101% (Pb). The relative standard deviation (n=12) was 6.9%, 7.4%, and 7.7% for As, Cu, and Pb, respectively, present in a sample at concentrations of 0.87 μg L-1, 0.81 mg L-1, and 38.9 μg L-1.
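For reference, the two figures of merit quoted above follow the standard GFAAS definitions; the sketch below shows the arithmetic with hypothetical placeholder numbers (not the paper's raw data): the characteristic mass m0 is the analyte mass yielding an integrated absorbance of 0.0044 s, and the LOD is three times the standard deviation of blank readings divided by the calibration slope.

```python
# Minimal sketch of the standard GFAAS figures of merit (placeholder numbers,
# not the authors' data).
import numpy as np

def characteristic_mass(injected_mass_pg, integrated_absorbance_s):
    """m0 = m_injected * 0.0044 / A_int  (mass, in pg, giving 0.0044 s)."""
    return injected_mass_pg * 0.0044 / integrated_absorbance_s

def limit_of_detection(blank_absorbances, slope_per_ug_L):
    """LOD = 3 * SD(blank) / calibration slope, in ug L-1."""
    return 3.0 * np.std(blank_absorbances, ddof=1) / slope_per_ug_L

# Hypothetical illustration only:
print(characteristic_mass(200.0, 0.055))                                  # ~16 pg
print(limit_of_detection([0.001, 0.002, 0.0015, 0.0012, 0.0018], 0.009))  # ug L-1
```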
Abstract:
A novel AC Biosusceptometry (ACB) system with thirteen sensors was implemented and characterized in vitro using magnetic phantoms. The system presents coils in a coaxial arrangement, with one pair of excitation coils outside and thirteen pairs of detection coils inside. A first-order gradiometric configuration was used for optimal detection of magnetic signals. Several physical parameters, such as baseline, number of turns, excitation field, and coil diameters, were studied to improve the signal-to-noise ratio. The system exhibits enhanced sensitivity and spatial resolution owing to the higher density of sensors per area. In the future, these characteristics will make it possible to obtain images of a magnetic marker or tracer in the gastrointestinal tract, focusing on physiological and pharmaceutical studies. ACB emerged as an interesting, noninvasive, and low-cost approach to investigating gastrointestinal parameters, and this system can contribute to more accurate interpretation of biomedical signals and images.
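A minimal sketch of how a first-order gradiometric configuration rejects the common background field while keeping the sample signal (illustrative dipole model and geometry only, not the actual ACB coil set):

```python
# Illustrative sketch of first-order gradiometric detection: two detection
# coils at different distances from a magnetic sample are subtracted,
# cancelling the common (uniform) excitation/background field while keeping
# the sample signal, which decays with distance.
import numpy as np

def dipole_axial_field(moment, distance_m):
    """On-axis field of a magnetic dipole, B = mu0 * m / (2 * pi * r^3)."""
    mu0 = 4e-7 * np.pi
    return mu0 * moment / (2.0 * np.pi * distance_m**3)

def gradiometer_signal(moment, r_near, baseline, uniform_background):
    near = dipole_axial_field(moment, r_near) + uniform_background
    far = dipole_axial_field(moment, r_near + baseline) + uniform_background
    return near - far          # background cancels; sample signal survives

print(gradiometer_signal(moment=1e-3, r_near=0.05, baseline=0.10,
                         uniform_background=1e-6))
```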
Abstract:
The present study aimed to determine the spawning efficacy and the egg quality and quantity of captive-bred meagre induced with a single gonadotrophin-releasing hormone agonist (GnRHa) injection of 0, 1, 5, 10, 15, 20, 25, 30, 40, or 50 μg kg–1, in order to recommend an optimum dose to induce spawning. The doses 10, 15, and 20 μg kg–1 gave eggs with the highest quality (measured as percentage of viability, floating, fertilisation, and hatch) and quantity (measured as total number of eggs, number of viable eggs, number of floating eggs, number of hatched larvae, and number of larvae that reabsorbed the yolk sac). All egg quantity parameters were described by Gaussian regression analysis with R2 = 0.89 or R2 = 0.88, and the Gaussian regression analysis identified 15 μg kg–1 as the optimal dose. The doses examined in this comprehensive study ranged from low doses insufficient to stimulate a high spawning response (significantly lower egg quantities, p < 0.05, than at 15 μg kg–1) through to high doses that stimulated the spawning of significantly fewer eggs and eggs with significantly lower quality (egg viability). In addition, the latency period (time from hormone application to spawning) decreased with increasing dose, following a regression with R2 = 0.93, which suggests that higher doses accelerated oocyte development and in turn reduced egg quality and quantity. The identification of an optimal dose for the spawning of meagre, which has high aquaculture potential, represents an important advance for the Mediterranean aquaculture industry.
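A minimal sketch of the kind of Gaussian dose-response regression described above, fitted with scipy on synthetic placeholder data (not the study's measurements); the fitted peak position estimates the optimal dose:

```python
# Gaussian dose-response fit on synthetic placeholder data (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(dose, amplitude, optimum, width):
    return amplitude * np.exp(-((dose - optimum) ** 2) / (2.0 * width ** 2))

doses = np.array([0, 1, 5, 10, 15, 20, 25, 30, 40, 50], dtype=float)   # ug/kg
eggs = np.array([0.1, 0.3, 1.2, 2.4, 2.9, 2.5, 1.8, 1.2, 0.5, 0.2])    # synthetic

params, _ = curve_fit(gaussian, doses, eggs, p0=[3.0, 15.0, 10.0])
amplitude, optimum, width = params
residuals = eggs - gaussian(doses, *params)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((eggs - eggs.mean())**2)
print(f"estimated optimal dose ~ {optimum:.1f} ug/kg, R^2 = {r_squared:.2f}")
```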
Abstract:
Balancing the frequently conflicting priorities of conservation and economic development poses a challenge to management of the Swiss Alps Jungfrau-Aletsch World Heritage Site (WHS). This is a complex societal problem that calls for a knowledge-based solution. This in turn requires a transdisciplinary research framework in which problems are defined and solved cooperatively by actors from the scientific community and the life-world. In this article we re-examine studies carried out in the region of the Swiss Alps Jungfrau-Aletsch WHS, covering three key issues prevalent in transdisciplinary settings: integration of stakeholders into participatory processes; perceptions and positions; and negotiability and implementation. In the case of the Swiss Alps Jungfrau-Aletsch WHS the transdisciplinary setting created a situation of mutual learning among stakeholders from different levels and backgrounds. However, the studies showed that the benefits of such processes of mutual learning are continuously at risk of being diminished by the power play inherent in participatory approaches.
Abstract:
Claystones are considered worldwide as barrier materials for nuclear waste repositories. In the Mont Terri underground research laboratory (URL), a nearly 4-year diffusion and retention (DR) experiment has been performed in Opalinus Clay. It aimed at (1) obtaining data at larger space and time scales than in laboratory experiments, (2) doing so under relevant in situ conditions with respect to pore-water chemistry and mechanical stress, (3) quantifying the anisotropy of in situ diffusion, and (4) exploring possible effects of a borehole-disturbed zone. The experiment included two tracer injection intervals in a borehole perpendicular to bedding, through which traced artificial pore water (APW) was circulated, and a pressure monitoring interval. The APW was spiked with neutral tracers (HTO, HDO, H2O-18), anions (Br, I, SeO4), and cations (Na-22, Ba-133, Sr-85, Cs-137, Co-60, Eu-152, stable Cs, and stable Eu). Most tracers were added at the beginning; some were added at a later stage. The hydraulic pressure in the injection intervals was adjusted according to the value measured in the pressure monitoring interval to ensure transport by diffusion only. Concentration time series in the APW within the borehole intervals were obtained, as well as 2D concentration distributions in the rock at the end of the experiment after overcoring and subsampling, which resulted in ~250 samples and ~1300 analyses. As expected, HTO diffused the furthest into the rock, followed by the anions (Br, I, SeO4) and by the cationic sorbing tracers (Na-22, Ba-133, Cs, Cs-137, Co-60, Eu-152). The diffusion of SeO4 was slower than that of Br or I, approximately in proportion to the ratio of their diffusion coefficients in water. Ba-133 diffused only ~0.1 m into the rock during the ~4 years. Stable Cs, added at a higher concentration than Cs-137, diffused further into the rock than Cs-137, consistent with non-linear sorption behavior. The rock properties (e.g., water contents) were rather homogeneous at the centimeter scale, with no evidence of a borehole-disturbed zone. In situ anisotropy ratios for diffusion, derived for the first time directly from field data, are larger for HTO and Na-22 (~5) than for anions (~3-4 for Br and I). The lower ionic strength of the pore water at this location (~0.22 M) as compared to the locations of earlier experiments in the Mont Terri URL (~0.39 M) had no notable effect on the anion-accessible pore fraction for Cl, Br, and I: the value of 0.55 is within the range of earlier data. Detailed transport simulations involving different codes will be presented in a companion paper.
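For orientation, the relations below are a minimal sketch of the standard description behind such an in situ analysis (not the detailed transport simulations of the companion paper): Fick's second law with bedding-parallel and bedding-normal effective diffusion coefficients, whose ratio is the anisotropy ratio quoted above, and the classic constant-source erfc profile often used to gauge penetration depth. The symbols (porosity φ, bulk density ρ_b, sorption distribution coefficient K_d, with K_d = 0 for non-sorbing tracers such as HTO) are introduced here for illustration.

\[
(\phi + \rho_b K_d)\,\frac{\partial C}{\partial t}
  = D_{e,\parallel}\,\frac{\partial^2 C}{\partial x_\parallel^2}
  + D_{e,\perp}\,\frac{\partial^2 C}{\partial x_\perp^2},
\qquad
\frac{C(x,t)}{C_0} \approx \operatorname{erfc}\!\left(\frac{x}{2\sqrt{D_a t}}\right),
\quad
D_a = \frac{D_e}{\phi + \rho_b K_d}.
\]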
Abstract:
PURPOSE Survivin is a member of the inhibitor-of-apoptosis family. Essential for tumor cell survival and overexpressed in most cancers, survivin is a promising target for anti-cancer immunotherapy. Immunogenicity has been demonstrated in multiple cancers. Nonetheless, few clinical trials have demonstrated survivin-vaccine-induced immune responses. EXPERIMENTAL DESIGN This phase I trial was conducted to test whether the vaccine EMD640744, a cocktail of five HLA class I-binding survivin peptides in Montanide® ISA 51 VG, promotes anti-survivin T-cell responses in patients with solid cancers. The primary objective was to compare the immunologic efficacy of EMD640744 at doses of 30, 100, and 300 μg. Secondary objectives included safety, tolerability, and clinical efficacy. RESULTS In total, 49 patients who received ≥2 EMD640744 injections and had an available baseline sample and ≥1 post-vaccination sample [immunologic-diagnostic (ID) intention-to-treat] were analyzed by ELISpot and peptide/MHC multimer staining, revealing vaccine-activated peptide-specific T-cell responses in 31 patients (63%). This cohort included the per-protocol ID population relevant for the primary objective, i.e., T-cell responses by ELISpot within the 17 weeks following the first vaccination, as well as subjects who discontinued the study before week 17 but showed responses to the treatment. No dose-dependent effects were observed. In the majority of patients (61%), anti-survivin responses were detected only after vaccination, providing evidence for de novo induction. The best overall tumor response was stable disease (28%). EMD640744 was well tolerated; local injection-site reactions constituted the most frequent adverse event. CONCLUSIONS Vaccination with EMD640744 elicited T-cell responses against survivin peptides in the majority of patients, demonstrating the immunologic efficacy of EMD640744.
Abstract:
An Eulerian multifluid model is used to describe the evolution of an electrospray plume and the flow induced in the surrounding gas by the drag of the electrically charged spray droplets in the space between an injection electrode containing the electrospray source and a collector electrode. The spray is driven by the voltage applied between the two electrodes. Numerical computations and order-of-magnitude estimates for a quiescent gas show that the droplets begin to fly back toward the injection electrode at a certain critical value of the flux of droplets in the spray, which depends very much on the electrical conditions at the injection electrode. As the flux is increased toward its critical value, the electric field induced by the charge of the droplets partially balances the field due to the applied voltage in the vicinity of the injection electrode, leading to a spray that rapidly broadens at a distance from its origin of the order of the stopping distance at which the droplets lose their initial momentum and the effect of their inertia becomes negligible. The axial component of the electric field first changes sign in this region, causing the fly back. The flow induced in the gas significantly changes this picture in the conditions of typical experiments. A gas plume is induced by the drag of the droplets whose entrainment makes the radius of the spray away from the injection electrode smaller than in a quiescent gas, and convects the droplets across the region of negative axial electric field that appears around the origin of the spray when the flux of droplets is increased. This suppresses fly back and allows much higher fluxes to be reached than are possible in a quiescent gas. The limit of large droplet-to-gas mass ratio is discussed. Migration of satellite droplets to the shroud of the spray is reproduced by the Eulerian model, but this process is also affected by the motion of the gas. The gas flow preferentially pushes satellite droplets from the shroud to the core of the spray when the effect of the inertia of the droplets becomes negligible, and thus opposes the well-established electrostatic/inertial mechanism of segregation and may end up concentrating satellite droplets in an intermediate radial region of the spray.
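As a rough orientation (an assumption on our part, with Stokes drag, not the paper's full Eulerian multifluid formulation), the single-droplet momentum balance below makes the "stopping distance" mentioned above concrete: a droplet of mass m_d, density ρ_d, charge q, and diameter d injected with velocity v_0 into gas of viscosity μ_g and velocity u_g loses its initial momentum over a length of order ℓ_s.

\[
m_d\,\frac{d\mathbf{v}}{dt} = q\,\mathbf{E} - 3\pi\mu_g d\,(\mathbf{v}-\mathbf{u}_g),
\qquad
\ell_s \simeq v_0\,\tau,
\quad
\tau = \frac{m_d}{3\pi\mu_g d} = \frac{\rho_d d^2}{18\,\mu_g}.
\]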
Abstract:
Multi-party videoconference systems use MCU (Multipoint Control Unit) devices to forward media streams. In this paper we describe a mechanism that allows the mobility of such streams between MCU devices. This mobility is especially useful when a redistribution of streams is needed due to scalability requirements. These requirements are mandatory in Cloud scenarios in order to adapt the number of MCUs and their capabilities to variations in user demand. Our mechanism is based on the TURN (Traversal Using Relays around NAT) standard and adapts the MICE (Mobility with ICE) specification to the requirements of this kind of scenario. We conclude that this mechanism achieves stream mobility in a way that is transparent to client nodes and without interruptions for the users.
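A conceptual sketch of the idea only (not the paper's TURN/MICE implementation): the client keeps sending to one fixed relay address while the upstream MCU behind that address can be swapped at runtime, so the stream moves between MCUs without the client changing anything.

```python
# Conceptual sketch: a relay keeps a fixed client-facing socket while its
# upstream MCU address can be swapped at runtime (one direction shown).
import socket
import threading

class MediaRelay:
    def __init__(self, listen_addr, upstream_addr):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(listen_addr)          # address the client always uses
        self.upstream = upstream_addr        # current MCU
        self.lock = threading.Lock()

    def set_upstream(self, new_addr):
        """Migrate the stream to another MCU; transparent to the client."""
        with self.lock:
            self.upstream = new_addr

    def run(self):
        while True:
            data, _client = self.sock.recvfrom(2048)
            with self.lock:
                target = self.upstream
            self.sock.sendto(data, target)   # forward to whichever MCU is current

# relay = MediaRelay(("0.0.0.0", 50000), ("10.0.0.10", 60000))   # hypothetical addresses
# threading.Thread(target=relay.run, daemon=True).start()
# relay.set_upstream(("10.0.0.20", 60000))   # move the stream to a new MCU
```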
Abstract:
Fully integrated semiconductor master-oscillator power-amplifiers (MOPAs) with a tapered power amplifier are attractive sources for applications requiring high brightness. The geometrical design of the tapered amplifier is crucial to achieving the required power and beam quality. In this work we investigate by numerical simulation the role of the geometrical design in the beam quality and in the maximum achievable power. The simulations were performed with a quasi-3D model which solves the complete steady-state semiconductor and thermal equations combined with a beam propagation method. The results indicate that large devices with wide taper angles produce higher power with better beam quality than smaller-area designs, but at the expense of a higher injection current and a lower conversion efficiency.
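A minimal split-step Fourier beam-propagation sketch (free lateral diffraction only), illustrating the kind of BPM step that such a quasi-3D model couples to the electro-thermal equations; in a full device simulator this diffraction step is typically alternated with gain and index updates along the taper. The geometry and numbers below are assumptions for illustration, not the simulated devices.

```python
# Minimal 1D (lateral) split-step Fourier beam-propagation sketch.
import numpy as np

def bpm_propagate(field, dx, dz, wavelength, n_index, steps):
    """Propagate a 1D lateral field by steps*dz using the paraxial transfer function."""
    k0 = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)
    transfer = np.exp(-1j * kx**2 * dz / (2.0 * k0 * n_index))   # paraxial diffraction
    for _ in range(steps):
        field = np.fft.ifft(np.fft.fft(field) * transfer)
    return field

x = np.linspace(-200e-6, 200e-6, 1024)        # lateral coordinate (m)
beam = np.exp(-(x / 5e-6) ** 2)               # ~5 um wide input beam (hypothetical)
out = bpm_propagate(beam, dx=x[1] - x[0], dz=5e-6,
                    wavelength=975e-9, n_index=3.3, steps=400)   # ~2 mm of propagation
width = x[np.abs(out) > np.abs(out).max() / np.e]
print("output 1/e full width ~", np.ptp(width), "m")
```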
Abstract:
The family of Boosting algorithms represents a type of classification and regression approach that has proven very effective in Computer Vision problems, such as the detection, tracking, and recognition of faces, people, deformable objects, and actions. The first and most popular Boosting algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to more general multi-class, multi-label, cost-sensitive, and related domains. Our interest centers on extending AdaBoost to two problems in the multi-class field, considering it a first step for upcoming generalizations. In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost, and PIBoost algorithms, we consider it a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments is carried out to demonstrate the effectiveness of both methods against other relevant Boosting algorithms in their respective areas. In the experiments we resort to benchmark data sets used in the Machine Learning community, firstly for minimizing classification errors and secondly for minimizing costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data. We conclude the thesis by justifying the future prospects of the presented framework, owing to its applicability and theoretical flexibility.
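For orientation, the sketch below implements SAMME-style multi-class AdaBoost, the baseline that PIBoost and BAdaCost generalize; it is not PIBoost or BAdaCost themselves, whose vectorial margins and cost matrix are the contributions of the thesis.

```python
# SAMME-style multi-class AdaBoost sketch with decision stumps as weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def samme_fit(X, y, n_rounds=50):
    classes = np.unique(y)
    K = classes.size
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        err = np.dot(w, miss) / w.sum()
        if err >= 1.0 - 1.0 / K:                 # no better than chance: stop
            break
        alpha = np.log((1.0 - err) / max(err, 1e-12)) + np.log(K - 1.0)
        w *= np.exp(alpha * miss)                # re-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return classes, ensemble

def samme_predict(classes, ensemble, X):
    scores = np.zeros((len(X), classes.size))
    for alpha, stump in ensemble:
        pred = stump.predict(X)
        for k, c in enumerate(classes):
            scores[:, k] += alpha * (pred == c)
    return classes[np.argmax(scores, axis=1)]

# Usage: classes, model = samme_fit(X_train, y_train); y_hat = samme_predict(classes, model, X_test)
```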
Abstract:
Owing to the growing size of the data in many current information systems, many of the algorithms used to traverse these structures lose performance when performing searches on them. Because these data are in many cases represented by node-vertex structures (graphs), the Graph500 challenge was created in 2009. Previously, other challenges such as the Top500 measured performance on the basis of the computing capacity of the systems, by means of LINPACK tests. In the case of Graph500, the measurement is performed by executing a breadth-first search (BFS) algorithm on graphs. The BFS algorithm is one of the pillars of many other graph algorithms, such as single-source shortest path (SSSP) and betweenness centrality, so an improvement to it would also help improve the algorithms that use it. Problem analysis: the BFS algorithm used on high-performance computing (HPC) systems is usually a distributed-systems version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each distributed processor then computes one part and distributes its results to the other systems. Because the speed gap between the processing in each of these nodes and the data transfers over the interconnection network is very large (with the interconnection network at a disadvantage), quite a few approaches have been taken to reduce the performance lost when performing transfers. Regarding the initial partitioning of the graph, the traditional approach (called a 1D-partitioned graph) consists of assigning to each node a fixed set of vertices that it will process. To reduce data traffic, another partitioning (2D) was proposed, in which the distribution is made on the basis of the edges of the graph instead of the vertices; this partitioning reduces the network traffic from O(NxM) to O(log(N)). Although there have been other approaches to reducing the transfers, such as an initial reordering of the vertices to add locality within the nodes, or dynamic partitionings, the approach proposed in this work consists of applying recent compression techniques from large data systems, such as high-volume databases or internet search engines, to compress the data transferred between nodes.---ABSTRACT---The Breadth First Search (BFS) algorithm is the foundation and building block of many higher graph-based operations such as spanning trees, shortest paths and betweenness centrality. The importance of this algorithm increases every day because it is a key requirement for many data structures that are becoming popular nowadays. These data structures turn out to be internally graph structures. When the BFS algorithm is parallelized and the data are distributed over several processors, some research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. In this work an alternative compression mechanism is presented. It differs from existing methods in that it is aware of characteristics of the data which may benefit the compression. Apart from this, we will perform another test to see how this algorithm (in a distributed scenario) benefits from traditional instruction-based optimizations.
Lastly, we will review current supercomputing techniques and the related work being done in the area.
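For orientation, the level-synchronous BFS kernel measured by Graph500 is sketched below for a single process; in the distributed 1D/2D-partitioned versions discussed above, it is the per-level frontier exchange that the proposed compression would target.

```python
# Level-synchronous BFS: each vertex is assigned the level at which it is
# first reached from the source.
from collections import deque

def bfs_levels(adjacency, source):
    """adjacency: dict vertex -> iterable of neighbours. Returns vertex -> level."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for u in adjacency[v]:
            if u not in level:             # first visit fixes the BFS level
                level[u] = level[v] + 1
                frontier.append(u)
    return level

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_levels(graph, source=0))         # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```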
Abstract:
Prototype Selection (PS) algorithms allow a faster Nearest Neighbor classification by keeping only the most profitable prototypes of the training set. In turn, these schemes typically lower the classification accuracy. In this work a new strategy for multi-label classification tasks is proposed to solve this accuracy drop without the need to use the whole training set. Given a new instance, the PS algorithm is used as a fast recommender system that retrieves the most likely classes. The actual classification is then performed considering only the prototypes from the initial training set that belong to the suggested classes. Results show that this strategy provides a large set of trade-off solutions that fill the gap between PS-based classification efficiency and conventional kNN accuracy. Furthermore, this scheme is not only able to, at best, reach the performance of conventional kNN with barely a third of the distances computed, but it also outperforms the latter in noisy scenarios, proving to be a much more robust approach.
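A minimal sketch of the two-stage scheme described above (assuming a pre-computed reduced prototype set): the prototypes act as a fast recommender of candidate classes, and the final kNN vote uses only the training samples of those classes.

```python
# Two-stage classification: PS prototypes recommend candidate classes,
# then kNN runs on the full training set restricted to those classes.
import numpy as np
from collections import Counter

def classify(x, prototypes, proto_labels, X_train, y_train, k_rec=3, k=5):
    # Stage 1: candidate classes from the k_rec nearest prototypes.
    d_proto = np.linalg.norm(prototypes - x, axis=1)
    candidates = set(proto_labels[np.argsort(d_proto)[:k_rec]])
    # Stage 2: conventional kNN restricted to the suggested classes.
    mask = np.isin(y_train, list(candidates))
    d_full = np.linalg.norm(X_train[mask] - x, axis=1)
    votes = y_train[mask][np.argsort(d_full)[:k]]
    return Counter(votes).most_common(1)[0][0]

# Usage: y_hat = classify(x_new, prototypes, proto_labels, X_train, y_train)
```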
Abstract:
Federal Highway Administration, Washington, D.C.