928 results for high speed counter-current chromatography
Abstract:
Light, in its physical and philosophical sense, has captured the human imagination from the dawn of civilization. The invention of the laser in the 1960s caused a renaissance in the field of optics. This intense, monochromatic, highly directional radiation opened new frontiers in science and technology. The strong oscillating electric field of laser radiation creates a polarisation response in the medium through which it passes that is nonlinear in character, and the medium acts as a new source of optical field with altered properties. It is in this context that the field of optoelectronics, which encompasses the generation, modulation and transmission of optical radiation, has gained tremendous importance. Organic molecules and polymeric systems have emerged as a promising class of materials for optoelectronics because they offer the flexibility, both at the molecular and bulk levels, to optimize the nonlinearity and other properties relevant to device applications. Organic nonlinear optical media, which yield large third-order nonlinearities, have been widely studied for optical devices such as high-speed switches and optical limiters. Transparent polymeric materials have found one of their most promising applications in lasers, where they can be used as active elements when doped with suitable laser dyes. Solid-matrix dye lasers combine the advantages of solid-state lasers with the possibility of tuning the radiation over a broad spectral range. Polymeric matrices impregnated with organic dyes have not yet been widely used because of the low resistance of the polymeric matrices to laser damage, low dye photostability, and low dye stability over longer periods of operation and storage. In this thesis we investigate the nonlinear and radiative properties of certain organic materials and dye-doped polymeric matrices and their possible role in device development.
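The nonlinear polarisation response referred to above is conventionally written as a power-series expansion of the induced polarisation in the applied optical field; the standard textbook form (not a result of this thesis) is

    P(t) = \epsilon_0\left[\chi^{(1)}E(t) + \chi^{(2)}E^{2}(t) + \chi^{(3)}E^{3}(t) + \cdots\right],

where \chi^{(n)} is the n-th order susceptibility; the third-order term \chi^{(3)} is responsible for the intensity-dependent effects, such as optical limiting and all-optical switching, mentioned in the abstract.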
Abstract:
In a sigma-delta analog-to-digital (A/D) converter, the most computationally intensive block is the decimation filter, and its hardware implementation may require millions of transistors. Since these converters are now targeted for portable applications, a hardware-efficient design is an implicit requirement. To this end, this paper presents a computationally efficient polyphase implementation of non-recursive cascaded integrator comb (CIC) decimators for Sigma-Delta Converters (SDCs). The SDCs operate at high oversampling frequencies and hence require large sampling rate conversions. The filtering and rate reduction are performed in several stages to reduce hardware complexity and power dissipation. CIC filters are widely adopted as the first stage of decimation due to their multiplier-free structure. In this research, the performance of the polyphase structure is compared with CICs using recursive and non-recursive algorithms in terms of power, speed and area. The polyphase implementation offers high-speed operation and low power consumption. The polyphase implementation of a 4th order CIC filter with a decimation factor of 64 and an input word length of 4 bits offers about 70% and 37% power saving compared to the corresponding recursive and non-recursive implementations respectively. The same polyphase CIC filter can operate about 7 times faster than the recursive and about 3.7 times faster than the non-recursive CIC filters. As most sigma-delta ADC applications require decimation filters with linear phase characteristics, symmetric Finite Impulse Response (FIR) filters are widely used for implementation. But the number of FIR filter coefficients will be quite large for implementing a narrow-band decimation filter. Implementing the decimation filter in several stages reduces the total number of filter coefficients, and hence reduces the hardware complexity and power consumption [2]. The first stage of the decimation filter can be implemented very efficiently using a cascade of integrators and comb filters which do not require multiplication or coefficient storage. The remaining filtering is performed either in a single stage or in two stages with more complex FIR or infinite impulse response (IIR) filters according to the requirements. The amount of passband aliasing or imaging error can be brought within prescribed bounds by increasing the number of stages in the CIC filter. The width of the passband and the frequency characteristics outside the passband are severely limited, so CIC filters are used to make the transition between high and low sampling rates. Conventional filters operating at the low sampling rate are used to attain the required transition bandwidth and stopband attenuation. Several papers are available in the literature that deal with different implementations of decimation filter architectures for sigma-delta ADCs. Hogenauer has described the design procedures for decimation and
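As a rough illustration of the non-recursive CIC structure discussed above (a sketch only; the order, the number of stages and the test signal are arbitrary choices, not the paper's design), an N-th order CIC decimator for a power-of-two rate change can be realised as a cascade of short (1 + z^-1)^N filters, each followed by a factor-of-2 downsampler:

    import numpy as np

    def cic_decimate_nonrecursive(x, order=4, stages=6):
        # Non-recursive CIC for a rate change of R = 2**stages: each stage
        # filters with (1 + z^-1)**order and then decimates by 2, which is
        # algebraically equivalent to the classic integrator-comb CIC of the
        # same order and rate change.
        h = np.array([1.0, 1.0])
        for _ in range(order - 1):
            h = np.convolve(h, [1.0, 1.0])      # build (1 + z^-1)**order
        y = np.asarray(x, dtype=float)
        for _ in range(stages):
            y = np.convolve(y, h)[:len(y)]      # filter at the current rate
            y = y[::2]                          # halve the sampling rate
        return y

    # Example: decimate a noisy 1-bit-style input by 64 = 2**6
    x = np.sign(np.sin(2 * np.pi * 0.001 * np.arange(4096))
                + 0.3 * np.random.randn(4096))
    y = cic_decimate_nonrecursive(x, order=4, stages=6)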
Abstract:
This research compares Residue Number System (RNS) based Finite Impulse Response (FIR) digital filters with traditional FIR filters. It is motivated by the importance of an efficient filter implementation for digital signal processing. The comparison is done in terms of speed and area requirements for various filter specifications. RNS-based FIR filters operate more than three times faster and consume only about 60% of the area of a traditional filter when the number of filter taps is more than 32. The area of the RNS filter increases at a lower rate than that of the traditional filter, resulting in lower power consumption. RNS is a non-weighted number system without carry propagation between different residue digits. This enables simultaneous parallel processing of all digits, resulting in high-speed addition and multiplication in the RNS domain.
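To illustrate the carry-free arithmetic that gives RNS filters their speed advantage, the sketch below (with an arbitrarily chosen coprime moduli set, not the one used in this research) runs a FIR multiply-accumulate independently in each residue channel and recovers the integer result with the Chinese Remainder Theorem:

    from math import prod

    MODULI = (255, 256, 257)          # example pairwise-coprime moduli
    M = prod(MODULI)

    def to_rns(x):
        return tuple(x % m for m in MODULI)

    def from_rns(residues):
        # Chinese Remainder Theorem reconstruction of the integer result
        x = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    def fir_rns(samples, taps):
        # Every residue channel is processed independently: no carries
        # propagate between channels, so the channels can run in parallel.
        xs = [to_rns(s) for s in samples]
        hs = [to_rns(t) for t in taps]
        out = []
        for n in range(len(samples) - len(taps) + 1):
            acc = [0] * len(MODULI)
            for k in range(len(taps)):
                acc = [(a + hs[k][i] * xs[n + k][i]) % m
                       for i, (a, m) in enumerate(zip(acc, MODULI))]
            out.append(from_rns(acc))
        return out

    # Check against a plain integer FIR on small non-negative data
    samples, taps = [3, 1, 4, 1, 5, 9, 2, 6], [2, 0, 1]
    assert fir_rns(samples, taps) == [
        sum(h * samples[n + k] for k, h in enumerate(taps))
        for n in range(len(samples) - len(taps) + 1)]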
Abstract:
Recent trends envisage multi-standard architectures as a promising solution for future wireless transceivers to attain higher system capacities and data rates. The computationally intensive decimation filter plays an important role in channel selection for multi-mode systems. An efficient reconfigurable implementation is key to achieving low power consumption. To this end, this paper presents a dual-mode Residue Number System (RNS) based decimation filter which can be programmed for the WCDMA and 802.16e standards. Decimation is done using multistage, multirate finite impulse response (FIR) filters. These FIR filters, implemented in the RNS domain, offer high speed because of their carry-free operation on smaller residues in parallel channels. The FIR filters are also programmable to a selected standard by reconfiguring the hardware architecture. The total area is increased by only 24% to include WiMAX compared to a single-mode WCDMA transceiver. In each mode, the unused parts of the overall architecture are powered down and bypassed to save power. The performance of the proposed decimation filter in terms of critical path delay and area is tabulated.
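To make the multistage FIR decimation concrete, the sketch below decimates in two stages instead of one; the rate-change factors and filter lengths are purely illustrative and are not the WCDMA/802.16e channel-selection parameters of the paper:

    import numpy as np
    from scipy.signal import firwin, lfilter

    def decimate_stage(x, factor, numtaps):
        # One decimation stage: lowpass-filter, then keep every factor-th sample.
        h = firwin(numtaps, 1.0 / factor)      # cutoff relative to Nyquist
        return lfilter(h, 1.0, x)[::factor]

    # Two-stage decimation by 16 = 8 x 2: the wideband first stage gets a short
    # filter and only the final narrowband stage needs a long one, so the total
    # number of taps is far smaller than for a single-stage decimator by 16.
    x = np.random.randn(1 << 14)
    y = decimate_stage(x, factor=8, numtaps=31)
    y = decimate_stage(y, factor=2, numtaps=63)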
Abstract:
Recent trends envisage multi-standard architectures as a promising solution for future wireless transceivers. The computationally intensive decimation filter plays an important role in channel selection for multi-mode systems. An efficient reconfigurable implementation is key to achieving low power consumption. To this end, this paper presents a dual-mode Residue Number System (RNS) based decimation filter which can be programmed for the WCDMA and 802.11a standards. Decimation is done using multistage, multirate finite impulse response (FIR) filters. These FIR filters, implemented in the RNS domain, offer high speed because of their carry-free operation on smaller residues in parallel channels. The FIR filters are also programmable to a selected standard by reconfiguring the hardware architecture. The total area is increased by only 33% to include the WLAN (802.11a) mode compared to a single-mode WCDMA transceiver. In each mode, the unused parts of the overall architecture are powered down and bypassed to save power. The performance of the proposed decimation filter in terms of critical path delay and area is tabulated.
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches for power optimization, with applications in low-power CMOS, nanotechnology and quantum computing. This research proposes quick addition of decimals (QAD), suitable for multi-digit BCD addition, using reversible conservative logic. The design makes use of reversible fault-tolerant Fredkin gates only. The implementation strategy is to reduce the number of levels of delay, thereby increasing the speed, which is the most important factor for high-speed circuits.
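For reference, the Fredkin gate used throughout this design is a controlled swap: it is reversible (a bijection on three bits) and conservative (it preserves the number of 1s). A minimal behavioural check of these two properties, independent of the QAD circuit itself:

    from itertools import product

    def fredkin(c, a, b):
        # Controlled swap: if the control c is 1 the two targets are exchanged.
        return (c, b, a) if c else (c, a, b)

    inputs = list(product((0, 1), repeat=3))
    outputs = [fredkin(*bits) for bits in inputs]
    assert len(set(outputs)) == 8                                    # bijective, hence reversible
    assert all(sum(o) == sum(i) for i, o in zip(inputs, outputs))    # conservative: 1-count preserved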
Abstract:
This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry-select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partial parallel processing of all digits, which allows high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates. The speed-up factor of the hybrid adder increases above 10 when the number of decimal digits exceeds 25 for the reversible logic implementation. Such high-speed decimal adders find applications in real-time processors and internet-based applications. The implementations use only reversible conservative Fredkin gates, which makes them suitable for VLSI circuits.
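The parallelism claimed above comes from the carry-select principle: each decimal digit position computes its sum for both possible incoming carries, and the real carry merely selects one of the precomputed results. A behavioural sketch of that idea (plain Python, not the reversible gate-level design of the paper):

    def bcd_digit_add(a, b, cin):
        # Add two BCD digits plus a carry, returning (sum_digit, carry_out).
        s = a + b + cin
        return (s - 10, 1) if s > 9 else (s, 0)

    def carry_select_bcd_add(x_digits, y_digits):
        # Multi-digit BCD addition, least-significant digit first. Both carry
        # assumptions are evaluated per digit (in hardware this happens in
        # parallel); the incoming carry only selects the precomputed result.
        result, carry = [], 0
        for a, b in zip(x_digits, y_digits):
            s0, c0 = bcd_digit_add(a, b, 0)     # assume carry-in = 0
            s1, c1 = bcd_digit_add(a, b, 1)     # assume carry-in = 1
            s, carry = (s1, c1) if carry else (s0, c0)
            result.append(s)
        return result, carry

    # 385 + 276 = 661, digits stored least-significant first
    assert carry_select_bcd_add([5, 8, 3], [6, 7, 2]) == ([1, 6, 6], 0)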
Abstract:
Detection of objects in video is a highly demanding area of research. Background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground ROI from the background. Codebooks are used to store compressed information, demanding less memory and allowing high-speed processing. The hybrid method, which uses block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing capability of block-based background subtraction and the high precision rate of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D-DCT and FFT. The experimental analysis based on statistical measurements yields precision, recall, similarity and F-measure of the hybrid system of 88.74%, 91.09%, 81.66% and 89.90% respectively, demonstrating the efficiency of the proposed system.
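A minimal per-pixel codebook matcher in the spirit of the pixel stage described above; the colour-distance threshold, the adaptation rate and the running-mean codeword model are simplified, hypothetical choices rather than the parameters of this system:

    import numpy as np

    class PixelCodebook:
        # Toy per-pixel codebook: each codeword is a running-mean colour vector.
        def __init__(self, colour_thresh=20.0, alpha=0.05):
            self.words = []
            self.thresh = colour_thresh
            self.alpha = alpha

        def classify_and_update(self, pixel):
            pixel = np.asarray(pixel, dtype=float)
            for w in self.words:
                if np.linalg.norm(pixel - w) < self.thresh:
                    w += self.alpha * (pixel - w)    # adapt the matched codeword
                    return 0                          # background
            self.words.append(pixel.copy())           # unseen colour: new codeword
            return 1                                  # foreground

    cb = PixelCodebook()
    for _ in range(50):
        cb.classify_and_update([100, 100, 100])        # learn the background colour
    assert cb.classify_and_update([101, 99, 100]) == 0   # matches the background
    assert cb.classify_and_update([200, 30, 30]) == 1    # flagged as foreground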
Abstract:
The Towed Array electronics is a multi-channel, simultaneous, real-time, high-speed data acquisition system. Since its assembly is highly manpower-intensive, the cost of the arrays is prohibitive, and therefore any attempt to reduce the manufacturing, assembly, testing and maintenance costs is a welcome proposition. The Network Based Towed Array is an innovative concept, and its implementation has remarkably simplified fabrication, assembly and testing and revolutionised the towed array scenario. The focus of this paper is to give a good insight into the reliability aspects of the Network Based Towed Array. A case study comparing the conventional array and the network-based towed array is also presented.
Abstract:
Both the resource problem and the threatening extent of climate change make a switch to other energy sources appear unavoidable in the long term and urgently required in the medium term. Regardless of the level at which energy demand can be stabilized, it remains to be clarified which options will be available from a technical and economic point of view to cover our energy demand in the future. One promising option is the use of renewable energies in all their diversity. The thesis "Szenarien zur zukünftigen Stromversorgung, kostenoptimierte Variationen zur Versorgung Europas und seiner Nachbarn mit Strom aus erneuerbaren Energien" (scenarios for a future electricity supply: cost-optimized variations for supplying Europe and its neighbours with electricity from renewable energies) concentrates on electricity supply, one aspect of energy supply that is becoming increasingly important and can be seen as a key to a sustainable energy supply. Electricity supply is today responsible for about half of the world's anthropogenic CO2 emissions. In this work, possibilities for a largely CO2-neutral electricity supply for Europe and its immediate neighbourhood were examined using different scenarios; the scenario region comprises about 1.1 billion inhabitants and an electricity consumption of almost 4000 TWh/a. It was examined how the electricity supply should be structured so that it can be realized as cost-effectively as possible. This question was investigated, for example, for scenarios in which only technologies available on the market today were considered. The influence of some new technologies still under development on the optimal design of the electricity supply was also examined using several examples. The design of the future electricity supply should, as far as possible, obey objective criteria that also ensure the comparability of different supply approaches. For this purpose, an optimization approach was chosen that largely avoids subjective decisions both in the configuration and in the simulated operation of the electricity supply system. The aim of the optimization was to determine, for a supply task defined as realistically as possible, the ideal portfolio of power plants and transmission lines that guarantees a cost-optimal electricity supply. The generation options considered include the use of renewable energies through hydroelectric power plants, wind energy converters, downdraft power plants, biomass power plants, and solar and geothermal power plants. Depending on the chosen boundary conditions, different scenarios resulted. The goal of the work was to create, by means of different scenarios, a broad basis for decision-making for future political course-setting. The scenarios show options for a future design of the electricity supply, make clear the effects of different (including political) framework conditions, and thus provide the required basis for decisions. As a basis for creating the scenarios, the various potentials of renewable energies had to be determined at high temporal and spatial resolution, which made it possible for the first time to address the questions of a large-scale renewable electricity supply on a reliable data basis and without unsecured assumptions.
The characteristics of the various energy conversion and transport systems also had to be studied; they are discussed in detail in this work, as are their costs and the various potentials. A conservative base scenario serves as the starting point and reference. This is a scenario for an electricity supply based exclusively on renewable energies, which in turn relies exclusively on technologies already developed today and assumes today's costs for all components. This base scenario is accordingly to be understood as a kind of conservative worst-case estimate of our future options for a renewable electricity supply. As a result of the optimization, the electricity supply in the base scenario is based largely on electricity production from wind power. Biomass and already existing hydropower take over most of the backup tasks within the supply region, which is interconnected by high-capacity HVDC (high-voltage direct current) transmission. At 4.65 €ct/kWh, the electricity generation costs are very close to today's usual level; they are lower than current prices on the electricity exchange. In all scenarios, except for relatively expensive, restrictively "decentralized" ones that exclude large-scale cross-border electricity transport, electricity transport plays an important role. It is used to realize balancing effects in the resource-dependent electricity production from renewable sources, to exploit good, low-cost potentials, and to make reservoir hydropower and decentrally used biomass, with their storage capability, available for large-scale backup tasks. Electricity transport thus proves to be one of the keys to a cost-effective electricity supply. This in turn can be interpreted as a recommendation for political course-setting, which should therefore deliberately rely on international cooperation in the use of renewable energies and in particular include large-scale electricity transport. The scenarios provide detailed and reliable foundations for important political and technological decisions about the future. They show that, given international cooperation, a purely renewable electricity supply is possible even under conservative assumptions and could be realized economically without problems, and they place the need for action in the realm of politics. An essential task of politics would be to organize this international cooperation and to develop instruments for restructuring the electricity supply. It can be assumed that this would not only be a sensible path towards a CO2-neutral electricity supply, but would also open up excellent development prospects for the poorer neighbouring states of the EU and Europe.
Abstract:
Scanning Probe Microscopy (SPM) has become of fundamental importance for research in the area of micro- and nanotechnology. The continuous progress in these fields requires ultra-sensitive measurements at high speed. The imaging speed of conventional tapping-mode SPM is limited by the actuation time constant of the piezotube feedback loop that keeps the tapping amplitude constant. In order to overcome this limit, a deflection sensor and an actuator have to be integrated into the cantilever. In this work, the feasibility of a piezoresistive cantilever with an embedded actuator has been demonstrated. Piezoresistive detection provides a good alternative to the usual optical laser-beam deflection technique. Within this thesis, the piezoresistive effect in bulk silicon (the 3D case) has been investigated and modelled for both n- and p-type silicon. Moving towards ultra-sensitive measurements, it is necessary to realize ultra-thin piezoresistors that are well localized at the surface, where the stress magnitude is maximal. New physical effects such as quantum confinement, which arise due to the scaling of the piezoresistor thickness, were taken into account in order to model the piezoresistive effect and its modification in the case of an ultra-thin piezoresistor (the 2D case). The two-dimensional character of the electron gas in n-type piezoresistors leads to a decrease of the piezoresistive coefficients with increasing degree of electron localisation, whereas for p-type piezoresistors the predicted values of the piezoresistive coefficients are higher when the holes are localised. In addition to the integration of the piezoresistive sensor, an actuator integrated into the cantilever is considered fundamental for the realisation of fast SPM imaging. Actuation of the beam is achieved thermally, relying on the difference in the coefficients of thermal expansion between aluminium and silicon. The aluminium layer also forms the heating micro-resistor, which can accept heating pulses with frequencies up to one megahertz. This directly driven, oscillating thermal bimorph actuator was also studied with respect to its actuation efficiency. Higher eigenmodes of the cantilever are used in order to increase the operating frequencies. As a result, the scanning speed has been increased owing to the reduced actuation time constant. The fundamental limits to force sensitivity imposed by the piezoresistive deflection-sensing technique are discussed. For imaging in ambient conditions the force sensitivity is limited by the thermo-mechanical cantilever noise; additional noise sources connected with the piezoresistive detection are negligible.
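For orientation, the bulk (3D) piezoresistive effect modelled in the first part of the work is usually expressed through the relative resistance change under longitudinal and transverse stress; in the standard textbook notation (not the thesis' own derivation):

    \frac{\Delta R}{R} = \pi_l\,\sigma_l + \pi_t\,\sigma_t ,

where \pi_l and \pi_t are the longitudinal and transverse piezoresistive coefficients and \sigma_l, \sigma_t the corresponding stress components. The thesis examines how these coefficients change when the piezoresistor becomes thin enough for 2D quantum confinement to matter.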
Abstract:
This thesis on Thermal Flow Drilling and Flowtap in thin sheet metal and pipes of copper and copper alloys had the following objectives: to understand the behaviour of copper and copper-alloy sheet metal during the Thermal Flow Drilling process with standard tools, to find the spindle speed and feed data that give the best bushing quality, to find the best speed for the form tapping process, and to find the best bushing length in pure copper pipes for a solar water heat exchanger. Thermal Flow Drilling (TFD) and Form Tapping (FT) are among the research lines of the Institute of Production and Logistics (IPL) at the University of Kassel. In December 1995, a working meeting of members of the IPL, Santa Catarina University (Brazil), Buenos Aires University (Argentina) and Tarapacá University (UTA, Chile) with the CEO of Flowdrill B.V. was held in Brazil. The group decided that the Manufacturing Laboratory (ML) of UTA would work with pure copper and brass-alloy sheet metal and pure copper pipes in order to develop a solar water heat exchanger. Flowdrill B.V. sent tools to Tarapacá University in 1996. In 1999, the IPL and the ML carried out the ALECHILE research project, promoted by the DAAD and CONICyT, on copper sheet metal, copper pipes and alpha-brass sheet metal. The standard tool is a lobed, conical tungsten-carbide tool. When rotated at high speed and pressed with high axial force into sheet metal or thin-walled tube, the generated heat softens the metal and allows the drill to feed forward, producing a hole and simultaneously forming a bushing from the displaced material. Many tool variants exist on the market, but in this thesis short and long standard TFD tools are used. To reach the objectives, four quality classes of the frayed bushing end were taken as references, the best being quality class I. Pure copper and alpha-brass sheet metals of different thicknesses were used, with different TFD drill diameters for four thread sizes from M5 to M10. As in earlier studies on aluminium sheet metal, a pre-drilling process with HSS drills of about 30% of the TFD diameter (1.5 – 3.0 mm) was used. In a subsequent step, only 2.0 mm thick sheet metal and a 9.2 mm TFD diameter for M10 threads were used. For commercial pure copper pipes, ¾" pipe and a 12.8 mm (3/8") TFD drill were used to produce holes for 3/8" pipes, with different standard HSS drills for the pre-drilling process. The chemical composition of the sheet metal was taken as a reference for the material behaviour: the Chilean pure copper contains 99.35% Cu and 0.163% Zn, and the Chilean alpha-brass contains 75.6% Cu and 24.0% Zn. Two German alpha-brass alloys were also used: No. 1 contains 61.6% Cu, 36.03% Zn and 2.2% Pb, and No. 2 contains 63.1% Cu, 36.7% Zn and no Pb. The equipment used comprised a HAAS CNC milling centre, a Kistler dynamometer, a Pentium II PC, a data-acquisition card, TESTPOINT and XAct software, a 3D coordinate measuring machine, a micro-hardness tester, a universal testing machine and a metallographic microscope. During the tests, the feed force and torque curves that characterize the material behaviour during the TFD process were recorded; in general, three phases can be distinguished. It was possible to obtain the best machining data for the different thicknesses of the Chilean copper and alpha-brass sheets and for quality class I bushings. In the case of alpha-brass, the chemical composition and the temperature of the TFD process have a strong influence.
The temperature reaches about 400 °C during the TFD process; when the alpha-brass contains only zinc as alloying element, the bushing quality is class I, but when the alloy contains some percent of lead, which has a much lower melting point, it is not possible to obtain a bushing, because the lead gasifies and the metallographic structure breaks up. During the TFD process, recrystallization occurs around the copper and alpha-brass bushing, which increases the hardness in these zones. When the threads are then produced by form tapping with Flowtap tools, this increased hardness gives the thread a high load limit, verified in a special test fixture developed for this purpose. To eliminate the pre-drilling step with standard HSS drills, a compound tool was developed; with this new tool it was also possible to obtain the best machining data for quality class I bushings. For the copper pipes, bushings made without pre-drilling reached only quality class IV, whereas with pre-drilling quality class I bushings were obtained. Using different HSS drill diameters, bushings of different lengths were obtained, which were then soldered with four types of solder to join 3/8" pipes to a larger ¾" pipe. These joints were tested in tension: all the 3/8" pipes broke, while the soldered zones showed no failure. Finally, several solar water heat exchangers were built and tested. In conclusion, this thesis shows that Thermal Flow Drilling in thin sheets of copper and copper alloys needs a pre-drilling process to produce frayed-end bushings of quality class I, as with thin aluminium sheets. The compound tool developed can produce quality class I bushings and eliminates the pre-drilling step. The recrystallization of the bushing, a product of the friction between tool and material, increases the hardness, which is advantageous for form tapping. The methodology developed for commercial copper pipes permits the construction of solar water heat exchangers.
Abstract:
The Transit network provides high-speed, low-latency, fault-tolerant interconnect for high-performance, multiprocessor computers. The basic connection scheme for Transit uses bidelta style, multistage networks to support up to 256 processors. Scaling to larger machines by simply extending the bidelta network topology will result in a uniform degradation of network latency between all processors. By employing a fat-tree network structure in larger systems, the network provides locality and universality properties which can help minimize the impact of scaling on network latency. This report details the topology and construction issues associated with integrating Transit routing technology into fat-tree interconnect topologies.
Abstract:
Fine-grained parallel machines have the potential for very high speed computation. To program massively-concurrent MIMD machines, programmers need tools for managing complexity. These tools should not restrict program concurrency. Concurrent Aggregates (CA) provides multiple-access data abstraction tools, Aggregates, which can be used to implement abstractions with virtually unlimited potential for concurrency. Such tools allow programmers to modularize programs without reducing concurrency. I describe the design, motivation, implementation and evaluation of Concurrent Aggregates. CA has been used to construct a number of application programs. Multi-access data abstractions are found to be useful in constructing highly concurrent programs.
Abstract:
We present an experimental study on the behavior of bubbles captured in a Taylor vortex. The gap between a rotating inner cylinder and a stationary outer cylinder is filled with a Newtonian mineral oil. Beyond a critical rotation speed (ω_c), Taylor vortices appear in this system. Small air bubbles are introduced into the gap through a needle connected to a syringe pump. These are then captured in the cores of the vortices (core bubbles) and in the outflow regions along the inner cylinder (wall bubbles). The flow field is measured with a two-dimensional particle imaging velocimetry (PIV) system. The motion of the bubbles is monitored using a high-speed video camera. It has been found that, if the core bubbles are all of the same size, a bubble ring forms at the center of the vortex such that the bubbles are azimuthally uniformly distributed. There is a saturation number (N_s) of bubbles in the ring, such that the addition of one more bubble leads eventually to a coalescence and a subsequent complicated evolution. N_s increases with increasing rotation speed and decreasing bubble size. For bubbles of non-uniform size, small bubbles and large bubbles in nearly the same orbit can be observed to cross due to their different circulating speeds. The wall bubbles, however, do not become uniformly distributed, but instead form short bubble chains which might eventually evolve into large bubbles. The motion of droplets and particles in a Taylor vortex was also investigated. As with bubbles, droplets and particles align into a ring structure at low rotation speeds, but the saturation number is much smaller. Moreover, at high rotation speeds, droplets and particles exhibit a characteristic periodic oscillation in the axial, radial and tangential directions due to their inertia. In addition, experiments with non-spherical particles show that they behave rather similarly. This study provides a better understanding of particulate behavior in vortex flow structures.
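The critical rotation speed ω_c above marks the onset of the classical Taylor–Couette instability; in the narrow-gap limit with a stationary outer cylinder it is commonly expressed through a critical Taylor number (a standard textbook criterion, not a value measured in this study):

    Ta = \frac{\omega^{2}\, r_i\, d^{3}}{\nu^{2}} \gtrsim Ta_c \approx 1.7\times 10^{3},

where r_i is the inner-cylinder radius, d the gap width and ν the kinematic viscosity of the oil.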