991 results for mixed signal coding


Relevance: 80.00%

Abstract:

Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned a feasibility study for an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog front end samples multiple channels of the PMT simultaneously at different gain factors, in order to improve the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital control core. After validation of the whole front-end architecture, this logic would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
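The multi-gain idea described above can be sketched as follows: each PMT pulse is digitized on several channels with different gain factors, and the highest-gain channel that is not saturated is kept, extending the usable dynamic range. This is only a toy illustration; the gain values and ADC full scale below are hypothetical placeholders, not the actual LIRA06 parameters.

```python
# Toy sketch of multi-gain acquisition: read one pulse on several
# channels with different gains, keep the highest-gain channel that
# is not saturated. Gains and ADC full scale are assumed values.
ADC_FULL_SCALE = 4095          # 12-bit ADC, assumed
GAINS = [16.0, 4.0, 1.0]       # high, medium, low gain (assumed)

def acquire(pulse_amplitude):
    """Return (gain, adc_code) for the best non-saturated channel."""
    for gain in GAINS:  # highest gain first
        code = min(round(pulse_amplitude * gain), ADC_FULL_SCALE)
        if code < ADC_FULL_SCALE:      # channel not saturated
            return gain, code
    return GAINS[-1], ADC_FULL_SCALE   # even the lowest gain saturated

# A small pulse is read on the high-gain channel, a large one on the
# low-gain channel.
print(acquire(10.0))    # -> (16.0, 160)
print(acquire(2000.0))  # -> (1.0, 2000)
```

Small signals thus keep fine amplitude resolution while large signals avoid clipping, which is the linearity benefit the abstract refers to.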
PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and then digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. It integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
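The pitch/sqrt(12) figure quoted above is the standard deviation of a uniform distribution over one pixel pitch, i.e. the single-point resolution expected from binary (hit/no-hit) readout. A quick check, using a hypothetical 50 um pitch purely as an illustration (the actual APSEL-4D pitch is not stated in the abstract):

```python
import math

def binary_readout_resolution(pitch_um):
    """Expected single-point resolution of a binary pixel readout:
    the standard deviation of a uniform distribution of width pitch_um."""
    return pitch_um / math.sqrt(12)

# Hypothetical 50 um pitch, used only to illustrate the formula.
print(round(binary_readout_resolution(50.0), 2))  # -> 14.43 (um)
```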

Relevance: 80.00%

Abstract:

We propose an original method to geoposition an audio/video stream with multiple emitters that are at the same time receivers of the mixed signal. The method is suitable for those cases where a list of positions within a designated area is encoded with a degree of precision adjusted to the visualization capabilities, and it is also easily extensible to support new requirements. This method extends a previously proposed protocol without incurring any performance penalty.

Relevance: 80.00%

Abstract:

In this paper, we propose an original method to geoposition an audio/video stream with multiple emitters that are at the same time receivers of the mixed signal. The resulting method is suitable when a list of positions within a known area is encoded with precision tailored to the visualization capabilities of the target device. Nevertheless, it is easily adaptable to new precision requirements, as well as to parameterized data precision. This method extends a previously proposed protocol without incurring any performance penalty.
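Encoding positions within a known area at a precision matched to the display can be sketched as quantizing coordinates onto a fixed-bit grid over the area's bounding box. This is only an illustration of the general idea; the bit width and bounding box below are hypothetical, and the abstract does not specify the paper's actual encoding.

```python
def encode_position(x, y, bbox, bits):
    """Quantize (x, y) inside bounding box bbox = (x0, y0, x1, y1)
    to a single integer using `bits` bits per axis."""
    x0, y0, x1, y1 = bbox
    levels = (1 << bits) - 1
    qx = round((x - x0) / (x1 - x0) * levels)
    qy = round((y - y0) / (y1 - y0) * levels)
    return (qx << bits) | qy

def decode_position(code, bbox, bits):
    """Recover approximate coordinates from an encoded position."""
    x0, y0, x1, y1 = bbox
    levels = (1 << bits) - 1
    qx, qy = code >> bits, code & ((1 << bits) - 1)
    return (x0 + qx / levels * (x1 - x0),
            y0 + qy / levels * (y1 - y0))

# 8 bits per axis over a 1 km x 1 km area: ~4 m grid, 2 bytes per position.
bbox = (0.0, 0.0, 1000.0, 1000.0)
code = encode_position(123.4, 567.8, bbox, bits=8)
print(decode_position(code, bbox, bits=8))
```

Raising `bits` shrinks the grid cell, which is how the precision can be tailored to the target device's visualization capabilities.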

Relevance: 80.00%

Abstract:

Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery life. The digital domain is the best choice for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes towards sub-micrometer integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analog domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analog functions have been moved to the digital domain. This means that analog-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analog and digital worlds and, consequently, their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block used as the interface in high-resolution, low-power mixed-signal circuits. Modeling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this approach is extremely time-consuming because of the oversampling nature of this kind of converter. For this reason, high-level behavioral models of the modulator are essential: they allow the designer to run fast simulations that identify the specifications the converter must meet to achieve the required performance.
The goal of this thesis is the behavioral modeling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data demonstrate the precision and accuracy of the proposed behavioral model.
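A behavioral model of the kind described can be sketched in a few lines: a first-order Sigma-Delta modulator with a leaky integrator (modeling finite DC gain) and additive thermal noise at the input. This is a minimal generic sketch, not the thesis model; the leak factor and noise level are illustrative placeholders.

```python
import random

def sigma_delta(samples, leak=0.999, noise_rms=1e-4, seed=1):
    """First-order Sigma-Delta modulator behavioral model.
    `leak` < 1 models the finite DC gain of a real integrator;
    `noise_rms` models input-referred thermal noise.
    Returns the output bitstream as +1/-1 values."""
    rng = random.Random(seed)
    integ = 0.0
    y = 1.0            # previous 1-bit DAC output
    bits = []
    for x in samples:
        x_noisy = x + rng.gauss(0.0, noise_rms)
        integ = leak * integ + (x_noisy - y)   # leaky integration of the error
        y = 1.0 if integ >= 0.0 else -1.0      # 1-bit quantizer
        bits.append(y)
    return bits

# A DC input of 0.25 gives a bitstream whose mean approaches 0.25;
# this is the kind of fast check a behavioral model enables.
stream = sigma_delta([0.25] * 20000)
print(sum(stream) / len(stream))  # close to 0.25
```

Running such a model for millions of oversampled clock cycles takes seconds, versus hours at transistor level, which is exactly the trade-off the abstract describes.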

Relevance: 80.00%

Abstract:

The fatigue behaviour of the cold-chamber pressure-die-cast alloys Mazak3, ZA8, ZA27, M3K, ZA8K, ZA27K, K1, K2 and K3 was investigated at a temperature of 20°C. The alloys M3K, ZA8K and ZA27K were also examined at 50 and 100°C. The ratio between fatigue strength and tensile strength was established at 20°C at 10^7 cycles. Fatigue life prediction for the alloys M3K, ZA8K and ZA27K was formulated at 20, 50 and 100°C, and the prediction formulae were found to be reasonably accurate. All of the experimental alloys were heterogeneous and contained large but varying amounts of pores. These pores were a major contributor to, and dominated, the alloys' fatigue failure; their effect on tensile failure, however, was negligible. ZA27K possessed the highest tensile strength but the lowest fatigue strength. The relationship between the fracture topography and the microstructure was also determined by the use of a mixed signal of secondary electrons and back-scattered electrons on the SEM. The tensile strength of the experimental alloys was directly proportional to their aluminium content. The effect of copper content was also investigated within the alloys K1, K2, ZA8K and K3, which contained 0%, 0.5%, 1.0% and 2.0% copper respectively. It was determined that the fatigue and tensile strengths improved with higher copper contents. After ageing the alloys Mazak3, ZA8 and ZA27 at ambient temperature for 5 years, copper was also found to influence and maintain the metastable Zn-Al (αm) phase: the copper-free Mazak3 lost this metastable phase upon ageing, the 1.0% copper ZA8 alloy lost almost 50% of its metastable phase, and the 2.0% copper ZA27 lost merely 10% of its metastable phase. The cph zinc phase contains a limited number of slip systems, so twinning deformation was unavoidable in both fatigue and tensile testing.

Relevance: 80.00%

Abstract:

This paper presents an up-to-date review of digital watermarking (WM) from a VLSI designer's point of view. The reader is introduced to the basic principles and terms in the field of image watermarking. The paper gives a brief survey of WM theory, laying out common classification criteria and discussing important design considerations and trade-offs. Elementary WM properties such as robustness and computational complexity, and their influence on image quality, are discussed. Common attacks and testing benchmarks are also briefly mentioned. It is shown that WM design must take the intended application into account. The difference between software and hardware implementations is explained through the introduction of a general scheme of a WM system and two examples from previous works. A versatile methodology to aid in a reliable and modular design process is suggested. With regard to mixed-signal VLSI design and testing, the proposed methodology allows the efficient development of a CMOS image sensor with WM capabilities.
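As a concrete (and deliberately simple) illustration of image watermarking, the sketch below embeds a binary watermark in the least-significant bits of 8-bit pixel values. LSB embedding is a textbook example chosen here for brevity, not a scheme from the reviewed works, and it is fragile against most of the attacks the survey mentions.

```python
def embed_lsb(pixels, watermark_bits):
    """Embed watermark bits into the LSBs of an 8-bit pixel list."""
    out = list(pixels)
    for i, bit in enumerate(watermark_bits):
        out[i] = (out[i] & ~1) | bit    # overwrite only the LSB
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits watermark bits from pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 13, 77, 54, 129, 6, 250, 90]
wm = [1, 0, 1, 1]
marked = embed_lsb(pixels, wm)
print(extract_lsb(marked, 4))  # -> [1, 0, 1, 1]
```

Each pixel changes by at most one grey level, illustrating the robustness/image-quality trade-off discussed in the review: the distortion is invisible, but so is the watermark's resistance to attacks.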

Relevance: 80.00%

Abstract:

The ever-increasing demand for information transmission capacity has been met with technological advances in telecommunication systems, such as the implementation of coherent optical systems, advanced multilevel multidimensional modulation formats, fast signal processing, and research into new physical media for signal transmission (e.g. a variety of new types of optical fiber). Since the increase in the signal-to-noise ratio makes fiber communication channels essentially nonlinear (due, for example, to the Kerr effect), the problem of estimating the Shannon capacity of nonlinear communication channels is not only conceptually interesting but also practically important. Here we discuss various nonlinear communication channels and review the potential of different optical signal coding, transmission and processing techniques to improve the fiber-optic Shannon capacity and to increase the system reach.
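For reference, the linear (AWGN) Shannon limit that nonlinear impairments erode is C = B log2(1 + SNR). A quick computation with illustrative numbers (a 50 GHz channel at 20 dB SNR; neither value is taken from the paper):

```python
import math

def awgn_capacity(bandwidth_hz, snr_db):
    """Shannon capacity of a linear AWGN channel: C = B * log2(1 + SNR)."""
    snr = 10.0 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1.0 + snr)

# Illustrative values only: 50 GHz optical channel, 20 dB SNR.
c = awgn_capacity(50e9, 20.0)
print(f"{c / 1e9:.1f} Gbit/s")  # ~333 Gbit/s
```

In a nonlinear fiber channel, raising the launch power (and hence the SNR) eventually increases Kerr-induced distortion, so the achievable rate no longer grows logarithmically as this formula suggests; that gap is what motivates the capacity estimates reviewed in the paper.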

Relevance: 80.00%

Abstract:

Reactive iron (oxyhydr)oxide minerals preferentially undergo early diagenetic redox cycling, which can result in the production of dissolved Fe(II), adsorption of Fe(II) onto particle surfaces, and the formation of authigenic Fe minerals. The partitioning of iron in sediments has traditionally been studied by applying sequential extractions that target operationally defined iron phases. Here, we complement an existing sequential leaching method by developing a sample-processing protocol for δ56Fe analysis, which we subsequently use to study Fe phase-specific fractionation related to dissimilatory iron reduction in a modern marine sediment. Carbonate-Fe was extracted by acetate, easily reducible oxides (e.g. ferrihydrite and lepidocrocite) by hydroxylamine-HCl, reducible oxides (e.g. goethite and hematite) by dithionite-citrate, and magnetite by ammonium oxalate. Subsequently, the samples were repeatedly oxidized, heated and purified via Fe precipitation and column chromatography. The method was applied to surface sediments collected from the North Sea, south of the island of Helgoland. The acetate-soluble fraction (targeting siderite and ankerite) showed a pronounced downcore δ56Fe trend. This iron pool was most depleted in 56Fe close to the sediment-water interface, similar to trends observed for pore-water Fe(II). We interpret this pool as surface-reduced Fe(II), rather than siderite or ankerite, that was open to electron and atom exchange with the oxide surface. Common extractions using 0.5 M HCl or Na-dithionite alone may not resolve such trends, as they dissolve iron from isotopically distinct pools, leading to a mixed signal. Na-dithionite leaching alone, for example, targets the sum of reducible Fe oxides, which potentially differ in their isotopic fingerprint. Hence, the development of a sequential-extraction Fe isotope protocol provides a new opportunity for detailed study of the behavior of iron in a wide range of environmental settings.

Relevance: 80.00%

Abstract:

Periplatform ooze is an admixture of pelagic carbonate and sediment derived from neritic carbonate platforms. Compositional variations of periplatform ooze allow the reconstruction of past sea-level changes. Periplatform ooze formed during sea-level highstands is finer grained and richer in aragonite, owing to the elevated input of material from the flooded platform, compared with periplatform ooze formed during episodes of lowered sea level. In many cases, however, the sea floor around carbonate platforms is subjected to bottom currents, which are expected to affect sediment composition, e.g. through winnowing of the fine fraction. Because this sea-level-driven highstand-shedding signal can be influenced or even distorted by changing current activity, an integrated study using seismic, hydroacoustic and sedimentological data was performed on periplatform ooze deposited in the Inner Sea of the Maldives. The Miocene to Pleistocene succession of drift deposits is subdivided into nine units; the limits of the seismostratigraphic units correspond to changes or turnarounds in grain-size trends in cores recovered at ODP Site 716 and NEOMA Site 1143. For the Pleistocene, it can be shown how changes in grain size occur in concert with sea-level changes and changes of the monsoonal system, which is thought to be a major driver of bottom currents in the Maldives. A clear highstand-shedding pattern only appears in the data at a time of relaxation of monsoonal strength during the last 315 ky. The results imply (1) that drift sediments provide a potential target for analyzing past changes in oceanic currents and (2) that the ooze composition bears a mixed signal of input and physical winnowing at the sea floor.

Relevance: 80.00%

Abstract:

The last two decades have seen many exciting examples of tiny robots, from a few cm³ down to less than one cm³. Although individually limited, a large group of these robots has the potential to work cooperatively and accomplish complex tasks. Two examples from nature that exhibit this type of cooperation are ant and bee colonies. Such robots have the potential to assist in applications like search and rescue, military scouting, infrastructure and equipment monitoring, nano-manufacture, and possibly medicine. Most of these applications require the high level of autonomy that has been demonstrated by large robotic platforms such as the iRobot and Honda ASIMO. However, when robot size shrinks, current approaches to achieving the necessary functions are no longer valid. This work focused on the challenges associated with electronics and fabrication. We addressed three major technical hurdles inherent in current approaches: 1) the difficulty of compact integration; 2) the need for real-time, power-efficient computation; 3) the unavailability of commercial tiny actuators and motion mechanisms. The aim of this work was to provide enabling hardware technologies to achieve autonomy in tiny robots. We proposed a decentralized application-specific integrated circuit (ASIC) in which each component is responsible for its own operation and autonomy to the greatest extent possible. The ASIC consists of electronics modules for the fundamental functions required to fulfill the desired autonomy: actuation, control, power supply, and sensing. The actuators and mechanisms could potentially be post-fabricated directly on the ASIC. This design makes for a modular architecture.
The following components were shown to work in physical implementations or simulations: 1) a tunable motion controller for ultralow-frequency actuation; 2) a nonvolatile memory and programming circuit achieving automatic, one-time programming; 3) a high-voltage circuit with the highest reported breakdown voltage in standard 0.5 μm CMOS; 4) thermal actuators fabricated using a CMOS-compatible process; 5) a low-power mixed-signal computational architecture for a robotic dynamics simulator; 6) a frequency-boost technique to achieve low jitter in ring oscillators. These contributions should also prove enabling for other systems with strict size and power constraints, such as wireless sensor nodes.

Relevance: 80.00%

Abstract:

The GRAIN detector is part of the SAND Near Detector of the DUNE neutrino experiment. A new imaging technique involving the collection of scintillation light will be used in order to reconstruct images of particle tracks in the GRAIN detector. Silicon photomultiplier (SiPM) matrices will be used as photosensors to collect the scintillation light emitted at 127 nm by liquid argon. The readout of SiPM matrices inside the liquid argon requires a multi-channel mixed-signal ASIC, while the back-end electronics will be implemented in FPGAs outside the cryogenic environment. The ALCOR (A Low-power Circuit for Optical sensor Readout) ASIC, developed by the Torino division of INFN, is under study, since it is optimized to read out SiPMs at cryogenic temperatures. I took part in the realization of a demonstrator of the imaging system, which consists of a SiPM matrix connected to a custom circuit board on which an ALCOR ASIC is mounted; the board communicates with an FPGA. The first step of the project that I accomplished was the development of an emulator for the ALCOR ASIC. This emulator allowed me to verify the correct functioning of the initial firmware before the real ASIC itself was available. I programmed the emulator in VHDL and also developed test benches to verify its correct operation. Furthermore, I developed portions of the DAQ software, which I used for data acquisition and for the slow control of the ASICs, as well as parts of the DAQ firmware for the FPGAs. Finally, I tested the complete SiPM readout system at both room and cryogenic temperature in order to ensure its full functionality.
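An ASIC emulator of this kind can be thought of as a register file that answers slow-control reads and writes the way the real chip would, so the firmware under test cannot tell the difference. The sketch below is a toy Python analogue of that idea (the actual emulator was written in VHDL); the register map, widths and addresses are invented for illustration, not the real ALCOR register map.

```python
class AsicEmulator:
    """Toy slow-control emulator: a register file that mimics an
    ASIC's read/write behavior so firmware can be exercised before
    the real chip is available. The register map is hypothetical."""
    def __init__(self):
        self.regs = {addr: 0 for addr in range(16)}  # 16 registers, reset to 0

    def write(self, addr, value):
        self.regs[addr] = value & 0xFF               # 8-bit registers (assumed)

    def read(self, addr):
        return self.regs[addr]

# Test-bench style check: write a configuration value, read it back.
chip = AsicEmulator()
chip.write(0x3, 0x5A)
print(hex(chip.read(0x3)))  # -> 0x5a, readback matches
print(chip.read(0x0))       # -> 0, untouched register keeps reset value
```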

Relevance: 40.00%

Abstract:

Most research on Distributed Space-Time Block Coding (D-STBC) has so far focused on the case of two relay nodes and assumed that the relay nodes are perfectly synchronised at the symbol level. This paper applies STBC to 4-relay-node systems under quasi-synchronisation and derives a new detector based on parallel interference cancellation, which proves to be very effective in suppressing the impact of imperfect synchronisation.

Relevance: 40.00%

Abstract:

One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the length of the code word. However, time-selective fading channels do exist, and in such cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). As a sequel to the authors' previous papers on this subject, this paper aims to eliminate the error floor of the H(i)-coded O-STBC system (i = 3 and 4) by employing the techniques of 1) zero forcing (ZF) and 2) parallel interference cancellation (PIC). It is shown that for an H(i)-coded system the PIC is a much better choice than the ZF in terms of both performance and computational complexity. Compared with the conventional H(i) detector, the PIC detector incurs a moderately higher computational complexity, but this is well justified by the enormous performance improvement.
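The PIC idea can be illustrated on a generic linear model y = Hx + n: start from rough hard decisions, then iteratively re-detect each symbol after subtracting the reconstructed interference from all the others. The sketch below uses a small fixed channel and BPSK symbols purely as an illustration of the principle; it is not the H(i)-specific detector developed in the paper.

```python
def pic_detect(H, y, n_iter=3):
    """Parallel interference cancellation for y = H x + n with
    BPSK symbols x in {+1, -1}. H is given as a list of rows."""
    rows, n = len(H), len(H[0])
    # Initial hard decisions from the matched-filter output.
    x = [1.0 if sum(H[k][i] * y[k] for k in range(rows)) >= 0 else -1.0
         for i in range(n)]
    for _ in range(n_iter):
        x_new = []
        for i in range(n):
            # Subtract the reconstructed interference of all other symbols.
            resid = [y[k] - sum(H[k][j] * x[j] for j in range(n) if j != i)
                     for k in range(rows)]
            # Matched-filter decision on the cleaned observation.
            stat = sum(H[k][i] * resid[k] for k in range(rows))
            x_new.append(1.0 if stat >= 0 else -1.0)
        x = x_new
    return x

# Noiseless toy example: 4 BPSK symbols through a fixed 4x4 channel
# with cross-interference; PIC recovers the transmitted vector.
H = [[1.0, 0.4, 0.2, 0.1],
     [0.3, 1.0, 0.4, 0.2],
     [0.2, 0.3, 1.0, 0.4],
     [0.1, 0.2, 0.3, 1.0]]
x_true = [1.0, -1.0, -1.0, 1.0]
y = [sum(H[k][j] * x_true[j] for j in range(4)) for k in range(4)]
print(pic_detect(H, y))  # -> [1.0, -1.0, -1.0, 1.0]
```

Because every symbol is re-detected in parallel in each round (rather than sequentially, as in successive cancellation), the per-iteration cost stays modest, which matches the complexity argument made in the abstract.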