Abstract:
Multicode operation in space-time block coded (STBC) multiple-input multiple-output (MIMO) systems can provide additional degrees of freedom in the code domain to achieve high data rates. In such multicode STBC systems, the receiver experiences code domain interference (CDI) in frequency-selective fading. In this paper, we propose a linear parallel interference cancellation (LPIC) approach to cancel the CDI in multicode STBC in frequency-selective fading. The proposed detector first performs LPIC followed by STBC decoding. We evaluate the bit error performance of the detector and show that it effectively cancels the CDI and achieves improved error performance. Our results further illustrate how the combined effect of interference cancellation, transmit diversity, and RAKE diversity affects the bit error performance of the system.
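As a minimal illustration of generic linear parallel interference cancellation on matched-filter outputs (a sketch assuming a synchronous model y = Rb + n with a known code cross-correlation matrix R; the STBC decoding, RAKE combining, and all numerical values here are assumptions, not the paper's detector):

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4                                                    # number of multicode streams
R = np.eye(K) + 0.25 * (np.ones((K, K)) - np.eye(K))     # assumed code cross-correlation matrix
b = rng.choice([-1.0, 1.0], size=K)                      # BPSK symbols on the K codes
y = R @ b + 0.05 * rng.standard_normal(K)                # matched-filter outputs: symbols plus CDI

def lpic_stage(y, R, b_prev):
    """One LPIC stage: subtract the interference reconstructed from the previous estimates."""
    interference = (R - np.diag(np.diag(R))) @ b_prev
    return y - interference

b_hat = y.copy()                 # stage-0 estimate = raw matched-filter outputs
for _ in range(3):               # a few cancellation stages
    b_hat = lpic_stage(y, R, b_hat)

print("transmitted:", b)
print("detected:   ", np.sign(b_hat))
```

Each stage removes the code-domain interference estimated from the previous stage, so the iterates approach the decorrelating solution when the cross-correlations are moderate.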
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power, and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternate approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
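As a rough illustration of why a two-sample check can raise false alarms while a three-sample check can be made immune to them, the hypothetical decision rules below compare a main sample of a node with one or two delayed samples. The rules, sampling windows, and example values are assumptions for illustration only, not the circuits evaluated in the paper.

```python
def double_sampling_error(main, shadow):
    """Razor-style check: flag an error whenever the main flip-flop and the
    delayed shadow sample disagree.  A legitimate transition or a short glitch
    inside the sampling window also triggers this, giving false detections."""
    return main != shadow

def triple_sampling_error(main, shadow1, shadow2):
    """Hypothetical three-sample rule: flag an error only when both delayed
    samples agree with each other but differ from the main sample, i.e. the
    data genuinely settled to a different value after the clock edge."""
    return shadow1 == shadow2 and main != shadow1

# Late-arriving data (true timing error): both rules flag it.
print(double_sampling_error(0, 1), triple_sampling_error(0, 1, 1))   # True True

# Short glitch between the two delayed samples: double sampling raises a
# false alarm, the three-sample rule does not.
print(double_sampling_error(0, 1), triple_sampling_error(0, 1, 0))   # True False
```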
Abstract:
In the first half of the twentieth century, the dematerialization of boundaries between enclosure and exposure problematized traditional acts of “occupation” and understandings of the domestic environment. As a space of escalating technological control, the modern domestic interior offered new potential to redefine the meaning and means of habitation. This shift is clearly expressed in the transformation of electric lighting technology and applications for the modern interior in the mid-twentieth century. Addressing these issues, this paper examines the critical role of electric lighting in regulating and framing both the public and private occupation of Philip Johnson’s New Canaan estate. Exploring the dialectically paired transparent Glass House and opaque Guest House (both 1949), this study illustrates how Johnson employed artificial light to control the visual environment of the estate as well as to aestheticize the performance of domestic space. Looking closely at the use of artificial light to create emotive effects as well as to intensify the experience of occupation, this revisiting of the iconic Glass House and lesser-known Guest House provides a more complex understanding of Johnson’s work and the means by which he inhabited his own architecture. Calling attention to the importance of Johnson’s serving as both architect and client, and his particular interest in exploring the new potential of architectural lighting in this period, this paper investigates Johnson’s use of electric light to support architectural narratives, maintain visual order and control, and suit the nuanced desires of domestic occupation.
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
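A minimal sketch of the second approach described above: each sensor applies a two-level (threshold) quantizer to its noisy reading and broadcasts a single bit, and a local fusion center applies a simple K-out-of-N counting rule. The Gaussian noise model, threshold, and value of K are illustrative assumptions, not the paper's optimized quantities.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 10                 # sensors in the vicinity
TAU = 0.5              # assumed per-sensor quantization threshold
K = 3                  # assumed fusion rule: declare intruder if >= K sensors say so

def sensor_bits(noise_free_readings, sigma=0.3):
    """Each sensor adds its own noise and broadcasts a single bit."""
    noisy = noise_free_readings + sigma * rng.standard_normal(N)
    return (noisy > TAU).astype(int)

def fuse(bits):
    """Counting rule at the local fusion center."""
    return bits.sum() >= K

clutter  = 0.1 * np.ones(N)                          # weak, spatially flat response
intruder = np.where(np.arange(N) < 3, 1.0, 0.1)      # strong response only near the intruder

print("clutter  ->", fuse(sensor_bits(clutter)))     # typically False
print("intruder ->", fuse(sensor_bits(intruder)))    # typically True
```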
Abstract:
We consider the problem of quickest detection of an intrusion using a sensor network, keeping only a minimal number of sensors active. By using a minimal number of sensor devices, we ensure that the energy expenditure for sensing, computation, and communication is minimized (and the lifetime of the network is maximized). We model the intrusion detection (or change detection) problem as a Markov decision process (MDP). Based on the theory of MDPs, we develop the following sleep/wake scheduling algorithms: (1) closed-loop optimal control of $M_{k+1}$, the number of sensors in the wake state in time slot $k+1$; (2) closed-loop optimal control of $q_{k+1}$, the probability of a sensor being in the wake state in time slot $k+1$; and (3) an open-loop algorithm that computes $q$, the optimal probability of a sensor being in the wake state (which does not vary with time), based on the sensor observations obtained up to time slot $k$. Our results show that optimal closed-loop control of $M_{k+1}$ significantly decreases the cost compared to keeping any number of sensors active all the time. Also, among the three algorithms described, we observe that the total cost is lowest for the optimal control of $M_{k+1}$ and highest for the optimal open-loop control of $q$.
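The paper's closed-loop schedulers come from an MDP formulation that a short snippet cannot reproduce; the sketch below only conveys the flavour of the open-loop variant: every sensor wakes independently with a fixed probability q each slot, and the awake sensors' readings drive a standard Bayesian (Shiryaev-type) change statistic. The Gaussian observation model, wake probability, change prior, and threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

N, Q, RHO = 20, 0.2, 0.005       # sensors, assumed wake probability, prior change rate per slot
MU0, MU1, SIGMA = 0.0, 1.0, 1.0  # assumed pre/post-change observation means and noise std
THRESHOLD = 0.95                 # declare an intrusion when the posterior exceeds this
CHANGE_SLOT = 30                 # (unknown to the detector) slot at which the intrusion starts

def likelihood(x, mu):
    return np.exp(-0.5 * ((x - mu) / SIGMA) ** 2)

posterior = 0.0                  # P(change has occurred | observations so far)
for k in range(200):
    awake = rng.random(N) < Q                       # open-loop rule: each sensor wakes w.p. Q
    mu = MU1 if k >= CHANGE_SLOT else MU0
    obs = mu + SIGMA * rng.standard_normal(awake.sum())

    prior = posterior + (1 - posterior) * RHO       # Shiryaev-type prior update
    l1 = np.prod(likelihood(obs, MU1))
    l0 = np.prod(likelihood(obs, MU0))
    posterior = prior * l1 / (prior * l1 + (1 - prior) * l0)

    if posterior > THRESHOLD:
        print(f"intrusion declared at slot {k} (true change at {CHANGE_SLOT})")
        break
```

Keeping only a fraction Q of the sensors awake trades detection delay against sensing and communication energy, which is the cost balanced by the scheduling algorithms above.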
Abstract:
Novel chromogenic thiourea-based sensors 4,4'-bis-[3-(4-nitrophenyl) thiourea] diphenyl ether 1 and 4,4'-bis-[3-(4-nitrophenyl) thiourea] diphenyl methane 2, bearing a nitrophenyl group as the signaling unit, have been synthesized and characterized by spectroscopic techniques and X-ray crystallography. Both sensors show visual detection as well as UV-vis and NMR spectral changes in the presence of fluoride and cyanide anions in organic solvent as well as in aqueous medium. The absorption spectra indicate that the host-guest complex forms in a 1:2 stoichiometric ratio.
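For reference, a 1:2 host:guest stoichiometry of this kind corresponds to the overall binding equilibrium and association constant below (standard definitions only, with H denoting sensor 1 or 2 and G the fluoride or cyanide anion; no values are implied here):

\[
\mathrm{H} + 2\,\mathrm{G} \rightleftharpoons \mathrm{HG_2}, \qquad
\beta_2 = \frac{[\mathrm{HG_2}]}{[\mathrm{H}]\,[\mathrm{G}]^2}.
\]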
Abstract:
An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSFs (DoF-PSFs), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming point-by-point excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most bioimaging techniques and may find potential application in high-resolution fluorescence microscopy and nanoscale imaging.
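In the idealized picture of two counter-propagating beams of equal amplitude along the optical axis (an assumption used here only to convey the origin of the axial spot pattern, not the full DoF-PSF calculation), the axial excitation intensity is a standing wave,

\[
I(z) \propto \left| E_0 e^{ikz} + E_0 e^{-ikz} \right|^2 = 4 E_0^2 \cos^2(kz), \qquad k = \frac{2\pi n}{\lambda},
\]

so neighbouring excitation maxima are separated by $\lambda/2n$ along the axis, and the extended depth-of-focus envelope determines how many of these spots are usable.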
Abstract:
By detecting leading protons produced in the Central Exclusive Diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the necessary radiation-hard precision detector technology for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design, which, in addition to radiation hardness of up to 5×10^15 neutrons/cm^2, offers properties such as a high signal-to-noise ratio, fast signal response to radiation, and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but preserves the necessary properties of the 3D detector design required at the LHC and in other imaging applications.
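The missing-mass measurement referred to above follows from four-momentum conservation; with $\xi_{1,2}$ the fractional momentum losses of the two scattered protons, the standard relation for small $\xi$ is

\[
M_X^2 = (p_1 + p_2 - p_1' - p_2')^2 \simeq \xi_1 \xi_2\, s ,
\]

so the accuracy of $M_X$ is set directly by how precisely the leading-proton momenta are reconstructed along the beam line.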
Abstract:
The dissertation deals with remote narrowband measurements of the electromagnetic radiation emitted by lightning flashes. A lightning flash consists of a number of sub-processes. The return stroke, which transfers electrical charge from the thundercloud to the ground, is electromagnetically an impulsive wideband process; that is, it emits radiation at most frequencies in the electromagnetic spectrum, but its duration is only some tens of microseconds. Before and after the return stroke, multiple sub-processes redistribute electrical charges within the thundercloud. These sub-processes can last for tens to hundreds of milliseconds, many orders of magnitude longer than the return stroke. Each sub-process causes radiation with specific time-domain characteristics, having maxima at different frequencies. Thus, if the radiation is measured in a single narrow frequency band, it is difficult to identify the sub-processes, and some sub-processes can be missed altogether. However, narrowband detectors are simple to design and miniaturize. In particular, near the High Frequency (HF) band (3 MHz to 30 MHz), ordinary shortwave radios can, in principle, be used as detectors. This dissertation utilizes a prototype detector which is essentially a handheld AM radio receiver. Measurements were made in Scandinavia, and several independent data sources were used to identify lightning sub-processes, as well as the distance to each individual flash. It is shown that multiple sub-processes radiate strongly near the HF band. The return stroke usually radiates intensely, but it cannot be reliably identified from the time-domain signal alone. This means that a narrowband measurement is best used to characterize the energy of the radiation integrated over the whole flash, without attempting to identify individual processes. The dissertation analyzes the conditions under which this integrated energy can be used to estimate the distance to the flash. It is shown that flash-by-flash variations are large, but the integrated energy is very sensitive to changes in the distance, dropping as approximately the inverse cube root of the distance. Flashes can, in principle, be detected at distances of more than 100 km, but since the ground conductivity can vary, ranging accuracy drops dramatically at distances larger than 20 km. These limitations mean that individual flashes cannot be ranged accurately using a single narrowband detector, and the useful range is limited to 30 kilometers at most. Nevertheless, simple statistical corrections are developed which enable an accurate estimate of the distance to the closest edge of an active storm cell, as well as its approach speed. The results of the dissertation could therefore have practical applications in real-time short-range lightning detection and warning systems.
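Stated as a formula, the ranging idea above amounts to inverting an empirical power-law decay of the flash-integrated energy; the exponent $\alpha$ and reference values below are placeholders to be calibrated from measurements, not numbers taken from the dissertation:

\[
E(d) \approx E_0 \left(\frac{d}{d_0}\right)^{-\alpha}
\quad\Longrightarrow\quad
\hat d = d_0 \left(\frac{E}{E_0}\right)^{-1/\alpha},
\]

with the flash-by-flash spread in $E_0$ and the ground-conductivity dependence of $\alpha$ limiting the single-flash ranging accuracy, which is why the statistical corrections over many flashes are needed.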
Abstract:
DNA amplification using the Polymerase Chain Reaction (PCR) in a small volume is used in lab-on-a-chip systems involving DNA manipulation. For a few microliters of liquid, it becomes difficult to measure and monitor the thermal profile accurately and reproducibly, which is an essential requirement for successful amplification. Conventional temperature sensors are either not biocompatible or too large, and are hence positioned away from the liquid, leading to calibration errors. In this work we present a fluorescence-based detection technique that is completely biocompatible and directly measures the liquid temperature. PCR is demonstrated in a 3 µL silicon-glass microfabricated device using non-contact induction heating, whose temperature is controlled using fluorescence feedback from SYBR Green I dye molecules intercalated within sensor DNA. The performance is compared with temperature feedback using a thermocouple sensor. A melting curve followed by gel electrophoresis is used to confirm product specificity after the PCR cycles.
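A minimal sketch of a fluorescence-feedback temperature loop of the kind described above: a calibration curve maps the dye signal to temperature, and a simple proportional-integral step sets the heater drive. The calibration constants, controller gains, and example readings are hypothetical, not the parameters of the microfabricated device.

```python
# Hypothetical fluorescence-feedback temperature control (illustration only;
# calibration constants and controller gains are made up, not the device's values).

def fluorescence_to_temp(f_norm):
    """Assumed linear calibration: normalized SYBR Green I fluorescence falls as
    the sensor-DNA duplex melts, so lower fluorescence means higher temperature."""
    return 95.0 - 70.0 * f_norm        # f_norm = 1.0 -> 25 C, f_norm = 0.0 -> 95 C

def pi_heater_power(setpoint_c, f_norm, integral, dt=0.1, kp=0.05, ki=0.01):
    """One proportional-integral step: estimate the liquid temperature from the
    fluorescence reading and return the induction-heater drive (clipped at zero)."""
    error = setpoint_c - fluorescence_to_temp(f_norm)
    integral += error * dt
    return max(0.0, kp * error + ki * integral), integral

# A PCR cycle steps the setpoint through denaturation, annealing and extension;
# the fluorescence values are illustrative instantaneous readings.
integral = 0.0
for setpoint, f_measured in ((95.0, 0.9), (55.0, 0.1), (72.0, 0.4)):
    power, integral = pi_heater_power(setpoint, f_measured, integral)
    print(f"setpoint {setpoint:.0f} C, fluorescence {f_measured:.1f} -> heater drive {power:.2f}")
```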
Abstract:
A nanoscale cage with a trigonal prismatic shape is prepared by coordination-driven self-assembly of a predesigned organometallic Pt-3 acceptor with an organic clip-type ligand. This trigonal prism is fluorescent and undergoes efficient fluorescence quenching by nitroaromatics, which are the chemical signatures of many explosives.
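Quenching efficiency of this kind is conventionally quantified by the Stern-Volmer relation (given here as the standard definition, not as a value reported for this cage),

\[
\frac{I_0}{I} = 1 + K_{\mathrm{SV}}[Q],
\]

where $I_0$ and $I$ are the fluorescence intensities of the trigonal prism before and after adding the nitroaromatic quencher at concentration $[Q]$, and a larger Stern-Volmer constant $K_{\mathrm{SV}}$ indicates more efficient quenching.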