940 results for LCL filters
Abstract:
We propose an economic mechanism to reduce the incidence of malware that delivers spam. Earlier research proposed attention markets as a solution for unwanted messages, and showed they could provide more net benefit than alternatives such as filtering and taxes. Because it uses a currency system, Attention Bonds faces a challenge. Zombies, botnets, and various forms of malware might steal valuable currency instead of stealing unused CPU cycles. We resolve this problem by taking advantage of the fact that the spam-bot problem has been reduced to financial fraud. As such, the large body of existing work in that realm can be brought to bear. By drawing an analogy between sending and spending, we show how a market mechanism can detect and prevent spam malware. We prove that by using a currency (i) each instance of spam increases the probability of detecting infections, and (ii) the value of eradicating infections can justify insuring users against fraud. This approach attacks spam at the source, a virtue missing from filters that attack spam at the destination. Additionally, the exchange of currency provides signals of interest that can improve the targeting of ads. ISPs benefit from data management services and consumers benefit from the higher average value of messages they receive. We explore these and other secondary effects of attention markets, and find them to offer, on the whole, attractive economic benefits for all – including consumers, advertisers, and the ISPs.
Abstract:
Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
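The sketch below is a minimal illustration (not the AIRSTREAM implementation) of the matching step described above: spectral components that lie near harmonics of the selected pitch are kept in the attended stream, and mismatched components are released so another stream can capture them, as in the old-plus-new heuristic. The function names, tolerance, and toy spectrum are assumptions made for the example.

```python
import numpy as np

def harmonic_expectation(pitch_hz, freqs_hz, tolerance=0.03):
    """Top-down expectation: 1 near harmonics of the selected pitch, 0 elsewhere.
    `tolerance` is the fractional mismatch allowed around each harmonic (assumed value)."""
    harmonics = pitch_hz * np.arange(1, int(freqs_hz.max() // pitch_hz) + 1)
    # distance of each frequency channel to its nearest harmonic, relative to that harmonic
    nearest = harmonics[np.argmin(np.abs(freqs_hz[:, None] - harmonics[None, :]), axis=1)]
    return (np.abs(freqs_hz - nearest) / nearest <= tolerance).astype(float)

def match_and_split(spectrum, freqs_hz, selected_pitch_hz):
    """ART-style matching: components consistent with the selected pitch stay in stream 1;
    mismatched components are left as a residual that another stream can capture."""
    expectation = harmonic_expectation(selected_pitch_hz, freqs_hz)
    stream1 = spectrum * expectation            # resonant (matched) components
    residual = spectrum * (1.0 - expectation)   # released for other streams
    return stream1, residual

# toy example: a 200 Hz voice (harmonics at 200, 400, 600, 800 Hz) mixed with a 310 Hz tone
freqs = np.linspace(50, 4000, 512)
spectrum = np.zeros_like(freqs)
for peak in (200, 400, 600, 800, 310):
    spectrum += np.exp(-0.5 * ((freqs - peak) / 15.0) ** 2)
voice_stream, other_stream = match_and_split(spectrum, freqs, selected_pitch_hz=200.0)
```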
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. The segmentation is performed by three "copies" of the BCS and FCS, of small, medium, and large scales, wherein the "short-range" and "long-range" interactions within each scale occur over smaller or larger distances, corresponding to the size of the early filters of each scale. A diffusive filling-in operation within the segmented regions at each scale produces coherent surface representations. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
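As a minimal sketch of the early shunting center-surround stage mentioned above, the code below evaluates the standard steady-state shunting form x = (B*center - D*surround) / (A + center + surround), which compresses dynamic range while enhancing local contrast. The Gaussian kernel widths and constants are illustrative assumptions, not the parameters of the BCS/FCS model itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(image, sigma_c=1.0, sigma_s=4.0, A=1.0, B=1.0, D=1.0):
    """Steady-state shunting on-center off-surround network.
    All constants and kernel widths are assumed values for illustration."""
    image = image.astype(float)
    center = gaussian_filter(image, sigma_c)    # on-center excitation
    surround = gaussian_filter(image, sigma_s)  # off-surround inhibition
    # bounded output regardless of input dynamic range (divisive normalization)
    return (B * center - D * surround) / (A + center + surround)

# usage: enhanced = shunting_center_surround(sar_image)
```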
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
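The sketch below illustrates the general idea of a multiplexed Where map rather than the specific filter proposed in the paper: oriented (Gabor-like) detectors at several orientations and sizes respond to the image, and a winner-take-all competition over orientation and scale at each pixel yields a map of position, orientation, and size. Kernel shapes, scales, and function names are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Odd-symmetric Gabor kernel at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)

def where_map(image, orientations=8, scales=(4, 8, 16)):
    """Per pixel: winning detector response plus its scale and orientation indices."""
    image = image.astype(float)
    responses = np.stack([
        np.abs(convolve(image, gabor_kernel(2 * s + 1, wavelength=s,
                                            theta=k * np.pi / orientations, sigma=s / 2)))
        for s in scales for k in range(orientations)
    ])  # shape: (n_scales * n_orientations, H, W)
    winner = responses.argmax(axis=0)          # competition across orientation and size
    return responses.max(axis=0), winner // orientations, winner % orientations
```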
Abstract:
A neural model of peripheral auditory processing is described and used to separate features of coarticulated vowels and consonants. After preprocessing of speech via a filterbank, the model splits into two parallel channels, a sustained channel and a transient channel. The sustained channel is sensitive to relatively stable parts of the speech waveform, notably synchronous properties of the vocalic portion of the stimulus. It extends the dynamic range of eighth nerve filters using coincidence detectors that combine operations of raising to a power, rectification, delay, multiplication, time averaging, and preemphasis. The transient channel is sensitive to critical features at the onsets and offsets of speech segments. It is built up from fast excitatory neurons that are modulated by slow inhibitory interneurons. These units are combined over high-frequency and low-frequency ranges using operations of rectification, normalization, multiplicative gating, and opponent processing. Detectors sensitive to frication and to onset or offset of stop consonants and vowels are described. Model properties are characterized by mathematical analysis and computer simulations. Neural analogs of model cells in the cochlear nucleus and inferior colliculus are noted, as are psychophysical data about perception of CV syllables that may be explained by the sustained-transient channel hypothesis. The proposed sustained and transient processing seems to be an auditory analog of the sustained and transient processing that is known to occur in vision.
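A minimal sketch of the transient channel's core idea, with assumed time constants rather than the model's own: a fast excitatory unit and a slow inhibitory copy of the same band envelope are compared, and the half-wave rectified opponent signals respond only while the envelope is changing, giving separate onset and offset detectors.

```python
import numpy as np

def transient_channel(envelope, dt=0.001, tau_fast=0.005, tau_slow=0.050):
    """Fast excitatory unit modulated by a slow inhibitory interneuron (assumed taus).
    The rectified differences are large only at onsets and offsets."""
    fast = np.zeros_like(envelope)
    slow = np.zeros_like(envelope)
    onset = np.zeros_like(envelope)
    offset = np.zeros_like(envelope)
    for t in range(1, len(envelope)):
        fast[t] = fast[t - 1] + dt / tau_fast * (envelope[t] - fast[t - 1])
        slow[t] = slow[t - 1] + dt / tau_slow * (envelope[t] - slow[t - 1])
        onset[t] = max(fast[t] - slow[t], 0.0)    # opponent processing, half-wave rectified
        offset[t] = max(slow[t] - fast[t], 0.0)
    return onset, offset

# usage: a 100 ms tone burst in a 300 ms window
t = np.arange(0, 0.3, 0.001)
env = ((t > 0.1) & (t < 0.2)).astype(float)
on_response, off_response = transient_channel(env)
```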
Abstract:
A neural network model, called an FBF network, is proposed for automatic parallel separation of multiple image figures from each other and their backgrounds in noisy grayscale or multi-colored images. The figures can then be processed in parallel by an array of self-organizing Adaptive Resonance Theory (ART) neural networks for automatic target recognition. An FBF network can automatically separate the disconnected but interleaved spirals that Minsky and Papert introduced in their book Perceptrons. The network's design also clarifies why humans cannot rapidly separate interleaved spirals, yet can rapidly detect conjunctions of disparity and color, or of disparity and motion, that distinguish target figures from surrounding distractors. Figure-ground separation is accomplished by iterating operations of a Feature Contour System (FCS) and a Boundary Contour System (BCS), derived from an analysis of biological vision, in the order FCS-BCS-FCS, hence the term FBF. The FCS operations include the use of nonlinear shunting networks to compensate for variable illumination and nonlinear diffusion networks to control filling-in. A key new feature of an FBF network is the use of filling-in for figure-ground separation. The BCS operations include oriented filters joined to competitive and cooperative interactions designed to detect, regularize, and complete boundaries in up to 50 percent noise, while suppressing the noise. A modified CORT-X filter is described which uses both on-cells and off-cells to generate a boundary segmentation from a noisy image.
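The following is a minimal sketch of boundary-gated diffusive filling-in, the FCS operation the abstract highlights: feature activity spreads between neighboring pixels except where a boundary signal blocks the diffusion, so activity becomes roughly uniform within each bounded region. The permeability form, constants, and wrap-around edge handling are illustrative assumptions, not the model's equations.

```python
import numpy as np

def filling_in(features, boundaries, steps=200, rate=0.2, eps=10.0):
    """Iterative diffusion of feature signals gated by boundary strength.
    Permeability between neighbors i, j is 1 / (1 + eps*(B_i + B_j)) (assumed form);
    image edges wrap around for simplicity."""
    x = features.astype(float).copy()
    B = boundaries.astype(float)
    for _ in range(steps):
        flux = np.zeros_like(x)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            neighbor = np.roll(x, shift, axis=axis)
            perm = 1.0 / (1.0 + eps * (B + np.roll(B, shift, axis=axis)))
            flux += perm * (neighbor - x)       # strong boundaries block the exchange
        x += rate * flux
    return x
```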
Abstract:
A feedforward neural network for invariant image preprocessing is proposed that represents the position, orientation, and size of an image figure (where it is) in a multiplexed spatial map. This map is used to generate an invariant representation of the figure that is insensitive to position, orientation, and size for purposes of pattern recognition (what it is). A multiscale array of oriented filters followed by competition between orientations and scales is used to define the Where filter.
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
Abstract:
RNA editing is a biological phenomenon that alters nascent RNA transcripts by insertion, deletion and/or substitution of one or a few nucleotides. It is ubiquitous in all kingdoms of life and in viruses. The predominant editing event in organisms with a developed central nervous system is Adenosine to Inosine deamination. Inosine is recognized as Guanosine by the translational machinery and by reverse-transcriptase. In primates, RNA editing occurs frequently in transcripts from repetitive regions of the genome. In humans, more than 500,000 editing instances have been identified by applying computational pipelines to available ESTs and high-throughput sequencing data, and by using chemical methods. However, the functions of only a small number of cases have been studied thoroughly. RNA editing instances have been found to have roles in the synthesis of peptide variants through non-synonymous codon substitutions, in transcript variants through alteration of splicing sites, and in gene silencing through modification of miRNA sequences. We established the Database of RNA EDiting (DARNED) to accommodate the reference genomic coordinates of substitution editing in human, mouse and fly transcripts from the published literature, with additional information on edited genomic coordinates collected from various databases, e.g. UCSC and NCBI. DARNED contains mostly Adenosine to Inosine editing and allows searches based on genomic region, gene ID, and user-provided sequence. The database is accessible at http://darned.ucc.ie. RNA editing instances in coding regions are likely to result in recoding during protein synthesis. This encouraged me to focus my research on occurrences of RNA editing in CDS and non-Alu exonic regions. By applying various filters to discrepancies between available ESTs and their corresponding reference genomic sequences, putative RNA editing candidates were identified. High-throughput sequencing was used to validate these candidates. All predicted coordinates appeared to be either SNPs or unedited.
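A minimal sketch of the kind of mismatch filtering described above, not the DARNED pipeline itself: compare an aligned EST/read against the reference, keep A-to-G mismatches (T-to-C on the minus strand) as candidate A-to-I editing sites, and discard positions that coincide with known SNPs. The function name, inputs, and the toy alignment are illustrative assumptions.

```python
def editing_candidates(reference, read, start, strand, known_snps):
    """Return genomic positions of candidate A-to-I editing sites in an ungapped alignment.
    `reference` and `read` are equal-length uppercase strings; `start` is the genomic
    coordinate of the first base; `known_snps` is a set of positions to exclude."""
    # Inosine is read as Guanosine, so A-to-I editing appears as A->G (+ strand) or T->C (- strand)
    wanted = ("A", "G") if strand == "+" else ("T", "C")
    candidates = []
    for offset, (ref_base, read_base) in enumerate(zip(reference, read)):
        pos = start + offset
        if (ref_base, read_base) == wanted and pos not in known_snps:
            candidates.append(pos)
    return candidates

# usage: the A->G at position 1004 is excluded because it is a known SNP
sites = editing_candidates("GATTACA", "GGTTGCA", start=1000, strand="+", known_snps={1004})
# sites == [1001]
```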
Abstract:
Vascular smooth muscle cells (VSMC) are one of the key players in the pathogenesis of cardiovascular diseases. The origin of neointimal VSMC has thus become a prime focus of research. VSMC originate from multiple progenitor cell types. In the embryo, the well-defined sources of VSMC include neural crest cells, proepicardial cells and EPC. In adults, though progenitor cells from bone marrow (BM), circulation and tissues giving rise to SMC have been identified, no progress has been made in terms of isolating a highly proliferative clonal population of adult stem cells with the potential to differentiate into SMC. Smooth muscle-like stem progenitor cells (SMSPC) were isolated from cardiopulmonary bypass filters of adult patients undergoing CABG. Rat SMSPC had previously been isolated by our group from the bone marrow of Fischer rats and also from the peripheral blood of the monocrotaline-induced pulmonary hypertension (MCT-PHTN) animal model. Characterization of these novel SMSPC revealed stem cell characteristics and the machinery for differentiation into SMC. The expression of Isl-1 on SMSPC provided a unique molecular identity to these circulating stem progenitor cells. The functional potential of SMSPC was determined by monitoring adoptive transfer of GFP+ SMSPC in rodent models of vascular injury: carotid injury and MCT-PHTN. The participation of SMSPC in vascular pathology was confirmed by quantifying the peripheral blood and engrafted levels of SMSPC using RT-PCR. In terms of translation into clinical practice, SMSPC could be a good tool for detecting atherosclerotic plaque burden. The current study demonstrates the existence of novel adult stem progenitor cells in the circulation, with a potential role in vascular pathology.
Abstract:
Leachate may be defined as any liquid percolating through deposited waste and emitted from or contained within a landfill. If leachate migrates from a site it may pose a severe threat to the surrounding environment. Increasingly stringent environmental legislation at both European and national level (Republic of Ireland) regarding the operation of landfill sites, control of associated emissions, as well as requirements for restoration and aftercare management (up to 30 years), has prompted research for this project into the design and development of a low-cost, low-maintenance, low-technology trial system to treat landfill leachate at Kinsale Road Landfill Site, located on the outskirts of Cork city. A trial leachate treatment plant was constructed consisting of 14 separate treatment units (10 open-top cylindrical cells [Ø 1.8 m x 2.0 m high] and four reed beds [5.0 m x 5.0 m x 1.0 m]) incorporating various alternative natural treatment processes including reed beds (vertical flow [VF] and horizontal flow [HF]), grass treatment planes, compost units, timber chip units, compost-timber chip units, stratified sand filters and willow treatment plots. High treatment efficiencies were achieved in units operating in sequence containing compost and timber chip media, vertical flow reed beds and grass treatment planes. Pollutant load removal rates of 99% for NH4, 84% for BOD5, 46% for COD, 63% for suspended solids, 94% for iron and 98% for manganese were recorded in the final effluent of successfully operated sequences at irrigation rates of 945 l/m2/day in the cylindrical cells and 96 l/m2/day in the VF reed beds and grass treatment planes. Almost total pathogen removal (E. coli) occurred in the final effluent of the same sequence. Denitrification rates of 37% were achieved for a limited period. A draft, up-scaled leachate treatment plant is presented, based on the treatment performance of the trial plant.
Abstract:
A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators, which are designed as both infinite-impulse-response (IIR) filters and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude response characteristics, thereby rejecting noise frequencies higher than the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators. It is shown that the proposed differentiators perform well when compared to previously proposed filters. When considering the time-domain characteristics of the differentiators, the processing of quantized signals proved especially enlightening in terms of the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm, with optimization objectives that include the magnitude and phase responses. The low-pass characteristic of the proposed differentiators is achieved by minimizing the filter variance. The low-pass differentiators designed show steep roll-off, as well as highly accurate magnitude response in the pass-band. While having a history of over three hundred years, the design of fractional differentiators has become a ‘hot topic’ in recent decades. One challenging problem in this area is that there are many different definitions of the fractional model, such as the Riemann-Liouville and Caputo definitions. Through the use of a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis. Specifically, the first of these involves the application of second-degree differentiators to the estimation of the frequency components of a power system. The second example concerns an image-processing edge-detection application.
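As an illustration of a noise-rejecting (low-pass) first-degree differentiator, the sketch below uses a standard Savitzky-Golay derivative filter; this is not the thesis's optimized design, only a familiar FIR differentiator with the same qualitative behavior (accurate derivative in the pass-band, attenuation of high-frequency noise). The sampling rate, window length, and polynomial order are assumed values.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs, freqz

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)   # noisy 5 Hz sine

# FIR low-pass differentiator: local cubic fit over a 31-sample window, first derivative
dx = savgol_filter(x, window_length=31, polyorder=3, deriv=1, delta=1 / fs)
# ideal derivative for comparison: 2*pi*5*cos(2*pi*5*t)

# frequency response of the same coefficients: magnitude ~ omega at low frequencies,
# rolling off at high frequencies instead of amplifying noise like an ideal differentiator
h = savgol_coeffs(31, 3, deriv=1, delta=1 / fs)
w, H = freqz(h, worN=512, fs=fs)
```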
Abstract:
Photonic integration has become an important research topic for applications in the telecommunications industry. Current optical internet infrastructure has reached capacity, with current-generation dense wavelength division multiplexing (DWDM) systems fully occupying the low-absorption region of optical fibre from 1530 nm to 1625 nm (the C and L bands). This is due both to an increase in the number of users worldwide and to existing users demanding more bandwidth. Therefore, current research is focussed on using the available telecommunication spectrum more efficiently. To this end, coherent communication systems are being developed. Advanced coherent modulation schemes can be quite complex in terms of the number and array of devices required for implementation. In order to make these systems viable both logistically and commercially, photonic integration is required. In traditional DWDM systems, arrayed waveguide gratings (AWG) are used to both multiplex and demultiplex the multi-wavelength signal involved. AWGs are used widely as they allow filtering of the many DWDM wavelengths simultaneously. However, when moving to coherent telecommunication systems such as coherent optical frequency division multiplexing (OFDM), smaller free spectral ranges (FSR) are required from the AWG. This increases the size of the device, which is counter to the miniaturisation which integration is trying to achieve. Much work was done with active filters during the 1980s. This involved using a laser device (usually below threshold) to allow selective wavelength filtering of input signals. By using devices with more complicated cavity geometries, such as distributed feedback (DFB) and sampled grating distributed Bragg reflector (SG-DBR) lasers, narrowband filtering is achievable with high suppression (>30 dB) of spurious wavelengths. The active nature of the devices also means that, through carrier injection, the index can be altered, resulting in tunability of the filter. Used above threshold, active filters become useful in filtering coherent combs. Through injection locking, the coherence of the filtered wavelengths with the original comb source is retained. This gives active filters a potential application in coherent communication systems as demultiplexers. This work will focus on the use of slotted Fabry-Pérot (SFP) semiconductor lasers as active filters. Experiments were carried out to ensure that SFP lasers were useful as tunable active filters. In all experiments in this work the SFP lasers were operated above threshold, and so injection locking was the mechanism by which the filters operated. Performance of the lasers under injection locking was examined using both single-wavelength and coherent comb injection. In another experiment two discrete SFP lasers were used simultaneously to demultiplex a two-line coherent comb. The relative coherence of the comb lines was retained after demultiplexing. After showing that SFP lasers could be used to successfully demultiplex coherent combs, a photonic integrated circuit was designed and fabricated. This involved monolithic integration of an MMI power splitter with an array of single-facet SFP lasers. This device was tested in much the same way as the discrete devices. The integrated device was used to successfully demultiplex a two-line coherent comb signal whilst retaining the relative coherence between the filtered comb lines.
A series of modelling systems were then employed in order to understand the resonance characteristics of the fabricated devices, and to understand their performance under injection locking. Using this information, alterations to the SFP laser designs were made which were theoretically shown to provide improved performance and suitability for use in filtering coherent comb signals.
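A small worked sketch of the cavity relation behind the FSR trade-off mentioned above: the free spectral range of a Fabry-Pérot cavity is FSR = c / (2 n_g L), so demanding a smaller FSR forces a longer cavity, and the Airy transmission function shows the periodic passbands between which an injected signal selects. The cavity length, group index, and reflectivity below are illustrative numbers, not parameters of the fabricated SFP devices.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def free_spectral_range(length_m, group_index):
    """FSR (Hz) of a Fabry-Perot cavity of physical length `length_m`."""
    return C / (2.0 * group_index * length_m)

def airy_transmission(freq_hz, fsr_hz, mirror_reflectivity):
    """Normalized Fabry-Perot transmission (Airy function) versus optical frequency."""
    finesse_coeff = 4.0 * mirror_reflectivity / (1.0 - mirror_reflectivity) ** 2
    return 1.0 / (1.0 + finesse_coeff * np.sin(np.pi * freq_hz / fsr_hz) ** 2)

# illustrative numbers: a 400 um cavity with group index ~3.6 gives an FSR near 104 GHz,
# so resolving a comb with 10 GHz line spacing would need a roughly tenfold longer cavity
fsr = free_spectral_range(400e-6, 3.6)
f = np.linspace(0, 3 * fsr, 2000)
T = airy_transmission(f, fsr, mirror_reflectivity=0.3)
```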
Abstract:
This dissertation addressed the issue of sustainable development at the level of individual behaviors. Environmental perceptions were obtained from people living around the Chamela-Cuixmala biosphere reserve in Jalisco, Mexico. Several environmental issues were identified by the people, such as garbage and grey water on the streets, burning plastics, and the lack of use of recreational areas. All these issues could be addressed with a change in behavior by the villagers. Familiarization activities were conducted to gain people's trust in order to conduct a community forum. These activities included giving talks to school children and organizing workshops. Four different methodologies were generated using memetics and participation to test which would ameliorate the environmental issues identified by the people through a change in behavior. The methodologies were 1) Memes; 2) Participation and Memes; 3) Participation; 4) Neither Participation nor Memes. A meme is an idea expressed within a linguistic structure or architecture that provides it with self-disseminating and self-protecting characteristics within and among the minds of individuals, congruent with their values, beliefs and filters. Four villages were chosen as the treatments, and one as the control, for a total of five experimental villages. A different behavior was addressed in each treatment village (garbage, grey water, burning plastics, recreation). A nonequivalent control-group design was established. A pretest was conducted in all five villages; the methodologies were tested in the four treatment villages; a posttest was conducted in the five villages. The pretest and posttest consisted of measuring sensory-specific indicators, which are manifestations of behavior that can be seen, smelled, touched, heard or tasted. Statistically significant differences in behavior from the control were found for two of the methodologies: 1) Memes (p=0.0403) and 2) Participation and Memes (p=0.0064). For the methodologies of 3) Participation alone and 4) Neither, the differences were not significant (p=0.8827 and p=0.5627, respectively). When using memes, people's behavior improved compared to the control. Participation alone did not generate a significant difference. Participation aided in the generation of the memes. Memetics is a tool that can be used to establish a linkage between human behavior and ecological health.
Abstract:
Thermal-optical analysis is a conventional method for classifying carbonaceous aerosols as organic carbon (OC) and elemental carbon (EC). This article examines the effects of three different temperature protocols on the measured EC. For analyses of parallel punches from the same ambient sample, the protocol with the highest peak helium-mode temperature (870°C) gives the smallest amount of EC, while the protocol with the lowest peak helium-mode temperature (550°C) gives the largest amount of EC. These differences are observed when either sample transmission or reflectance is used to define the OC/EC split. An important issue is the effect of the peak helium-mode temperature on the relative rate at which different types of carbon with different optical properties evolve from the filter. Analyses of solvent-extracted samples are used to demonstrate that high temperatures (870°C) lead to premature EC evolution in the helium mode. For samples collected in Pittsburgh, this causes the measured EC to be biased low because the attenuation coefficient of pyrolyzed carbon is consistently higher than that of EC. While this problem can be avoided by lowering the peak helium-mode temperature, analyses of wood-smoke-dominated ambient samples and levoglucosan-spiked filters indicate that too low a peak helium-mode temperature (550°C) allows non-light-absorbing carbon to slip into the oxidizing mode of the analysis. If this carbon evolves after the OC/EC split, it biases the EC measurements high. Given the complexity of ambient aerosols, there is unlikely to be a single peak helium-mode temperature at which both of these biases can be avoided. Copyright © American Association for Aerosol Research.
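The following is a minimal sketch of the generic OC/EC split rule referred to above, not the specific protocols compared in the article: carbon that evolves before the laser transmission (or reflectance) recovers to its initial value in the oxidizing mode is counted as OC plus pyrolyzed carbon, and carbon evolving afterwards is counted as EC. Array names, the recovery test, and the integration are illustrative assumptions.

```python
import numpy as np

def oc_ec_split(time_s, carbon_rate, laser_signal, oxidizing_start_s):
    """Split evolved carbon into OC and EC at the point where the laser signal,
    after darkening from pyrolysis, first returns to its initial value in the oxidizing mode."""
    time_s, carbon_rate, laser_signal = map(np.asarray, (time_s, carbon_rate, laser_signal))
    initial = laser_signal[0]
    recovered = (time_s >= oxidizing_start_s) & (laser_signal >= initial)
    split_time = time_s[np.argmax(recovered)] if recovered.any() else time_s[-1]
    before = time_s <= split_time
    oc = np.trapz(carbon_rate[before], time_s[before])    # OC plus pyrolyzed carbon
    ec = np.trapz(carbon_rate[~before], time_s[~before])  # carbon evolving after the split
    return oc, ec, split_time
```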