936 results for pitch interpolation
Abstract:
Z. Huang and Q. Shen. Fuzzy interpolative reasoning via scale and move transformations. IEEE Transactions on Fuzzy Systems, 14(2):340-359, 2006.
Abstract:
Z. Huang and Q. Shen. Scale and move transformation-based fuzzy interpolative reasoning: A revisit. Proceedings of the 13th International Conference on Fuzzy Systems, pages 623-628, 2004.
Abstract:
D. Langstaff and T. Chase. A multichannel detector array with 768 pixels developed for electron spectroscopy. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 573(1-2):169-171, 2007.
Abstract:
Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three dimensional motion estimates from ultrasound images of the deforming material. The novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint on the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piecewise constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
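The modulus-recovery step described above (misfit minimization with an adjoint-computed gradient, a quasi-Newton method, and TV-type regularization) can be illustrated on a toy 1D problem. The sketch below is not the thesis algorithm: it swaps the 3D finite element forward model for a hypothetical chain-of-springs model (displacement u_i = 1/mu_i under unit load), uses a smoothed total-variation penalty so the objective stays differentiable, and hands the analytic gradient to SciPy's L-BFGS-B.

```python
# Toy 1D analogue of modulus recovery as regularized optimization.
# Assumptions (not from the thesis): forward model u_i = 1/mu_i,
# smoothed TV penalty, SciPy L-BFGS-B as the quasi-Newton solver.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 100
mu_true = np.ones(n)
mu_true[40:60] = 3.0                                   # stiff "inclusion"
data = 1.0 / mu_true + 0.005 * rng.standard_normal(n)  # noisy displacements

def smoothed_tv(mu, eps=1e-6):
    """Smoothed total variation of mu and its gradient."""
    d = np.diff(mu)
    s = np.sqrt(d**2 + eps)
    g = np.zeros_like(mu)
    g[1:] += d / s
    g[:-1] -= d / s
    return s.sum(), g

def objective(mu, alpha=2e-3):
    u = 1.0 / mu                       # forward model
    r = u - data
    # gradient of the misfit via the chain rule (the role the adjoint
    # method plays for the real PDE-based forward model)
    g_misfit = -r / mu**2
    tv, g_tv = smoothed_tv(mu)
    return 0.5 * r @ r + alpha * tv, g_misfit + alpha * g_tv

res = minimize(objective, np.ones(n), jac=True, method="L-BFGS-B",
               bounds=[(0.1, 10.0)] * n)
print("recovered inclusion contrast ~", res.x[40:60].mean() / res.x[:20].mean())
```

The smoothing parameter eps is the price of using a smooth quasi-Newton solver on a non-smooth penalty; the thesis's pairing of TVD regularization with piecewise-constant interpolation handles this differently.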
Abstract:
Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
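The grouping mechanism sketched in the abstract (a bottom-up harmonic filter scores pitch candidates; the winning pitch's top-down expectation claims its harmonics, leaving the rest free for another stream) can be caricatured in a few lines. This is a hypothetical toy, not the AIRSTREAM network: it replaces resonant feedback dynamics with a single greedy pass over a list of component frequencies, and the tolerance and candidate set are arbitrary.

```python
# Toy "old-plus-new" harmonic grouping: greedy pitch scoring, not the
# AIRSTREAM dynamics. Frequencies in Hz; tolerance is an assumption.
def harmonics_matched(components, f0, tol=0.03):
    """Components lying within tol (relative) of an integer harmonic of f0."""
    out = []
    for f in components:
        k = round(f / f0)
        if k >= 1 and abs(f / f0 - k) < tol:
            out.append(f)
    return out

def group_streams(components, candidates, n_streams=2):
    """Greedily peel off streams, one winning pitch at a time."""
    remaining, streams = list(components), []
    for _ in range(n_streams):
        if not remaining:
            break
        # bottom-up filter: score each candidate pitch by matched components
        best = max(candidates, key=lambda f0: len(harmonics_matched(remaining, f0)))
        matched = harmonics_matched(remaining, best)
        streams.append((best, matched))
        # top-down expectation captures its harmonics; the rest stay free
        remaining = [f for f in remaining if f not in matched]
    return streams

mixture = [200, 400, 600, 800, 310, 620, 930]   # two interleaved harmonic sources
print(group_streams(mixture, candidates=(110, 200, 310, 440)))
# -> [(200, [200, 400, 600, 800]), (310, [310, 620, 930])]
```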
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
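A minimal sketch of the front end described here, a multiscale array of oriented detectors with a readout of position, orientation, and size, can be built from Gabor kernels. This is an illustrative stand-in, not the published filter: the kernel parameters, the 4 orientations, and the 3 sizes below are arbitrary assumptions, and the competitive and interpolative interactions are reduced to a plain argmax.

```python
# Sketch of a multiscale oriented-detector front end (assumed parameters;
# the competitive/interpolative stages are reduced to an argmax readout).
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size_px, theta, wavelength):
    """Real odd-symmetric Gabor kernel oriented at angle theta."""
    half = size_px // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * (0.4 * size_px) ** 2))
    return env * np.sin(2 * np.pi * xr / wavelength)

# toy image: a single thick diagonal bar
img = np.zeros((64, 64))
for i in range(20, 44):
    img[i, i - 4:i + 4] = 1.0

orientations = np.linspace(0, np.pi, 4, endpoint=False)
sizes = [7, 11, 15]
responses = np.stack([
    np.abs(convolve(img, gabor_kernel(s, t, wavelength=s)))
    for s in sizes for t in orientations
])                                    # shape: (sizes * orientations, H, W)

flat = responses.reshape(len(sizes) * len(orientations), -1)
best_channel, best_pos = np.unravel_index(flat.argmax(), flat.shape)
s_idx, t_idx = divmod(best_channel, len(orientations))
print("size:", sizes[s_idx],
      "orientation (deg):", np.degrees(orientations[t_idx]),
      "position:", np.unravel_index(best_pos, img.shape))
```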
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
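The dependence of a Gröbner basis on the chosen term order, which the abstract highlights as a source of flexibility, is easy to see with a computer algebra system. The snippet below is a generic illustration (SymPy, with an arbitrary two-generator ideal), not an implementation of the thesis's congruence algorithm.

```python
# Same ideal, two term orders, two different Groebner bases (SymPy).
from sympy import groebner, symbols

x, y = symbols("x y")
ideal = [x**2 + y**2 - 1, x*y - 1]

# Lexicographic order eliminates x, exposing a univariate polynomial in y.
print(groebner(ideal, x, y, order="lex"))
# Graded reverse lexicographic order keeps total degree low instead.
print(groebner(ideal, x, y, order="grevlex"))
```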
Abstract:
Error correcting codes are combinatorial objects, designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage etc. Error correction allows reconstruction of the original data from the received word. The classical decoding algorithms are constrained to output just one codeword. However, in the late 50s researchers proposed a relaxed error correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high speed low complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to outperform Kötter's decoder for high rate codes. The thesis work also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes over several dimensions. The polynomial generators or evaluation polynomials for subfield-subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes having complex decoding but simple encoding schemes (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
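Evaluation-based RS encoding treats the message as fixing a polynomial of degree less than k and takes the codeword to be that polynomial's values at n distinct field points; decoding from any k intact values is then Lagrange interpolation. The toy below works over a prime field GF(929) for readability, whereas real implementations, including the ones this thesis targets, use GF(2^m) and handle errors as well as erasures; the parameters and systematic evaluation points are illustrative choices.

```python
# Toy evaluation-encoded RS code over GF(p), erasure-only decoding.
# p, n, k and the systematic evaluation points are illustrative choices.
p, n, k = 929, 12, 4

def lagrange_eval(xs, ys, x):
    """Value at x of the unique degree<len(xs) polynomial through (xs, ys), mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p   # Fermat inverse
    return total

def encode(msg):
    """Systematic evaluation encoding: msg symbols are the values at x = 0..k-1."""
    xs = list(range(k))
    return msg + [lagrange_eval(xs, msg, x) for x in range(k, n)]

def decode_erasures(received):
    """Recover msg from any k surviving (position, value) pairs."""
    survivors = [(x, v) for x, v in enumerate(received) if v is not None][:k]
    xs, ys = zip(*survivors)
    return [lagrange_eval(xs, ys, x) for x in range(k)]

cw = encode([17, 42, 300, 901])
cw[0] = cw[3] = cw[7] = None        # erase symbols (up to n - k are tolerable)
print(decode_erasures(cw))          # -> [17, 42, 300, 901]
```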
Abstract:
Practical realisation of quantum information science is a challenge being addressed by researchers employing various technologies. One of them is based on quantum dots (QDs), usually referred to as artificial atoms. Being capable of emitting single and polarization-entangled photons, they are attractive as sources of quantum bits (qubits) which can be relatively easily integrated into photonic circuits using conventional semiconductor technologies. However, the dominant self-assembled QD systems suffer from asymmetry-related problems which modify the energetic structure. The main issue is the degeneracy lifting (the fine-structure splitting, FSS) of an optically allowed neutral exciton state which participates in a polarization-entanglement realisation scheme. The FSS complicates polarization-entanglement detection unless a particular FSS manipulation technique is utilized to reduce it to vanishing values, or a careful selection of intrinsically good candidates from the vast number of QDs is carried out, which precludes the construction of vast arrays of emitters on the same sample. In this work, site-controlled InGaAs QDs grown on (111)B-oriented GaAs substrates prepatterned with 7.5 μm pitch tetrahedrons were studied in order to overcome QD asymmetry-related problems. By exploiting an intrinsically high rotational symmetry, pyramidal QDs were shown to act as polarization-entangled photon sources emitting photons with a fidelity to the expected maximally entangled state as high as 0.721. This is the first site-controlled QD system of entangled photon emitters. Moreover, the density of such emitters was found to be as high as 15% in some areas, a density much higher than in any other QD system. The associated physical phenomena (e.g., carrier dynamics, QD energetic structure) were studied as well, by different techniques: photon correlation spectroscopy, polarization-resolved microphotoluminescence and magneto-photoluminescence.
Abstract:
Both the emission properties and the evolution of the radio jets of Active Galactic Nuclei are dependent on the magnetic (B) fields that thread them. A number of observations of AGN jets suggest that the B fields they carry have a significant helical component, at least on parsec scales. This thesis uses a model, first proposed by Laing and then developed by Papageorgiou, to explore how well the observed properties of AGN jets can be reproduced by assuming a helical B field with three parameters: pitch angle, viewing angle and degree of entanglement. This model has been applied to multifrequency Very Long Baseline Interferometry (VLBI) observations of the AGN jets of Markarian 501 and M87, making it possible to derive values for the helical pitch angle, the viewing angle and the degree of entanglement for these jets. Faraday rotation measurements are another important tool for investigating the B fields of AGN jets. A helical B field component should result in a systematic gradient in the observed Faraday rotation across the jet. Real observed radio images have finite resolution; typical beam sizes for cm-wavelength VLBI observations are often comparable to or larger than the intrinsic jet widths, raising questions about how well resolved a jet must be in the transverse direction in order to reliably detect transverse Faraday-rotation structure. This thesis presents results of Monte Carlo simulations of Faraday rotation images designed to directly investigate this question, together with a detailed investigation into the probabilities of observing spurious Faraday-rotation gradients as a result of random noise and finite resolution. These simulations clearly demonstrate the possibility of detecting transverse Faraday-rotation structures even when the intrinsic jet widths are appreciably smaller than the beam width.
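The Monte Carlo question posed here (how often does a true transverse rotation-measure gradient survive beam convolution and noise, and how often does noise alone mimic one?) can be prototyped in 1D. The sketch below is a deliberately simplified stand-in for the thesis simulations: a hypothetical linear RM profile across a Gaussian jet, a Gaussian beam much wider than the jet, per-pixel (uncorrelated) noise, and a two-point sign test on the smeared profile.

```python
# 1D toy Monte Carlo for transverse RM gradients under finite resolution.
# Jet width, beam width, gradient amplitude and noise level are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 201)              # transverse coordinate, arbitrary units
jet = np.exp(-x**2 / (2 * 0.5**2))       # intrinsic jet cross-section
beam_sigma_px = 30                       # beam much wider than the jet

def slope_sign_rate(true_gradient, noise=0.02, n=2000):
    """Fraction of realizations whose beam-smeared RM profile rises left to right."""
    pos = 0
    for _ in range(n):
        rm = gaussian_filter1d(jet * true_gradient * x, beam_sigma_px)
        rm += noise * rng.standard_normal(x.size)   # per-pixel noise (assumption)
        pos += rm[140] > rm[60]          # compare two points either side of the jet
    return pos / n

print("with a real gradient :", slope_sign_rate(1.0))  # high: true sign survives the beam
print("noise only           :", slope_sign_rate(0.0))  # ~0.5: either sign, at random
```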
Abstract:
The artistic play of light seen on a pyramid in some Mayan ruins located in Cancun, Mexico provided the inspiration for Vision of Equinox. On both the spring and autumn equinox days, the sunlight projected on the pyramid forms a shape which looks like a serpent moving on the stairway of the pyramid. Vision of Equinox was composed with an image of light as the model for the artistic transfiguration of sound. The light image of sound changes its shape in each stage of the piece, using the orchestra in different ways - sometimes like a chamber ensemble, sometimes like one big instrument. The image of light cast on the pyramid is expressed by descending melodic lines that can be heard several times in the piece. At the final climax of the work, a complete and embodied artistic figure is formed and stated, expressing the appearance of the Mayan god Quetzalcoatl, the serpent, in my own imagination. The light and shadow which comprise this pyramid art are treated as two contrasting elements in my composition and become the two main motives in this piece. To express these two contrasting elements, I picked the numbers "5" and "2," and used them as "key numbers" in this piece. As a result, the intervals of a fifth and a second (sometimes inverted as a seventh) are the two main intervals used in the structure. The interval of a fifth was taken into account for the construction of the pyramid, which has five points of contact. The interval of a second was selected as a contrasting sonority to the fifth. Further, the numbers "5" and "2" are used as the number of notes which form the main motives in this piece; quintuplets are used throughout this piece, and the short motive made by two sixteenth notes is used as one of the main motives in this piece. Moreover, the shape of the pyramid provided a concept of symmetry, which is expressed by the setting of a central point of the music (pitch center) as well as the use of retrograde and inversion in this piece.
Abstract:
We demonstrate that interferometric lithography provides a fast, simple approach to the production of patterns in self-assembled monolayers (SAMs) with high resolution over square centimeter areas. As a proof of principle, two-beam interference patterns, formed using light from a frequency-doubled argon ion laser (244 nm), were used to pattern methyl-terminated SAMs on gold, facilitating the introduction of hydroxyl-terminated adsorbates and yielding patterns of surface free energy with a pitch of ca. 200 nm. The photopatterning of SAMs on Pd has been demonstrated for the first time, with interferometric exposure yielding patterns of surface free energy with similar feature sizes to those obtained on gold. Gold nanostructures were formed by exposing SAMs to UV interference patterns and then immersing the samples in an ethanolic solution of mercaptoethylamine, which etched the metal substrate in exposed areas while unoxidized thiols acted as a resist and protected the metal from dissolution. Macroscopically extended gold nanowires were fabricated using single exposures and arrays of 66 nm gold dots at 180 nm centers were formed using orthogonal exposures in a fast, simple process. Exposure of oligo(ethylene glycol)-terminated SAMs to UV light caused photodegradation of the protein-resistant tail groups in a substrate-independent process. In contrast to many protein patterning methods, which utilize multiple steps to control surface binding, this single step process introduced aldehyde functional groups to the SAM surface at exposures as low as 0.3 J cm(-2), significantly less than the exposure required for oxidation of the thiol headgroup. Although interferometric methods rely upon a continuous gradient of exposure, it was possible to fabricate well-defined protein nanostructures by the introduction of aldehyde groups and removal of protein resistance in nanoscopic regions. Macroscopically extended, nanostructured assemblies of streptavidin were formed. Retention of functionality in the patterned materials was demonstrated by binding of biotinylated proteins.
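For reference, the pitch of a two-beam interference pattern follows from the standard grating relation; the half-angle below is back-calculated from the reported wavelength and pitch, and is not stated in the abstract.

```latex
% Period of a two-beam interference pattern (half-angle \theta between
% each beam and the surface normal); the angle is inferred, not reported.
\Lambda = \frac{\lambda}{2\sin\theta}
\quad\Rightarrow\quad
\theta = \arcsin\!\left(\frac{244\,\mathrm{nm}}{2 \times 200\,\mathrm{nm}}\right) \approx 37.6^\circ
```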
Abstract:
Use of phase transfer catalysts such as 18-crown-6 enables ionic, linear conjugated poly[2,6-{1,5-bis(3-propoxysulfonicacidsodiumsalt)}naphthylene]ethynylene (PNES) to efficiently disperse single-walled carbon nanotubes (SWNTs) in multiple organic solvents under standard ultrasonication methods. Steady-state electronic absorption spectroscopy, atomic force microscopy (AFM), and transmission electron microscopy (TEM) reveal that these SWNT suspensions are composed almost exclusively of individualized tubes. High-resolution TEM and AFM data show that the interaction of PNES with SWNTs in both protic and aprotic organic solvents provides a self-assembled superstructure in which a PNES monolayer helically wraps the nanotube surface with periodic and constant morphology (observed helical pitch length = 10 ± 2 nm); time-dependent examination of these suspensions indicates that these structures persist in solution over periods that span at least several months. Pump-probe transient absorption spectroscopy reveals that the excited state lifetimes and exciton binding energies of these well-defined nanotube-semiconducting polymer hybrid structures remain unchanged relative to analogous benchmark data acquired previously for standard sodium dodecylsulfate (SDS)-SWNT suspensions, regardless of solvent. These results demonstrate that the use of phase transfer catalysts with ionic semiconducting polymers that helically wrap SWNTs provides well-defined structures that solubilize SWNTs in a wide range of organic solvents while preserving critical nanotube semiconducting and conducting properties.
Abstract:
It has long been recognized that whistler-mode waves can be trapped in plasmaspheric whistler ducts which guide the waves. For nonguided cases these waves are said to be "nonducted", which is dominant for L < 1.6. Wave-particle interactions are affected by the wave being ducted or nonducted. In the field-aligned ducted case, first-order cyclotron resonance is dominant, whereas nonducted interactions open up a much wider range of energies through equatorial and off-equatorial resonance. There is conflicting information as to whether the most significant particle loss processes are driven by ducted or nonducted waves. In this study we use loss cone observations from the DEMETER and POES low-altitude satellites to focus on electron losses driven by powerful VLF communications transmitters. Both satellites confirm that there are well-defined enhancements in the flux of electrons in the drift loss cone due to ducted transmissions from the powerful transmitter with call sign NWC. Typically, ∼80% of DEMETER nighttime orbits to the east of NWC show electron flux enhancements in the drift loss cone, spanning an L range consistent with first-order cyclotron theory and inconsistent with nonducted resonances. In contrast, electron flux enhancements generated by the nonducted transmissions from NPM are seen on ∼1% or less of orbits. While the waves originating from these two transmitters have been predicted to lead to similar levels of pitch angle scattering, we find that the enhancements from NPM are at least 50 times smaller than those from NWC. This suggests that lower-latitude, nonducted VLF waves are much less effective in driving radiation belt pitch angle scattering. Copyright 2010 by the American Geophysical Union.
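The first-order cyclotron resonance invoked to delimit the expected L range is the standard condition matching the Doppler-shifted wave frequency to the relativistic electron gyrofrequency; the form below is supplied for context and is not quoted from the paper (sign conventions for the parallel velocity vary with the assumed geometry).

```latex
% First-order electron cyclotron resonance (standard form, added for context):
% wave frequency \omega, parallel wavenumber k_\parallel, electron parallel
% velocity v_\parallel, electron gyrofrequency \Omega_e, Lorentz factor \gamma.
\omega - k_\parallel v_\parallel = \frac{\Omega_e}{\gamma}
```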
Abstract:
The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l, L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods using iterated function systems (IFS) and fractal Brownian surfaces (FBS) that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km²) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
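Of the two methods named above, the fractal-Brownian-surface flavor is the easier to sketch: upsample the coarse field (bilinear interpolation, the control run here) and superpose a power-law-filtered noise surface to restore subgrid variability. The snippet below is a schematic of that idea only; the spectral exponent, scale factor, amplitude, and blending rule are all assumptions, not the paper's calibrated procedure.

```python
# Schematic FBS-style downscaling: bilinear control plus a spectrally
# synthesized fractal Brownian surface. beta, amplitude, blend: assumptions.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)

def fbs(shape, beta=3.0):
    """Fractal Brownian surface via spectral synthesis: power spectrum ~ k^-beta."""
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    k = np.hypot(ky, kx)
    k[0, 0] = 1.0                              # avoid division by zero at DC
    amp = k ** (-beta / 2.0)
    amp[0, 0] = 0.0                            # zero-mean surface
    phase = np.exp(2j * np.pi * rng.random(shape))
    surf = np.fft.ifft2(amp * phase).real
    return surf / surf.std()

coarse = rng.gamma(2.0, 2.0, size=(8, 8))      # stand-in for a 25-km rain field
factor = 6                                     # ~25 km -> ~4 km
control = zoom(coarse, factor, order=1)        # bilinear control run
downscaled = np.clip(control + 0.3 * control.std() * fbs(control.shape), 0, None)

# each call to fbs() yields a new ensemble member, unlike the single
# deterministic bilinear (or IFS) solution
print(control.shape, downscaled.mean().round(2), coarse.mean().round(2))
```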